http://yuba.stanford.edu/~casado/onix-osdi.pdf
a platform on top of which a network control plane can be implemented as a distributed system. Control planes written within Onix operate on a global view of the network, and use basic state distribution primitives provided by the platform. Thus Onix provides a general API for control plane implementations, while allowing them to make their own trade-offs among consistency, durability, and scalability.
use well-known and general-purpose techniques from the distributed systems literature rather than the more specialized algorithms found in routing protocols and other network control mechanisms
Onix’s API consists of a data model that represents the network infrastructure, with each network element corresponding to one or more data objects. The control logic can: read the current state associated with that object; alter the network state by operating on these objects; and register for notifications of state changes to these objects.
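The read/alter/notify pattern above can be sketched as follows. This is a hypothetical Python illustration of the described semantics; the real Onix API is different (and not Python), and the names `NIB`, `Entity`, and `register` are assumptions.

```python
class Entity:
    """A data object mirroring one piece of network element state."""
    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.attrs = {}
        self._callbacks = []

    def read(self, key):
        # Read the current state associated with this object.
        return self.attrs.get(key)

    def write(self, key, value):
        # Alter network state by operating on the object; Onix would
        # propagate this change toward the actual network element.
        self.attrs[key] = value
        for cb in self._callbacks:
            cb(self, key, value)

    def register(self, callback):
        # Register for notifications of state changes to this object.
        self._callbacks.append(callback)


class NIB:
    """Network Information Base: entities keyed by id."""
    def __init__(self):
        self.entities = {}

    def create(self, entity_id):
        e = self.entities[entity_id] = Entity(entity_id)
        return e


# Control-logic usage: watch a switch port and react to link-state changes.
nib = NIB()
port = nib.create("switch1:port3")
events = []
port.register(lambda e, k, v: events.append((e.entity_id, k, v)))
port.write("link_state", "down")
assert events == [("switch1:port3", "link_state", "down")]
```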
While Onix handles the replication and distribution of NIB data, it relies on application-specific logic both to detect and to resolve conflicts in network state as it is exchanged between Onix instances, as well as between an Onix instance and a network element. The control logic may also dictate the consistency guarantees for state disseminated between Onix instances, using distributed locking and consensus algorithms.
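A minimal sketch of application-supplied conflict resolution: when two versions of the same entity meet, both are handed to application logic, which picks the winner. The merge policy shown (newest logical timestamp wins) is an illustrative assumption, not Onix's policy.

```python
def resolve_conflict(local, remote, merge_fn):
    """Return the version of an entity's attributes the application keeps."""
    return merge_fn(local, remote)


def newest_wins(local, remote):
    # Each version carries a logical timestamp set by its writer;
    # the application chooses the more recent one.
    return local if local["ts"] >= remote["ts"] else remote


local = {"ts": 4, "link_state": "up"}
remote = {"ts": 7, "link_state": "down"}
winner = resolve_conflict(local, remote, newest_wins)
assert winner["link_state"] == "down"
```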
Onix provides applications with control over the consistency and durability of the network state, and offers three strategies for scaling. In more detail:
- partition: an Onix instance can keep only a subset of the NIB in memory and connect to only a subset of the network elements;
- aggregation: one instance can expose a whole subnetwork to other instances as a single aggregated element, reducing the fidelity of the state it shares;
- consistency and durability: applications choose, per category of state, between the strongly consistent replicated database and the weaker, eventually consistent DHT.
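The partition strategy above can be sketched with a deterministic assignment of entities to instances. Hash-based assignment is an illustrative assumption here; Onix leaves the partitioning policy to the application.

```python
import hashlib

def responsible_instance(entity_id, instances):
    """Map an entity to exactly one Onix instance, deterministically."""
    h = int(hashlib.sha1(entity_id.encode()).hexdigest(), 16)
    return instances[h % len(instances)]


instances = ["onix-a", "onix-b", "onix-c"]
owner = responsible_instance("switch1:port3", instances)
assert owner in instances

# Every instance computes the same owner, so no coordination is needed
# to decide which instance keeps which slice of the NIB.
assert responsible_instance("switch1:port3", instances) == owner
```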
how Onix distributes its Network Information Base and the consistency semantics an application can expect from it.
a one-hop, eventually-consistent, memory-only DHT (similar to Dynamo [9]), relaxing the consistency and durability guarantees provided by the replicated database.
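A toy model of the eventually consistent, memory-only DHT semantics described above: a put is applied locally and propagated to peers asynchronously, so a read at another node may briefly return a stale value. This illustrates the consistency semantics only, not Onix's implementation (propagation is modeled as an explicit `sync` step).

```python
class DHTNode:
    def __init__(self):
        self.store = {}      # memory-only: nothing is persisted
        self.peers = []
        self._pending = None

    def put(self, key, value):
        # Apply locally first; propagation happens later (eventually).
        self.store[key] = value
        self._pending = (key, value)

    def sync(self):
        # Propagate the last write directly to all peers (one hop).
        key, value = self._pending
        for p in self.peers:
            p.store[key] = value

    def get(self, key):
        return self.store.get(key)


a, b = DHTNode(), DHTNode()
a.peers = [b]
a.put("flow:42", "install")
stale = b.get("flow:42")   # read before propagation: stale (None)
a.sync()
assert stale is None
assert b.get("flow:42") == "install"
```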
Onix implements a transactional persistent database, backed by a replicated state machine, for disseminating all state updates that require durability and simplified consistency management.
The replicated database comes with severe performance limitations, and therefore it is intended to serve only as a reliable dissemination mechanism for slowly changing network state. The transactional database provides a flexible SQL-based querying API together with triggers and rich data models for applications to use directly, as necessary. To integrate the replicated database with the NIB, Onix includes import/export modules that interact with the database. These components load and store entity declarations and their attributes from/to the transactional database. Applications can easily group NIB modifications together into a single transaction to be exported to the database. When the import module receives a trigger invocation from the database about changed database contents, it applies the changes to the NIB.
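The export/import pattern above can be sketched with SQLite standing in for Onix's replicated transactional database. The points being illustrated are grouping NIB modifications into a single atomic transaction and refreshing the NIB when the database signals changed contents; the schema and function names are assumptions.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entity (id TEXT PRIMARY KEY, attrs TEXT)")

nib = {}  # local NIB cache: entity id -> attributes


def export_transaction(changes):
    """Export a batch of NIB modifications as one atomic transaction."""
    with db:  # commits all rows together, or rolls back on error
        db.executemany(
            "INSERT OR REPLACE INTO entity (id, attrs) VALUES (?, ?)",
            changes,
        )


def import_on_trigger():
    """Stand-in for the trigger-driven import: apply DB contents to the NIB."""
    for entity_id, attrs in db.execute("SELECT id, attrs FROM entity"):
        nib[entity_id] = attrs


export_transaction([("switch1", "dpid=0x1"), ("link1", "up")])
import_on_trigger()
assert nib == {"switch1": "dpid=0x1", "link1": "up"}
```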
The Onix design does not dictate a particular protocol for managing network element forwarding state. Rather, the primary interface to the application is the NIB, and any suitable protocol supported by the elements in the network can be used under the covers to keep the NIB entities in sync with the actual network state.
The NIB is the central integration point …
Figure 7: A CDF showing the latency of updating a DHT value at one node, and for that update to be fetched by another node in a 5-node network (x-axis: time from a new value being put until all instances get it, in ms; y-axis: probability).