network state changes … in batches of forwarding rule modifications at multiple switches.
In this paper, we observe that a large network-state update typically consists of a set of sub-updates that are independent of one another with respect to the traffic they affect, and can hence be installed in parallel, in any order.
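To make this observation concrete, the following is a minimal sketch (not the mechanism proposed in this paper) of how the rule modifications of a large update could be grouped into independent sub-updates: two modifications fall into the same sub-update whenever they affect at least one common flow, and the resulting groups can then be installed in parallel. The rule and flow identifiers are purely illustrative.

from collections import defaultdict

def independent_sub_updates(rule_mods):
    """rule_mods: list of (rule_id, affected_flows) pairs, where
    affected_flows is a set of hypothetical flow identifiers."""
    # Union-find over rules: two rules belong to the same sub-update
    # if they touch at least one common flow (directly or transitively).
    parent = {rule: rule for rule, _ in rule_mods}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    flow_to_rule = {}
    for rule, flows in rule_mods:
        for flow in flows:
            if flow in flow_to_rule:
                union(rule, flow_to_rule[flow])
            else:
                flow_to_rule[flow] = rule

    groups = defaultdict(list)
    for rule, _ in rule_mods:
        groups[find(rule)].append(rule)
    return list(groups.values())

# Example: r1 and r2 share flow f1, while r3 affects only f3, so the
# update splits into two sub-updates that can be installed in parallel.
print(independent_sub_updates([("r1", {"f1"}),
                               ("r2", {"f1", "f2"}),
                               ("r3", {"f3"})]))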
Very little consideration has been given so far to another important problem: optimizing the installation of forwarding rules so that a large fraction of flows is processed according to the updated network state as soon as possible.
Tight flow packing to achieve maximum link utilization in Google's B4 [5] can require frequent changes. On the other hand, installing or modifying a large number of rules across a pool of (potentially heterogeneous) switches can be a time-consuming operation, due to the substantial latencies incurred in processing rule operations on the switches and updating the switch chips accordingly. These latencies stem from hard-to-overcome technological as well as economic factors [2].
Solving this difficult scheduling problem is important because updates are often on the critical path, e.g., for implementing policy changes or service provisioning.
Fig. 5 shows the CDF of flow installation time for 1000 flows in an IBM topology [7] with 18 switches (all switches are edge switches) and a FatTree topology with 20 switches (k = 4, 8 ToR switches are edge switches).
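As a hedged illustration of how such a CDF can be computed (this is not the code behind Fig. 5), the sketch below treats a flow as installed once the last of its forwarding rules has been written to a switch; the per-rule completion times and the evaluation grid are hypothetical placeholders.

import numpy as np

def flow_install_cdf(flow_rule_times, grid):
    """flow_rule_times: one list of per-rule completion times (ms) per flow;
    grid: time points at which to evaluate the CDF."""
    # A flow is installed when its slowest rule has completed.
    finish = np.array([max(times) for times in flow_rule_times])
    return [(t, np.mean(finish <= t)) for t in grid]

# Toy example with three flows whose rules finish at different times.
times = [[5.0, 12.0], [7.0], [3.0, 4.0, 20.0]]
for t, frac in flow_install_cdf(times, grid=[5, 10, 20, 30]):
    print(f"t={t} ms: {frac:.2f} of flows installed")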
[2] A. Curtis, J. Mogul, J. Tourrilhes, and P. Yalagandula. DevoFlow: Scaling Flow Management for High-Performance Networks. In SIGCOMM, 2011.