An article on virtual network embedding

A very good article. Abstract: Software-defined networking has emerged as a powerful approach to improve the customizability and flexibility of networks. In this work, we focus on virtualization in the realm of software-defined networking, and study how to embed virtual networks in this environment. Virtual network embedding is an important problem because intelligent embedding can lead to better performance and a more efficient allocation of network resources compared to random mapping. In software-defined networking, the presence of a central controller is a complicating factor, and customizable routing and differences in resource sharing present new opportunities and challenges. We identify two aspects of virtual network embedding in software-defined networks: virtual node and link mapping, and controller placement. We tackle these problems together, developing techniques to perform embedding with two goals: balancing the load on the substrate network and minimizing controller-to-switch delays. We evaluate our techniques with simulation and Mininet emulation, and show that they are able to optimize for one of the above objectives while keeping the other within reasonable bounds.
iv. We evaluated the impact of both techniques on the objective metrics (stress and controller-to-switch delays) as well as other metrics such as end-to-end delays and throughput. We also analyzed the effects of changes in the tunable parameters of our methods and definitions.

The remainder of the paper is organized as follows. Section 2 presents a brief review of related work in the literature. In Section 3, we describe resource sharing in SDN and formally define the VN embedding problem. Section 4 presents our approach and two variations of VN embedding techniques with different priorities. Section 5 contains results from evaluation efforts conducted via simulation and Mininet [5] emulation, and Section 6 summarizes the paper.

2. Related work

VN embedding with node and link constraints is a complex problem and can be reduced to the NP-hard multiway separator problem [11]. Much of the research in this area has focused on developing heuristics by shrinking the problem space through various assumptions [12].

Various techniques for performing VN embedding in traditional networks have been proposed by researchers [13]. However, distinctions of the SDN environment (such as the centrality and importance of the controller, and differences in the virtualization technology) necessitate a new approach. We utilize some definitions and ideas from the stress-balancing VN assignment algorithm presented in [8] and adapt them to our problem. We define stress in a modified way, and identify stress balancing as one of the two VN embedding objectives considered in this paper.

SDN is a suitable platform for network virtualization, and researchers have been working on ways to provide virtualization to enable the coexistence of multiple VNs in the SDN environment. FlowVisor [2] has been proposed as a virtualization solution in SDN. In this work, we assume the availability of a FlowVisor-like virtualization tool and analyze FlowVisor to identify the network resources to be shared in the presence of multiple VNs coexisting on an SDN substrate. With FlowVisor, each VN runs its own controller, so we include controller placement in the problem definition. In contrast, another SDN virtualization solution, FlowN [3], utilizes container-based virtualization to run a modified version of the NOX controller [14]; it does not employ a separate controller for each VN but maps NOX API calls between the physical and virtual networks. We choose FlowVisor as the tool for virtualization because it is the more established and widespread solution.

The placement of SDN controllers has been studied [15] with the goal of minimizing the average and maximum delays from the controller to the switches. The authors do not propose a specific method for this placement, but they evaluate how much controller placement affects said delays by finding the optimal placement via brute force. The topology of the network is specified beforehand, except for the placement of the controller, which can be moved around. In this work, we incorporate the objective of delay minimization into our embedding techniques. However, we consider a more general situation where there are many VNs whose controllers are either fixed at a predetermined location or free to be placed anywhere.

3. Model and problem statement

In this section, we first briefly discuss the nature of resource sharing in software-defined networks in the presence of multiple virtual networks, and then describe the VN embedding problem in the SDN environment.

3.1. Identifying SDN resources

Virtualization in software-defined networks requires a mechanism to share network resources among multiple slices. A classification of these resources was presented in the FlowVisor work [2]. We summarize these as follows.

Switch CPU: Sharing computational power between slices at switches is orchestrated considering two main tasks performed for each slice: generating new flow messages to be sent to the corresponding controller, and handling controller requests regarding the slice. CPU power demands at a switch increase with the number of virtual nodes and links mapped to it or traversing it.

Bandwidth: Isolation of bandwidth is handled via per-slice queues at each port on OpenFlow switches. Each slice has its own queue at each port, and these queues are serviced according to the resource allocation policy. In particular, the FlowVisor implementation uses minimum bandwidth queues, whereby a slice is guaranteed a minimum of X% of the bandwidth on a link, and possibly more if the link is not fully utilized. In our design, we take into account both the number of virtual links that share a substrate link and the number of controller-to-switch connections that must go over this link. Balancing these numbers across the substrate is useful to avoid potential hot spots and reduce the possibility of congestion.

Flow entry space: The number of flow entries that can be used by each slice is also limited at each switch; when a controller is over its limit, it is not allowed to insert any new rules at the switch. This is another reason to consider the number of virtual links traversing a substrate node as part of the load on that node.
3.2. VN embedding problem

Network model: We take the underlying SDN infrastructure as the network substrate and model it as a graph (V, E), where V is the set of substrate nodes and E is the set of substrate links.

We can think of VN embedding in SDN as a combination of two sub-problems: (1) deciding which switches to include in a slice, and (2) how to configure the routing within the slice. This problem becomes interesting only when there are many VNs sharing the underlying infrastructure, so we focus on the problem of embedding multiple VNs on a substrate of OpenFlow-enabled switches using FlowVisor.

We address controller placement and VN embedding as a joint problem. With multiple VN requests, the location of the controller for each VN can be either predetermined in the VN request (as some users may desire the ability to select precise locations for the central controllers of their VNs) and fixed at a certain location, or adjustable, which means that there is no specific limitation by the user regarding the controller location and the embedding algorithm is free to select the location. To this end, we consider two options for each VN: VN embedding with a fixed-location controller (VNE-F) and VN embedding with an adjustable-location controller (VNE-A). The only difference between these two approaches is in the determination of the controller location for the VN. In VNE-A, the controller location can be set to any node in the substrate topology, since we assume that the software controller can run on a host at the same location as a switch. Furthermore, the topology of each VN is given, but the mapping of that topology to the SDN substrate is yet to be determined.
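For concreteness, the following is a minimal sketch (not from the paper) of how the substrate graph and a VN request in the model above could be represented; the attribute names (delay, cap, demand), the networkx library, and the dictionary layout are our own assumptions.

```python
import networkx as nx

# Substrate graph (V, E): nodes are OpenFlow switches, edges carry a
# propagation delay (ms) and a capacity (Mbps).  Values are illustrative.
substrate = nx.Graph()
substrate.add_edge("A", "B", delay=5, cap=100)
substrate.add_edge("B", "C", delay=10, cap=100)
substrate.add_edge("A", "C", delay=20, cap=100)

# A virtual network request: its own topology (Vi, Ei), per-link bandwidth
# demands, and a controller that is either pinned to a substrate node
# (VNE-F) or left adjustable (VNE-A), matching the distinction in the text.
vn_request = {
    "nodes": ["v1", "v2", "v3"],
    "links": [("v1", "v2", {"demand": 10}), ("v2", "v3", {"demand": 10})],
    "controller": None,   # None means adjustable location (VNE-A)
}
```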
Our objective for VN embedding is twofold: balancing the load on switches and links while trying to minimize the average controller-to-switch delay for each VN. As a measure of load, we concentrate on stress from previous work [8] in the VN embedding literature. We modify the definition of stress given in [8], where it was defined for a substrate node as the number of virtual nodes mapped to it, and for a substrate link as the number of virtual links traversing it. We add two things to these definitions: for node stress, we take into account the additional CPU power and flow entry space load that comes from virtual links that traverse a node; and for link stress, we add a component for the total load due to all the controller-to-switch communications going over a particular link. The definition of stress in this work is as follows.

For a substrate node v ∈ V, stress is a weighted combination of the number of virtual nodes assigned to it, n_N(v), and the number of virtual links traversing it, n_L(v). More formally, if S_N(v) denotes the stress of substrate node v, then

    S_N(v) = α · n_N(v) + β · n_L(v),   ∀v ∈ V        (1)

Modifying the positive parameters α and β in the above definition changes the importance given to each component of node stress. If hosting a virtual node is considered to cause more stress for a substrate node than being part of a virtual link, α is assigned a bigger value relative to β, and vice versa. In the special case (α, β) = (1, 0), the definition reduces to the simpler form given in [8]. We assume all virtual nodes place an equal load on a substrate node, so we only take into account the number of virtual nodes mapped to a substrate node in the above calculation.
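A minimal Python sketch of the node stress computation in Eq. (1); the function name and argument layout are our own and not from the paper.

```python
def node_stress(num_virtual_nodes, num_virtual_links, alpha=1.0, beta=1.0):
    """Node stress S_N(v) = alpha * n_N(v) + beta * n_L(v), as in Eq. (1).

    num_virtual_nodes: virtual nodes mapped onto substrate node v, n_N(v)
    num_virtual_links: virtual links whose path traverses v, n_L(v)
    """
    return alpha * num_virtual_nodes + beta * num_virtual_links

# With (alpha, beta) = (1, 0) this reduces to the original definition in [8].
print(node_stress(3, 5, alpha=1.0, beta=0.0))  # -> 3.0
```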
For a substrate link e ∈ E, total link stress is the sum of data traffic stress and control traffic stress. Data traffic stress is placed on e by the virtual links traversing it (tr(e) is the number of such virtual links), and each virtual link contributes an amount equal to the ratio of its bandwidth demand to the capacity of e. Control traffic stress is a weighted contribution for each controller-to-switch path going over this link. For controller C_i, each controller-to-switch connection contributes γ_i to the stress of each link on its path, where 0 < γ_i < 1. Let c_i(e) denote the number of controller-to-switch connections traversing link e in the ith VN out of numV total VNs in the system. More formally, if S_L(e) denotes the stress of substrate link e, then

    S_L(e) = Σ_{j=1..tr(e)} demand_j / cap(e) + Σ_{i=1..numV} γ_i · c_i(e),   ∀e ∈ E        (2)

In calculating the link stress, the demand of a virtual link is capped at the lowest substrate link capacity in the corresponding path, since that is an implicit limit on the highest rate the virtual link can achieve. If bandwidth demands are not known in advance, we assume they are all equal. γ_i can be different for each VN depending on the intensity of its control traffic. A higher γ_i signifies that the load caused by control traffic is more important for this VN than in a VN with a lower γ_i value.
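The corresponding link stress computation of Eq. (2) could look like the following sketch; the data structures are again our own illustration rather than the authors' code.

```python
def link_stress(virtual_link_demands, link_capacity, control_connections, gammas):
    """Link stress S_L(e) as in Eq. (2).

    virtual_link_demands: bandwidth demands of the virtual links routed over e
                          (each already capped at the smallest capacity on its path)
    link_capacity:        capacity of substrate link e, cap(e)
    control_connections:  control_connections[i] = c_i(e), controller-to-switch
                          connections of VN i crossing e
    gammas:               gammas[i] = gamma_i, the control-traffic weight of VN i
    """
    data_stress = sum(d / link_capacity for d in virtual_link_demands)
    control_stress = sum(g * c for g, c in zip(gammas, control_connections))
    return data_stress + control_stress

print(link_stress([10, 10, 20], 100, [1, 2], [0.3, 0.5]))  # -> 0.4 + 1.3 = 1.7
```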
The formal problem definition for VN embedding in software-defined networks is given below.

Let Slice_i be the ith slice sharing a network of switches. The virtual topology of this slice, VT_i, defines the nodes and links included in this slice. VT_i is described as a graph (V_i, E_i), where V_i is the set of virtual nodes and E_i is the set of virtual links. This slice, along with its controller C_i, constitutes virtual network N_i. Our problem can then be stated as follows.

Given the virtual topology of all numV VNs, and information as to whether the location of each C_i is fixed or adjustable, organize all VNs in a way that will (1) minimize the maximum stress on a switch or link while keeping all controller-to-switch delays under a threshold R, or (2) minimize the average controller-to-switch delay for each N_i while keeping the maximum stress under a threshold T.

4. VN embedding techniques

In this section, we describe two heuristic methods to tackle the NP-hard VN embedding problem by addressing virtual component placement along with controller placement. One method strives to balance the load and resource usage in the network as much as possible while keeping controller-to-switch delays within certain bounds, and the other method minimizes these delays while keeping the maximum stress under a threshold. These embedding methods are uncoordinated [13], i.e., virtual node mapping and virtual link mapping are performed in two separate stages.

4.1. Stress-balancing embedding (SBE)

The goal of the first heuristic is to produce low and balanced stress on substrate nodes and links while making sure that the worst controller-to-switch delay does not exceed a certain threshold. Reducing stress on nodes is important for decreasing the possibility that resources such as CPU processing power and flow tables will get exhausted. Keeping stress low on links helps avoid congestion and may result in a lower percentage of a link's bandwidth being consumed. For delays, we use a threshold R to limit the maximum controller-to-switch delay.

We borrow the concept of neighborhood resource availability (NR) for substrate nodes from [8]:

    NR(v) = (S_Nmax - S_N(v)) · Σ_{e ∈ L(v)} (S_Lmax - S_L(e))

where S_Nmax = max_{v∈V} S_N(v) is the maximum node stress, S_Lmax = max_{e∈E} S_L(e) is the maximum link stress in the substrate, and L(v) is the set of substrate links adjacent to v.

A high NR signifies that the node and the links connected to it are lightly loaded. We make use of NR to inform our node selection process. The main idea in this technique, similar to [8], is to map high-degree virtual nodes to substrate nodes with high NR, since high-degree virtual nodes are going to set up more connections and need more resources. We perform this mapping in a controlled way to ensure that the delay threshold R is not exceeded. The details of the heuristic are shown in Algorithm 1, and an explanation of its steps is given below.

Step 1.0. Precomputation of pairwise delays: Given the substrate network topology (V, E) and the delays between neighboring nodes, find the shortest path between all node pairs and populate the pairwise delay matrix for all these pairs.

Step 1.1. Mapping VNs with fixed-location controllers: Assume the heuristic is given numV VNs, numF of which have fixed controller locations. For each of these numF VNs, it finds all substrate nodes whose distances from the controller are at most R to form the set of potential hosts, H, for virtual nodes. Then, it performs a one-to-one mapping between virtual and substrate nodes such that virtual nodes with higher degrees are mapped to substrate nodes with higher NRs within the set H. After that, it maps each virtual link to the shortest path between its source and destination nodes.

Step 1.2. Mapping VNs with adjustable-location controllers: After the above step, the heuristic moves on to the VNs with adjustable-location controllers. It orders the VNs according to their sizes, i.e., VNs with the most virtual links are mapped first. The controller for the biggest VN is attached to the available substrate node with the highest NR value; the heuristic then maps its nodes one by one according to the rules in Step 1.1, and repeats this until all VNs are mapped.

Picking all nodes from H ensures that we never go over the delay threshold R for any VN. The rest is fixing all controllers to high-NR nodes and performing the mappings according to the high-degree-to-high-NR principle advocated in [8] to produce low and balanced stress.

Algorithm 1. Stress-balancing embedding heuristic (SBE)
1: Input: Substrate network (V, E), numF VNs with fixed-location controllers (N_1, ..., N_numF), numV - numF VNs with adjustable-location controllers (N_numF+1, ..., N_numV), where numF ≤ numV, and the maximum delay threshold R
2: Compute the shortest path for all node pairs (x, y) s.t. x, y ∈ V
3: Calculate the delay for all such node pairs
4: for i = 1 → numF do
5:   Find the set H of nodes such that the delay from C_i to the node is at most R
6:   Calculate the NR for all members of H
7:   Sort all virtual nodes in V_i in order of decreasing node degree in the virtual topology
8:   for j = 1 → |V_i| do
9:     Map v_j to h ∈ H s.t. NR(h) = max_{v∈H} NR(v)
10:    Remove h from H
11:  end for
12:  for j = 1 → |E_i| do
13:    Set the path for virtual link e_j to the shortest path between its source and destination
14:  end for
15: end for
16: Order the remaining VNs in terms of decreasing number of virtual links
17: for i = numF + 1 → numV do
18:   Calculate NR for all substrate nodes
19:   Attach C_i to the node with the highest NR value
20:   Find the set H of nodes such that the delay from C_i to the node is at most R
21:   Sort all virtual nodes in V_i in order of decreasing node degree in the virtual topology
22:   for j = 1 → |V_i| do
23:     Map v_j to h ∈ H s.t. NR(h) = max_{v∈H} NR(v)
24:     Remove h from H
25:   end for
26:   for j = 1 → |E_i| do
27:     Set the path for virtual link e_j to the shortest path between its source and destination
28:   end for
29: end for
30: Return the final mapping and quit
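As a rough illustration of the high-degree-to-high-NR mapping at the core of Steps 1.1 and 1.2, the sketch below maps a single VN whose controller has already been placed. It is our own simplified rendering, not the authors' implementation: it assumes a precomputed delay dictionary and NR values, and omits stress bookkeeping and virtual link mapping.

```python
import networkx as nx

def sbe_map_one_vn(substrate, vn, controller, delay, nr, R):
    """Greedy node mapping for one VN, in the spirit of Algorithm 1.

    substrate:  networkx.Graph of the substrate
    vn:         networkx.Graph of the virtual topology
    controller: substrate node hosting this VN's controller
    delay:      dict (u, v) -> precomputed shortest-path delay
    nr:         dict v -> neighborhood resource availability NR(v)
    R:          maximum controller-to-switch delay threshold
    """
    # Candidate hosts H: substrate nodes within delay R of the controller.
    H = {v for v in substrate.nodes if delay[(controller, v)] <= R}
    mapping = {}
    # Virtual nodes in decreasing degree order, each mapped to the
    # highest-NR host still available.
    for vnode in sorted(vn.nodes, key=vn.degree, reverse=True):
        if not H:
            raise RuntimeError("not enough delay-feasible substrate nodes")
        host = max(H, key=lambda v: nr[v])
        mapping[vnode] = host
        H.remove(host)
    return mapping
```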
4.2. Delay-minimizing embedding (DME)

The second heuristic primarily aims to minimize the average delays between VN controllers and the switches used by the VN. Keeping these delays low is crucial for the timely response of the controller to switch events that require intervention. The maximum stress is not minimized, but it is controlled by an upper limit T. We define T as a pair (T_N, T_L) to be able to impose limits on both the maximum node stress and the maximum link stress. These limits can be high or low relative to each other depending on what we want to prioritize. The main idea is to select nodes that are near their corresponding controllers, and then to reroute to avoid exceeding stress thresholds. The technique is shown in algorithmic form in Algorithm 2, and its operation is described in detail in the following steps.

Step 2.0. Precomputation of pairwise delays: Given the substrate network topology (V, E) and the delays between neighboring nodes, find the shortest path between all node pairs and populate the pairwise delay matrix for all these pairs.

Step 2.1. Mapping VNs with fixed-location controllers: Given numV VNs, numF of which have fixed-location controllers, the heuristic starts by ranking the switches in the substrate by their delay from the controller of each of these numF VNs. For each of these numF VNs, it then performs a one-to-one mapping between all virtual nodes and the substrate nodes that are closest to the controller, such that the stress on any of the substrate nodes will not exceed T_N after this stage. It determines the shortest path between the pair of nodes in each virtual link and selects that as the route for the virtual link.

Step 2.2. Mapping VNs with adjustable-location controllers: The heuristic orders the VNs according to their sizes, i.e., VNs with the highest number of virtual links are mapped first. For each VN to be mapped, it finds the three nodes with the lowest average distance to all the other substrate nodes and attaches the controller of the VN to the node with the highest NR out of those three. Then it performs the above-described fixed-controller VN mapping procedure.

Step 2.3. Virtual link rerouting: After the initial mapping of all VNs is completed, the next step is ensuring that the maximum stress threshold T is not exceeded. To this end, the heuristic takes advantage of the rerouting possibilities presented by the underlying SDN substrate to move virtual links away from highly stressed substrate nodes and links that are causing T to be exceeded. It calculates the stress on every substrate node and link, and then the maximum node stress, S_Nmax = max_{v∈V} S_N(v), and the maximum link stress, S_Lmax = max_{e∈E} S_L(e). If either S_Nmax exceeds T_N or S_Lmax exceeds T_L, then the heuristic performs rerouting such that each virtual link traversing a substrate node or link whose stress exceeds the threshold is remapped to the next shortest path that does not traverse said node or link, until both S_Nmax ≤ T_N and S_Lmax ≤ T_L.

Algorithm 2. Delay-minimizing embedding heuristic (DME)
1: Input: Substrate network (V, E), numF VNs with fixed-location controllers (N_1, ..., N_numF), numV - numF VNs with adjustable-location controllers (N_numF+1, ..., N_numV), where numF ≤ numV, and the maximum stress threshold T
2: Compute the shortest path for all node pairs (x, y) s.t. x, y ∈ V
3: Calculate the delay for all such node pairs
4: for i = 1 → numF do
5:   Sort all nodes in V according to delay from C_i
6:   Available nodes ← V
7:   for j = 1 → |V_i| do
8:     Map v_j to the available node u such that u is the closest to C_i and S_N(u) + α ≤ T_N (α is from Eq. (1))
9:     Remove u from the list of available nodes for N_i
10:  end for
11:  for j = 1 → |E_i| do
12:    Set the path for virtual link e_j to the shortest path between its source and destination
13:  end for
14: end for
15: Order the remaining VNs in terms of decreasing number of virtual links
16: for i = numF + 1 → numV do
17:   Calculate NR for all substrate nodes
18:   Find the 3 substrate nodes with the lowest average distance to other nodes
19:   Attach C_i to the one with the highest NR among these 3
20:   Available nodes ← V
21:   for j = 1 → |V_i| do
22:     Map v_j to the available node u such that u is the closest to C_i and S_N(u) + α ≤ T_N
23:     Remove u from the list of available nodes for N_i
24:   end for
25:   for j = 1 → |E_i| do
26:     Set the path for virtual link e_j to the shortest path between its source and destination
27:   end for
28: end for
29: Calculate stress for each node in V and each link in E
30: S_Nmax ← max_{v∈V} S_N(v), S_Lmax ← max_{e∈E} S_L(e)
31: v_max ← v ∈ V s.t. S_N(v) = S_Nmax
32: e_max ← e ∈ E s.t. S_L(e) = S_Lmax
33: if S_Nmax > T_N or S_Lmax > T_L then
34:   Until S_N(v_max) ≤ T_N, reroute virtual links away from v_max to the next shortest path that does not traverse v_max
35:   Until S_L(e_max) ≤ T_L, reroute virtual links away from e_max to the next shortest path that does not traverse e_max
36: else
37:   Return the final mapping and quit
38: end if
39: Go to 29
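A small sketch of the rerouting idea in Step 2.3: to find the next shortest path that does not traverse an overloaded node, one can remove that node from a copy of the substrate and recompute the shortest path. This is our own simplified reading of the step, not the authors' code; in the full heuristic it would be repeated for the virtual links crossing v_max (or e_max) until the stress drops below T_N (or T_L).

```python
import networkx as nx

def reroute_around_node(substrate, src, dst, overloaded_node, weight="delay"):
    """Return a shortest path from src to dst that avoids overloaded_node,
    or None if no such path exists (Step 2.3 of DME, simplified)."""
    pruned = substrate.copy()
    pruned.remove_node(overloaded_node)
    try:
        return nx.shortest_path(pruned, src, dst, weight=weight)
    except (nx.NetworkXNoPath, nx.NodeNotFound):
        return None
```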
5. Evaluation

In this section, we evaluate our VN embedding techniques with simulation and Mininet [5] emulation. We perform VN embeddings in an offline manner, assuming all VN requests are known.

5.1. Metrics

Below are the metrics that we use to evaluate our VN embedding techniques:

Controller-to-switch delay: average and maximum (for each VN).
Node stress: average and maximum (across the substrate).
Link stress: average and maximum (across the substrate).
End-to-end delay: average (for each VN).
Throughput: average rate of data delivery on all virtual links over the period of data transfer (for each VN). For a virtual link, this is calculated by dividing the amount of data transferred over that link by the time spent to complete the transfer.

As controller-to-switch delay and stress on substrate components are the two things that we are optimizing for, we study how they are affected by different types of embedding through simulation. In addition, we want to gain insight into how these goals influence VN performance metrics, such as end-to-end delays and throughput. To this end, we emulate scenarios on Mininet to understand how end-to-end delays and throughput change with different embedding methods.
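For example, the per-VN delay metrics and the per-link throughput defined above could be computed as in the following sketch; the helper functions are our own illustration and not part of the paper.

```python
def controller_to_switch_delays(delay, controller, vn_switches):
    """Average and maximum controller-to-switch delay for one VN.

    delay:       dict (u, v) -> precomputed shortest-path delay in ms
    controller:  substrate node hosting the VN's controller
    vn_switches: substrate nodes hosting this VN's virtual nodes
    """
    delays = [delay[(controller, s)] for s in vn_switches]
    return sum(delays) / len(delays), max(delays)

def virtual_link_throughput(bytes_transferred, transfer_seconds):
    """Throughput of a virtual link in Mbps: data transferred / transfer time."""
    return (bytes_transferred * 8) / (transfer_seconds * 1e6)
```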
5.2. Strategy

As substrate networks, we utilize 10 different topologies from the Internet Topology Zoo [16]. These topologies vary in size and shape, allowing us to evaluate our techniques under different network properties. We construct 20 different virtual topologies, ranging from 5 nodes to 20 nodes. For each of these virtual topologies, we randomly select a coefficient from the range [0.3, 0.7] as the probability of connection between any two virtual node pairs. This gives us some variation in graph density. We randomly assign fixed-location controllers to 5 of these VNs; the rest have adjustable-location controllers (except for the part of the evaluation where we vary the percentage of fixed-location controllers, in Section 5.3.5).

The parameters in the node stress formula from Section 3 are selected as (α, β) = (1, 1). We assume all virtual link demands are the same, and also equal to all substrate link capacities. For γ_i, we pick a random value from the range [0, 0.5] for each VN separately. This provides variation in the contribution of control traffic toward link stress for different VNs. We do not let γ_i go over 0.5 in this experiment because that could cause control traffic to dominate link stress and lower the number of viable locations for controllers, thus reducing the flexibility in controller placement (more on this at the end of this section, in Section 5.3.6).

We compare SBE (Algorithm 1) and DME (Algorithm 2) against two simpler variations: pure stress-balancing mapping and naive delay-minimizing mapping. Pure stress-balancing mapping executes SBE without regard to delays (i.e., with an extremely high delay threshold R). Naive delay-minimizing mapping places the adjustable-location controllers randomly, and then maps virtual nodes to the closest possible substrate nodes to minimize controller-to-switch delays, similar to DME yet without any attention to stress. We also consider purely random mapping, which performs virtual node mapping and controller localization completely randomly over the entire substrate; it performs badly enough to disrupt graph scales, so we do not show it in all graphs and we declare it infeasible for practical use compared to the other options.

We run evaluations with the following objectives:
i. Comparing average and maximum controller-to-switch delays for all VNs for SBE, DME, and the two simpler variations of these techniques,
ii. Comparing average and maximum stress for the same four techniques,
iii. Comparing average end-to-end delays and throughput using Mininet,
iv. Varying the percentage of VNs with fixed-location controllers to see how this affects the metrics,
v. Varying the stress threshold T and the contribution of control traffic γ.

To produce results, we map all 20 VNs together to each of the substrate topologies using all of the different techniques to be evaluated. For each VN, we average the resulting metrics over the 10 substrate topologies. SBE uses a controller-to-switch delay threshold R of 50 ms, which is identified as a common target for maximum latency in [15] because it is the target restoration time of a SONET ring. DME uses a variable stress threshold T = (T_N, T_L) depending on the stress values achieved by SBE for the same scenario. By default, T_N and T_L are set to be 50% more than the S_Nmax and S_Lmax produced by SBE, respectively, for a given substrate and set of VNs; however, we evaluate the effect of different T values in Section 5.3.6.
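The random virtual topologies described above could be generated along the following lines; this sketch uses networkx Erdős-Rényi graphs, which is our own choice since the exact generator the authors used is not specified in the text.

```python
import random
import networkx as nx

def random_virtual_topologies(count=20, min_nodes=5, max_nodes=20, seed=None):
    """Generate virtual topologies with 5-20 nodes and a random edge
    probability drawn from [0.3, 0.7], mirroring the evaluation setup."""
    rng = random.Random(seed)
    topologies = []
    for _ in range(count):
        n = rng.randint(min_nodes, max_nodes)
        p = rng.uniform(0.3, 0.7)   # probability of connection between node pairs
        topologies.append(nx.gnp_random_graph(n, p, seed=rng.randint(0, 10**6)))
    return topologies
```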
5.3. Results

5.3.1. DME with a small example

Before moving on to larger topologies, let us demonstrate the properties of one of our heuristics: DME. We consider placing a 3-node VN on a 5-node substrate topology with link delays ranging from 5 ms to 20 ms. There are C(5,3) = 10 possible virtual node mappings and 5 possible controller locations, resulting in a total of 50 VN mappings. Average controller-to-switch delays for these 50 mappings range from 5 ms to almost 17 ms. DME produces the mapping with an average controller-to-switch delay of 5.33 ms, the second lowest value among the 50. Fig. 2 shows the topology of the substrate network along with the lowest-delay and DME mappings.

Fig. 2. Substrate topology with delays, the lowest-delay mapping, and the DME mapping. Numbers next to edges represent link delays in ms. Nodes marked with a 'v' are virtual nodes; a star next to a node signifies that the controller is placed at that location. (Figure not reproduced in this extraction.)

The simple 3-node VN topology does not allow much variation in node stress, but link stress values are more variable. We set γ (the control traffic coefficient) to 0.5. If the link stress threshold T_L is set to 2.5, DME does not need to perform any rerouting. However, if T_L is reduced to 1.5, DME manages to get under the threshold by rerouting a virtual link, increasing its expected end-to-end delay from 16 ms to 30 ms in the process. This example gives us an idea about the behavior of DME: it should be able to achieve optimal or near-optimal controller-to-switch delays, and it may or may not need to perform rerouting depending on the stress thresholds. If rerouting is required, DME can reduce the maximum stress under the threshold even in a small topology with very few route options. Controller-to-switch delays are not affected by rerouting, since there is no node remapping, but end-to-end delays for rerouted virtual links may increase.

The delay-minimizing embedding results in low controller-to-switch delays because it greedily selects switches that have the lowest delay to the controller. Attaching the central controller to the node with the largest NR helps situate the VN in a relatively empty area in the substrate, and routing flexibility is exploited to limit the maximum stress.

5.3.2. Controller-to-switch delays

We look at how the different embedding techniques affect the delays between the VN controller and the switches in the VN. We consider both the average and the maximum controller-to-switch delay for each VN: the average is calculated over all of the switches included in a given VN, and the maximum is the delay from the controller to the farthest switch in the VN.

Average and maximum controller-to-switch delays are shown in Figs. 3 and 4, respectively, for SBE, DME, naive delay-minimizing mapping, and pure stress-balancing mapping. Each data point in these graphs represents the value for one of the 20 VNs, averaged over mappings to the 10 substrate topologies. All 20 values produced by the same technique are sorted in increasing order and then plotted. The X-axis is labeled "Fraction of VNs" because the x-coordinate corresponding to a y-value signifies the fraction of all VNs that have a delay less than or equal to that y-value; for example, a point at x = 0.2 and y = 10 ms means that 20% of the VNs in this experiment (i.e., 4 out of 20) have delays of at most 10 ms for the particular technique that the data point belongs to. We use this X-axis in the other graphs in the remainder of this evaluation section as well.

Fig. 3. Average controller-to-switch delays for all VNs: SBE, DME, naive delay-minimizing mapping, and pure stress-balancing mapping. (Plot not reproduced in this extraction.)

Fig. 4. Maximum controller-to-switch delays for all VNs: SBE, DME, naive delay-minimizing mapping, and pure stress-balancing mapping. (Plot not reproduced in this extraction.)

In Fig. 3, we see that DME produces results very close to (in some cases better than) the naive delay-minimizing mapping, even though the former is also trying to limit stress. We infer that the stress thresholds T_N and T_L do not significantly affect the mapping in most cases, and when they do, the effects are minimal, possibly because DME utilizes minimally invasive rerouting in Step 2.3 instead of remapping nodes in cases where stress thresholds are exceeded after the previous steps. SBE and the pure stress-balancing heuristic produce higher delays, but SBE is able to perform a bit better than pure stress balancing, mostly because it forces itself to keep controller-to-switch delays under the threshold R as it is doing the mapping.

Maximum delays shown in Fig. 4 exhibit a similar pattern: maximum delays for DME and naive delay-minimizing mapping are close to each other, and maximum delays for the other two techniques are higher, as expected. We see several instances where the worst-case delay for SBE is close to the threshold of 50 ms, while the worst-case delays for pure stress balancing exceed 50 ms in those cases. This suggests that SBE has made an active effort to keep the maximum delay under 50 ms.
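The "Fraction of VNs" plots described above are essentially empirical distribution curves over the 20 per-VN values. A sketch of how such a curve could be produced (our own plotting code, with made-up example values, not the authors' figures):

```python
import matplotlib.pyplot as plt

def plot_fraction_of_vns(per_vn_values, label):
    """Sort per-VN values and plot them against the fraction of VNs with a
    value less than or equal to each point (the X-axis used in Figs. 3-8)."""
    values = sorted(per_vn_values)
    fractions = [(i + 1) / len(values) for i in range(len(values))]
    plt.plot(fractions, values, marker="o", label=label)

plot_fraction_of_vns([12, 8, 30, 22, 15], "DME (example values)")
plt.xlabel("Fraction of VNs")
plt.ylabel("Average controller-to-switch delay (ms)")
plt.legend()
plt.show()
```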
5.3.3. Node and link stress

We now analyze the impact of the heuristics on node and link stress. For this experiment, we use the results collected from the simulation in the previous subsection (Section 5.3.2), but instead of reporting stress values per VN, we focus on the overall average across the entire substrate in all 10 topologies.

Average and maximum node stress values from all runs are shown in Table 1, and average and maximum link stress values are shown in Table 2, for all four techniques considered. While these values do not tell much on their own, a comparison among them yields the following: SBE performs comparably to pure stress balancing, and it may be preferable since it provides significant savings in maximum controller-to-switch delays. The naive delay-minimizing heuristic is about twice as bad for stress as the optimal heuristics, and DME performs somewhere in between these ends due to its stress-balancing components.

Table 1. Average and maximum node stress.
  Technique                 Avg node stress   Max node stress
  Pure stress balancing     11.4              21.0
  SBE                       12.1              23.5
  DME                       (unreadable)      32.4
  Naive delay minimizing    19.2              41.1

Table 2. Average and maximum link stress.
  Technique                 Avg link stress   Max link stress
  Pure stress balancing     6.5               9.8
  SBE                       (unreadable)      10.9
  DME                       10.3              16.1
  Naive delay minimizing    12.7              24.3
5.3.4. End-to-end delays and throughput

For this part of the evaluation, we use Mininet 2.0 to emulate the operation of an SDN with multiple VNs placed on it. 10 different VNs are embedded on 3 substrate networks sliced using FlowVisor. The bandwidth of all substrate links is set to 100 Mbps. We use ping to measure end-to-end delays, and execute file transfers between every pair of nodes over TCP with Iperf [17].

Fig. 5. Average end-to-end delays for all VNs: SBE, DME, random mapping, and the naive delay-minimizing heuristic. (Plot not reproduced in this extraction.)

Fig. 6. Average throughput for all VNs: SBE, DME, random mapping, and the naive delay-minimizing heuristic. (Plot not reproduced in this extraction.)

Fig. 5 shows average end-to-end delays for SBE, DME, random mapping, and the naive delay-minimizing mapping. DME performs considerably better than SBE and random mapping, offering an average 45% reduction compared to SBE and a 65% reduction compared to random mapping. DME tends to cluster the virtual nodes of a VN as closely as possible without exceeding stress bounds, so it is good at minimizing both controller-to-switch delays and end-to-end delays. The naive delay-minimizing heuristic performs close to DME, and even better in some situations, because forced rerouting increases end-to-end delays in DME. SBE does better than random mapping, probably because it tries to avoid distant low-degree nodes.

Fig. 6 shows average throughput results collected over 20 emulations. DME achieves slightly higher throughput than random mapping since it actively avoids nodes with unacceptably high stress, but they are still comparable in terms of averages. The naive delay-minimizing heuristic performs the worst since it maps the VN in a confined area without any regard for node and link stress. The stress-balancing SBE performs considerably better than the other methods. Also, there is less variation in achieved throughput among different VNs (the highest average is 43% more than the lowest, as opposed to 57% for DME and 67% for random mapping), indicating a more balanced network. These results suggest that our techniques translate into improved performance in a realistic network emulation environment.
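A minimal Mininet sketch of the kind of measurement described above: a two-host topology with 100 Mbps rate-limited links, ping for end-to-end connectivity/delay and iperf for TCP throughput. This is our own illustrative setup, not the authors' emulation scripts; it omits FlowVisor slicing and the external per-VN controllers, and it must be run as root in a Mininet-capable environment.

```python
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink

class TwoHostTopo(Topo):
    """Two hosts behind one switch, with 100 Mbps rate-limited links."""
    def build(self):
        s1 = self.addSwitch("s1")
        h1 = self.addHost("h1")
        h2 = self.addHost("h2")
        self.addLink(h1, s1, bw=100)
        self.addLink(h2, s1, bw=100)

if __name__ == "__main__":
    net = Mininet(topo=TwoHostTopo(), link=TCLink)
    net.start()
    net.pingAll()                                # end-to-end delay / connectivity
    net.iperf((net.get("h1"), net.get("h2")))    # TCP throughput between h1 and h2
    net.stop()
```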
5.3.5. Varying the percentage of fixed-location controllers

In our model, we provide the option of fixing the location of the controller because a user may want to place the controller for her VN at a certain location, such as her home PC or a server at a specific data center. Assume there are numV VNs to be placed, numF with fixed-location controllers and the remaining numV - numF with adjustable-location controllers. We run simulations with 5 different numF/numV ratios: 0, 0.25, 0.5, 0.75 and 1. We use the same 20 VNs we have been using throughout this section and 3 substrate topologies for this experiment. For each topology, we run the experiment 10 times and average the values for each VN over all mappings.

We evaluate SBE and DME, each at its strong suit: minimizing average stress for SBE, and minimizing controller-to-switch delays for DME. We present results for four metrics: average and maximum node stress for SBE, and average and maximum controller-to-switch delay for DME. We do not give results for link stress separately because the node stress results are sufficient to demonstrate the relevant trends.

Table 3. Average and maximum node stress with varying numF/numV ratios for SBE.
  numF/numV    Avg node stress   Max node stress
  0            10.8              17.1
  0.25         11.9              21.2
  0.5          12.8              (unreadable)
  0.75         13.6              (unreadable)
  1            14.3              31.6

Table 3 displays the average and maximum node stress across the substrate achieved by SBE at the 5 different values of numF/numV. We observe approximately a 30% increase in average node stress and an 85% increase in maximum node stress as the numF/numV ratio goes from 0 to 1. An increase in stress is expected, since some of the fixed-location controllers can be attached to nodes with low degrees and NR values that are suboptimal for stress minimization. The maximum stress is affected more because, as the percentage of fixed-location controllers increases, so does the probability of an unlucky combination of fixed controller locations that puts multiple controllers close to each other, thus making the occurrence of a high-stress node more likely. This impact becomes lessened for high numF/numV ratios because the damage is already done by then.

Figs. 7 and 8 show average and maximum controller-to-switch delays for all VNs achieved by DME at the 5 different values of numF/numV. For both metrics, a higher percentage of fixed-location controllers leads to higher delays due to decreased flexibility and suboptimal controller locations. However, this effect is more pronounced for the maximum delay values, and for smaller percentages (as seen from the percentagewise bigger differences toward the bottom in Fig. 8). A possible explanation is that, with the large number of VNs on the substrates, fixing more of the controllers has a smaller effect on the average, since some of the fixed controller locations may actually end up being suitable by chance. On the other hand, fixing the location of another controller increases the probability of hitting a bad controller location (i.e., one that is not suitable for delay minimization), so maximum delay values rise faster for low numF/numV ratios. But as the ratios get larger, the probability of having already hit the worst controller location also increases, so maximum delays do not rise as fast for high numF/numV ratios.

Fig. 7. Average controller-to-switch delays for all VNs: DME with varying numF/numV ratios. (Plot not reproduced in this extraction.)

Fig. 8. Maximum controller-to-switch delays for all VNs: DME with varying numF/numV ratios. (Plot not reproduced in this extraction.)

Therefore, the takeaways from this exercise are:
i. The variation in the percentage of VNs with fixed controller locations has a bigger effect on maximum values than on averages, particularly maximum controller-to-switch delays.
ii. For low percentages, the negative effects that come with fixed-location controllers are mild.
iii. For high percentages (75% and up), the flexibility that comes from adjustable-location controllers has negligible positive impact.
5.3.6. Varying T and γ

In order to gain insight into how the stress threshold T affects the results produced by DME, we look at different levels of T. For our previous experiments, we had set the two components of T, namely T_N and T_L, to 50% more than the S_Nmax and S_Lmax produced by SBE. Let us define a stress allowance multiplier, denoted by σ, such that T_N = σ · S_Nmax and T_L = σ · S_Lmax. So far, we have used σ = 1.5, and we now evaluate a few other values for σ.

Table 4 summarizes our findings. We only look at average and maximum controller-to-switch delays, because stress values are directly dependent on σ and it is therefore not very interesting to study stress in this case. Values are averaged over 20 VNs placed over 3 substrates, so the maximum delays here are average maximum delays.

Table 4. Delays with varying T for DME (average and maximum controller-to-switch delay in ms for σ = 1, 1.5, 2 and 3). Most numeric entries are garbled in this extraction; among the readable values are an average delay of 12.37 ms for σ = 1 and values around 7.65 ms (average) and 13.89 ms (maximum) for the larger σ settings.

When σ = 1, there is tight control over stress, which allows less freedom in the effort to minimize delays; delays rise by about 50% on average compared to the case when σ = 1.5. Beyond this, the gains are marginal: for σ = 2 and σ = 3, there is no significant reduction in delays, except for a few cases where maximum delays can be reduced thanks to the extra degree of freedom. Therefore, σ = 1.5 appears to be a reasonable choice for DME, and we use that value for every other experiment in this evaluation.

Another parameter that we experiment with is γ, which is used in defining link stress in Section 3. The definition of link stress affects both mapping techniques and potentially all metrics, but here we present the effect we found the most interesting. For this simulation, we assume γ_i is equal for all VNs, and evaluate 5 different γ values. Table 5 presents the controller-to-switch delay results, again averaged for 20 VNs placed over 3 substrates.

Table 5. Delays with varying γ for DME.
  γ        Avg delay (ms)   Max delay (ms)
  0        7.49             (unreadable)
  0.25     8.95             16.04
  0.5      10.76            23.12
  0.75     (unreadable)     25.47
(A fifth row is present in the original table but is not legible in this extraction, apart from an average delay of 11.91 ms.)

We see that the delays increase with increasing γ, but a bit more slowly after γ = 0.5. This effect is probably due to the fact that, when γ increases, control traffic (and thus the location of each controller and the links around it) becomes more critical in determining link stress. This takes away some of the ability to move the controllers freely to minimize delays, because if a controller is moved to a low-NR node, it becomes difficult to keep link stress within bounds. Hence, stress becomes the dominant factor in controller placement, causing delays to rise. But they do not rise too much, because even with less flexible controller locations, DME is still a delay-minimizing heuristic and it still has flexibility in virtual node placement to keep delays as low as possible.

6. Conclusions and future work

In this work, we have studied techniques to perform VN mapping on an SDN substrate. The goals of our techniques were load balancing between switches as well as delay minimization between controllers and switches. Each technique focuses on optimizing for one of these goals while keeping the other one in check. Both techniques couple VN controller placement with virtual node and link assignment. Various input parameters in our definitions and techniques determine the extent of flexibility in embedding efforts and the range of improvements in the evaluation metrics.

The main takeaways from this work are the following:
i. There is a tradeoff between minimizing controller-to-switch delays and balancing the load on substrate nodes and links; however, these two goals are not in direct conflict with each other. While it may not be possible to optimize both goals at the same time, it is quite possible to achieve near-optimal results for one metric while keeping the other metric within reasonable bounds.
ii. The presence of the central controllers in software-defined networks as critical nodes has a significant impact on VN embedding objectives, and the locations of controllers must be chosen carefully in order to better optimize for performance goals.

To our knowledge, this paper presents the first effort to consider VN embedding and SDN controller placement together. There are avenues for future work regarding this problem. First, one can envision a coordinated embedding solution where the controller placement, virtual node mapping and virtual link mapping stages are not sequential. Second, evaluation may be improved via a dynamic system where VN requests are not known in advance but arrive at different times, and/or via more finely-tuned VN requests that include varying resource demands, explicit information about preferred controller location options, etc. Third, VN reconfiguration (reassignment or migration) in SDN is a natural extension and an important problem that we are planning to concentrate on next.

Acknowledgements

This work was supported in part by NSF grants CNS-1319490, CNS-1017152 and CNS-1017237.
References

[1] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, J. Turner, OpenFlow: enabling innovation in campus networks, SIGCOMM Comput. Commun. Rev. 38 (2) (2008) 69-74. http://dx.doi.org/10.1145/1355734.1355746
[2] R. Sherwood, G. Gibb, K.-K. Yap, G. Appenzeller, M. Casado, N. McKeown, G. Parulkar, Can the production network be the testbed?, in: Proceedings of the Ninth USENIX Conference on Operating Systems Design and Implementation, OSDI'10, 2010.
[3] D. Drutskoy, E. Keller, J. Rexford, Scalable network virtualization in software-defined networks, IEEE Internet Comput. 17 (2) (2013) 20-27. http://dx.doi.org/10.1109/MIC.2012.144
[4] GENI: Global Environment for Network Innovations. <http://www.geni.net>.
[5] B. Lantz, B. Heller, N. McKeown, A network in a laptop: rapid prototyping for software-defined networks, in: Proceedings of the Ninth ACM SIGCOMM Workshop on Hot Topics in Networks, Hotnets-IX, ACM, New York, NY, USA, 2010, pp. 19:1-19:6. http://dx.doi.org/10.1145/1868447.1868466
[6] J. Turner, D. Taylor, Diversifying the internet, in: Proceedings of IEEE GLOBECOM, 2005.
[7] N. Feamster, L. Gao, J. Rexford, How to lease the internet in your spare time, SIGCOMM Comput. Commun. Rev. 37 (1) (2007) 61-64.
[8] Y. Zhu, M. Ammar, Algorithms for assigning substrate network resources to virtual network components, in: Proceedings of IEEE INFOCOM, 2006.
[9] J. Lu, J. Turner, Efficient mapping of virtual networks onto a shared substrate, Tech. rep., Washington University in St. Louis, June 2006.
[10] M. Yu, Y. Yi, J. Rexford, M. Chiang, Rethinking virtual network embedding: substrate support for path splitting and migration, SIGCOMM Comput. Commun. Rev. 38 (2) (2008) 17-29. http://dx.doi.org/10.1145/1355734.1355737
[11] D.G. Andersen, Theoretical approaches to node assignment, Comput. Sci. Dep. (2002).
[12] N. Chowdhury, R. Boutaba, A survey of network virtualization, Comput. Networks 54 (5) (2010) 862-876.
[13] A. Fischer, J.F. Botero, M. Beck, H. De Meer, X. Hesselbach, Virtual network embedding: a survey, IEEE Commun. Surv. Tutorials PP (99) (2013) 1-19. http://dx.doi.org/10.1109/SURV.2013.013013.00155
[14] NOX. <http://www.noxrepo.org/>.
[15] B. Heller, R. Sherwood, N. McKeown, The controller placement problem, in: Proceedings of the First Workshop on Hot Topics in Software Defined Networks, HotSDN'12, ACM, New York, NY, USA, 2012, pp. 7-12. http://dx.doi.org/10.1145/2342441.2342444
[16] S. Knight, H. Nguyen, N. Falkner, R. Bowden, M. Roughan, The Internet Topology Zoo, IEEE J. Sel. Areas Commun. 29 (9) (2011) 1765-1775. http://dx.doi.org/10.1109/JSAC.2011.111002
[17] Iperf: TCP and UDP bandwidth performance measurement tool. <http://iperf.sourceforge.net/>.
