TECS-SENNA: Traffic Engineering Control System for SDN/OpenFlow Network

Project Description

OpenFlow Interface

In the TECS-SENNA project, we focus on providing a transparent interface between our management unit and the OpenFlow network. The main task is to analyze controller scalability for feasible architecture planning, as detailed below. First, based on the network topology, traffic flow statistics, application QoS requirements, and the peak flow initiation rate, a complete capacity analysis of the controller is performed in terms of throughput and latency. Second, based on this capacity analysis, centralized and distributed SDN controller architectures are compared with respect to the number, placement, and control domain of the controllers; this includes characterizing and reducing the signaling overhead of each candidate architecture. Third, for the distributed case, the optimal number and locations of SDN controllers are determined. In particular, the requirements for network state synchronization among multiple controllers are defined, and the resulting consistency performance is evaluated.
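The first step, relating controller latency to the peak flow initiation rate, can be illustrated with a simple queueing estimate. This is a minimal sketch, not the project's actual capacity model: it assumes flow-setup requests arrive as a Poisson process and the controller behaves as an M/M/1 server; the numeric rates are purely illustrative.

```python
# Sketch: estimating controller capacity with an M/M/1 queueing model.
# Assumptions (illustrative): flow-setup requests arrive at `arrival_rate`
# (the peak flow initiation rate, requests/s) and the controller serves
# them at `service_rate` (its raw throughput, requests/s).

def mm1_mean_latency(arrival_rate: float, service_rate: float) -> float:
    """Mean sojourn time W = 1 / (mu - lambda) for a stable M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

def max_supported_rate(service_rate: float, latency_budget: float) -> float:
    """Largest arrival rate keeping mean latency within the budget:
    W <= budget  =>  lambda <= mu - 1/budget."""
    return max(0.0, service_rate - 1.0 / latency_budget)

if __name__ == "__main__":
    mu = 30000.0   # assumed controller throughput, requests/s
    lam = 25000.0  # assumed peak flow initiation rate, requests/s
    print(f"mean setup latency: {mm1_mean_latency(lam, mu) * 1e3:.3f} ms")
    print(f"max rate within a 1 ms budget: {max_supported_rate(mu, 1e-3):.0f} req/s")
```

Inverting the latency formula, as in `max_supported_rate`, gives a first-cut answer to how many switches (in aggregate request rate) a single controller can serve, which feeds directly into the centralized-versus-distributed comparison.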

To address these scalability issues, we apply facility location planning to determine the optimal number and placement of controller(s) and switches according to the network flow scale. First, from the network structure perspective, the OpenFlow interface is dimensioned via facility location planning. The OpenFlow channel may run over a separate dedicated network, or over the network managed by the OpenFlow switches themselves (in-band controller connection). Hence, planar location planning is employed for the dedicated-network case with its one-hop centralized connections, while network location planning is used for the in-band case with its possibly multi-hop signaling transmissions. Next, from the flow perspective, the network throughput and transmission latency are analyzed over the designated controller locations for different types of applications. In this way, the scalability of the OpenFlow channels is thoroughly analyzed under the proposed optimal architecture planning for the OpenFlow interface.
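The network location planning step can be sketched as a discrete facility location problem. The following is an illustrative greedy k-median heuristic, not the project's actual planning algorithm: it assumes pairwise switch latencies are known and picks controller sites that minimize the average switch-to-nearest-controller latency.

```python
# Sketch: greedy facility-location planning for controller placement.
# Assumptions (illustrative): latency[i][j] gives the signaling latency
# between nodes i and j; we greedily choose k controller sites minimizing
# the average latency from each switch to its nearest controller.

def greedy_placement(latency, k):
    """Return k controller sites chosen by a greedy k-median heuristic."""
    n = len(latency)
    chosen = []
    for _ in range(k):
        best_site, best_cost = None, float("inf")
        for cand in range(n):
            if cand in chosen:
                continue
            sites = chosen + [cand]
            # average distance from every switch to its nearest controller
            cost = sum(min(latency[i][s] for s in sites) for i in range(n)) / n
            if cost < best_cost:
                best_site, best_cost = cand, cost
        chosen.append(best_site)
    return chosen

if __name__ == "__main__":
    # 4-node line topology 0 - 1 - 2 - 3, unit latency per hop (assumed)
    lat = [[abs(i - j) for j in range(4)] for i in range(4)]
    print(greedy_placement(lat, 1))  # -> [1], a median node of the line
    print(greedy_placement(lat, 2))  # -> [1, 2]
```

For the dedicated-network (planar) case the same objective applies with Euclidean distances in place of the hop-based latency matrix; only the distance function changes.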

Traffic Measurement and Performance Evaluation

Utilizing the OpenFlow interface, network state information and traffic statistics are acquired to drive traffic modeling and performance analysis in terms of delay, stability, capacity, and reliability. Given a topology and a peak flow initiation rate, the information required from each node in the network is determined, as is the frequency at which that information is reported to the controller. Machine learning techniques are applied to dynamically minimize the communication overhead of state information: by identifying data fusion opportunities, the amount of network state and traffic statistics required by the controller is reduced. Moreover, based on the collected traffic characteristics, proper traffic models are identified or developed to characterize the inherent correlation structures of inelastic traffic, such as QoS-sensitive video, audio, and data center traffic, and of elastic traffic, e.g., delay-insensitive email, file transfer, and HTTP traffic. These structural/behavioral traffic models are fitted to the collected statistics of the active flows, characterizing the traffic patterns for accurate performance evaluation and prediction.
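One simple way to reduce the reporting overhead described above is change-triggered updating. This is an illustrative sketch only, not the project's learning-based method: a switch reports a counter to the controller only when it deviates from the last reported value by more than a relative threshold.

```python
# Sketch: reducing state-update overhead with change-triggered reporting.
# Assumption (illustrative): a switch sends a byte-counter update to the
# controller only when it differs from the last reported value by more
# than `rel_threshold`; otherwise the controller keeps its old estimate.

class DeltaReporter:
    def __init__(self, rel_threshold=0.1):
        self.rel_threshold = rel_threshold
        self.last_reported = None

    def observe(self, value):
        """Return True if this sample should be sent to the controller."""
        if self.last_reported is None:
            self.last_reported = value
            return True
        change = abs(value - self.last_reported) / max(self.last_reported, 1e-9)
        if change > self.rel_threshold:
            self.last_reported = value
            return True
        return False

if __name__ == "__main__":
    r = DeltaReporter(rel_threshold=0.1)
    samples = [100, 103, 105, 120, 121, 200]
    sent = [s for s in samples if r.observe(s)]
    print(sent)  # -> [100, 120, 200]: only the first sample and >10% jumps
```

A learning-based variant, as the text proposes, would additionally exploit correlations across nodes (data fusion) so that one node's report can stand in for several neighbors'.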

Network Policy Management

Based on the traffic analysis, operators' policies, and Quality of Service (QoS) requirements, the network parameters are optimized by the following adaptive network policy management framework, which includes dynamic switch slicing, traffic-aware virtual scheduling, and load-balanced, mobility-aware routing to achieve load balancing, admission and congestion control, and failure recovery.

Dynamic switch slicing: Current computer networks deliver a wide range of applications, which generate traffic flows with significantly different properties. Because of this diversity, a single QoS provisioning scheme, such as one scheduling algorithm, can hardly satisfy the QoS requirements of all applications simultaneously. The objective of optimal dynamic switch slicing is to adaptively allocate network resources, e.g., bandwidth, to the existing virtual switches in such a way that each virtual switch achieves the throughput that satisfies the service demands of its users. On this basis, joint admission control and switch slicing solutions can be developed to admit the maximum number of flows while satisfying their respective rate requirements.
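The joint admission control and slicing objective has a simple instructive special case. The sketch below is illustrative, not the project's solution: when each flow has a fixed rate requirement and a slice has a fixed capacity, admitting flows in order of increasing rate maximizes the number admitted.

```python
# Sketch: admission control that maximizes the number of admitted flows.
# Assumptions (illustrative): each flow i demands rate_requirements[i]
# (e.g., Mbps), the slice has a fixed `capacity`, and requirements are
# additive. Taking the smallest demands first maximizes the admitted count.

def admit_max_flows(rate_requirements, capacity):
    """Return sorted indices of the admitted flows."""
    order = sorted(range(len(rate_requirements)),
                   key=lambda i: rate_requirements[i])
    admitted, used = [], 0.0
    for i in order:
        if used + rate_requirements[i] <= capacity:
            admitted.append(i)
            used += rate_requirements[i]
    return sorted(admitted)

if __name__ == "__main__":
    rates = [5.0, 1.0, 3.0, 2.0, 4.0]  # assumed per-flow demands, Mbps
    print(admit_max_flows(rates, capacity=7.0))  # -> [1, 2, 3] (1+2+3 = 6 <= 7)
```

The full problem in the text is harder because slice capacities are themselves decision variables coupled across virtual switches, but this greedy core shows the trade-off being optimized.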

Traffic-aware virtual scheduling: Through dynamic switch slicing, network resources are optimally distributed among virtual switches according to application demands. The objective of traffic-aware virtual scheduling is to distribute the resources of a virtual radio access point (RAP) among its flows, based on the statistical traffic properties of the associated applications, so as to provide predefined levels of QoS guarantee to each flow while maximizing the overall throughput of the virtual switch. Since virtual switches can serve different types of applications, each may need to operate under a different scheduling algorithm to achieve throughput-optimal QoS provisioning. The network applications are classified into two categories: light-tailed applications, such as voice/audio traffic, and heavy-tailed applications, such as video streams. A maximum-weight scheduling algorithm based on this classification will be developed to achieve effective throughput-optimal scheduling, with a particular focus on heavy-tailed traffic.
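The basic max-weight rule referenced above can be stated in a few lines. This is the classic textbook form, shown here for illustration; the project's variant, adapted to heavy-tailed traffic, would replace the raw queue length with a traffic-class-dependent weight.

```python
# Sketch: classic maximum-weight scheduling over a virtual switch's queues.
# Assumption (illustrative): in each slot the scheduler serves the flow
# whose weight = backlog * current service rate is largest, the standard
# throughput-optimal rule for light-tailed traffic.

def max_weight_pick(queue_lengths, link_rates):
    """Return the index of the queue maximizing backlog * rate."""
    weights = [q * r for q, r in zip(queue_lengths, link_rates)]
    return max(range(len(weights)), key=weights.__getitem__)

if __name__ == "__main__":
    queues = [4, 10, 2]      # assumed packet backlogs per flow
    rates = [3.0, 1.0, 5.0]  # assumed current service rates
    print(max_weight_pick(queues, rates))  # -> 0 (weight 12 beats 10 and 10)
```

Under heavy-tailed arrivals, plain queue-length weights are known to penalize light-tailed flows sharing the switch, which is precisely why the text calls for a classification-aware weight design.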

Load-balanced and mobility-aware routing: Since mobile Internet, multimedia, and cloud applications dominate computer networks, many traffic flows must be re-routed simultaneously when many mobile users hand over across radio access points connected to different switches. Such frequent handovers can lead to traffic congestion. The key objectives of load-balanced, mobility-aware routing are per-flow QoS guarantees, seamless mobility, and network-wide performance guarantees. To guarantee seamless mobility, a proactive routing approach based on mobility prediction will be proposed: new routes are computed before a user arrives at a RAP connected to a new switch, in such a way that network utilization is maximized while the QoS requirements remain satisfied.
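The proactive step can be sketched as path pre-computation toward the predicted next attachment switch. This is an illustrative shortest-path core (Dijkstra over link latencies), not the project's full utilization-aware routing; the graph and switch names are made up for the example.

```python
# Sketch: pre-computing a route toward a user's predicted next switch.
# Assumption (illustrative): graph maps each switch to a list of
# (neighbor, link_cost) pairs; the controller runs Dijkstra and installs
# the resulting path before the handover actually happens.
import heapq

def shortest_path(graph, src, dst):
    """Return (total_cost, path) from src to dst via Dijkstra."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

if __name__ == "__main__":
    g = {"s1": [("s2", 1), ("s3", 4)], "s2": [("s3", 1)], "s3": []}
    # pre-install the route toward the predicted next switch "s3"
    print(shortest_path(g, "s1", "s3"))  # -> (2, ['s1', 's2', 's3'])
```

In the load-balanced setting, the static link cost would be replaced by a congestion-dependent cost so that pre-computed routes also steer traffic away from loaded links.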

Network Control Update

Utilizing the parameters set by network policy management, the updates to flow information are determined, including flow definition, topology updates, routing, and resource allocation. Such updates rely on the following control message dissemination. Network control dissemination: according to the network state and traffic estimation, a new scheme determines three events: when new flow definitions are proactively installed at nodes, how frequently flows are redefined, and when flow table entries are aggregated. The scheme achieves network stability by filtering out transient conditions. The reliability and efficiency of our designs are further enhanced as follows. Controller discovery schemes for OpenFlow switches in the presence of controller failure ensure control plane robustness. Convergence analysis of the control plane under link or switch failures in the control network determines whether deploying out-of-band or dedicated control networks is necessary. In addition, our designs allow adaptive control policies that cope with network performance dynamics, thus providing great efficiency in SDN.
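Filtering transient conditions before issuing a control update can be illustrated with a simple persistence check. This sketch is illustrative only, not the project's scheme: a congestion event triggers flow redefinition only after it has persisted for several consecutive measurements, suppressing short-lived spikes.

```python
# Sketch: filtering transient conditions before triggering a control update.
# Assumption (illustrative): a link is declared congested (triggering flow
# redefinition) only after its utilization exceeds `threshold` for `hold`
# consecutive samples, which keeps the control plane stable under spikes.

class TransientFilter:
    def __init__(self, threshold=0.8, hold=3):
        self.threshold = threshold
        self.hold = hold
        self.count = 0  # consecutive over-threshold samples seen so far

    def update(self, utilization):
        """Return True only once congestion has persisted `hold` samples."""
        if utilization > self.threshold:
            self.count += 1
        else:
            self.count = 0
        return self.count >= self.hold

if __name__ == "__main__":
    f = TransientFilter(threshold=0.8, hold=3)
    trace = [0.9, 0.95, 0.5, 0.85, 0.9, 0.92, 0.91]
    print([f.update(u) for u in trace])
    # the early spike (two samples) is filtered; only the sustained
    # congestion starting at 0.85 eventually triggers an update
```

The `hold` parameter trades reaction speed against stability: larger values filter more transients but delay the response to genuine congestion.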
