Browsing by Author "Blose, Max"
- Item (Open Access): Development of scalable hybrid switching-based software-defined networking using reinforcement learning (2024). Blose, Max; Akinyemi, Lateef Adesola; Nicolls, Frederick.
  As global internet traffic continues to grow exponentially, there is a growing need for advanced switching technologies to manage this growth. One of the most recent innovations is Software Defined Networking (SDN), which decouples the infrastructure (data) layer from a logically centralized control layer. SDN is a networking approach that provides network agility, programming flexibility, and enhanced network performance over traditional switching networks. Despite these benefits, scalability challenges must be addressed to guarantee optimal, rapid data traffic switching within service provider network infrastructure, including data centre environments. These scalability issues are inherent to SDN's logically centralized control layer: whenever a packet belonging to a new flow must be forwarded, the OpenFlow switch has to consult the logically centralized SDN controller over the southbound OpenFlow Application Programming Interface (API). This increases the communication overhead between the two layers, and the resulting control-layer traffic can impede scalability because of the controller's limited processing memory. There is therefore a strong incentive to enhance the scalability of SDN operations. We address the identified SDN scalability issues by creating a scalable hybrid switching solution based on machine learning. We propose an SDN OpenFlow model switch that collaborates with a traditional switch, forming a scalable framework of Hybrid Routing with Reinforcement Learning (sHRRL).
  We implement a reinforcement learning algorithm that randomly explores new routes and discovers the optimal path through Q-learning. This simple, model-free form of reinforcement learning models the network as a Markov Decision Process and uses the Bellman equation to iteratively update the Q-values in a Q-table for every transition of the network environment state, until the Q-function converges to the best Q-values. A greedy strategy then guides the reinforcement learning agent in selecting the most suitable Q-values from the Q-table. To ensure that the algorithm discovers a sufficient number of candidate routes and builds an adequate understanding of the network environment, enough training and evaluation episodes should be conducted. The proposed hybrid switching methodology was benchmarked against the standard SDN OpenFlow switch on network performance metrics including average throughput, packet exchange transmission rates, CPU load, and delay. Statistical comparison of the test results showed that the number of packets exchanged by the hybrid switch was more than sixty percent higher than for the OpenFlow switch, which saturated first. The average throughput results show that the hybrid routing scheme achieves high throughput. The OpenFlow switch is the first to reach saturation because it does not explore all available paths. The hybrid switch is also more efficient in terms of CPU load: the average CPU load of the OpenFlow switch is fifteen percent (15%) higher than that of the hybrid switch. Our analysis of the simulation data suggests that the Q-learning-based reinforcement learning framework, sHRRL, enhances the performance of the hybrid switch compared to the OpenFlow switch.
  We are therefore of the opinion that the proposed hybrid switching model, which utilizes machine learning algorithms, can address the scalability issues in the design of SDN controller networks, particularly in data centre environments where high switching speeds are of paramount importance.
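  The Q-learning loop described in the abstract, Bellman updates over a Q-table with a greedy selection strategy, can be sketched as follows. This is a minimal illustrative example on a hypothetical four-node topology, not the authors' sHRRL implementation; the graph, reward values, and hyperparameters are all assumptions made for the sketch.

  ```python
  import random

  # Hypothetical toy topology: adjacency list of switches; we learn a route
  # from SOURCE to GOAL. All names and values here are illustrative only.
  GRAPH = {
      'A': ['B', 'C'],
      'B': ['A', 'C', 'D'],
      'C': ['A', 'B', 'D'],
      'D': ['B', 'C'],
  }
  SOURCE, GOAL = 'A', 'D'
  ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

  # Q-table: one entry per (state, next-hop) pair, initialised to zero.
  Q = {(s, a): 0.0 for s in GRAPH for a in GRAPH[s]}

  random.seed(0)
  for episode in range(500):                 # training episodes
      state = SOURCE
      for _ in range(50):                    # cap hops per episode
          # Epsilon-greedy: explore a random neighbour, or exploit the best Q-value.
          if random.random() < EPSILON:
              action = random.choice(GRAPH[state])
          else:
              action = max(GRAPH[state], key=lambda a: Q[(state, a)])
          # Reward: +10 for reaching the destination, -1 per extra hop.
          reward = 10.0 if action == GOAL else -1.0
          best_next = 0.0 if action == GOAL else max(Q[(action, a)] for a in GRAPH[action])
          # Bellman update of the Q-value for this state transition.
          Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
          state = action
          if state == GOAL:
              break

  # Greedy rollout of the learned policy yields the discovered route.
  path, state = [SOURCE], SOURCE
  while state != GOAL and len(path) < 10:
      state = max(GRAPH[state], key=lambda a: Q[(state, a)])
      path.append(state)
  print(path)  # a two-hop route from A to D
  ```

  The per-hop penalty steers the agent toward short routes, while the epsilon term preserves exploration so that alternative paths keep being sampled, mirroring the explore/exploit trade-off the abstract describes.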