Browsing by Author "Mwangama, Joyce"
Now showing 1 - 15 of 15
- Item (Open Access): Admission Control in Sliced Networks, with Predictive Analytics (2023). Ngufor, Perose; Mwangama, Joyce; Lysko, Albert. Over the years, the telecommunications industry has constantly adapted to accommodate the rising demand for more specialised network connectivity. Network slicing was introduced as a solution for providing specialised networks to customers. However, network slicing has a set of objectives that require legacy network functions to be revisited and updated; one such function is admission control. This work proposes two admission control algorithms and investigates how the admission control function can be improved by incorporating traffic forecasting into the admission control process. We present the state of the art in admission control for sliced networks and in the application of predictive analytics to admission control. We design and evaluate two intra-slice admission control algorithms, namely the Decision Matrix algorithm and the Utility Index algorithm. A real IP network dataset, containing network flows collected from the University of Cauca, Popayán, is used for the simulation and evaluation of these algorithms. The proposed algorithms exhibited different strengths: the Utility Index algorithm was highly profitable to the operator, while the Decision Matrix algorithm was better suited to traffic with a large proportion of high-priority flows. A traffic forecasting model based on Holt-Winters Exponential Smoothing was then implemented. This model was trained on data from the real IP network dataset and incorporated into the admission control process. For prediction-based admission control, the forecasting model was used to predict the resource requirements of future traffic in each slice and pre-emptively make provisions for the upcoming traffic.
The performance of the intra-slice admission control algorithms with and without the influence of traffic forecasting was analysed, and it was found that using predictive analytics to predict future slice traffic allows dynamic allocation of slice resources. Compared to admission control without predictions, prediction-based admission control showed better performance in terms of blocking probability, system utilisation, and profitability to the operator.
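The abstract leaves the forecasting step at a high level; the following is a minimal, hypothetical sketch of how additive Holt-Winters smoothing could feed a slice admission decision. All function names, parameter values, and the toy traffic trace are illustrative assumptions, not details from the dissertation:

```python
# Hypothetical sketch: additive Holt-Winters forecasting feeding a slice
# admission decision. Names and numbers are illustrative only.

def holt_winters_forecast(series, season_len, horizon,
                          alpha=0.5, beta=0.1, gamma=0.1):
    """Additive Holt-Winters; returns `horizon` forecasts past the series."""
    m = season_len
    level = sum(series[:m]) / m
    trend = (sum(series[m:2 * m]) - sum(series[:m])) / (m * m)
    seasonal = [series[i] - level for i in range(m)]
    for t in range(m, len(series)):
        prev_level = level
        level = alpha * (series[t] - seasonal[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seasonal[t % m] = gamma * (series[t] - level) + (1 - gamma) * seasonal[t % m]
    t = len(series)
    return [level + h * trend + seasonal[(t + h - 1) % m]
            for h in range(1, horizon + 1)]

def admit_request(forecast_load, requested_bw, slice_capacity):
    """Admit only if the predicted peak load plus the new request fits the slice."""
    return max(forecast_load) + requested_bw <= slice_capacity

# Toy slice traffic with a period-4 seasonal pattern (Mbps).
history = [10, 20, 30, 20] * 6
forecast = holt_winters_forecast(history, season_len=4, horizon=4)
print(admit_request(forecast, requested_bw=40, slice_capacity=100))
```

The admission rule here is deliberately simple (peak forecast plus request against capacity); the dissertation's Decision Matrix and Utility Index algorithms are more elaborate.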
- Item (Open Access): An open vendor agnostic fog computing framework for mission critical and data dense applications (2018). Chirindo, Tasimba Denford David; Mwangama, Joyce. Digital innovation from the Internet of Things (IoT), Artificial Intelligence, the Tactile Internet and Industry 4.0 applications is transforming the way we work, commute, shop and play. Current deployment strategies for these applications emphasize mandatory cloud connectivity. However, this is not feasible in many real-world situations, particularly where data-dense and mission-critical applications with stringent requirements are concerned. Cloud computing offers virtually unlimited on-demand computing, storage and networking power for industry to leverage. However, as its scope and scale continue to expand, limitations such as high latency and shortcomings in accessibility, security and compliance prevent its greater use and applicability, particularly in scenarios where real-time communication and rapid, high-quality computing are a necessity. Fog computing hopes to bridge this gap by introducing an intermediary computing layer between end users and the cloud. At present, fog computing architectures exist only in specialized areas, with current implementations being proprietary, vendor-locked and requiring dramatic, non-transferable changes to hardware and software to meet vendor requirements. Moreover, fog computing is still a recent area, so the state of the art remains incipient regarding architecture definitions, middleware and real-world implementations. There is therefore an urgent need for standardization of these technologies. This is of paramount importance: otherwise, multiple and not necessarily compatible solutions will exist, which could lead to a fragmented marketplace that would fail to grow.
In an effort to address these limitations in current fog architectures, this dissertation proposes and implements a novel fog computing architecture that aligns the reference architectures of a leading industry consortium, OpenFog, and a leading standards-setting organization, the European Telecommunications Standards Institute (ETSI). This cooperation between industry, academia and regulatory institutions aims to make it easier for both application developers and infrastructure solution providers to develop towards a common, open and interoperable fog computing environment. The proposed framework is modular, plug-in based, generic, open, standards-compliant and vendor-agnostic, and runs on high-volume standard hardware while preserving the benefits offered by public clouds such as containerization, virtualization, orchestration, manageability and efficiency. Moreover, for the various stakeholders in the fog value chain, where striking a balance between information technology and business operations is key, this thesis offers insights and best practices to help achieve these multiple and sometimes competing goals. The proposed framework was implemented in a testbed environment built entirely from free and open-source software, creating a convenient point of departure for further research by others. Two geographically distributed fog node data centres and a cloud management and orchestration tool were set up in the testbed. While this evaluation framework and practical implementation demonstrated proof of concept, further evaluations were conducted to benchmark performance against existing alternative solutions. These evaluations were based on a prototype industrial IoT application deployed on the testbed to evaluate the impact of the Open Vendor Agnostic Fog Framework (OVAFF) solution on application performance.
The implementation showed that the proposed OVAFF solution is feasible, implementable and supports distributed edge cloud data centres. Results from the prototype application showed that OVAFF can provide up to tenfold higher throughput and substantially lower latency, jitter and packet loss rates than remote clouds. Moreover, OVAFF is also superior on non-performance attributes such as data reduction, compliance and geographical locality of control. In addition, the results pointed towards the viability of open business models in the fog ecosystem, such as federated infrastructure sharing and a fog marketplace. Finally, this thesis tackled the open challenges in current fog systems outlined in the research motivation and problem definition, such as orchestration, distribution, tiering, heterogeneity and resilience.
- Item (Open Access): Comparative Analysis of Coulomb Counting and Extended Kalman Filter for State of Charge Estimation in Battery Management Systems (2024). Francis, Christopher; Mwangama, Joyce; Awodele, Kehinde. State of Charge (SOC) is a measure of the amount of available charge in a battery cell. SOC cannot be measured directly because it is a function of the stoichiometric concentration of ions in the cell; hence current and voltage measurements are used to obtain an accurate and precise estimate. Various authors have proposed methods for estimating SOC; however, most have presented only high-level reports. In this research, a comparative investigation of the traditional Coulomb Counting (CC) method and the state-of-the-art Extended Kalman Filter (EKF) method for SOC estimation was undertaken using a model-based approach, involving simulation in Simulink and Simscape. Besides a current integration model, a cell model was developed and parameterized using pulse discharge test data from a lithium-based Nickel Cobalt Aluminium (NCA) oxide battery. The EKF was implemented to estimate the SOC of the cell model, and the performance of the estimation models was evaluated on the metrics of RMSE and convergence time. It was concluded that the EKF method outperformed the CC method as a state-of-the-art SOC estimation technique for battery management systems (BMS) employed by battery developers for the EV use case.
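As a hedged illustration of the current-integration idea behind Coulomb counting (the simpler of the two methods compared above), here is a short sketch. The cell values, names, and sampling setup are invented for demonstration and are not taken from the thesis, which works in Simulink/Simscape:

```python
# Illustrative Coulomb counting: integrate measured current over time to
# track SOC. Discharge current is taken as positive. Toy values only.

def coulomb_count(soc0, currents_a, dt_s, capacity_ah):
    """Return the SOC trace after integrating each current sample."""
    soc = soc0
    capacity_as = capacity_ah * 3600.0  # amp-hours -> amp-seconds
    history = []
    for i in currents_a:
        soc -= i * dt_s / capacity_as   # charge removed this sample
        history.append(soc)
    return history

# A 1 Ah cell discharged at a constant 1 A for 30 minutes loses 50% SOC.
trace = coulomb_count(soc0=1.0, currents_a=[1.0] * 1800, dt_s=1.0, capacity_ah=1.0)
print(round(trace[-1], 3))
```

This also makes the method's weakness visible: any constant bias in the current sensor integrates into unbounded SOC drift, which is what motivates the voltage-corrected EKF approach the thesis favours.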
- Item (Open Access): Design of a backend system to integrate health information systems – case study: Ministry of Health and Social Services (MoHSS), Namibia (2021). Shoopala, Anna-Liisa; Mwangama, Joyce. Information systems are key to institutional organization and decision-making. In the healthcare field there is a great deal of data flow, from patient demographic information (through electronic medical records), pharmaceutical data on patients' medication dispensing, and laboratory data, to hospital organization information such as bed allocation. A healthcare information system manages, stores, transmits and displays healthcare data. Most healthcare data in Namibia are unstructured, and there is a heterogeneous environment in which different health information systems are distributed across different departments [1][2]. A lot of data is generated but never used in decision-making due to this fragmentation. Integrating these systems would channel a flood of big data into a centralized database. With information technology and new-generation networks becoming sought-after innovations in everyday operations, accessing big data through information applications and systems in an integrated way will facilitate practical work in healthcare. The aim of this dissertation is to find a way in which these vertical health information systems can be integrated into a unified system. A prototype of a back-end system is used to illustrate how the healthcare systems presently in place at Ministry of Health and Social Services facilities in Namibia can be integrated to promote more unified system usage. The system uses prototypes of subsystems representing the current systems to illustrate how they operate and, ultimately, how integration can improve service delivery in the ministry.
The proposed system is expected to benefit the ministry in its daily operations, as it enables instant authorized access to data without intermediaries. It will improve and preserve data integrity by eliminating multiple handling of data through a single data entry point. With one entry point to the systems, manual work will be reduced, which also reduces cost. Overall, it will ensure efficiency and thereby increase the quality of service provided.
- Item (Open Access): Design, Implementation, and Evaluation of a Telehaptics System over a 4G Mobile Network Testbed (2023). Chepkoech, Maurine; Mwangama, Joyce. Telesurgery, the remote provision of surgical services using robotics and information communication technologies, will contribute to addressing the shortage of surgeons and the limited access to advanced surgical services, especially in low- and middle-income countries (LMICs). However, the realization of these systems is hampered by a lack of supporting technologies, unreliable communication networks, and costly surgical equipment. For instance, most of the existing telesurgery systems only rely on audio and visual feedback during an operation. However, achieving high-fidelity remote operation requires adequate involvement of human senses beyond audio and visual data. This can be done by integrating haptic feedback. Haptic feedback entails the transmission and perception of the human sense of touch. In telesurgery, haptics will improve the accuracy of the procedure by enabling the surgeon to feel the consistency of force against body tissues and differentiate between bones, flesh, and body fluids. Therefore, this work presents the design and development of a telehaptics system for transmitting haptic feedback over a distance for possible integration into teleoperation systems, such as telesurgery. Haptic communication requires stringent network performance, including low end-to-end latency of less than 10 ms and ultra-reliability, i.e., greater than 99.99%, for an interactive experience. These requirements can be achieved over modern mobile networks such as 4G and 5G. Unfortunately, mobile network coverage in most LMICs is limited, with the dominant technologies being 2G and 3G. Consequently, 4G and 5G mobile network testbeds have been implemented using open-source software frameworks and commercial radio equipment to achieve haptic communication's stringent latency and reliability requirements.
The developed testbed provides a platform to evaluate and validate the telehaptics system before its integration into real-world telesurgery systems. The open-source approach adopted in realizing the 4G and 5G mobile network testbeds overcomes rigid and expensive proprietary restrictions and enables permissionless, faster, cheaper, and more flexible modern network deployment. All haptic data packets were delivered to the slave node in the forward transmission loop and to the master node in the feedback loop, and an average end-to-end latency of 10 ms was achieved in the evaluation of the prototype telehaptics system over the configured 4G mobile network testbed. On the other hand, while the 5G mobile network testbed was configured and deployed, its connection was unstable, and its average end-to-end latency of 12 ms was above the requirements for haptic communication. This was due to a bandwidth limitation in the current release of srsRAN, the open-source 5G radio access network software stack used in the testbed implementation. Consequently, due to the unstable connection, the 5G mobile network testbed could not sustain the experimentation period for the prototype telehaptics system. Hence, future development of this work will entail fine-tuning the 5G network testbed to achieve less than 5 ms end-to-end latency and greater than 99.999% reliability, as specified in 3rd Generation Partnership Project standards.
- Item (Open Access): Impact of network security on SDN controller performance (University of Cape Town, 2020). Kodzai, Carlton; Mwangama, Joyce. Internet Protocol network architectures are gradually evolving from legacy flat networks to modern software-defined networking approaches. This evolution is crucial, as it provides the supporting network structure, architecture and framework for technologies that are also evolving into software-based systems, such as Network Functions Virtualization (NFV). The connectivity requirements resulting from this paradigm shift are driven by new bandwidth demands emanating from the huge number of new use cases in 5G networks and future Internet of Things (IoT) technologies. Network security remains a critical requirement of these modern network architectures for delivering a highly available, reliable service with guaranteed quality of service. Unprotected networks usually experience service interruptions and system unavailability due to network attacks, such as denial-of-service and virus attacks, which can render key network components unusable or totally unavailable. With the centralized approach of the Software Defined Networking architecture, the SDN controller becomes a key network point that is susceptible to internal and external attacks from hackers and many forms of network breaches. As the heart of the SDN network, it is a single point of failure, and it is crucial that its security be guaranteed to avoid unnecessary, irrecoverable loss of valuable production time, data and money. The SDN controller design should be guided by a robust security policy framework with a sound remedy and business continuity plan in the event of any form of security attack. Security designs and research on SDN controllers have focused on achieving the most reliable and scalable platforms through self-healing and replication processes.
This dissertation proposes a security solution for the SDN controller and evaluates the impact of that solution on overall SDN controller performance. As part of the research, a literature review of the SDN controller and related technologies was carried out. The SDN controller interfaces were analyzed, and the security threats targeting these interfaces were explored. Guided by a robust security framework, the experiments applied a security solution that analyzed attacks from external network sources, focusing on securing the southbound interface using a netfilter/iptables firewall on the SDN controller. The SDN controller was subjected to denial-of-service attack packets, and the impact of the mitigation action on the controller's resources was observed. Because the network security layer introduced additional overhead on the SDN controller's processors, the security feature negatively affected controller performance. Quantifying this security overhead will inform future designs and could help identify a trade-off point between the level of network security and overall system performance under security policies. The research analyzed and determined the performance impact of this crucial design aspect and how the additional load due to network security affected normal SDN controller operation.
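The mitigation in this work is a netfilter/iptables firewall; as a language-neutral illustration of the rate-limiting principle such a firewall applies (similar in spirit to iptables' `limit` match), here is a small token-bucket simulation. This is a sketch of the concept, not the dissertation's actual firewall configuration:

```python
# Illustrative token-bucket rate limiter: each source is allowed a small
# burst, then packets are throttled to a steady rate; a flood is mostly
# dropped. Timestamps are passed in explicitly so the run is deterministic.

class TokenBucket:
    def __init__(self, rate_pps, burst):
        self.rate = rate_pps    # tokens refilled per second
        self.burst = burst      # bucket capacity (max burst size)
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True         # packet accepted
        return False            # packet dropped

bucket = TokenBucket(rate_pps=10, burst=5)
# A flood of 100 packets arriving within about one millisecond:
accepted = sum(bucket.allow(now=0.00001 * i) for i in range(100))
print(accepted)
```

Only the initial burst gets through; the rest of the flood is dropped, which is the resource-protection effect the dissertation measures, at the cost of the processing overhead it quantifies.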
- Item (Open Access): Improving the Reliability of Smart Distribution Grids Using Software-based Networking (2021). Brown, Gerhard Errol; Ventura, Neco; Mwangama, Joyce. Smart grid is a combination of technologies that emerged in response to the rapid changes in the way humans generate, transfer, distribute and use energy. The smart grid paradigm shifts the focus from bulk generation with centralised grid control to distributed electricity generation, energy storage and greater consumer participation in grid operations. Many organisations and institutions are contributing to this initiative by developing new frameworks, architectures and standards that aim to support its adoption and improvement. Most of these new developments are based on modern Information Communication Technology (ICT) such as the Internet of Things (IoT). Although many governments have made commitments to pursue smart grid as part of their national policies and strategies, some nations, especially those with developing economies, have struggled to make significant strides in smart grid deployment. An important characteristic of smart grid is its resilience and its ability to self-heal, enabled by better use of grid knowledge and the distribution of grid intelligence. This creates a challenge for many utilities that need to improve their existing grid ICT infrastructure to meet more stringent communication requirements. These requirements will likely include very high communication reliability with high throughput and low latency. Although most networks can be developed or improved to meet these requirements using modern network hardware, the cost and complexity involved in implementing such designs in large-scale distribution grid networks may be too high. To overcome this challenge, alternative ways of designing smart grid communication networks using software approaches are needed.
This study proposes a new software-based networking platform, based on Industrial Internet of Things (IIoT) technology, that aims to support smart grid reliability by enabling reliability-centred smart grid systems and by reacting immediately to communication problems using real-time monitoring techniques. Using the principles of software-defined networking (SDN), network functions virtualisation (NFV) and machine-to-machine (M2M) communication, the design proposed in this work aims to provide a more flexible and affordable approach to developing and maintaining large-scale grid communication networks while offering several features that improve grid reliability and performance. Experimental simulations were conducted with this architecture implemented in an emulated network environment, using a topology based on a model of a real city distribution grid. Results from the experimental evaluation show that a software-based communication network is easy to set up, maintain and scale using virtual machines capable of running on existing grid IT infrastructure. Furthermore, the results show that, by using the features of SDN, NFV and M2M, a smart grid communication network can be designed that automatically detects and recovers from at least six different simulated communication failures without impacting the operation of a functional smart grid application supported by the network. The results also support the platform's ability to reduce network congestion using a scheduled network data buffering service, yielding end-to-end network latency improvements from 0.6 seconds to 0.05 milliseconds. From these results, we conclude that software-based networking offers promising design alternatives for smart distribution grids, capable of improving the grid's overall reliability.
This conclusion is drawn from the fact that software-based networks not only offer many features that can improve communication reliability and performance, but also have the potential to reduce the cost and complexity of network implementation and maintenance. This study can potentially improve the uptake of smart grid as it offers utilities design options that are more flexible and affordable to implement and maintain.
- Item (Open Access): IP addressing, transition and security in 5G networks (2018). Bartocho, Evans Kiptoo; Mwangama, Joyce. The number of devices on the Internet is ever increasing, and there is a need for reliable IP addressing. The 5G network will be built on two main technologies, SDN and NFV, which will make it elastic and agile compared to its predecessors. Elasticity will ensure that additional devices can always be added to the network. IPv4 addresses are already depleted and cannot support the expansion of the Internet needed to realize future networks. IPv6 addressing has been proposed to support 5G networking because of the sufficient number of addresses the protocol provides. However, IPv4 addressing will still be used concurrently with IPv6 until networks become fully IPv6-based. The structure of the IPv4 header differs from that of the IPv6 header, so the two protocols are incompatible, yet seamless intercommunication between devices running IPv4 and IPv6 is needed in future networks. Three technologies, namely Dual Stack, Tunneling and Translation, have been proposed to ensure a smooth transition from IPv4 to IPv6. This dissertation demonstrates tunneling of IPv6 over IPv4. This research also reviews network security threats from past networks that are likely to be experienced in 5G networks; to counter them, reliable IP security strategies used in current networks are proposed for use in next-generation networks. This dissertation evaluates and analyzes IPv4-only, IPv6-only and tunneling models in an SDN network environment. The performance of an IPv4-only network is compared to that of an IPv6-only network, and devices addressed with both protocols are connected. The results obtained illustrate that IPv4 and IPv6 devices can communicate effectively in a 5G network environment. In addition, a tunnel is used to run the IPv6 protocol over an IPv4 network.
The devices on both ends of the tunnel could communicate with each other effectively.
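The tunneling demonstrated above rests on a simple framing idea: the IPv6 packet travels as the payload of an IPv4 packet whose protocol field is 41 (6in4). The sketch below illustrates only that framing; the header is deliberately minimal (checksum left at zero, no fragmentation handling) and is an illustration, not the dissertation's tunnel implementation or a full RFC 4213 stack:

```python
# Minimal 6in4 framing sketch: prepend a 20-byte IPv4 header with
# protocol = 41 (IPv6) to a raw IPv6 packet. Checksum is left at 0
# for brevity; a real stack must compute it.
import struct

IPPROTO_IPV6 = 41  # IPv4 protocol number for encapsulated IPv6

def encapsulate(ipv6_packet, src_v4, dst_v4):
    total_len = 20 + len(ipv6_packet)
    header = struct.pack('!BBHHHBBH4s4s',
                         (4 << 4) | 5,            # version 4, IHL 5 words
                         0, total_len,            # TOS, total length
                         0, 0,                    # identification, flags/frag
                         64, IPPROTO_IPV6, 0,     # TTL, protocol, checksum
                         src_v4, dst_v4)
    return header + ipv6_packet

# A 40-byte dummy IPv6 header (version nibble 6) between two private IPv4 hosts.
pkt = encapsulate(b'\x60' + b'\x00' * 39, b'\x0a\x00\x00\x01', b'\x0a\x00\x00\x02')
print(len(pkt), pkt[9])
```

The receiving tunnel endpoint simply strips the outer 20 bytes when it sees protocol 41 and hands the inner packet to its IPv6 stack.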
- Item (Open Access): Leveraging Next Generation Mobile Networks for Drone Telemetry and Payload Communication (2023). Mombeshora, Ngonidzashe; Mwangama, Joyce. Small Unmanned Aerial Systems (sUAS) have seen increasing adoption in recent years, by hobbyists for leisure and by industry for business and commercial use, and as such, use-case applications vary enormously. Such use cases include, but are not limited to, drone delivery, precision agriculture, search and rescue, and surveillance. As adoption continues to increase, so do the use cases and drone applications. However, drones have much more to offer, and their capabilities need not be limited to current applications. A plethora of drone applications have not yet been made possible, mainly due to technological limitations. The main limitation addressed in this project pertains to communication. Drone use cases such as 8K video streaming, Augmented Reality and Virtual Reality (AR/VR), autonomous flights, and long-range surveillance requiring Beyond Visual Line of Sight (BVLOS) command and control are yet to be realized efficiently enough for commercial viability. The communication limitations to be addressed include line-of-sight usage, data rates and latencies. This project investigates the use of mobile/cellular networks, specifically 5G (Fifth Generation) mobile networks, as a feasible option to address these limitations. Experiments are conducted by creating a mobile network testbed using open-source mobile network stacks such as OpenAirInterface and integrating it with current drone communication technologies such as MAVLink, to realize a drone communication stack that utilizes mobile networks for communication.
4G Long Term Evolution (LTE), 5G Non-Standalone (NSA) and 5G Standalone (SA) testbed stacks are implemented, and flight tests are carried out to draw out and assess the advantages and disadvantages that cellular networks bring, how 5G can push the drone ecosystem towards more novel and as-yet-unrealized use-case applications, and the viability of these mobile network realisations in their current state and development roadmaps. It should be noted that, at the time of writing, open-source 5G testbeds are still early in their development and hence might not perform according to theoretical standards and expectations.
- Item (Open Access): Relay assisted device-to-device communication with channel uncertainty (2019). Uyoata, Uyoata Etuk; Mwangama, Joyce; Dlodlo, Mqhele. The gains of direct communication between user equipment in a network may not be fully realised due to the separation between the user equipment and the fading experienced on the channel between them. To fully realise the gains that direct (device-to-device) communication promises, idle user equipment can be exploited to serve as relays. The availability of potential relay user equipment creates a problem: how to select the relay user equipment. Moreover, unlike infrastructure relays, user equipment are carried around by people, and these users are self-interested. Thus the relay selection problem goes beyond choosing which device should assist in relayed communication; it must also cater for user self-interest. Another problem in wireless communication is the unavailability of perfect channel state information. This reality creates uncertainty in the channel, so channel uncertainty awareness needs to be a consideration in designing selection algorithms. The work in this thesis therefore considers the design of relay user equipment selection algorithms that are not only device-centric but also relay-user-equipment-centric, and that are channel uncertainty aware. Firstly, a stable-matching-based relay user equipment selection algorithm is put forward for underlay device-to-device communication. A channel-uncertainty-aware approach is proposed to cater for imperfect channel state information at the devices, and the algorithm is combined with a rate-based mode selection algorithm. Next, to account for the queue state at the relay user equipment, a cross-layer selection algorithm is proposed for a two-way decode-and-forward relay setup.
The proposed algorithm employs a deterministic uncertainty constraint on the interference channel, solving the selection problem heuristically. Then a cluster head selection algorithm is proposed for device-to-device group communication constrained by channel uncertainty in the interference channel. The formulated rate maximization problem is solved for deterministic and probabilistic constraint scenarios, and the problem is extended to a multiple-input single-output scenario for which robust beamforming is designed. Finally, relay utility and social-distance-based selection algorithms are proposed for a full-duplex decode-and-forward device-to-device communication setup, with a worst-case approach proposed for the full channel uncertainty scenario. Results from computer simulations indicate that the proposed algorithms offer spectral efficiency, fairness and energy efficiency gains. The results also clearly show the deterioration in network performance when perfect channel state information is assumed.
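The stable-matching formulation mentioned above is a deferred-acceptance (Gale-Shapley style) pairing between D2D pairs and relay UEs. The sketch below shows only that matching skeleton; the preference lists, which in the thesis would be derived from channel and utility metrics, are hard-coded illustrative assumptions:

```python
# Illustrative Gale-Shapley matching: D2D pairs propose to relay UEs in
# preference order; relays tentatively accept and trade up if a preferred
# pair proposes later. Preference lists here are invented for the demo.

def stable_match(pair_prefs, relay_prefs):
    """pair_prefs: {pair: [relays, best first]}; relay_prefs likewise."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in relay_prefs.items()}
    engaged = {}                        # relay -> currently accepted pair
    free = list(pair_prefs)             # pairs still proposing
    next_choice = {p: 0 for p in pair_prefs}
    while free:
        pair = free.pop()
        relay = pair_prefs[pair][next_choice[pair]]
        next_choice[pair] += 1
        if relay not in engaged:
            engaged[relay] = pair
        elif rank[relay][pair] < rank[relay][engaged[relay]]:
            free.append(engaged[relay])  # relay prefers the new proposer
            engaged[relay] = pair
        else:
            free.append(pair)            # rejected; pair tries its next relay
    return {p: r for r, p in engaged.items()}

pairs = {'d1': ['r1', 'r2'], 'd2': ['r1', 'r2']}   # e.g. ranked by channel gain
relays = {'r1': ['d2', 'd1'], 'r2': ['d1', 'd2']}  # e.g. ranked by relay utility
print(stable_match(pairs, relays))
```

Stability here captures exactly the self-interest concern in the abstract: no pair and relay would both prefer each other over their assigned partners, so no participant has an incentive to deviate.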
- Item (Open Access): SDN based security using cognitive algorithm against DDOS (2018). Faizan, Shah Ali; Mwangama, Joyce. The internet and communication industry continues to develop new technologies rapidly, which has caused a boom in smart and networking device manufacturing. With new trends, operators are constantly working to deploy multiple systems that cater for the needs of all users. Demands for higher bandwidth utilization and flexibility required new networking solutions, which paved the way for Software Defined Networking (SDN). SDN is a centralized platform that works with other technologies, such as Network Function Virtualization (NFV), to offer reliable, flexible and centrally controllable network solutions. It offers remote access control with a logical design of the system, security and resource management. Despite their advantages, both traditional and newly developing networks present numerous security challenges. With growing numbers of users worldwide, bandwidth-related security risks such as Distributed Denial of Service (DDoS) are of grave concern. This encourages reliable, rapid-response solutions such as Cognitive Algorithms (CA), which can adapt to a threat in a real-time environment. This dissertation proposes the use of a CA to deploy security and mitigation measures against potential DDoS flooding attacks, to avoid network failure and memory depletion in SDN. The proof-of-concept (PoC) experiment demonstrated greater network resource utilization by limiting the attack while mitigation policies were implemented. It also showed that a CA can adapt to growing and evolving attack strength and counter it as far as possible without operator intervention. The groundwork for future CA- and Artificial Intelligence (AI)-based security solutions has been established.
- Item (Open Access): Software defined networking for radio telescopes: a case study on the applicability of SDN for MeerKAT (2022). Slabber, Martin; Ventura, Neco; Mwangama, Joyce. Scientific instruments like radio telescopes depend on high-performance networks for internal data exchange. The high-bandwidth data exchange between the components of a radio telescope makes use of multicast networking. Complex multicast networks are hard to maintain and grow, and specific installations require modified network switches. This study evaluates Software Defined Networking (SDN) for use in the MeerKAT radio telescope to alleviate the management complexity and allow for a vendor-neutral implementation. The purpose of this dissertation is to verify that an SDN multicast network can produce suitable paths for data flow through the network and to see whether such an implementation is easier to maintain and grow. There is little literature regarding SDN for radio telescope networks; however, there is considerable work in which different aspects of SDN are discussed and demonstrated for video streaming. SDN with multicast for video streaming, although simpler, forms the background research. Considerable work went into understanding and documenting the different aspects of a radio telescope affecting the data network. The telescope network controller, a new concept introduced in this work, generates the OpenFlow rules required by the SDN controller. It is fitted with two placement algorithms to demonstrate its flexibility. Both algorithms are suitable for the expected workload, but they produce very different traffic patterns; the two are not compared to one another, as they were created to demonstrate the ease of adding domain-specific knowledge to an SDN. The telescope network controller makes it easy to introduce and use new flow placement algorithms, thus making traffic engineering feasible for the radio telescope.
Complex multicast networks are easier to maintain and grow with SDN. SDN allows customised packet forwarding rules typically unattainable with standard routing and other standard network protocols and implementations. A radio telescope with a software-defined data network is resilient, easier to maintain, vendor-neutral, and possesses advanced traffic engineering mechanisms.
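The kind of flow placement a controller like this performs can be sketched as computing a shortest-path multicast tree and deriving per-switch forwarding entries from it. The topology, switch names, and rule format below are hypothetical; the dissertation's placement algorithms and OpenFlow rule generation are more involved.

```python
# Sketch: build a shortest-path multicast tree with BFS and derive per-switch
# next-hop sets, in the spirit of a controller generating OpenFlow multicast
# rules. Topology and rule representation are illustrative assumptions.
from collections import deque

def shortest_path_tree(adj, source, receivers):
    """Return {switch: set(next_hops)} covering all receivers from source."""
    parent = {source: None}
    queue = deque([source])
    while queue:                      # plain BFS gives shortest hop-count paths
        node = queue.popleft()
        for neighbour in adj[node]:
            if neighbour not in parent:
                parent[neighbour] = node
                queue.append(neighbour)
    rules = {}
    for receiver in receivers:        # walk each receiver back to the source,
        node = receiver               # recording the forwarding hop at each switch
        while parent[node] is not None:
            rules.setdefault(parent[node], set()).add(node)
            node = parent[node]
    return rules

# Hypothetical four-switch topology: s1 -- s2, s1 -- s3, s2 -- s4, s3 -- s4.
adj = {"s1": ["s2", "s3"], "s2": ["s1", "s4"],
       "s3": ["s1", "s4"], "s4": ["s2", "s3"]}
rules = shortest_path_tree(adj, "s1", ["s2", "s4"])
print(rules)  # s1 forwards to s2; s2 forwards on to s4
```

Each entry maps a switch to the set of neighbours it must replicate the multicast stream to; translating such a map into actual OpenFlow group-table entries is the controller's remaining job.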
- ItemOpen AccessSpectral efficiency optimization with channel state information of a massive MIMO System(2022) Chingore, Paul Chakanetsa; Mwangama, JoyceThe 5G network is expected to provide high data rate transmissions at very low latencies. To meet these high data rates, the exploration of the under-utilized millimetre-wave (mm-Wave) frequency spectrum for future broadband cellular communication networks is a focal point. Mm-Wave communication motivates the utilization of massive MIMO. However, there are limitations in the use of massive MIMO, since large-scale antenna arrays carry a high cost, as does the high power consumption of the many Radio Frequency (RF) chains. This is a major drawback in the adoption of fully digital precoding in large-array systems. This research focuses on reducing the number of RF chains while using a fixed, large antenna array for spatial multiplexing gains. A hybrid precoding architecture is proposed for mm-Wave systems with imperfect channel state information. Many wireless communication operations can be formulated as nonconvex, non-smooth optimization problems. Effective algorithms for these problems are often lacking, especially when the optimization variables are non-linear and coupled in nonconvex constraints. Moreover, it is nearly impossible to have perfect channel state information (CSI) in a wireless system. To optimize spectral efficiency under imperfect CSI, an algorithm called penalty dual decomposition (PDD) is proposed for these problems. PDD is a double-loop iterative algorithm with guaranteed convergence to a Karush-Kuhn-Tucker (KKT) solution of the hybrid precoding problem under a mild assumption. The KKT solution supports multi-stream transmission with few RF chains. Simulation results reveal that the proposed PDD algorithm achieves better spectral efficiency than MAP and OMP, even with few RF chains.
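The double-loop structure of PDD described above can be written schematically. The dissertation's actual hybrid-precoding objective and constraints differ; the following is a generic sketch of penalty dual decomposition applied to an equality-constrained problem.

```latex
% Generic PDD sketch (not the dissertation's exact formulation):
% minimize f(x) over x in X subject to the coupling constraint h(x) = 0.
\min_{x \in \mathcal{X}} \; f(x) \quad \text{s.t.} \quad h(x) = 0

% Augmented Lagrangian with dual variable \lambda and penalty parameter \rho:
\mathcal{L}_{\rho}(x, \lambda)
  = f(x) + \lambda^{\mathsf{T}} h(x)
  + \tfrac{1}{2\rho} \lVert h(x) \rVert^{2}

% Inner loop: (approximately) minimize \mathcal{L}_{\rho} over x,
% typically by block-coordinate updates over the coupled variables.
% Outer loop: if \lVert h(x) \rVert has decreased sufficiently, update the
% dual, \lambda \leftarrow \lambda + h(x)/\rho; otherwise shrink \rho to
% tighten the penalty. Under mild assumptions the iterates converge to a
% KKT point of the original problem.
```

The inner loop does the heavy lifting on the nonconvex coupled variables, while the outer dual/penalty updates drive the constraint violation to zero, which is what yields the KKT convergence guarantee mentioned in the abstract.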
- ItemOpen AccessTime Sensitive Networking for Wi-Fi Based Wireless Industrial Environments(2021) Kinabo, Arnold Baraka Doste; Mwangama, Joyce; Lysko, AlbertIn production industries, mission-critical assignments require networks characterised by deterministic low latency, dedicated bandwidth resources, and, chiefly, reliability. Several fieldbus technologies are specially placed for this; their commonality is that they run on standard Ethernet. The relatively new Time Sensitive Networking (TSN) is among these technologies. It is a set of Ethernet standards that guarantees determinism for real-time use cases. TSN sets itself apart in that it is vendor-agnostic, and so it promotes interoperability among standard-conformant devices. Being based on Ethernet, even TSN is plagued by the downsides associated with cabled networks, most importantly limited range and mobility. In this regard, wireless networks are an attractive option, and operating TSN over the wireless medium would be an opportune venture. Previous works have tried to address how this can be done, but as yet it remains an open problem. The issue is that most wireless networks are not optimised for determinism; most lack the scheduling, synchronisation and other capabilities that timing-stringent applications require. Wi-Fi, for instance, suffers from many issues stemming from randomised medium access and interference, which remove the predictability from its communications. Critical TSN traffic needs special consideration when run alongside other services in current Wi-Fi. The key research question is therefore: can one contend with the problem of transmitting TSN and non-TSN traffic together in the same wireless network? To answer this, the work develops a TSN simulation model that operates in Wi-Fi, whose test results can be studied to aid in analysing wireless TSN.
The prototype model runs in a simulation environment and was developed by reusing and modifying the existing wireless architecture to support TSN traffic. Several iterative experiments revealed that although the current generation of Wi-Fi can support TSN traffic, it does so inefficiently. Even with no interference, the TSN traffic experiences low losses only when the network capacity utilisation is very low, below a small percentage value. Considering the typically low bandwidth demands of many TSN applications, this inefficient operation may still be sufficient for operating TSN over existing Wi-Fi networks. For more robust and general applications, Wi-Fi requires further enhancements to its mode of operation in order to support prioritisation of TSN traffic and to cope more reliably with higher loads.
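The non-determinism attributed above to Wi-Fi's randomised medium access can be illustrated with 802.11-style binary exponential backoff: the contention window doubles after each failed transmission attempt, so the possible delay of a nominally deterministic TSN frame grows rapidly under load. The CWmin/CWmax values and 9 µs slot below follow common 802.11 OFDM PHY parameters but are assumptions here, not figures from the dissertation.

```python
# Sketch: worst-case random backoff under 802.11-style binary exponential
# backoff. CW_MIN/CW_MAX and the 9 us slot are common OFDM PHY values,
# assumed here for illustration.

CW_MIN, CW_MAX = 15, 1023   # contention window bounds, in slots
SLOT_US = 9                 # slot time in microseconds

def contention_window(retries):
    """Contention window size after `retries` failed transmission attempts."""
    return min((CW_MIN + 1) * 2 ** retries - 1, CW_MAX)

def worst_case_backoff_us(retries):
    """Upper bound on one attempt's random backoff, in microseconds."""
    return contention_window(retries) * SLOT_US

for r in range(5):
    print(r, contention_window(r), worst_case_backoff_us(r))
# The window is drawn uniformly from [0, CW], so after only a few
# collisions the possible backoff already spans milliseconds -- far
# beyond what a tight time-triggered TSN schedule can tolerate.
```

This is exactly why TSN traffic sees low losses only at very low utilisation: as the channel fills, collisions and the resulting window doubling inject unbounded jitter that no per-frame schedule can absorb.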
- ItemOpen AccessUsing machine learning techniques in developing an autonomous network orchestration scheme in 5g networks(2020) Mohamad, Anfar Mohamad Rimas; Mwangama, JoyceNetwork orchestrators are the brains of 5G networks. The orchestrator is responsible for the orchestration and management of the Network Function Virtualisation Infrastructure (NFVI), understanding network services on the NFVI and software resources. The International Telecommunication Union (ITU) has categorized three main 5G network services for orchestration: Enhanced Mobile Broadband (eMBB), Ultra-Reliable and Low-Latency Communications (uRLLC) and Massive Machine-Type Communications (mMTC). This categorization is achieved in 5G by a method called network slicing. In the future, a device connecting to a 5G network will be placed in one of these three slices (eMBB, uRLLC or mMTC) based on its network characteristics. The focus of this dissertation is the eMBB slice. Day-to-day internet users will normally use the eMBB slice; daily internet activities such as watching YouTube videos, making Skype video calls, calling via WhatsApp, downloading files and listening to online radio will all happen via the eMBB slice. However, this approach neglects the importance of the web application a user is using within the eMBB slice. For example, a family doctor may give first-aid assistance via a Skype video call in an emergency situation; the doctor's call, in this case, should be prioritized over other normal daily web tasks. Thus there is a requirement to prioritize usual web-tasks in certain scenarios, which the eMBB slice neglects. It is possible to detect websites or web applications with modern-day technologies. Hence, these website detection algorithms can be improved to detect web-tasks (Skype voice calling, Skype video calling, etc.) and provide a separate slice within the eMBB slice upon the doctor's request.
The goal of this study is to identify web-tasks by capturing the network data packets flowing in and out of the system and performing an application-based classification using machine learning techniques. After classification, the result is fed to the 5G Orchestrator or to the 5G Core. The Orchestrator then allocates a number of Network Function Virtual Machines to provide the best quality of service (QoS) based on the generated slice information. In this research, a Website Task Finger Printing (WTFP) algorithm is introduced to identify web traffic (such as identifying whether a user is watching a video on Facebook, rather than just detecting the website they are viewing). Possible applications of the developed algorithm range from 5G ultra-slicing to network security. This study delves deeper into Website Finger Printing (WFP): traditional papers only describe how to identify websites using statistical analysis, whereas this study shows how to identify what task a user is performing rather than just which website they are currently visiting. The identifier captures the inbound and outbound data and uses the packet length histogram as the main feature. Application-based features are then extracted using heuristic logical filters to prepare a feature vector for the Machine Learning (ML) algorithm. A trained Multi-Layer Perceptron (MLP) based Artificial Neural Network (ANN) was selected as the classifier after comparing results with a Support Vector Machine (SVM), a Recurrent Neural Network (RNN) and a Convolutional Neural Network (CNN). The MLP algorithm was able to classify website tasks with 95.50% accuracy. After classification, the classified class is sent to the 5G Orchestrator, which refers to a programmed Network Service Descriptor and, based on our specifications, generates a new slice using the Network Slice Engine (NSE). It then monitors the present bitrate of the slice using Zabbix.
Next, the Orchestrator either increases or decreases the bitrate to give the optimum Quality of Service (QoS) using the Auto Scaling Engine (ASE). The algorithm was also used to generate specific QoS using the Open5G Core. This study therefore shows that it is possible to allocate slices based on web-tasks in a 5G mobile network, and proposes further investigation to enable web-task-based slicing for future mobile networks.
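The packet-length-histogram feature step described above can be sketched as follows. The bin width, separate inbound/outbound histograms, and normalisation are illustrative assumptions; the study's full pipeline additionally applies heuristic application-based filters before the vector reaches the MLP classifier.

```python
# Sketch: normalised packet-length histograms as a traffic feature vector.
# Bin width (150 bytes) and the in/out split are illustrative assumptions;
# the dissertation's pipeline also adds heuristic application-based features.

def length_histogram(packet_lengths, bin_width=150, max_len=1500):
    """Return a normalised histogram of packet lengths as a feature list."""
    n_bins = max_len // bin_width
    counts = [0] * n_bins
    for length in packet_lengths:
        idx = min(length // bin_width, n_bins - 1)  # clamp oversize packets
        counts[idx] += 1
    total = sum(counts) or 1          # avoid division by zero on empty flows
    return [c / total for c in counts]

def flow_features(inbound, outbound):
    """Concatenate in/out histograms into one MLP-ready feature vector."""
    return length_histogram(inbound) + length_histogram(outbound)

# A hypothetical video-call-like flow: large inbound video packets,
# many small outbound audio packets.
vec = flow_features(inbound=[1200, 1400, 1350, 1300],
                    outbound=[160, 180, 150, 170])
print(len(vec), vec)  # 20-dimensional vector, mass concentrated per direction
```

A classifier such as an MLP would be trained on many such labelled vectors; the directional split matters because tasks like video calling produce a very asymmetric size distribution between the two directions.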