Accurate Throughput Prediction (TP) is a cornerstone of reliable adaptive streaming over challenging media, such as cellular networks. Challenged by the highly dynamic wireless medium, recent state-of-the-art solutions adopt Deep Learning (DL) models to improve TP accuracy. However, these models perform poorly in critical, rare network conditions, leading to degraded user Quality of Experience (QoE). This shortfall stems from relying solely on the model's learning capacity, without integrating system knowledge into the design. In this paper, we propose MATURE, a novel multi-stage DL-based TP model designed to capture the network operating context and thereby improve prediction accuracy and user experience. MATURE first characterises the operating context and then estimates the network throughput. Our performance evaluation shows that MATURE improves the average user QoE by 4% to 90% in critical network conditions when compared to the state-of-the-art.
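The two-stage idea, characterising the operating context before predicting throughput, can be pictured with a toy sketch. All names, thresholds, and rules below are illustrative assumptions, not MATURE's actual design:

```python
# Hypothetical two-stage predictor: characterise the operating context,
# then estimate throughput with a context-specific rule.
from statistics import mean, stdev

def characterise_context(window, cv_threshold=0.3):
    """Stage 1: label the recent throughput window as 'stable' or 'volatile'
    using the coefficient of variation (stdev / mean). Threshold is made up."""
    m = mean(window)
    cv = stdev(window) / m if m > 0 else float("inf")
    return "volatile" if cv > cv_threshold else "stable"

def predict_throughput(window):
    """Stage 2: context-aware estimate. Stable links use the window mean;
    volatile links fall back to a conservative low estimate (the minimum),
    mimicking the caution needed in critical network conditions."""
    context = characterise_context(window)
    if context == "stable":
        return context, mean(window)
    return context, min(window)

# Example: a stable link vs. a link with a sudden throughput drop (Mbps).
print(predict_throughput([10.0, 10.5, 9.8, 10.2]))  # ('stable', 10.125)
print(predict_throughput([10.0, 9.5, 2.0, 1.5]))    # ('volatile', 1.5)
```

The point of the sketch is the structure: the second stage conditions its behaviour on the first stage's output, rather than a single model handling all regimes.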
The continuous rise of multimedia entertainment has led to an increased demand for delivering an outstanding user experience of multimedia content. However, modelling user-perceived Quality of Experience (QoE) is a challenging task, motivating efforts towards better understanding and measurement of user-perceived QoE. To evaluate user QoE, researchers conduct subjective quality assessments, where people watch and grade videos, and objective quality assessments, in which videos are graded using one or more objective metrics. While there is a plethora of video databases available for subjective and objective video quality assessment, their videos are artificially infused with various temporal and spatial impairments: startup delay, bitrate changes, and stalls due to rebuffering events. A more credible quality assessment requires reproducing original user experiences while watching different types of streams on networks of different types and quality. To aid current efforts in bridging the gap between objective video QoE metrics and user experience, we developed DashReStreamer, an open-source framework for re-creating adaptively streamed video in real networks. The framework takes as input a video log captured by the client in a non-regulated setting, along with an .mpd file or a YouTube URL, and outputs a video sequence that reproduces all the events recorded in the video log. DashReStreamer also calculates popular video quality metrics such as PSNR, SSIM, MS-SSIM and VMAF, and allows creating impaired video sequences from the popular streaming platform YouTube. To demonstrate the framework, we created a database of 332 realistic video clips, based on video logs collected from real mobile and wireless networks.
Every video clip is supplemented with the bandwidth trace and video logs used in its creation, as well as reports of the objective metric calculations. In addition to the dataset, we performed a subjective evaluation of the video content, assessing its effect on overall user QoE. We believe that this dataset and framework will allow the research community to better understand the impact of video QoE dynamics.
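The client log that drives the re-creation can be summarised into the impairments the output clip must reproduce. The sketch below assumes a hypothetical log layout (field names are illustrative; DashReStreamer's actual format may differ):

```python
# Hypothetical client log: one entry per segment with the bitrate played
# and any stall (rebuffering) before it.
log = [
    {"segment": 1, "bitrate_kbps": 1500, "stall_ms": 800},   # startup delay
    {"segment": 2, "bitrate_kbps": 1500, "stall_ms": 0},
    {"segment": 3, "bitrate_kbps": 750,  "stall_ms": 0},     # downswitch
    {"segment": 4, "bitrate_kbps": 750,  "stall_ms": 1200},  # rebuffering
    {"segment": 5, "bitrate_kbps": 1500, "stall_ms": 0},     # upswitch
]

def summarise_session(entries):
    """Aggregate the impairments a re-created clip must reproduce:
    total stall time and the number of quality switches."""
    total_stall_ms = sum(e["stall_ms"] for e in entries)
    switches = sum(
        1 for prev, cur in zip(entries, entries[1:])
        if prev["bitrate_kbps"] != cur["bitrate_kbps"]
    )
    return {"total_stall_ms": total_stall_ms, "quality_switches": switches}

print(summarise_session(log))  # {'total_stall_ms': 2000, 'quality_switches': 2}
```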
AI-driven data analysis methods have garnered attention in enhancing the performance of wireless networks. One such application is the prediction of downlink throughput in mobile cellular networks. Accurate throughput predictions have demonstrated significant application benefits, such as improving the quality of experience in adaptive video streaming. However, the high degree of variability in cellular link behaviour, coupled with device mobility and diverse traffic demands, presents a complex problem. Numerous published studies have explored the application of machine learning to address this problem, displaying potential when trained and evaluated with traffic traces collected from operational networks. The focus of this paper is an empirical investigation of machine learning-based throughput prediction that runs in real-time on a smartphone, and its evaluation with video streaming in a range of real-world cellular network settings. We report on a number of key challenges that arise when performing prediction “in the wild”, dealing with practical issues one encounters with online data (not traces) and the limitations of real smartphones. These include data sampling, distribution shift, and data labelling. We describe our current solutions to these issues and quantify their efficacy, drawing lessons that we believe will be valuable to network practitioners planning to use such methodologies in operational cellular networks.
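On-device throughput predictors are typically benchmarked against simple history-based baselines before the cost of a learned model is justified. The following is a generic exponentially weighted moving average (EWMA) baseline, not the paper's model:

```python
# Generic EWMA throughput baseline: cheap enough to run on a smartphone,
# and a common yardstick for learned predictors.
class EwmaPredictor:
    """Exponentially weighted moving average over observed throughput
    samples; alpha controls how quickly old samples are forgotten."""
    def __init__(self, alpha=0.25):
        self.alpha = alpha
        self.estimate = None

    def update(self, sample_mbps):
        if self.estimate is None:
            self.estimate = sample_mbps  # seed with the first observation
        else:
            self.estimate = (self.alpha * sample_mbps
                             + (1 - self.alpha) * self.estimate)
        return self.estimate

p = EwmaPredictor(alpha=0.5)
for s in [8.0, 6.0, 10.0]:
    p.update(s)          # estimate moves 8.0 -> 7.0 -> 8.5
print(p.estimate)        # 8.5
```

Note that such a baseline sidesteps the labelling and distribution-shift issues the paper raises only because it carries no trained parameters; a learned model must handle them explicitly.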
Different industries are observing the positive impact of 360° video on the user experience. However, the performance of VR systems continues to fall short of customer expectations, so further research into the various design elements of VR streaming systems is required. This study introduces a software tool that simplifies the encoding of DASH VR videos. In addition, we developed a dataset composed of 9 VR videos encoded with seven tiling configurations, four segment durations, and up to four different bitrates. A corresponding tile-size dataset is also provided, which can be used to power network simulations or trace-driven emulations. Using the presented dataset, we analysed the traffic load of the various videos and encoding setups. Our research indicates that, while smaller tile sizes reduce traffic load, video decoding may require more computational power.
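A tile-size dataset can drive trace-based emulation directly: the traffic for one segment is the sum of the sizes of the tiles the client actually requests for its viewport. A minimal sketch, with made-up tile sizes rather than values from the dataset:

```python
# Illustrative tile-size table for one segment at one quality level
# (tile_id -> encoded size in bytes); numbers are invented.
tile_sizes_bytes = {
    0: 120_000, 1: 95_000, 2: 110_000,
    3: 80_000,  4: 130_000, 5: 70_000,
}

def segment_traffic(requested_tiles, sizes):
    """Bytes downloaded for one segment given the tiles in the viewport."""
    return sum(sizes[t] for t in requested_tiles)

# A viewport covering tiles 0, 1 and 4:
print(segment_traffic([0, 1, 4], tile_sizes_bytes))  # 345000 bytes
```

Replaying such per-segment byte counts over an emulated link is what "trace-driven emulation" amounts to here.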
Accurate prediction of cellular link performance represents a cornerstone for many adaptive applications, such as video streaming. State-of-the-art solutions focus on distributed device-based methods relying on historical throughput and PHY metrics obtained through device APIs. In this paper, we study the impact of centralised solutions that integrate information collected from other network nodes. Specifically, we develop and compare machine learning inference engines for both distributed and centralised approaches to predict LTE physical resource blocks (RBs) using ns-3 simulations. Our results illustrate that network load is the most important feature in the centralised approaches, halving the RB prediction error to 14% compared to 28% for the distributed case.
We consider the problem of maximum-likelihood estimation in linear models represented by factor graphs and solved via the Gaussian belief propagation algorithm. Motivated by massive Internet of Things (IoT) networks and edge computing, we set the above problem in a clustered scenario, where the factor graph is divided into clusters assigned for processing in a distributed fashion across a number of edge computing nodes. For these scenarios, we show that an alternating Gaussian belief propagation (AGBP) algorithm, which alternates between inter- and intra-cluster iterations, demonstrates superior convergence properties compared to existing solutions in the literature. We present a comprehensive framework and introduce appropriate metrics to analyze the AGBP algorithm across a wide range of linear models characterized by symmetric and nonsymmetric, square and rectangular matrices. We extend the analysis to the case of dynamic linear models by introducing the dynamic arrival of new data over time. Using a combination of analytical and extensive numerical results, we show the efficiency and scalability of the AGBP algorithm, making it a suitable solution for large-scale inference in massive IoT networks.
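The alternating cluster idea can be pictured, in much simplified form, on a linear Gaussian model Ax = b: each cluster solves its own block exactly (the intra-cluster step), then shares its current estimate with the other clusters (the inter-cluster step). The block-Jacobi style sketch below is only an illustration of that alternation, not the AGBP algorithm itself:

```python
# Two clusters, each owning two variables of a diagonally dominant
# 4x4 system A x = b; clusters alternate local solves with estimate exchange.

def solve2(M, v):
    """Exact solve of a 2x2 system (the 'intra-cluster' computation)."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [( M[1][1] * v[0] - M[0][1] * v[1]) / det,
            (-M[1][0] * v[0] + M[0][0] * v[1]) / det]

A = [[4.0, 1.0, 0.5, 0.0],
     [1.0, 4.0, 0.0, 0.5],
     [0.5, 0.0, 4.0, 1.0],
     [0.0, 0.5, 1.0, 4.0]]
b = [6.0, 7.0, 8.0, 9.0]

x = [0.0, 0.0, 0.0, 0.0]
for _ in range(50):  # alternate intra-/inter-cluster updates
    # cluster 1 solves for (x0, x1) using the latest (x2, x3) it received
    r1 = [b[0] - A[0][2] * x[2] - A[0][3] * x[3],
          b[1] - A[1][2] * x[2] - A[1][3] * x[3]]
    # cluster 2 solves for (x2, x3) using the latest (x0, x1)
    r2 = [b[2] - A[2][0] * x[0] - A[2][1] * x[1],
          b[3] - A[3][0] * x[0] - A[3][1] * x[1]]
    x[0], x[1] = solve2([[A[0][0], A[0][1]], [A[1][0], A[1][1]]], r1)
    x[2], x[3] = solve2([[A[2][2], A[2][3]], [A[3][2], A[3][3]]], r2)

residual = max(abs(sum(A[i][j] * x[j] for j in range(4)) - b[i])
               for i in range(4))
print(residual < 1e-9)  # True: the alternation converges to the exact solution
```

AGBP replaces the exact block solves with Gaussian belief propagation messages, but the convergence question, how the inter-cluster coupling affects the alternation, is the same in spirit.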
Fifth-Generation (5G) networks have the potential to accelerate the power system transition to a flexible, softwarized, data-driven, and intelligent grid. With their evolving support for Machine Learning (ML)/Artificial Intelligence (AI) functions, 5G networks are expected to enable novel data-centric Smart Grid (SG) services. In this paper, we explore how data-driven SG services could be integrated with ML/AI-enabled 5G networks in a symbiotic relationship. We focus on the State Estimation (SE) function as a key element of the energy management system and address two main questions. Firstly, in a tutorial fashion, we present an overview of how distributed SE can be integrated with the elements of the 5G core network and radio access network architecture. Secondly, we present and compare two powerful distributed SE methods based on: i) graphical models and belief propagation, and ii) graph neural networks. We discuss their performance and capability to support near real-time distributed SE via a 5G network, taking communication delays into account.
Evolving Internet applications, such as immersive multimedia and Industry 4.0, exhibit stringent delay, loss, and rate requirements. Realizing these requirements would be difficult without advanced dynamic traffic management solutions that leverage state-of-the-art technologies, such as Software-Defined Networking (SDN). Mininet represents a common choice for evaluating SDN solutions on a single machine. However, Mininet lacks the ability to emulate links with multiple queues, which is needed to provide differentiated service to different traffic streams. Additionally, scalable emulation in Mininet is not possible without lightweight application emulators. In this paper, we introduce two tools, namely QLink and SPEED. QLink extends the Mininet API to emulate links with multiple queues that differentiate between traffic streams. SPEED is a lightweight web traffic emulation tool that enables scalable HTTP traffic emulation in Mininet. Our performance evaluation shows that SPEED enables scalable emulation of HTTP traffic in Mininet. Additionally, we demonstrate the benefits of using QLink to isolate three different applications (voice, web, and video) sharing a network bottleneck among numerous users.
Multimedia streaming over the Internet (live and on demand) is a cornerstone of the modern Internet, carrying more than 60% of all traffic. With such high demand, delivering an outstanding user experience is a crucial and challenging task. To evaluate user Quality of Experience (QoE), many researchers deploy subjective quality assessments where participants watch and rate videos artificially infused with various temporal and spatial impairments. To aid current efforts in bridging the gap between objective video QoE metrics and user experience, we developed DashReStreamer, an open-source framework for re-creating adaptively streamed video in real networks. DashReStreamer utilises a log created by an HTTP adaptive streaming (HAS) algorithm run in an uncontrolled environment (i.e., wired or wireless networks), encoding visual changes and stall events in one video file. These videos are suitable for subjective QoE evaluation mimicking realistic network conditions. To supplement DashReStreamer, we re-create 234 realistic video clips based on video logs collected from real mobile and wireless networks. In addition, our dataset contains the video logs with all decisions made by the HAS algorithm and the network bandwidth profiles illustrating throughput distribution. We believe this dataset and framework will support other researchers in their pursuit of understanding the impact of video QoE dynamics.
Estimating the system state is a non-trivial task given a large set of measurements, fuelling ongoing research into efficient, scalable and fast state estimation (SE) algorithms. Centralised SE becomes impractical for large-scale systems, particularly if the measurements are spatially distributed across wide geographical areas. Dividing large-scale systems into clusters (i.e., subsystems) and distributing the computation across clusters overcomes the constraints of a centralised approach; in such scenarios, distributed SE methods bring many advantages over centralised ones. In this paper, we propose a novel distributed method to solve the linear SE model by combining local solutions, obtained by applying weighted least-squares (WLS) to the given subsystems, with the Gaussian belief propagation (GBP) algorithm. The proposed method is based on a factor graph operating without a central coordinator, where subsystems exchange only “beliefs”, thus preserving the privacy of the measurement data and state variables. Further, we propose an approach to speed up the evaluation of the local solutions upon the arrival of new information at a subsystem. Finally, the proposed algorithm reaches the accuracy of the centralised WLS solution in a few iterations and outperforms the vanilla GBP algorithm with respect to its convergence properties.
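The privacy-preserving belief exchange can be illustrated in the simplest possible case: a single shared state variable measured by two subsystems. Each subsystem's local WLS estimate is a Gaussian "belief" (mean, precision), and fusing beliefs is precision-weighted averaging, which is exactly the product-of-Gaussians rule GBP messages implement. This toy sketch is a stand-in for the paper's full factor-graph method:

```python
# Only (mean, precision) pairs cross the subsystem boundary,
# so the raw measurements stay private to each subsystem.

def local_wls(measurements, variances):
    """Local WLS for one state variable: precision-weighted mean of the
    subsystem's own measurements, returned as a Gaussian belief."""
    precisions = [1.0 / v for v in variances]
    mean = sum(p * z for p, z in zip(precisions, measurements)) / sum(precisions)
    return mean, sum(precisions)   # belief: (mean, total precision)

def fuse(beliefs):
    """Combine Gaussian beliefs from all subsystems (product of Gaussians)."""
    total_precision = sum(p for _, p in beliefs)
    mean = sum(m * p for m, p in beliefs) / total_precision
    return mean, total_precision

b1 = local_wls([1.02, 0.98], [0.01, 0.01])   # subsystem 1
b2 = local_wls([1.10], [0.05])               # subsystem 2 (noisier sensor)
mean, precision = fuse([b1, b2])

# The fused estimate matches centralised WLS over all three measurements:
c_mean, _ = local_wls([1.02, 0.98, 1.10], [0.01, 0.01, 0.05])
print(abs(mean - c_mean) < 1e-12)  # True
```

With many interconnected state variables this single fusion step becomes iterative message passing on the factor graph, which is where the convergence properties discussed in the abstract come in.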