In this short paper, we present goDASH, an infrastructure for headless streaming of HTTP adaptive streaming (HAS) video content, implemented in golang, an open-source programming language supported by Google. goDASH's main functionality is the ability to stream HAS content without decoding the actual video (a headless player). This results in low memory requirements and the ability to run multiple players in a large-scale evaluation setup. goDASH comes complete with numerous state-of-the-art HAS algorithms, and its golang implementation simplifies the addition of new adaptation algorithms and functions. goDASH supports two transport protocols: Transmission Control Protocol (TCP) and Quick UDP Internet Connections (QUIC). QUIC is a relatively new protocol that promises performance improvements over the widely used TCP. We believe that goDASH is the first emulation-based HAS player that supports QUIC. The main limitation of using the QUIC protocol is the need for a security certificate setup on both ends (client and server), as QUIC demands an encrypted connection. This limitation is eased by providing our own testbed framework, known as goDASHbed. This framework uses a virtual environment to serve video content locally (which allows setting security certificates) through the Mininet virtual emulation tool. Within Mininet, goDASH can be used in conjunction with other traffic generators.
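As a rough illustration only, the sketch below shows the kind of headless fetch loop such a player runs: download a segment over HTTP, discard the payload without decoding, measure throughput, and pick the next quality. goDASH itself is written in Go with far richer logic (buffer model, QUIC support, pluggable algorithms); the URL template and bitrate ladder here are hypothetical.

```python
# Minimal sketch of a headless HAS fetch loop; URL template and ladder are hypothetical.
import time
import urllib.request

BITRATES_KBPS = [235, 750, 1750, 4300]                               # hypothetical ladder
SEGMENT_URL = "https://server.example/video/{kbps}k/seg_{idx}.m4s"   # hypothetical layout

def fetch_segment(url):
    """Download a segment, discard the payload (no decoding), return (bytes, seconds)."""
    start = time.time()
    with urllib.request.urlopen(url) as resp:
        data = resp.read()                        # bytes dropped after measuring
    return len(data), time.time() - start

def next_quality(throughput_kbps):
    """Pick the highest bitrate below the measured throughput (simple rate-based rule)."""
    feasible = [b for b in BITRATES_KBPS if b <= throughput_kbps]
    return max(feasible) if feasible else BITRATES_KBPS[0]

quality = BITRATES_KBPS[0]
for idx in range(1, 11):                          # stream ten segments
    size, elapsed = fetch_segment(SEGMENT_URL.format(kbps=quality, idx=idx))
    throughput_kbps = size * 8 / 1000 / max(elapsed, 1e-6)
    quality = next_quality(throughput_kbps)
    print(f"segment {idx}: {throughput_kbps:.0f} kbit/s measured, next quality {quality} kbit/s")
```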
The highly dynamic wireless communication environment poses a challenge for many applications (e.g., adaptive multimedia streaming services). Providing accurate throughput prediction (TP) can significantly improve the performance of these applications. The scheduling algorithms in cellular networks consider various PHY metrics (e.g., the channel quality indicator, CQI) and throughput history when assigning resources to each user. This article explains how AI can be leveraged for accurate TP in cellular networks using PHY and application layer metrics. We present key architectural components and implementation options, illustrating their advantages and limitations. We also highlight key design choices and investigate their impact on prediction accuracy using real data. We believe this is the first study that examines the impact of integrating network-level data and applying a deep learning technique (on PHY and application data) for TP in cellular systems. Using video streaming as a use case, we illustrate how accurate TP improves the end user's QoE. Furthermore, we identify open questions and research challenges in the area of AI-driven TP. Finally, we report on lessons learned and provide conclusions that we believe will be useful to network practitioners seeking to apply AI.
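To make the idea concrete, a minimal sketch of learning-based TP from combined PHY and application metrics follows. The features (CQI, RSRP, past throughput), the synthetic target, and the toy random forest are illustrative assumptions, not the article's actual models or data.

```python
# Minimal sketch of throughput prediction from PHY + application metrics on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
cqi = rng.integers(1, 16, n)                  # channel quality indicator (assumed scale)
rsrp = rng.uniform(-120, -70, n)              # reference signal received power [dBm]
past_tput = rng.uniform(0, 50, n)             # application-level throughput history [Mbit/s]
# Synthetic target for illustration: future throughput loosely tied to the features.
future_tput = 0.5 * past_tput + 1.5 * cqi + 0.05 * (rsrp + 120) + rng.normal(0, 2, n)

X = np.column_stack([cqi, rsrp, past_tput])
X_tr, X_te, y_tr, y_te = train_test_split(X, future_tput, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
mape = np.mean(np.abs(pred - y_te) / np.maximum(y_te, 1e-6)) * 100
print(f"mean absolute percentage error on synthetic data: {mape:.1f}%")
```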
In this paper, we present a 5G trace dataset collected from a major Irish mobile operator. The dataset is generated from two mobility patterns (static and car) and across two application patterns (video streaming and file download). The dataset is composed of client-side cellular key performance indicators (KPIs) that include channel-related metrics, context-related metrics, cell-related metrics and throughput information. These metrics are generated from a well-known non-rooted Android network monitoring application, G-NetTrack Pro. To the best of our knowledge, this is the first publicly available dataset that contains throughput, channel and context information for 5G networks. To supplement our real-time 5G production network dataset, we also provide a large-scale multi-cell 5G simulation framework for ns-3. The availability of the 5G mmWave module for the ns-3 network simulator provides an opportunity to improve our understanding of the dynamics experienced by adaptive clients in 5G multi-cell wireless scenarios. The purpose of our framework is to provide additional information (such as competing metrics for users connected to the same cell), thus providing otherwise unavailable information about the eNodeB environment and scheduling principle to the end user. Our framework permits other researchers to investigate this interaction through the generation of their own synthetic datasets.
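For illustration, the short sketch below loads a G-NetTrack Pro style trace with pandas and summarises the KPI columns. The file name and column names are assumptions and may differ from the released dataset files.

```python
# Minimal sketch of inspecting a G-NetTrack Pro style trace; names are assumptions.
import pandas as pd

trace = pd.read_csv("5G_trace_static_streaming.csv")        # hypothetical file name
kpi_columns = ["Timestamp", "RSRP", "RSRQ", "SNR", "CQI",    # channel-related (assumed)
               "Speed", "CellID",                            # context/cell-related (assumed)
               "DL_bitrate", "UL_bitrate"]                   # throughput in kbit/s (assumed)
available = [c for c in kpi_columns if c in trace.columns]
print(trace[available].describe())                           # quick per-KPI summary
```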
Title: Improving video streaming experience through network measurements and analysis
Author(s): Raca, Darijo
Publication date: 2019-09-10
Original citation: Raca, D. 2019. Improving video streaming experience through network measurements and analysis. PhD Thesis, University College Cork.
Type of publication: Doctoral thesis
Rights: 2019, Darijo Raca. https://creativecommons.org/licenses/by-nc-nd/4.0/
Item downloaded from: http://hdl.handle.net/10468/9667
The state estimation algorithm estimates the values of the state variables based on the measurement model described as a system of equations. Prior to applying the state estimation algorithm, the existence and uniqueness of the solution of the underlying system of equations is determined through observability analysis. If a unique solution does not exist, the observability analysis defines observable islands and further defines an additional set of equations (measurements) needed to determine a unique solution. For the first time, we utilise factor graphs and the Gaussian belief propagation algorithm to define a novel observability analysis approach. The observable islands and the placement of measurements to restore observability are identified by following the evolution of variances across the iterations of the Gaussian belief propagation algorithm over the factor graph. Due to the sparsity of the underlying power network, the resulting method has linear computational complexity (assuming a constant number of iterations), making it particularly suitable for solving large-scale systems. The method can be flexibly matched to distributed computational resources, allowing for determination of observable islands and observability restoration in a distributed fashion. Finally, we discuss the performance of the proposed observability analysis using power systems whose size ranges between 1354 and 70,000 buses.
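For reference, a minimal sketch of the standard Gaussian belief propagation updates whose variance evolution such a method can track, written for a linear measurement model z_k = sum_j h_kj x_j + v_k with v_k ~ N(0, sigma_k^2); the notation is ours and the paper's exact formulation may differ.

```latex
% Factor-to-variable message (factor f_k to variable x_i):
m_{f_k \to x_i} = \frac{1}{h_{ki}} \Big( z_k - \sum_{j \neq i} h_{kj}\, m_{x_j \to f_k} \Big),
\qquad
v_{f_k \to x_i} = \frac{1}{h_{ki}^{2}} \Big( \sigma_k^{2} + \sum_{j \neq i} h_{kj}^{2}\, v_{x_j \to f_k} \Big).

% Variable-to-factor message precision and the marginal variance of x_i:
\frac{1}{v_{x_i \to f_k}} = \sum_{l \neq k} \frac{1}{v_{f_l \to x_i}},
\qquad
\frac{1}{v_{x_i}} = \sum_{l} \frac{1}{v_{f_l \to x_i}}.

% Variables whose marginal variance v_{x_i} fails to settle across iterations indicate
% parts of the network that are not observable; placing a measurement there restores
% observability.
```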
Today's HTTP adaptive streaming applications are designed to provide high levels of Quality of Experience (QoE) across a wide range of network conditions. The adaptation logic in these applications typically needs an estimate of the future network bandwidth for quality decisions. This estimation, however, is challenging in cellular networks because of the inherent variability of bandwidth and latency due to factors like signal fading, variable load, and user mobility. In this paper, we exploit machine learning (ML) techniques on a range of radio channel metrics and throughput measurements from a commercial cellular network to improve the estimation accuracy and, hence, streaming quality. We propose a novel summarization approach for the raw input data samples. This approach reduces the 90th percentile of absolute prediction error from 54% to 13%. We evaluate our prediction engine in a trace-driven controlled lab environment using a popular Android video player (ExoPlayer) running on a stock mobile device and also validate it in the commercial cellular network. Our results show that the three tested adaptation algorithms register improvement across all QoE metrics when using prediction, with stall reduction of up to 85% and bitrate switching reduction of up to 40%, while maintaining or improving video quality. Finally, prediction improves the video QoE score by up to 33%.
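The sketch below illustrates the general idea of summarising raw per-second radio samples into window-level statistics before feeding a predictor. The window length, feature set, and toy data are assumptions for illustration, not the paper's actual summarisation design.

```python
# Minimal sketch: reduce a sliding window of raw samples to summary features.
import numpy as np

def summarise(window):
    """Reduce a window of raw samples (one row per second) to summary features."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           window[-1]])          # most recent raw sample

history_s, samples = 20, []
rng = np.random.default_rng(1)
raw = rng.normal(size=(600, 4))                  # 10 min of toy [RSRP, RSRQ, SNR, throughput]
for t in range(history_s, len(raw)):
    samples.append(summarise(raw[t - history_s:t]))
X = np.array(samples)                            # model input: one summarised row per step
print(X.shape)
```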
Recent years have witnessed an explosion of multimedia traffic carried over the Internet, with video-on-demand and live streaming being the dominant services. To ensure growth, many streaming providers have invested considerable time and effort to keep pace with ever-increasing user demand for better quality and the elimination of stalls. HTTP adaptive streaming (HAS) algorithms are at the core of every major streaming provider service, and recent years have seen sustained development in HAS algorithms. Currently, to evaluate their proposed solutions, researchers need to implement both a test framework and numerous state-of-the-art algorithms. Often, these frameworks lack flexibility and scalability, covering only a limited set of scenarios. To fill this gap, in this paper we propose DASHbed, a highly customizable real-time framework for testing HAS algorithms in a wireless environment. Due to its low memory requirement, DASHbed offers a means of running large-scale experiments with a hundred competing players. Finally, we supplement the proposed framework with a dataset consisting of results for five HAS algorithms tested across various evaluation scenarios. The dataset showcases the abilities of DASHbed and presents the per-segment adaptation metrics of the generated content (such as switches, buffer level, P.1203.1 values, delivery rate and stall duration), which can be used as a baseline when researchers compare the output of their proposed algorithm against the state-of-the-art algorithms.
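The sketch below shows the kind of Mininet topology a framework like DASHbed sets up, with one server, several clients, and a shared bottleneck link. The link parameters, host names, and topology shape are illustrative assumptions, not DASHbed's actual configuration.

```python
# Minimal sketch of a shared-bottleneck Mininet topology; parameters are assumptions.
from mininet.net import Mininet
from mininet.link import TCLink
from mininet.topo import Topo

class BottleneckTopo(Topo):
    def build(self, n_clients=4):
        server = self.addHost("server")
        switch = self.addSwitch("s1")
        self.addLink(server, switch, cls=TCLink, bw=20, delay="20ms")   # shared bottleneck
        for i in range(n_clients):
            client = self.addHost(f"c{i}")
            self.addLink(client, switch, cls=TCLink, bw=100, delay="2ms")

net = Mininet(topo=BottleneckTopo(), link=TCLink)
net.start()
# ... launch an HTTP server on `server` and headless players on each client here ...
net.stop()
```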
Dynamic adaptive streaming over HTTP (DASH) is widely adopted for video transport by major content providers. However, the inherent high variability in both encoded video and network rates represents a key challenge for designing efficient adaptation algorithms. Accommodating such variability in the adaptation logic design is essential for achieving a high user Quality of Experience (QoE). In this paper, we present ARBITER+ as a novel adaptation algorithm for DASH. ARBITER+ integrates different components that are designed to ensure a high video QoE while accommodating inherent system variabilities. These components include a tunable adaptive target rate estimator, hybrid throughput sampling, controlled switching, and short-term actual video rate tracking. We extensively evaluate the streaming performance using real video and cellular network traces. We show that ARBITER+ components work in harmony to balance temporal and visual QoE aspects. Additionally, we show that ARBITER+ enjoys a noticeable QoE margin over state-of-the-art adaptation approaches in various operating conditions. Furthermore, we show that ARBITER+ also achieves the best application-level fairness when a group of mobile video clients shares a cellular base station.
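As a toy illustration of two of the named ideas, hybrid throughput sampling and controlled switching, the sketch below blends a long-term and a recent throughput sample and caps how far a single decision can jump. The weights, margins, and switching limit are illustrative assumptions, not ARBITER+'s actual design.

```python
# Minimal sketch of hybrid throughput sampling and controlled switching; numbers are toy values.
def hybrid_estimate(chunk_rates, alpha=0.7):
    """Blend a long-term harmonic mean with the most recent chunk throughput."""
    harmonic = len(chunk_rates) / sum(1.0 / r for r in chunk_rates)
    return alpha * harmonic + (1 - alpha) * chunk_rates[-1]

def controlled_switch(current_idx, target_idx, max_step=1):
    """Limit how many quality levels a single decision may jump."""
    step = max(-max_step, min(max_step, target_idx - current_idx))
    return current_idx + step

bitrates = [235, 750, 1750, 4300]                     # kbit/s, hypothetical ladder
rates = [3200, 2800, 1500, 1900]                      # recent chunk throughputs (kbit/s)
estimate = hybrid_estimate(rates)
target = max(i for i, b in enumerate(bitrates) if b <= estimate)
print(bitrates[controlled_switch(current_idx=0, target_idx=target)])
```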
Streaming over the wireless channel is challenging due to rapid fluctuations in available throughput. Encouraged by recent advances in cellular throughput prediction based on radio link metrics, we examine the impact on Quality of Experience (QoE) when using prediction within existing algorithms based on the DASH standard. By design, DASH algorithms estimate available throughput at the application level from chunk rates and then apply some averaging function. We investigate alternatives for modifying these algorithms, either by providing the algorithms with direct predictions in place of estimates or by feeding predictions in place of measurement samples. In addition, we explore different prediction horizons ranging from one to three chunk durations. Furthermore, we add different levels of error to ideal prediction values to analyse the deterioration in user QoE as a function of average error. We find that by applying accurate prediction to three algorithms, user QoE can improve by up to 55% depending on the algorithm in use. Furthermore, a longer horizon positively affects QoE metrics. Accurate predictions have the most significant impact on stall performance, eliminating stalls completely. Prediction also improves switching behaviour significantly, and longer prediction horizons enable a client to promptly reduce quality and avoid stalls when the throughput drops for a period long enough to deplete the buffer. For all algorithms, a 3-chunk horizon strikes the best balance between different QoE metrics and, as a result, achieves the highest user QoE. While error-induced predictions significantly lower user QoE in certain situations, on average they provide a 15% improvement over DASH algorithms without any prediction.
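The sketch below contrasts the two integration options described: handing the algorithm a direct prediction instead of its own estimate, or feeding the prediction in as an extra measurement sample. The margin, harmonic-mean estimator, and numbers are illustrative assumptions rather than the evaluated algorithms' actual logic.

```python
# Minimal sketch of substituting predictions for application-level throughput estimates.
def rate_from_estimate(bitrates, estimate_kbps, margin=0.9):
    feasible = [b for b in bitrates if b <= margin * estimate_kbps]
    return max(feasible) if feasible else bitrates[0]

def decide(bitrates, chunk_samples, prediction_kbps=None, mode="direct"):
    if prediction_kbps is not None and mode == "direct":
        estimate = prediction_kbps                      # prediction replaces the estimate
    else:
        samples = chunk_samples if prediction_kbps is None else chunk_samples + [prediction_kbps]
        estimate = len(samples) / sum(1.0 / s for s in samples)   # harmonic mean of samples
    return rate_from_estimate(bitrates, estimate)

ladder = [235, 750, 1750, 4300]
print(decide(ladder, [2000, 2400, 1800]))                        # chunk-rate estimate only
print(decide(ladder, [2000, 2400, 1800], prediction_kbps=900))   # one-chunk-ahead prediction
```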
In this paper, we present a 4G trace dataset composed of client-side cellular key performance indicators (KPIs) collected from two major Irish mobile operators, across different mobility patterns (static, pedestrian, car, bus and train). The 4G trace dataset contains 135 traces, with an average duration of fifteen minutes per trace and throughput ranging from 0 to 173 Mbit/s at a granularity of one sample per second. Our traces are generated from a well-known non-rooted Android network monitoring application, G-NetTrack Pro. This tool enables capturing various channel-related KPIs, context-related metrics, downlink and uplink throughput, and cell-related information. To the best of our knowledge, this is the first publicly available dataset that contains throughput, channel and context information for 4G networks. To supplement our real-time 4G production network dataset, we also provide a synthetic dataset generated from a large-scale 4G ns-3 simulation that includes one hundred users randomly scattered across a seven-cell cluster. The purpose of this dataset is to provide additional information (such as competing metrics for users connected to the same cell), thus providing otherwise unavailable information about the eNodeB environment and scheduling principle to the end user. In addition to this dataset, we also provide the code and context information to allow other researchers to generate their own synthetic datasets.
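The sketch below illustrates the kind of extra view the synthetic multi-cell dataset enables, counting how many users compete on the same cell at the same instant. The file name and column names are assumptions made for illustration.

```python
# Minimal sketch of deriving per-cell competition from a synthetic multi-cell trace.
import pandas as pd

sim = pd.read_csv("ns3_seven_cell_traces.csv")                 # hypothetical file name
competitors = (sim.groupby(["time_s", "cell_id"])["user_id"]   # assumed column names
                  .nunique()
                  .rename("users_in_cell")
                  .reset_index())
merged = sim.merge(competitors, on=["time_s", "cell_id"])
print(merged[["time_s", "user_id", "cell_id", "users_in_cell"]].head())
```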
The availability of reliable predictions for cellular throughput would offer a fundamental change in the way applications are designed and operated. Numerous cellular applications, including video streaming and VoIP, embed logic that attempts to estimate achievable throughput and adapt their behaviour accordingly. We believe that providing applications with reliable predictions several seconds into the future would enable profoundly better adaptation decisions and dramatically benefit demanding applications like mobile virtual and augmented reality. The question we pose and seek to address is whether such reliable predictions are possible. We conduct a preliminary study of throughput prediction in a cellular environment using statistical machine learning techniques. Accurate prediction is challenging in large-scale cellular environments because they are characterized by highly fluctuating channel conditions. Using simulations and real-world experiments, we study how prediction error varies as a function of the prediction horizon and the granularity of available data. In particular, our simulation experiments show that the prediction error for mobile devices can be reduced significantly by combining measurements from the network with measurements from the end device. Our results indicate that it is possible to accurately predict achievable throughput up to 8 s into the future, with the 50th percentile of all errors below 15% for mobile and 2% for static devices.
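To show how error can be reported as a function of horizon, the sketch below evaluates a naive "repeat the current value" predictor on a toy throughput trace at several horizons. The trace, predictor, and horizons are assumptions; the study itself uses statistical ML models and measured data.

```python
# Minimal sketch of prediction error versus horizon with a naive baseline predictor.
import numpy as np

rng = np.random.default_rng(2)
tput = np.abs(np.cumsum(rng.normal(0, 1.5, 600)) + 20)        # toy throughput trace [Mbit/s]

for horizon in (1, 2, 4, 8):                                   # seconds ahead
    pred = tput[:-horizon]                                     # predict "no change"
    actual = tput[horizon:]
    err = np.abs(pred - actual) / np.maximum(actual, 1e-6) * 100
    print(f"horizon {horizon}s: median error {np.median(err):.1f}%")
```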
HTTP Adaptive video Streaming (HAS) is the dominant traffic type on the Internet. When multiple video clients share a bottleneck link, many problems arise, notably bandwidth underutilisation, unfairness and instability. Key findings from previous papers show that the "ON-OFF" behaviour of adaptive video clients is the main culprit. In this paper we focus on the network, and specifically the effects of network queue size when multiple video clients share network resources. We conducted experiments using the Mininet virtual network environment, streaming real video content to open-source GPAC video clients. We explored how different network buffer sizes, ranging from 1xBDP to 30xBDP (bandwidth-delay product), affect clients sharing a bottleneck link. Within GPAC, we implemented the published state-of-the-art adaptive video algorithms FESTIVE and BBA-2. We also evaluated the impact of web cross-traffic. Our main findings indicate that the "rule-of-thumb" 1xBDP network buffer sizing causes bandwidth underutilisation, limiting the available bandwidth to 70% for all video clients across different round-trip times (RTT). The interaction between web and HAS clients depends on multiple factors, including the adaptation algorithm, bitrate distribution and offered web traffic load. Additionally, operating in an environment with heterogeneous RTTs causes unfairness among competing HAS clients. Based on our experimental results, we propose 2xBDP as the default network queue size in environments where multiple users with homogeneous RTTs share network resources. With heterogeneous RTTs, a BDP value based on the average RTT of all clients improves fairness among competing clients by 60%.
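The arithmetic behind these buffer-sizing experiments is sketched below: compute the bandwidth-delay product in packets and set the bottleneck queue to a multiple of it. The link speed, RTT, and packet size are illustrative assumptions, not the paper's exact experimental parameters.

```python
# Minimal sketch of BDP-based queue sizing for a Mininet bottleneck link.
def bdp_packets(bw_mbps, rtt_ms, pkt_bytes=1500):
    """Bandwidth-delay product expressed in packets."""
    return int(bw_mbps * 1e6 / 8 * rtt_ms / 1000 / pkt_bytes)

bw, rtt = 20, 80                                   # Mbit/s, ms (illustrative values)
for mult in (1, 2, 30):
    print(f"{mult}xBDP -> max_queue_size={mult * bdp_packets(bw, rtt)} packets")
# In Mininet this value would be passed to the bottleneck link, e.g.
#   addLink(s1, s2, cls=TCLink, bw=20, delay='40ms', max_queue_size=2 * bdp_packets(20, 80))
```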
In this demonstration we present a platform that encompasses all of the components required to realistically evaluate the performance of Dynamic Adaptive Streaming over HTTP (DASH) over a real-time ns-3 simulated network. Our platform consists of a network-attached storage server with DASH video clips and a simulated LTE network which utilises the ns-3 LTE module provided by the LENA project. We stream to clients running an open-source player with a choice of adaptation algorithms. By providing a user interface that offers user parametrisation to modify both client and LTE settings, we can view the evaluated results of real-time interactions between the network and the clients. Of special interest is that our platform streams actual video clips to real video clients in real time over a simulated LTE network, allowing reproducible experiments and easy modification of LTE and client parameters. The demonstration showcases how changes in LTE network settings (fading model, scheduler, client distance from the eNB, etc.), as well as video-related decisions at the clients (streaming algorithm, quality selection, clip selection, etc.), can impact the delivery and achievable quality.
The design of an adaptive video client for mobile users is challenged by frequent changes in operating conditions. Such conditions present a seemingly insurmountable challenge to adaptation algorithms, which may fail to find a balance between video rate, stalls, and rate switching. In an effort to achieve the ideal balance, we design OSCAR, a novel adaptive streaming algorithm whose adaptation decisions are optimized to avoid stalls while maintaining high video quality. Our performance evaluation, using real video and channel traces from both 3G and 4G networks, shows that OSCAR achieves the highest percentage of stall-free sessions while maintaining high video quality in comparison to state-of-the-art algorithms.
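As a toy illustration of the stall-avoidance idea, the sketch below keeps only bitrates whose expected download time leaves the playout buffer above a safety reserve. This is a generic buffer-aware constraint written for illustration; it is not OSCAR's actual optimization, and the numbers are assumptions.

```python
# Minimal sketch of a buffer-aware stall-avoidance constraint on bitrate selection.
def safe_bitrates(bitrates_kbps, seg_dur_s, buffer_s, est_tput_kbps, reserve_s=2.0):
    safe = []
    for b in bitrates_kbps:
        download_s = b * seg_dur_s / est_tput_kbps            # expected fetch time
        if buffer_s - download_s + seg_dur_s >= reserve_s:     # buffer level after the fetch
            safe.append(b)
    return safe or [min(bitrates_kbps)]

ladder = [235, 750, 1750, 4300]
print(max(safe_bitrates(ladder, seg_dur_s=4, buffer_s=6, est_tput_kbps=2000)))
```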
Users of triple-play systems expect to be able to use their services at different locations. This raises the issue of extending security to cover mobile triple-play users. Mobile users need to authenticate to the system and vice versa. Users expect confidentiality of their communications, and content providers require that copyrights be respected. The protocols for session control (SIP) and media transfer (RTP) have secured versions, SIPS and SRTP, but that solution would require multiple protocols and keys and would be a burden on users and system administrators. This paper proposes an architecture that uses IMS to provide services and a VPN to secure them. IMS provides the convenience of user mobility, while the VPN provides authentication, confidentiality and integrity. The additional security provided by the VPN does not translate into additional work for users; it is completely transparent to them. The proposed design is implemented and tested. IMS with different services was made available, through the VPN, to mobile users connected to the Internet with different devices and connections. The testing confirmed security and usability for mobile users.