In FMCW automotive radar applications, it is often challenging to design a chirp sequence that satisfies the requirements set by practical driving scenarios while simultaneously enabling high range resolution, large maximum range, and unambiguous velocity estimation. To support long-range scenarios, the chirps should have a sufficiently long duration relative to their bandwidth. At the same time, long chirps result in ambiguous velocity estimation for targets with high velocity. The velocity ambiguity problem is often solved by using multiple chirp sequences with co-prime delay shifts between them. However, coherent processing of multiple chirp sequences is not possible with classical spectral estimation techniques based on the Fast Fourier Transform (FFT), which results in statistically inefficient velocity estimation and a loss of processing gain. In this work, we propose an algorithm that can jointly process multiple chirp sequences and resolve possible ambiguities in the velocity estimates. The resulting algorithm is statistically efficient and gridless. Furthermore, its super-resolution properties increase the resolution of velocity estimation beyond the natural resolution. These results are confirmed by both numerical simulations and experiments with an automotive radar IC.
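To make the ambiguity problem concrete, the sketch below shows the classical idea that multiple sequences enable: a velocity beyond the unambiguous limit of either chirp repetition interval can be recovered by testing which unfolding hypothesis is consistent with both aliased estimates. This illustrates only the ambiguity-resolution principle, not the gridless joint estimator proposed in this work; all parameters (77 GHz carrier, 50/70 µs intervals) are assumptions.

```python
import numpy as np

# Hypothetical parameters: 77 GHz carrier and two chirp repetition
# intervals with a co-prime ratio (5:7).
c = 3e8
fc = 77e9
lam = c / fc
T1, T2 = 50e-6, 70e-6            # chirp repetition intervals
vmax1 = lam / (4 * T1)           # unambiguous velocity, sequence 1
vmax2 = lam / (4 * T2)           # unambiguous velocity, sequence 2

def resolve(v1_alias, v2_alias, n_hyp=8):
    """Pick the velocity hypothesis consistent with both aliased estimates."""
    best, err_best = None, np.inf
    for k in range(-n_hyp, n_hyp + 1):
        v_hyp = v1_alias + 2 * vmax1 * k          # unfold sequence-1 estimate
        # wrap the hypothesis into sequence-2's ambiguity interval and compare
        v2_pred = (v_hyp + vmax2) % (2 * vmax2) - vmax2
        err = abs(v2_pred - v2_alias)
        if err < err_best:
            best, err_best = v_hyp, err
    return best

v_true = 55.0                                     # m/s, beyond both limits
v1 = (v_true + vmax1) % (2 * vmax1) - vmax1       # aliased measurement 1
v2 = (v_true + vmax2) % (2 * vmax2) - vmax2       # aliased measurement 2
print(resolve(v1, v2))                            # ~55.0
```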
In wireless networks, an essential step for precise range-based localization is the high-resolution estimation of multipath channel delays. The resolution of traditional delay estimation algorithms is inversely proportional to the bandwidth of the training signals used for channel probing. Considering that typical training signals have limited bandwidth, delay estimation using these algorithms often leads to poor localization performance. To mitigate these constraints, we exploit the multiband and carrier frequency switching capabilities of wireless transceivers and propose to acquire channel state information (CSI) in multiple bands spread over a large frequency aperture. The data model of the acquired measurements has a multiple shift-invariance structure, and we use this property to develop a high-resolution delay estimation algorithm. We derive the Cramér-Rao Bound (CRB) for the data model and perform numerical simulations of the algorithm using system parameters of the emerging IEEE 802.11be standard. Simulations show that the algorithm is asymptotically efficient and converges to the CRB. To validate the modeling assumptions, we test the algorithm on channel measurements acquired in real indoor scenarios. These results show that delays (ranges) estimated from multiband CSI with a total bandwidth of 320 MHz have an average RMSE of less than 0.3 ns (10 cm) in 90% of the cases.
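The shift-invariance structure referred to above can be illustrated with a minimal single-band ESPRIT-style estimator: CSI sampled on a uniform subcarrier grid is stacked into a Hankel matrix whose signal subspace is shift-invariant, and the delays follow from the rotation relating its shifted sub-blocks. This is only the single-band building block; the algorithm in the paper couples several such structures across bands. Subcarrier spacing, delays, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
df = 1.25e6               # subcarrier spacing used for probing (assumption)
N = 256                   # number of probed subcarriers
taus = np.array([20e-9, 28e-9])   # two path delays (assumption)
amps = np.array([1.0, 0.6])

n = np.arange(N)
H = (amps * np.exp(-2j * np.pi * np.outer(n * df, taus))).sum(axis=1)
H += 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# A Hankel matrix of CSI samples exposes the shift-invariance structure
L = N // 2
X = np.lib.stride_tricks.sliding_window_view(H, L).T   # L x (N-L+1) Hankel
U, s, _ = np.linalg.svd(X, full_matrices=False)
Us = U[:, :2]                                  # signal subspace (2 paths)
Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]         # shift-invariance relation
tau_hat = -np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi * df)
print(np.sort(tau_hat))                        # ≈ [20e-9, 28e-9]
```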
For validation and demonstration of high accuracy ranging and positioning algorithms and systems, a wideband radio signal generation and acquisition testbed, tightly synchronized in time and frequency, is needed. The development of such a testbed requires solutions to several challenges. Tight time and frequency synchronization, derived from a centrally distributed time-frequency reference signal, needs to be maintained in the hardware of the transmitter and receiver nodes, and wideband signal acquisition requires sustainable data throughput between the receiver and the host PC as well as data storage at the GB level. This article presents a testbed for wideband radio signal acquisition, for validation and demonstration of high accuracy ranging and positioning. It consists of multiple Ettus X310 universal software radio peripherals (USRPs) and supports high accuracy (<100 ps), time-deterministic, sustainable signal transmission and acquisition, with a bandwidth up to 320 MHz (in dual channel mode) and frequencies up to 6 GHz. Generation and processing of wideband arbitrary signal waveforms is done offline. To realize these features, RF Network on Chip (RFNoC) compatible HDL units were developed for integration in the X310 SDR platform. Wideband transmission and signal acquisition at a lower duty cycle is applied to reduce the data offloading throughput to the host PC. Benchmarking of the platform was performed to demonstrate sustainable long-duration dual-channel acquisition. Indoor range measurements with the synchronous operation of the testbed show decimeter-level accuracy.
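The need for duty-cycled acquisition follows directly from the data-rate arithmetic. A back-of-the-envelope check, assuming 16-bit I and 16-bit Q samples at 400 Msps per channel (the sample format and the 5% duty cycle are assumptions):

```python
# Offload-rate check; sample format (16-bit I + 16-bit Q at 400 Msps,
# dual channel) and the 5% duty cycle are assumptions.
fs = 400e6                # samples per second per channel
bytes_per_sample = 4      # 16-bit I + 16-bit Q
channels = 2
raw_rate = fs * bytes_per_sample * channels       # bytes per second
print(f"continuous: {raw_rate / 1e9:.1f} GB/s")   # 3.2 GB/s (25.6 Gb/s),
                                                  # more than a 10 GbE link
duty_cycle = 0.05         # acquire 5% of the time
print(f"duty-cycled: {raw_rate * duty_cycle / 1e6:.0f} MB/s")  # 160 MB/s
```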
This paper presents a systematic and comprehensive survey that reviews the latest research efforts focused on machine learning (ML) based performance improvement of wireless networks, while considering all layers of the protocol stack: PHY, MAC and network. First, the related work and paper contributions are discussed, followed by the necessary background on data-driven approaches and machine learning to help non-machine learning experts understand all discussed techniques. Then, a comprehensive review is presented on works employing ML-based approaches to optimize wireless communication parameter settings to achieve improved network quality-of-service (QoS) and quality-of-experience (QoE). We first categorize these works into radio analysis, MAC analysis, and network prediction approaches, followed by subcategories within each. Finally, open challenges and broader perspectives are discussed.
The presence of rich scattering in indoor and urban radio propagation scenarios may cause a high arrival density of multipath components (MPCs). Often the MPCs arrive in clusters at the receiver, where MPCs within one cluster have similar angles and delays. The MPCs arriving within a single cluster are typically unresolvable in the delay domain. In this paper, we analyze the effects of unresolved MPCs on the bias of delay estimation with a multiband subspace fitting algorithm. We treat the unresolved MPCs as a model error that results in perturbed subspace estimation. Starting from a first-order approximation of the perturbations, we derive the bias of the delay estimate of the line-of-sight (LOS) component. We show that it depends on the power and the delay of the unresolved MPCs in the first cluster relative to the LOS component. Numerical experiments are included to show that the derived expression accurately describes the effects of unresolved MPCs on delay estimation.
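The mechanism behind this bias can be reproduced with a toy experiment that is much simpler than the paper's perturbation analysis: when a second MPC falls within one resolution cell behind the LOS path, even an ideal correlation-peak estimator is pulled late, and the pull grows with the relative amplitude and offset of the unresolved component. Bandwidth, offsets, and amplitudes below are assumptions.

```python
import numpy as np

# Toy illustration (not the paper's subspace analysis): bias of a simple
# correlation-peak delay estimator when a second, unresolved MPC sits
# just behind the LOS path.
B = 320e6                         # probing bandwidth (assumption)
N = 1024
f = np.arange(N) / N * B          # frequency grid
tau_los = 50e-9

def peak_delay(H, upsample=64):
    h = np.fft.ifft(H, N * upsample)                 # fine delay grid
    t = np.arange(N * upsample) / (B * upsample)
    return t[np.argmax(np.abs(h))]

for dtau in (0.5e-9, 1e-9, 2e-9):                    # offsets << 1/B
    for g in (0.3, 0.6, 0.9):                        # relative MPC amplitude
        H = (np.exp(-2j * np.pi * f * tau_los)
             + g * np.exp(-2j * np.pi * f * (tau_los + dtau)))
        bias = peak_delay(H) - tau_los
        print(f"dtau={dtau*1e9:.1f} ns, g={g}: bias={bias*1e12:.0f} ps")
```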
Global Navigation Satellite Systems (GNSS) are nowadays the most common solution for meeting the demands of Positioning, Navigation, and Timing (PNT) applications. GNSS are relied on in very diverse contexts and domains, and interest in systems such as GPS, GALILEO and Beidou is continuously increasing. However
In this paper, we focus on the problem of blind joint calibration of multiband transceivers and time-delay (TD) estimation of multipath channels. We show that this problem can be formulated as a particular case of covariance matching. Although this problem is severely ill-posed, prior information about radio-frequency chain distortions and multipath channel sparsity is used for regularization. This approach leads to a biconvex optimization problem, which is formulated as a rank-constrained linear system and solved by a simple group Lasso algorithm. Numerical experiments show that the proposed algorithm provides better calibration and higher resolution for TD estimation than current state-of-the-art methods.
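The group Lasso solver named above is a standard building block; a generic proximal-gradient (ISTA) version is sketched below for reference. The paper's specific covariance-matching formulation, rank constraint, and regularizers are not reproduced here; matrix sizes and the regularization weight in the usage lines are assumptions.

```python
import numpy as np

# Generic proximal-gradient (ISTA) solver for the group Lasso problem
#   min_x 0.5*||A x - b||_2^2 + lam * sum_g ||x_g||_2
# (real-valued toy; the paper's covariance-matching problem is
# complex-valued and has additional structure).

def group_soft_threshold(x, groups, t):
    """Proximal operator of t * sum_g ||x_g||_2 (block soft-thresholding)."""
    out = np.zeros_like(x)
    for g in groups:
        ng = np.linalg.norm(x[g])
        if ng > t:
            out[g] = (1 - t / ng) * x[g]
    return out

def group_lasso(A, b, groups, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = group_soft_threshold(x - grad / L, groups, lam / L)
    return x

# Toy usage: one active group of four coefficients out of five groups.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
groups = [np.arange(i, i + 4) for i in range(0, 20, 4)]
x_true = np.zeros(20)
x_true[4:8] = [1.0, -2.0, 0.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.round(group_lasso(A, b, groups, lam=0.5), 2))
```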
The multipath radio channel is considered to have a non-bandlimited channel impulse response. Therefore, it is challenging to achieve high resolution time-delay (TD) estimation of multipath components (MPCs) from bandlimited observations of communication signals. In this paper, we consider the problem of multiband channel sampling and TD estimation of MPCs. We assume that a nonideal multi-branch receiver is used for multiband sampling, where the noise is nonuniform across the receiver branches. The resulting data model of Hankel matrices formed from the acquired samples has multiple shift-invariance structures, and we propose an algorithm for TD estimation using weighted subspace fitting. The subspace fitting is formulated as a separable nonlinear least squares (NLS) problem, and it is solved using a variable projection method. The proposed algorithm supports high resolution TD estimation from an arbitrary number of bands, and it allows for nonuniform noise across the bands. Numerical simulations show that the algorithm almost attains the Cramér-Rao lower bound, and it outperforms previously proposed methods such as multiresolution TOA, MI-MUSIC, and ESPRIT.
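The variable projection idea named above can be shown in a few lines: because the MPC amplitudes enter the model linearly, they can be eliminated with a least-squares solve inside the cost function, leaving a search over the delays only. The sketch below applies this to raw single-band frequency-domain data with a generic simplex search; the paper instead fits subspaces of multiband Hankel matrices with noise-dependent weighting. All numbers are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
df, N = 1e6, 128
f = np.arange(N) * df
# Model matrix for candidate delays given in nanoseconds
A = lambda taus_ns: np.exp(-2j * np.pi * np.outer(f, taus_ns * 1e-9))
h = A(np.array([30.0, 55.0])) @ np.array([1.0, 0.5 + 0.5j])   # two MPCs
h += 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

def cost(taus_ns):
    M = A(taus_ns)
    amps, *_ = np.linalg.lstsq(M, h, rcond=None)   # eliminate amplitudes
    return np.linalg.norm(h - M @ amps) ** 2       # projected residual

res = minimize(cost, x0=[25.0, 60.0], method="Nelder-Mead")
print(res.x)   # ≈ [30.0, 55.0] (ns)
```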
In order to validate and demonstrate newly developed ranging techniques, a flexible test platform for signal acquisition enabling offline signal processing is generally needed. Developing such a platform becomes challenging when working with wideband (>100 MHz) signals due to the critical timing, the very high sampling rates and the huge data throughput involved. In this paper, we introduce an Ettus X310 SDR platform using custom-designed logic allowing for dual-channel 400 Msps data transmission and acquisition for centimeter-level ranging applications. Furthermore, we present initial measurement results as a benchmark of the platform, which show that the time delay of a 10 m cable can be estimated with high accuracy, on the order of 50 ps.
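Reaching ~50 ps at a 2.5 ns sample period requires sub-sample delay estimation. A common baseline, sketched here under the assumption of a known wideband transmit waveform, is to cross-correlate the transmitted and received signals and refine the peak with a parabolic fit; the actual processing used for the benchmark may differ.

```python
import numpy as np

# Sub-sample delay estimation baseline: cross-correlate, then refine the
# correlation peak with a three-point parabolic fit. Waveform, SNR, and
# the cable delay value are assumptions.
fs = 400e6                                 # sample rate (2.5 ns period)
rng = np.random.default_rng(2)
x = rng.standard_normal(4096)              # known wideband waveform
true_delay = 50.35e-9                      # ~10 m of cable

# Apply the fractional delay in the frequency domain (circular, but the
# waveform is noise-like, so edge effects are negligible here)
f = np.fft.fftfreq(x.size, 1 / fs)
y = np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * true_delay)).real
y += 0.01 * rng.standard_normal(x.size)

r = np.correlate(y, x, mode="full")
k = int(np.argmax(r))
# Three-point parabolic interpolation around the integer-lag peak
frac = (r[k - 1] - r[k + 1]) / (2 * (r[k - 1] - 2 * r[k] + r[k + 1]))
delay = (k - (x.size - 1) + frac) / fs
print(f"estimated delay: {delay * 1e9:.3f} ns")   # ≈ 50.35 ns
```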
Achieving high resolution time-of-arrival (TOA) estimation in multipath propagation scenarios from bandlimited observations of communication signals is challenging because the multipath channel impulse response (CIR) is not bandlimited. When the CIR is modeled as a sparse sequence of Diracs, TOA estimation becomes a problem of parametric spectral inference from observed bandlimited signals. To increase resolution without requiring unrealistic sampling rates, we consider a multiband sampling approach and propose a practical multibranch receiver for the acquisition. The resulting data model exhibits multiple shift-invariance structures, and we propose a corresponding multiresolution TOA estimation algorithm based on the ESPRIT algorithm. The performance of the algorithm is compared against the derived Cramér-Rao lower bound, using simulations with standardized ultra-wideband (UWB) channel models. We show that the proposed approach provides high resolution estimates while reducing spectral occupancy and sampling costs compared to traditional UWB approaches.
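The underlying data model is easy to write down: each band observes the same Dirac train through its own slice of spectrum, so all bands share the same delay-dependent phase progressions while any single band's aperture limits its resolution. A minimal construction (band placement, spacing, and delays are assumptions):

```python
import numpy as np

# CIR modeled as a sparse train of Diracs, observed in two bands. Band i
# yields spectral samples H_i[n] = sum_k a_k*exp(-2j*pi*(f_i + n*df)*tau_k),
# so both bands share the delay-dependent "poles" exp(-2j*pi*df*tau_k) and
# differ only in band-dependent phase factors: the multiple shift-invariance
# structure the algorithm exploits. All numbers are assumptions.
taus = np.array([10e-9, 14e-9, 31e-9])      # Dirac (MPC) delays
amps = np.array([1.0, 0.7, 0.4])
df, N = 1.25e6, 64                          # subcarrier spacing, samples/band
f_start = np.array([5.0e9, 5.5e9])          # two band origins, wide aperture

H = [(amps * np.exp(-2j * np.pi * np.outer(f0 + np.arange(N) * df, taus))
      ).sum(axis=1)
     for f0 in f_start]
# Each band alone resolves only ~1/(N*df) = 12.5 ns (10 vs 14 ns merge);
# jointly, the 500 MHz aperture between the bands enables finer resolution.
```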
Synchronization and ranging in internet of things (IoT) networks are challenging due to the narrowband nature of signals used for communication between IoT nodes. Recently, several estimators for range estimation using phase difference of arrival (PDoA) measurements of narrowband signals have been proposed. However, these estimators are based on data models which do not consider the impact of clock skew on range estimation. In this paper, clock-skew and range estimation are studied under a unified framework. We derive a novel and precise data model for PDoA measurements which incorporates the unknown clock-skew effects. We then formulate joint estimation of the clock skew and range as a two-dimensional (2-D) frequency estimation problem of a single complex sinusoid. Furthermore, we propose: (i) a two-way communication protocol for collecting PDoA measurements and (ii) a weighted least squares (WLS) algorithm for joint estimation of clock skew and range leveraging the shift invariance property of the measurement data. Finally, through numerical experiments, the performance of the proposed protocol and estimator is compared against the Cramér-Rao lower bound, demonstrating that the proposed estimator is asymptotically efficient.
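The 2-D single-sinusoid view can be illustrated with a toy estimator: stacking PDoA measurements over carrier hops (one axis) and successive exchanges (the other), the delay appears as a frequency along the hop axis and the skew-induced offset along the time axis. Here a zero-padded 2-D FFT peak stands in for the paper's shift-invariance WLS estimator, and parameters are chosen so neither frequency aliases, which the actual protocol must ensure by design. All numbers are assumptions.

```python
import numpy as np

# Toy 2-D frequency estimation for joint range / clock-skew recovery from
# PDoA-like measurements; a 2-D FFT peak replaces the WLS estimator in the
# paper. Hop step, exchange interval, range, and CFO are assumptions.
c = 3e8
df_hop = 1e6               # carrier frequency step per hop
Tm = 10e-3                 # time between two-way exchanges
tau = 100e-9               # true one-way delay (30 m)
f_cfo = 4.0                # skew-induced frequency offset (Hz)
M, Np = 64, 32             # hops x exchanges

rng = np.random.default_rng(3)
m, n = np.meshgrid(np.arange(M), np.arange(Np), indexing="ij")
z = np.exp(2j * np.pi * (df_hop * tau * m + f_cfo * Tm * n))
z += 0.05 * (rng.standard_normal(z.shape) + 1j * rng.standard_normal(z.shape))

Z = np.fft.fft2(z, s=(16 * M, 16 * Np))          # zero-padded 2-D FFT
i, j = np.unravel_index(np.argmax(np.abs(Z)), Z.shape)
tau_hat = i / (16 * M) / df_hop                  # cycles/hop -> delay
cfo_hat = j / (16 * Np) / Tm                     # cycles/exchange -> Hz
print(f"range ~ {tau_hat * c:.1f} m, CFO ~ {cfo_hat:.2f} Hz")
```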
This paper presents end-to-end learning from spectrum data, an umbrella term for new sophisticated wireless signal identification approaches in spectrum monitoring applications based on deep neural networks. End-to-end learning makes it possible to: 1) automatically learn features directly from simple wireless signal representations, without requiring the design of hand-crafted expert features like higher order cyclic moments, and 2) train wireless signal classifiers in one end-to-end step, which eliminates the need for complex multi-stage machine learning processing pipelines. The purpose of this paper is to present the conceptual framework of end-to-end learning for spectrum monitoring and systematically introduce a generic methodology to easily design and implement wireless signal classifiers. Furthermore, we investigate the importance of the choice of wireless data representation for various spectrum monitoring tasks. In particular, two case studies are elaborated: 1) modulation recognition and 2) wireless technology interference detection. For each case study, three convolutional neural networks are evaluated for the following wireless signal representations: temporal IQ data, the amplitude/phase representation, and the frequency domain representation. From our analysis, we show that the wireless data representation impacts the accuracy depending on the specifics and similarities of the wireless signals that need to be differentiated, with different data representations resulting in accuracy variations of up to 29%. Experimental results show that using the amplitude/phase representation for recognizing modulation formats can lead to performance improvements of up to 2% and 12% at medium-to-high SNR compared to IQ and frequency domain data, respectively. For the task of detecting interference, the frequency domain representation outperformed the amplitude/phase and IQ data representations by up to 20%.
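As a concrete anchor for the kind of classifier evaluated in the case studies, below is a minimal convolutional network operating on raw IQ windows (a 2 x 128 tensor of I and Q traces). The architecture, window length, and class count are illustrative assumptions, not the networks benchmarked in the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of a CNN for modulation recognition on raw IQ windows.
# Sizes are illustrative: 2 input channels (I, Q), 128 samples per window,
# 11 modulation classes.
class IQNet(nn.Module):
    def __init__(self, n_classes=11):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),                       # 128 -> 64
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),                       # 64 -> 32
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 32, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):          # x: (batch, 2, 128) raw IQ
        return self.classifier(self.features(x))

model = IQNet()
logits = model(torch.randn(8, 2, 128))   # 8 windows -> (8, 11) class scores
```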
Driven by the fast growth of wireless communication, the trend of sharing spectrum among heterogeneous technologies has become increasingly dominant. Identifying concurrent technologies is an important step towards efficient spectrum sharing. However, due to the complexity of recognition algorithms and strict sampling-rate requirements, communication systems capable of recognizing signals other than their own type are extremely rare. This work shows that the multimodal distribution of the received signal strength indicator (RSSI) is related to the signals' modulation schemes and medium access mechanisms, and that RSSI from different technologies may exhibit highly distinctive features. A distinction is made between technologies with a streaming or a non-streaming property, and appropriate feature spaces can be established either by deriving parameters such as packet duration from RSSI or by directly using RSSI's probability distribution. An experimental study shows that even RSSI acquired at a sub-Nyquist sampling rate is able to provide sufficient features to differentiate technologies such as Wi-Fi, Long Term Evolution (LTE), Digital Video Broadcasting-Terrestrial (DVB-T) and Bluetooth. The usage of the RSSI distribution-based feature space is illustrated via a sample algorithm. Experimental evaluation indicates that more than 92% accuracy is achieved with the appropriate configuration. As the analysis of RSSI distributions is straightforward and less demanding in terms of system requirements, we believe it is highly valuable for the recognition of wideband technologies on constrained devices in the context of dynamic spectrum access.
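The two feature-space constructions described above map naturally to a few lines of code: thresholding RSSI to extract burst (packet) durations for streaming technologies, and using the normalized RSSI histogram directly as a distribution-shape feature otherwise. Threshold and bin settings are illustrative assumptions.

```python
import numpy as np

# Sketch of the two feature types: burst durations from thresholded RSSI,
# and the empirical RSSI distribution (histogram) as a feature vector.
# The -75 dBm threshold and the bin grid are assumptions.

def burst_durations(rssi_dbm, fs, threshold_dbm=-75.0):
    """Durations (s) of contiguous above-threshold runs, e.g. packets."""
    active = rssi_dbm > threshold_dbm
    edges = np.diff(active.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    stops = np.flatnonzero(edges == -1) + 1
    if active[0]:
        starts = np.r_[0, starts]
    if active[-1]:
        stops = np.r_[stops, active.size]
    return (stops - starts) / fs

def rssi_histogram(rssi_dbm, lo=-100, hi=-20, bins=40):
    """Normalized RSSI histogram as a distribution-shape feature vector."""
    h, _ = np.histogram(rssi_dbm, bins=bins, range=(lo, hi))
    return h / h.sum()
```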