Automatic modulation classification (AMC) is a typical technology for identifying different modulation types and has been widely applied in various scenarios. Recently, deep learning (DL), one of the most advanced classification approaches, has been applied to AMC. However, previously proposed AMC methods are centralized in nature, i.e., all training data must be collected together to train a single neural network. In addition, they generally rely on powerful computing devices and may not be suitable for edge devices. Thus, a distributed learning-based AMC (DistAMC) method is proposed, which relies on the cooperation of multiple edge devices and a model averaging (MA) algorithm. Compared with centralized AMC (CentAMC), DistAMC offers two advantages: higher training efficiency and lower computing overhead, both of which are well matched to the characteristics of edge devices. Simulation results show that there is only a slight performance gap between DistAMC and CentAMC, and that they have similar convergence speed; however, the training time per epoch of the former is shorter than that of the latter when low latency and high bandwidth are available for the model transmission process of DistAMC. Moreover, DistAMC can combine the computing power of multiple edge devices to reduce the computing overhead that a single device bears in CentAMC.
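The model averaging (MA) step at the heart of such a distributed scheme can be sketched as follows; this is a minimal illustration (toy weight vectors, no actual training), not the paper's implementation:

```python
import numpy as np

# Hypothetical sketch of the model-averaging (MA) step in distributed AMC:
# each edge device holds its own copy of the model weights; after a local
# training round, the weights are averaged element-wise into a global model.

def model_average(device_weights):
    """Element-wise average of the weight vectors from all edge devices."""
    return np.mean(np.stack(device_weights), axis=0)

# Three edge devices with (toy) locally updated 4-parameter models.
w_devices = [np.array([1.0, 2.0, 3.0, 4.0]),
             np.array([2.0, 3.0, 4.0, 5.0]),
             np.array([3.0, 4.0, 5.0, 6.0])]

w_global = model_average(w_devices)
print(w_global)  # [2. 3. 4. 5.]
```

Only the weights travel over the network, which is why the per-epoch cost of DistAMC hinges on the latency and bandwidth of the model transmission rather than on moving raw training data.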
In this paper, we propose a convolutional neural network (CNN) aided automatic modulation recognition (AMR) method for a multiple antenna system. We also present two specific combination strategies, namely the relative majority voting method and the arithmetic mean method, to improve the classification performance in comparison with the state of the art. Our results verify that the proposed method effectively exploits dominant features and classifies the modulation types with higher accuracy than AMR employing high order cumulants (HOC) and artificial neural networks (ANN).
For a long time, poor channel quality and a shortage of frequency resources have restricted the development of shortwave communication. Adaptive shortwave communication is considered an effective remedy, and channel quality estimation (CQE) is essential for the shortwave adaptive communication system. Recently, deep learning (DL) based CQE methods have been proposed to achieve good identification performance. However, existing methods struggle to extract full features from baseband signals, because their deep neural networks are trained on signal samples of limited length. To avoid this problem, we consider two training models. The first transforms baseband signals into constellation diagrams, to which three kinds of DL algorithms (i.e., AlexNet, ResNet, DenseNet) are applied respectively for training. The second slices IQ signals into multi-slice signals, applies a convolutional neural network (CNN), and makes CQE a joint multi-slice, cooperative decision. Experimental results show that the proposed methods are robust, and the joint multi-slice and cooperative detection aided DL-based CQE method achieves better performance, with accuracy of up to 100%.
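The slicing and cooperative-decision idea can be sketched in a few lines; the slice length and the per-slice classifier outputs below are illustrative stand-ins, not the paper's actual CNN:

```python
import numpy as np

def slice_iq(iq, slice_len):
    """Cut a long IQ sample sequence into fixed-length slices (remainder dropped)."""
    n = len(iq) // slice_len
    return iq[:n * slice_len].reshape(n, slice_len)

def cooperative_decision(slice_predictions):
    """Joint decision: majority vote over the per-slice class predictions."""
    values, counts = np.unique(slice_predictions, return_counts=True)
    return values[np.argmax(counts)]

iq = np.arange(10)           # stand-in for a baseband IQ sample sequence
slices = slice_iq(iq, 3)     # 3 slices of length 3, remainder dropped
preds = np.array([2, 2, 1])  # per-slice labels from a hypothetical CNN
print(slices.shape, cooperative_decision(preds))  # (3, 3) 2
```

The cooperative vote is what lets short, individually ambiguous slices combine into a reliable channel-quality decision.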
Blockchain (BC) and artificial intelligence (AI) are often utilized separately in energy trading systems (ETSs). However, these technologies can complement each other and reinforce their capabilities when integrated. This article provides a comprehensive review of consensus algorithms (CAs) of BC and deep reinforcement learning (DRL) in ETS. While the distributed consensus underpins the immutability of prosumers' transaction records, the deluge of data generated paves the way for AI algorithms to perform forecasting and address other data-analytic issues; hence the motivation to combine BC with AI to realize a secure and intelligent ETS. This study explores the principles, potentials, models, active research efforts, and unresolved challenges in CA and DRL. The review shows that, despite the current interest in each of these technologies, little effort has been made to exploit them jointly in ETS because of several open issues. Therefore, new insights are required to harness the full potential of CA and DRL in ETS. We propose a framework and offer some perspectives on effective BC-AI integration in ETS.
In this letter, we propose a novel low-complexity user pairing (UP) and power allocation (PA) technique utilizing the compressive sensing (CS) theory for non-orthogonal multiple access (NOMA) systems. In the proposed scheme, we formulate the joint UP and PA optimization problem in NOMA systems as a relaxed sparse $l_{1}$-norm problem, based on the fact that a limited, i.e., 'sparse,' number of users has to be paired among a large number of users over a dedicated resource block (RB). By exploiting this inherent sparsity property, we can obtain a near-optimal solution by relaxing the original NP-hard problem of joint UP and PA. Then, a CS technique is proposed to find a solution for the relaxed problem. Simulation results validate the effectiveness of the proposed scheme in terms of sum-rate and reduced complexity compared with state-of-the-art schemes.
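A standard way to solve such a relaxed $l_{1}$-norm problem is iterative shrinkage-thresholding (ISTA). The sketch below is a generic CS solver on toy data (random measurement matrix, hypothetical sparse pairing indicator), not the letter's specific algorithm:

```python
import numpy as np

def soft_threshold(v, tau):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=200):
    """ISTA for min ||Ax - b||^2 / 2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40)) / np.sqrt(20)   # toy measurement matrix
x_true = np.zeros(40)
x_true[[3, 17, 29]] = 3.0                         # "sparse" set of paired users
x_hat = ista(A, A @ x_true, lam=0.01)
print(np.flatnonzero(np.abs(x_hat) > 1.0))        # recovered support
```

The sparsity of the solution is what keeps the complexity low: only a handful of candidate pairings survive the thresholding.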
Massive multiple-input multiple-output (M-MIMO) is one of the main 5G-enabling technologies that promise to increase cell throughput and reduce multiuser interference. However, these abilities rely on exploiting the channel state information (CSI) feedback at base stations (BSs). One critical challenge is that the user equipment (UE) needs to return a large amount of channel information to the base station, creating a large signaling overhead. In this letter, we propose a framework based on deep learning, which is able to efficiently compress and recover the feedback CSI. The encoder learns the most suitable compressed codeword corresponding to the CSI. The decoder decompresses this codeword at the receiving BS end using a Generative Adversarial Network (GAN). A novel objective function is proposed and used to train the Deep Convolutional Generative Adversarial Network (DCGAN) to improve the performance of our proposed framework. Simulation results demonstrate that the proposed framework outperforms traditional compressive sensing-based methods and provides remarkably robust performance for the outdoor channels.
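The compress-then-recover pipeline can be illustrated with a linear stand-in for the learned encoder and decoder; the dimensions below are illustrative, and the least-squares inverse is only a placeholder for the GAN decoder the letter actually trains:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the DL encoder/decoder: a random linear encoder compresses
# a flattened CSI vector into a short codeword; the BS reconstructs from it.
N, M = 64, 16                        # CSI length, codeword length (4x compression)
csi = rng.standard_normal(N)

W_enc = rng.standard_normal((M, N))
codeword = W_enc @ csi               # UE side: CSI -> short codeword (feedback)
print(codeword.shape)                # (16,)

# BS side: minimum-norm reconstruction from the codeword. The DCGAN decoder
# in the paper learns a far better channel prior than this linear inverse.
csi_hat = np.linalg.pinv(W_enc) @ codeword
print(csi_hat.shape)                 # (64,)
```

The point of the learned decoder is exactly that the 4x-compressed codeword is ambiguous under a linear inverse, whereas a generative prior over realistic channels resolves that ambiguity.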
Automatic modulation classification (AMC) is an essential technology for non-cooperative communication systems, and it is widely applied in various communication scenarios. In recent years, deep learning (DL) has been introduced into AMC due to its outstanding identification performance. However, it is almost impossible to implement previously proposed DL-based AMC algorithms without a large number of labeled samples, while realistic communication scenarios generally offer few labeled samples and many unlabeled samples. In this paper, we propose a transfer learning (TL)-based semi-supervised AMC (TL-AMC) in a zero-forcing aided multiple-input multiple-output (ZF-MIMO) system. TL-AMC has a novel deep reconstruction and classification network (DRCN) structure that consists of a convolutional auto-encoder (CAE) and a convolutional neural network (CNN). Unlabeled samples flow through the CAE for modulation signal reconstruction, while labeled samples are fed into the CNN for AMC. Knowledge is transferred from the encoder layer of the CAE to the feature layer of the CNN by sharing their weights, in order to avoid the ineffective feature extraction of the CNN under limited labeled samples. Simulation results demonstrate the effectiveness of TL-AMC. In detail, TL-AMC performs better than CNN-based AMC under limited samples. Moreover, compared with CNN-based AMC trained on massive labeled samples, TL-AMC achieves similar classification accuracy in the relatively high SNR regime.
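The weight-sharing mechanism can be sketched as follows; the layer names and shapes are hypothetical, and the point is only that the CAE encoder and the CNN feature layer reference the same parameter arrays rather than copies:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the weight-sharing idea in TL-AMC (shapes are hypothetical):
# the CAE encoder and the CNN feature extractor hold the *same* weight
# arrays, so unsupervised reconstruction updates also move the classifier.
encoder = {"conv1": rng.standard_normal((16, 2, 3))}   # CAE encoder layer
classifier = {"conv1": encoder["conv1"],               # shared, not copied
              "fc": rng.standard_normal((10, 16))}     # CNN-only head

# An (unsupervised) reconstruction update applied to the CAE encoder...
encoder["conv1"] += 0.1

# ...is immediately visible to the CNN feature layer:
print(np.shares_memory(encoder["conv1"], classifier["conv1"]))  # True
```

This is how knowledge learned from the plentiful unlabeled samples reaches the classifier even when labeled samples are scarce.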
This paper studies the practical limitations of learning methods for resource management in a non-stationary radio environment. We propose two learning models carefully designed to support a rate-maximization objective under user mobility. We study the effects of practical system constraints, such as latency and reliability, on rate maximization with deep learning models. For consistent testing in the non-stationary environment, we present a generic dataset generation method to benchmark different learning models against traditional optimal resource management solutions. Our results indicate that learning models face practical training challenges that limit their applications, and that the models need environment-specific design to reach the accuracy of an optimal algorithm.
Information sharing among vehicles enables intelligent transport applications in the Internet of Vehicles (IoV), such as self-driving and traffic awareness. However, due to the openness of wireless communication (e.g., DSRC), the integrity, confidentiality, and availability of information resources are vulnerable to illegal access, which threatens the security of related IoV applications. In this paper, we propose a novel Risk Prediction-Based Access Control model, named RPBAC, which assigns access rights to a node by predicting its risk level. Considering the impact of limited training datasets on prediction accuracy, we first introduce a Generative Adversarial Network (GAN) in our risk prediction module. The GAN enlarges the training sets used to train the Neural Network that predicts the risk level of vehicles. In addition, to address the mode collapse and vanishing gradient problems of the traditional GAN, we develop a combined GAN based on the Wasserstein distance, named WCGAN, to reduce the convergence time of the training model. The simulation results show that the WCGAN converges faster than the traditional GAN, and the datasets generated by WCGAN have higher similarity to real datasets. Moreover, the Neural Network trained with the datasets generated by WCGAN plus real datasets (NN-WCGAN) achieves faster training, higher prediction accuracy, and a lower false negative rate than the Neural Network trained with GAN-generated plus real datasets (NN-GAN) and the Neural Network trained with real datasets only (NN). Additionally, the RPBAC model can greatly improve the accuracy of access control.
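The Wasserstein objective that distinguishes WCGAN-style training from an ordinary GAN can be sketched with a linear "critic" on toy data; the real WCGAN of course uses deep networks, and the clipping constant is illustrative:

```python
import numpy as np

# Minimal sketch of the Wasserstein-distance objective used in WGAN-style
# training (a linear critic on toy 2-D samples, not the paper's WCGAN).
def critic_loss(critic_w, real, fake):
    # The critic maximizes E[f(real)] - E[f(fake)]; we return the negated
    # value so it can be minimized like an ordinary loss.
    return np.mean(fake @ critic_w) - np.mean(real @ critic_w)

def clip_weights(w, c=0.01):
    # Weight clipping keeps the critic (approximately) 1-Lipschitz,
    # which is what avoids the vanishing gradients of the standard GAN.
    return np.clip(w, -c, c)

w = clip_weights(np.array([0.5, -0.5]))
print(w)  # [ 0.01 -0.01]
```

Because this loss stays informative even when real and generated distributions barely overlap, training converges faster and does not collapse the way the original GAN objective can.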
Drones can take on many assistance roles in complex communication scenarios and act as aerial relays to support terrestrial communications. Although a great deal of emphasis has been placed on drone-assisted networks, existing work focuses on routing protocols without fully exploiting the drones' superiority and flexibility. To fill this gap, this paper proposes a collaborative communication scheme in which multiple drones assist urban vehicular ad-hoc networks (VANETs). In this scheme, drones are distributed according to the predicted terrestrial traffic conditions in order to efficiently alleviate the inherent problems of conventional VANETs, such as building obstacles, isolated vehicles, and uneven traffic loading. To effectively coordinate multiple drones, the issue is modeled as a multimodal optimization problem to improve the global performance over a given space. To this end, a succinct swarm-based optimization algorithm, namely the Multimodal Nomad Algorithm (MNA), is presented. This algorithm is inspired by the migratory behavior of nomadic tribes on the Mongolian grassland. Based on floating car data from Chengdu, China, extensive experiments are conducted to examine the performance of the MNA-optimized drone-assisted VANET. The results demonstrate that our scheme outperforms its counterparts in terms of hop number, packet delivery ratio, and throughput.
In order to transmit communication signals of different properties quickly, effectively, and accurately, various modulation styles can be adopted, and accurate recognition of the signal modulation is required at the receive side. Automatic modulation recognition (AMR) is a key technique to identify the various modulation styles of signals received over wireless channels. It can be used in many kinds of communication systems, including single antenna and multiple antenna systems. In this paper, we propose a convolutional neural network (CNN) aided AMR method for a multiple antenna system. Compared with the traditional AMR classification method aided by high order cumulants (HOC) and artificial neural networks (ANN), both evaluated with two specific combination strategies, namely the relative majority voting method and the arithmetic mean method, the proposed AMR with the arithmetic mean method has the best classification performance. The experimental results verify that the CNN, one of the representative algorithms of deep learning, has a strong ability to exploit dominant features and classify the modulation styles.
The non-orthogonal multiple access (NOMA) based wireless caching network (WCN) is considered a promising technology for next-generation wireless communications, since it can significantly improve spectral efficiency. In this letter, we propose a quality of service (QoS)-oriented dynamic power allocation strategy for NOMA-WCN. In the content placement phase, the base station (BS) sends multiple files to helpers by allocating different powers according to the different QoS targets of the files, ensuring that all helpers can successfully decode the two most popular files. In the content delivery phase, helpers serve two users at the same time by allocating the minimum power to the far user according to its QoS requirement, and then all the remaining power is allocated to the near user. Hence, our proposed method increases the hit probability and reduces the outage probability compared with conventional methods. Simulation results confirm that the proposed power allocation method can significantly improve the caching hit probability and reduce the user outage probability.
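The delivery-phase rule ("minimum power to the far user, remainder to the near user") reduces to solving the far user's SINR constraint with equality. The sketch below uses assumed symbols (total power P, far-user channel gain g_far, noise power N0, target rate R_min) and is a generic illustration, not the letter's exact system model:

```python
import numpy as np

# Give the far user exactly the power needed to hit its target rate R_min
# (treating the near user's signal as interference under SIC ordering),
# then hand all remaining power to the near user.
def qos_power_split(P, g_far, N0, R_min):
    t = 2.0 ** R_min - 1.0                       # required SINR for the far user
    # Solve t = P_far*g_far / ((P - P_far)*g_far + N0) for P_far:
    P_far = t * (P * g_far + N0) / (g_far * (1.0 + t))
    return P_far, P - P_far

P_far, P_near = qos_power_split(P=1.0, g_far=0.5, N0=0.1, R_min=1.0)
# Check: the far user's achievable rate exactly meets R_min.
rate_far = np.log2(1 + P_far * 0.5 / (P_near * 0.5 + 0.1))
print(round(rate_far, 6))  # 1.0
```

Allocating with equality rather than margin is precisely what frees the maximum power for the near user and hence drives the hit probability up.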
In the fifth-generation (5G) mobile communication system, various service requirements of different communication environments are expected to be satisfied. As a new evolution of the network structure, the heterogeneous network (HetNet) has been studied in recent years. Compared with homogeneous networks, HetNets can increase the opportunity for spatial resource reuse and improve users' quality of service by deploying small cells within the coverage of macrocells. However, since different users interfere with one another and spectrum resources are limited in HetNets, efficient resource allocation (RA) algorithms are vitally important to reduce mutual interference and achieve spectrum sharing. In this article, we provide a comprehensive survey on RA in HetNets for 5G communications. Specifically, we first introduce the definition and different network scenarios of HetNets. Second, RA models are discussed. Then, we present a classification to analyze current RA algorithms in the existing works. Finally, some challenging issues and future research trends are discussed. Accordingly, we provide two potential structures for 6G communications to solve the RA problems of next-generation HetNets: a learning-based RA structure and a control-based RA structure. The goal of this article is to provide important information on HetNets, which could be used to guide the development of more efficient techniques in this research area.
Drone-aided ubiquitous applications play an increasingly important role in our daily life. Accurate recognition of drones is required in aviation management due to their potential risks and even disasters. Radio frequency (RF) fingerprinting-based recognition technology built on deep learning is considered one of the effective approaches to extract hidden abstract features from the RF data of drones. Existing deep learning-based methods either impose a high computational burden or achieve low accuracy. In this paper, we propose a deep complex-valued convolutional neural network (DC-CNN) method based on RF fingerprinting for recognizing different drones. Compared with existing recognition methods, the DC-CNN method offers high recognition accuracy, fast running time, and small network complexity. Nine algorithm models and two datasets are used to demonstrate the superior performance of our system. Experimental results show that our proposed DC-CNN can achieve recognition accuracies of 99.5% and 74.1% on 4-class and 8-class RF drone datasets, respectively.
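The core operation of a complex-valued convolutional layer can be sketched by expanding one complex convolution into four real ones; this is a 1-D, single-channel illustration with toy values, not the paper's DC-CNN architecture:

```python
import numpy as np

# Core of a complex-valued convolution: (xr + j*xi) * (wr + j*wi) expands
# into four real convolutions, which is how complex layers are built from
# real-valued primitives.
def complex_conv1d(x, w):
    xr, xi = x.real, x.imag
    wr, wi = w.real, w.imag
    conv = lambda a, b: np.convolve(a, b, mode="valid")
    real = conv(xr, wr) - conv(xi, wi)
    imag = conv(xr, wi) + conv(xi, wr)
    return real + 1j * imag

x = np.array([1 + 1j, 2 - 1j, 0 + 2j])   # toy complex RF samples
w = np.array([1 - 1j])                   # toy 1-tap complex kernel
print(complex_conv1d(x, w))              # matches np.convolve(x, w, 'valid')
```

Operating on the complex IQ samples directly preserves the phase information that real-valued networks discard, which is where much of an RF fingerprint lives.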
This paper presents an enhanced peak cancellation method with simplified in-band distortion compensation for massive multi-input multi-output (mMIMO) orthogonal frequency division multiplexing (OFDM). The method compensates for the in-band distortion caused by peak cancellation by utilizing extra transmit antennas: a compensation signal is designed and transmitted via the extra antenna elements so that the in-band distortion is canceled at the receiver end. Consequently, deep peak cancellation is possible without degrading bit error rate performance. The proposed method is further extended to non-linear precoded mMIMO-OFDM systems, where the perturbation vector cancellation signal is superimposed on the compensation signal so that the received signal is demodulated without non-linear processing to remove the perturbation vector. Thus, the proposed method does not require iterative calculation to compensate for the in-band distortion. Our results show the effectiveness of the proposed method in terms of peak-to-average power ratio (PAPR) characteristics, signal to noise and distortion power ratio (SDNR), bit error rate (BER), and throughput in comparison with the state of the art.