To handle deep learning-based automatic modulation classification (AMC) in scenarios where the training data are distributed over a network and cannot be gathered at a centralized location, decentralized learning-based AMC (DecentAMC) has been presented. However, the DecentAMC method requires frequent model parameter uploading and downloading, which causes high communication overhead. In this paper, an innovative learning framework for AMC, named DeEnAMC, is proposed; the framework is realized by combining decentralized learning and ensemble learning. Our results show that the proposed DeEnAMC reduces communication overhead while maintaining classification performance similar to that of DecentAMC.
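The abstract does not detail the ensemble step; below is a minimal sketch under the assumption that each network node trains its own classifier locally and predictions are combined by averaging softmax outputs (soft voting), so no per-round weight exchange is needed. All model names and shapes are illustrative.

```python
# Hypothetical sketch: combine locally trained AMC classifiers by
# soft-voting instead of exchanging model weights every round.
import numpy as np

def ensemble_predict(local_models, x):
    """Average the softmax outputs of independently trained local models.

    local_models: list of callables mapping a signal batch to class
                  probabilities of shape (batch, n_classes).
    x:            batch of received-signal feature vectors.
    """
    probs = np.mean([m(x) for m in local_models], axis=0)
    return np.argmax(probs, axis=1)

# Toy usage with stand-in "models" (fixed random linear scorers).
n_classes, dim = 7, 128
def make_model(seed):
    w = np.random.default_rng(seed).normal(size=(dim, n_classes))
    def model(x):
        logits = x @ w
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
    return model

models = [make_model(s) for s in range(3)]
x = np.random.default_rng(0).normal(size=(4, dim))
print(ensemble_predict(models, x))
```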
The purpose of network intrusion detection (NID) is to detect intrusions in the network, which plays a critical role in ensuring the security of the Internet of Things (IoT). Recently, deep learning (DL) has achieved great success in the field of intrusion detection. However, the limited computing capability and storage of IoT devices hinder the actual deployment of high-complexity DL-based models. In this article, we propose a novel NID method for IoT based on a lightweight deep neural network (LNN). In the data preprocessing stage, to prevent high-dimensional raw traffic features from inflating model complexity, we use the principal component analysis (PCA) algorithm to reduce the feature dimensionality. Our classifier uses an expansion-and-compression structure, an inverted residual structure, and a channel shuffle operation to achieve effective feature extraction at low computational cost. For the multiclass classification task, we adopt the NID loss, which replaces the standard cross-entropy loss to deal with the uneven distribution of samples. Experiments on two real-world NID datasets demonstrate that our method achieves excellent classification performance with low model complexity and a small model size, and is suitable for classifying IoT traffic under normal and attack scenarios.
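A minimal sketch of the PCA-based dimensionality reduction described for the preprocessing stage, using scikit-learn; the raw feature dimension (122) and the number of retained components (20) are assumptions, not the paper's values.

```python
# Illustrative preprocessing step: reduce high-dimensional traffic
# features with PCA before feeding a lightweight classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 122))            # stand-in for raw NID features

X_std = StandardScaler().fit_transform(X)   # PCA is scale-sensitive
pca = PCA(n_components=20)                  # assumed component count
X_low = pca.fit_transform(X_std)

print(X_low.shape)                          # (1000, 20)
print(pca.explained_variance_ratio_.sum())  # variance retained
```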
Mobile edge computing (MEC) is expected to provide low-latency computation services for wireless devices (WDs). However, when WDs are located at the cell edge or the communication links between base stations (BSs) and WDs are blocked, the offloading latency becomes large. To address this issue, we propose an intelligent reflecting surface (IRS)-assisted cell-free MEC system consisting of multiple BSs and IRSs that improve the transmission environment. We formulate a min–max latency optimization problem that jointly designs the multiuser detection (MUD) matrices, the IRSs’ reflecting beamforming vectors, the WDs’ offloading data sizes, and the edge computing resources, subject to constraints on edge computing capability and IRS phase shifts. To solve it, we propose an alternating optimization algorithm based on the block coordinate descent (BCD) technique, in which the original nonconvex problem is decoupled into two subproblems that alternately optimize the computing and communication parameters. In particular, we optimize the MUD matrix via second-order cone programming (SOCP), and then develop two efficient algorithms to optimize the IRSs’ reflecting vectors based on semidefinite relaxation (SDR) and successive convex approximation (SCA), respectively. Numerical results show that employing IRSs in cell-free MEC systems outperforms conventional MEC systems, attaining up to about 60% latency reduction. Moreover, numerical results confirm that the proposed algorithms enjoy fast convergence, which is beneficial for practical implementation.
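A generic block coordinate descent skeleton in the spirit of the proposed alternating optimization: each block of variables is updated with the others held fixed until the objective stops improving. The toy quadratic objective and closed-form block updates stand in for the paper's SOCP/SDR/SCA subproblems.

```python
# Generic BCD loop: cycle through the variable blocks, solving each
# subproblem with the other blocks fixed, until convergence.
import numpy as np

def bcd(f, update_blocks, x0, max_iter=50, tol=1e-6):
    x, prev, val = list(x0), np.inf, None
    for _ in range(max_iter):
        for i, update in enumerate(update_blocks):
            x[i] = update(x)          # solve subproblem i, others fixed
        val = f(x)
        if abs(prev - val) < tol:
            break
        prev = val
    return x, val

# Toy problem: minimize (u - 3)^2 + (v + 1)^2 + u*v over two scalar blocks.
f = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2 + x[0] * x[1]
upd_u = lambda x: (6 - x[1]) / 2      # argmin over u with v fixed
upd_v = lambda x: (-2 - x[0]) / 2     # argmin over v with u fixed
x, val = bcd(f, [upd_u, upd_v], [0.0, 0.0])
print(x, val)
```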
Due to the lack of channel reciprocity in frequency division duplex (FDD) massive multiple-input multiple-output (MIMO) systems, it is impossible to infer the downlink channel state information (CSI) directly from the reciprocal uplink CSI. Hence, the estimated downlink CSI must be continuously fed back to the base station (BS) from the user equipment (UE), consuming valuable bandwidth resources; in massive MIMO, this is exacerbated as the number of antennas at the BS grows. This paper proposes a fully convolutional neural network (FullyConv) to compress and decompress the downlink CSI. FullyConv improves the reconstruction accuracy of the downlink CSI while reducing the training parameters and computational resources required. Besides, we add a quantization module to the encoder and a dequantization module to the decoder of FullyConv to simulate a real feedback scenario. Experimental results demonstrate that the proposed FullyConv outperforms the baseline in reconstruction performance while reducing storage and computational overhead. Furthermore, FullyConv with the added quantization and dequantization modules is robust to the quantization error of real feedback scenarios.
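A minimal sketch of a fully convolutional encoder/decoder with a uniform quantization step in between, in the spirit of the described FullyConv; the layer sizes, compression ratio, and 4-bit quantizer (with a straight-through gradient) are assumptions, not the paper's architecture.

```python
# Fully convolutional CSI autoencoder with uniform quantization.
import torch
import torch.nn as nn

class FullyConvAE(nn.Module):
    def __init__(self, bits=4):
        super().__init__()
        self.levels = 2 ** bits
        # Encoder: stride-2 convolutions compress the 2-channel CSI "image".
        self.enc = nn.Sequential(
            nn.Conv2d(2, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 2, 3, stride=2, padding=1), nn.Sigmoid(),
        )
        # Decoder: transposed convolutions reconstruct the CSI.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(2, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 2, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.enc(x)
        # Uniform quantization; straight-through estimator keeps gradients.
        zq = torch.round(z * (self.levels - 1)) / (self.levels - 1)
        z = z + (zq - z).detach()
        return self.dec(z)

csi = torch.randn(1, 2, 32, 32)       # real/imag parts as two channels
print(FullyConvAE()(csi).shape)       # torch.Size([1, 2, 32, 32])
```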
Radio frequency fingerprint (RFF) identification based on deep learning has the potential to enhance the security of wireless networks. Recently, several RFF datasets were proposed to satisfy the requirements for large-scale datasets. However, most of these datasets were collected from 2.4 GHz WiFi devices through similar channel environments, and they only provide receive-side data collected by specific equipment. This paper utilizes a software radio peripheral as the dataset-generation platform, so users can customize dataset parameters such as frequency band, modulation mode, and antenna gain. In addition, the proposed dataset is generated through various complex channel environments, aiming to better characterize radio frequency signals in the real world. We collect the dataset at both transmitters and receivers to simulate a real-world RFF dataset based on long-term evolution (LTE). Furthermore, we verify the dataset and confirm its reliability. The dataset and reproducible code of this paper can be downloaded from GitHub: https://github.com/njuptzsp/XSRPdataset.
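As a toy illustration of generating data "through various and complex channel environments", the sketch below passes the same transmitted IQ samples through random multipath channels plus AWGN; the tap count and SNR are arbitrary, and the actual generation code lives in the linked repository.

```python
# Emulate varied channel environments for one transmitted waveform.
import numpy as np

rng = np.random.default_rng(0)

def random_channel(iq, n_taps=4, snr_db=15):
    # Random complex multipath taps, normalized in expectation.
    taps = (rng.normal(size=n_taps) + 1j * rng.normal(size=n_taps)) / np.sqrt(2 * n_taps)
    y = np.convolve(iq, taps, mode="same")          # multipath fading
    p_sig = np.mean(np.abs(y) ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)
    noise = np.sqrt(p_noise / 2) * (rng.normal(size=y.shape) + 1j * rng.normal(size=y.shape))
    return y + noise

tx = np.exp(1j * 2 * np.pi * 0.05 * np.arange(1024))   # stand-in waveform
rx_variants = [random_channel(tx) for _ in range(5)]   # five channel realizations
print(len(rx_variants), rx_variants[0].shape)
```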
Millimeter wave (mmWave) communication has developed rapidly thanks to its advantages of high speed, large bandwidth, and ultra-low delay. However, mmWave communication systems suffer from fast fading and frequent blocking, so the ideal communication environment for mmWave is a line-of-sight (LOS) channel. To improve the efficiency and capacity of mmWave systems, and to better build the Internet of Everything (IoE) service network, this paper focuses on channel identification in LOS and non-line-of-sight (NLOS) environments. Considering the limited computing ability of user equipment (UE), this paper proposes a novel channel identification architecture based on eigen features, i.e., the eigenmatrix and eigenvector (EMEV) of channel state information (CSI). Furthermore, this paper explores identification of the clustered delay line (CDL) channels defined by the 3rd Generation Partnership Project (3GPP) for mmWave. Experimental results show that the EMEV-based scheme achieves an identification accuracy of 99.88% with perfect CSI. In the robustness test, the scheme tolerates noise down to SNR = 16 dB under the accuracy threshold acc ≥ 95%. Moreover, the EMEV-based architecture reduces the comprehensive overhead by about 90%.
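One plausible reading of the EMEV feature (the abstract does not give the exact construction): eigendecompose the antenna-domain covariance of a CSI snapshot and use the leading eigenvalues and eigenvector magnitudes as classifier inputs. The matrix sizes are illustrative.

```python
# Eigen-feature extraction from a CSI snapshot for LOS/NLOS classification.
import numpy as np

rng = np.random.default_rng(0)

def emev_features(csi, k=2):
    """csi: (n_subcarriers, n_antennas) complex CSI snapshot."""
    r = csi.conj().T @ csi / csi.shape[0]       # antenna-domain covariance
    w, v = np.linalg.eigh(r)                    # ascending real eigenvalues
    idx = np.argsort(w)[::-1][:k]               # top-k indices
    # Feature vector: top-k eigenvalues plus their eigenvector magnitudes.
    return np.concatenate([w[idx], np.abs(v[:, idx]).ravel()])

csi = rng.normal(size=(64, 8)) + 1j * rng.normal(size=(64, 8))
print(emev_features(csi).shape)                 # (18,) for 8 antennas, k=2
```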
We propose reliability and latency quantities as metrics for the routing tree optimization procedure in Wi-Fi mesh networks. In contrast to state-of-the-art routing optimization methods, our proposal directly optimizes the data rates of individual mesh links according to the underlying channel conditions so that reliability and latency requirements are satisfied over entire mesh paths. Moreover, to mitigate the channel contention problem that is common in Wi-Fi networks, we propose a multichannel (MC) assignment method in which bandwidth is allocated to individual mesh nodes based on the traffic load they are expected to handle; once the bandwidth for each node is determined, specific channels are assigned so as to avoid co-channel interference. Furthermore, considerable effort was spent developing a system-level simulator that captures the features of the physical (PHY) layer and medium access layer defined in the IEEE 802.11 (Wi-Fi) standard. Using this simulator, we show that Wi-Fi mesh networks using the proposed reliability- and latency-based routing metric significantly outperform the state of the art, and that mitigating channel contention through the proposed MC assignment method yields further substantial performance gains.
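A sketch of how per-link reliability and latency might be folded into a single routing metric: reliability enters through a negative logarithm so it multiplies along a path while latency adds, and the best path is then found with Dijkstra's algorithm. The weighting factor alpha and the toy topology are assumptions, not the paper's formulation.

```python
# Combined latency/reliability routing metric with Dijkstra path search.
import heapq
import math

def best_path(graph, src, dst, alpha=1.0):
    """graph[u] = list of (v, latency_ms, reliability in (0, 1])."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, lat, rel in graph[u]:
            # Additive cost: latency plus -log(reliability) penalty.
            cost = d + lat + alpha * (-math.log(rel))
            if cost < dist.get(v, math.inf):
                dist[v], prev[v] = cost, u
                heapq.heappush(heap, (cost, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

mesh = {
    "gw":  [("a", 2.0, 0.99), ("b", 1.0, 0.90)],
    "a":   [("sta", 2.0, 0.99)],
    "b":   [("sta", 1.0, 0.85)],
    "sta": [],
}
print(best_path(mesh, "gw", "sta"))
```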
Automatic modulation classification (AMC) is a promising technology for identifying modulation types, and deep learning (DL)-based AMC is one of its main research directions. Conventional DL-based AMC methods are centralized solutions (i.e., CentAMC) that are trained on abundant data collected from local clients and stored at the server; they generally achieve advanced performance, but their major problem is the risk of data leakage. Besides, if a DL-based AMC model is trained only on the data of its own client, it may exhibit weak performance. Thus, a federated learning (FL)-based AMC (FedeAMC) is proposed under conditions of class imbalance and varying noise. Its advantage is a low risk of data leakage without severe performance loss, because the data and training remain at each local client, and only knowledge (i.e., gradients or model weights), rather than data, is shared with the server. In addition, each local client generally faces a class imbalance problem, and balanced cross entropy is introduced as the loss function to solve it. Simulation results demonstrate that the average accuracy gap between FedeAMC and CentAMC is less than 2%.
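One common form of balanced cross entropy weights each class by the inverse of its sample count so that minority classes contribute more to the loss; whether this matches the paper's exact weighting is an assumption.

```python
# Inverse-frequency weighted (balanced) cross entropy.
import numpy as np

def balanced_cross_entropy(probs, labels, class_counts):
    """probs: (n, C) predicted probabilities; labels: (n,) int classes."""
    weights = 1.0 / np.asarray(class_counts, dtype=float)
    weights /= weights.sum() / len(class_counts)    # normalize to mean 1
    per_sample = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(weights[labels] * per_sample))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
labels = np.array([0, 1, 2])
# The rare class (10 samples) is weighted most heavily.
print(balanced_cross_entropy(probs, labels, class_counts=[100, 50, 10]))
```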
Due to the implementation and performance limitations of the centralized learning automatic modulation classification (CentAMC) method, this paper proposes a decentralized learning AMC (DecentAMC) method using model consolidation and a lightweight design. Specifically, model consolidation is realized by a central device (CD) performing edge device (ED) model averaging (MA) while multiple EDs perform model training. The lightweight design uses a separable convolutional neural network (S-CNN), in which separable convolutional layers replace the standard convolutional layers and most of the fully connected layers are cut off. Simulation results show that the proposed method substantially reduces the storage and computational capacity requirements of the EDs as well as the communication overhead, and the training efficiency also shows remarkable improvement. Compared with a convolutional neural network (CNN), the space complexity (i.e., model parameters and output feature maps) of S-CNN is decreased by about 94% and its time complexity (i.e., floating point operations) by about 96%, while the average correct classification probability degrades by less than 1%. Compared with S-CNN-based CentAMC, without considering model weight uploading and downloading, the training efficiency of our proposed method is about $N$ times higher, where $N$ is the number of EDs. Considering model weight uploading and downloading, the training efficiency of our proposed method can still be maintained at a high level (e.g., when the number of EDs is 12, the training efficiency of the proposed AMC method is about 4 times that of S-CNN-based CentAMC on dataset $D_{1} = \{\mathrm{2FSK, 4FSK, 8FSK, BPSK, QPSK, 8PSK, 16QAM}\}$ and about 5 times that of S-CNN-based CentAMC on dataset $D_{2} = \{\mathrm{2FSK, 4FSK, 8FSK, BPSK, QPSK, 8PSK, PAM2, PAM4, PAM8, 16QAM}\}$), while the communication overhead is reduced by more than 35%.
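A minimal sketch of the model averaging (MA) step: the central device replaces each weight tensor with the element-wise mean of the edge devices' copies (uniform FedAvg-style averaging is an assumption). The weight shapes are illustrative.

```python
# Central-device model averaging over edge-device weight lists.
import numpy as np

def model_average(ed_weights):
    """ed_weights: list (one per ED) of lists of numpy weight arrays."""
    # zip(*...) groups the i-th tensor of every ED together.
    return [np.mean(layer_group, axis=0) for layer_group in zip(*ed_weights)]

rng = np.random.default_rng(0)
n_eds = 3
# Each ED holds the same two-tensor model (kernel + bias) after local training.
eds = [[rng.normal(size=(5, 4)), rng.normal(size=4)] for _ in range(n_eds)]
avg = model_average(eds)
print([w.shape for w in avg])       # [(5, 4), (4,)]
```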
Radio frequency fingerprint (RFF) identification is a popular topic in the field of physical layer security. However, machine learning-based RFF identification methods require complicated manual feature extraction, while deep learning-based methods struggle to achieve robust identification performance. To solve these problems, we propose a novel RFF identification method based on the heat constellation trace figure (HCTF) and slice integration cooperation (SIC). HCTF is utilized to avoid manual feature extraction, and SIC is adopted to automatically extract more features from RF signals. Experimental results show that the proposed HCTF-SIC identification method achieves higher accuracy than existing RFF methods: the identification accuracy reaches 91.07% at SNR = 0 dB and exceeds 99.64% when SNR ≥ 5 dB.
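A plausible construction of a heat constellation trace figure (the abstract does not spell it out): bin the received IQ samples into a 2-D histogram so the constellation trace becomes a heat-map image that a CNN can classify. The bin count, axis limits, and QPSK test signal are illustrative.

```python
# Turn received IQ samples into a normalized constellation heat map.
import numpy as np

def hctf(iq, bins=64, lim=1.5):
    img, _, _ = np.histogram2d(iq.real, iq.imag,
                               bins=bins, range=[[-lim, lim], [-lim, lim]])
    return img / img.max()              # normalize to [0, 1]

rng = np.random.default_rng(0)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=4096) / np.sqrt(2)
rx = symbols + 0.1 * (rng.normal(size=4096) + 1j * rng.normal(size=4096))
print(hctf(rx).shape)                   # (64, 64) image for a classifier
```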
Energy efficiency (EE) is a critical metric in wireless optimization. Power control over fading channels is considered a promising EE-improving technique, but it requires solving a series of fractional functional optimization problems that are hard to handle with existing optimization techniques. In this paper, we propose a novel EE power control method based on unsupervised learning. First, the original fractional problems are decomposed into subproblems via the Dinkelbach and quadratic transformations. Then, these subproblems are reformulated into unconstrained forms through the Lagrange dual formulation. Furthermore, an unsupervised primal-dual learning method is applied to handle these unconstrained problems under strong duality. Finally, the unsupervised primal-dual learning is implemented by a deep neural network (DNN) with low computational complexity. Simulation results verify the effectiveness of the proposed approach in a number of typical wireless optimization scenarios: compared with conventional algorithms, our method achieves better performance in cognitive radio networks, interference networks, and OFDM networks.
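A toy Dinkelbach iteration for a scalar energy-efficiency problem, maximize log2(1 + p*g)/(p + Pc) over 0 ≤ p ≤ Pmax, showing the fractional-to-parametric decomposition the paper builds on; the DNN-based primal-dual learning itself is not reproduced here, and all constants are arbitrary.

```python
# Dinkelbach's method: repeatedly maximize f(p) - lam*(p + Pc) and
# update lam to the resulting EE ratio until it stabilizes.
import numpy as np

g, Pc, Pmax = 2.0, 0.5, 4.0
p_grid = np.linspace(0.0, Pmax, 4001)
rate = np.log2(1.0 + p_grid * g)            # f(p), the achievable rate

lam = 0.0
for _ in range(20):
    p = p_grid[np.argmax(rate - lam * (p_grid + Pc))]
    lam_new = np.log2(1.0 + p * g) / (p + Pc)    # updated EE ratio
    if abs(lam_new - lam) < 1e-9:
        break
    lam = lam_new

print(f"optimal power {p:.3f}, energy efficiency {lam:.4f}")
```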
Large intelligent surface-based transceivers (LISBTs), in which a spatially continuous surface is used for signal transmission and reception, have emerged as a promising solution for improving the coverage and data rate of wireless communication systems. To realize these objectives, the acquisition of accurate channel state information (CSI) in LISBT-assisted wireless communication systems is crucial. In this paper, we propose a channel estimation scheme based on a parametric physical channel model for line-of-sight-dominated communication in the millimeter and terahertz wave bands. The proposed scheme requires only five pilot signals to perfectly estimate the channel parameters when there is no noise at the receiver. In the presence of noise, we propose an iterative estimation algorithm that decreases the channel estimation error caused by noise. The training overhead and computational cost of the proposed scheme do not scale with the number of antennas. Simulation results demonstrate that the proposed scheme significantly outperforms other benchmark schemes.
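A toy illustration (not the paper's algorithm) of iterative parametric estimation from few pilots: fit a single line-of-sight model y_k = a·exp(j2πfk) to five noisy pilot observations, refining the frequency estimate on successively finer grids to shrink the noise-induced error.

```python
# Iterative refinement of a parametric single-path channel fit.
import numpy as np

rng = np.random.default_rng(0)
K = 5                                      # five pilot signals
a_true, f_true = 0.8 * np.exp(1j * 0.3), 0.12
k = np.arange(K)
y = a_true * np.exp(1j * 2 * np.pi * f_true * k)
y += 0.02 * (rng.normal(size=K) + 1j * rng.normal(size=K))   # receiver noise

f_hat, span = 0.25, 0.25                   # start mid-band, shrink each pass
for _ in range(10):
    grid = np.linspace(f_hat - span, f_hat + span, 101)
    steer = np.exp(1j * 2 * np.pi * np.outer(grid, k))
    f_hat = grid[np.argmax(np.abs(steer.conj() @ y))]   # best correlation
    span /= 10                             # refine the search window
a_hat = np.mean(y * np.exp(-1j * 2 * np.pi * f_hat * k))    # LS complex gain
print(f"f_hat={f_hat:.5f} (true {f_true}), |a_hat|={abs(a_hat):.3f}")
```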