Holographic massive multiple-input multiple-output (MIMO), in which a spatially continuous surface is used for signal transmission and reception, has emerged as a promising solution for improving the coverage and data rate of wireless communication systems. To realize these objectives, the acquisition of accurate channel state information in holographic massive MIMO systems is crucial. This paper proposes a channel estimation scheme based on a parametric physical channel model for line-of-sight (LoS)-dominated communication in the millimeter-wave and terahertz bands. The proposed scheme exploits the specific structure of the radiated beams generated by the continuous surface to estimate the channel parameters of the LoS-dominated channel model. Since the number of unknown channel parameters is fixed regardless of the number of antennas, the training overhead of the proposed scheme does not scale with the number of antennas. Simulation results demonstrate that the proposed scheme significantly outperforms benchmark schemes in poor scattering environments.
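To illustrate why the parameter count stays fixed, a generic near-field LoS model (illustrative notation, not necessarily the exact parametrization used in the paper) writes the channel to the n-th antenna element at position p_n from a user at r_u as

```latex
% Generic free-space LoS channel per element (illustrative notation)
h_n \;=\; g \,\frac{\lambda}{4\pi d_n}\, e^{-j\frac{2\pi}{\lambda} d_n},
\qquad d_n = \lVert \mathbf{r}_{\mathrm{u}} - \mathbf{p}_n \rVert ,
```

so the entire N-dimensional channel is determined by the user position r_u and a single complex gain g, and the number of unknowns does not grow with the number of antennas N.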
This paper proposes an unmanned aerial vehicle (UAV)-aided full-duplex non-orthogonal multiple access (FD-NOMA) method to improve spectrum efficiency. A UAV is utilized to partially relay uplink data and to create channel differentiation, and a successive interference cancellation algorithm is used to eliminate the interference arriving from different directions in FD-NOMA systems. Firstly, a joint optimization problem is formulated for the uplink and downlink resource allocation of the transceivers and the UAV relay. Receiver determination is performed using an access-priority method, and based on its results the initial transmit power of the ground users (GUs), the UAV, and the base station is calculated. The Hungarian algorithm is then utilized to pair the users so as to minimize the sum of the uplink transmission power. Secondly, the subchannels are assigned to the paired GUs and the UAV by a message-passing algorithm. Finally, the transmit power of the GUs and the UAV is jointly fine-tuned using the proposed access control methods. Simulation results confirm that the proposed method outperforms the state-of-the-art orthogonal frequency division multiple access method in terms of spectrum efficiency, energy efficiency, and access ratio of the ground users.
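A minimal sketch of Hungarian-algorithm-based user pairing, assuming the GUs are split into two groups and entry cost[i, j] is the sum uplink transmit power if group-A user i is paired with group-B user j (the paper's exact cost definition may differ):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
num_pairs = 4
# Hypothetical sum-power cost matrix (watts) for every candidate pairing.
cost = rng.uniform(0.1, 1.0, size=(num_pairs, num_pairs))

row_idx, col_idx = linear_sum_assignment(cost)   # Hungarian method: minimizes total cost
for i, j in zip(row_idx, col_idx):
    print(f"pair group-A GU {i} with group-B GU {j}, power {cost[i, j]:.3f} W")
print("minimum total uplink power:", cost[row_idx, col_idx].sum())
```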
The development of the Internet of Things (IoT) and smart cities, combined with the widespread use of cooperative or independent air traffic surveillance systems such as automatic dependent surveillance-broadcast (ADS-B), brings about novel deployment paradigms in air-ground integrated vehicular networks (AGVN). However, compared with the steady advances at the physical layer, communication protocols such as TCP/IP, which sit at the top of the protocol stack, have developed relatively slowly because of their fixed frameworks. The most obvious manifestation of this is that these protocols can hardly expose new interfaces to exploit the benefits brought by lower-layer upgrades. In view of these problems, this paper proposes a novel handover strategy based on ADS-B side information for AGVN. Firstly, a practical scheme combining the TCP/IP protocol with ADS-B, implemented in Network Simulator 3 (ns-3), is proposed to support AGVN handover tasks. Secondly, the configuration, timing sequence, parameters, and handover strategies of the scheme are described in detail together with the ns-3 modules they invoke. Finally, experimental results are provided to validate the handover strategies.
In this paper, a channel state information (CSI) feedback method based on deep transfer learning (DTL) is proposed. The method addresses the high training cost of the downlink CSI feedback network in frequency division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. In particular, we obtain models for different wireless channel environments at low training cost by fine-tuning a pre-trained model with a relatively small number of samples, and we discuss how fine-tuning different layers affects training cost and model performance. Furthermore, a model-agnostic meta-learning (MAML)-based method is proposed to address the large number of samples from a single wireless channel environment otherwise required to train a deep neural network (DNN) as the pre-trained model. Our results show that the performance of the DTL-based method is comparable to that of a DNN trained with a large number of samples, which demonstrates the effectiveness and superiority of the proposed method. Although the MAML-based method incurs some performance loss compared with the DTL-based method, it still shows good performance in terms of the normalized mean square error (NMSE).
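A minimal sketch of the fine-tuning idea, assuming a hypothetical fully connected CSI autoencoder (the paper's actual feedback network may differ): all layers are frozen except the last decoder layer, which is retrained with a small dataset from the new environment.

```python
import torch
import torch.nn as nn

class CsiAutoencoder(nn.Module):
    def __init__(self, dim=2048, code=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, code))
        self.decoder = nn.Sequential(nn.Linear(code, 512), nn.ReLU(),
                                     nn.Linear(512, dim))
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = CsiAutoencoder()
# model.load_state_dict(torch.load("pretrained_scenario_A.pt"))  # hypothetical pre-trained weights

# Freeze everything except the last decoder layer, then fine-tune with few samples.
for p in model.parameters():
    p.requires_grad = False
for p in model.decoder[-1].parameters():
    p.requires_grad = True

opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)
few_shot_csi = torch.randn(64, 2048)          # small dataset from the new environment
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(few_shot_csi), few_shot_csi)
    loss.backward()
    opt.step()
```

Which layers are unfrozen is exactly the design choice whose effect on training cost and performance the paper discusses.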
Malware traffic classification (MTC) is a key technology for anomaly detection and intrusion detection and hence plays an important role in network security. Traditional MTC methods based on ports, payloads, and statistics rely on manually designed features and achieve low accuracy. Recently, deep learning methods have attracted significant attention due to their high classification accuracy. In practical scenarios, however, deep learning methods require a large number of labeled samples for training, whereas labeled samples are scarce and labeling a large amount of traffic is labor-intensive. To solve these problems, this paper proposes two methods based on semi-supervised learning (SSL) and transfer learning (TL), respectively. The proposed methods exploit the large amount of unlabeled data that can be collected from Internet traffic, which greatly improves classification accuracy when only a few labeled samples are available. Through experiments, we identify the best method for improving accuracy with few labeled samples in different situations. The experimental results show that the proposed methods satisfy the requirements of MTC in the case of few labeled samples.
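A sketch of the semi-supervised idea via self-training (pseudo-labeling of confidently classified unlabeled flows); the flow features and base classifier here are placeholders, not the paper's model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(50, 20))          # few labeled flow-feature vectors
y_labeled = rng.integers(0, 2, size=50)        # 0 = benign, 1 = malware
X_unlabeled = rng.normal(size=(5000, 20))      # abundant unlabeled traffic

X = np.vstack([X_labeled, X_unlabeled])
y = np.concatenate([y_labeled, -np.ones(len(X_unlabeled), dtype=int)])  # -1 = unlabeled

clf = SelfTrainingClassifier(RandomForestClassifier(n_estimators=100), threshold=0.9)
clf.fit(X, y)                                   # iteratively pseudo-labels confident samples
print(clf.predict(rng.normal(size=(3, 20))))
```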
Intelligent reflecting surface (IRS)-aided millimeter-wave (mmWave) multiple-input single-output (MISO) is considered one of the promising techniques for next-generation wireless communication. However, existing beamforming methods for IRS-aided mmWave MISO systems require high computational power, so they cannot be widely deployed. In this paper, we combine an unsupervised learning-based fast beamforming method with IRS-aided MISO systems to significantly reduce the computational complexity of the system. Specifically, a new beamforming design method is proposed by adopting feature fusion in unsupervised learning. By designing a specific loss function, beamforming vectors that improve spectral efficiency can be obtained with lower complexity than existing algorithms. Simulation results show that the proposed beamforming method effectively reduces computational complexity while achieving relatively good performance.
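A sketch of what such an unsupervised loss can look like: the network's beamformer output is normalized to the power budget and trained to maximize spectral efficiency, i.e., to minimize its negative. The channel model and dimensions are illustrative only, not the paper's loss.

```python
import torch

def negative_rate_loss(w, h_eff, noise_power=1.0, p_max=1.0):
    # w:     (batch, Nt) complex beamformer predicted by the network
    # h_eff: (batch, Nt) complex effective BS-(IRS)-user channel
    w = w / w.norm(dim=-1, keepdim=True) * (p_max ** 0.5)   # enforce power budget
    gain = (h_eff.conj() * w).sum(dim=-1).abs() ** 2        # |h^H w|^2
    rate = torch.log2(1.0 + gain / noise_power)             # spectral efficiency
    return -rate.mean()

batch, nt = 32, 16
h = torch.randn(batch, nt, dtype=torch.cfloat)
w = torch.randn(batch, nt, dtype=torch.cfloat, requires_grad=True)
loss = negative_rate_loss(w, h)
loss.backward()   # gradients flow back into the beamforming network
```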
In this paper, we propose Federated Deep Learning (FDL) for intrusion detection in heterogeneous networks. Local Deep Neural Network (DNN) models are used to learn the hierarchical representations of the private network traffic data in multiple edge nodes. A dedicated central server receives the parameters of the local DNN models from the edge nodes, and it aggregates them to produce an FDL model using the Fed+ fusion algorithm. Simulation results show that the FDL model achieved an accuracy of 99.27 ± 0.79%, a precision of 97.03 ± 4.22%, a recall of 98.06 ± 1.72%, an F1 score of 97.50 ± 2.55%, and a False Positive Rate (FPR) of 2.40 ± 2.47%. The classification performance and the generalisation ability of the FDL model are better than those of the local DNN models. The Fed+ algorithm outperformed two state-of-the-art fusion algorithms, namely federated averaging (FedAvg) and Coordinate Median (CM). Therefore, the DNN-Fed+ model is preferable for intrusion detection in heterogeneous wireless networks.
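Fed+ itself is not reproduced here; shown instead, as a hedged sketch, is the standard federated averaging (FedAvg) baseline the paper compares against, i.e., a sample-size-weighted average of the local DNN weights at the central server.

```python
import numpy as np

def fedavg(local_weights, local_sample_counts):
    """local_weights: list of dicts {layer_name: np.ndarray} from edge nodes."""
    total = sum(local_sample_counts)
    global_weights = {}
    for name in local_weights[0]:
        global_weights[name] = sum(
            (n / total) * w[name] for w, n in zip(local_weights, local_sample_counts)
        )
    return global_weights

# Toy example with two edge nodes and one layer.
w1 = {"fc1": np.ones((3, 3))}
w2 = {"fc1": 3 * np.ones((3, 3))}
print(fedavg([w1, w2], [100, 300])["fc1"])   # weighted toward the larger node
```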
The heterogeneous network (HetNet) is a promising architecture for beyond-5G systems, improving network capacity and spectral efficiency by deploying low-power small cells within macro networks. Appropriate resource allocation schemes are therefore needed to balance data rates against the cross-tier interference among different cells. To improve system robustness and reduce the harmful interference to macrocell users (MUs), this paper maximizes the total interference efficiency (IE), i.e., the ratio of the sum rate of the femtocell users (FUs) to the total cross-tier interference from the femtocell base stations (FBSs) to the MUs, with imperfect channel state information in a two-tier orthogonal frequency division multiple access based HetNet by jointly optimizing the transmit power of the FBS and the subcarrier allocation factor. The outage rate constraint of each FU, the outage interference constraint of each MU, the maximum transmit power constraint of the FBS, and the subcarrier assignment constraint are considered simultaneously. The resulting problem is a mixed-integer non-linear program and is thus NP-hard. The original problem with uncertain parameters is transformed into a deterministic convex one, which is solved using the quadratic transform, variable relaxation, and Lagrange dual theory. A feasible-region analysis and a robust sensitivity analysis are provided. Simulation results demonstrate that the proposed scheme causes lower interference to the MUs and achieves higher IE than traditional schemes.
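For reference, the objective described above can be written in illustrative notation (R_k the rate of FU k, p_b the transmit power of FBS b, and h_{b,m} the channel from FBS b to MU m; the paper's own notation may differ) as

```latex
% Interference efficiency (IE): FU sum rate over total cross-tier interference
\mathrm{IE} \;=\;
\frac{\displaystyle\sum_{k \in \mathcal{K}_{\mathrm{FU}}} R_k}
     {\displaystyle\sum_{m \in \mathcal{M}_{\mathrm{MU}}} I_m},
\qquad
I_m = \sum_{b \in \mathcal{B}_{\mathrm{FBS}}} p_b \, |h_{b,m}|^2 .
```

Ratio objectives of this form are exactly what the quadratic (fractional programming) transform is designed to handle.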
Accurate downlink channel state information (CSI) is one of the essential requirements for harnessing the potential of frequency-division duplexing (FDD) massive multi-input multi-output (MIMO) systems. The current state of the art in this vibrant research area includes the use of deep learning to compress and feed back the downlink CSI at the user equipment (UE). These approaches focus mainly on achieving CSI feedback with high reconstruction performance and low complexity, but at the expense of an inflexible compression rate (CR). High training overheads and limited storage capacity are among the challenges associated with designing dynamic-CR schemes that instantaneously adapt to the propagation environment. This paper applies transfer learning (TL) to develop a multi-rate CSI compression and recovery neural network (TL-MRNet) with reduced training overheads. Simulation results validate the superiority of the proposed TL-MRNet over traditional methods in terms of normalized mean square error and cosine similarity.
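The two figures of merit named above, computed for a reconstructed CSI matrix, in a NumPy sketch (standard definitions from the CSI-feedback literature; here the cosine similarity is the simplified, flattened version rather than a per-subcarrier average):

```python
import numpy as np

def nmse(h_true, h_hat):
    return np.sum(np.abs(h_hat - h_true) ** 2) / np.sum(np.abs(h_true) ** 2)

def cosine_similarity(h_true, h_hat):
    num = np.abs(np.vdot(h_true, h_hat))               # |<h, h_hat>|, arrays flattened
    return num / (np.linalg.norm(h_true) * np.linalg.norm(h_hat))

h = (np.random.randn(32, 32) + 1j * np.random.randn(32, 32)) / np.sqrt(2)
h_noisy = h + 0.1 * (np.random.randn(32, 32) + 1j * np.random.randn(32, 32))
print("NMSE (dB):", 10 * np.log10(nmse(h, h_noisy)))
print("cosine similarity:", cosine_similarity(h, h_noisy))
```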
In this letter, we propose a fairness-aware rate maximization scheme for a wireless powered communication network (WPCN) assisted by an intelligent reflecting surface (IRS). The proposed scheme combines user scheduling based on time division multiple access (TDMA) with (mechanical) angular displacement of the IRS. Each energy harvesting user (EHU) has dedicated time slots with optimized durations for energy harvesting and information transmission, whereas the phase matrix of the IRS is adjusted to focus its beam on a particular EHU. The proposed scheme exploits the fundamental dependence of the IRS channel path loss on the angle between the IRS and the node's line of sight, which is often overlooked in the literature. Additionally, the network design can be optimized for a large number of IRS unit cells, which is not the case with the computationally intensive state-of-the-art schemes. In fact, the EHUs can achieve significant rates at practical distances of several tens of meters from the base station (BS) only if the number of IRS unit cells is at least a few thousand.
Deep learning is considered one of the most promising tools for developing intelligent wireless techniques in fifth-generation (5G) wireless communication systems. However, existing research is largely conducted on channel datasets from fourth-generation (4G) systems. Some non-standard 5G channel dataset generators have also been proposed for frontier technology research, but their datasets cannot be applied to real 5G new radio (NR) systems. In this letter, we propose a generalized channel dataset generator for 5G NR systems. Furthermore, this letter proposes a data sampling scheme called RB replacement, which improves the resolution of the dataset and greatly reduces its size. The dataset generator can set different channel parameters according to users' needs and can also generate massive multiple-input multiple-output (MIMO) channels. The data generator is open source and can be downloaded and used by researchers for free from GitHub: https://github.com/CodeDwan/5G-NR-data-generato.git.
The proliferation of wireless sensor networks (WSNs) and their applications has attracted remarkable growth in unsolicited intrusions and security threats, which disrupt the normal operation of WSNs. Deep learning (DL)-based network intrusion detection (NID) methods have been widely investigated and developed. However, the high computational complexity of DL seriously hinders the actual deployment of DL-based models, particularly on WSN devices, which lack powerful processing capability due to power limitations. In this letter, we propose a lightweight dynamic autoencoder network (LDAN) method for NID, which realizes efficient feature extraction through a lightweight structural design. Experimental results show that the proposed model achieves high accuracy and robustness while greatly reducing computational cost and model size.
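A minimal sketch of an autoencoder-style feature extractor for NID of the kind described above (layer sizes and the joint reconstruction/classification loss are placeholders; LDAN's actual lightweight blocks are not reproduced):

```python
import torch
import torch.nn as nn

class TinyAutoencoderNID(nn.Module):
    def __init__(self, num_features=41, bottleneck=8, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_features, 32), nn.ReLU(),
                                     nn.Linear(32, bottleneck), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 32), nn.ReLU(),
                                     nn.Linear(32, num_features))
        self.classifier = nn.Linear(bottleneck, num_classes)

    def forward(self, x):
        z = self.encoder(x)                 # compact traffic representation
        return self.decoder(z), self.classifier(z)

model = TinyAutoencoderNID()
x = torch.randn(16, 41)                     # e.g., 41 flow features per record
recon, logits = model(x)
loss = nn.functional.mse_loss(recon, x) + nn.functional.cross_entropy(
    logits, torch.randint(0, 2, (16,)))
print(sum(p.numel() for p in model.parameters()), "parameters")
```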
In frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems, deep learning for predicting the downlink channel state information (DL-CSI) has been extensively studied. However, in some small cellular base stations (SBSs), the amount of training data is insufficient to produce an accurate model for CSI prediction. Traditional centralized learning (CL) based methods bring all the data together for training, which leads to overwhelming communication overheads. In this work, we introduce a federated learning (FL) based framework for DL-CSI prediction, in which the global model is trained at the macro base station (MBS) by collecting local models from the edge SBSs. We propose a novel model aggregation algorithm, which updates the global model twice by considering the local model weights and the local gradients, respectively. Numerical simulations show that the proposed aggregation algorithm makes the global model converge faster and perform better than the compared schemes. The performance of the FL architecture is close to that of the CL-based method, while the transmission overheads are much lower.
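A hedged sketch of a two-step aggregation in the spirit described above, under the assumption that step 1 averages the local model weights and step 2 applies one extra update using the averaged local gradients; the paper's exact update rule may differ.

```python
import numpy as np

def two_step_aggregate(local_weights, local_grads, counts, lr=0.01):
    total = sum(counts)
    avg_w = sum((n / total) * w for w, n in zip(local_weights, counts))   # step 1: weights
    avg_g = sum((n / total) * g for g, n in zip(local_grads, counts))
    return avg_w - lr * avg_g                                             # step 2: gradients

w_locals = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
g_locals = [np.array([0.1, -0.2]), np.array([0.3, 0.0])]
print(two_step_aggregate(w_locals, g_locals, counts=[200, 600]))
```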
Designing advanced signal detectors for orthogonal frequency division multiplexing (OFDM) with index modulation (IM) poses many technical challenges. Traditional detection methods such as maximum likelihood have excessive complexity, and existing deep learning (DL) based detection methods can reduce this complexity significantly. To further improve detection performance, this paper proposes an intelligent signal detection method for OFDM-IM based on a complex deep neural network (C-DNN) and a complex convolutional neural network (C-CNN). The proposed detectors use pilots to achieve semi-blind channel estimation and reconstruct the transmitted symbols based on the estimated channel state information (CSI). Simulation results confirm the performance of the proposed signal detection method in terms of bit error rate and convergence speed.
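A sketch of the kind of complex-valued layer a C-DNN builds on: the complex product (a + jb)(c + jd) is realized with two real-valued linear layers (illustrative only; the paper's exact layers are not reproduced).

```python
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.re = nn.Linear(in_features, out_features)
        self.im = nn.Linear(in_features, out_features)

    def forward(self, x):                     # x: complex-valued input tensor
        xr, xi = x.real, x.imag
        out_r = self.re(xr) - self.im(xi)     # real part of the complex output
        out_i = self.re(xi) + self.im(xr)     # imaginary part of the complex output
        return torch.complex(out_r, out_i)

layer = ComplexLinear(64, 16)                 # e.g., 64 received samples -> 16 outputs
y = layer(torch.randn(8, 64, dtype=torch.cfloat))
print(y.shape, y.dtype)
```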
Specific emitter identification (SEI) is a promising technology for discriminating individual emitters and enhancing the security of various wireless communication systems. SEI is generally based on the radio frequency fingerprint (RFF) originating from imperfections of the emitter's hardware, which is difficult to forge. SEI is generally modeled as a classification task, and deep learning (DL), with its powerful classification capability, has been introduced into SEI for better identification performance. In recent years, a DL model known as the complex-valued neural network (CVNN) has been applied to SEI to directly process complex baseband signals and improve identification performance, but it also brings high model complexity and large model size, which hinders the deployment of SEI, especially in Internet-of-Things (IoT) scenarios. We therefore propose an efficient SEI method based on a CVNN and network compression: the former improves performance, while the latter reduces model complexity and size while maintaining satisfactory identification performance. Simulation results demonstrate that the proposed CVNN-based SEI method is superior to existing DL-based methods in both identification performance and convergence speed, and the identification accuracy of the CVNN reaches nearly 100% at high signal-to-noise ratios (SNRs). In addition, SlimCVNN has only 10%-30% of the model size of the basic CVNN, its computational complexity declines to different degrees at different SNRs, and there is almost no performance gap between SlimCVNN and the CVNN. These results demonstrate the feasibility and potential of CVNNs and model compression.
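To illustrate the network-compression side of such a pipeline, a sketch using simple magnitude pruning from torch.nn.utils.prune on a stand-in layer (SlimCVNN's actual slimming procedure is not reproduced; this only shows how a trained layer can be shrunk):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 128)                                # stand-in for one trained layer
prune.l1_unstructured(layer, name="weight", amount=0.7)    # zero out the 70% smallest weights
prune.remove(layer, "weight")                              # make the pruning permanent

sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity after pruning: {sparsity:.0%}")
```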