Articles in press have been peer-reviewed and accepted. They are not yet assigned to volumes/issues, but are citable by Digital Object Identifier (DOI).


doi: 10.11999/JEIT171211
Abstract:
Side-channel attacks pose a serious threat to the hardware security of the Advanced Encryption Standard (AES), so resisting them has become an urgent problem. Byte substitution is the only nonlinear operation in the AES algorithm, so improving its security is critical for the whole cipher. This paper proposes a countermeasure against side-channel attacks for AES based on random addition chains: the fixed addition chain used to realize multiplicative inversion in the finite field GF(2^8) is replaced with a random one. The impact of the random addition chain on the security and efficiency of the algorithm is studied. Experimental results show that the proposed random addition-chain algorithm is more secure and effective in defending against side-channel attacks than previous fixed addition-chain algorithms.
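As background for the addition-chain approach, the sketch below computes the multiplicative inverse in GF(2^8) as x^254 along one fixed addition chain (1-2-3-6-12-15-30-60-120-240-252-254). This is an illustrative baseline only, not the paper's masked implementation; the proposed countermeasure randomizes the chain at each execution.

```python
AES_POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1, the AES field polynomial


def gf_mul(a, b):
    """Carry-less multiplication in GF(2^8), reduced by AES_POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= AES_POLY
        b >>= 1
    return r


def gf_inv(x):
    """Invert x by raising it to the 254th power along a fixed addition
    chain; AES maps 0 to 0 by convention."""
    if x == 0:
        return 0
    x2 = gf_mul(x, x)          # x^2
    x3 = gf_mul(x2, x)         # x^3
    x6 = gf_mul(x3, x3)        # x^6
    x12 = gf_mul(x6, x6)       # x^12
    x15 = gf_mul(x12, x3)      # x^15
    x30 = gf_mul(x15, x15)     # x^30
    x60 = gf_mul(x30, x30)     # x^60
    x120 = gf_mul(x60, x60)    # x^120
    x240 = gf_mul(x120, x120)  # x^240
    x252 = gf_mul(x240, x12)   # x^252
    return gf_mul(x252, x2)    # x^254 = x^-1
```

Because 254 can be reached by many different addition chains, a random choice among valid chains changes the sequence of intermediate values, which is what decorrelates the power trace from the fixed operand schedule.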
doi: 10.11999/JEIT180484
Abstract:
Hardware Trojan detection has become a hot research topic in the field of chip security. Most existing detection algorithms target ASIC and FPGA circuits and rely on golden chips known to be free of hardware Trojans, so they adapt poorly to coarse-grained reconfigurable arrays built from large numbers of reconfigurable cells. Therefore, aiming at the structural characteristics of coarse-grained reconfigurable cryptographic logic arrays, a hardware Trojan detection algorithm based on partitioning and multiple-variant logic fingerprints is proposed. The algorithm divides the circuit into multiple regions, adopts a logic fingerprint as the identifier of each region, and detects and diagnoses hardware Trojans without a golden chip by comparing the multiple-variant logic fingerprints of the regions in both the space and time dimensions. Experimental results show that the proposed algorithm achieves a high detection success rate and a low misjudgment rate.
doi: 10.11999/JEIT180333
Abstract:
Device-free passive localization is a key problem in intruder detection, environmental monitoring, and intelligent transportation. Existing device-free passive localization methods can obtain multidimensional measurements from channel state information, but they cannot fully exploit the frequency diversity across multiple channels to improve localization performance. This paper proposes a Compressive Sensing (CS) based multi-target device-free passive localization algorithm using multidimensional measurements. It exploits the frequency diversity of the multidimensional measurements to improve the accuracy and robustness of the localization results under the CS framework. The dictionary is built according to the saddle-surface model, and the multi-target device-free passive localization problem is modeled as a joint sparse recovery problem based on multiple measurement vectors. The target location vector is estimated with a multiple sparse Bayesian learning algorithm. Simulation results indicate that the proposed algorithm makes full use of the multidimensional measurements to improve localization performance.
doi: 10.11999/JEIT180373
Abstract:
To address the loss of motion features during propagation through deep convolutional neural networks and the overfitting of the network model, a cross-layer fusion model and a multi-model voting action recognition method are proposed. In the preprocessing stage, the motion information in a video is gathered by rank pooling to form approximate dynamic images. Two basic models are presented: one with two horizontal flipping layers, called the "non-fusion model", and a second that adds a fusion structure between the second and fifth layers, called the "cross-layer fusion model". The two basic models are trained separately on three different data partitions. The positive and negative frame orders of each video generate two approximate dynamic images, so many different classifiers can be obtained by training the two proposed models on different training images. At test time, the final classification is obtained by averaging the outputs of all these classifiers. Compared with the dynamic image network model, the recognition rates of the non-fusion and cross-layer fusion models are greatly improved on the UCF101 dataset. The multi-model voting method effectively alleviates overfitting, increases the robustness of the algorithm, and yields better average performance.
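The voting step described above reduces to averaging per-class probabilities across classifiers; a minimal sketch (the list shapes and class indices are illustrative assumptions, not the paper's interface):

```python
def vote(prob_lists):
    """Average per-class probabilities from several classifiers and
    return the index of the winning class."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```

Averaging soft probabilities rather than hard labels lets a confident minority classifier outvote an uncertain majority, which is one reason ensemble voting dampens overfitting of any single model.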
doi: 10.11999/JEIT180323
Abstract:
To address the low performance of existing Modulated Wideband Converter (MWC) based sub-Nyquist sampling recovery algorithms, this paper proposes a support recovery algorithm based on the kernel space of the sampled values together with a random compression rank-reduction idea; combining them yields a high-performance sampling recovery algorithm. First, random compression transforms convert the sampling equation into several new multiple-measurement-vector problems without changing the sparsity of the unknown matrix. Then the orthogonality between the kernel space of the sampled values and the support vectors of the sampling matrix is used to obtain the joint sparse support set of the unknown. The final recovery is performed by pseudo-inversion. The proposed method is analyzed and verified in theory and experiment. Numerical experiments show that, compared with the traditional recovery algorithm, the proposal improves the recovery success rate and reduces the number of channels required for high-probability recovery; moreover, recovery performance generally improves as the number of compressions rises.
doi: 10.11999/JEIT180485
Abstract:
A multi-parameter convolutional neural network method is proposed for gesture recognition based on Frequency Modulated Continuous Wave (FMCW) radar. A multidimensional parameter dataset is constructed for gestures by performing time-frequency analysis of the radar signal to estimate the range, Doppler, and angle parameters of the gesture target. To realize accurate feature extraction and classification, an end-to-end Range-Doppler-Angle of Time (RDA-T) multidimensional parameter convolutional neural network scheme is further proposed, using a multi-branch network structure and high-dimensional feature fusion. The experimental results reveal that, by jointly learning the range, Doppler, and angle information of gestures, the proposed scheme resolves the low information content of single-dimensional gesture recognition methods and outperforms them in recognition accuracy by 5% to 8%.
doi: 10.11999/JEIT180464
Abstract:
Due to the limited processing capacity of an individual controller in large-scale complex Software Defined Networks (SDN), an efficient online algorithm for load balancing among controllers based on an efficiency range is proposed, to improve load balancing among controllers and reduce the propagation delay between a controller and its switches. In the initial static network, the initial set of controllers is selected by a greedy algorithm; then M improved Minimum Spanning Trees (MST) rooted at the initial controllers are constructed, determining M initial load-balanced subnets. As the load changes dynamically, to keep every controller working within its efficiency range at all times, switches in different subnets are reassigned by Breadth-First Search (BFS). In the algorithm's last step, the set of controllers is updated to minimize propagation delay. The algorithm relies on intra-domain and inter-domain connectivity. Simulation results show that the proposed algorithm not only guarantees load balancing among controllers but also keeps propagation delay low; compared with the PSA algorithm, the optimized K-Means algorithm, and others, it increases the Network Load Balancing Index (NLBI) by 40.65% on average.
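The BFS-based assignment of switches to controllers can be sketched as a multi-source breadth-first search over the topology; this is a simplified illustration under assumed data structures (the paper's reassignment additionally checks each controller's efficiency range before accepting a switch):

```python
from collections import deque


def assign_switches(adj, controllers):
    """Multi-source BFS: each switch joins the subnet of its nearest
    controller in hop count. adj maps node -> list of neighbors;
    returns {node: owning controller}."""
    owner = {c: c for c in controllers}
    q = deque(controllers)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in owner:           # first visit wins: nearest controller
                owner[v] = owner[u]
                q.append(v)
    return owner
```

For example, on a path 0-1-2-3-4 with controllers at nodes 0 and 4, switches 1 and 2 fall to controller 0 (ties broken by queue order) and switch 3 to controller 4.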
doi: 10.11999/JEIT180438
Abstract:
Multiband fusion imaging can effectively improve the range resolution of Inverse Synthetic Aperture Radar (ISAR) imaging. The traditional Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) spectral-estimation fusion algorithm uses only the complex measured data, without their conjugate data. This paper modifies the unitary ESPRIT method, which synthesizes the complex observations and their conjugates, to achieve unitary-ESPRIT-based multiband fusion ISAR imaging. The unitary ESPRIT method makes full use of the information in the complex observations, which benefits multiband spectrum estimation and ISAR imaging. Furthermore, for correcting the Migration Through Resolution Cell (MTRC) of scatterers in multiband fusion, the traditional processing flow is adjusted and optimized: range-cell migration correction and Doppler-cell migration correction are performed before and after the multiband fusion, respectively, which keeps the fast-time-frequency/slow-time coupling in the echo and the phase compensation from disturbing the spectrum fusion, so a better multiband fusion ISAR image is obtained. Simulation and real-data experiments show that the proposed methods not only produce high-quality ISAR images but also offer good anti-noise performance and higher computational efficiency.
doi: 10.11999/JEIT180474
Abstract:
Existing virtual network reconfiguration algorithms do not consider the fragmented resources generated in the physical network, so the performance gain they bring to online virtual network embedding algorithms is limited. To solve this problem, a definition of network resource fragmentation is given and a Fragment-Aware Secure Virtual Network Reconfiguration (FA-SVNR) algorithm is proposed. During reconfiguration, the set of virtual nodes to migrate is selected periodically by considering node fragmentation in the physical network, and the best migration scheme is chosen by considering both the reduction of physical network fragmentation and the reduction of virtual network embedding cost. Simulation results show that the proposed algorithm achieves a higher acceptance ratio and revenue-to-cost ratio than existing virtual network reconfiguration algorithms, especially in the revenue-to-cost metric.
doi: 10.11999/JEIT180420
Abstract:
China is prone to flood disasters, which occur frequently every year from July to August, so rapid detection and assessment of flood-affected areas is of great significance. As an active observation system, the GF-3 SAR satellite offers all-day, all-weather imaging, an obvious advantage in flood disaster reduction applications. For rapid water detection in flooded areas, a rapid flood-area detection method based on GF-3 single-polarized SAR data is proposed, comprising SAR preprocessing, flood extraction based on Markov random fields, and shadow false-alarm removal. Its detection accuracy is evaluated against manual detection results. The tests show that the method achieves rapid and accurate extraction of water in flood disaster areas.
doi: 10.11999/JEIT180399
Abstract:
Accurately estimating the rotor rotation frequency of an Unmanned Aerial Vehicle (UAV) is of great significance for UAV detection and recognition. For the UAV target echo model of Linear Frequency Modulated Continuous Wave (LFMCW) radar, this paper proposes an autocorrelation- and cepstrum-based method to estimate the rotor rotation frequency, derives the mapping relationship between the rotor rotation frequency and the periodic delay in the radar echo cepstrum, and estimates the rotor frequency of multi-rotor UAVs more effectively through weighted balancing, making up for the shortcomings of traditional methods. The effectiveness of the method is verified by simulations and real-scene experiments.
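A minimal sketch of the autocorrelation half of such a method: the lag of the autocorrelation peak (past the central lobe) gives the modulation period, and the sampling rate converts it to a frequency. The sampling rate, signal model, and peak-picking rule below are illustrative assumptions; the paper additionally uses the cepstrum and weighted balancing across rotors.

```python
import math


def autocorr_period(x):
    """Estimate the dominant period of x (in samples) as the lag of the
    autocorrelation maximum after the first non-positive lag."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    r = [sum(xc[i] * xc[i + k] for i in range(n - k)) for k in range(n // 2)]
    k = 1
    while k < len(r) and r[k] > 0:   # skip the central lobe around lag 0
        k += 1
    if k >= len(r):
        return None
    return max(range(k, len(r)), key=r.__getitem__)


# toy micro-Doppler envelope: 50 Hz modulation sampled at an assumed 1 kHz
fs = 1000.0
sig = [math.sin(2 * math.pi * 50 * i / fs) for i in range(400)]
period = autocorr_period(sig)        # 20 samples
rotor_hz = fs / period               # 50 Hz
```

Skipping lags until the autocorrelation first dips non-positive avoids the trivial maximum near lag 0, where neighboring samples of any smooth signal are strongly correlated.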
doi: 10.11999/JEIT180440
Abstract:
Traditional feature-based image matching suffers from redundant points and low matching accuracy, and can hardly meet real-time and robustness requirements. To this end, a fast scene matching method based on the Scale Invariant Feature Transform (SIFT) is proposed. In the feature detection phase, FAST (Features from Accelerated Segment Test) detects features at multiple scales, and Difference of Gaussians (DoG) operators then filter the candidates again, simplifying the feature search. In the feature matching phase, an affine transformation model simulates the transformation relation and establishes a geometric constraint, overcoming mismatches caused by ignoring geometric information. Experimental results show that the proposed method is superior to SIFT in efficiency and precision, is robust to illumination, blur, and scale changes, and achieves better scene matching.
doi: 10.11999/JEIT180292
Abstract:
Honeypot technology is a network trap in cyber defense. It can attract and deceive attackers and record their behavior, so as to study the adversary's targets and attack methods and protect real service resources. However, because traditional honeypots are statically configured and deployed at fixed locations, intruders can easily identify and evade them, rendering them ineffective. How to improve the dynamics and camouflage of honeypots has therefore become a key problem in the field. This paper summarizes recent research achievements in honeypots. First, the development of honeypots is reviewed in four stages. Then, focusing on key honeypot mechanisms, analyses of process, deployment, counter-recognition, and game theory are presented. Finally, honeypot achievements in different respects are characterized and the development trends of honeypot technology are outlined.
doi: 10.11999/JEIT180435
Abstract:
To estimate the spatial parameters of multiple frequency-hopping signals, the spatial-domain sparsity of frequency-hopping signals is used to realize Direction Of Arrival (DOA) estimation based on Sparse Bayesian Learning (SBL). First, a discrete spatial grid is constructed and the offset between the actual DOA and the grid points is modeled into it, establishing the data model of a uniform linear array receiving multiple frequency-hopping signals. Then the posterior probability distribution of the sparse signal matrix is obtained by SBL theory, with the row sparsity of the signal matrix and the offset controlled by hyperparameters. Finally, the expectation-maximization algorithm iterates the hyperparameters, and the maximum a posteriori estimate of the signal matrix completes the DOA estimation. Theoretical analysis and simulation experiments show that the method has good estimation performance and can cope with few snapshots.
doi: 10.11999/JEIT180405
Abstract:
Existing Ring Oscillator (RO) Physical Unclonable Function (ROPUF) designs have low reliability and uniqueness, resulting in poor application security. A statistical model for ROPUF is proposed and the factors affecting reliability and uniqueness are quantitatively analyzed; it is found that a larger delay difference improves reliability, and lower process variation between RO units improves uniqueness. Following these conclusions, a dynamic RO unit is designed based on a mesh topology. Combined with the frequency distribution characteristics of the RO array, a new frequency sorting algorithm is designed to increase the delay difference and reduce the process variation of the RO units, thereby improving the reliability and uniqueness of the ROPUF. The results show that, compared with other improved ROPUF designs, the proposed design has significant advantages in reliability and uniqueness, reaching 99.642% and 49.1% respectively, with minimal sensitivity to temperature changes. Security analysis verifies that the proposed design has strong resistance to modeling attacks.
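The uniqueness figure quoted above (ideal value 50%) is conventionally the mean inter-chip Hamming distance between response bitstrings; a sketch of that metric on illustrative data, not the paper's measurements:

```python
def hamming_fraction(a, b):
    """Fraction of differing bits between two equal-length responses."""
    return sum(x != y for x, y in zip(a, b)) / len(a)


def uniqueness(responses):
    """Mean inter-chip Hamming distance over all chip pairs (ideal: 0.5)."""
    total, pairs = 0.0, 0
    for i in range(len(responses)):
        for j in range(i + 1, len(responses)):
            total += hamming_fraction(responses[i], responses[j])
            pairs += 1
    return total / pairs
```

Reliability is the complementary statistic: the mean intra-chip Hamming distance between repeated readouts of the same chip (ideal 0), usually reported as 100% minus that distance.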
doi: 10.11999/JEIT180407
Abstract:
To obtain night-vision fusion images better suited to human perception, a novel night-vision image fusion algorithm is proposed based on intensity transformation and two-scale decomposition. First, the pixel values of the infrared image are used as exponential factors to transform the intensity of the visible image, so that infrared-visible fusion becomes a merging of homogeneous images. Second, the enhanced result and the original visible image are decomposed into base and detail layers with a simple average filter. Third, the detail layers are fused using visual weight maps. Finally, the fused image is reconstructed by synthesizing these results. Because the method presents its result in the visible spectrum, the fused image is more suitable for visual perception. Experimental results show that the proposed method clearly outperforms the other five methods. In addition, its computation time is under 0.2 s, meeting real-time requirements. In the fused result, background details are clear while objects with high temperature contrast are highlighted as well.
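The base/detail split via an average filter can be illustrated in one dimension. This is a simplified sketch under assumed parameters: the paper operates on 2-D images and fuses details with visual weight maps, whereas here the detail with the larger local magnitude simply wins.

```python
def box_blur(x, r):
    """Moving-average (box) filter with edge clamping; the base layer."""
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out


def two_scale_fuse(a, b, r=1):
    """Two-scale fusion of signals a and b: average the base layers,
    keep the stronger of the two detail samples at each position."""
    base_a, base_b = box_blur(a, r), box_blur(b, r)
    det_a = [x - y for x, y in zip(a, base_a)]
    det_b = [x - y for x, y in zip(b, base_b)]
    base = [(x + y) / 2 for x, y in zip(base_a, base_b)]
    det = [da if abs(da) >= abs(db) else db for da, db in zip(det_a, det_b)]
    return [x + y for x, y in zip(base, det)]
```

A sanity property of this scheme is that fusing a signal with itself returns the signal: the averaged base plus its own detail reconstructs the input.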
doi: 10.11999/JEIT180277
Abstract:
Virtualization is a new technology that can effectively solve the low resource utilization and service inflexibility of current Wireless Sensor Networks (WSN). For the resource competition problem in virtualized WSNs, a multi-task resource allocation strategy based on a Stackelberg game is proposed. According to the different Quality of Service (QoS) requirements of the business carried by each Virtual Sensor Network Request (VSNR), the importance of multiple VSNRs is quantified. Then, the optimal price of the WSN and the optimal resource demands of the VSNRs are obtained by a distributed iterative method. Finally, resources are allocated to the VSNRs according to the optimal price and the optimal allocation determined by the Nash equilibrium. Simulation results show that the proposed strategy not only meets the diversified needs of users but also improves the resource utilization of nodes and links.
doi: 10.11999/JEIT180388
Abstract:
Considering the security problems that arise when steganographic schemes for gray-scale images are directly extended to color images, an adaptive distortion-updated steganography method is put forward based on a Modification Strategy for Color Components (CCMS). First, the correlation between color components in the RGB channels is analyzed and a principle for modifying distortion costs is proposed; the optimal modification mode is derived to maintain the statistical correlation of adjacent components. Finally, color image steganography schemes based on CCMS are proposed. Experimental results show that the proposed HILL-CCMS and WOW-CCMS improve greatly on the HILL and WOW methods at five embedding rates in resisting state-of-the-art color steganalysis methods such as CRM and SCCRM.
doi: 10.11999/JEIT180402
Abstract:
The efficiency of a Service Function Chain (SFC) depends closely on where functions are deployed and how paths are selected for data transmission. For SFC deployment in resource-constrained networks, this paper proposes an optimization algorithm based on the Longest Effective Function Sequence (LEFS). To jointly optimize function deployment and bandwidth requirements, an upper bound on path length is set and relay nodes are searched incrementally on the basis of LEFS until the service request is satisfied. Simulation results show that the proposed algorithm balances network resources and optimizes the function deployment rate and bandwidth utilization; compared with other algorithms, it reduces network resource consumption by 10%, so that more service requests can be supported. Moreover, the algorithm has low computational complexity and can respond to service requests quickly.
doi: 10.11999/JEIT180307
Abstract:
Magnetic Anomaly Detection (MAD) is a widely used passive target detection method, with applications including surface warship monitoring, underwater moving target detection, and land target detection and identification. Research on reliable detection of weak magnetic anomaly signals against the geomagnetic background is therefore of great significance. Building on a study of the differences between the fractal characteristics of the geomagnetic background and those of magnetic anomaly signals, this paper proposes a single-sensor detection method based on the fractal characteristics of the target's magnetic anomaly signal and validates it in field tests. The experimental results show that the method accurately distinguishes geomagnetic background interference from magnetic anomaly signals and detects weak magnetic anomaly signals in geomagnetic background noise.
doi: 10.11999/JEIT180381
Abstract:
With the development of network information systems, virus propagation and immunization strategies have become hot topics in network security. This paper introduces a new virus with a hybrid attack that can strike a network in two modes: one attacks and infects network nodes directly, while the other hides in nodes by concealing its viral characteristics. According to these characteristics, this type of virus is defined as "Two-go and One-live" and a corresponding virus propagation model is established. The stability of the system is studied by solving for the equilibrium points and analyzing the basic reproduction number R0. Numerical simulations verify the effectiveness and stability of the model.
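The role of the basic reproduction number can be illustrated with a minimal single-route SIS model; this is a toy stand-in for the paper's two-mode model, and the rates beta and gamma are assumed parameters:

```python
def simulate_sis(beta, gamma, i0=0.01, dt=0.01, steps=100_000):
    """Forward-Euler integration of di/dt = beta*(1-i)*i - gamma*i,
    the classic SIS epidemic on a well-mixed population.
    For R0 = beta/gamma > 1 infection settles at i* = 1 - 1/R0;
    for R0 < 1 it dies out (i -> 0)."""
    i = i0
    for _ in range(steps):
        i += (beta * (1.0 - i) * i - gamma * i) * dt
    return i
```

The two equilibria of this equation (i = 0 and i = 1 - gamma/beta) exchange stability exactly at R0 = 1, which is the threshold behavior the abstract's stability analysis generalizes to the two-mode virus.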
doi: 10.11999/JEIT180394
Abstract:
The Keystone transform is an effective pre-processing method for broadband array signals, but it suffers from missing array data. To solve this problem, an enhanced Keystone transform algorithm combining an autoregression model with the traditional Keystone transform is proposed for sonar broadband adaptive beamforming. After phase alignment of the broadband array signal by the traditional Keystone transform, autoregression models are constructed at each frequency to compensate for the missing array data. Then a robust adaptive beamforming approach is used to obtain target bearings. Simulation results indicate that the proposed broadband adaptive beamforming algorithm based on the enhanced Keystone transform outperforms beamforming algorithms based on the traditional Keystone transform, steered minimum variance, and frequency focusing.
doi: 10.11999/JEIT180331
Abstract:
The incoherent scatter spectrum plays an important role in studying the physical parameters of the ionosphere. The conventional theoretical model of the incoherent scatter spectrum is extremely complicated to derive and compute, and a model of the autocorrelation function cannot be obtained. In this paper, a simplified model of the ionospheric incoherent scatter spectrum is re-derived and the corresponding autocorrelation function is proposed. In traditional incoherent scatter radar signal processing, the autocorrelation function is imbalanced across delays, mainly because the range resolution at zero lag is very low, which degrades the estimation of the ionospheric scatter spectrum. To address this, a data-fitting method is proposed to estimate the autocorrelation at zero lag. Considering computational complexity, a fast implementation that approximates the autocorrelation function with polynomial functions is proposed. Finally, experiments on real echo data demonstrate the correctness and efficiency of the proposed method, which is of great significance for ionospheric sounding.
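A zero-lag fit of the kind described can be sketched as an ordinary least-squares polynomial fit to the reliable nonzero lags, evaluated at lag 0. This stdlib-only illustration makes its own assumptions about the degree and lag range; the paper's fitting model may differ.

```python
def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations
    (Gaussian elimination with partial pivoting; fine for small deg).
    Returns coefficients coef where coef[i] multiplies x**i."""
    m = deg + 1
    ata = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                       # forward elimination
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coef = [0.0] * m                           # back substitution
    for r in range(m - 1, -1, -1):
        s = aty[r] - sum(ata[r][c] * coef[c] for c in range(r + 1, m))
        coef[r] = s / ata[r][r]
    return coef


def polyval(coef, x):
    return sum(c * x ** i for i, c in enumerate(coef))


# fit lags 1..5 and extrapolate the autocorrelation to lag 0
lags = [1.0, 2.0, 3.0, 4.0, 5.0]
acf = [2.0 - 0.5 * t + 0.1 * t * t for t in lags]   # assumed sample values
r0_est = polyval(polyfit(lags, acf, 2), 0.0)
```

Evaluating the fitted polynomial at lag 0 replaces the unreliable zero-lag measurement with an extrapolation constrained by the well-resolved lags.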
doi: 10.11999/JEIT180375
Abstract:
In Near-Field (NF) applications of Ultra-High-Frequency Radio Frequency IDentification (UHF RFID) systems, owing to the structural characteristics of microstrip tags, the traditional inter-coil mutual impedance expression has large errors when estimating mutual coupling effects such as the system frequency shift, and its accuracy is insufficient. First, based on the transformer model, mutual impedance expressions for NF dense tags are derived from the perspective of wireless energy transmission. Then, the electrical parameter values are obtained indirectly by building an electromagnetic simulation model of the NF inductively coupled tag. Finally, the derived formula is verified, and the UHF RFID NF frequency shift is studied from the perspective of environmental factors affecting the mutual impedance between two tags. The test results show that the derived mutual impedance expression applies to frequency offset calculation with an error range of 1.6 MHz to 7.3 MHz when the tag spacing is less than 30 mm. The results provide a reference for studying mutual coupling between UHF RFID NF tags based on inter-tag mutual impedance.
doi: 10.11999/JEIT180379
Abstract:
This paper investigates the design of hybrid analog-digital precoders and combiners for multi-user millimeter-wave MIMO systems. Considering the inter-user interference caused by diffuse scattering during signal propagation, a robust hybrid precoding algorithm based on Successive Interference Cancellation (SIC) is proposed. By deriving an orthogonal decomposition of the channel matrix to cancel interference from known users' signals, the multi-user link optimization problem with nonconvex constraints is decomposed into multiple single-user link optimization problems. A phase extraction algorithm then searches for each user's optimal transmission link one by one, and the multi-user hybrid precoding matrix is obtained in combination with the Minimum Mean Square Error (MMSE) criterion. Simulation results show that the proposed algorithm has significant performance advantages over existing hybrid precoding algorithms under severe interference.
doi: 10.11999/JEIT180397
Abstract:
A low-power, low-cost BeiDou reflectometry system for retrieving Significant Wave Height (SWH) and wind speed is designed and implemented. To improve retrieval accuracy, a correction method based on a power function of the sine of the elevation angle and a delay correlation for rapid wind-speed changes is proposed. Moreover, combined observation of multi-satellite signals and single-sided filtering of the observable further improve the retrieval accuracy. Experiments observing SWH and wind speed with reflected BeiDou signals show that the system supports long-term, stable observation; the retrieval accuracies of SWH and wind speed obtained with the proposed retrieval models and accuracy improvement methods are 0.13 m and 1.28 m/s, improvements of 0.13 m and 0.78 m/s over the methods proposed by Soulat et al.
doi: 10.11999/JEIT180340
Abstract:
Since existing gridless Direction Of Arrival (DOA) estimation methods perform unsatisfactorily on two-dimensional arrays, a novel gridless DOA estimation method is proposed. For a two-dimensional array, the atomic L0 norm is proved to be the solution of a Semi-Definite Programming (SDP) problem whose cost function is the rank of a Hermitian matrix constructed from finite orders of Bessel functions of the first kind. Following low-rank matrix recovery theory, the cost function of the SDP problem is replaced by the log-det function, and the SDP problem is solved by the Majorization-Minimization (MM) method. Finally, gridless DOA estimation is achieved through Vandermonde decomposition of the semidefinite Toeplitz matrix built from the SDP solution. The sample covariance matrix is used to form the initial optimization problem in the MM method, which reduces the number of iterations. Simulation results show that, compared with on-grid MUSIC and other gridless methods, the proposed method achieves better Root-Mean-Square Error (RMSE) performance and better identifiability of adjacent sources; when snapshots are sufficient and the Signal-to-Noise Ratio (SNR) is high, a properly chosen order of the Bessel functions achieves RMSE performance close to that of higher orders while reducing running time.
doi: 10.11999/JEIT180385
Abstract:
Since it is difficult to balance efficiency and resource utilization in the Service Chain (SC) mapping problem in Software Defined Network (SDN)/Network Function Virtualization (NFV) environments, this paper proposes a collaborative SC mapping method based on matching games. First, it defines an SC mapping model named MUSCM that maximizes the utility of network resources. Second, it divides SC mapping into Virtual Network Function (VNF) deployment and VNF connection. For VNF deployment, an algorithm based on a many-to-one matching game coordinates the selection between SCs and service nodes, effectively improving SC mapping efficiency and physical resource utilization. On this basis, an algorithm based on a segment routing strategy steers traffic between VNF instances to complete the VNF connection, effectively reducing link transmission delay. Experimental results show that, compared with classical algorithms, the proposed algorithm maintains the mapping request acceptance rate while reducing the average transmission delay of the service chain and effectively improving the physical resource utilization of the system.
, doi: 10.11999/JEIT180315
[Abstract](82) [FullText HTML](57) [PDF 2782KB](6)
Abstract:
To improve the accuracy and reliability of the traditional turbine vital-capacity meter, a novel four-line turbine detection method is presented for a high-precision, high-reliability Chronic Obstructive Pulmonary Disease (COPD) monitoring system. On the hardware side, a four-line breath-signal acquisition circuit is designed following the four-line turbine detection method, which improves the resolution of the optical path through reasonable component arrangement. On the software side, a linear regression algorithm is used to obtain early screening and diagnostic indicators such as Forced Vital Capacity (FVC) and Peak Expiratory Flow (PEF). A standard Fluke air-flow analyzer is used for data calibration. Compared with a traditional medical turbine spirometer, the average relative error of FVC is reduced from 1.98% to 1.47% and that of PEF from 2.04% to 1.02%. The results show that the expiratory parameters of the four-line turbine COPD monitoring system are more accurate and reliable than those of the traditional system, making it suitable for early screening and accurate diagnosis of COPD. Combined with pulse oxygen saturation and end-tidal CO2 measurements, it can support medical care for COPD and play an important role in the early detection and control of disease for moderate or severe COPD patients.
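The linear-regression calibration step can be sketched in a few lines: fit a least-squares line mapping raw turbine readings to a reference analyzer's values, then apply it to new readings. The numbers below are purely illustrative, not the paper's calibration data:

```python
import numpy as np

# Hypothetical calibration pairs: raw turbine readings vs. the reference
# air-flow analyzer (values are illustrative only).
raw = np.array([1.0, 2.1, 3.0, 4.2, 5.1])
ref = np.array([1.1, 2.3, 3.2, 4.5, 5.4])

# Least-squares fit ref ~= a*raw + b; apply it to correct new readings.
a, b = np.polyfit(raw, ref, 1)

def calibrate(x):
    return a * x + b
```

In practice a calibration like this would be fit per device against the reference instrument, and indicators such as FVC and PEF computed from the corrected flow curve.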
, doi: 10.11999/JEIT180386
[Abstract](35) [FullText HTML](26) [PDF 2323KB](14)
Abstract:
Convolutive blind source separation can be solved effectively in the frequency domain, but frequency-domain blind source separation must resolve the permutation ambiguity. A frequency-domain blind source separation sorting algorithm based on region-growing correction is proposed. First, the short-time Fourier transform of the convolutively mixed signal is used to establish an instantaneous model at each frequency point for independent component analysis. On this basis, the correlation of the power ratios of the separated signals is used to sort all frequency points one by one. Second, according to a threshold, the sorted result is divided into several small regions. Finally, region permutation and merging are performed according to the region-growing method, and the correctly separated signals are obtained. Region-growing correction minimizes the propagation of frequency sorting errors and improves the separation results. Speech blind source separation experiments are performed in simulated and real environments, and the results show the effectiveness of the proposed algorithm.
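The power-ratio correlation step can be sketched as a small permutation-alignment routine: for a given frequency bin, choose the permutation of the separated sources whose power envelopes correlate best with an already-aligned reference bin. This is a generic illustration of the idea, not the paper's full region-growing procedure:

```python
import numpy as np
from itertools import permutations

def align_permutation(pow_ref, pow_cur):
    """Pick the permutation of the current bin's separated sources that
    maximizes the summed correlation of their power envelopes with an
    already-aligned reference bin. pow_*: (n_src, n_frames) arrays."""
    n = pow_ref.shape[0]

    def corr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    best, best_score = None, -np.inf
    for p in permutations(range(n)):
        score = sum(corr(pow_ref[i], pow_cur[p[i]]) for i in range(n))
        if score > best_score:
            best, best_score = p, score
    return best

# Two sources whose envelopes are swapped (plus slight noise) in the current bin
pow_ref = np.array([[1.0, 2.0, 3.0, 4.0, 5.0], [5.0, 4.0, 3.0, 2.0, 1.0]])
pow_cur = np.array([[5.1, 4.0, 3.0, 2.2, 1.0], [1.0, 2.1, 3.0, 4.0, 5.2]])
perm = align_permutation(pow_ref, pow_cur)
```

Exhaustive search over permutations is fine for the small source counts typical of speech separation; region growing then decides which groups of bins to flip together.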
, doi: 10.11999/JEIT180352
[Abstract](78) [FullText HTML](48) [PDF 1055KB](16)
Abstract:
Since the Cardinalized Probability Hypothesis Density (CPHD) filtering algorithm based on the Pairwise Markov Chains (PMC) model (PMC-CPHD) is difficult to implement directly, the PMC-CPHD algorithm is reformulated into a polynomial form to facilitate implementation, and a Gaussian Mixture (GM) implementation of the improved algorithm is given. Experimental results show that the given GM implementation realizes effective multi-target tracking and improves the stability of the target-number estimation compared with the GM implementation of the Probability Hypothesis Density (PHD) filtering algorithm based on the PMC model (PMC-PHD).
, doi: 10.11999/JEIT180335
[Abstract](70) [FullText HTML](52) [PDF 2396KB](15)
Abstract:
Because optical-fiber failures of the underlying network occur probabilistically in virtualized environments, traditional full protection configures at least one protection path, which leads to high resource redundancy and a low acceptance rate for virtual networks. In this paper, a Security Awareness-based Diverse Virtual Network Mapping (SA-DVNM) strategy is proposed to provide security guarantees in the event of failures. In SA-DVNM, the physical node weight formula is designed by considering the hops between nodes and the bandwidth of adjacent links; in addition, a path-balanced link mapping mechanism is proposed to minimize link overload. To improve the acceptance rate of virtual networks, the SA-DVNM strategy designs a resource allocation mechanism that allows path splitting when a single path is unavailable for low-security services. Considering delay differences to ensure the security of delay-sensitive services, a multipath routing and spectrum allocation method based on delay difference is designed to optimize routing and spectrum allocation in the SA-DVNM strategy. Simulation results show that the proposed SA-DVNM strategy can improve spectrum utilization and the virtual optical network acceptance rate in a probabilistic fault environment, and reduce the bandwidth blocking probability.
, doi: 10.11999/JEIT180299
[Abstract](83) [FullText HTML](53) [PDF 2752KB](7)
Abstract:
In Inverse Synthetic Aperture Radar (ISAR) imaging, the image obtained by Range-Doppler (RD) or time-frequency analysis methods cannot display the target's real shape, because its azimuth axis corresponds to the target's Doppler frequency; cross-range scaling of the ISAR image is therefore required. In this paper, a fast cross-range scaling method for ISAR is proposed to estimate the Rotational Angular Velocity (RAV). Firstly, the proposed method uses an efficient Pseudo-Polar Fast Fourier Transform (PPFFT) to transform the rotational motion between two ISAR images from different instants into a translation along the polar-angle direction. Then, a new cost function called integrated correction is defined to obtain a coarse RAV estimate. Finally, the optimal RAV is estimated using the bisection method to realize cross-range scaling. Compared with existing algorithms, the proposed method avoids the precision loss and high computational complexity caused by interpolation. Computer simulations and real-data experiments demonstrate the validity of the proposed method.
, doi: 10.11999/JEIT180396
[Abstract](69) [FullText HTML](49) [PDF 1395KB](5)
Abstract:
In order to isolate service data in the Advanced Metering Infrastructure (AMI) for water, electricity, gas, and heat meters and to improve the stability and coverage of the local data collection network, a network virtualization scheme for AMI is proposed. In this scheme, end-to-end isolated service data collection channels are constructed using virtual Access Point Name (APN) and Software Defined Network (SDN) slicing technology. Micro-power wireless and low-voltage power line carrier links are used to construct a real-time, reliable local dual-mode virtual network. Furthermore, a networking algorithm based on global link state and a hierarchical iterative algorithm are proposed. Simulations and experiments show that the packet loss rate and transmission delay of collected data are decreased using the proposed scheme and that service support capability is improved. Moreover, service data isolation is implemented in the AMI for water, electricity, gas, and heat meters, and the multiplexing capability of the communication network infrastructure is improved.
, doi: 10.11999/JEIT180310
[Abstract](84) [FullText HTML](58) [PDF 1673KB](10)
Abstract:
To solve the problem of high delay caused by changes of the physical network topology under the 5G access-network C-RAN architecture, this paper proposes a scheme for dynamic deployment of Service Function Chains (SFC) in the access network based on a Partially Observable Markov Decision Process (POMDP). In this scheme, the system observes changes of the underlying physical network topology through a heartbeat-packet observation mechanism. Because of observation errors, the full real topology cannot be obtained. Therefore, through the partial awareness and stochastic learning of the POMDP, the system dynamically adjusts the deployment of the SFC in the access-network slice when the topology changes, so as to optimize the delay. Finally, a point-based hybrid heuristic value iteration algorithm is used to find the SFC deployment strategy. Simulation results show that this model supports optimized SFC deployment on the access-network side and improves the access network's throughput and resource utilization.
, doi: 10.11999/JEIT180336
[Abstract](212) [FullText HTML](83) [PDF 1631KB](22)
Abstract:
In order to improve the robustness of the MLAPG algorithm, a person re-identification algorithm called Equid-MLAPG, based on an equidistance metric learning strategy, is proposed. Because of the imbalanced distribution of positive and negative sample pairs in the mapping space, the sample-spacing hyper-parameter of the MLAPG algorithm is affected more by the distances of negative sample pairs. Therefore, the Equid-MLAPG algorithm tends to map each positive sample pair to a single point in the transformed space; that is, the distance of a positive sample pair in the transformed space is driven to zero, so that the distributions of positive and negative sample pairs in the transformed space do not intersect when the algorithm converges. Experiments show that the Equid-MLAPG algorithm achieves better results on commonly used person re-identification datasets, with a better recognition rate and wide applicability.
, doi: 10.11999/JEIT180387
[Abstract](108) [FullText HTML](66) [PDF 4110KB](9)
Abstract:
General methods for inversion of the Digital Surface Model (DSM) in forest regions suffer large errors because the penetration depth of the waves cannot be estimated. To address this problem, an approach for inversion of a high-precision DSM is proposed. First, the phases of the high and low scattering phase centers of the waves in the forest are obtained by maximizing the phase separation in coherence optimization. Then, models of the normal height variation of the high and low scattering centers with the extinction factor are constructed. From these models, the minimum penetration depth of the waves in the forest is acquired. Finally, by applying the interferometric technique to the phase of the high scattering phase center, a coarse DSM is retrieved, and a high-precision DSM is obtained by compensating the coarse one with the minimum penetration depth. The method is validated on simulated PolSARpro datasets with different tree species and forest heights and on airborne real datasets. The results show that the proposed method can effectively improve the accuracy of DSM inversion in forest regions.
, doi: 10.11999/JEIT180432
[Abstract](12) [FullText HTML](4) [PDF 656KB](0)
Abstract:
To solve the joint blind channel estimation and symbol detection problem for SIMO-OFDM systems, a PARAllel FACtor (PARAFAC) analysis model of the received data matrix is established. Then, using the full-row-rank property of the discrete Fourier transform matrix and the singular value decomposition of the received data matrix, a closed-form method is proposed for joint blind channel estimation and symbol detection. The proposed method has low computational complexity because it requires no iteration. Furthermore, by estimating the channel and the signals simultaneously, the proposed method avoids the degradation of signal estimation caused by channel estimation errors. Simulation results show that the proposed method has lower computational complexity and better estimation performance than traditional methods.
, doi: 10.11999/JEIT180541
[Abstract](11) [FullText HTML](4) [PDF 819KB](1)
Abstract:
Existing methods are either inapplicable or feasible only when a low ratio of data is missing in multivariate time series; to address this, a missing-data prediction algorithm based on Kronecker Compressed Sensing (KCS) theory is proposed. Firstly, the sparse representation basis is designed to exploit both the temporal smoothness of each time series and the potential correlation between multiple time series, so that the missing-data prediction problem is modeled as a sparse vector recovery problem. In the solution stage, according to the locations of the missing data, a measurement matrix is designed that suits the current application scenario and has low correlation with the sparse representation basis. The validity of the model is then verified from two aspects: whether the representation vector is sufficiently sparse and whether the sensing matrix satisfies the restricted isometry property. Simulation results show that the proposed algorithm performs well even when a high ratio of data is missing.
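The sparse-recovery step can be illustrated with a basic Orthogonal Matching Pursuit solver and a Kronecker-structured sensing matrix, which is the structure KCS exploits. The sizes, random seed, and the choice of OMP as the solver are illustrative, not the paper's design:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        support.append(j)
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coef                     # re-fit on the support
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Kronecker sensing matrix, as in KCS: A = kron(Phi, Psi) acts on the
# vectorized 2-D signal (dimensions are illustrative).
rng = np.random.default_rng(0)
Phi = rng.standard_normal((6, 8)) / np.sqrt(6)
Psi = rng.standard_normal((6, 8)) / np.sqrt(6)
A = np.kron(Phi, Psi)                # (36, 64)
x = np.zeros(64)
x[[3, 40]] = [3.0, -2.0]             # 2-sparse representation vector
y = A @ x                            # compressed observations
x_hat = omp(A, y, 2)
```

The Kronecker structure lets the two factor matrices act along the time and the cross-series dimensions separately, which is why it suits the multivariate-time-series setting.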
, doi: 10.11999/JEIT180720
[Abstract](12) [FullText HTML](4) [PDF 985KB](0)
Abstract:
In order to solve the incomplete semantic structure problem that occurs when using the Abstract Meaning Representation (AMR) graph to predict a summary subgraph, a semantic summarization algorithm based on Integer Linear Programming (ILP) reconstruction of the AMR graph structure is proposed. Firstly, the text data are preprocessed to generate an overall AMR graph. Then the important node information of the summary subgraph is extracted from the overall graph based on statistical features. Finally, the ILP method is applied to reconstruct the node relationships in the summary subgraph, which is then used to generate the semantic summary. Experimental results show that, compared with other semantic summarization methods, the ROUGE and Smatch indices of the proposed method improve significantly, by up to 9% and 14% respectively, markedly improving the quality of the semantic summaries.
, doi: 10.11999/JEIT180452
[Abstract](10) [FullText HTML](4) [PDF 2313KB](0)
Abstract:
Three-dimensional interferometry of wide-band radar can provide crucial information for estimating the micro-motion and geometric parameters of targets. For estimation of the micro-motion parameters via three-dimensional interferometry in the squint observation mode, an algorithm for micro-motion and geometric parameter estimation based on squint calibration is proposed. The algorithm performs ranging and angle measurement on the echo received by each antenna of an L-shaped array. The squint distortion is then calibrated and the three-dimensional trajectories of the scattering centers are obtained by establishing two-variable quadratic nonlinear equations and applying a coordinate transformation. In addition, smoothing filtering and optimization are used to retrieve the micro-motion and geometric parameters. The effectiveness and robustness of the proposed algorithm are confirmed by extensive experiments.
, doi: 10.11999/JEIT180442
[Abstract](12) [FullText HTML](4) [PDF 1200KB](0)
Abstract:
Focusing on the severe degradation of object tracking performance caused by illumination variation, a visual tracking method that jointly optimizes illumination compensation and multi-task reverse sparse representation is proposed. The template illumination is first compensated by the developed algorithm, which is based on the average brightness difference between templates and candidates. The candidate set is then used to sparsely represent the templates after illumination compensation. Subsequently, the multiple optimization problems associated with single templates are recast as one multi-task optimization problem over multiple templates, which is solved by an alternating iteration approach to acquire the optimal illumination compensation coefficients and the sparse coding matrix. Finally, the obtained sparse coding matrix is used to quickly eliminate unrelated candidates, after which a local structured evaluation method is employed to achieve accurate object tracking. Simulation results show that, compared with existing state-of-the-art algorithms, the proposed algorithm significantly improves the accuracy and robustness of object tracking in the presence of heavy illumination variation.
, doi: 10.11999/JEIT180289
[Abstract](11) [FullText HTML](4) [PDF 1197KB](0)
Abstract:
A wide-area differential calibration algorithm based on a Virtual Reference Station (VRS) is proposed for tri-satellite Time Difference Of Arrival (TDOA) geolocation systems, to solve the problem that traditional differential calibration cannot completely eliminate the location error caused by ephemeris error, especially when the emitter is far from the calibration station. Firstly, the TDOA measurements of a VRS in the vicinity of the emitter are estimated from the TDOA measurements of the reference station. Then, to remove the effect of ephemeris error and synchronization error on location error, the TDOA measurements of the VRS are subtracted from those of the emitter. Simulation results demonstrate that the proposed algorithm can almost completely eliminate the effect of ephemeris error on the location error of a tri-satellite TDOA geolocation system over a wide area.
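The core of the VRS idea — that ephemeris-induced TDOA errors are nearly identical for the emitter and a nearby reference point, and therefore cancel when differenced — can be checked numerically. The geometry and error magnitudes below are illustrative only:

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def tdoa(x, sats):
    """TDOAs of a point x relative to the first satellite (error-free model)."""
    d = np.linalg.norm(sats - x, axis=1)
    return (d[1:] - d[0]) / C

# Illustrative geometry: three satellites, an emitter, and a virtual
# reference station (VRS) a few kilometres from the emitter.
sats_true = np.array([[7e6, 0.0, 2e7], [0.0, 7e6, 2e7], [-7e6, -7e6, 2e7]])
sat_err = np.array([[120.0, 0.0, 0.0], [0.0, -80.0, 0.0], [60.0, 60.0, 0.0]])
sats_bad = sats_true + sat_err          # ephemeris in error by ~100 m

emitter = np.array([1e5, 2e5, 0.0])
vrs = np.array([1.05e5, 1.95e5, 0.0])

# TDOA error induced by the ephemeris error, at the emitter and at the VRS
err_e = tdoa(emitter, sats_true) - tdoa(emitter, sats_bad)
err_v = tdoa(vrs, sats_true) - tdoa(vrs, sats_bad)
residual = err_e - err_v                # what remains after VRS differencing
```

Because the satellites are far away relative to the emitter-VRS baseline, the residual after differencing is orders of magnitude smaller than the raw ephemeris-induced TDOA error.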
, doi: 10.11999/JEIT180433
[Abstract](12) [FullText HTML](4) [PDF 1519KB](0)
Abstract:
Existing Direct Position Determination (DPD) algorithms for Coherently Distributed (CD) sources rely on the distribution model of the CD sources and incur huge computational cost, which is impractical. To further improve localization performance, a novel DPD algorithm for CD sources that exploits the characteristics of noncircular signals is proposed, based on the symmetric shift invariance of centrosymmetric arrays. Under a parameterized model of CD sources, the direct position determination model is first constructed by incorporating the characteristics of noncircular signals. It is then proved that, for any centrosymmetric array, the generalized steering vector of a CD source has the property of symmetric shift invariance. Based on this property, the positions of CD sources are estimated directly by fusing the information of all observation stations, without needing to consider the distribution model, which reduces the dimension of the parameters to be estimated. Simulation results validate that, compared with existing localization algorithms for CD sources, the proposed algorithm improves localization accuracy and avoids dependence on the distribution model of CD sources, which is of great practical value.
, doi: 10.11999/JEIT170935
[Abstract](13) [FullText HTML](6) [PDF 577KB](1)
Abstract:
Existing attribute-based deduplication schemes can support neither auditing of cloud storage data nor revocation of expired users, and they are inefficient in deduplication search and user decryption. To solve these problems, this paper proposes an efficient Attribute-Based Encryption (ABE) scheme with deduplication and auditing. A third-party auditor is introduced to verify the integrity of cloud storage data. Through an agent-assisted user revocation mechanism, the proposed scheme supports revocation of expired users. An efficient deduplication search tree is put forward to improve search efficiency, and a proxy decryption mechanism is used to assist users in decryption. Security analysis shows that, by resorting to a hybrid cloud architecture, the proposed scheme achieves IND-CPA security in the public cloud and PRV-CDA security in the private cloud. Performance analysis shows that the deduplication search is more efficient and the computational cost of user encryption is smaller.
, doi: 10.11999/JEIT180531
[Abstract](11) [FullText HTML](4) [PDF 1507KB](1)
Abstract:
To solve the dictionary mismatch problem of Compressive Sensing (CS) based multi-target Device-Free Localization (DFL) in wireless localization environments, a Variational Expectation Maximization (VEM) based dictionary refinement method is proposed. Firstly, the method builds the dictionary from the saddle-surface model and treats the environment-related dictionary parameters as tunable. Then, a two-layer hierarchical Gaussian prior is imposed on the location vector to induce sparsity. Finally, the variational EM algorithm is adopted to estimate the posteriors of the hidden variables and optimize the environment-related dictionary parameters, so that target location estimation and dictionary refinement are realized jointly. Simulation results demonstrate that, compared with conventional CS-based multi-target DFL schemes, the proposed algorithm performs especially well in changing wireless localization environments.
, doi: 10.11999/JEIT180378
[Abstract](12) [FullText HTML](4) [PDF 2255KB](0)
Abstract:
Surface defects such as gaps have a significant impact on the stealth performance of aircraft. According to the scattering mechanism of surface defects such as gaps, a method for evaluating surface-defect targets based on vector cancellation is proposed. The carrier is regarded as the target background, and its scattering contribution, especially at its strong scattering angles, is subtracted; the scattering characteristics of the surface-defect target over the whole angular range are thereby obtained, which solves the problem that complete scattering characteristics cannot be obtained by conventional methods. Comparison between numerical calculations and experimental results shows that the vector cancellation method can effectively evaluate the electromagnetic scattering characteristics of defective targets. After vector cancellation, the scattering of the carrier is greatly reduced, and the calculation or measurement accuracy for the defective target is effectively improved. At the same time, because the influence of the carrier's own scattering is reduced, the method avoids requirements on the carrier's size and ultra-low scattering characteristics, and effectively reduces the fabrication cost of the carrier.
, doi: 10.11999/JEIT180257
[Abstract](70) [FullText HTML](55) [PDF 592KB](7)
Abstract:
To resist malware sandbox-evasion behavior and improve the efficiency of malware analysis, a code-evolution-based technique for detecting sandbox-evading malware is proposed. The approach first extracts the static and dynamic features of the malware and then identifies the variations of these features during code evolution caused by sandbox-evasion techniques, thereby effectively detecting and identifying the malware. With the proposed algorithm, 240 malware samples with sandbox-bypassing behavior are uncovered successfully from 7 malware families. Compared with the JOE analysis system, the proposed algorithm improves the accuracy by 12.5% and reduces the false positive rate to 1%, which validates its correctness and effectiveness.
, doi: 10.11999/JEIT180460
[Abstract](53) [FullText HTML](33) [PDF 1771KB](9)
Abstract:
Under Single Measurement Vector (SMV) and low Signal-to-Noise Ratio (SNR) conditions, sparse reconstruction methods can improve the estimation accuracy of the Time Of Arrival (TOA). However, existing reconstruction algorithms make errors and omissions when selecting the elements of the sparse support set, which limits the estimation accuracy. To solve this problem, this paper proposes a sparse-reconstruction algorithm called Loop Matching Pursuit (LMP), which improves the estimation accuracy of the direct path. The algorithm first establishes a sparse representation model of the channel impulse response. Then, starting from an initial support set, elements are cyclically removed from the support set, and new elements are matched and added according to the maximum inner product between the remaining atoms and the current residual, until the support set no longer changes. Finally, the TOA estimate is obtained from the relationship between the delay values and the sparse support set. Simulation results show that the proposed algorithm achieves higher estimation accuracy than traditional sparse-reconstruction delay estimation algorithms, and its effectiveness is further verified with real signals on a USRP platform.
, doi: 10.11999/JEIT180274
[Abstract](40) [FullText HTML](18) [PDF 973KB](0)
Abstract:
In cloudlet-enhanced Fiber-Wireless (FiWi) access networks, the traditional energy-saving mechanism does not match the offloaded traffic. An offload-cooperative sleep mechanism with load transfer is proposed. By analyzing the load of each optical network unit and jointly considering the multi-hop transmission delay in the wireless domain and the sending time of the report frame of the target optical network unit, the proposed mechanism determines which optical network units sleep and which serve as destinations for load transfer. The optical network unit then jointly considers the arrival time of data returned by the edge servers and the sending time of the control frame in the wireless domain to select the optimal sleep duration and reduce control overhead. Simulation results show that the proposed mechanism can effectively reduce network energy consumption while guaranteeing the delay performance of the offloaded traffic.
, doi: 10.11999/JEIT180480
[Abstract](43) [FullText HTML](25) [PDF 1947KB](4)
Abstract:
For the finite-word-length effects of prototype filters in hardware implementations of filter bank systems, this paper studies how to reduce the roundoff noise caused by signal quantization in FIR prototype filters, that is, how to reduce the roundoff noise gain. An optimized FIR filter structure is proposed. By analyzing the sources of roundoff noise, a polynomial parameterization method is used to derive an expression for the roundoff noise gain. Simulation examples show that the amplitude-frequency and phase-frequency responses of the proposed filter structure are essentially consistent with the ideal ones under different word-length constraints. Compared with existing algorithms, the proposed structure has a smaller roundoff noise gain.
, doi: 10.11999/JEIT180276
[Abstract](103) [FullText HTML](69) [PDF 1311KB](18)
Abstract:
A deep learning model based on a residual network and spectrograms is used to recognize infant crying. The corpus has a balanced proportion of crying and non-crying samples. Through 5-fold cross validation, and compared with three models, a Support Vector Machine (SVM), a Convolutional Neural Network (CNN), and a cochleagram-based residual network using Gammatone filters (GT-Resnet), the spectrogram-based residual network achieves the best F1-score of 0.9965 and satisfies real-time requirements. The results show that spectrograms reflect acoustic features intuitively and comprehensively in infant crying recognition, and that the spectrogram-based residual network is a good solution to the infant crying recognition problem.
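A log-magnitude spectrogram of the kind fed to such a network can be computed with a short-time Fourier transform; in this numpy sketch the window, hop length, and the synthetic frequency sweep are all illustrative choices:

```python
import numpy as np

def spectrogram(x, n_fft=256, hop=128):
    """Log-magnitude spectrogram via a Hann-windowed STFT: the kind of
    time-frequency image used as input to the residual network."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    S = np.abs(np.fft.rfft(frames, axis=1)).T   # (freq_bins, time_frames)
    return np.log1p(S)

# A 0.5 s synthetic rising tone at 8 kHz sampling rate (illustrative input)
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * (400 + 600 * t) * t)     # sweep from ~400 Hz upward
S = spectrogram(x)
```

The resulting 2-D array is treated as a single-channel image, so standard residual-network building blocks apply directly.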
, doi: 10.11999/JEIT180303
[Abstract](165) [FullText HTML](94) [PDF 2285KB](24)
Abstract:
To address the track-to-track association problem of radar and Electronic Support Measures (ESM) in the presence of sensor biases and targets reported differently by different sensors, an anti-bias track-to-track association algorithm based on track-vector detection is proposed, using the statistical characteristics of Gaussian random vectors. The state-estimation decomposition equation is first derived in Modified Polar Coordinates (MPC), and the track vectors are obtained by a real-state cancellation method. Second, to eliminate most non-homologous target tracks, a rough association is performed according to the azimuth-rate and Inverse-Time-to-Go (ITG) features. Finally, the radar and ESM track-to-track association is extracted based on the chi-square distribution of the track vectors. The effectiveness of the proposed algorithm is verified by Monte Carlo simulations under various sensor biases, target densities and detection probabilities.
, doi: 10.11999/JEIT180423
[Abstract](67) [FullText HTML](23) [PDF 2352KB](1)
Abstract:
The circular polarizer is a key component of circularly polarized feed systems in radio astronomy telescopes and satellite communication antennas. Conventional polarizers operate over a maximum bandwidth of about 40% with an axial ratio of 0.75 dB, which cannot meet the growing demand for wide-band applications. In this paper, the design of a wide-band quad-ridged waveguide polarizer is introduced, and the relationship between the phase constants of the two orthogonal principal modes is analyzed. Broadband phase-shift characteristics are achieved by employing different horizontal and vertical ridge dimensions. Based on this method, a C-band polarizer operating at 3.625–7.025 GHz (64% bandwidth) is designed, and the effects of the main parameters on polarizer performance are studied. A prototype of the polarizer is developed; measurements show that the return losses are below –21 dB for both orthogonal polarizations and the phase difference is 90°±3.8°, corresponding to an axial ratio of less than 0.6 dB. Measured and simulated results agree well, validating the analysis and design methods.
, doi: 10.11999/JEIT180444
[Abstract](82) [FullText HTML](29) [PDF 2699KB](1)
Abstract:
For the problem that the pseudo-code period and combination code sequence of the Composite Binary Offset Carrier (CBOC) signal are difficult to estimate in a non-cooperative context, two blind estimation methods based on power-spectrum reprocessing and Radial Basis Function (RBF) neural networks are proposed. The CBOC pseudo-code period is obtained through two successive power-spectrum calculations. Firstly, the received signal is segmented with overlapping windows of one pseudo-code period, based on the estimated period. Secondly, the learning coefficient is optimally selected and each segmented data vector is fed as an input to the RBF neural network for supervised adjustment. Finally, as signal segments are fed in continuously, the original combination code sequence is restored from the converged weight vectors. Simulation results show that the pseudo-code period can be estimated from the secondary power spectrum at low Signal-to-Noise Ratio (SNR). Compared with Back Propagation (BP) neural networks and Sanger neural networks, the proposed RBF network improves the tolerable SNR by 1 dB and 3 dB respectively, and requires fewer data groups under the same conditions.
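A minimal Gaussian RBF network — fixed centers and least-squares output weights — illustrates the kind of weight adjustment described above, here fitting a simple 1-D function rather than a chip sequence; all sizes and parameters are illustrative:

```python
import numpy as np

def rbf_fit(X, y, centers, gamma):
    """Solve the output weights of a Gaussian RBF network by least squares."""
    Phi = np.exp(-gamma * (X[:, None] - centers[None, :]) ** 2)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, gamma, w):
    """Evaluate the RBF network at the points X."""
    return np.exp(-gamma * (X[:, None] - centers[None, :]) ** 2) @ w

# Illustrative 1-D regression: fit one period of a sine wave
X = np.linspace(0.0, 2 * np.pi, 50)
y = np.sin(X)
centers = np.linspace(0.0, 2 * np.pi, 10)
w = rbf_fit(X, y, centers, gamma=2.0)
y_hat = rbf_predict(X, centers, gamma=2.0, w=w)
```

In the blind-estimation setting the network is instead trained on overlapping signal segments, and the converged weight vectors encode the combination code sequence.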
, doi: 10.11999/JEIT180372
[Abstract](36) [FullText HTML](17) [PDF 2672KB](0)
Abstract:
Inspired by the multi-antenna interferometric processing in Interferometric Inverse Synthetic Aperture Radar (InISAR), a Three-Dimensional (3-D) interferometric imaging and micro-motion feature extraction method for rotating space targets is proposed using an L-shaped three-antenna imaging model. By integrating micro-Doppler (m-D) effect theory with multi-antenna interferometric processing, the m-D curves corresponding to different scatterers are obtained on the time-frequency plane and separated effectively via the Viterbi algorithm; the projected coordinates of the scatterers along the baseline directions are then reconstructed by interferometric processing. The height information of the scatterers is solved by ellipse fitting, realizing 3-D imaging of the rotating space target. Meanwhile, several 3-D micro-motion features are extracted exactly during imaging. Simulation results validate the effectiveness and robustness of the method.
, doi: 10.11999/JEIT180272
[Abstract](136) [FullText HTML](84) [PDF 3367KB](27)
Abstract:
When evaluating the enhancement quality of a whole image set, the existing average-score criterion varies inconsistently across image sets and produces large fluctuations in evaluation quality. This paper therefore proposes a confidence-interval-based consistency enhancement quality assessment criterion for arbitrary image sets. By setting application parameters and screening data with a confidence interval, the proposed criterion compares the quality-score difference before and after enhancing each image, evaluates the consistency of image quality enhancement, and then calculates the effective value of the consistency enhancement quality scores. Among many image enhancement algorithms, the proposed criterion can select a highly reliable enhancement algorithm for a specific application. Experimental results show that the proposed criterion has good subjective and objective consistency and outperforms the existing average-score criterion, providing an evaluation criterion for image enhancement algorithms applied to arbitrary image sets.
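The screening idea — discard images whose before/after score difference lies outside a confidence interval before averaging — can be sketched as follows. The z value and the scores are illustrative, and the paper's criterion additionally involves application parameters not modeled here:

```python
import numpy as np

def ci_effective_gain(before, after, z=1.96):
    """Mean enhancement gain computed only over images whose score
    difference falls inside a z-sigma interval around the mean
    (z = 1.96 corresponds to ~95% coverage under normality)."""
    d = np.asarray(after, float) - np.asarray(before, float)
    lo = d.mean() - z * d.std(ddof=1)
    hi = d.mean() + z * d.std(ddof=1)
    kept = d[(d >= lo) & (d <= hi)]     # screen out inconsistent outliers
    return kept.mean()

# Nine images gain 0.1; one outlier gains 5.0 and would distort the mean
before = np.zeros(10)
after = np.array([0.1] * 9 + [5.0])
gain = ci_effective_gain(before, after)
```

Here the raw average gain is 0.59, dominated by the single outlier, while the screened effective gain of 0.1 reflects the typical, consistent improvement.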
, doi: 10.11999/JEIT180245
Abstract:
The Digital Video Broadcasting-Common Scrambling Algorithm (DVB-CSA) is a hybrid symmetric cipher, made up of a block cipher and a stream cipher, and is often used to protect MPEG-2 signal streams. This paper focuses on impossible differential cryptanalysis of the block cipher in DVB-CSA, called CSA-BC. By exploiting the details of the S-box, a 22-round impossible differential is constructed, which is two rounds longer than the previous best result. Furthermore, a 25-round impossible differential attack on CSA-BC is presented, which recovers 24 bits of the key. The data, computational and memory complexities of the attack are 2^53.3 chosen plaintexts, 2^32.5 encryptions and 2^24 units, respectively. The previous best impossible differential cryptanalysis of CSA-BC attacked 21 rounds and recovered 16 bits of the key; in terms of both the number of rounds and the recovered key, the new result significantly improves on it.
, doi: 10.11999/JEIT180268
Abstract:
Because the signals monitored by a spectrum monitoring system cannot be controlled and no prior knowledge is available, the source position can only be estimated by passive monitoring. To address this issue, a localization algorithm based on the Received Signal Strength Indication Difference (RSSID) is proposed, with Kalman filtering used to further improve its accuracy. The algorithm converts the RSSID between two base stations into the ratio of the distances from the signal source to those stations, constructs a matrix of location equations from these ratios, and then solves for the source position by the least squares method. Simulation results show that the proposed algorithm outperforms the classical RSSI localization algorithm, reduces the impact of environmental factors on positioning accuracy, and better serves positioning services while requiring fewer parameters; it can be effectively applied to spectrum monitoring systems. In addition, Kalman filtering effectively improves the system's positioning accuracy and achieves the expected positioning performance.
, doi: 10.11999/JEIT180237
Abstract:
In recent years, searchable encryption and fine-grained attribute-based access control have been widely used in cloud storage environments. Existing searchable attribute-based encryption schemes have notable flaws: they support only single-keyword search and provide no attribute revocation. Single-keyword search can also waste computing and bandwidth resources, since users must filter partially relevant results after retrieval. A verifiable multi-keyword search encryption scheme supporting attribute revocation is therefore proposed. The scheme allows users to verify the correctness of cloud-server search results, and supports revocation of user attributes in a fine-grained access control structure without updating keys or re-encrypting ciphertexts during the revocation stage. The scheme is proved secure under the decisional linear assumption, and analysis shows that it resists chosen-keyword attacks and preserves keyword privacy in the random oracle model, with high computational and storage efficiency.
, doi: 10.11999/JEIT180238
Abstract:
The pronounced orbit curvature of Medium Earth Orbit (MEO) SAR results in severe two-dimensional space variance in the received signals, so focusing MEO SAR data remains an open problem. A fourth-order polynomial is used to model the range history, and an azimuth two-step resampling method is proposed to address the azimuth variance. The first azimuth resampling, performed in the time domain, equalizes the azimuth chirp rate so that the CS/RMA algorithm can handle the space variance of the RCM. The second azimuth resampling corrects the remaining space variance of the Doppler parameters, including the range-azimuth coupled space variance of the azimuth chirp rate and the higher-order focusing parameters. The proposed method effectively addresses the azimuth space variance of the whole scene, making conventional frequency-domain focusing algorithms applicable to large-scene focusing. Finally, comparison results obtained by the proposed method and a reference method validate its effectiveness.
, doi: 10.11999/JEIT180187
Abstract:
The Firefly Algorithm (FA) may suffer from low convergence accuracy, depending on the complexity of the optimization problem. To overcome this drawback, a novel learning strategy named Orthogonal Opposition-Based Learning (OOBL) is proposed and integrated into FA. In OOBL, the opposite is first calculated by centroid opposition, making full use of the population's search experience and avoiding dependence on the coordinate system. Second, orthogonal opposite candidate solutions are constructed by orthogonal experiment design, combining useful information from each individual and its opposite. The proposed algorithm is tested on a standard benchmark suite and compared with several recently introduced FA variants. The experimental results verify the effectiveness of OOBL and show the outstanding convergence accuracy of the proposed algorithm on most of the test functions.
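The centroid-opposition step alone can be sketched as below (a minimal illustration with an assumed function name; the paper's full OOBL additionally builds orthogonal candidates from each individual and its opposite via orthogonal experiment design, which is omitted here):

```python
import numpy as np

def centroid_opposition(pop, lower, upper, fitness):
    """Sketch: reflect each individual through the population centroid,
    clip to the search bounds, and keep whichever of (individual,
    opposite) has the better (lower) fitness."""
    centroid = pop.mean(axis=0)
    opposites = np.clip(2.0 * centroid - pop, lower, upper)
    keep_opp = np.array([fitness(o) < fitness(p) for o, p in zip(opposites, pop)])
    return np.where(keep_opp[:, None], opposites, pop)
```

Because the reflection point is the population centroid rather than the midpoint of the box bounds, the opposite candidates adapt to where the population currently is, which is the coordinate-system independence the abstract refers to.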
, doi: 10.11999/JEIT180195
Abstract:
Heavy computational burden, complex training procedures, and the poor universality caused by manually set fixed thresholds are the main issues with most noisy-image quality evaluation algorithms based on domain transformation or machine learning. To address these issues, an improved spatial-domain noisy-image quality evaluation algorithm based on the masking effect is presented. First, according to a layer-by-layer progressive rule based on the Hosaka principle, the image is divided into sub-blocks of different sizes matching the frequency distribution of its content, and a masking weight is assigned to each sub-block. Noise is then detected through pixel gradient information extraction using a two-step strategy. After that, a preliminary evaluation value is obtained by using the masking weights to weight the noise pollution index of all sub-blocks. Finally, correction and normalization yield the whole-image quality parameter: the Modified No-Reference Peak Signal-to-Noise Ratio (MNRPSNR). The algorithm is tested on the LIVE and TID2008 image quality assessment databases, covering a variety of noise types. The results indicate that it is strongly competitive with current mainstream evaluation algorithms and significantly improves on the traditional algorithm, while demonstrating high consistency with human subjective perception and applicability to multiple noise types.
, doi: 10.11999/JEIT180262
Abstract:
A Novel Matrix Mapping (NMM) method is proposed for the synthesis of sparse rectangular arrays with multiple constraints. First, the sizes of the element coordinate matrices are adjusted to improve the Degree Of Freedom (DOF) of the elements, taking into account both the placeable number and the distributable range of elements. Then, a selection matrix is established to determine which elements should be turned off when the coordinate matrices are thinned. By establishing two different mapping functions, the NMM method overcomes the drawbacks of existing methods in terms of flexibility and effectiveness. Finally, comparison experiments verify the effectiveness of the proposed method, and the numerical validation shows that it outperforms existing methods in the design of sparse rectangular arrays.
, doi: 10.11999/JEIT180264
Abstract:
In Network Function Virtualization (NFV) environments, existing placement methods cannot guarantee the mapping cost while optimizing network delay, so a service function chain placement algorithm based on the IQGA-Viterbi learning algorithm is proposed. In training the Hidden Markov Model (HMM) parameters, the traditional Baum-Welch algorithm easily falls into local optima, so a quantum genetic algorithm is introduced to better optimize the model parameters. In each iteration, the improved algorithm maintains the diversity of feasible solutions and expands the scope of the spatial search by replicating the best-fitness population in equal proportion, thus improving the accuracy of the model parameters. In solving the hidden Markov chain, to overcome the problem that hidden sequences cannot be observed directly, the Viterbi algorithm is used to recover the implicit sequences exactly and to solve the optimal service path in the directed graph. Experimental results show that network delay and mapping cost are lower than with existing algorithms, and the request acceptance ratio is raised.
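The Viterbi step — recovering the most probable hidden sequence from an HMM — can be sketched generically as below (a textbook illustration, not the paper's IQGA-Viterbi variant; the placement work maps service paths onto such hidden state sequences):

```python
def viterbi(states, start_p, trans_p, emit_p, obs):
    """Sketch: most probable hidden-state sequence of an HMM via
    dynamic programming over the trellis."""
    # best (probability, path) ending in each state at the first step
    layer = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        nxt = {}
        for s in states:
            # extend the best predecessor path into state s
            prob, path = max(
                ((layer[r][0] * trans_p[r][s] * emit_p[s][o], layer[r][1])
                 for r in states),
                key=lambda t: t[0],
            )
            nxt[s] = (prob, path + [s])
        layer = nxt
    return max(layer.values(), key=lambda t: t[0])[1]
```

The dynamic program keeps only one surviving path per state per step, so the optimal sequence is found in time linear in the sequence length rather than exponential.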
, doi: 10.11999/JEIT180208
Abstract:
As a competitive Non-Orthogonal Multiple Access (NOMA) technique, Sparse Code Multiple Access (SCMA) efficiently improves the system spectral efficiency by combining high-dimensional modulation and sparse spread spectrum. To address existing issues in SCMA codebook design, an optimization design method for SCMA codebooks is proposed for both Rayleigh fading and Gaussian channels. In this method, by rotating the base constellation and the mother constellation, the minimum Euclidean distance between the projection points of the mother constellation on each dimension, and between the constellation points corresponding to each user on a single resource block in the total constellation, is maximized to improve the performance of the SCMA codebooks over Gaussian channels. On this basis, by rotating the constellations of the multiple users superimposed on each resource block, the corresponding minimum product distance and the Signal Space Diversity (SSD) order of the users' constellations are optimized. Finally, an additional diversity gain is achieved by using Q-coordinate interleaving to further improve performance over Rayleigh fading channels. Simulation results show that the proposed SCMA codebooks outperform Huawei's SCMA codebooks and Low Density Signature Multiple Access (LDS-MA) in both Gaussian and Rayleigh fading channels.
, doi: 10.11999/JEIT180256
Abstract:
For the qualitative and quantitative evaluation of electromagnetic environment complexity, this paper proposes a novel evaluation algorithm based on the fast S-transform and a time-frequency space model, which accounts for time complexity, frequency complexity and energy complexity simultaneously. The concept and computation methods of qualitative and quantitative evaluation degrees are also introduced. To overcome the limitations of traditional indicators, the F-norm and the root mean square are selected as two important evaluation indicators, which are advantageous for accurate evaluation. Simulation results show that the proposed method accurately and effectively reflects the intensity of electromagnetic interference; meanwhile, an interference experiment on a bus card confirms the correctness of the time-frequency space model, and the experimental test results verify the correctness of the proposed evaluators.
, doi: 10.11999/JEIT180239
Abstract:
Single-beacon location algorithms based on an additive noise model cannot accurately represent the actual characteristics of distance measurement, leading to model mismatch. A two-step location algorithm considering multiplicative noise is presented, combining least squares with a nonlinear fading filter. A range error model under multiplicative noise is established based on analysis of the effective sound velocity error. The nonlinear fading filter with a single fading factor is improved by introducing an attenuation factor that increases track continuity, and a least-squares-based pre-location step resolves the sensitivity of the improved algorithm to initial values. Simulation and experimental data show that the location precision of the proposed algorithm is clearly better than that of the extended Kalman filter under an additive noise background.
, doi: 10.11999/JEIT180306
Abstract:
To reduce the beamforming training cost and network delay and make the best use of the Beacon and S-CAP sub-periods in existing Terahertz Wireless Personal Access Network (TWPAN) directional MAC protocols, an Adaptive Directional MAC (AD-MAC) protocol for TWPAN is proposed. AD-MAC adaptively uses whole-network cooperative beam training in static scenarios, and lets network nodes respond quickly to beam training frames based on historical information in dynamic scenarios. A reverse listening strategy is used to reduce the collision probability of nodes in the same sector, and control frames and data frames are transmitted simultaneously in the Beacon and S-CAP slots using time-slot reuse. Theoretical analysis verifies the effectiveness of AD-MAC. Simulation results show that, compared with ENLBT-MAC, AD-MAC reduces the beamforming training cost by about 21.84% and the average network delay by 22.70% in the static scenario, and by about 18.7% and 13.07% respectively in the dynamic scenario.
, doi: 10.11999/JEIT180116
Abstract:
Considering the limitations of fuzzy comprehensive evaluation of early-warning radar intelligence quality in actual training, a quality evaluation method based on asymmetric proximity theory and multilevel fuzzy comprehensive evaluation is proposed. Through analysis of the production, transmission and usage environment of early-warning radar intelligence, the evaluation metrics are integrated into six classes, including timeliness, accuracy, completeness, continuity and objectivity; the factor set, weight set and comment set are then established, and the quality of radar intelligence is evaluated by combining asymmetric proximity with fuzzy comprehensive evaluation. The method and results not only enable comprehensive evaluation of radar intelligence quality and help identify the factors that determine it, but also provide a reference for evaluating the operational effectiveness of radar intelligence in complex environments.
, doi: 10.11999/JEIT180290
Abstract:
This paper combines the existing enhanced inter-cell interference coordination technology with the downlink joint transmission scheme of coordinated multi-point transmission to mitigate the serious cross-layer interference in 5G ultra-dense heterogeneous networks. Using tools from stochastic geometry, expressions for the outage probability, spectrum efficiency and average ergodic capacity of a two-layer ultra-dense heterogeneous network are derived. Simulation results show that the proposed joint interference coordination scheme not only requires fewer cooperating users than traditional coordinated multi-point transmission, but also reduces the user outage probability by 15% at 0 dB. Compared with enhanced inter-cell interference coordination, when the bias value is 10 dB, the spectrum efficiency of users in the extended area is improved by 35% and the average ergodic capacity of the entire network is increased by 3.4%.
, doi: 10.11999/JEIT180353
Abstract:
The correction of tropospheric delay is limited by the shortage of sounding data, which leads to low correction efficiency. This paper therefore proposes a model named Sa+GPT2w, combining the Saastamoinen and GPT2w models. Real-time correction of the Zenith Tropospheric Delay (ZTD) over China is realized by using the high-precision meteorological values provided by the GPT2w model, and the results are verified with measured data. Taking the 2015-2017 ZTD of the International GNSS Service (IGS) as a reference, the accuracy of the Sa+GPT2w model (bias: 1.661 cm, RMS: 4.711 cm) improves by 50.5%, 41.9% and 37.1% relative to the Sa+EGNOS, Sa+UNB3m and Hop+GPT2w models, respectively. Moreover, using the 2017 ZTD from the Global Geodetic Observing System (GGOS) as a standard, the Sa+GPT2w model (bias: 1.551 cm, RMS: 4.859 cm) improves the accuracy by 49.5%, 38.5% and 46.8% relative to the other three models, respectively. Finally, this paper analyzes the temporal and spatial distribution characteristics of the bias and RMS of the above models. The results provide a useful reference for correcting ZTD with different meteorological models in navigation and atmospheric refraction research over China.
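For orientation, the Saastamoinen zenith hydrostatic delay that the Sa+GPT2w model feeds with GPT2w meteorological values has a compact standard form; the sketch below shows that textbook formula only (function name is illustrative, and this is not the paper's combined Sa+GPT2w implementation):

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
    """Sketch of the standard Saastamoinen zenith hydrostatic delay (m):
    ZHD = 0.0022768 * P / (1 - 0.00266 cos(2*lat) - 0.00028 * H_km),
    with surface pressure P in hPa and station height H in metres."""
    f = 1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.00028 * height_m / 1000.0
    return 0.0022768 * pressure_hpa / f
```

At sea-level standard pressure this gives roughly 2.3 m of zenith delay, which is the dominant part of the total ZTD that the paper's model corrects in real time.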
, doi: 10.11999/JEIT180226
Abstract:
The blind separation performance bound of Paired Carrier Multiple Access (PCMA) mixed signals is a measure of the separability of the mixed signals and of the performance of separation algorithms. For the PCMA mixed signal, the spatial mapping between the modulation signal bits and symbols is constructed from the transmit signal model, and the maximum likelihood criterion is used to derive a lower-bound expression for the separation performance that is independent of the separation algorithm. Numerical results agree well with Viterbi simulation results under ideal conditions, verifying the soundness of the derived performance bounds.
, doi: 10.11999/JEIT180243
Abstract:
For high-precision frequency measurement of dynamic signals with a high fundamental frequency and a small frequency change in electronic measurement, a difference-frequency measurement method is introduced, and a novel dynamically adjustable multi-stage difference-frequency circuit structure is proposed. A fast difference-frequency measurement system is built on an FPGA, on which a Fast Fourier Transform (FFT) algorithm is designed to realize the system's data processing. Simulation and experimental results show that the multi-stage difference-frequency circuit enables high-precision frequency measurement, with results obtained directly from spectrum analysis, and that the system realizes fast FFT operation. Compared with the MATLAB software platform, the system has obvious advantages in data-processing efficiency. The FFT model structure can be dynamically adjusted to accommodate FFT operations of different sizes, and the system performance meets the requirements of the data acquisition system.
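The spectrum-analysis step reduces to locating the peak bin of an FFT, as in the sketch below (a generic illustration of FFT-based frequency reading, with an illustrative function name; the point of the paper's difference-frequency front end is that the small residual frequency can then be read with fine relative resolution):

```python
import numpy as np

def fft_peak_frequency(x, fs):
    """Sketch: locate the strongest bin of the windowed one-sided
    spectrum and map it back to hertz; resolution is fs / len(x)."""
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return np.argmax(spectrum) * fs / len(x)
```

Mixing a high fundamental down to a small difference frequency before this step means the fixed bin width fs/N becomes a much smaller fraction of the measured value, which is the precision gain of the multi-stage difference-frequency structure.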
, doi: 10.11999/JEIT180142
Abstract:
Recommendation systems help people make decisions conveniently. However, few studies consider the effect of removing irrelevant noise users and retaining a small number of core users to make recommendations. A new core-user extraction method based on trust relationships and interest similarity is proposed. First, the trust and interest similarity between every pair of users are calculated and sorted; then candidate core-user sets are selected by two strategies based on the frequency and position weights with which users appear in nearest-neighbor lists; finally, core users are screened out according to their recommendation ability. Experimental results show that core-user-based recommendation is effective: using only 20% of users as the core can reach more than 90% of the recommendation accuracy, and recommending via core users can resist the negative effects of attacks on the recommendation system.
, doi: 10.11999/JEIT180075
Abstract:
With the rapid development of Internet applications, conventional Routing and Spectrum Assignment (RSA) faces new challenges. An Elastic Optical Network (EON) integrating Degraded Service (DS) technology provides a new direction for reducing the blocking rate and assuring Quality of Experience (QoE). To counter the inefficient spectrum usage and revenue decline caused by DS, a Mixed Integer Linear Programming (MILP) model is proposed with a joint objective that minimizes spectrum consumption together with the priorities and DS frequency of online services. A dynamic RSA algorithm based on differentiated DS and adaptive modulation is then proposed, which considers service-priority differentiation, adaptive modulation and DS technology. A DS loss function and a DS window selection strategy are designed to differentiate service levels, and suitable spectrum locations and resources are assigned to services about to be blocked. A network revenue function considering the balance between spectrum and revenue is designed to achieve efficient utilization of spectrum resources, reduce the impact of degradation, and enhance network revenue. Simulation results verify the advantages of the proposed algorithm in terms of blocking rate, network profit, and other metrics.
, doi: 10.11999/JEIT180343
Abstract:
In wireless relay networks, random transmission delays among relay nodes lead to substantial performance degradation, for which the delay-tolerant Distributed Linear Convolutive Space-Time Code (DLC-STC) was proposed; however, its diversity gain on fast fading Rayleigh channels has been unclear. This paper analyzes the diversity gain of the DLC-STC on fast fading Rayleigh channels. It is shown that the DLC-STC achieves full asynchronous cooperative diversity order with Maximum Likelihood (ML) receivers on fast fading Rayleigh channels, although it was originally proposed for slow fading channels. The numerical results verify the theoretical analysis and show that MMSE-DFE receivers can collect the same diversity order as ML receivers on fast fading Rayleigh channels.
, doi: 10.11999/JEIT180063
Abstract:
To solve the problems of low resource utilization rate, high energy consumption and poor user service quality in the existing virtualized Cloud Radio Access Network (C-RAN), a virtual resource allocation mechanism based on energy consumption and delay is proposed. According to the network and traffic characteristics of the virtualized C-RAN, considering the resource constraints and proportional fairness, an energy consumption and delay optimization model is established. Furthermore, a heuristic algorithm is used to allocate resources for different types of virtual C-RAN and user virtual base stations to complete resource global optimization configuration. Simulation results show that the proposed resource allocation mechanism can effectively save energy by 62.99% and reduce the latency by 32.32% while improving the network resource utilization.
, doi: 10.11999/JEIT180223
Abstract:
Firewall policy is defined as access control rules in Software Defined Networking (SDN), and distributing these Access Control List (ACL) rules across the network can improve quality of service. To reduce the number of rules placed in the network, a Heuristic Algorithm of Rules Allocation (HARA) based on rule multiplexing and merging is proposed. Considering the TCAM storage space of commodity switches and the traffic load on links connected to endpoint switches, a mixed integer linear programming model that minimizes the number of rules placed in the network is established, and the algorithm solves the rule placement problem for multiple unicast routing sessions of different throughputs. Compared with nonRM-CP algorithms, simulations show that HARA saves up to 18% of TCAM space and reduces bandwidth utilization by 13.1% on average.
, doi: 10.11999/JEIT180048
Abstract:
To solve the unreasonable virtual resource allocation caused by service uncertainty and information feedback delay in wireless virtualized networks, an online adaptive virtual resource allocation algorithm is proposed based on Auto Regressive Moving Average (ARMA) prediction. First, minimization of the virtual network cost is studied by jointly allocating time-frequency resources and buffer space while guaranteeing the overflow probability of each virtual network. Second, considering the different demands of virtual networks for different resources, a dynamic resource scheduling mechanism with multiple time scales is designed: on the slow time scale, the buffer space reservation strategy is driven by the ARMA prediction information, while on the fast time scale the virtual networks are sorted according to the overflow probability derived from the large deviation principle and the time-frequency resources are dynamically scheduled, so as to meet the service demand. Simulation results show that the algorithm can effectively reduce the bit loss rate and improve the utilization of physical resources.
, doi: 10.11999/JEIT180050
Abstract:
Deep learning based ship detection methods place strict demands on the quantity and quality of SAR images, and collecting large volumes of images and producing the corresponding labels costs considerable manpower and money. Based on the existing SAR Ship Detection Dataset (SSDD), this paper addresses the problem of insufficient utilization of the dataset. The algorithm builds on a Generative Adversarial Network (GAN) and Online Hard Example Mining (OHEM). A spatial transformer network transforms the feature map to generate feature maps of ship samples at different sizes and rotation angles, improving the adaptability of the detector. OHEM is used to discover and fully exploit hard examples during backward propagation, removing the constraint on the ratio of positive to negative samples and improving sample utilization. Experiments on SSDD show that these two improvements raise detection performance by 1.3% and 1.0% respectively, and by 2.1% when combined. Neither method depends on a specific detection algorithm; they only increase training time and add no computation at test time, so they have strong generality and practicability.
, doi: 10.11999/JEIT180203
Abstract:
At present, microwave radiometers suffer from serious Radio Frequency Interference (RFI), especially at low frequencies. In this paper, an RFI detection algorithm is proposed for an L-band phased array radiometer used to measure sea surface salinity and soil moisture. First, the L-band phased array radiometer is introduced briefly. Second, the RFI detection algorithm is described in detail; it consists of a raw RFI flag, a first moving-average window RFI flag, a second moving-average window RFI flag and an expanded RFI flag. Finally, experimental data obtained by the L-band phased array radiometer are processed with the proposed algorithm. The results indicate that it can effectively detect RFI-contaminated abnormal data and exhibits good detection ability.
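The staged flagging idea can be sketched as below (a simplified illustration with a single moving-average stage rather than the paper's two, and with illustrative thresholds, window size and function name):

```python
import numpy as np

def rfi_flags(samples, raw_thresh, window, win_thresh):
    """Sketch of staged RFI flagging: a raw flag on per-sample deviation
    from the median background, a moving-average window flag on windowed
    mean deviations, and finally expansion to neighbouring samples."""
    x = np.asarray(samples, dtype=float)
    base = np.median(x)
    raw = np.abs(x - base) > raw_thresh
    # moving-average flag: windowed mean deviates from the background
    kernel = np.ones(window) / window
    smooth = np.convolve(x - base, kernel, mode="same")
    win = np.abs(smooth) > win_thresh
    flags = raw | win
    # expand flags by one sample on each side (wraps at the ends)
    return flags | np.roll(flags, 1) | np.roll(flags, -1)
```

The moving-average stage catches weak but sustained interference that individual samples would not trigger, and the expansion guards against contamination bleeding into adjacent samples.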
, doi: 10.11999/JEIT180280
Abstract:
Abnormal pixels in hyperspectral images often have a low probability of occurrence and lie scattered outside the background data cloud. How to detect these abnormal pixels automatically is an important research direction in hyperspectral image processing. Classical hyperspectral anomaly detection methods are usually statistical. The widely used RXD algorithm gives the anomaly distribution directly from the second-order statistics of the image, but its disadvantage is that it does not take the higher-order statistics into account. Anomaly detection based on Independent Component Analysis (ICA) exploits the sensitivity of higher-order statistics to outliers, but it requires an iterative process to first extract the anomalous components, which are then used for detection. This paper proposes an anomaly detection method based on the cokurtosis tensor. It needs no prior extraction of anomalous components: it can directly score the observed pixels and give the distribution of abnormal pixels. Experimental results on both simulated and real data show that it detects abnormal pixels while better suppressing the background information, thereby reducing the false alarm rate and improving detection accuracy.
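For reference, the second-order baseline that the paper improves on — the classical RX detector — scores each pixel by its Mahalanobis distance from the global background statistics, as in this sketch (the paper's contribution replaces these second-order statistics with a cokurtosis tensor, which is not shown here):

```python
import numpy as np

def rx_detector(cube):
    """Sketch of the classical (global) RX anomaly detector: the
    Mahalanobis distance of every pixel spectrum from the scene mean
    under the scene covariance. Input cube has shape (H, W, bands)."""
    h, w, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    centered = pixels - mu
    # per-pixel quadratic form (x - mu)^T C^{-1} (x - mu)
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(h, w)
```

Pixels far outside the background cloud receive large scores; thresholding the score map yields the anomaly mask.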
, doi: 10.11999/JEIT180098
Abstract:
An improved method is proposed for compensating the distortion caused by mismatches in Time-Interleaved Analog-to-Digital Converters (TIADCs). Offset and gain errors are compensated through the estimated error parameters, and sampling-time error is compensated by a simplified Lagrange interpolation algorithm. The compensation method is implemented in an FPGA with a low-complexity fixed-point algorithm, and online calibration of multi-channel ADC sampling data is implemented on the TIADC hardware platform. Experimental results show that the method improves the Spurious-Free Dynamic Range (SFDR) of the sampled data up to 51 dB in simulation and up to 45 dB in the hardware implementation. While maintaining the error estimation precision and compensation effect, the method not only reduces the computational complexity of the algorithm but also keeps the compensation structure independent of the number of TIADC channels.
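The Lagrange-interpolation idea behind the sampling-time correction can be sketched in floating point as below (an illustration of the standard Lagrange fractional-delay FIR filter, not the paper's simplified fixed-point FPGA version; function name and filter order are illustrative):

```python
import numpy as np

def lagrange_delay_filter(d, order=3):
    """Sketch: FIR coefficients of a Lagrange fractional-delay filter,
    h[k] = prod_{i != k} (d - i) / (k - i), approximating a delay of
    d samples (best behaved when d is near order / 2)."""
    h = np.ones(order + 1)
    for k in range(order + 1):
        for i in range(order + 1):
            if i != k:
                h[k] *= (d - i) / (k - i)
    return h
```

Convolving a channel's samples with such a filter shifts its effective sampling instant by a fraction of a sample, which is how the timing skew between interleaved channels is equalised.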
, doi: 10.11999/JEIT180342
Abstract:
The security of wireless transmission is a significant bottleneck in the development of the Internet of Things (IoT). The limited computing capability and hardware configuration of IoT terminals, together with eavesdroppers equipped with massive Multiple-Input Multiple-Output (MIMO), bring new challenges to physical layer security. To solve this problem, a lightweight noise injection scheme that can combat a massive MIMO eavesdropper is proposed. First, the noise injection scheme is introduced along with the corresponding secrecy analysis. Then, a closed-form expression for the throughput is derived based on the proposed scheme, and the slot allocation and power allocation coefficients are optimized. Analytical and simulation results show that the proposed noise injection scheme can secure private information transmission through proper design of the IoT system parameters.
, doi: 10.11999/JEIT180254
[Abstract](61) [FullText HTML](20) [PDF 3226KB](0)
Abstract:
In this paper, two novel Artificial Magnetic Conductor (AMC) structures, based on a circular loop patch and substrate, are designed to realize a 180° reflection phase difference over a wide frequency band. The reflection phase property of these two AMCs is used to redirect the scattered fields of a radar target and thereby reduce its Radar Cross Section (RCS). The RCS reduction is realized by covering the target with a chessboard surface composed of the two proposed AMC structures, so wideband RCS reduction can be achieved as well. Compared with a same-sized metallic surface, the proposed chessboard surface reduces the RCS drastically from 8 to 20 GHz under normally incident waves, and the RCS is also reduced under obliquely incident waves. Meanwhile, the surface can also serve as an antenna: by carefully designing the feed network, a low-profile metasurface antenna is obtained. The simulated impedance-matching band is from 9.08 to 10.30 GHz. Excellent agreement is obtained between simulation and measurement for both the metasurface antenna and the chessboard surface. This approach enables the integrated design of antenna and metasurface, achieving RCS reduction while maintaining the radiation properties.
, doi: 10.11999/JEIT180427
[Abstract](51) [FullText HTML](21) [PDF 2692KB](1)
Abstract:
As an efficient anti-interference technique, Luby Transform (LT) codes are applied to cognitive radio systems for reliable data transmission of secondary users. Encoding and decoding are critical issues for the anti-interference performance of LT codes. To improve the reliability and speed of data transmission, a novel encoding and decoding method for LT codes, Combined Poisson Robust Soliton Distribution-Hierarchical (CPRSD-H), is proposed for cognitive radio systems. In the encoding process, the encoder first produces encoded symbols and the generator matrix based on CPRSD, and then uses the column vectors corresponding to degree-1 and degree-2 symbols in the generator matrix to carry dual information: the relationship between the degree-1 and degree-2 encoded symbols and their connected input symbols, and part of the original data. Conversely, in the decoding process, the decoder first uses the Belief Propagation (BP) algorithm to decode with the first kind of information, and then corrects some unrecovered bits with the second. Simulation results show that the proposed CPRSD-H method, applied to cognitive radio systems, can significantly reduce the Bit Error Rate (BER) of LT codes while improving the goodput of secondary users and the encoding and decoding speed.
, doi: 10.11999/JEIT180146
[Abstract](50) [FullText HTML](25) [PDF 566KB](1)
Abstract:
Proxy re-encryption plays an important role in encrypted data sharing in cloud computing. Currently, almost all constructions of identity-based proxy re-encryption over lattices are in the random oracle model. To address this problem, an efficient identity-based proxy re-encryption scheme over lattices is constructed in the standard model, where the identity string is mapped to a single vector, yielding a shorter secret key for users. The proposed scheme is bidirectional and multi-use; moreover, it is semantically secure against adaptive chosen-identity and chosen-plaintext attacks based on the Learning With Errors (LWE) problem in the standard model.
, doi: 10.11999/JEIT180414
[Abstract](53) [FullText HTML](25) [PDF 1299KB](2)
Abstract:
A new Joint Blind Source Separation (J-BSS) algorithm is proposed based on joint diagonalization of fourth-order cumulant tensors. The algorithm first constructs a set of fourth-order tensors by computing the fourth-order cross-cumulants of the multiset signals. Then, based on a Jacobi-type successive rotation strategy, the highly nonlinear optimization problem of joint tensor diagonalization is transformed into a series of simple sub-optimization problems, each admitting a closed-form solution. The multiset mixing matrices are then updated via alternating iterations, which jointly diagonalize the data tensors. Simulation results show that the proposed algorithm exhibits good convergence behavior and higher accuracy than existing BSS and J-BSS algorithms of similar type. In addition, the algorithm works well in a real-world fetal ECG separation application.
, doi: 10.11999/JEIT180451
[Abstract](51) [FullText HTML](25) [PDF 851KB](1)
Abstract:
For full-duplex two-way relay networks, a two-way relay transmission scheme that is robust to the relay's residual self-interference is proposed. First, the residual self-interference signal at the relay is analyzed and modeled as an equivalent multipath signal, and the cyclic prefix of OFDM is used to combat this equivalent multipath and reduce the impact of the residual self-interference. Based on the equivalent multipath model, the optimal relay amplification factor for bidirectional full-duplex relay transmission is derived with the aim of maximizing the system SINR. Finally, simulations verify the correctness of the optimal amplification factor and the effectiveness of the proposed two-way relay transmission scheme.
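The cyclic-prefix mechanism the scheme relies on can be sketched numerically (an illustrative toy, not the paper's relay model): with a cyclic prefix at least as long as the channel memory, a multipath channel reduces to one complex gain per subcarrier, so an "equivalent multipath" interference can be removed by one-tap frequency-domain equalization. All symbol and channel values below are assumptions.

```python
import cmath

N = 8                                   # subcarriers

def dft(x, inverse=False):
    """Naive N-point DFT (forward unnormalized, inverse divides by N)."""
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N)
               for n in range(N)) for k in range(N)]
    return [v / N for v in out] if inverse else out

X = [1, -1, 1, 1, -1, 1, -1, -1]        # frequency-domain symbols
x = dft(X, inverse=True)                # OFDM time-domain block
cp = 3
tx = x[-cp:] + x                        # prepend cyclic prefix

h = [0.9, 0.4, 0.2]                     # 3-tap "equivalent multipath" channel
rx = [sum(h[l] * tx[n - l] for l in range(len(h)) if n - l >= 0)
      for n in range(len(tx))]          # linear convolution with the channel

y = rx[cp:cp + N]                       # drop CP -> only circular convolution remains
H = dft(h + [0] * (N - len(h)))         # channel frequency response
X_hat = [Yk / Hk for Yk, Hk in zip(dft(y), H)]   # one-tap equalizer per subcarrier
```

After cyclic-prefix removal the linear convolution has become circular, so dividing by the channel frequency response recovers the transmitted symbols exactly.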

Display Method:

2018, 40(12)
[Abstract](91) [PDF 189KB](11)
Abstract:
2018, 40(12): 2795 -2803   doi: 10.11999/JEIT180229
[Abstract](157) [FullText HTML](105) [PDF 3275KB](34)
Abstract:
This paper presents a novel unsupervised image classification method for Polarimetric Synthetic Aperture Radar (PolSAR) data. The proposed method is based on a discriminative clustering framework that explicitly relies on a discriminative supervised classification technique to perform unsupervised clustering. To implement this idea, an energy function for unsupervised PolSAR image classification is designed by combining a supervised Softmax Regression (SR) model with a Markov Random Field (MRF) smoothness constraint. In this model, both the pixelwise class labels and the classifiers are taken as unknown variables to be optimized. Starting from initial class labels generated by Cloude-Pottier decomposition and the K-Wishart distribution hypothesis, the classifiers and class labels are iteratively optimized by alternately minimizing the energy function with respect to each. Finally, the optimized class labels are taken as the classification result, and the classifiers for the different classes are obtained as a by-product. The approach is applied to real PolSAR benchmark data, and extensive experiments show that it can effectively classify PolSAR images in an unsupervised way, producing higher accuracies than the compared state-of-the-art methods.
2018, 40(12): 2804 -2811   doi: 10.11999/JEIT180263
[Abstract](119) [FullText HTML](75) [PDF 6401KB](29)
Abstract:
In order to improve the fusion quality of panchromatic and multi-spectral images, a remote sensing image fusion method based on optimized dictionary learning is proposed. First, K-means clustering is applied to the image blocks in the image database, and blocks with high similarity are partially removed to improve training efficiency; while obtaining a universal dictionary, similar dictionary atoms and rarely used atoms are marked for further processing. Second, the similar and rarely used dictionary atoms are replaced by the panchromatic image blocks that differ most from the original sparse model, yielding an adaptive dictionary. The adaptive dictionary is then used to sparsely represent the intensity component and the panchromatic image; the modulus maxima coefficients among the sparse coefficients of each image block are separated out as maximal sparse coefficients, and the remaining ones are called residual sparse coefficients. Each part is fused with a different fusion rule to preserve more spectral and spatial detail information. Finally, the inverse IHS transform is employed to obtain the fused image. Experiments demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than its counterparts.
2018, 40(12): 2812 -2819   doi: 10.11999/JEIT180209
[Abstract](115) [FullText HTML](74) [PDF 2879KB](21)
Abstract:
Vehicle detection is one of the hotspots in remote sensing image analysis, and the intelligent extraction and identification of vehicles are of great significance to traffic management and urban construction. Existing Convolutional Neural Network (CNN) based vehicle detection methods in remote sensing are complicated, and most perform poorly in dense areas. To solve these problems, an end-to-end neural network model named DF-RCNN is presented to address the detection difficulty in dense areas. First, the model unifies the resolution of the deep and shallow feature maps and combines them. Then, deformable convolution and deformable RoI pooling are used to learn the geometric deformation of targets by adding only a small number of parameters and calculations. Experimental results show that the proposed model has good detection performance for vehicle targets in dense areas.
2018, 40(12): 2820 -2825   doi: 10.11999/JEIT180177
[Abstract](267) [FullText HTML](91) [PDF 1402KB](18)
Abstract:
The azimuth resolution of traditional synthetic aperture radar is provided only by the synthetic aperture. In the forward-looking area, however, Doppler diversity is limited, so imaging performance declines rapidly, and forward-looking imaging also suffers from Doppler ambiguity. In this paper, an adaptive beamforming method with spatial constraint under an ideal linear track is proposed. The imaging quality of the forward-looking region is effectively improved by combining a real-aperture array with the synthetic aperture, and the Doppler ambiguity is resolved using the array spatial domain. First, the echo data are processed by high-squint SAR imaging to obtain the ambiguous image. Then beamforming is performed, and the channel images are weighted and coherently accumulated, so as to resolve the Doppler ambiguity and enhance the azimuth resolution. Simulation confirms the validity of the proposed approach.
2018, 40(12): 2826 -2833   doi: 10.11999/JEIT180039
[Abstract](71) [FullText HTML](45) [PDF 1669KB](4)
Abstract:
In multistatic radar, a Censored Data-Based Decentralized Fusion (CDDF) scheme is proposed to address the issue of fusing local observations under communication constraints. The local likelihood ratio is calculated from the observation of a moving target immersed in clutter, where each local radar site possesses a coherent multi-channel array. Each local radar site transmits if and only if its observation's likelihood ratio exceeds its local threshold, which determines the communication rate. By virtue of the Neyman-Pearson lemma, the global test statistic is obtained by combining the received censored data, and the fusion center makes a global decision by comparing this statistic with a global threshold. Besides, closed-form expressions for the probability of false alarm and the probability of detection are derived. Numerical simulation shows that CDDF outperforms the "OR" rule, while approaching the performance of Centralized Fusion (CF) as the communication rate increases.
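The censoring idea can be shown with a toy numerical sketch (simplified scalar Gaussian observations rather than the paper's coherent multi-channel clutter model; the mean, thresholds and observations are assumptions for illustration):

```python
# Toy censored-data decentralized fusion: H1: x ~ N(MU, 1) vs H0: x ~ N(0, 1).
# Sites transmit their log-likelihood ratios only when locally significant.

MU = 1.0

def log_lr(x):
    """Log-likelihood ratio of one scalar observation under the toy model."""
    return MU * x - MU ** 2 / 2.0

def fuse(observations, local_tau, global_tau):
    """Each site transmits its log-LR only if it exceeds local_tau; the
    fusion center sums the received values and thresholds the sum."""
    sent = [log_lr(x) for x in observations if log_lr(x) > local_tau]
    rate = len(sent) / len(observations)      # fraction of sites that transmitted
    decision = 1 if sum(sent) >= global_tau else 0
    return decision, rate

decision, rate = fuse([2.0, 0.1, 1.5], local_tau=0.5, global_tau=2.0)
```

Raising `local_tau` lowers the communication rate at the cost of discarding weak evidence, which is the trade-off the abstract evaluates against the "OR" rule and centralized fusion.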
2018, 40(12): 2834 -2840   doi: 10.11999/JEIT180079
[Abstract](179) [FullText HTML](97) [PDF 1734KB](17)
Abstract:
To improve the resolution of a SAR system, the radar bandwidth should be increased. By means of synthetic bandwidth, a wide bandwidth can be achieved with less hardware complexity. For a frequency-band-synthesis SAR system, the frequency difference between subbands must be accurately known; in real measurements, however, it may drift and must be estimated from the raw data. In this paper, an effective method is proposed to estimate the frequency difference error and compensate the resulting phase error. The frequency difference drift is estimated from the relation between the interferometric phase of the subband echoes and the frequency difference: interferometry between the subband images yields an interferometric image in which the phase varies with range, with a slope proportional to the frequency difference, and is redundant along azimuth. Based on this azimuth redundancy, a new vector is formed, which is a sinusoidal signal whose frequency corresponds to the relative range shift; frequency analysis then yields the frequency difference error. With the proposed compensation, the SAR image quality is improved. The effectiveness of the method is verified by processing real SAR data.
2018, 40(12): 2841 -2847   doi: 10.11999/JEIT180097
[Abstract](305) [FullText HTML](85) [PDF 1635KB](10)
Abstract:
In passive bistatic radar systems, both zero and non-zero Doppler-shift multipath clutter exists in the surveillance channel and affects target detection. Temporal adaptive iterative filters such as Least Mean Square (LMS), Normalized Least Mean Square (NLMS) and Recursive Least Square (RLS) are often used to reject multipath clutter in passive bistatic radar, but they are only applicable to zero Doppler-shift multipath clutter. To handle both zero and non-zero Doppler-shift multipath clutter, and exploiting the orthogonal frequency division multiplexing waveform features of digital broadcast television signals, a clutter rejection algorithm based on carrier-domain adaptive iterative filtering is proposed. The algorithm uses the correlation of multipath clutter with the same Doppler shift at the same carrier frequency in the subcarrier domain to reject both types of clutter. Simulation and experimental data processing results show the superiority of the proposed algorithm.
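The baseline the paper improves on can be sketched as a time-domain NLMS clutter canceller (illustrative only: the paper moves the adaptation into the subcarrier domain to handle non-zero-Doppler clutter, which is not reproduced here; the signals and step size are assumptions):

```python
import math

# Time-domain NLMS canceller: subtract the reference-correlated (zero-Doppler)
# clutter from the surveillance channel.

def nlms_cancel(ref, surv, taps=4, mu=0.5, eps=1e-8):
    w = [0.0] * taps
    residual = []
    for n in range(len(surv)):
        x = [ref[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))     # clutter estimate
        e = surv[n] - y                              # surveillance minus estimate
        norm = sum(xk * xk for xk in x) + eps
        w = [wk + mu * e * xk / norm for wk, xk in zip(w, x)]
        residual.append(e)
    return residual

ref = [math.sin(0.3 * n) for n in range(2000)]       # direct-path reference
surv = [0.8 * r for r in ref]                        # surveillance = scaled clutter
res = nlms_cancel(ref, surv)
```

With the surveillance channel containing only a scaled copy of the reference, the residual energy collapses after adaptation; a Doppler-shifted copy would not be cancelled this way, which motivates the carrier-domain extension.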
2018, 40(12): 2848 -2853   doi: 10.11999/JEIT180294
[Abstract](82) [FullText HTML](54) [PDF 1452KB](8)
Abstract:
The separable probability is a significant criterion for evaluating the resolution characteristics of SAR distributed targets. By refining the separability condition and taking the statistical characteristics of SAR distributed targets into consideration, a new separability criterion for targets is proposed, and a precise calculation method for the separable probability is derived. In order to simplify the calculation, an approximate method with lower computational complexity is also presented. Simulation results show that the proposed method is consistent with the actual situation, reflects the effect of the statistical characteristics of SAR distributed targets on the resolution characteristics, and can provide theoretical support for SAR image quality evaluation and system parameter design.
2018, 40(12): 2854 -2860   doi: 10.11999/JEIT180115
[Abstract](78) [FullText HTML](47) [PDF 1648KB](7)
Abstract:
To better link scattering centers with target structures, a forward method is presented to derive the component-level 3-D scattering center positions of a radar target under single- and double-scattering mechanisms based on the target's geometric model. Under the double-scattering mechanism, the principle and method for determining the equivalent ray position are introduced, particularly for the strong-scattering situation. For other, weak-scattering situations, an equivalent transformation is used to convert them to the strong-scattering case. Finally, this position derivation method is applied to models of a right dihedral angle, an obtuse dihedral angle, SLICY and the T72 tank to derive and analyze their component-level scattering center positions. The corresponding simulated or measured SAR images are used for comparison to validate the accuracy of the method.
2018, 40(12): 2861 -2867   doi: 10.11999/JEIT180212
[Abstract](98) [FullText HTML](60) [PDF 1448KB](16)
Abstract:
This paper proposes threat-assessment-based sensor control using a random-finite-set multi-target filter. First, the general information-theoretic sensor control approach is presented in the framework of the Partially Observable Markov Decision Process (POMDP), and the factors that affect the target threat degree are analyzed in combination with the target movement situation. Then, the multi-target state is estimated with a particle multi-target filter, the multi-target threat level is established according to the multi-target motion situation, and the distribution characteristics of the maximum-threat target are extracted from the multi-target distribution. Finally, the Rényi divergence is used as the evaluation index for sensor control, and the final control policy is solved with maximum information gain as the criterion. Simulation results verify the feasibility and effectiveness of the proposed method.
2018, 40(12): 2868 -2873   doi: 10.11999/JEIT180147
[Abstract](227) [FullText HTML](145) [PDF 1536KB](40)
Abstract:
Wi-Fi indoor localization is one of the current research hotspots in mobile computing. However, the conventional location-fingerprinting-based localization scheme does not consider the diversity of Wi-Fi signal distributions in complicated indoor environments, resulting in low robustness of the indoor localization system. To address this problem, a new hybrid hypothesis test of signal distribution for Wi-Fi indoor localization is proposed. Specifically, the Jarque-Bera (JB) test is conducted to examine the normality of the Wi-Fi signal distribution at each Reference Point (RP). Then, according to the different Wi-Fi signal distributions, hybrid Mann-Whitney U test and t-test approaches are used to construct the set of matching reference points, realizing area localization. Finally, by applying K-Nearest Neighbor (KNN) matching of reference points in the located area, the location coordinate of the target is obtained. The experimental results indicate that the proposed approach achieves higher localization accuracy and stronger system robustness than conventional Wi-Fi indoor localization approaches.
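The distribution-screening step can be sketched as follows (data, threshold and routing labels are hypothetical; the paper's matching-test details are not reproduced). The Jarque-Bera statistic measures deviation from normality through skewness and excess kurtosis, and the result routes each reference point to a parametric or a rank-based matching test:

```python
# Jarque-Bera screening: large JB -> normality rejected -> rank-based test.

def jarque_bera(samples):
    n = len(samples)
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n
    m3 = sum((x - mean) ** 3 for x in samples) / n
    m4 = sum((x - mean) ** 4 for x in samples) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

CHI2_95 = 5.99   # 95% critical value of chi-squared with 2 degrees of freedom

def choose_test(rss_samples):
    """Route an RP's RSS samples to a parametric or rank-based matching test."""
    if jarque_bera(rss_samples) > CHI2_95:
        return "mann-whitney-u"          # distribution looks non-normal
    return "t-test"                      # normality not rejected

symmetric_rp = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5] * 7   # near-symmetric RSS
skewed_rp = [0.0] * 45 + [10.0] * 5                         # heavy-tailed RSS
```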
2018, 40(12): 2874 -2880   doi: 10.11999/JEIT180225
[Abstract](309) [FullText HTML](156) [PDF 1722KB](42)
Abstract:
Focusing on the performance degradation of adaptive beamformers caused by target steering vector constraint errors, a robust beamforming algorithm with joint iterative estimation of the steering vector and covariance matrix is proposed. First, an initial target steering vector is obtained by sparse reconstruction; after eliminating the estimated target signal from the sampling covariance matrix, the covariance matrix is initialized. Then, based on a steering vector error optimization model, the algorithm uses convex optimization to jointly and iteratively estimate the target steering vector and the interference-plus-noise covariance matrix. Finally, the adaptive weight vector is obtained from the steady estimates of the steering vector and covariance matrix. Simulation results show that the output signal-to-interference-plus-noise ratio is improved in the presence of target steering vector constraint errors.
2018, 40(12): 2881 -2888   doi: 10.11999/JEIT171058
[Abstract](47) [FullText HTML](34) [PDF 1078KB](2)
Abstract:
In two-dimensional Direction Of Arrival (DOA) estimation of coherently distributed noncircular sources, exploiting the noncircular property expands the problem dimension and thus incurs large computational complexity, and existing low-complexity algorithms all require an additional parameter pairing procedure. To solve these problems, a rapid DOA estimation algorithm with automatic pairing is proposed for coherently distributed noncircular sources based on a cross-correlation propagator, considering an L-shaped array. Firstly, the extended array manifold model is established by exploiting the noncircularity of the signal, and it is proved that approximate rotational invariance relationships exist in the Generalized Steering Vectors (GSVs) of the two subarrays of the L-shaped array; at the same time, the additive noise can be eliminated through the cross-correlation matrix of the array output signals. Finally, on the basis of these approximate rotational invariance relationships, the central azimuth and elevation DOAs are obtained by the propagator method. Theoretical analysis and simulation experiments show that, requiring neither spectrum searching nor eigenvalue decomposition of the sample covariance matrix, the proposed algorithm has low computational complexity and automatically pairs the estimated central azimuth and elevation DOAs. In addition, compared with the existing propagator method for coherently distributed noncircular sources, it achieves higher estimation accuracy at a small cost in complexity.
2018, 40(12): 2889 -2895   doi: 10.11999/JEIT180186
[Abstract](73) [FullText HTML](50) [PDF 2704KB](14)
Abstract:
There are a large number of indoor WiFi signals that can be used for indoor positioning. Although many WiFi indoor positioning technologies have been proposed, their positioning accuracy still does not meet practical application requirements. To address this problem, an Adaptive Affinity Propagation Clustering (AAPC) algorithm is proposed to improve the clustering quality of WiFi fingerprints and thus the positioning accuracy. The AAPC algorithm generates different clustering results by dynamically adjusting its parameters; cluster validity indices are then used to select the best one. A large amount of real environmental data was collected for testing, and the experimental results show that the clustering results generated by the AAPC algorithm yield higher positioning accuracy.
2018, 40(12): 2896 -2904   doi: 10.11999/JEIT180241
[Abstract](87) [FullText HTML](53) [PDF 3530KB](15)
Abstract:
To solve the problems in current co-saliency detection algorithms, a novel co-saliency detection algorithm is proposed that combines a fully convolutional neural network with a global optimization model. First, a fully convolutional saliency detection network is built based on VGG16Net; the network simulates the human visual attention mechanism and extracts the salient region of an image at the semantic level. Second, based on the traditional saliency optimization model, a global co-saliency optimization model is constructed, which propagates and shares superpixel saliency values within and between images through superpixel matching, so that the final saliency map has more consistent co-saliency values. Third, an inter-image saliency propagation constraint parameter is introduced to overcome the disadvantages of superpixel mismatching. Experimental results on public test datasets show that the proposed algorithm outperforms current state-of-the-art methods in both detection accuracy and detection efficiency, and is strongly robust.
2018, 40(12): 2905 -2912   doi: 10.11999/JEIT180180
[Abstract](198) [FullText HTML](91) [PDF 2785KB](11)
Abstract:
For sound event detection in low Signal-to-Noise Ratio (SNR) environments, a method is proposed based on discrete cosine transform coefficients extracted from the multi-band power distribution image. First, using gammatone spectrogram analysis, the sound signal is transformed into a multi-band power distribution image. Next, 8×8 blocking and the discrete cosine transform are applied to analyze this image, and sound event features are constructed from the leading zigzag-scanned discrete cosine transform coefficients. Finally, the features are modeled and detected with a random forest classifier. The results show that the proposed method achieves better detection performance at low SNR than competing methods.
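The feature-extraction step can be sketched as an orthonormal 8×8 2-D DCT followed by a zigzag scan that keeps the leading low-frequency coefficients (the block size follows the abstract; the coefficient count and test block are assumptions):

```python
import math

N = 8

def dct2(block):
    """Orthonormal 2-D DCT-II of an 8x8 block (naive O(N^4) reference)."""
    def a(u):
        return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[i][j]
                    * math.cos((2 * i + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * j + 1) * v * math.pi / (2 * N))
                    for i in range(N) for j in range(N))
            out[u][v] = a(u) * a(v) * s
    return out

def zigzag_indices():
    """JPEG-style zigzag traversal order of an 8x8 grid."""
    order = []
    for s in range(2 * N - 1):
        diag = [(i, s - i) for i in range(N) if 0 <= s - i < N]
        order += diag[::-1] if s % 2 == 0 else diag   # alternate direction
    return order

def block_feature(block, keep=10):
    """Leading zigzag-scanned DCT coefficients as the block's feature vector."""
    coeffs = dct2(block)
    return [coeffs[i][j] for i, j in zigzag_indices()[:keep]]

flat = [[1.0] * N for _ in range(N)]          # constant-power block
feat = block_feature(flat)                    # only the DC coefficient is nonzero
```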
2018, 40(12): 2913 -2918   doi: 10.11999/JEIT171091
[Abstract](102) [FullText HTML](64) [PDF 1558KB](13)
Abstract:
A new robust Generalized Synchrosqueezing S-Transform (GSST) is proposed to solve the distortion problem of the Synchrosqueezing S-Transform (SSST) in mixture noise. First, the method improves the Viterbi algorithm to enhance the Time-Frequency (TF) analysis performance of the S-transform in alpha-Gaussian mixture noise. After acquiring the phase locus information of the FM signal, synchrosqueezing is used to improve the time-frequency concentration. Simulation results show that the proposed method can accurately obtain the time-frequency information of FM signals in alpha-Gaussian mixture noise at low SNR, with better robustness and applicability than the SSST.
2018, 40(12): 2919 -2927   doi: 10.11999/JEIT180120
[Abstract](160) [FullText HTML](89) [PDF 5269KB](11)
Abstract:
The Coherent Plane-Wave Compounding (CPWC) algorithm recombines several plane waves with different steering angles and can achieve high-quality images at a high frame rate. However, CPWC ignores the coherence between the individual plane-wave imaging results. Coherence Factor (CF) weighting can effectively improve imaging contrast and resolution, but it degrades the background speckle quality. A Short-Lag Coherence Factor (SLCF) algorithm for CPWC is therefore proposed. SLCF uses the angular difference parameter to determine the order of the coherence factor and calculates the coherence factor only for plane waves with small angular differences; SLCF is then used to weight CPWC to obtain the final images. Simulated and experimental results show that the SLCF-weighted algorithm improves imaging quality in terms of lateral resolution and Contrast Ratio (CR) compared with CPWC. In addition, compared with CF and Generalized Coherence Factor (GCF) weighting, SLCF achieves better background speckle quality with lower computational complexity.
2018, 40(12): 2928 -2935   doi: 10.11999/JEIT180191
[Abstract](190) [FullText HTML](110) [PDF 1302KB](23)
Abstract:
A method based on Gaussianization and generalized matching, called the Gaussianization-Generalized Matching (GGM) method, is proposed for nonlinear processing in impulsive noise. The GGM method can be designed from noise samples, aided by nonparametric probability density estimation, so the design is suitable for nonlinear processing under unknown noise models. The GGM method is analyzed in the SαS model, and a comparison with another approach based on an unmatched noise model assumption is presented in Class A noise. The GGM method is applied to the constant false alarm rate technique via the efficacy function. Simulation and analysis results show that the GGM design is sub-optimal, works robustly when the noise model is unknown, and requires only a small number of samples. Thus, the GGM method provides a promising choice when the noise model is unclear or time-varying.
2018, 40(12): 2936 -2944   doi: 10.11999/JEIT180154
[Abstract](60) [FullText HTML](41) [PDF 1969KB](2)
Abstract:
Chroma extension video coding is a hot topic in the field of video coding, and a chroma extension coding scheme based on the AVS2 platform is proposed. The most direct solution is pseudo-444/422 coding, in which the chroma components of the input image are downsampled by averaging adjacent samples while the core coding modules remain 420 coding. Further, this paper seamlessly extends intra prediction and the loop filter to the 444/422 chroma formats to implement true 444/422 intra prediction coding. Experimental results show that, compared with pseudo-444/422 coding at high bit rates, the average U/V BD-rate saving is 31.44%/31.72% for 444 test sequences and 18.85%/19.30% for 422 test sequences, with a negligible increase in Y BD-rate (0.5% on average). The modification of the 422 chroma intra prediction algorithm achieves up to 5.66% Y/U/V BD-rate reduction. The 444/422 intra prediction coding provides similar or better coding performance than HEVC RExt coding at low bit rates.
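The pseudo-coding front end described above can be sketched as plain averaging of adjacent chroma samples so an unmodified 420 core can code the result (illustrative only; AVS2's actual downsampling filters are not reproduced, and the sample plane is an assumption):

```python
# Pseudo-444/422 front end: average adjacent chroma samples down to the
# resolution the 420 (or 422) core expects.

def chroma_444_to_420(plane):
    """Average each 2x2 neighborhood of a chroma plane (height, width even)."""
    h, w = len(plane), len(plane[0])
    return [[(plane[2 * i][2 * j] + plane[2 * i][2 * j + 1]
              + plane[2 * i + 1][2 * j] + plane[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(w // 2)]
            for i in range(h // 2)]

def chroma_444_to_422(plane):
    """Average horizontally adjacent pairs only (422 keeps full vertical res)."""
    return [[(row[2 * j] + row[2 * j + 1]) / 2.0 for j in range(len(row) // 2)]
            for row in plane]

u = [[10, 20, 30, 40],
     [10, 20, 30, 40],
     [50, 60, 70, 80],
     [50, 60, 70, 80]]
u420 = chroma_444_to_420(u)   # 2x2 plane
u422 = chroma_444_to_422(u)   # 4x2 plane
```

The true 444/422 coding path the paper proposes skips this lossy step, which is where its chroma BD-rate gains come from.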
2018, 40(12): 2945 -2953   doi: 10.11999/JEIT180077
[Abstract](93) [FullText HTML](48) [PDF 7328KB](4)
Abstract:
Focusing on the poor robustness and low accuracy of optical flow computation caused by systematic errors, a robust optical flow calculation method based on wavelet multi-resolution theory is proposed. Using the multi-resolution characteristics of wavelets, the systematic error caused by illumination variation and sensor noise is incorporated into the optical flow computation to improve robustness and estimation accuracy. The total least squares method is then used to solve the over-determined wavelet optical flow equations and obtain the optical flow vector. Compared with the traditional Lucas-Kanade approach, the Horn-Schunck method and wavelet-based optical flow estimation in omnidirectional images, simulation results show that the proposed algorithm significantly improves the accuracy of optical flow estimation and the robustness of the optical flow field.
2018, 40(12): 2954 -2961   doi: 10.11999/JEIT180192
[Abstract](80) [FullText HTML](57) [PDF 2430KB](9)
Abstract:
2018, 40(12): 2962 -2969   doi: 10.11999/JEIT180131
[Abstract](79) [FullText HTML](55) [PDF 1438KB](7)
Abstract:
An adaptive virtual resource allocation algorithm based on the Constrained Markov Decision Process (CMDP) is proposed for radio access network slice virtual resource allocation. First, in a Non-Orthogonal Multiple Access (NOMA) system, the algorithm formulates a resource adaptation problem using CMDP theory, with the user outage probability and the slice queues as constraints and the total rate of the slices as the reward. Second, a post-decision state is defined to avoid the expectation operation in the optimal value function. Furthermore, to address the "curse of dimensionality" of MDPs, a basis function for the allocation behavior is designed based on approximate dynamic programming theory to replace the post-decision state space and reduce the computational dimension. Finally, an adaptive virtual resource allocation algorithm is designed to optimize slicing performance. Simulation results show that the algorithm improves system performance while meeting the service requirements of the slices.
2018, 40(12): 2970 -2978   doi: 10.11999/JEIT180111
[Abstract](202) [FullText HTML](103) [PDF 1584KB](17)
Abstract:
Based on the interference cancellation method, a low-complexity Iterative Parallel Interference Cancellation (IPIC) algorithm is proposed for the uplink of massive MIMO systems. The proposed algorithm avoids the high-complexity matrix inversion required by linear detection algorithms, so its complexity remains only O(K²). Meanwhile, a noise prediction mechanism is introduced and a noise-prediction-aided iterative parallel interference cancellation algorithm is proposed to further improve detection performance. Considering the residual inter-antenna interference, a low-complexity soft-output signal detection algorithm is proposed as well. Simulation results show that the complexity of all the proposed detection methods is lower than that of the MMSE detection algorithm, and that with only a small number of iterations the proposed algorithm achieves performance close to, or even surpassing, that of the MMSE algorithm.
2018, 40(12): 2979 -2985   doi: 10.11999/JEIT180218
[Abstract](86) [FullText HTML](57) [PDF 1053KB](11)
Abstract:
The resource allocation for Cloud Radio Access Network (C-RAN) is investigated. The max-min fairness criterion is adopted and the Energy Efficiency (EE) of C-RAN users is taken as the optimization objective: the EE of the worst link is maximized under maximum transmit power and minimum transmit rate constraints, jointly optimizing the user transmit powers and the Remote Radio Head (RRH) beamforming vectors. This optimization problem is a nonlinear fractional programming problem. First, the original nonconvex problem is transformed into an equivalent problem in subtractive form. Then, by introducing a new variable, the non-smooth equivalent problem is transformed into a smooth optimization problem. Finally, a two-layer iterative power allocation and beamforming algorithm is proposed. The proposed algorithm is compared with a traditional non-EE resource allocation algorithm and an EE maximization algorithm. Experimental results show that the proposed algorithm is effective in improving both the EE and the fairness of resource allocation.
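The fractional-to-subtractive transformation mentioned above is the Dinkelbach-style idea, which can be shown on a toy scalar problem (not the paper's joint beamforming problem; channel gain and circuit power below are made-up values): the ratio rate/power is replaced by rate − λ·power, and λ is updated until the subtractive optimum reaches zero.

```python
import numpy as np

H_GAIN, P_CIRCUIT = 4.0, 0.5                  # illustrative constants

def rate(p):
    return np.log2(1.0 + H_GAIN * p)          # bits/s/Hz

def power(p):
    return p + P_CIRCUIT                      # transmit + circuit power

def dinkelbach(p_grid, tol=1e-6, max_iter=50):
    """Maximise rate(p)/power(p) over a power grid via the
    subtractive-form iteration."""
    lam = 0.0
    for _ in range(max_iter):
        idx = np.argmax(rate(p_grid) - lam * power(p_grid))  # inner problem
        p_star = p_grid[idx]
        if rate(p_star) - lam * power(p_star) < tol:
            break                             # subtractive optimum ~ 0
        lam = rate(p_star) / power(p_star)    # Dinkelbach update
    return p_star, lam

p_grid = np.linspace(1e-3, 10.0, 10001)
p_opt, ee_opt = dinkelbach(p_grid)
```

At convergence λ equals the optimal energy efficiency; in the paper this inner maximisation is the (much harder) joint power and beamforming subproblem.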
2018, 40(12): 2986 -2991   doi: 10.11999/JEIT180196
[Abstract](519) [FullText HTML](358) [PDF 360KB](76)
Abstract:
The Lai-Massey structure is a block cipher structure developed from the IDEA algorithm, of which the FOX cipher is a representative. In this paper, the round keys are assumed to be generated independently and uniformly at random, and the provable security of the Lai-Massey structure against differential and linear cryptanalysis is studied from two aspects: the upper bound of the average differential probability and the upper bound of the average probability of linear chains with given starting and ending points. It is proved that for r = 2 rounds the average differential probability is at most p_max, and that when the F function of the Lai-Massey structure is an orthomorphism, for r ≥ 3 rounds the average differential probability is at most p_max². A similar conclusion is obtained for linear chains with given starting and ending points.
2018, 40(12): 2992 -2997   doi: 10.11999/JEIT180189
[Abstract](73) [FullText HTML](45) [PDF 392KB](12)
Abstract:
Based on the theory of Galois rings of characteristic 4, a new class of quaternary sequences with period 2p² over Z4 is constructed using generalized cyclotomy, where p is an odd prime. The linear complexity of the new sequences is determined. The results show that the sequences have large linear complexity and can resist attacks based on the Berlekamp-Massey (B-M) algorithm, making them good sequences from a cryptographic viewpoint.
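The linear complexity referred to above is exactly what the Berlekamp-Massey algorithm computes: the length of the shortest LFSR generating the sequence. A sketch over GF(2) follows (the paper's sequences are over Z4, which requires a generalised variant not shown here).

```python
def berlekamp_massey_gf2(s):
    """Return the linear complexity of the binary sequence s
    (shortest LFSR length over GF(2))."""
    n = len(s)
    c = [0] * n          # current connection polynomial
    b = [0] * n          # previous connection polynomial
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between s[i] and the LFSR prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d == 1:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# An m-sequence of period 7 (from x^3 + x + 1) has linear complexity 3.
m_seq = [1, 0, 0, 1, 0, 1, 1]
```

A sequence is cryptographically weak when its linear complexity L is small, since B-M recovers the generator from only 2L consecutive bits; hence the interest in proving L is large.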
2018, 40(12): 2998 -3006   doi: 10.11999/JEIT180122
[Abstract](213) [FullText HTML](94) [PDF 1106KB](14)
Abstract:
Attribute-based encryption can provide data confidentiality protection and fine-grained access control for fog-cloud computing; however, mobile devices in fog-cloud computing systems can hardly bear the heavy computational burden of attribute-based encryption. To address this problem, an offline/online ciphertext-policy attribute-based encryption scheme with verifiable outsourced decryption is presented, based on bilinear groups of prime order. It realizes offline/online key generation and data encryption, and supports verifiable outsourced decryption. Formal security proofs of its selective chosen-plaintext attack security and verifiability are provided. An improved offline/online ciphertext-policy attribute-based encryption scheme with verifiable outsourced decryption is then presented, which reduces the number of bilinear pairings in the transformation phase from linear to constant. Finally, the efficiency of the proposed scheme is analyzed and verified through theoretical analysis and experimental simulation. The experimental results show that the proposed scheme is efficient and practical.
2018, 40(12): 3007 -3012   doi: 10.11999/JEIT180249
[Abstract](97) [FullText HTML](63) [PDF 396KB](8)
Abstract:
Privacy-preserving aggregate signcryption for heterogeneous systems can ensure the confidentiality and unforgeability of data exchanged between heterogeneous cryptosystems, and it also supports batch verification of multiple ciphertexts. This paper analyzes the security of a heterogeneous aggregate signcryption scheme with privacy preservation and points out that the scheme cannot resist attacks by a malicious Key Generation Center (KGC), which can forge a valid ciphertext. To improve on the security of the original scheme, a new heterogeneous aggregate signcryption scheme with privacy protection is proposed. The new scheme overcomes the security problems of the original scheme, secures data transmission between certificateless public key cryptography and identity-based public key cryptography, and is proved secure in the random oracle model. Efficiency analysis shows that the new scheme is comparable in efficiency to the original one.
2018, 40(12): 3013 -3019   doi: 10.11999/JEIT180219
[Abstract](92) [FullText HTML](52) [PDF 1638KB](8)
Abstract:
Wireless power transfer is an effective way to extend the lifetime of wireless network nodes. A wireless powered hybrid multiple access system is studied that consists of a base station and multiple users grouped into clusters. Transmission is divided into two phases: in the first, the base station broadcasts energy to the users; in the second, the users transmit information to the base station. Users in different clusters transmit in the time division multiple access manner, while users within the same cluster transmit in the non-orthogonal multiple access manner. Joint allocation of the phase durations and the transmit powers of the base station and the users is investigated to improve spectrum efficiency and user fairness. Two algorithms are proposed, which maximize the system throughput and the minimum cluster throughput, respectively. Simulation results show that the two proposed algorithms can effectively increase spectral efficiency and guarantee fairness among user clusters, respectively.
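The harvest-then-transmit trade-off behind the phase-duration allocation can be seen in a toy single-user version (all parameter values are illustrative, and the paper's multi-cluster NOMA formulation is far richer): a fraction tau of the block is used for energy broadcast, the remainder for uplink transmission with the harvested energy.

```python
import numpy as np

def throughput(tau, p_bs=1.0, eta=0.6, h_dl=0.8, h_ul=0.5, noise=1e-2):
    """Block throughput when tau of the block harvests energy and
    (1 - tau) transmits with it. Larger tau = more energy but less
    time to use it, so an interior optimum exists."""
    harvested = eta * p_bs * h_dl * tau          # energy from phase 1
    p_user = harvested / (1.0 - tau)             # transmit power, phase 2
    return (1.0 - tau) * np.log2(1.0 + p_user * h_ul / noise)

taus = np.linspace(0.01, 0.99, 999)
best_tau = taus[np.argmax(throughput(taus))]
```

The joint problem in the paper optimises such time splits together with per-user powers across clusters, with fairness enforced by maximising the minimum cluster throughput.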
2018, 40(12): 3020 -3027   doi: 10.11999/JEIT171085
[Abstract](128) [FullText HTML](93) [PDF 1720KB](20)
Abstract:
The Internet of Things (IoT) is becoming a hot research area, and the tens of billions of devices being connected to the Internet are advancing the sensor search service. IoT features (strong spatiotemporal variability of searches, limited sensor resources, and massive heterogeneous dynamic data) pose a challenge for search engines to search and select sensors efficiently and effectively. In this paper, a Piecewise-Linear fitting Sensor Similarity (PLSS) search method is proposed. Based on content values, PLSS computes sensor similarity models to search for the most similar sensors. PLSS improves the accuracy and efficiency of search compared with the fuzzy set algorithm (FUZZY) and the least squares method, and its storage cost is at least two orders of magnitude smaller than that of the raw data.
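The piecewise-linear-fitting idea can be sketched as follows (illustrative only, not the paper's exact model): each sensor's readings are compressed into per-segment slopes and means, and sensors are compared on those compact fingerprints instead of on the raw data, which is where the storage saving comes from.

```python
import numpy as np

def pl_features(values, n_segments=4):
    """Compress a reading series into (slope, mean) per segment."""
    feats = []
    for seg in np.array_split(np.asarray(values, dtype=float), n_segments):
        t = np.arange(len(seg))
        slope = np.polyfit(t, seg, 1)[0] if len(seg) > 1 else 0.0
        feats.extend([slope, seg.mean()])
    return np.array(feats)

def similarity(a, b, n_segments=4):
    """1.0 for identical series, decreasing with feature distance."""
    fa = pl_features(a, n_segments)
    fb = pl_features(b, n_segments)
    return 1.0 / (1.0 + np.linalg.norm(fa - fb))

t = np.linspace(0.0, 1.0, 200)
temp_a = 20.0 + 5.0 * t          # steadily warming sensor
temp_b = 20.1 + 5.0 * t          # same trend, small offset
temp_c = 25.0 - 5.0 * t          # opposite trend
```

Here 200 raw readings shrink to 8 feature values per sensor, and sensors with the same trend score as more similar than those with opposite trends.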
2018, 40(12): 3028 -3035   doi: 10.11999/JEIT180207
[Abstract](74) [FullText HTML](60) [PDF 1564KB](10)
Abstract:
Under the present network architecture, adopting hardware systems to realize load balancing of a server cluster is disadvantageous for its scalability and service performance, owing to restrictions such as the difficulty of acquiring load node status and the complexity of redirecting traffic. To solve this problem, a Load Balancing mechanism based on Software-Defined Networking (SDNLB) is proposed. Exploiting SDN advantages such as centralized control and flexible traffic scheduling, SDNLB monitors the running state of the servers and overall network load information in real time by means of the SNMP and OpenFlow protocols, and selects the server with the highest computed weight as the target server for processing incoming flows. On this basis, SDNLB applies an optimal forwarding path algorithm for traffic scheduling, raising the utilization and processing performance of the server cluster. An experimental platform is built to test the overall performance of SDNLB. The results show that under the same network load, SDNLB effectively lowers the load on the server cluster, noticeably raises network throughput and bandwidth utilization, and reduces flow completion time and average latency, compared with other load balancing algorithms.
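The weight-based target selection can be sketched as below. The weight formula and coefficients are illustrative; in SDNLB the inputs would come from SNMP and OpenFlow statistics rather than the hard-coded values used here.

```python
def server_weight(cpu_util, mem_util, conn_ratio,
                  w_cpu=0.5, w_mem=0.3, w_conn=0.2):
    """Higher weight = more spare capacity. All utilisations in [0, 1];
    the coefficients weight CPU, memory, and connection headroom."""
    return (w_cpu * (1.0 - cpu_util)
            + w_mem * (1.0 - mem_util)
            + w_conn * (1.0 - conn_ratio))

def pick_server(servers):
    """servers: dict name -> (cpu, mem, conn). Return the name of the
    highest-weight server, i.e. the target for the incoming flow."""
    return max(servers, key=lambda s: server_weight(*servers[s]))

cluster = {
    "srv-1": (0.90, 0.70, 0.80),   # heavily loaded
    "srv-2": (0.20, 0.40, 0.10),   # mostly idle
    "srv-3": (0.50, 0.50, 0.50),
}
```

In the SDN setting, the controller would then install OpenFlow rules redirecting the flow to the chosen server along the computed forwarding path.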
2018, 40(12): 3036 -3041   doi: 10.11999/JEIT180217
[Abstract](71) [FullText HTML](45) [PDF 1622KB](3)
Abstract:
A novel power frequency electric field measurement system based on high-performance MEMS electric field sensing chips is developed. Based on the cross-correlation detection principle, a power frequency electric field demodulation algorithm for the MEMS sensing chips that suppresses background interference noise is proposed, and a small, high-resolution electric field measuring probe is designed. Moreover, the overall system structure is designed for high-accuracy demodulation of electric field signals. Tests under power lines show that the curves measured by the developed MEMS system are consistent with those of the Narda EFA-300.
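The cross-correlation (lock-in) principle can be demonstrated numerically: the noisy sensor output is correlated with in-phase and quadrature references at the power frequency and averaged, which suppresses uncorrelated background noise. All signal parameters below are illustrative, not the system's actual values.

```python
import numpy as np

fs, f0, duration = 10_000.0, 50.0, 1.0           # sample rate, power freq, s
t = np.arange(0.0, duration, 1.0 / fs)           # integer number of cycles

rng = np.random.default_rng(2)
amplitude, phase = 2.0, 0.3                      # field signal to recover
signal = amplitude * np.sin(2 * np.pi * f0 * t + phase)
noisy = signal + rng.normal(scale=1.0, size=t.size)   # heavy background noise

# cross-correlate with quadrature references and average
i_part = 2.0 * np.mean(noisy * np.sin(2 * np.pi * f0 * t))
q_part = 2.0 * np.mean(noisy * np.cos(2 * np.pi * f0 * t))
est_amplitude = np.hypot(i_part, q_part)
```

Averaging over N samples shrinks the noise contribution by roughly 1/sqrt(N), so the 50 Hz amplitude is recovered even though the noise power here exceeds the signal power.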
2018, 40(12): 3042 -3050   doi: 10.11999/JEIT180170
[Abstract](157) [FullText HTML](69) [PDF 2933KB](6)
Abstract:
In advanced applications such as real-time radar imaging and high-precision scientific computing, the design of high-throughput, reconfigurable Floating-Point (FP) FFT accelerators is significant. Achieving high-throughput FP FFT at low area and power cost is challenging because FP operations are far more complex than their fixed-point counterparts. To address these issues, a series of mixed-radix algorithms for 128/256/512/1024/2048-point FFT is proposed, decomposing long FFTs into short ones implemented as cascaded radix-2^k stages so that the complexity of the multiplications is significantly reduced. In addition, two novel fused FP add-subtract and dot-product units with dual-mode functionality are proposed, which can operate either on one pair of double-precision operands or on two pairs of single-precision operands in parallel. On this basis, a high-throughput dual-mode floating-point variable-length FFT processor is designed and implemented in SMIC 28 nm CMOS technology. Simulation results show that the throughput and Signal-to-Quantization-Noise Ratio (SQNR) are 3.478 GSample/s and 135 dB in single-channel single-precision mode, and 6.957 GSample/s and 60 dB in dual-channel half-precision mode, respectively. Compared with other FP FFT processors, this design achieves a 12-fold improvement in normalized throughput-to-area ratio.
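The decomposition principle behind the cascaded stages can be seen in a minimal radix-2 decimation-in-time FFT (software sketch only; the processor cascades radix-2^k stages in hardware, of which plain radix-2 is the simplest case): an N-point DFT splits into two N/2-point DFTs combined by twiddle factors.

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 DIT FFT; len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])              # N/2-point sub-FFTs
    odd = fft_radix2(x[1::2])
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd  # twiddles
    return np.concatenate([even + tw, even - tw])           # butterflies

x = np.random.default_rng(3).normal(size=128)
```

Each stage needs only N/2 complex multiplications, which is why decomposing a long FFT into cascaded short stages cuts the total multiplier cost so sharply.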

Monthly Journal Founded in 1979

Source journal of EI Compendex and the ESCI database

Competent unit: Authorized by CAS

Host unit: Hosted by IECAS, Department of Information Science of NNSFC

Editor-in-Chief: Yirong Wu

ISSN 1009-5896  CN 11-4494/TN
