With the increasing importance of cyberspace, its scope is continuously expanding. In addition to supporting global interconnectedness through the traditional internet, it also includes various forms of limited-domain networks such as satellite networks, the industrial internet, and data center networks. These limited-domain networks differ widely in terminal types, access methods, topologies, and business scenarios, and the protocols, transmission formats, and forwarding methods of the traditional internet architecture cannot meet the requirements for efficient interconnection and development of these heterogeneous networks. The integration of the internet, space satellite networks, the industrial internet, high-performance data center networks, and other heterogeneous networks is therefore an inevitable development trend. This paper focuses on two major challenges faced by the integrated network: scalability and real-time performance. A domain-autonomous, scalable integrated network architecture is designed that achieves large-scale scalability through inter-domain collaboration and intra-domain autonomy, and supports rich-semantic inter-layer interfaces to meet the quality-of-service requirements of differentiated cross-domain service transmission.
In positioning reconnaissance missions, the cooperative working mode of a UAV formation better matches the needs of future electronic reconnaissance because of its good positioning performance and strong robustness. Taking positioning accuracy into account, this paper proposes a UAV formation route planning method based on a master-slave formation control scheme. For master (host) route planning, a hybrid A* algorithm is proposed on the basis of the sparse A* algorithm by combining an adaptive step size with particle-swarm-optimization-based node selection, and an obstacle avoidance strategy is designed for obstacle-swarm environments. For slave route planning, an improved multi-objective quantum-behaved particle swarm optimization (IMQPSO) algorithm is proposed, which introduces a particle hybrid update strategy, a non-inferior-solution advantage selection strategy, and a Y-type UAV station layout scheme. Compared with the traditional multi-objective particle swarm optimization (MPSO) algorithm and the multi-objective quantum-behaved particle swarm optimization (MQPSO) algorithm, the comprehensive fitness value of the improved algorithm is reduced by 4.7% and 1.4% respectively at essentially the same running time.
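As a rough illustration of the particle swarm optimization building block used for node selection in the host planner (not the paper's exact formulation, and with a stand-in objective function), a minimal sketch:

```python
import numpy as np

# Minimal particle swarm optimization loop, assuming a generic continuous
# fitness function; the paper's actual node-selection objective and its
# coupling with the hybrid A* planner are not reproduced here.
def pso(fitness, dim, n_particles=30, iters=100, bounds=(-10, 10)):
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # standard PSO velocity/position update (w, c1, c2 are typical values)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([fitness(p) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# toy usage: minimize the sphere function as a stand-in objective
best, best_val = pso(lambda p: np.sum(p ** 2), dim=3)
```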
To address the problem that false targets produced by a single jammer are suppressed by a multichannel synthetic aperture radar (SAR) system, a countermeasure method based on cooperative amplitude modulation of multiple jammers is proposed in this paper. The principle of deception-jamming suppression in multichannel SAR is analyzed in detail, revealing the essential reason why deception jamming from a single jammer is ineffective. On this basis, the cooperative modulation model of multiple jammers is established, the condition for defeating jamming suppression is derived, and the influence of the number of jammers on the proposed method is analyzed theoretically. Simulation results demonstrate that the proposed multi-jammer countermeasure method can counteract the extra phase of the jamming signal. In this way, the jamming signal obtains the same processing gain as the echo of a real target, effectively offsetting the jamming suppression and deceiving the SAR.
In the reconstruction of spatial electromagnetic (EM) signals based on spatial-domain compression, a novel method that fits the spectral EM signal envelope with a Gauss-Laplace mixture distribution is proposed, to alleviate the redundant calculation that lowers system efficiency and to remedy the absence of a reliable model supporting the integration process. By determining the mapping between the key coefficients and the distinctive space parameters, the mixture distribution for a specific calculation space can be established promptly. Replacing the signal envelope by the fitted distribution, the method adaptively rearranges the sampling points through a stochastic process. A peer comparison of various integration techniques with the proposed method, in terms of the number of sampling points and the distinctive space parameters, verifies that it is more accurate and efficient. The proposed method is applicable to a wide range of spectral EM signal integration computations and has the potential to improve 3D real-time simulation and high-speed inversion methods.
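As a minimal sketch of the curve-fitting idea, the following fits a two-component Gauss-Laplace mixture to a sampled envelope with SciPy; the envelope here is synthetic, and the paper's mapping from space parameters to the mixture coefficients is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# Gauss-Laplace mixture: weighted sum of a Gaussian and a Laplace component.
# This is only an illustrative fit to a sampled envelope; the coefficient
# values and the envelope itself are synthetic assumptions.
def gauss_laplace(x, w, mu1, sigma, mu2, b, scale):
    gauss = np.exp(-0.5 * ((x - mu1) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    laplace = np.exp(-np.abs(x - mu2) / b) / (2 * b)
    return scale * (w * gauss + (1 - w) * laplace)

# synthetic "envelope" samples standing in for a spectral EM envelope
x = np.linspace(-5, 5, 400)
y = gauss_laplace(x, 0.6, -1.0, 0.8, 1.5, 0.5, 1.0) + 0.01 * np.random.randn(x.size)

p0 = [0.5, 0.0, 1.0, 0.0, 1.0, 1.0]                 # initial guess
popt, _ = curve_fit(gauss_laplace, x, y, p0=p0,
                    bounds=([0, -5, 1e-3, -5, 1e-3, 0],
                            [1,  5, 10,    5, 10,   10]))
```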
Radar-communication integration is one of the research hotspots in recent years; it can simultaneously realize target detection and data transmission in a single system. In this paper, a simple and compact radar-communication integration scheme based on optical injection into a semiconductor laser is proposed. The coherence between the master and slave lasers is improved by locking the slave laser to the Nth-order modulation sideband of the injected phase-modulated optical signal. By changing the period-one oscillation frequency and the injected optical signal, an amplitude-shift-keying linear frequency-modulated (ASK-LFM) signal with a tunable operating frequency band and frequency multiplication factor is generated. The radar and communication functions of the integrated system based on the generated signal are verified through experiments and simulations. The experimental results show that for an ASK-LFM signal with a bandwidth of 1 GHz, the target range measurement error is better than 3.2 cm, the imaging results of three targets are identifiable, and correct transmission of communication data is achieved at the same time.
To address the problems of existing direction-of-arrival (DOA) estimation methods for coherently distributed sources, such as huge computational complexity, inferior performance in impulse noise, and ineffective decoherence, a multimodal DOA estimation method for coherently distributed sources in impulse noise is proposed, and the Cramér-Rao bound for this estimation problem is derived. A multimodal weighted signal subspace fitting equation employing the weighted-norm covariance is first derived to achieve DOA estimation of coherently distributed sources in impulse noise; meanwhile, a multimodal quantum bald eagle algorithm is designed to solve the derived equation quickly and without quantization error. Simulation results show that the proposed method can estimate the DOAs of coherently distributed sources with a small number of snapshots in impulse noise and can locate coherent sources without additional decoherence operations. Compared with existing high-precision DOA estimation methods, the proposed method has shorter simulation time and higher estimation accuracy and success rate, breaking through the application limitations of existing coherently-distributed-source DOA estimation methods, and it can be extended to other complex DOA estimation problems.
For the deployment problem with constrained locations in belt barrier coverage, this paper proposes an optimal deployment method based on heterogeneous bistatic radars. The properties of the deployment pattern and deployment sequence for heterogeneous bistatic radars are first proved, and based on these properties an optimization model of the problem is formulated. The model is solved in three layers. In the inner layer, the position of the heterogeneous transmitter is fixed, which separates the barrier into two segments, and an algorithm combining integer linear programming and greedy methods determines the minimum deployment cost and deployment sequence for a barrier. In the middle layer, for a given barrier width, the minimum deployment cost is calculated for all possible heterogeneous transmitter positions, from which the minimum deployment cost of the barrier is determined. In the outer layer, two deployment strategies are introduced to determine the barrier widths: a barrier-width non-equalization strategy and a barrier-width equalization strategy; correspondingly, two algorithms are given to determine the minimum deployment cost for the entire area, the deployment sequences, and the numbers of sub-barriers. Finally, the feasibility and effectiveness of the two algorithms are verified by simulations. The simulation results reveal that the non-equalization strategy is superior to the equalization strategy, reducing the deployment cost by 6% and the number of transmitters by 10%.
The priority-based flow control (PFC) mechanism is widely deployed in data center networks to avoid packet loss due to buffer overflow. Although PFC guarantees lossless transmission, it brings negative effects such as head-of-line blocking and congestion spreading. In recent years, many end-to-end congestion-aware transport protocols have been proposed to alleviate network congestion and reduce PFC triggering. However, under transient congestion caused by burst traffic, PFC is still triggered frequently even when these end-to-end protocols are deployed, resulting in serious head-of-line blocking and congestion spreading. Therefore, on top of end-to-end congestion control, this paper proposes a direct congestion notification (DCON) scheme implemented on the switches. DCON promptly identifies the congested flows (actually responsible for congestion) that share an ingress port with non-congested flows (not responsible for congestion), sends congestion notification messages directly from the switch to the corresponding senders, and accurately sets the target rate of the identified congested flows at the sender. The experimental results show that, compared with existing end-to-end transport protocols, DCON effectively avoids the head-of-line blocking and congestion spreading caused by PFC and reduces the average flow completion time by up to 55%.
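The switch-side logic described in the abstract can be pictured roughly as follows; the flow table layout, queue threshold, link capacity, and notification format are hypothetical placeholders, not the paper's specification.

```python
from dataclasses import dataclass

# Hypothetical sketch of switch-side congested-flow identification: flows
# whose egress queue exceeds a threshold are marked as congested, and a
# notification carrying a target rate is sent directly from the switch to
# the sender of each congested flow. All constants are illustrative.
QUEUE_THRESHOLD = 100 * 1024          # bytes; illustrative value
LINK_CAPACITY = 100e9                 # 100 Gbps, illustrative

@dataclass
class Flow:
    flow_id: str
    sender: str
    egress_port: int
    rate: float                       # current sending rate (bps)

def identify_and_notify(flows, egress_queue_bytes, send_notification):
    congested = [f for f in flows
                 if egress_queue_bytes[f.egress_port] > QUEUE_THRESHOLD]
    for port in {f.egress_port for f in congested}:
        sharers = [f for f in congested if f.egress_port == port]
        fair_share = LINK_CAPACITY / max(len(sharers), 1)
        for f in sharers:
            # notify the sender directly from the switch with a target rate
            send_notification(f.sender, f.flow_id,
                              target_rate=min(f.rate, fair_share))
```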
In recent years, deep-learning-based synthetic aperture radar (SAR) target recognition methods have achieved promising results. However, these deep models need many training samples for parameter optimization; otherwise they may suffer from serious overfitting, resulting in low recognition accuracy and poor generalization. To address these problems, this paper proposes a few-shot SAR target recognition method named multi-feature attention fusion Meta-ResNet. In this method, we design a learner based on a residual network, which effectively transmits contrast information in SAR images and thus improves recognition accuracy. Furthermore, the multi-feature attention fusion module provides weighted multi-scale features by fusing features from different ResNet layers. The improved meta-learner not only learns good initialization parameters for the learner but also learns a different, appropriate learning rate for each learner parameter. Comparative experiments against three other few-shot recognition methods demonstrate the effectiveness and superiority of the proposed method. We also conduct experiments to verify its robustness and to study the influence of the network structure on recognition accuracy, and we report the learner-parameter learning rates learned by the meta-learner.
Current 3D model classification methods are validated on datasets whose initial poses are aligned. However, in practical applications the poses of 3D models are unknown, which leads to an obvious performance degradation on non-aligned 3D models. A new 3D model classification method suitable for both aligned and non-aligned datasets is proposed in this paper. The method employs a graph convolutional neural network (GCN) to learn the spatial relations between views, using the preset camera positions as the vertices of the graph. Moreover, a temporal feature extraction network and an attention network are used to further improve the effect of the GCN. Experiments on the ModelNet10 and ModelNet40 datasets show that the proposed method achieves accuracies of 99.3% and 97.4% under aligned poses, which is much higher than existing methods, and it also maintains high classification accuracy on non-aligned poses.
To improve the performance of the path splitting selecting strategy based on a search set under successive cancellation list (PSS-SS-SCL) decoding, an improved decoding algorithm based on enhanced parity check codes and Monte Carlo segment-aided successive cancellation list (EPC-MS-SCL) decoding is proposed. In the encoding stage of the polar code, the algorithm divides the information sequence into several segments and appends an enhanced parity check code to the end of each segment. The decoder performs path splitting only when decoding elements within the search set (SS), and makes hard decisions directly for the other unfrozen bits. Each segment is verified immediately after it is decoded, and only verified paths are retained. Thus the competition from wrong paths against the correct path is reduced, which increases the probability of keeping the correct path until the end of decoding and improves decoding performance. Meanwhile, the decoding complexity is also reduced because the number of decoding lists decreases. Simulation results show that, compared with the PSS-SS-SCL decoding algorithm, the proposed algorithm improves the coding gain to a certain degree while reducing the decoding complexity.
In seawater, light suffers from scattering and wavelength-dependent attenuation, which makes underwater images exhibit color distortion and low contrast. In this paper, we propose a robust dual residual hybrid attention underwater image enhancement method (DRHA-UIE). The proposed method uses an improved dual residual block for feature learning. To capture significant features, a hybrid attention mechanism is introduced into the dual residual block, which infers attention weights on the features along the channel and spatial dimensions. To train the proposed method end-to-end, a joint feature loss function consisting of pixel-level loss, structural similarity loss, and content-aware loss is designed, yielding enhanced images with finer textures. In addition, a revised underwater image formation model (IFM) is applied to optimize the underwater image enhancement benchmark dataset (UIEB), producing a dataset with higher visual quality, the new underwater image enhancement benchmark dataset (N-UIEB). Experiments show that images enhanced by models trained on N-UIEB have more natural colors and sharper details than those trained on UIEB. The proposed model is extensively evaluated on both datasets. Results show that DRHA-UIE effectively enhances image quality subjectively and outperforms ten state-of-the-art methods in quantitative comparisons.
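A compact sketch of a channel-plus-spatial attention block of the kind described above is given below; the layer sizes, reduction ratio, and placement inside the dual residual block are assumptions rather than the DRHA-UIE specification.

```python
import torch
import torch.nn as nn

# Illustrative hybrid attention: channel attention followed by spatial
# attention. Dimensions and the exact wiring inside the dual residual
# block are assumptions, not the paper's design.
class HybridAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)                      # channel weights
        avg_map = x.mean(dim=1, keepdim=True)            # spatial statistics
        max_map = x.amax(dim=1, keepdim=True)
        x = x * self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x

feat = torch.randn(1, 64, 128, 128)
out = HybridAttention(64)(feat)                          # same shape as input
```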
To improve the accuracy of target tracking in nonlinear systems under different kinds of noise, a minimum fuzzy error entropy unscented filter (MFEE-UF) is proposed in this paper. Fuzzy membership is introduced to represent the different effects of different error samples on the estimation results, solving the equal-weighting problem of conventional error entropy. The minimum fuzzy error entropy criterion (MFEEC) is then constructed and used to optimize unscented filtering, yielding MFEE-UF. In the proposed algorithm, the unscented transformation (UT) framework is used to obtain the prior state estimate and prior covariance estimate, and the error information is obtained by system reconstruction. The objective function is then constructed based on MFEEC, and finally the posterior state estimate and posterior covariance estimate are solved with a fixed-point iteration method. In addition, the kernel width is set adaptively. Simulations show that the proposed algorithm is highly stable and tracks a target more accurately.
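For reference, the standard unscented transform that MFEE-UF relies on for the prior estimates is sketched below; the fuzzy error-entropy fixed-point update itself is not reproduced, and the transition function and parameters are toy assumptions.

```python
import numpy as np

# Standard unscented transform: generate 2n+1 sigma points, propagate them
# through a nonlinear function, and recover the transformed mean and
# covariance. The MFEEC fixed-point posterior update is not shown here.
def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])        # 2n+1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
    y = np.array([f(s) for s in sigma])                      # propagate
    y_mean = wm @ y
    diff = y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# toy usage with a mildly nonlinear transition function
m, P = np.array([1.0, 0.5]), np.eye(2) * 0.1
fm, fP = unscented_transform(m, P, lambda s: np.array([s[0] + 0.1 * s[1], 0.9 * s[1]]))
```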
To improve the intelligent management capability of satellite battery packs, this paper proposes a redundant satellite battery balancing control strategy based on deep reinforcement learning. The method trains an agent to extract and assess the characteristics of the current operating state of the battery group, change the switching state and number of connected battery units, realize capacity balance between the battery units, reduce the range of bus voltage variation, and reduce the number of switching adjustments. Battery pack simulation environments are built in MATLAB/Simulink and the OpenAI Gym environment to train the reinforcement learning agent. The feasibility of the method is verified by an example, and a comparison with the traditional sorting method shows that it effectively achieves capacity balance between batteries and reduces the variation range of the bus voltage.
Considering that existing ultrasonic wind measuring instruments have low measurement accuracy and are easily affected by ambient temperature, humidity, and the shadow effect, this paper proposes an ultrasonic wind measurement method with three mutually transmitting array elements based on the quadratic correlation method. In this method, a structure with three mutually transmitting ultrasonic sensors is adopted, the transmission time of the ultrasonic wave is calculated with a quadratic correlation time-delay estimation algorithm, and the wind speed and direction are then obtained from the relationship between transmission time and wind speed and direction. The proposed method has strong noise suppression capability and is free of the influence of temperature and humidity, so there is no need to correct the ultrasonic velocity in real time. Moreover, the wind measuring structure with three mutually transmitting ultrasonic sensors reduces the influence of the shadow effect. Finally, numerical simulations are carried out to verify the effectiveness of the proposed algorithm, and a wind measurement system with three mutually transmitting ultrasonic sensors is built for measured-data experiments. The experimental results show that the proposed method achieves higher wind measurement accuracy than orthogonal mutually transmitting ultrasonic sensors, an ultrasonic sensor array, and three mutually transmitting array elements based on ordinary correlation. According to the actual test results, the relative error of wind speed is 2.38% and the error of wind direction is 2.3°, which basically meets the technical requirements of ultrasonic wind measurement.
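One common form of the quadratic (secondary) correlation time-delay estimator is sketched below on a synthetic ultrasonic burst; the signal model, sampling rate, and noise level are illustrative assumptions, not the paper's measurement setup.

```python
import numpy as np

# Quadratic (secondary) correlation time-delay estimation: correlate the
# autocorrelation of the reference channel with the cross-correlation of
# the two channels, then read the delay from the peak location.
def quadratic_correlation_delay(x, y, fs):
    r_xx = np.correlate(x, x, mode="full")        # autocorrelation of reference
    r_xy = np.correlate(y, x, mode="full")        # cross-correlation
    r2 = np.correlate(r_xy, r_xx, mode="full")    # secondary correlation
    lag = np.argmax(r2) - (len(r2) - 1) // 2      # peak offset in samples
    return lag / fs

# toy example: 40 kHz burst delayed by 25 samples in white noise
fs, true_delay = 1_000_000, 25
t = np.arange(2048) / fs
burst = np.sin(2 * np.pi * 40_000 * t) * np.exp(-((t - 4e-4) / 1e-4) ** 2)
x = burst + 0.05 * np.random.randn(t.size)
y = np.roll(burst, true_delay) + 0.05 * np.random.randn(t.size)
print(quadratic_correlation_delay(x, y, fs) * fs)  # ≈ 25 samples
```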
Previous techniques for overcoming the modulation bandwidth limitation of the transmitter LED suffer from complex circuit design, high power consumption, and low efficiency. In view of these problems, and from the perspective of frequency-domain analysis, a frequency-division pre-emphasis (FDPE) circuit is proposed based on the principle of the pre-emphasis (PE) circuit; the FDPE circuit is realized through simulation and a physical design and compared with the PE circuit. Simulation and experimental data show that the proposed FDPE circuit achieves the same frequency-response compensation effect as the PE circuit in visible light communication while effectively reducing the complexity and power consumption of the driving circuit. The results demonstrate the feasibility of the frequency-division approach for achieving low-power integration in LED driver circuits for visible light communication.
For optimization problems with high-dimensional variables, we design a flower pollination algorithm based on nonlinear cross-generation differential evolution (FPA-NCDE). The algorithm guides individuals toward the optimal solution with cross-generation differential evolution, giving direction to the local search process. Meanwhile, a nonlinear inertia weight is introduced to improve convergence speed. The scaling factor and crossover probability are dynamically updated by parameter-adaptive adjustment to enhance population diversity and reduce the number of local solutions. Combined with a cross-generation roulette wheel, the probability of becoming trapped in a local optimum is decreased. The performance evaluation verifies that FPA-NCDE maintains good optimization characteristics and stability on benchmark functions of different dimensions, especially on high-dimensional test functions. In addition, FPA-NCDE is applied to UAV-based intelligent inspection in the industrial internet to evaluate its performance in practical applications. The experimental results show that FPA-NCDE satisfies the needs of low cost, high efficiency, and avoidance of external attacks in inspection path planning.
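For orientation, the plain differential evolution operators (DE/rand/1 mutation with binomial crossover) that FPA-NCDE builds on are sketched below; the adaptive F/CR schedules, nonlinear inertia weight, and cross-generation roulette wheel of the paper are not reproduced, and the objective is a toy stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

# DE/rand/1 mutation plus binomial crossover with greedy selection; the
# paper's adaptive parameter updates and FPA coupling are omitted.
def de_step(population, fitness, F=0.5, CR=0.9):
    new_pop = population.copy()
    n, dim = population.shape
    for i in range(n):
        a, b, c = population[rng.choice(np.delete(np.arange(n), i), 3, replace=False)]
        mutant = a + F * (b - c)                       # differential mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                # ensure at least one gene crosses
        trial = np.where(cross, mutant, population[i])
        if fitness(trial) < fitness(population[i]):    # greedy selection
            new_pop[i] = trial
    return new_pop

pop = rng.uniform(-5, 5, (20, 10))
for _ in range(50):
    pop = de_step(pop, lambda v: np.sum(v ** 2))       # sphere as a toy objective
```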
To resolve the contradiction between users' increasing computing demands and the limited computing and communication resources in mobile edge computing networks, and the difficulty of guaranteeing fairness in handling users' computing tasks, this paper proposes a fair computation offloading policy for a mobile edge computing network based on multi-antenna NOMA (Non-Orthogonal Multiple Access). By jointly optimizing the user clustering, the decoding order within each NOMA cluster, the transmission power, the CPU processing frequency, and the transmission time, an optimization problem is formulated to maximize the fair computation efficiency of the system. Since the problem is non-convex, it is decomposed into two stages. In the first stage, an effective heuristic algorithm realizes user clustering and determines the decoding order of each NOMA cluster according to the clustering result. In the second stage, based on the Dinkelbach method and SCA (Successive Convex Approximation), an iterative algorithm obtains the transmission power, the CPU processing frequency, and the transmission time. Simulation results show that the proposed scheme converges well and achieves superior system performance compared with several baseline schemes.
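The Dinkelbach method used in the second stage turns a fractional objective into a sequence of parametric subproblems; a generic sketch follows, where the inner solver is a hypothetical stand-in (here a grid search) for the paper's SCA subproblem.

```python
import numpy as np

# Generic Dinkelbach iteration for maximizing a ratio N(x)/D(x): repeatedly
# solve max_x N(x) - lam * D(x) and update lam to the achieved ratio. The
# inner solver below stands in for the paper's SCA subproblem.
def dinkelbach(solve_parametric, numerator, denominator,
               lam=0.0, tol=1e-6, max_iter=50):
    for _ in range(max_iter):
        x = solve_parametric(lam)                 # argmax_x N(x) - lam * D(x)
        gap = numerator(x) - lam * denominator(x)
        if abs(gap) < tol:
            break
        lam = numerator(x) / denominator(x)       # Dinkelbach update
    return x, lam                                 # lam converges to the optimal ratio

# toy usage: maximize (1 - (x - 2)^2) / (1 + x) over x in [0, 3] by grid search
grid = np.linspace(0, 3, 3001)
N = lambda x: 1 - (x - 2) ** 2
D = lambda x: 1 + x
x_opt, ratio = dinkelbach(lambda lam: grid[np.argmax(N(grid) - lam * D(grid))], N, D)
```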
To better solve constrained engineering design optimization problems and further improve the global optimization capability and applicability of the JAYA algorithm, a JAYA algorithm based on a multi-role differential evolution strategy is proposed. First, a cosine similarity strategy is introduced: individuals whose positions have high cosine similarity to the optimal individual are processed by a rotation change operator and a non-uniform mutation operator, which accelerates convergence and enriches population diversity. Then a multi-role strategy is introduced into the individual position update, and a symbiosis strategy and a Cauchy mutation mechanism are added to balance the algorithm's different needs for exploration and exploitation in different iteration periods, improving its optimization performance. Finally, a pinhole-imaging opposition-based learning strategy is introduced, which expands the search range and further improves convergence and accuracy. Simulation experiments on function extremum optimization over multiple dimensions of 10 complex benchmark functions, together with 5 more challenging engineering design problems from CEC2020, show that the proposed algorithm has clearly better optimization accuracy, convergence performance, solution stability, applicability, and effectiveness across different problems, and has obvious advantages in solving engineering design optimization problems.
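For context, the basic JAYA position update that this variant extends is sketched below; the multi-role differential evolution, Cauchy mutation, and pinhole-imaging opposition learning are not shown, and the objective is a toy stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)

# Basic JAYA update: move each candidate toward the current best and away
# from the current worst, keeping a trial only if it improves the fitness.
# The paper's multi-role and opposition-learning strategies are omitted.
def jaya_step(pop, fitness):
    vals = np.array([fitness(x) for x in pop])
    best, worst = pop[vals.argmin()], pop[vals.argmax()]
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    candidate = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    cand_vals = np.array([fitness(x) for x in candidate])
    keep = cand_vals < vals                         # greedy acceptance
    pop[keep] = candidate[keep]
    return pop

pop = rng.uniform(-100, 100, (30, 10))
for _ in range(200):
    pop = jaya_step(pop, lambda v: np.sum(v ** 2))  # sphere as a toy objective
```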
Aiming at the high false alarm rate and missed detection rate of ship target detection in SAR (Synthetic Aperture Radar) images, a ship target detection algorithm based on a selective coordinate attention mechanism is proposed in this paper. First, ship target features are extracted by feature extraction branches with convolution kernels of different sizes. Then, the features of all branches are fused, and to capture positional information the fused features are encoded along different spatial directions to form two one-dimensional feature vectors. Finally, this direction- and position-sensitive encoding is used as a "gate" mechanism to compute a weighted fusion of the features extracted from the receptive fields of different sizes in each branch, enhancing the feature representation of ship targets. The SSD (Single Shot MultiBox Detector) algorithm is used as a baseline to evaluate detection results on the SSDD (SAR Ship Detection Dataset). The experimental results show that, compared with other attention mechanisms, the selective coordinate attention mechanism improves the ship detection ability of the network model more effectively. The average detection accuracy of SSD with selective coordinate attention reaches 94.20%, 4.45% higher than the original SSD algorithm. Further tests on two other ship datasets show that the improved algorithm generalizes well and its overall performance is better than the comparison algorithms.
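The direction-aware encoding described above follows the standard coordinate attention pattern (pooling separately along height and width, then gating); a sketch of that basic pattern is given below, with channel sizes as assumptions and without the paper's selective multi-branch fusion.

```python
import torch
import torch.nn as nn

# Standard coordinate attention: pool along H and W separately, encode the
# two direction-aware vectors jointly, then use them as gates. The paper's
# selective multi-branch fusion around this idea is not shown.
class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True))
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        pooled_h = x.mean(dim=3, keepdim=True)                      # (b, c, h, 1)
        pooled_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (b, c, w, 1)
        y = self.conv1(torch.cat([pooled_h, pooled_w], dim=2))      # joint encoding
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (b, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w)).permute(0, 1, 3, 2)   # (b, c, 1, w)
        return x * a_h * a_w

feat = torch.randn(2, 256, 38, 38)
out = CoordinateAttention(256)(feat)                                # same shape as input
```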
Time series are nonlinear and non-stationary. Current research on time series prediction suffers from deficiencies such as too many training parameters and poor generalization, which lead to low prediction accuracy. To solve these problems, this paper proposes GOSSA-HMM, a time series prediction model that fuses the global optimal sparrow search algorithm (GOSSA) and the hidden Markov model (HMM). Exploiting the advantages of the HMM in pattern recognition and classification, the original time series data are preprocessed by differencing and classification and used as the input of the HMM. GOSSA is used to train the parameters of the HMM to overcome slow convergence and sensitivity to the initial value setting. The difference data endowed with category attributes are segmented, the probability of the trend of each segment is calculated with the improved HMM, and prediction is achieved by locating past patterns that match the trend of the current data. Dissolved oxygen data from 15 marine ranches in the Shandong Peninsula are combined for analysis. Under the same experimental conditions, GOSSA-HMM has fewer training parameters, a lower computational cost, and better prediction accuracy and generalization.
For data sets with small sample sizes and evident lag effects, a novel ISGS (ISOMAP-SMOTE-GA-SVR) model is proposed to address the low prediction accuracy and inadequate generalization of single-algorithm prediction models. The ISGS model integrates isometric feature mapping (ISOMAP), the synthetic minority oversampling technique (SMOTE), a genetic algorithm (GA), and support vector regression (SVR) into a comprehensive solution. First, ISOMAP and SMOTE are employed to perform feature transformation on data sets characterized by small sample size and evident lag. Second, the SVR algorithm is adopted for regression analysis because of its strong generalization and nonlinear modeling ability. Finally, GA is used to optimize the parameters of SVR, enhancing the prediction accuracy of the model. Three data sets comprising meteorological factors, air quality, and the number of patients with respiratory diseases are used for simulation and comparative experiments with the ISGS model. The experimental results demonstrate that the proposed ISGS model achieves a prediction accuracy of 93.65%, surpassing all other reference models, exhibits superior generalization, and can effectively handle data sets with higher dimension and smaller sample size.
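A rough sketch of the pipeline stages on synthetic data is shown below using scikit-learn: Isomap feature transformation followed by SVR, with a grid search standing in for the genetic algorithm; the SMOTE-based augmentation step is omitted because its regression-side adaptation is not specified in the abstract, and the data are synthetic.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVR

# Illustrative ISGS-like pipeline: Isomap feature transformation + SVR,
# with grid search as a stand-in for GA-based hyperparameter optimization.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 15))                       # small synthetic sample
y = X[:, 0] * 2 + np.sin(X[:, 1]) + 0.1 * rng.normal(size=120)

pipeline = Pipeline([
    ("isomap", Isomap(n_neighbors=8, n_components=5)),
    ("svr", SVR(kernel="rbf")),
])
search = GridSearchCV(
    pipeline,
    param_grid={"svr__C": [0.1, 1, 10, 100], "svr__gamma": ["scale", 0.01, 0.1]},
    cv=5, scoring="neg_mean_absolute_error",
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```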
A high electron mobility transistor (HEMT) device with ZnO nanowires as a light-sensing gate is successfully prepared by the hydrothermal growth method. The HEMT is etched under the source/drain (S/D) regions with etching depths of 120 nm and 150 nm to explore the effect of different S/D etching depths on device performance. At the same time, a ZnO seed layer is deposited on the gate electrode by magnetron sputtering. The hydrothermal growth time is controlled at 6, 8, or 10 h at 80 °C to explore the effect of growth time on the surface morphology of the ZnO nanowires and the UV detection performance of the HEMT. The results show that, compared with conventional devices, the ZnO nanowire-gated devices have higher optical absorption in the 350-450 nm wavelength range. For the same growth time, the source-drain saturation current at an etching depth of 150 nm is smaller than at 120 nm, but the dark/ultraviolet saturation current difference is larger, up to 8 mA, indicating higher ultraviolet detection efficiency. When the hydrothermal growth time is 6 h, the nanowire morphology is good, and at an etching depth of 150 nm the optical response/recovery times of the device reach their minimum values of 0.0057 s and 2.128 s, respectively.
Wafer-level vacuum packaging is an important way to improve the quality factor of MEMS (Micro-Electro-Mechanical Systems) electric field sensors and the efficiency of batch manufacturing. This paper proposes a novel wafer-level vacuum-packaged electric field sensor based on SOI-SOG (Silicon-On-Insulator-Silicon-On-Glass) bonding. A wafer-level manufacturing process, covering the fabrication of the sensitive structure and the vacuum packaging of the sensor, is designed and realized. A structural capacitance model of the sensor is established, finite element simulation is carried out, the characteristics of the sensor are analyzed, and the technical difficulties of the microfabrication process, such as the fabrication and release of the microstructure and the bonding of the SOI and SOG wafers, are overcome. The sensor has the advantages of low working voltage and high quality factor. Experimental results reveal that the sensor requires driving signals of only 5 V DC and 0.05 V AC. The quality factor is higher than 5000 and shows no drop over a 60-day test. The sensitivity of the sensor is 0.15 mV/(kV/m), the linearity is 2.21%, and the accuracy is 4.74% over the range of 0-50 kV/m.
Radar echo extrapolation based on deep learning is an important method for precipitation nowcasting. Radar echo data have significant non-rigid motion characteristics, and their statistical properties change constantly over time, meaning the data exhibit high-order non-stationarity. However, existing deep learning methods struggle to capture the non-rigid motion of echo sequences and to model the high-order non-stationarity of radar data. In this paper, we propose a new spatio-temporal fusion neural network (STUNNER) tailored to radar data, which adopts a two-stream spatio-temporal fusion architecture that cross-connects a temporal differencing network and a spatio-temporal trajectory network for efficient radar echo extrapolation. The temporal differencing network learns the long-term trend of the radar echo by introducing differencing to extract stationary features from the high-order non-stationary data. The spatio-temporal trajectory network uses dynamic convolution to replace the fixed sampling positions of conventional convolution in the convolutional recurrent neural network with positions that change over time, extracting the transient variation of the radar echo. The long-term trend and the transient variation are fused by a two-stream fusion strategy to realize long- and short-term associative memory. The proposed model is compared experimentally with four other models on two public datasets. In the radar echo extrapolation task, at a radar reflectivity threshold of 45 dBZ, STUNNER achieves a POD (Probability of Detection) 0.020 higher, a CSI (Critical Success Index) 0.023 higher, and a FAR (False Alarm Rate) 0.043 lower than MIM (Memory in Memory). The experimental results show that the proposed model achieves higher accuracy in radar echo extrapolation.
To improve the accuracy and robustness of optical flow estimation under large displacements and motion occlusions, this paper proposes a pyramid patch-matching optical flow estimation method based on occlusion optimization. First, the pyramid patch-matching method is adopted to obtain the initial matching motion field, and an occlusion detection model is designed to optimize the matching motion field by removing false matching points from it. Second, a robust edge-preserving interpolation scheme is applied to obtain a dense optical flow field. Third, a global energy function is optimized to obtain the globally refined optical flow. Finally, the Middlebury, MPI-Sintel, and KITTI test datasets are employed to comprehensively compare the proposed method with representative variational, patch-matching, and CNN-based optical flow methods. The experimental results show that the proposed method effectively improves the accuracy and robustness of optical flow estimation under large displacements and motion occlusions.
Existing KP-ABE (Key-Policy Attribute-Based Encryption) schemes mainly support a large attribute universe by using hash functions, so their security is proved in the random oracle model rather than the standard model; when computing the ciphertext or key components for each attribute at encryption time, the number of exponentiation operations exceeds the maximum number of attributes; and they do not support security classification of data and users. To address these problems, this paper proposes a KP-ABE scheme that supports a large universe and security classification. The scheme maps an arbitrary "attribute name: attribute value" pair to an element of the finite field through an encoding function instead of a hash function, supporting any "attribute name: attribute value" pair and ensuring that any two different pairs are encoded to different values. Combined with the idea of mandatory access control, the scheme assigns different security levels to ciphertexts and users, and a user can decrypt a ciphertext only if the user's security level is not lower than that of the ciphertext. The scheme is proved secure against chosen-plaintext attacks in the standard model, and the performance analysis shows that it needs only two exponentiation operations to compute the ciphertext or key components corresponding to one attribute.
Image dehazing is a classic and challenging research direction in computer vision. In recent years, deep learning methods have achieved remarkable results in image dehazing. However, most existing dehazing algorithms rely on synthetic paired training data, which limits their practical application owing to the distribution discrepancy between synthetic and real-world foggy images. Current image dehazing algorithms based on the CycleGAN framework treat dehazing as a generic image translation task, ignoring the effectiveness of generator learning. In addition, these algorithms lack exploration of local regions during image restoration and use only first-order channel attention, ignoring deeper channel-related information. Therefore, this paper proposes an outdoor image dehazing algorithm based on multi-order channel attention calibration within a dual-discriminator heterogeneous CycleGAN framework, which mainly consists of the dual-discriminator heterogeneous cycle framework and a multi-order channel attention module. Specifically, the dual-discriminator heterogeneous CycleGAN framework improves the convergence of the algorithm and increases the focus on local regions through the batch-normalized generator of the heterogeneous CycleGAN and by constraining the generator's local field of view. To further exploit the channel information essential for dehazing, a multi-order channel attention module introduces first-order and second-order feature statistics to improve the visual quality of the dehazed images. Experimental results show that the proposed method outperforms eight state-of-the-art dehazing algorithms on both synthetic and real-world datasets in terms of objective evaluation and visual quality.
Collecting equipment fault data is difficult and target fault samples are scarce in industrial fault diagnosis, yet existing zero-sample fault diagnosis methods still depend on fault datasets; therefore, a zero-sample fault diagnosis method based on transfer learning is proposed. Using the classic PCA (Principal Component Analysis) algorithm, a discriminant attribute extractor applicable to both the source and target domains is constructed to extract the potential fine-grained feature representation of source-domain samples as a bridge for knowledge transfer. The fine-grained basis group shared by all known fault classes is obtained from the source-domain fault data and transferred as knowledge to the target-domain fault representation. The discriminant matrices of the source and target domains are learned from the shared fine-grained basis group, and the discriminant characteristics of each domain are constructed from them. Finally, the discriminant attributes are used to realize zero-sample fault diagnosis. On the Tennessee-Eastman process (TEP) datasets, the proposed method is compared with other zero-sample fault diagnosis methods, and the experimental results illustrate its effectiveness.
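A rough sketch of the PCA stage is given below: a projection is fitted on source-domain fault data and reused for target-domain samples; the discriminant-matrix learning and zero-shot matching stages are not reproduced, and the data are synthetic stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative PCA stage: learn a shared projection (the "fine-grained
# basis") from source-domain fault data and reuse it to represent
# target-domain samples. Data are synthetic stand-ins for TEP-like
# process measurements (52 variables).
rng = np.random.default_rng(0)
source_faults = rng.normal(size=(500, 52))
target_samples = rng.normal(size=(40, 52))

pca = PCA(n_components=10)
source_features = pca.fit_transform(source_faults)   # fit basis on source domain
target_features = pca.transform(target_samples)      # transfer basis to target domain
print(source_features.shape, target_features.shape)  # (500, 10) (40, 10)
```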
In the massive machine-type communication (mMTC) Internet of Things environment of 5G, there is the problem of cross-domain secure communication between IoT users under a public key infrastructure (PKI) encryption scheme and IoT devices under a certificateless cryptosystem (CLC) encryption regime in different security domains. In this paper, we propose a three-factor cross-domain signcryption access control scheme in the Internet of Things (TCSAC-IoT) for secure communication from PKI users to CLC devices, based on a signcryption algorithm combining the SM2 national cryptographic standard with three factors consisting of the user password, biometrics, and the user's smart device. The scheme authenticates PKI IoT users through a three-factor cross-domain signcryption algorithm and grants legitimate PKI IoT users a shared secret key with CLC IoT devices, preventing illegal users from accessing CLC IoT device resources. It is demonstrated under the real-or-random (ROR) model that the scheme satisfies semantic security under the Dolev-Yao (DY) and Canetti-Krawczyk (CK) models, and that it resists spoofing attacks, replay attacks, man-in-the-middle attacks, internal privilege attacks, and attacks based on theft or loss of the PKI user's smart device. Comparison with similar schemes shows that the proposed scheme has low computational and communication overhead.
High real-time object detection in remote sensing scenes has important research value and application significance. Aiming at the slow detection speed of current remote sensing image object detection models caused by multiple orientations, dense arrangements, and complex backgrounds, a cascaded inverted residual convolution (CIRC) structure is proposed. The structure uses depthwise separable convolution as the basic convolution unit to improve computational efficiency. On this basis, the multi-dimensional features of the object are enhanced by transposing the channel matrix with cascaded depthwise convolution and increasing the number of residual connection layers, and multi-level module stacking further improves the detection performance of the model. Based on RetinaNet, this paper uses CIRC to design a fast lightweight object detection network, CIRCN (Cascaded Inverted Residual Convolution Net). Meanwhile, an angle variable is introduced in the training phase and participates in back-propagation, and an angle offset is added to the horizontal box in the inference phase, which effectively improves the match between oriented targets and detection boxes. Experimental results on the DOTA dataset show that the detection speed of CIRCN reaches 42 fps with only a slight loss of accuracy, 3.5 times faster than the baseline algorithm, verifying the effectiveness and reliability of the proposed method.
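For reference, the depthwise-separable inverted residual pattern that CIRC builds on is sketched below (MobileNetV2-style expand/depthwise/project with a residual connection); the cascaded channel transposition and multi-level stacking of the paper are not shown, and the expansion factor is an assumption.

```python
import torch
import torch.nn as nn

# Inverted residual block with a depthwise separable convolution as the
# basic unit; the paper's cascaded channel transposition and stacking
# arrangement are not reproduced here.
class InvertedResidual(nn.Module):
    def __init__(self, channels, expansion=4):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),              # expand
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden,   # depthwise
                      bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),              # project
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)                                     # residual connection

feat = torch.randn(1, 64, 56, 56)
out = InvertedResidual(64)(feat)                                     # same shape as input
```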
The visibility graph is one of the important methods for converting time series into complex networks, building a bridge between nonlinear signal analysis and complex networks from a new perspective; it is widely used in economics, biomedicine, physics, and other fields. The visibility graph inherits the dynamics of the original time series, is stable and easy to implement, and can distinguish specific behaviors of particular time series through the relevant statistical properties of the graph. In this paper, we first introduce the application of the visibility graph to complex-network analysis of time series and explain its advantages through a necessity and feasibility analysis. We then describe the construction steps and main properties of the basic visibility graph and the horizontal visibility graph, and review current research on visibility graphs from the following aspects: improvements to the algorithmic process, improvements to algorithmic efficiency, and applications. We introduce numerous visibility graph methods, summarize their main characteristics, and analyze the recognition capability, noise robustness, and network-construction efficiency of visibility graph algorithms. In addition, we implement some of the algorithms and release the code for interested researchers to learn from and use. Through this review and analysis, one can understand the main research directions of visibility graphs at this stage, which provides ideas for future research and lays a foundation for complex-network analysis of time series.
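The basic natural visibility graph construction reviewed here can be sketched in a few lines; this direct O(n²) form is for clarity only, and the divide-and-conquer or sweep-based variants discussed in the efficiency-oriented literature are not shown.

```python
import numpy as np

# Basic natural visibility graph: nodes are time-series samples, and two
# samples (i, y_i), (j, y_j) are connected if every sample between them
# lies strictly below the straight line joining them.
def natural_visibility_graph(series):
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

ts = np.random.default_rng(0).random(50)
print(len(natural_visibility_graph(ts)))   # number of visibility edges
```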