Satellite signals in complex electromagnetic environments are often submerged in background interference and noise, and the performance of traditional signal detection algorithms degrades dramatically without accurate a priori knowledge. Existing deep learning-based signal detection algorithms often require data post-processing steps that rely on expert experience, and therefore cannot achieve end-to-end detection. To address these limitations, an intelligent satellite signal detection method based on DETR_S (DEtection with TRansformer on Signal) is proposed. Firstly, DETR_S adopts an encoder-decoder architecture and uses the global modeling ability of the transformer network to capture spectrum information. Secondly, it uses the multi-head self-attention mechanism to effectively alleviate the long-range dependence problem of spectrum information. Then, a prediction box matching module based on the Hungarian algorithm discards the non-maximum suppression post-processing step and transforms signal detection into a set prediction problem, so that the model can output detection results in parallel. Finally, a signal reconstruction module is introduced and a spectrum reconstruction term is added to the loss function, further improving detection performance by mining deep representations of the spectrum. Experimental results show that DETR_S can accurately detect faint satellite signals (>95%) at signal-to-noise ratios of 0 dB and above using only the spectral amplitude information of the signal, a significant improvement over representative object detection networks.
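As a concrete illustration of the set-prediction matching idea, the following sketch (not the paper's implementation) finds the minimum-cost one-to-one assignment between predicted and ground-truth frequency "boxes" by brute force, using a hypothetical L1 cost over (center frequency, bandwidth) pairs; the Hungarian algorithm computes the same assignment in polynomial time:

```python
from itertools import permutations

def match_predictions(preds, targets):
    """Minimum-cost bipartite matching (what the Hungarian algorithm solves),
    done by brute force for small set-prediction problem sizes.

    preds, targets: lists of (center_freq, bandwidth) tuples.
    Returns (assignment, cost): assignment[i] is the index of the
    prediction matched to target i.
    """
    best_cost, best_assign = float("inf"), None
    for perm in permutations(range(len(preds)), len(targets)):
        cost = sum(abs(preds[p][0] - targets[t][0]) + abs(preds[p][1] - targets[t][1])
                   for t, p in enumerate(perm))
        if cost < best_cost:
            best_cost, best_assign = cost, list(perm)
    return best_assign, best_cost

# Two true signals, three predicted boxes, as (center, bandwidth) in MHz.
targets = [(100.0, 5.0), (250.0, 10.0)]
preds = [(251.0, 9.0), (400.0, 2.0), (99.0, 6.0)]
assign, cost = match_predictions(preds, targets)
print(assign, cost)  # [2, 0] 4.0: target 0 -> prediction 2, target 1 -> prediction 0
```

Because each ground truth is matched to exactly one prediction, unmatched predictions (here, the spurious box at 400 MHz) can be trained toward a "no signal" class, which is what removes the need for non-maximum suppression.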
This paper focuses on the amplitude-comparison direction-finding (DF) system of a monopulse radar in the presence of cooperative coherent dual sources. The general model of the amplitude-comparison DF system is described and three typical cases are analyzed. Then, the theoretical errors of the estimated angle caused by the system errors and the synchronous phase difference error, together with their accuracies, are derived. Finally, numerical simulations are provided to analyze the influence of the signal parameters on the estimated angle, including the synchronous phase difference, the power ratio, and the angles of the sources; the errors of the estimated angle and their accuracies are also provided. The results show that the estimated angle lies between the angles of the dual sources, that smaller errors make the error models of the estimated angle more accurate, and that higher working frequencies impose more stringent time synchronization requirements. This work fills some gaps in DF theory under the dual-source condition and provides guidance for the performance analysis and scheme optimization of DF systems.
In response to the slow responsiveness and poor real-time performance of traditional spectrum resource allocation management in battlefield environments, a blockchain-enabled distributed spectrum allocation architecture is first established. Constraints such as spectrum satisfaction, conflicts, and priority are taken into consideration to formulate an optimization objective model. A distributed Gale-Shapley (D-GS) spectrum allocation algorithm based on matching theory is proposed, transforming battlefield spectrum allocation from static and centralized to dynamic and distributed and significantly improving allocation performance. By incorporating a greedy mechanism and a satisfaction threshold for spectrum units, combat units can notably enhance spectrum allocation satisfaction even when spectrum demands are submitted late. Simulation results demonstrate that, under limited spectrum resources, the proposed method ensures unit satisfaction and maximizes the real-time nature of distributed spectrum management. Compared with static centralized spectrum allocation methods, the time overhead is reduced by more than an order of magnitude, a significant advantage. The algorithm exhibits superior time performance and spectrum allocation efficiency, leading to improved spectrum allocation in battlefield scenarios.
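The matching core that D-GS builds on can be sketched as follows; this is the textbook Gale-Shapley deferred-acceptance procedure between combat units and spectrum channels with illustrative preference lists (the paper's greedy mechanism, satisfaction thresholds, and blockchain coordination are omitted):

```python
def gale_shapley(unit_prefs, spectrum_prefs):
    """Classic Gale-Shapley deferred acceptance (assumes equal numbers of
    units and channels, and complete preference lists).
    unit_prefs: {unit: [channel, ...]} ordered best-first.
    spectrum_prefs: {channel: [unit, ...]} ordered best-first.
    Returns a stable matching {channel: unit}.
    """
    # Precompute each channel's ranking of units for O(1) comparisons.
    rank = {c: {u: i for i, u in enumerate(p)} for c, p in spectrum_prefs.items()}
    free = list(unit_prefs)                 # units still proposing
    next_prop = {u: 0 for u in unit_prefs}  # next channel each unit proposes to
    match = {}                              # channel -> currently held unit
    while free:
        u = free.pop()
        c = unit_prefs[u][next_prop[u]]
        next_prop[u] += 1
        if c not in match:
            match[c] = u                    # channel was free: tentatively accept
        elif rank[c][u] < rank[c][match[c]]:
            free.append(match[c])           # channel prefers the new proposer
            match[c] = u
        else:
            free.append(u)                  # rejected: u proposes again later
    return match

# Two combat units competing for two channels (illustrative preferences).
unit_prefs = {"A": ["ch1", "ch2"], "B": ["ch1", "ch2"]}
spectrum_prefs = {"ch1": ["B", "A"], "ch2": ["A", "B"]}
print(gale_shapley(unit_prefs, spectrum_prefs))  # {'ch1': 'B', 'ch2': 'A'}
```

The resulting matching is stable: no unit-channel pair would both prefer each other over their assigned partners, which is the property that makes the distributed variant converge to conflict-free allocations.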
The transmitarray antenna combines a lens array with an antenna, and its far-field lens structure is mostly composed of discretely arranged phase modulation units. Because the transmitarray antenna has high gain, it is necessary to reinforce the electromagnetic protection of its electronic information system. To effectively counter the threat of high-power microwaves and strong electromagnetic pulses to the electronic information system, a full-phase-gradient-coverage energy-selective structure is designed to provide electromagnetic protection for the phase modulation unit. Based on a decoupled design idea, the energy-selective layer and the phase modulation layer are cascaded and jointly optimized, realizing simultaneous energy selection and gradient phase modulation. Simulation results show that the insertion loss of the energy-selective phase modulation structure is less than 2 dB at different phase modulation levels, and the shielding effectiveness reaches up to 35 dB. The simulated transmissive phase coverage ranges from -63° to -486°, exceeding 360°, as the length of the metal edge is increased from 2.0 mm to 8.2 mm. The performance of a structural test piece is verified in a waveguide; the measured results show an average insertion loss of 2.07 dB, a shielding effectiveness of up to 15 dB, and a phase coverage of -76° to -317°. Simulation and measurement results verify that the design method preliminarily realizes combined energy selection and phase modulation, can be applied to the design of electromagnetically protected transmitarray antennas and energy-selective surfaces with gradient phase conversion requirements, and can effectively guide the design of energy-selective phase modulation structures in other frequency bands.
Unmanned jammers are widely adopted in modern electronic warfare. However, while interfering with the enemy, they also degrade the signal quality of legitimate users. To address this issue, an active jamming nulling strategy in the radio frequency domain is developed. By coordinating the waveform, amplitude, phase, and relative delay of the signals emitted by dual unmanned jammers, a jamming nulling region is created while the enemy is being jammed, ensuring the signal quality of legitimate users. Considering the inevitable time synchronization error between the dual unmanned jammers, closed-form expressions for the received signal-to-noise ratio and the achievable rate advantage of legitimate users under the time error constraint are derived to assess the jamming nulling performance. Further, the emission power of the unmanned jammers is optimized to maximize the achievable rate advantage for legitimate users, and the power optimization strategies are simplified according to practical jamming scenarios. Numerical simulations reveal that the proposed jamming nulling strategy outperforms traditional jamming strategies by an average of approximately 3.2 bps/Hz in achievable rate advantage. Compared with a jamming power strategy that neglects time synchronization errors, the proposed power optimization strategy yields an average improvement of approximately 1.5 bps/Hz in achievable rate advantage.
High-precision direction of arrival (DOA) estimation is of great significance for multi-user high-speed communication with millimeter-wave large-scale arrays. To deal with the degraded received signal quality caused by wideband effects, the reduced signal dimension caused by hybrid structures, and the high computational complexity of multi-user angle estimation, this paper proposes a high-precision DOA estimation method based on wideband signal phase measurement. Firstly, a system model and a wideband received signal model are established for millimeter-wave large-scale arrays with a hybrid structure, and the impact of wideband effects is demonstrated. Secondly, the Cramer-Rao lower bound (CRLB) for DOA estimation is derived, and an optimal training sequence design method that satisfies constant modulus constraints is proposed by minimizing the CRLB. Subsequently, for single-user scenarios, a gridless high-precision DOA estimation method based on phase measurement is proposed that asymptotically attains the CRLB. For multi-user scenarios, an iterative DOA estimation algorithm using the expectation maximization (EM) method is built on the single-user estimator to avoid the curse of dimensionality in joint estimation and to reduce computational complexity. Simulation results verify the effectiveness of the proposed algorithms: when the signal-to-noise ratio (SNR) exceeds 5 dB, the proposed single-user and multi-user estimation algorithms asymptotically achieve the CRLB, and their DOA estimation performance surpasses traditional methods while avoiding the impact of wideband effects and signal dimension reduction.
In massive grant-free non-orthogonal multiple access (GF-NOMA) systems, multi-user detection usually relies on the prior sparsity of signals to detect active users. In practical applications, however, especially under dynamic multi-user access, the access process becomes more complex and such prior information becomes harder to obtain. Therefore, this paper proposes a learnable threshold optimization scheme for massive dynamic multi-user access detection, namely the threshold-improved adaptive alternating direction method of multipliers (TI-A-ADMM) algorithm. The algorithm exploits the temporal correlation of active-user communication to introduce a dynamic correlation measure, which adaptively scales the noise threshold for active user detection and thereby improves detection performance. Moreover, to enhance detection accuracy across different signal-to-noise ratios, a deep learning network is employed to optimize the initial detection threshold, adapting to various access environments. Simulation results indicate that, under dynamic multi-user access without known prior sparsity information, the proposed TI-A-ADMM algorithm achieves a performance gain of 2.4 dB in terms of active error rate (AER) and symbol error rate (SER) over existing algorithms that assume known sparsity. The proposed algorithm also exhibits lower performance degradation and higher robustness against interference caused by multi-user access.
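The threshold-scaling idea can be sketched as follows; the soft-thresholding operator is the standard shrinkage step in ADMM-based sparse recovery, while the correlation-based scaling rule shown here is only one plausible form of the paper's dynamic correlation measure, not its actual definition:

```python
def soft_threshold(x, tau):
    """Soft-thresholding, the l1 proximal (shrinkage) step that appears
    inside ADMM-based sparse multi-user detection."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

def adaptive_threshold(tau0, correlation, alpha=0.5):
    """Hypothetical scaling rule: users that were active in recent frames
    (high temporal correlation) get a lower threshold, making re-detection
    easier; the exact measure in the paper may differ."""
    return tau0 * (1.0 - alpha * correlation)

# Shrink a few estimated symbol magnitudes with a correlation-scaled threshold.
tau = adaptive_threshold(0.5, correlation=0.8)
est = [soft_threshold(x, tau) for x in [1.2, 0.2, -0.9]]
print([round(v, 4) for v in est])  # [0.9, 0.0, -0.6]
```

Entries shrunk to zero are declared inactive; lowering the threshold for temporally correlated users trades a slightly higher false-alarm rate for far fewer missed detections of persistent users.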
Based on the support-set exchange technique, the orbits of n-variable k-rotation symmetric functions can be obtained from the orbits of n-variable 1-rotation symmetric functions through an analysis of the orbits of k-rotation symmetric functions. Furthermore, by modifying the support sets of rotation symmetric functions over GF(2), a new class of k-rotation symmetric 2-resilient Boolean functions is constructed, from which a large number of distinct n-variable k-rotation symmetric 2-resilient Boolean functions can be obtained.
LiCi, LiCi-2, and GRANULE are ultra-lightweight block ciphers designed for resource-constrained internet of things environments. Because of their fast encryption (and decryption) speed and efficient implementation on both hardware and software platforms, they have received extensive attention. In this paper, the linear structure characteristics of these ciphers are investigated via multiple perfect linear approximations (circular iterations) with an absolute correlation of 1. Moreover, perfect linear approximations (linear distinguishers with probability one) for the full rounds of LiCi, LiCi-2, and GRANULE are obtained, thereby completely breaking these algorithms. This directly implies that these block ciphers have serious design flaws.
This paper studies the stability and security of satellite network systems to provide safe and reliable end-to-end services. Considering jamming attacks and denial of service (DoS) attacks in the satellite network, a joint detection and secure transmission scheme is designed according to the detection error accuracy to minimize end-to-end delay jitter. We construct a secure transmission optimization problem based on the optimal control strategy and determine the transmission path with the detection requirements as constraints. The optimal data transmission scheme is obtained by an augmented Lagrangian differential evolution algorithm. For the first time, the accuracy of anomaly detection is used as a factor in determining the secure path policy. When the network is attacked, the security control algorithm provides stable end-to-end services, and the control policy dynamically triggers anomaly detection, enabling the network to defend itself actively. A constellation of 66 satellites is constructed to simulate and verify the effectiveness of the proposed secure transmission scheme under jamming and DoS attacks. The results show that using the anomaly detection error as a decision factor of the secure transmission strategy can effectively improve the stability of network services.
To ensure the timeliness of multicast services in space-air-ground integrated networks while minimizing energy consumption, this paper proposes an energy-efficient multicast routing algorithm based on multi-dimensional time-varying graphs. First, we analyze the energy consumption of multicast routing in low earth orbit (LEO) satellite networks and develop a multi-dimensional time-varying graph model to characterize the time-varying topology, energy consumption, and delay of LEO satellite networks. Then, the K-shortest path (KSP) candidate path algorithm is applied to generate a path set that satisfies the quality of service (QoS) requirements of multicast services, and a minimum path heuristic (MPH)-based multicast tree construction algorithm is employed to derive the solution. Finally, it is theoretically proven that the multicast tree constructed by the proposed algorithm achieves near-optimal energy consumption under QoS constraints. Simulation results demonstrate that the proposed algorithm outperforms existing methods in terms of network energy efficiency.
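As a minimal building block of the KSP candidate generation step, the following sketch runs a delay-weighted Dijkstra search over a toy LEO topology snapshot (node names and link delays are illustrative); a Yen-style KSP generator calls such a search repeatedly on modified graphs to enumerate candidate paths:

```python
import heapq

def dijkstra(graph, src, dst):
    """Delay-weighted shortest path over one topology snapshot.
    graph: {node: {neighbor: link_delay_ms}}.
    Returns (total_delay, path) or (inf, []) if dst is unreachable."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        delay, node, path = heapq.heappop(pq)
        if node == dst:
            return delay, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (delay + w, nbr, path + [nbr]))
    return float("inf"), []

# Toy snapshot of inter-satellite links weighted by propagation delay (ms).
isl = {"S1": {"S2": 4.0, "S3": 9.0},
       "S2": {"S3": 3.0, "S4": 7.0},
       "S3": {"S4": 2.0},
       "S4": {}}
print(dijkstra(isl, "S1", "S4"))  # (9.0, ['S1', 'S2', 'S3', 'S4'])
```

In a time-varying graph the search would be rerun per snapshot (or over a time-expanded graph), and edge weights could combine delay with per-link energy cost to serve the paper's energy-efficiency objective.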
In order to solve the problems that user information is easily intercepted and eavesdropped because of the wide-area coverage of low earth orbit (LEO) satellite communication systems, and that accurate channel state information is difficult to obtain due to factors such as feedback delay, this paper constructs a model of the LEO satellite communication system under channel uncertainty. Firstly, a robust resource allocation problem that jointly optimizes the satellite beamforming vectors, artificial noise vectors, and power allocation factors is formulated under the minimum secrecy rate constraint, the beamforming power threshold constraint, and the channel uncertainty constraint. Then, for this non-convex problem with uncertainty, an iterative robust resource allocation algorithm based on the S-procedure, semidefinite relaxation, and successive convex approximation is proposed. Simulation results show that the proposed algorithm improves the satisfaction probability by 9.62% compared with the traditional non-robust algorithm.
To provide customized communication and sensing services more flexibly, a beamforming optimization for coordinated rate-splitting multiple access (CoRSMA)-assisted multi-static integrated sensing and communication (ISAC) is proposed in this paper. The received signals of the common stream, the private streams, and the sensing echo at each base station (BS) are modeled, and the relationship between the beamforming vectors and the communication rate and sensing signal-to-noise ratio (SNR) is analyzed. Aiming to maximize the sum communication rate of the worst-case user equipment (UE) under sensing performance constraints within a prescribed region, the beamforming vectors of all BSs are jointly optimized. To solve the formulated problem efficiently, the non-convex objective and constraints are relaxed into convex ones by introducing auxiliary variables and applying the successive convex approximation (SCA) technique, and the optimal beamforming vectors are then obtained via semidefinite programming (SDP). Simulation results demonstrate that the proposed CoRSMA-ISAC system outperforms ISAC systems based on spatial division multiple access (SDMA) and non-orthogonal multiple access (NOMA) in terms of both communication and sensing performance.
Terahertz synthetic aperture radar (SAR) has broad application prospects owing to its capability for high-resolution imaging and detailed target extraction. However, its short wavelength makes terahertz SAR extremely susceptible to platform vibration, which leads to problems during imaging such as false imaging points, azimuthal blurring, and defocused SAR images. Therefore, this paper establishes a fine platform vibration model for terahertz SAR and proposes an adaptive motion compensation algorithm. Based on a mathematical analysis of how platform vibration affects imaging, complex platform vibration in terahertz SAR imaging scenes can be compensated flexibly and accurately. Firstly, a fine terahertz SAR vibration model is established based on the temporal amplitude-modulation vibration model (TAMVM). By combining cosine time-varying and random time-varying amplitude modulation, the TAMVM relaxes the limitations of the traditional harmonic model and improves adaptability to the complex and variable vibration of terahertz SAR platforms. Secondly, to address the performance loss of traditional harmonic model-based motion compensation when handling complex platform vibration, an adaptive motion compensation method is proposed based on the Levenberg-Marquardt (LM) algorithm under the minimum Tsallis entropy criterion. This image quality-driven algorithm does not rely on dominant target points and can precisely estimate the complex, time-varying vibration phase within a nonlinear least squares framework without additional compensation steps. Moreover, the iterative process of the LM algorithm under the minimum Tsallis entropy criterion is derived.
The algorithm adaptively adjusts the search displacement to achieve feedback updating and iterative optimization, enabling precise estimation of the vibration phase and suppression of image blur, thereby yielding well-focused terahertz SAR images. Furthermore, comparative results on simulated and real measured data verify the rationality and feasibility of the proposed TAMVM, and demonstrate the superiority of the proposed adaptive motion compensation method in achieving precise terahertz SAR image focusing and suppressing false imaging points.
To address the limited generalization of single capacity degradation models when predicting the remaining useful life (RUL) of lithium-ion batteries under varying operating conditions, this paper proposes a prediction method based on the interactive multiple model particle flow filter (IMM-PFF). The method employs a particle flow filter to collaboratively estimate the states of exponential, polynomial, and Verhulst models, and dynamically fuses the multi-model predictions within an interactive multiple model framework, adaptively matching the multi-phase characteristics of battery degradation. Validation is conducted on lithium-ion battery degradation datasets (NASA and CALCE) covering diverse operating conditions, divided into three distinct degradation phases. Results demonstrate that, compared with single-model particle filter methods, IMM-PFF reduces the root mean square error (RMSE) of capacity prediction and the absolute RUL prediction error by 24.3% and 4.5%, respectively. This study provides a high-precision, highly robust framework for lithium-ion battery lifespan prediction in complex operational scenarios.
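The model-probability bookkeeping of an interactive multiple model filter can be sketched as follows; this simplified illustration updates only the model probabilities from per-model measurement likelihoods (a full IMM, and the paper's particle flow variant, also mixes and filters the model states):

```python
def imm_update(mu, transition, likelihood):
    """One IMM model-probability update: propagate the model probabilities
    through the Markov transition matrix, then reweight by each model's
    measurement likelihood and normalize.
    mu[i]: current probability of model i; transition[i][j] = P(j | i)."""
    n = len(mu)
    # Predicted model probabilities (the mixing normalizers).
    c = [sum(transition[i][j] * mu[i] for i in range(n)) for j in range(n)]
    post = [likelihood[j] * c[j] for j in range(n)]
    s = sum(post)
    return [p / s for p in post]

# Three degradation models: exponential, polynomial, Verhulst (illustrative).
P = [[0.90, 0.05, 0.05],
     [0.05, 0.90, 0.05],
     [0.05, 0.05, 0.90]]          # "sticky" transitions: models rarely switch
mu = [1 / 3, 1 / 3, 1 / 3]        # start undecided between the three models
lik = [0.1, 0.7, 0.2]             # the polynomial model fits the new capacity best
mu = imm_update(mu, P, lik)
print([round(m, 3) for m in mu])  # [0.1, 0.7, 0.2]
```

The fused capacity forecast is then the probability-weighted combination of the per-model predictions, which is how the framework adapts as the battery moves between degradation phases.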
To address the failure of AMCL (Adaptive Monte Carlo Localization) in similar and dynamic environments in mobile robotics, this paper proposes a method that builds a semantic chain list using an improved YOLOv8; the list provides a pre-localization pose for AMCL and alters the particle weight update mechanism to enhance localization accuracy and robustness. Built on the YOLOv8 architecture, the method integrates a gather-and-distribute mechanism and an attentional scale sequence fusion module to enhance the feature fusion capability of the neck, while pruning the model to improve both accuracy and speed. Laser SLAM (Simultaneous Localization And Mapping) is used to construct a 2D grid map, and the improved YOLOv8 extracts object semantics and maps them onto the grid map, generating a 2D semantic map. A semantic chain list is then constructed from the relationships between consecutive semantic objects. During localization, the semantic information of objects detected by the robot is matched against the semantic chain list to provide a pre-localization pose for AMCL, and the particle update mechanism is modified for precise localization. Additionally, a bag-of-words model is employed to mitigate semantic chain breaks caused by occlusion from obstacles. Localization experiments in similar and dynamic environments validate the effectiveness of the proposed algorithm.
In recent years, with the widespread adoption of intelligent mobile devices and their powerful sensing capabilities, mobile crowdsensing (MCS) has emerged as a promising method for large-scale sensing of urban dynamics. A key challenge in MCS is discovering the truth from the noisy sensory data submitted by numerous workers. However, the process of truth discovery inevitably raises privacy concerns. To address these challenges, researchers frequently integrate local differential privacy (LDP) techniques by adding random noise to workers’ data for privacy protection. Nonetheless, the randomness and unbounded nature of Laplace noise may inject excessive noise, resulting in outliers. Additionally, existing research often fails to adequately model the Laplace noise injected to satisfy LDP protection, resulting in low truth accuracy. Moreover, the current truth discovery methods are typically only applicable to discrete data, or cannot strictly satisfy the LDP constraints. To address the above issues, this paper proposes LEADER, an outlier-oriented truth discovery algorithm under LDP. First, the algorithm adds Laplace noise to workers’ data to ensure privacy protection. Second, it addresses outliers by adopting the Huber loss function to measure distances, mitigating their impact on truth estimation. Finally, through a data-driven metric approach, the algorithm optimizes the weight allocation for worker and task importance and groups workers based on the similarity of their submitted values. These enhancements enable LEADER to improve the accuracy of estimated truths while maintaining privacy protection. Theoretical analysis demonstrates that LEADER strictly satisfies LDP constraints, effectively handles continuous data, and achieves high-accuracy truth discovery. Furthermore, compared to non-private truth discovery methods, the LEADER algorithm maintains comparable communication and computational overhead. 
Experimental results on two real-world datasets and a synthetic dataset indicate that the LEADER algorithm significantly outperforms existing methods, achieving at least an 18% improvement in the accuracy of the truths estimated from noisy data.
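The two core ingredients named above, Laplace perturbation for LDP and Huber-loss-based robustness to the resulting outliers, can be sketched as follows; this is a minimal stdlib illustration (simple iteratively reweighted aggregation), not the paper's full weighted truth discovery with worker grouping:

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) by inverse CDF using only the stdlib; under
    epsilon-LDP with sensitivity s, the scale is s / epsilon."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def huber_weight(r, delta=1.0):
    """IRLS weight induced by the Huber loss: quadratic near zero, linear in
    the tails, so extreme noise draws pull the estimate far less."""
    return 1.0 if abs(r) <= delta else delta / abs(r)

def robust_mean(values, delta=1.0, iters=10):
    """Iteratively reweighted mean under the Huber loss."""
    est = sum(values) / len(values)
    for _ in range(iters):
        w = [huber_weight(v - est, delta) for v in values]
        est = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
    return est

random.seed(0)
readings = [20.0, 21.0, 19.5, 20.5, 80.0]   # the last worker is an outlier
noisy = [v + laplace_noise(0.5) for v in readings]
# The Huber-weighted estimate stays near the honest cluster (~20), whereas
# the plain mean is dragged toward the outlier (~32).
print(round(robust_mean(noisy), 1), round(sum(noisy) / len(noisy), 1))
```

Because the Laplace distribution is unbounded, occasional large draws look exactly like outlying workers; down-weighting large residuals instead of discarding them is what lets the aggregator keep every (privatized) report while limiting their damage.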
In open-source software and platforms, developers can submit issues to report software bugs or suggest new features. Due to a lack of experience and limited professional skills, users may struggle to summarize the content of issues accurately and effectively, resulting in low-quality issue titles, which in turn decreases the efficiency of addressing issues. Additionally, existing automatic issue title generation methods are primarily designed for English open-source platforms such as GitHub, and their performance degrades when applied to Chinese open-source platforms such as Gitee. Furthermore, existing methods mainly use the issue body description as input, ignoring the code snippets in the issue. In this paper, we propose a method called GITG (Gitee Issue Title Generation) specifically designed for the open-source platform Gitee. GITG addresses the challenge of generating issue titles for both Chinese and English text by fine-tuning the Chinese BART (Bidirectional and Auto-Regressive Transformers) pre-trained model on a constructed Gitee issue dataset. It leverages the bi-modal information from the issue body description and code snippets to generate informative and accurate issue titles. A dataset of 18 242 Gitee issue samples is constructed to validate the effectiveness of GITG. Experimental results demonstrate that GITG outperforms iTAPE and iTiger by at least 13.09%, 10.18%, and 12.84% on the ROUGE-1, ROUGE-2, and ROUGE-L metrics, respectively, and also achieves improvements in BLEU and METEOR. Human evaluation results further indicate that the titles generated by GITG improve the average scores by at least 26.7%, 20.8%, 24.2%, and 20.0% in overall score, fluency, informativeness, and conciseness, respectively, compared with iTAPE and iTiger.
Robust reversible watermarking (RRW) is a novel and valuable research direction in the field of information hiding, with promising applications in image copyright authentication and high-fidelity scenarios. However, existing RRW schemes are not sufficiently robust against geometric transformations, common attacks, and joint attacks. To address these issues, this paper proposes a robust reversible watermarking scheme using integer wavelet transformation (IWT) and adaptive Pseudo-Zernike moments, which is both reversible and robust while improving watermark imperceptibility and embedding capacity. First, the low-frequency region is obtained from the original image through IWT, the magnitudes of the generated Pseudo-Zernike moments are calculated, and qualified moments are selected by an adaptive normalization method and an optimized embedding strategy. Then, the robust watermark is embedded into the selected Pseudo-Zernike moments by improved quantization index modulation with distortion compensation (DC-QIM), and the watermarked Pseudo-Zernike moments are reconstructed to generate the watermarked image. Finally, its hash value is calculated, and the error between the watermarked image and the original image, together with the reconstruction error, constitutes auxiliary information embedded in the watermarked image, enabling reversible restoration of the carrier image in the absence of attacks. Experimental results show that the proposed scheme is robust to common signal processing and geometric transformation attacks. Compared with RRW schemes proposed in recent years, this scheme achieves better robustness while preserving good invisibility.
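The quantization index modulation that DC-QIM refines can be sketched as follows; this shows plain QIM on illustrative moment magnitudes (the distortion-compensation term and the Pseudo-Zernike moment computation are omitted):

```python
def qim_embed(value, bit, step=4.0):
    """Plain quantization index modulation: quantize the magnitude onto one
    of two interleaved lattices, one per watermark bit."""
    offset = (step / 2.0) * bit      # bit 0: multiples of step; bit 1: shifted
    return round((value - offset) / step) * step + offset

def qim_extract(value, step=4.0):
    """Decode by choosing whichever of the two lattices is nearer."""
    d0 = abs(value - qim_embed(value, 0, step))
    d1 = abs(value - qim_embed(value, 1, step))
    return 0 if d0 <= d1 else 1

moments = [10.3, 7.1, 12.8]                 # magnitudes of selected moments
bits = [1, 0, 1]                            # watermark bits to hide
marked = [qim_embed(m, b) for m, b in zip(moments, bits)]
print([qim_extract(m) for m in marked])     # [1, 0, 1]: exact recovery
print([qim_extract(m + 0.6) for m in marked])  # [1, 0, 1]: survives small noise
```

Decoding stays correct as long as an attack perturbs each magnitude by less than step/4, which is the robustness/distortion trade-off the quantization step controls; DC-QIM adds back a fraction of the quantization error to improve that trade-off.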
To address overlapping clusters, the lack of spatial information, and poor noise robustness in the possibilistic fuzzy C-means (PFCM) algorithm, an enhanced PFCM algorithm integrating regional and neighborhood-level information is proposed. First, a novel function with nonlinear attenuation characteristics is designed to suppress overlapping clusters; it effectively adjusts the contribution of different membership levels to the various clusters, thereby reducing cluster overlap. Second, by incorporating local variance constraints, the algorithm integrates regional and neighborhood-level image information, fully exploiting spatial information to improve noise robustness. Finally, a kernel metric is applied to the clustering dissimilarity measure, with the kernel bandwidth adaptively determined from the intrinsic properties of the image, further enhancing the algorithm's flexibility. Segmentation experiments on noisy synthetic images, brain magnetic resonance imaging (MRI), and noisy color images demonstrate that the proposed algorithm achieves superior visual segmentation results and outperforms existing algorithms on performance evaluation metrics.
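The kernel-induced dissimilarity can be sketched as follows; with a Gaussian kernel, the squared distance in feature space reduces to 2(1 - K(x, v)), and the bandwidth here is set from the data by a simple illustrative rule (the paper's adaptive bandwidth rule may differ):

```python
import math
import statistics

def kernel_distance_sq(x, v, sigma):
    """Squared kernel-induced distance ||phi(x) - phi(v)||^2 = 2(1 - K(x, v))
    for the Gaussian kernel K(x, v) = exp(-(x - v)^2 / (2 sigma^2)).
    Bounded in [0, 2), which is what tames outlier pixels."""
    k = math.exp(-((x - v) ** 2) / (2.0 * sigma ** 2))
    return 2.0 * (1.0 - k)

# Illustrative bandwidth choice: the standard deviation of the intensities
# themselves (one plausible "intrinsic property" of the image).
pixels = [12.0, 14.0, 13.0, 90.0, 15.0, 13.5]   # one impulse-noise pixel
sigma = statistics.pstdev(pixels)
near = kernel_distance_sq(13.0, 14.0, sigma)    # pixel close to a center
far = kernel_distance_sq(13.0, 90.0, sigma)     # noise pixel vs. the center
print(round(near, 4), round(far, 4))
```

Unlike the squared Euclidean distance, the kernel distance saturates near 2 for distant values, so an impulse-noise pixel cannot dominate the objective; the bandwidth sigma controls where that saturation begins.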
Non-contrast CT (Computed Tomography) liver tumor imaging shows great potential for advancing the screening of colorectal cancer with liver metastasis: it provides reliable liver tumor segmentation from non-enhanced CT images, avoiding contrast agent toxicity, radiation, and cost. In this paper, we propose a teacher-student model driven by dual-modal knowledge collaboration (BKC-TS) for accurately segmenting liver tumors in non-contrast CT images, significantly improving the safety, accuracy, and efficiency of liver tumor diagnosis and treatment. BKC-TS employs a teacher network to acquire explicit liver tumor knowledge and guide a student network in recognizing nearly invisible tumors in non-contrast images, and it integrates clinical examination text data with medical imaging data. The text data serve as prior information that guides tumor learning in CT images, enhancing precision and accuracy. The text-image collaborative teacher-student framework improves liver tumor segmentation accuracy in non-contrast images by integrating textual knowledge and addressing CT image resolution issues. The dual-modal knowledge fusion and transmission module combines imaging and clinical data through knowledge extraction, fusion, and transmission, effectively supporting tumor localization and recognition in non-contrast images. The Gaussian distribution-constrained student self-learning strategy boosts the student network's independent learning, generalization, and robustness by iterating the segmentation distribution and selecting beneficial knowledge. All experiments were conducted on a generalized dataset containing 1 140 CT liver images before and after enhancement. Experimental results show that BKC-TS achieves optimal liver tumor segmentation (at least a 2.17 percentage point IoU improvement), demonstrating its value for the development of non-contrast technology.
Existing semi-supervised learning methods typically treat all pixels in an image equally, ignoring differences in complexity between regions. As a result, the model under-learns hard-to-predict regions, weakening its ability to handle challenging areas. Furthermore, since pseudo-labels are generated from the model's predictions on unlabeled data, and the model performs poorly in challenging regions, inaccurate predictions increase the risk of introducing noise into the pseudo-labels and reduce their reliability. To address these issues, a semi-supervised learning framework based on suspicious pixel mutual correction (SPMC) is proposed. The framework consists of two networks with identical encoder structures but different upsampling methods in the decoder. Firstly, a common difficulty pixel screening (CDPS) module is designed for labeled data: it uses the suspicious pixels in the prediction results of each network to accurately screen out the pixels that are hard to predict for both networks, and then corrects these pixels with precise supervision signals, improving prediction accuracy in challenging regions. Secondly, a soft pseudo-label assisted teaching (SPLAT) module is developed for unlabeled data: it uses trusted pixels from the soft pseudo-labels generated by one network to selectively pseudo-supervise the suspicious pixels in the predictions of the other network. The two networks update their parameters through this high-quality interaction, reducing the model's cognitive bias and enhancing pseudo-label quality. Experimental results on three public medical datasets, left atrium (LA), brain tumor segmentation (BraTS), and automatic cardiac diagnosis challenge (ACDC), show that the proposed method is superior to current mainstream semi-supervised learning methods.
With the rapid development of deep learning theory and technology, deep learning-based optical flow estimation methods have improved significantly in accuracy and robustness. However, because of the limited local receptive field of standard convolution and because existing matching cost volume strategies can lead to matching ambiguities, current methods often suffer from low optical flow accuracy and severe motion blur, particularly for large-displacement motions and weak-texture regions. To address these issues, this paper proposes a global matching optimization optical flow estimation method combining depthwise separable residuals with multi-scale dual-channel attention. First, an encoding module is constructed that integrates depthwise separable residual blocks with multi-scale dual-channel attention, extracting more accurate deep features between consecutive frames while balancing parameter count and computational speed. Then, a learnable global matching optimization strategy for optical flow estimation is designed, which alleviates motion blur caused by matching ambiguities by excluding occlusions and efficiently exploiting global matching information. Finally, to enhance the model's training stability and generalization, a combined global and local optical flow loss function is proposed to constrain model training. Experiments on the MPI-Sintel, KITTI-2015, and Middlebury test datasets demonstrate that the proposed method achieves the best optical flow estimation accuracy among all compared methods, showing notably better accuracy and robustness in large-displacement and weak-texture regions.
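The global-matching step can be illustrated with a toy NumPy sketch (an assumed simplification of learnable global matching that ignores occlusion exclusion and multi-scale features): every pixel's feature is correlated with all pixels of the next frame, and the flow is the softmax-weighted expected displacement.

```python
import numpy as np

def global_match_flow(feat1, feat2, coords):
    """feat1, feat2: (N, C) per-pixel features of frames 1 and 2;
    coords: (N, 2) pixel coordinates shared by both frames.
    Returns (N, 2) flow vectors obtained by soft global matching."""
    corr = feat1 @ feat2.T                    # (N, N) all-pairs correlation
    corr -= corr.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(corr)
    w /= w.sum(axis=1, keepdims=True)         # softmax matching weights
    matched = w @ coords                      # expected matched position
    return matched - coords
```

Because the correlation is global rather than restricted to a local window, large displacements are matched as easily as small ones, which is the core motivation of the method described above.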
Open-set malicious traffic recognition plays an important role in network security. Existing methods are limited by a single model structure and a lack of flexibility, and they neglect the selection of incremental training samples, which leads to suboptimal classification performance. To address these problems, this paper proposes a method for the continuous detection and classification of malicious network flows based on a double-layer model and index distribution. Based on the relationship between the output weights of the scalable extreme learning machine (S-ELM) and the standard output, the method designs the following three indexes: the improved closest Pearson's correlation coefficient, the normalized relative variance, and the normalized distance to "the others" column. These indexes are multiplied together to obtain a comprehensive index, which is combined with a single classifier for unknown-class detection. To improve the continuous incremental capability of S-ELM in the open-set recognition task, a sample selection method based on the distribution of the comprehensive index is developed to select the optimal sub-dataset for incremental model training. Comparison experiments with existing representative methods show that the proposed method improves the NA index of unknown-class detection by 3%~13% and enhances the classification Acc index by about 3%~7% after continuous incremental updating.
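The index-combination idea can be sketched as follows. This is a hypothetical illustration only: the exact definitions of the three indexes in the proposed method differ, and the formulas, function names, and threshold here are assumptions. The point is that a confident, one-hot-like output yields a large product, while a flat output close to "the others" score yields a small one, flagging a likely unknown class.

```python
import numpy as np

def comprehensive_index(output, n_classes):
    """Illustrative combination of three indexes computed from a raw
    output vector whose last entry is treated as "the others" score."""
    scores = output[:n_classes]
    onehot = np.zeros(n_classes)
    onehot[np.argmax(scores)] = 1.0
    pearson = max(np.corrcoef(scores, onehot)[0, 1], 0.0)  # closeness to one-hot
    rel_var = np.var(scores) / (np.var(scores) + 1.0)      # normalized variance
    others_gap = abs(scores.max() - output[-1])            # distance to "others"
    return pearson * rel_var * others_gap

def is_unknown(output, n_classes, thresh=0.05):
    """A low comprehensive index suggests an unknown class."""
    return comprehensive_index(output, n_classes) < thresh
```

The same scalar index also supports the sample selection step: samples can be ranked by where they fall in the index distribution before incremental training.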
To adapt resource allocation in the data plane of a polymorphic network to the network usage requirements of service traffic, this paper proposes a method for the coexistence and optimized deployment of network modals in a polymorphic network environment. The method considers key constraints such as polymorphic network element resources, link resources, and traffic forwarding delay to guarantee the quality of user services, and it ensures the connectivity of each network modal through joint routing optimization. Since directly modeling the network modal deployment problem yields a non-convex problem that is difficult to solve, this paper transforms the original problem into a 0-1 integer linear programming problem. On this basis, a network modal deployment and routing selection algorithm based on randomized rounding is designed. Theoretical analysis shows that the algorithm achieves an approximation ratio determined by the number of polymorphic network elements in the network. Simulation results demonstrate that the proposed approach achieves efficient deployment of network modals in the data plane, reducing link load by 13% to 22% while satisfying network resource and traffic forwarding delay constraints.
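The rounding step can be sketched in a few lines (a hypothetical simplification of the proposed algorithm; the data layout and seed are assumptions): after solving the LP relaxation of the 0-1 program, each flow picks exactly one candidate route with probability equal to that route's fractional value, so each resource constraint holds in expectation.

```python
import random

def round_routes(fractional, seed=42):
    """fractional: {flow: {route: lp_value}}, where each flow's LP values
    sum to 1. Randomly select one route per flow with probability equal
    to its fractional value (basic randomized rounding)."""
    rng = random.Random(seed)
    chosen = {}
    for flow, routes in fractional.items():
        r, acc = rng.random(), 0.0
        for route, p in routes.items():
            acc += p
            if r < acc:
                chosen[flow] = route
                break
        else:                       # guard against floating-point drift
            chosen[flow] = route
    return chosen
```

Repeating the rounding and keeping the best feasible outcome is the standard way to turn the in-expectation guarantee into a high-probability one.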
With the rise of the internet of vehicles (IoV) and intelligent transportation systems, growing computational costs and problem scales have made real-time applications extremely challenging to implement, while also bringing vehicular edge computing (VEC) a large number of combinatorial optimization problems that urgently need to be solved in parallel. These complex practical problems often have non-convex, non-differentiable, or even black-box objectives and constraints, which may be beyond the scope of traditional mathematical methods. In this context, evolutionary multi-task optimization (EMTO), as a new paradigm in the field of multi-task optimization, effectively solves multiple independent optimization tasks in parallel by fully exploiting the potential correlations between tasks. An explicit EMTO framework tailored to the IoV is designed: by integrating the unique characteristics of IoV tasks and deeply exploring the implicit correlations among them, a novel EMTO approach for the IoV is proposed, which establishes mappings based on vehicle location information. This paper focuses on the multi-task optimization problem in the IoV context, jointly optimizing four aspects: vehicle-to-road data routing (DR), vehicle-to-road service migration (SM), vehicle-to-vehicle message transmission (MT), and vehicle-to-vehicle task offloading (TO), with the objective of maximizing the delivery rate of each task within a specified time frame. Furthermore, to enhance the efficiency of knowledge transfer among related tasks when their correlations are unknown, a migration mechanism grounded in task correlation assessment is introduced. Specifically, the longest common subsequence between links is used to calculate their similarity, and three migration strategies are devised according to different correlation distributions, ensuring the algorithm's capability for knowledge transfer across various scenarios.
Finally, experimental validation and performance evaluation demonstrate the effectiveness of the proposed framework and algorithm. Compared with other EMTO algorithms, the algorithm presented in this paper converges faster on the various optimization problems and yields better solutions after knowledge transfer among populations, achieving impressive results.
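The link-similarity measure used for correlation assessment can be sketched directly (a minimal version; the paper's normalization convention may differ): compute the longest common subsequence (LCS) of two node sequences and normalize by the longer length.

```python
def lcs_length(a, b):
    """Classic O(|a||b|) dynamic program for the LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def link_similarity(link_a, link_b):
    """Similarity in [0, 1] between two links given as node sequences."""
    if not link_a or not link_b:
        return 0.0
    return lcs_length(link_a, link_b) / max(len(link_a), len(link_b))
```

Two links sharing a long ordered run of common nodes score close to 1, signaling strongly correlated tasks and motivating more aggressive knowledge migration between their populations.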
The Diffie-Hellman (DH) key exchange protocol, as a widely adopted cryptographic protocol, plays a key role in various encryption application scenarios. However, given the rapid development of quantum computing technology, the DH protocol faces significant threats from quantum attacks, and there is an urgent need for alternative solutions with post-quantum security. Among these, lattice-based key exchange protocols are one of the main approaches to building post-quantum key exchange protocols. This paper first systematically reviews the design of lattice-based DH-like key exchange protocols and then points out two major distinctions between this type of protocol and the DH protocol. Firstly, these protocols strictly prohibit key reuse to avoid two potential key recovery attacks that it may enable, which significantly limits the flexibility and efficiency of the protocol. Secondly, such protocols often require additional interactions, which not only increase communication complexity and latency but also reduce efficiency in practical application environments. These differences make it difficult for lattice-based DH-like key exchange protocols to inherit all the advantages of the DH protocol when used as a direct replacement. To reduce the cost of post-quantum migration and achieve seamless integration with existing systems, designing non-interactive DH-like key exchange protocols that support key reuse has become an important research direction in cryptography; such protocols aim to preserve the efficiency and usability of the DH protocol while providing resistance to quantum attacks. Finally, through a thorough analysis of the advantages and challenges of such key exchange protocols, future research directions are clarified, with the aim of further optimizing protocol design, improving performance, and promoting the development of post-quantum migration technology.
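For reference, the classical DH exchange that these lattice-based protocols seek to emulate fits in a few lines; this is a toy sketch over a small Mersenne prime (real deployments use standardized groups and authenticated channels), shown to make concrete the non-interactive, key-reusable behavior the reviewed protocols try to preserve.

```python
import secrets

def dh_exchange(p, g):
    """Textbook Diffie-Hellman over Z_p*: each party publishes g^x mod p
    and both derive the same shared secret g^(ab) mod p."""
    a = secrets.randbelow(p - 2) + 1        # Alice's private exponent
    b = secrets.randbelow(p - 2) + 1        # Bob's private exponent
    A, B = pow(g, a, p), pow(g, b, p)       # public values exchanged
    return pow(B, a, p), pow(A, b, p)       # secrets computed by each side

P = 2**127 - 1                               # a Mersenne prime (toy size only)
```

One message per party, in either order, with freely reusable key pairs: exactly the two properties (non-interactivity and key reuse) that the lattice-based variants reviewed above struggle to retain.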
As a key research direction in the field of image editing, style transfer has found broad applications in artistic creation and related fields. Since Gatys et al. proposed using the correlations between deep convolutional features to capture texture information for style transfer, numerous neural style transfer algorithms have emerged. Recently, with the rise of various generative models, particularly the introduction of generative adversarial networks and diffusion models, style transfer has gained renewed attention. Additionally, breakthroughs in image-text cross-modal tasks have made text-guided image style transfer possible. This paper presents a comprehensive review of the latest developments in style transfer techniques, classifying methods into image-guided and text-guided categories based on the guiding condition. The methods are further categorized into autoencoder-based approaches, GAN-based methods, diffusion model-based methods, and other architectural variants. The paper also introduces relevant datasets and evaluation metrics for image style transfer tasks and compares state-of-the-art methods quantitatively and qualitatively. Finally, the paper discusses remaining challenges and provides insights into potential future research directions.
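The Gatys-style inter-feature correlation mentioned above is the Gram matrix of a convolutional activation map; a minimal NumPy version is below (normalization conventions vary across implementations):

```python
import numpy as np

def gram_matrix(features):
    """Channel-wise feature correlations of a (C, H, W) activation map;
    the resulting (C, C) Gram matrix captures texture/style statistics
    while discarding spatial arrangement."""
    C, H, W = features.shape
    flat = features.reshape(C, H * W)
    return (flat @ flat.T) / (C * H * W)
```

Matching Gram matrices between a generated image and a style image, layer by layer, is the original style loss from which most later neural style transfer methods descend.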