With the implementation of major national strategies in industries such as intelligent manufacturing and transportation, determinism has become a new focus of information networks, especially industry-specific networks. Existing deterministic network technologies provide deterministic guarantees based on network transmission elements (e.g., bandwidth or time slots). However, relying solely on network transmission elements does not support the diverse needs of emerging industry applications. For example, in computing network integration scenarios, intelligent computing tasks require the determinism of transmission and computing elements to achieve high-performance communication. In green communication scenarios, the determinism of node energy elements needs to be considered to maintain network operation stability. In response to the above requirements, this paper studies generalized deterministic identification networks with respect to multiple elements such as transmission, computing, storage, and energy based on a previously proposed network identification technology. First, a generalized deterministic identification network architecture is proposed that includes a differentiated service layer, a heterogeneous network layer, and an intelligent adaptation layer. The differentiated service and heterogeneous network layers uniformly identify the deterministic applications and networks. The intelligent adaptation layer schedules the network resources in units of flow. Existing deterministic resource scheduling methods, even if they only consider the basic deterministic elements in a single network, still face problems such as long computational time, high complexity, and low flexibility. To support a more complex collaborative adaptation of multiple deterministic elements, the end-to-end deterministic resource scheduling (E2eDet) algorithm, which is based on deep reinforcement learning, is designed. 
To meet the various deterministic requirements of different applications, E2eDet uniformly and collaboratively allocates multiple deterministic network resources for mixed data streams from end to end. Experimental results show that E2eDet increases the number of scheduled data flows by 28.4% and by a factor of 6.38 when compared with the DeepCQF and Random algorithms, respectively. Moreover, E2eDet strikes a better balance between computational time and scheduling ability.
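The flow-level scheduling idea above can be illustrated with a deliberately tiny tabular Q-learning loop (a hypothetical toy stand-in: E2eDet uses deep reinforcement learning over multiple deterministic resource elements end to end, and none of the names, capacities, or rewards below come from the paper):

```python
import random

# Toy deterministic-scheduling environment (hypothetical): each arriving flow
# must be mapped to one of three resource bundles (e.g., bandwidth/compute/
# energy slices); a bundle admits the flow only if capacity remains.
CAPACITY = [4, 4, 4]

def run_episode(q, flows=10, eps=0.1, alpha=0.5, gamma=0.9):
    free = CAPACITY[:]           # remaining capacity per bundle
    scheduled = 0
    for _ in range(flows):
        state = tuple(free)
        if state not in q:
            q[state] = [0.0, 0.0, 0.0]
        # epsilon-greedy action = choice of resource bundle for the flow
        if random.random() < eps:
            a = random.randrange(3)
        else:
            a = max(range(3), key=lambda i: q[state][i])
        reward = 1.0 if free[a] > 0 else -1.0   # +1 if the flow is admitted
        if free[a] > 0:
            free[a] -= 1
            scheduled += 1
        nxt = tuple(free)
        nxt_best = max(q.get(nxt, [0.0, 0.0, 0.0]))
        # tabular Q-learning update
        q[state][a] += alpha * (reward + gamma * nxt_best - q[state][a])
    return scheduled

random.seed(0)
q_table = {}
for episode in range(200):
    run_episode(q_table, eps=0.2)
print(run_episode(q_table, eps=0.0))  # flows scheduled by the greedy policy
```

A deep RL scheduler replaces the table with a neural network and the toy reward with the joint deterministic constraints, but the interaction loop has this shape.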
JointCloud Computing is a new computing paradigm that federates heterogeneous cloud services such as public clouds, private clouds, edge clouds, and micro clouds to achieve win-win outcomes through autonomous, peer-to-peer cooperation. It provides tamper-proof and traceable support for behavior interactions, trust building, and contribution measurement between independent cloud service entities, thereby minimizing the uncertainties that hinder collaboration between different stakeholders. What principles and mechanisms should be designed to improve the willingness and efficiency of JointCloud collaboration is a new proposition that needs further elaboration as JointCloud computing evolves. First, based on the design principle of building or enhancing trust relationships on the basis of “digital space evidence”, this paper proposes a software-defined model of the JointCloud distributed ledger (JCDL) based on blockchain and systematically expounds the inherent mechanisms by which the distributed ledger supports JointCloud service independence and auditability and unleashes the potential of JointCloud cooperation. Second, facing the complex interactions in JointCloud, it deepens and solidifies the design of the logical process and key aspects of JCDL operation, so as to avoid uncertain conflicts and contradictions arising from the combined action of complicated elements. Third, the logical model of the JCDL contract is refined and optimized for scenarios of data asset flow and complex asynchronous interaction. Finally, the paper discusses the core technologies and corresponding indicators of JCDL, represented by distributed consensus and smart contracts, and looks forward to the target attribute system and forward-looking challenges of the key techniques.
Photos captured under low-light conditions suffer from multiple coupled degradations, i.e., low brightness, color distortion, heavy noise, and detail loss, making low-light image enhancement a challenging task. Existing deep learning-based low-light image enhancement methods typically focus on improving illumination and color while neglecting the noise in the enhanced image. To solve these problems, this paper proposes a low-light image enhancement method based on task decoupling. According to the different requirements for high-level and low-level features, the low-light image enhancement task is decoupled into two subtasks: illumination and color enhancement, and detail reconstruction. Based on this decoupling, we propose a two-branch low-light image enhancement network (TBLIEN). The illumination and color enhancement branch is built as a U-Net structure with global feature extraction, which exploits deep semantic information to improve illumination and color. The detail reconstruction branch uses a fully convolutional network that preserves the original resolution while performing detail restoration and noise removal. In addition, for the detail reconstruction branch, we design a half-dual attention residual module that enhances features through spatial and channel attention mechanisms while preserving their context, allowing precise detail reconstruction. Extensive experiments on real and synthetic datasets show that our model outperforms other state-of-the-art methods and has better generalization capability. Our method is also applicable to other image enhancement tasks, e.g., underwater image enhancement.
Mainstream Transformer-based networks compute self-attention only within the input pixel blocks (patches), ignoring information interaction between different patches, and their single input scale blurs local feature details. To solve these problems, a Transformer-based backbone network for vision tasks, called ConvFormer, is proposed. ConvFormer aggregates semantic information between multi-scale patches through the designed channel-shuffle and multi-scale attention (CSMS) and dynamic relative position coding (DRPC) modules, and introduces depthwise convolution into the feedforward network to improve the local modeling capability of the network. In image classification, object detection, and semantic segmentation experiments on the public datasets ImageNet-1K, COCO 2017, and ADE20K, ConvFormer-Tiny improves accuracy by 0.3%, 1.4%, and 0.5% over the best networks of the same magnitude for each task, RegNetY-4G, Swin-Tiny, and ResNet50, respectively.
Compared with supervised deep learning denoising models that adopt fixed network parameter values, the unsupervised deep image prior (DIP) is more flexible and practical. However, the overall performance of the unsupervised DIP model is far below that of supervised models, especially when training data such as synthetic noisy images are easy to obtain. To improve the performance of the DIP model, in this paper we propose a denoising model called dual-channel deep image prior (DCDIP). The DCDIP model consists of three modules: preprocessing, online training, and image fusion. First, two classical denoising methods, i.e., BM3D and FFDNet, are used to preprocess a given noisy image to obtain two corresponding initial denoised images. Then, the original DIP architecture, which approximates a single target image, is expanded to a dual-channel working manner. In the first channel, the initial denoised image obtained with FFDNet and the noisy image are taken as the dual target images. Similarly, in the second channel, the initial denoised image obtained with BM3D and the noisy image are taken as the dual target images. On this basis, following the original DIP online training manner, the output image of the DCDIP is iteratively trained to approach the respective target images in the two channels, and the iterative process is terminated in time by the proposed edge-energy-based pseudo-reference image quality index. In this way, we obtain two intermediate generated images of high quality. Finally, the two intermediate generated images are fused into the final denoised image by the structural patch decomposition (SPD) fusion algorithm. The experimental results show that the proposed DCDIP outperforms the original DIP model and other unsupervised models by about 2.2 dB across different noise levels.
In addition, it approaches, and in some cases surpasses, recently proposed supervised denoising models, demonstrating the effectiveness of our improvement strategy. On real-world noisy images, the proposed DCDIP outperforms the second-ranked competing denoising method by about 2 dB, which verifies its unique advantages in practical application scenarios. The performance improvement is mainly due to the hybrid use of internal and external image prior-based denoising methods under the dual-channel DIP framework.
The space-time focusing characteristics of time-reversal transmission can effectively improve the performance of a communication system. This paper designs a scheme that applies time-reversal technology to the direct sequence spread spectrum (DSSS) communication system. The performance of the system, including the received signal-to-interference-plus-noise ratio, system capacity, and bit error rate, is analyzed theoretically and verified by simulation. Theoretical and simulation results show that time-reversal preprocessing significantly improves the performance of the DSSS communication system, which then outperforms a system using a high-complexity multi-channel Rake receiver; the advantage becomes more pronounced as the number of multipath components increases.
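The temporal focusing effect that underlies this scheme can be illustrated numerically (a minimal sketch, not the paper's DSSS system; the channel taps below are made up): pre-filtering the transmission with the time-reversed channel impulse response makes the effective channel the channel's autocorrelation, which concentrates energy at a single delay.

```python
# Time-reversal focusing demo: the effective response p = (reversed h) * h
# is the channel autocorrelation, whose absolute peak equals the channel
# energy sum(h_i^2) (Cauchy-Schwarz), i.e., multipath energy is refocused.

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

h = [0.2, 0.8, -0.4, 0.3, 0.1]       # hypothetical multipath channel taps
g = h[::-1]                           # time-reversal prefilter
p = convolve(g, h)                    # effective end-to-end response

peak = max(p, key=abs)
energy = sum(x * x for x in h)
print(round(peak, 6), round(energy, 6))  # focused peak equals channel energy
```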
Existing object detection methods are insufficient for low-light images due to their intrinsic properties such as low contrast, detail loss, and high noise. To solve this problem, a low-light object detection method that combines a spatial-aware attention mechanism with multi-scale feature fusion (SAM-MSFF) is proposed. First, multi-scale features are fused by a multi-scale interactive memory pyramid to enhance effective information under low-illumination conditions, and memory vectors are set to store sample features and capture the potential correlations between samples. Then, a spatial-aware attention mechanism is introduced to obtain long-distance context information and local information of features in the spatial domain, thereby enhancing the object features in low-light images and suppressing the interference of background information and noise. Finally, a multiple receptive field enhancement module is used to expand the receptive field of the features, and the features with different receptive fields are grouped and re-weighted, so that the detection network can adaptively adjust the receptive field size according to the input multi-scale information. Experimental results on the ExDark dataset show that the mAP (mean Average Precision) of the proposed method reaches 77.04%, which is 2.6%~14.34% higher than that of existing mainstream object detection methods.
Because it provides decision-making support for vehicle environment perception and path planning, street-scene video instance segmentation, as one of the key issues in self-driving research, has aroused wide concern. However, current methods suffer from insufficient edge feature extraction, which is caused by single receptive field sampling for multi-aspect-ratio anchor frames, and from a deficiency of detailed spatial position information in the high levels of the feature pyramid architecture. To alleviate these problems, we propose a video instance segmentation network with anchor frame calibration and spatial position information compensation (AS-VIS). First, an anchor frame calibration module is added as a branch parallel to the three prediction branches, to align multi-type receptive field sampling with the different aspect ratios of anchor frames. Second, a multi-receptive-field subsampling module is designed to fuse features from various receptive fields, losing less information than traditional down-sampling. Finally, to compensate for the spatial position information that is dispersed in the higher levels of the feature pyramid, the multi-receptive-field subsampling module is embedded at the higher level to map the active features of the target region from the lower level of the feature pyramid. The street-scene video dataset is extracted from the Youtube-VIS benchmark and includes 329 training videos and 53 validation videos. Quantitative comparison with YolactEdge shows that anchor frame calibration improves average accuracy by 8.63% and 5.09%, the spatial position information compensation feature pyramid network improves it by 7.76% and 4.75%, and the overall average accuracy of AS-VIS is improved by 9.26% and 6.46%.
The proposed AS-VIS network performs detection, tracking, and segmentation simultaneously on instance-level street-scene video sequences, and provides an effective theoretical basis for the environment perception of self-driving vehicles.
The hybrid architecture of convolutional neural networks (CNN) and Transformers can effectively model the local and global features of images, and has emerged as the predominant choice for remote sensing image change detection tasks. Nevertheless, these networks still confront challenges. The convolution and pooling operations employed by the CNN branch typically suppress the high-frequency information of remote sensing images, reducing the precision of object boundaries in change detection results. Additionally, the Transformer branch models long-range dependencies equally for all pixels in remote sensing images, thereby disregarding the shape information and semantic associations of objects, which limits the network’s ability to represent changed objects. To address these challenges, a remote sensing image change detection network based on edge guidance and a dynamically deformable Transformer is proposed. In the CNN branch, an edge information guidance module (EIG) is designed to enhance the edge information of changed objects by leveraging the high-frequency details of images, improving the edge accuracy of the changed objects. Simultaneously, an innovative dynamically deformable Transformer (DDaT) is designed to adaptively match changed objects of different shapes, selecting change-relevant features to model long-range dependencies and enhance the network’s feature expression capability. Experimental results show that the proposed method significantly improves detection accuracy on three public datasets, LEVIR-CD, CDD, and DSIFN-CD, and is significantly better than current mainstream networks in terms of the edge accuracy and internal integrity of changed objects.
Positron emission tomography (PET) plays an important role in the early diagnosis of many diseases, and one of the difficult problems in PET image reconstruction is how to preserve the edge characteristics of lesions in the reconstructed image while achieving good denoising performance. To address this problem, a kernel method for PET image reconstruction is proposed that combines deep image prior with graph Laplacian regularization. An improved U-net neural network is designed to represent the kernel coefficients in the PET forward model, and the graph Laplacian matrix is constructed from prior information. The reconstruction model is formulated as a maximum-likelihood, neural network-based constrained optimization problem with graph Laplacian regularization. By applying the optimization transfer algorithm, we derive a convergent iterative algorithm. Each iteration includes a KEM step for updating the image and a kernel coefficient update step using the neural network. Results from simulations and in-vivo data demonstrate that the proposed method achieves better reconstruction performance under different criteria, and outperforms the kernelized expectation maximization (KEM) and state-of-the-art neural KEM methods.
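The KEM step mentioned above builds on the classical ML-EM update for emission tomography; a minimal sketch on a made-up 2x2 system follows (illustrative only: the paper's kernelized representation, U-net coefficient model, and graph Laplacian term are all omitted here).

```python
# Toy ML-EM iteration: x <- (x / A^T 1) * A^T (y / A x).
# With noiseless data from an invertible system, iterates approach x_true.

A = [[0.7, 0.3],
     [0.2, 0.8]]                 # hypothetical 2x2 system (projection) matrix
x_true = [4.0, 1.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]  # data

x = [1.0, 1.0]                   # uniform initial image
sens = [sum(A[i][j] for i in range(2)) for j in range(2)]  # A^T 1 (column sums)
for _ in range(200):
    proj = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]        # A x
    back = [sum(A[i][j] * y[i] / proj[i] for i in range(2)) for j in range(2)]
    x = [x[j] * back[j] / sens[j] for j in range(2)]                         # EM update

print([round(v, 3) for v in x])  # close to x_true = [4.0, 1.0]
```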
Affected by many different factors, such as noise or random details, the size ratio of target to background, or the point spread function during imaging, the gray-level histograms of many images appear as non-modal, unimodal, bimodal, or multimodal patterns. To deal with automatic threshold selection in these four modal situations within a unified framework, an automatic thresholding segmentation method guided by maximizing the multi-directional weighted Tsallis entropy is proposed in this paper. Based on the multi-scale product effect of a newly designed arctangent directional convolution kernel, the proposed method first converts the gray-level histograms of different modalities into a unified unimodal right-biased gray-level histogram. After extracting this special unimodal right-biased histogram in four different directions, a multi-directional weighting strategy is used to construct a weighted Tsallis entropy objective function that is closely related to the gray levels of the original image. The gray level at which the objective function takes its maximum value is used as the final segmentation threshold. The proposed method is compared with three thresholding methods, one soft segmentation method, one active contour method, and one automatic clustering segmentation method. Experimental results on four synthetic images and fifty real-world images covering the four modal situations show that although the proposed method has no advantage in computational efficiency, it adapts more robustly to test images of different modalities, and it is superior to the other six segmentation methods in terms of the Matthews correlation coefficient used to quantify segmentation accuracy.
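The final maximization step can be sketched with the basic Tsallis-entropy threshold criterion (illustrative only: the arctangent directional convolution, multi-directional extraction, and weighting stages of the paper are omitted, and the toy histogram is made up):

```python
# Tsallis-entropy thresholding: for each candidate threshold t, compute the
# Tsallis entropies of the background (< t) and foreground (>= t) gray-level
# distributions and maximize the pseudo-additive combination.

def tsallis_threshold(hist, q=0.8):
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_val = 0, float("-inf")
    for t in range(1, len(hist) - 1):
        pb = sum(p[:t])
        pf = 1.0 - pb
        if pb <= 0 or pf <= 0:
            continue
        sb = (1.0 - sum((pi / pb) ** q for pi in p[:t])) / (q - 1.0)
        sf = (1.0 - sum((pi / pf) ** q for pi in p[t:])) / (q - 1.0)
        val = sb + sf + (1.0 - q) * sb * sf      # pseudo-additivity criterion
        if val > best_val:
            best_val, best_t = val, t
    return best_t

# Toy bimodal histogram: modes around levels 2 and 12 of a 16-level image,
# so a sensible threshold lies in the valley between them.
hist = [1, 4, 9, 4, 1, 0, 0, 0, 0, 0, 1, 5, 10, 5, 1, 0]
print(tsallis_threshold(hist))
```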
The construction of indistinguishability obfuscation (iO) is a long-standing concern for researchers. Existing constructions are based on primitives such as multilinear maps, functional encryption, and fully homomorphic encryption, and these routes naturally inherit shortcomings in security, efficiency, and generality. To explore new approaches with better generic functioning and indistinguishability from the angle of automatic search, circuit design problems are mapped to the construction of a graph neural network. In this paper, we present an iO framework called AG (Adversarial Graph neural network based iO), which is based on a dual adversarial neural network and can automatically generate sub-optimal iO with functional equivalence and generic circuit obfuscation. AG achieves indistinguishability by circuit garbling, a natural tool in constructing obfuscation. We then design graph-based automatic evolvement, which achieves sub-optimal circuit generalization. Our tests show that AG is simple to deploy and implement, and its efficiency is acceptable in achieving generalization and statistical indistinguishability.
Deep neural networks are vulnerable to adversarial examples. Gradient-based attacks perform well in the white-box setting but exhibit weak transferability in the black-box setting. Heavy-ball momentum and Nesterov momentum based attacks boost transferability by taking gradient history into account. To further exploit gradient history information, we propose an iterative fast gradient method based on Nesterov momentum (NAI-FGM), for its faster convergence property. By replacing the commonly used constant step size with an adaptive step size, NAI-FGM makes use of gradient history both in the step size and in the gradient direction. Additionally, we propose a new input transformation mechanism named the linear-transformation invariant method (LIM). Experimental results demonstrate that NAI-FGM and LIM outperform attacks of the same kind. Moreover, the integrated method LI-NAI-FGM achieves average attack success rates of 87.8% on normally trained models, 57.5% on adversarially trained models, and 67.2% on defense models, which are higher than the state-of-the-art results.
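The Nesterov-momentum core of such attacks can be sketched on a toy differentiable loss (a hypothetical stand-in for a network's classification loss; the paper's adaptive step size and the LIM transformation are omitted here):

```python
# Nesterov-momentum iterative FGM sketch: look ahead along the accumulated
# gradient, evaluate the gradient there, update the momentum with an
# L1-normalized gradient, and take a sign step.

def loss(x):
    # toy differentiable loss J(x) = sum(x_i^2); an attack tries to raise it
    return sum(v * v for v in x)

def grad_loss(x):
    return [2.0 * v for v in x]          # gradient of the toy loss

def nesterov_ifgm(x0, alpha=0.05, mu=1.0, steps=10):
    x, g = list(x0), [0.0] * len(x0)
    for _ in range(steps):
        # Nesterov look-ahead point
        x_nes = [xi + alpha * mu * gi for xi, gi in zip(x, g)]
        grad = grad_loss(x_nes)
        norm = sum(abs(v) for v in grad) or 1.0
        g = [mu * gi + v / norm for gi, v in zip(g, grad)]   # momentum update
        x = [xi + alpha * (1 if gi > 0 else -1) for xi, gi in zip(x, g)]
    return x

x_adv = nesterov_ifgm([0.3, -0.2])
print(loss([0.3, -0.2]) < loss(x_adv))   # the attack increases the loss
```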
Inverse synthetic aperture radar (ISAR) sparse imaging methods can provide imaging results with high image contrast and little sidelobe interference. The premise of sparse imaging is that the scatterer distribution of the scene or target to be imaged is sparse, which means that the final imaging quality is determined by the sparsity of the target or scene. The natural sparsity of ISAR target scenes emphasizes point-like features, whereas sparse representations of the target scene in transform domains can enhance general features (e.g., texture or contour features) of the target. Well-learned sparse transform dictionaries can adapt to the target scenes to be imaged and find their unique sparse representations. However, patch-oriented sparse representations ignore the geometric features of the target. The nearest neighbor graph model can establish a geometric feature description operator for given data, which can be used to describe the data's geometric feature information. In this paper, we introduce the nearest neighbor graph model (NNGM) into ISAR sparse imaging to express the geometric features of the target to be imaged. The NNGM of the target is then used as a regularization term and mapped into the ISAR sparse imaging model. We propose an ISAR sparse imaging method combined with the NNGM for imaging different types of real ISAR data. Compared with existing ISAR sparse imaging methods, the proposed method provides imaging results with clearer contours, and the imaging time is reduced by 10.4% on average.
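The nearest-neighbor-graph idea can be illustrated with a small made-up example: build a k-NN graph over data points and evaluate the graph-Laplacian penalty x^T L x, which is the kind of quantity a graph regularization term drives down (the paper's actual feature operator and imaging model are not reproduced here).

```python
import math

# Build a k-NN graph over 2-D points and return its combinatorial Laplacian
# L = D - W; x^T L x equals the sum of (x_i - x_j)^2 over graph edges, so it
# is small when neighboring points take similar values.

def knn_laplacian(points, k=2):
    n = len(points)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        order = sorted(range(n), key=lambda j: math.dist(points[i], points[j]))
        for j in order[1:k + 1]:          # skip self, connect k nearest
            W[i][j] = W[j][i] = 1.0
    deg = [sum(row) for row in W]
    return [[(deg[i] if i == j else 0.0) - W[i][j] for j in range(n)]
            for i in range(n)]

def penalty(L, x):
    n = len(x)
    return sum(x[i] * L[i][j] * x[j] for i in range(n) for j in range(n))

pts = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6)]   # two spatial clusters
L = knn_laplacian(pts)
smooth = penalty(L, [1, 1, 1, 4, 4])   # roughly constant within each cluster
rough = penalty(L, [1, 4, 1, 4, 1])    # oscillates across neighbors
print(smooth < rough)
```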
Most existing distance measures for the HFS (Hesitant Fuzzy Set) and its extensions either require the memberships in the HFEs (Hesitant Fuzzy Elements) to have equal length or need to rearrange their order, which results in errors and information loss under certain circumstances. To solve these problems, we propose a distance measure for HFSs based on a distance matrix. The limitations of the existing hesitant fuzzy comparison rules are analyzed, and a new total-order comparison rule is defined. The proposed distance can reasonably explain the shortcomings of the existing hesitant fuzzy distance measures. Furthermore, we develop a CDPA (Comparison, Distance, Prospect, Aggregation) method by combining the new comparison law, the proposed characteristic distance measure, prospect theory, and aggregation operators. Finally, we apply the CDPA method to decision-level target recognition and solve a target threat judgment problem. The results show that the proposed method removes the constraints on membership length and order and obtains correct recognition results.
To address the missed detections caused by fluctuations of the target signal-to-clutter ratio in complex clutter backgrounds, this paper proposes a power spectrum manifold-based information geometry dynamic programming track-before-detect (DP-TBD) algorithm, combining the performance advantage of the information geometry detector with the ability of dynamic programming to accumulate information across multiple frames. The algorithm exploits the duality between the power spectrum manifold and the matrix manifold and designs a power spectrum information geometry detector that reduces the computational complexity of the information geometry detector by approximately two orders of magnitude. Experiments on real recorded clutter data show that the power spectrum DP-TBD algorithm achieves almost the same detection performance as the matrix DP-TBD algorithm while requiring only 3%~8% of its running time. In addition, the power spectrum DP-TBD algorithm provides an SCR improvement of 2~3 dB over the information geometry detector.
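The multi-frame accumulation step of DP-TBD can be sketched in one dimension (illustrative: the per-cell scores below are made up, standing in for the paper's power spectrum information geometry detection statistic):

```python
# Dynamic-programming track-before-detect: each frame's per-cell score is
# added to the best accumulated score among cells reachable from the
# previous frame (here, a +/-1 cell motion model), so a weak but
# consistently moving target accumulates a large value.

def dp_tbd(frames, move=1):
    score = list(frames[0])
    n = len(score)
    for frame in frames[1:]:
        new = []
        for s in range(n):
            lo, hi = max(0, s - move), min(n, s + move + 1)
            new.append(frame[s] + max(score[lo:hi]))  # best predecessor
        score = new
    return score

# A weak target drifting right one cell per frame, buried in unit clutter
frames = [
    [1.0, 2.5, 1.0, 1.0, 1.0],
    [1.0, 1.0, 2.5, 1.0, 1.0],
    [1.0, 1.0, 1.0, 2.5, 1.0],
]
acc = dp_tbd(frames)
print(acc.index(max(acc)), round(max(acc), 2))  # peak follows the target: 3 7.5
```

A detection is declared when the accumulated maximum exceeds a threshold; backtracking the chosen predecessors recovers the track.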
The synergistic fusion of a low-resolution hyperspectral image (LR-HSI) and a high-resolution multispectral image (HR-MSI) to enhance hyperspectral spatial resolution has emerged as a prominent and actively pursued research area in hyperspectral image processing. At present, deep learning has become an efficient tool for HSI-MSI fusion. Despite its potential, some challenges remain, such as how to effectively mine the complementary information of the HSI and MSI, how to inject the spatial structure and detail of the MSI into the HSI, and how to maintain the spectral fidelity of the HSI. This study proposes a multilevel wavelet-deep aggregation network (MW-DAN) with dual branches, which combines the undecimated wavelet transform (UDWT) with a deep residual network to promote image reconstruction. In particular, the UDWT directional subband decomposition of the MSI is injected into the middle hidden layers of the network layer by layer through skip aggregation connections in the deep residual network, forming an information aggregation structure that enhances the detail injection and spectral fidelity of the directional subband structure. The entire network is trained end to end from LR-HSI, HR-MSI, and HR-HSI, and can learn the spatial-spectral fusion nonlinear mapping with superior performance. Experimental results on simulated and real datasets show that the proposed method is superior to state-of-the-art fusion methods in terms of objective evaluation indices, spectral fidelity, and visual performance.
Aiming at the problem of multi-static underwater small target classification and recognition, a multi-static underwater small target recognition method based on joint sparse representation in kernel space and exponential smoothing is proposed. Six typical features with complementary and correlated information are extracted from the multi-angle scattering signals of underwater targets. A feature selection method (RF-mRMR) combining random forest (RF) and minimum redundancy maximum relevance (mRMR) is proposed to obtain a comprehensive feature importance ranking. The optimal feature subset required by the classification model is obtained through experiments, so as to reduce the complexity of data processing and improve target classification. To capture the high-order structure in the data, the kernel function is used, on top of the joint sparse representation model, to map linearly inseparable feature data into a high-dimensional kernel feature space. To fully mine the useful information contained in the residual band after sparse reconstruction, an exponential smoothing formula is used to reuse the significant residual information. Finally, the category of the target is determined by the minimum error criterion in the kernel feature space. The proposed method is applied to sea trial data of four types of targets. The results show that the improved method has better classification performance than the seven comparison algorithms considered in this paper. In most cases, the proposed algorithm achieves higher recognition accuracy and a lower false alarm rate in the bistatic sonar mode than mono-static sonar.
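The mRMR half of the feature ranking can be sketched as a greedy relevance-minus-redundancy selection (a simplified stand-in: absolute Pearson correlation replaces mutual information, the random-forest importance averaging of RF-mRMR is omitted, and all data below are made up):

```python
# Greedy mRMR-style selection: at each step pick the feature with the highest
# relevance to the target minus its mean redundancy with already-selected
# features, so highly informative but duplicated features are skipped.

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def mrmr(features, target, k):
    selected, remaining = [], list(range(len(features)))
    while len(selected) < k:
        def score(j):
            relevance = abs(corr(features[j], target))
            redundancy = (sum(abs(corr(features[j], features[s]))
                              for s in selected) / len(selected)) if selected else 0.0
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

target = [0, 0, 1, 1, 0, 1, 1, 0]
f0 = [0.1, 0.0, 0.9, 1.0, 0.2, 0.8, 1.1, 0.1]   # informative
f1 = [0.2, 0.1, 1.0, 0.9, 0.1, 0.9, 1.0, 0.0]   # informative, redundant with f0
f2 = [0.5, 0.4, 0.6, 0.5, 0.4, 0.6, 0.5, 0.5]   # weaker but complementary
print(mrmr([f0, f1, f2], target, 2))
```

The most target-correlated feature is chosen first; the redundancy term then penalizes its near-duplicate in the second round.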
With the rapid development of mobile terminals, and especially of industrial Internet techniques, the dense distribution of terminal devices and the limits of wireless mobile bandwidth make it difficult for centralized cloud resource scheduling of specific business processes to meet the low-latency and low-cost computing needs of remote terminal applications. By focusing on local servers linked to cloud data centers, edge computing provides an agile computing service model for these mobile applications. Although this service pattern can effectively reduce the latency and communication costs of mobile applications, task offloading between heterogeneous resources in the edge computing environment often leads to potential data security hazards and compromised quality of service. In response to these challenges and technology trends, we propose a consortium blockchain based reliable task offloading approach for edge computing. In this approach, we design a genetic algorithm-based offloading strategy that uses the consortium blockchain for identity verification and for feedback of offloading results, and uses task completion time, offloading cost, and resource reliability as evaluation indices. Simulation results show that our approach improves task offloading reliability while satisfying the latency constraint, providing an effective data security approach for mobile smart applications.
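The genetic-algorithm offloading strategy can be sketched on a toy problem (hypothetical workloads and node speeds; the paper's fitness additionally folds in blockchain-verified resource reliability and offloading cost, reduced here to completion time alone):

```python
import random

# Toy GA for task offloading: a chromosome assigns each task to one of three
# heterogeneous nodes; fitness is the makespan (latest node finish time).

TASK_LOAD = [4, 3, 2, 6, 5, 1]        # hypothetical task workloads
NODE_SPEED = [1.0, 2.0, 4.0]          # hypothetical node processing speeds

def makespan(assign):
    finish = [0.0] * len(NODE_SPEED)
    for task, node in enumerate(assign):
        finish[node] += TASK_LOAD[task] / NODE_SPEED[node]
    return max(finish)

def evolve(pop=30, gens=60, pm=0.2):
    P = [[random.randrange(3) for _ in TASK_LOAD] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=makespan)
        elite = P[:pop // 2]                         # truncation selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(TASK_LOAD))
            child = a[:cut] + b[cut:]                # one-point crossover
            if random.random() < pm:                 # random-reset mutation
                child[random.randrange(len(child))] = random.randrange(3)
            children.append(child)
        P = elite + children
    return min(P, key=makespan)

random.seed(1)
best = evolve()
print(round(makespan(best), 3))   # near the lower bound 21/7 = 3.0
```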
Convolution is an indispensable part of modern convolutional neural networks, and it is also the most time-consuming. To solve the performance problem of convolution operators, fast convolution algorithms including the FFT (Fast Fourier Transform) and Winograd have been proposed. Winograd convolution improves the inference performance of small convolution kernels and is currently the mainstream implementation in convolutional neural networks. However, the implementations of Winograd convolution in many highly optimized deep neural network libraries and deep learning compilers are relatively inefficient. Due to the complex data dependence among the four stages of Winograd convolution, it is very challenging to optimize for GPUs. In this paper, the performance of the Winograd convolution operator is optimized for modern GPU architectures. The paper proposes an equivalent transformation of the Winograd computation stages and a synchronization-free implementation using Tensor Cores, and further proposes a partial kernel fusion method, PKF (Partial Kernel Fusion), that exploits the different GPU memory hierarchies. Based on TVM (Tensor Virtual Machine) and a code reconstructor named PKF-Reconstructor (Partial Kernel Fusion Reconstructor), a high-performance Winograd convolution is implemented. Evaluation on convolution operators from real-world convolutional neural networks shows that the proposed method achieves a performance improvement of 7.58~13.69 times over cuDNN.
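The arithmetic saving that Winograd convolution exploits is easiest to see in the 1-D building block F(2,3), which produces two outputs of a 3-tap filter with four multiplications instead of six (a generic sketch of the algorithm's four stages, not the paper's GPU implementation):

```python
# Winograd F(2,3): input transform, filter transform, element-wise product
# (the only 4 multiplications), and output transform. The 2-D F(2x2, 3x3)
# variant used in CNNs nests this construction in both dimensions.

def winograd_f23(d, g):
    # d: 4 input samples, g: 3 filter taps
    # input transform B^T d
    t0, t1, t2, t3 = d[0] - d[2], d[1] + d[2], d[2] - d[1], d[1] - d[3]
    # filter transform G g
    g0, g1, g2 = g[0], (g[0] + g[1] + g[2]) / 2.0, (g[0] - g[1] + g[2]) / 2.0
    # element-wise products
    m0, m1, m2, m3 = t0 * g0, t1 * g1, t2 * g2, t3 * g[2]
    # output transform A^T m
    return [m0 + m1 + m2, m1 - m2 - m3]

def direct(d, g):
    # reference: sliding 3-tap filter (cross-correlation, as in CNNs)
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]

d = [1.0, 2.0, -1.0, 3.0]
g = [0.5, -1.0, 2.0]
print(winograd_f23(d, g), direct(d, g))  # both give [-3.5, 8.0]
```

In a fused GPU kernel, the four stages above are the ones whose data dependences make scheduling hard; partial kernel fusion keeps intermediate tiles in faster memory between stages.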
In this article, the shielding effectiveness (SE) of cavities with complex structures is calculated by extending the BLT (Baum-Liu-Tesche) method. The BLT method was originally used to analyze transmission line models and was later extended to calculate the shielding effectiveness of cavities. However, it is limited to rectangular or cylindrical cavities and cannot analyze more complex cavity structures. Combined with a numerical method and a machine learning regression technique, it is extended here to the shielding effectiveness calculation of complex cavity structures. The eigenmodes of the two-dimensional cross section of the cavity are first calculated; then an equivalent circuit network is established based on the laws of electromagnetic wave propagation, and the frequency-domain response of the cavity’s shielding effectiveness is obtained by combining the BLT method; finally, machine learning is used to train an SVR (Support Vector Regression) model that corrects the frequency-domain response amplitude and yields the shielding effectiveness of the target cavity. Three cavities with different structures are used to verify the proposed method over the frequency range 0.1~3 GHz. Taking results obtained by commercial software simulation as the reference, the proposed method achieves a root mean square error of less than 3 at 85% confidence, and its calculation speed is over 80 times that of the commercial software.
In recent years, significant progress has been made in multi-exposure image fusion for dynamic scenes. In particular, deep-learning-based methods have shown great visual performance in dynamic multi-exposure image fusion and have become the mainstream approach in high dynamic range (HDR) imaging. However, current deep-learning-based methods are mostly trained in a supervised manner and rely heavily on ground-truth images, which makes them difficult to apply in real scenes. In this paper, we propose a self-supervised multi-exposure image fusion network for dynamic scenes. The main contributions are as follows: we design a self-supervised fusion network to explore the latent relationship between HDR and low dynamic range (LDR) images; we propose an attention-based global deghosting module to reduce the ghosting artifacts caused by moving objects; we propose a merging reconstruction module with residual and dense connections to improve reconstruction details; and we design a motion-mask-guided self-supervised loss function to train the proposed network efficiently. Experimental results demonstrate the effectiveness of the proposed method: compared with state-of-the-art methods, it achieves higher objective and subjective quality on reconstructed HDR images with faster running speed.
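One of the contributions, the motion-mask-guided self-supervised loss, can be illustrated with a minimal sketch. The functional form and names below are hypothetical, not the paper's actual loss: the idea is only that static pixels (mask = 1) are supervised by the aligned non-reference exposure, while moving pixels fall back to the reference frame so ghosting is not reinforced.

```python
import numpy as np

def masked_fusion_loss(fused, ref, other, mask):
    """Hypothetical motion-mask-guided loss; mask is 1 in static regions."""
    static = mask * (fused - other) ** 2        # trust aligned content
    moving = (1.0 - mask) * (fused - ref) ** 2  # trust the reference frame
    return float(np.mean(static + moving))
```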
In recent years, with the rapid development of high-precision orientation systems, imaging arrays based on handheld millimeter-wave radar have attracted extensive attention in fields involving internal-structure imaging, such as nondestructive testing and medical imaging. Unlike the common two-dimensional (2D) planar single-input-single-output (SISO) array, the elements of a handheld millimeter-wave radar imaging array are usually unevenly distributed in three-dimensional (3D) space, which renders existing fast imaging algorithms for dielectric targets inapplicable. In this paper, a fast imaging algorithm for 3D non-uniform SISO arrays is proposed. The algorithm expands each element of the 3D spatially random array into a virtual uniform array, transforms the data of all virtual arrays into the wavenumber domain and coherently accumulates them, and finally performs fast imaging through the 3D inverse fast Fourier transform (IFFT). Under the imaging parameters given in this paper, numerical simulations and experimental measurements show that the proposed algorithm achieves the same imaging quality while reducing the imaging time by more than 94% compared with the improved backward projection (IBP) algorithm.
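The last two steps, coherent accumulation in the wavenumber domain followed by a 3D IFFT, can be sketched as follows. Shapes and names are illustrative only; the expansion of elements into virtual uniform arrays and the wavenumber-domain resampling are omitted.

```python
import numpy as np

def accumulate_and_image(spectra):
    """spectra: list of 3D wavenumber-domain arrays, one per virtual array."""
    acc = np.zeros_like(spectra[0], dtype=complex)
    for s in spectra:
        acc += s                  # coherent (complex-valued) accumulation
    return np.fft.ifftn(acc)      # 3D IFFT back to the image domain
```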
Entanglement-assisted quantum error-correcting codes generalize standard quantum error-correcting codes by using entangled states pre-shared between the sender and the receiver to realize quantum communication. Because pre-shared entanglement carries an extra cost, constructing entanglement-assisted quantum error-correcting codes that consume few pre-shared entangled states is an interesting problem. In this paper, a sufficient condition for a class of negacyclic codes over finite fields to be Hermitian dual-containing codes is given. Then, based on the structural properties of cyclotomic cosets, the conditions under which different numbers of pre-shared entangled states arise are determined, and some new entanglement-assisted quantum maximum-distance-separable (MDS) codes with few pre-shared entangled states are obtained using the construction method of entanglement-assisted quantum error-correcting codes.
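For context, the widely used construction from classical codes can be stated as follows (a standard result in this area; the paper's exact construction may differ in details). A classical $[n,k,d]_{q^2}$ code with parity-check matrix $H$ yields an entanglement-assisted quantum code

```latex
% c counts the pre-shared ebits; c = 0 recovers the
% Hermitian dual-containing (standard quantum code) case.
\[
  [[\,n,\; 2k - n + c,\; d;\; c\,]]_q,
  \qquad
  c \;=\; \operatorname{rank}\!\left(H H^{\dagger}\right),
\]
% where H^\dagger denotes the conjugate transpose with respect to
% the Hermitian inner product over GF(q^2).
```

This is why Hermitian dual-containing conditions, as studied in the paper, directly control the number of pre-shared entangled states.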
Direction of arrival (DOA) estimation uses sensor arrays to identify the direction of sound sources, but traditional DOA estimation methods ignore the sparsity of sound sources in the spatial distribution. The penalty functions used by current convex and non-convex sparse DOA estimation methods do not account for the important scale-invariance property of the sparsity-promoting norm; they therefore cannot accurately describe the spatial sparse structure of the sound source, and it is difficult for them to achieve high DOA estimation accuracy. For this reason, this paper first uses the scale-invariant ℓ1/ℓ2 norm-ratio function to approximate the ℓ0 norm and characterize the spatial sparse structure of the sound source; second, to handle the non-convexity of the norm-ratio function, a smooth approximation is constructed using a smoothing technique; then, a scale-invariant ℓ1-over-ℓ2 regularized sparse DOA estimation model is built, and an optimization algorithm is developed for it. Extensive simulations demonstrate that the proposed algorithm achieves higher DOA estimation accuracy and better performance under different SNRs and snapshot numbers than popular multi-snapshot DOA estimation algorithms. Analysis of the S5 event in the SWellEx-96 sea trial further verifies the effectiveness of the proposed algorithm.
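A model of this kind has the following general form (the symbols are illustrative, not the paper's notation): $A$ is the array manifold matrix, $y$ the observations, $x$ the spatial spectrum, and $\delta > 0$ a smoothing parameter for the non-convex ratio.

```latex
% Scale-invariant l1-over-l2 sparse recovery with a smoothed denominator;
% the ratio is invariant to rescaling x, unlike the plain l1 norm.
\[
  \min_{x}\;
  \frac{\lVert x \rVert_{1}}{\sqrt{\lVert x \rVert_{2}^{2} + \delta}}
  \quad \text{s.t.} \quad
  \lVert A x - y \rVert_{2} \le \varepsilon .
\]
```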
The widespread dissemination of network rumors has caused great harm to society, so early rumor detection has become an important research focus. Most existing rumor detection methods focus on mining effective features from text content, user profiles, and propagation patterns, but they do not take full advantage of both the global semantic relationships of the text and the local context semantic relationships. To overcome these limitations and make full use of the global-local context semantic relationships of the text, the semantic content features, and the structural features of tweet propagation in rumor data, this paper proposes an early rumor detection algorithm based on a Bert-GNNs heterogeneous graph attention network (BGHGAN). The method constructs a tweet-word-user heterogeneous graph from historical rumor sets and user characteristics, and combines Bert with a GCN (Graph Convolutional Network) for feature learning to mine the relationship between the semantic features and the text of rumors. By decomposing the heterogeneous graph into a tweet-word subgraph and a tweet-user subgraph and applying a GAT (Graph Attention Network) to each for feature learning, the method makes full use of the global-local context semantic relationships of the text and the global structural relationships of the propagation graph to strengthen the feature representation. Finally, the outputs of the different modules are integrated through a subgraph-level attention mechanism for final rumor detection. Experiments on the real-world Twitter15 and Twitter16 datasets show that the detection accuracy of the algorithm reaches 91.4% and 91.9%, respectively, 1% and 1.4% higher than the best existing model, and that the algorithm can also detect rumors at an early stage.
The paper also examines the importance of different features for rumor detection and the influence of the quality of heterogeneous-graph construction.
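As an illustration of the final fusion step, a subgraph-level attention mechanism can be sketched as follows. The shapes and the score vector `w` are hypothetical; the paper's architecture is not reproduced here. Each subgraph module yields one embedding, and a learned score vector weights them before classification.

```python
import numpy as np

def subgraph_attention(embs, w):
    """embs: (num_subgraphs, dim) per-module embeddings; w: (dim,) scores."""
    scores = embs @ w                    # one attention score per subgraph
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()          # softmax over subgraphs
    return alpha @ embs                  # fused tweet representation
```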
As the scale and complexity of software increase, ensuring its quality and reliability becomes more difficult. Software fault localization techniques are among the most important software quality and reliability assurance methods, of which spectrum-based fault localization (SFL) is the most commonly used. SFL calculates the suspiciousness values of code statements by analyzing the statement coverage matrix and locates faulty statements according to these values. However, the statement coverage matrix suffers from a serious redundancy problem, which severely impairs the fault localization performance of SFL. For instance, in more than half of the statement coverage matrices of the 395 programs in the Defects4J dataset, 90% of the statements share identical coverage information with some other statement. Feature selection, a data preprocessing technique, is often used to obtain valuable feature subsets by removing redundant and irrelevant features. We propose a software fault localization approach based on redundant coverage information reduction (FLRR), which takes the statement coverage matrix as the original feature set and models the reduction of redundant coverage information as a feature selection problem. First, feature selection techniques are applied to reduce the statement coverage matrix, which includes both statement coverage information and test case execution results, yielding a subset of the matrix. Second, SFL is used to calculate the suspiciousness values of the statements in this subset, and the statements are sorted in descending order of suspiciousness. The presented method uses six common feature selection techniques to reduce the statement coverage matrix and four typical SFL techniques to localize faulty statements in the resulting subset.
To evaluate the fault localization performance of FLRR, comparative experiments were conducted with four typical SFL techniques on the Defects4J dataset, using the number of located faulty statements and MRR (Mean Reciprocal Rank) as evaluation metrics. Experimental results show that FLRR improves the fault localization performance of SFL. Compared with DStar, Ochiai, Barinel, and OP2, FLRR located 23, 26, 14, and 13 more faulty statements (improvements of 69.70%, 76.47%, 45.16%, and 38.24%) and improved MRR by 20.08%, 24.94%, 17.45%, and 19.15%, respectively.
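The two stages, redundancy reduction followed by suspiciousness ranking, can be sketched minimally as follows. This uses simple duplicate-column removal and the well-known Ochiai formula for illustration; FLRR's actual six feature-selection techniques are richer than this.

```python
import numpy as np

def drop_duplicate_columns(cov):
    """Keep only statements whose coverage column is not a duplicate.

    cov[i, j] = 1 if test i executes statement j.
    Returns the sorted indices of the retained statements.
    """
    _, keep = np.unique(cov, axis=1, return_index=True)
    return np.sort(keep)

def ochiai(cov, fail):
    """Ochiai suspiciousness per statement; fail[i] = 1 if test i fails."""
    ef = (cov * fail[:, None]).sum(axis=0)        # failing tests covering s
    ep = (cov * (1 - fail)[:, None]).sum(axis=0)  # passing tests covering s
    denom = np.sqrt(fail.sum() * (ef + ep))
    return np.divide(ef, denom,
                     out=np.zeros_like(denom, dtype=float),
                     where=denom > 0)
```

Statements are then ranked in descending order of the returned suspiciousness values.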
Blind single-image super-resolution refers to reconstructing a high-resolution image from a single low-resolution image with an unknown blur kernel, a severely ill-posed inverse problem. Additional information about the latent high-resolution image can be incorporated through a regularizer in order to recover reasonable high-frequency details for the low-resolution image. In this paper, we propose a blind super-resolution method based on a cross-scale low-rank prior for a single low-resolution image, which alternates between updating the blur kernel and the high-resolution image in a joint modeling approach. Exploiting the self-similarity across the high-resolution image, the low-resolution image, and its down-sampled version, we search the down-sampled image for patches similar to each low-resolution patch, and group into a matrix the cross-scale similar patches, which consist of the parent patches of the low-resolution patch and of its similar patches, taken from the reconstructed high-resolution image and the low-resolution image, respectively. Since the cross-scale similar patches in the low-resolution image provide potential details for reconstructing the high-resolution patches, applying low-rank matrix approximation to the cross-scale similar patches forces the reconstructed image to recover more high-frequency details and thus improves the accuracy of the kernel estimate during the iteration. In addition, the low-rank regularization captures the non-local structure of the data and is inherently robust to noise. Experimental results on real and simulated images show that the proposed method accurately estimates the blur kernel and reconstructs high-resolution images with sharp edges and fine details, outperforming existing blind super-resolution methods based on unsupervised learning.
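The alternating scheme described above can be summarized by a joint model of the following general form (our notation, not necessarily the paper's): $x$ is the latent high-resolution image, $k$ the blur kernel, $y$ the low-resolution observation, $\downarrow_s$ downsampling by factor $s$, and $M_i(x)$ the matrix of grouped cross-scale similar patches for the $i$-th reference patch.

```latex
% Alternate minimization over x and k; the nuclear norm ||.||_* is a
% convex surrogate for the rank of each cross-scale patch matrix.
\[
  \min_{x,\,k}\;
  \bigl\lVert y - (x \otimes k)\!\downarrow_{s} \bigr\rVert_{2}^{2}
  \;+\; \lambda \sum_{i} \bigl\lVert M_{i}(x) \bigr\rVert_{*}
  \;+\; \mu\, \lVert k \rVert_{2}^{2},
\]
```

where the last term stands in for a generic kernel regularizer.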
In the 3D modeling and visualization analysis of ground penetrating radar (GPR) data, the 3D time-varying arrays relied upon are usually generated indirectly from 1D data channels or 2D arrays. Since these data are often sparse and irregularly offset, densification is required to obtain high-density 3D time-varying arrays. This paper proposes a constrained-embedding dense time-varying array construction method based on a controllable variational autoencoder (ControlVAE). A time-varying data reconstruction network based on the ControlVAE increases the array density by generating pseudo-channel data close to the natural distribution, through the fusion of latent-space sampling interpolation with deep and shallow feature information. A data alignment module based on the scale-invariant feature transform extracts inter-channel gradient features and structural symmetry features to complete data alignment in the spatial domain. The shallow features are also embedded as constraint information in the data generation network to eliminate the influence of data channel offset on array generation. Experimental results show that the proposed method can efficiently reconstruct a time-varying array with a single scan direction from a two-dimensional B-scan data set, effectively reducing the cumulative error of the reconstructed model and improving robustness on complex real-world data.
To help researchers understand the project application, acceptance, and funding status of the “Semiconductor Science and Information Devices” discipline direction of the National Natural Science Foundation of China, this article presents a statistical analysis of the projects in 2023. First, the important reform measures of the National Natural Science Foundation of China in 2023 are briefly introduced. Subsequently, the application and funding status of F04, including general projects, youth science fund projects, regional science fund projects, key projects, outstanding youth science fund projects, and national outstanding youth science fund projects, is summarized and analyzed. The distribution of supporting organizations for general projects, youth science fund projects, and regional science fund projects is analyzed, as are the four types of scientific problem attributes of the applied projects. Finally, priority development directions in the field of “semiconductor science and information devices” are discussed.
The Internet is the paramount infrastructure of the information society, and architecture design is one of the fundamental research issues in promoting its healthy, orderly, and sustainable development. This paper clarifies the definition of network architecture, summarizes the current situation and key challenges of Internet development, and elucidates the impact and significance of network architecture research in facing these challenges and facilitating Internet development. Subsequently, historical research plans and projects on Internet architecture are outlined, and some key viewpoints and important results obtained in these investigations are presented. Two main technical roadmaps for updating the network architecture, the revolutionary and the evolutionary, are then discussed, together with recent research progress and future trends in detail. Finally, based on historical and recent research results, the paper offers opinions and comments on the significance, impact, methodology, and incentives of network architecture research.