
Collections

Special Column: Chinese Institute of Electronics (CIE) Science and Technology Awards
  • ZHANG Lei, CHEN Zhao, YIN Liu-guo
    ACTA ELECTRONICA SINICA. 2024, 52(4): 1132-1143. https://doi.org/10.12263/DZXB.20230972

    Space communication faces harsh and dynamically changing electromagnetic environments. Because classical coding algorithms are designed under the assumption of Gaussian channels, applying them directly to space communication results in low communication resource efficiency, poor flexibility, and deficient transmission reliability. This paper proposes a protograph-based generalized sparse random coding construction and implementation method for non-Gaussian channels in space communication. By introducing high-dimensional algebraic constraints on the protograph-based random coding architecture, decoding performance at extremely low code rates is improved. The paper also proposes a multi-rate compatible coding structure for non-Gaussian channels and different application scenarios by dynamically adjusting the order and dimension of the high-dimensional constraints. Finally, test results show that the proposed codes achieve coding rates ranging from 1/40 to approximately 1, information lengths from 104 bits to 20000 bits, and throughputs from 25 Kbps to 10 Gbps, reaching a frame error rate of 1.0×10⁻⁴ at an SNR as low as -37.1 dB (corresponding to E_b/N_0 = -0.79 dB).
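    As background, protograph-based codes are typically constructed by "lifting" a small base matrix into a full parity-check matrix. The sketch below shows that generic copy-and-permute step; the base matrix, lift size, and random circulant shifts are illustrative assumptions, not the paper's actual construction (which adds high-dimensional algebraic constraints on top of such an architecture).

```python
import numpy as np

def lift_protograph(base, Z, seed=None):
    """Expand a protograph base matrix into a binary parity-check matrix.

    Each base entry b >= 1 is replaced by the mod-2 sum of b random
    circulant permutation matrices of size Z x Z; zeros become zero blocks.
    (Parallel edges with equal random shifts would cancel; a real design
    would pick shifts deliberately.)
    """
    rng = np.random.default_rng(seed)
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            for _ in range(base[i, j]):
                s = rng.integers(Z)                          # circulant shift
                P = np.roll(np.eye(Z, dtype=np.uint8), s, axis=1)
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] ^= P
    return H

# A tiny rate-1/3 protograph (2 check rows, 3 variable columns), lift size 8.
base = np.array([[1, 1, 1],
                 [0, 1, 1]])
H = lift_protograph(base, Z=8, seed=0)
print(H.shape, "design rate =", 1 - H.shape[0] / H.shape[1])
```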

  • LI Jia-ning, YAO Peng, JIE Lu, TANG Jian-shi, WU Dong, GAO Bin, QIAN He, WU Hua-qiang
    ACTA ELECTRONICA SINICA. 2024, 52(4): 1103-1117. https://doi.org/10.12263/DZXB.20230967

    The von Neumann computer architecture faces the "memory wall" bottleneck, which hinders further performance improvement of AI (Artificial Intelligence) computing. Computing-In-Memory (CIM) breaks this limitation and greatly improves the performance of AI computing. At present, CIM schemes have been implemented in a variety of storage media. According to the type of computation signal, CIM schemes can be divided into digital and analog schemes. Although CIM has greatly improved the performance of AI computing, its further development still faces major challenges. This article provides a detailed comparative analysis of CIM schemes in different signal domains, pointing out the main advantages and disadvantages of each scheme as well as the challenges CIM faces. We believe that with cross-level collaborative research and development spanning process integration, devices, circuits, architecture, and software toolchains, CIM will provide more powerful and efficient computing power for AI computing at the edge and in the cloud.
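    To make the digital/analog distinction concrete, here is a minimal numpy sketch contrasting a full-precision (digital) matrix-vector multiply with a crude model of an analog CIM crossbar that adds conductance quantization and read noise; the level count and noise magnitude are invented for illustration and do not model any specific device.

```python
import numpy as np

rng = np.random.default_rng(1)

def analog_cim_mvm(W, x, levels=16, noise_std=0.02):
    """Crude model of an analog compute-in-memory matrix-vector multiply.

    Weights are quantized to a limited number of conductance levels, and the
    bit-line current accumulation is perturbed by Gaussian read noise.
    """
    w_max = np.abs(W).max()
    G = np.round(W / w_max * (levels // 2)) / (levels // 2) * w_max  # quantized
    y = G @ x                                   # Kirchhoff current summation
    return y + noise_std * np.abs(y).max() * rng.standard_normal(y.shape)

W = rng.standard_normal((64, 128))
x = rng.standard_normal(128)
exact = W @ x                                   # digital full-precision reference
approx = analog_cim_mvm(W, x)
print("relative error:", np.linalg.norm(exact - approx) / np.linalg.norm(exact))
```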

  • LIU Yuan-an, GAO Zhao-dong, SUN Sheng, SU Ming, ZHENG Shao-yong, WU Fan, GUO Xing-yue, MU Dong-mei
    ACTA ELECTRONICA SINICA. 2024, 52(4): 1118-1131. https://doi.org/10.12263/DZXB.20230970

    When the working bandwidth exceeds 20 GHz (wavelengths from below 15 mm down to vanishingly small), the electromagnetic (EM) environment of high-speed, high-density electronic systems becomes very complex, and modeling, analyzing, and predicting the EM response becomes increasingly difficult; the problem is even more acute when the bandwidth reaches 40 GHz and above. To predict, evaluate, and even control the process and effect of the EM response at the design stage, accurate modeling methods and large-scale fast computing techniques are required, especially when ultra-large-bandwidth, ultra-high-speed hybrid circuits and integrated circuits are involved. This paper proposes a multi-scale processing technology based on the fundamental theory of electromagnetic computing in hybrid-circuit environments, which addresses the challenge of efficient, high-precision field-circuit hybrid modeling of the electromagnetic environments of multi-scale complex circuits by employing the Huygens equivalence principle together with the electromagnetic convergence and deceleration mechanism of electromagnetic singularities. Using the Huygens equivalence principle and the Kirchhoff integral equation, Huygens ports are defined on regional boundary surfaces, and a general domain decomposition method is proposed that can simultaneously partition the entire region into vertical and horizontal subregions for an arbitrarily complex printed circuit board (PCB). The proposed method realizes hierarchical classification and modular packaging of any PCB structure and improves the flexibility of high-speed, high-density electronic system analysis. A technical approach based on the eigenmode expansion method is proposed, and a fast parallel processing technology based on modal and domain decomposition is developed, which achieves both high precision and high computational speed through eigenfunction representation of discontinuous field distributions, reducing the computation and design time of complex electronic systems. Statistical verification over 0~40 GHz reveals a frequency error of 3.7% and an amplitude error of ±3 dB. The hierarchical modeling and analysis method proposed in this paper can be applied to the design of high-end electronic communication systems, improve China's capabilities in circuit design and electromagnetic-environment control for broadband high-speed digital-analog systems, and shorten the product development cycle.
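    As a minimal illustration of the eigenmode expansion idea, the sketch below projects a sampled 1-D port field onto orthonormal sine eigenfunctions and reconstructs it from the modal coefficients; the port geometry and field profile are invented, and the paper's actual method operates on far richer 3-D structures.

```python
import numpy as np

# Illustrative eigenmode expansion on a 1-D port of width a: project a
# sampled field E(x) onto orthonormal sine eigenfunctions and reconstruct it
# from a finite number of modal coefficients. All values are invented.
a, N = 1.0, 400
x = np.linspace(0.0, a, N)
dx = x[1] - x[0]
E = np.exp(-((x - 0.3 * a) / (0.05 * a)) ** 2)   # sharply localized field

def mode(n):
    """n-th orthonormal sine eigenfunction of the port."""
    return np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)

coeffs = [(E * mode(n)).sum() * dx for n in range(1, 41)]   # modal projections
E_hat = sum(c * mode(n) for n, c in enumerate(coeffs, start=1))
print("relative L2 reconstruction error:",
      np.linalg.norm(E - E_hat) / np.linalg.norm(E))
```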

  • SHEN Chun, LI Jian-bing, GAO Hang, YIN Jia-peng, WANG Xue-song
    ACTA ELECTRONICA SINICA. 2024, 52(4): 1189-1204. https://doi.org/10.12263/DZXB.20230699

    A low-level complex wind field usually refers to the wind field below 600 m, with rapidly fluctuating speed and direction. Fine-resolution, large-scale detection under all-weather conditions is a long-standing challenge in radar science, meteorology, and related fields, involving key technologies such as sensor design, three-dimensional wind field retrieval, and information fusion. This paper reviews the technical scope and research status of wind field detection from the aspects of light detection and ranging (LiDAR) detection, microwave radar detection, all-weather data fusion, and hazard feature extraction, and finally analyzes development trends. We believe this work provides a reference for the development and application of all-weather precision detection technology for low-level complex wind fields.
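    One standard building block of three-dimensional wind field retrieval from a scanning Doppler LiDAR or radar is the velocity-azimuth display (VAD) fit, sketched below on synthetic data; this is a textbook technique offered for orientation, not the specific retrieval developed in the paper.

```python
import numpy as np

# VAD retrieval: at a fixed elevation angle phi, the radial velocity vs.
# azimuth theta follows v_r ~ u*sin(theta)*cos(phi) + v*cos(theta)*cos(phi)
# + w*sin(phi). Fit (u, v, w) by least squares from one conical scan.
rng = np.random.default_rng(2)
phi = np.deg2rad(5.0)                           # elevation angle
theta = np.deg2rad(np.arange(0, 360, 10))       # scan azimuths
u_true, v_true, w_true = 6.0, -3.5, 0.2         # "unknown" wind (m/s)
v_r = ((u_true * np.sin(theta) + v_true * np.cos(theta)) * np.cos(phi)
       + w_true * np.sin(phi) + 0.3 * rng.standard_normal(theta.size))

A = np.column_stack([np.sin(theta) * np.cos(phi),
                     np.cos(theta) * np.cos(phi),
                     np.full(theta.size, np.sin(phi))])
u, v, w = np.linalg.lstsq(A, v_r, rcond=None)[0]
print(f"retrieved wind: u={u:.2f}, v={v:.2f}, w={w:.2f} m/s")
```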

  • HUANG Han-lin, XU Ke, LI Qi, LI Tong, FU Song-tao, GAO Xiang-yu
    ACTA ELECTRONICA SINICA. 2024, 52(4): 1083-1102. https://doi.org/10.12263/DZXB.20230682

    The Internet, as a critical component of a nation's information infrastructure, plays a significant role across many domains. However, as its scale continues to expand and its applications deepen, inconsistent network behaviors can have potentially catastrophic consequences. To ensure the normal operation of the Internet and the consistency of network behaviors, there is an urgent need for deployable network verification technologies that align network operations with the intentions of network operators. Extensive research has been conducted on network verification, helping users automatically detect network errors and analyze their root causes. However, to meet the increasing demands of the expanding Internet, scalability has become a crucial challenge in deploying network verification technologies. Specifically, how to quickly identify and diagnose errors in network policies while satisfying time and space complexity constraints has become a research hotspot for applying network verification effectively in practice. To address this problem, this paper surveys and summarizes cutting-edge research on the temporal and spatial scalability of network verification. It first introduces the background of network verification and then describes the current issues and challenges. Focusing on the core issue of scalability, the paper analyzes existing work on scalable verification from both the data plane and control plane perspectives, providing a systematic analysis of the characteristics of these approaches and showing the distinctions and connections among related studies. From the existing research, we find that: (1) the scalability of data plane verification is primarily constrained by header space and forwarding matching rules, while the scalability of control plane verification is mainly limited by the complexity of multiple protocols and policies; (2) although data plane and control plane research employ similar scalable verification techniques, they address different but interconnected targets: for example, incremental computation in the data plane primarily focuses on updating packet equivalence classes, while in the control plane it primarily deals with network models affected by configuration changes, and when applying network slicing techniques, both planes independently validate the network by dividing it into multiple segments; (3) compared to spatial scalability, current research places greater emphasis on temporal scalability, and reducing verification time overhead appears to be the primary pursuit of verification tools; (4) previous research predominantly adopted a centralized approach, collecting control plane or data plane information and then performing centralized analysis and verification, but there has been a recent trend toward distributed verification, such as Coral and Tulkun in control plane verification. Finally, based on the current research landscape, the paper summarizes and forecasts research trends in scalable network verification technologies, offering valuable insights for researchers in this field.
    In conclusion, this paper presents a comprehensive review and outlook on scalability in network verification. It emphasizes the importance of aligning network behaviors with the intentions of network operators to ensure the reliable and consistent operation of the Internet. By addressing the challenges of scalability, researchers can advance verification technologies that effectively verify large-scale networks within time and space complexity constraints, ultimately enhancing the reliability and security of the Internet as a critical information infrastructure.
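    To ground the notion of packet equivalence classes mentioned in finding (2), here is a toy sketch that partitions a 4-bit header space by rule-match signature, so a property is checked once per class rather than once per packet; the rules, header width, and first-match semantics are invented for illustration.

```python
from itertools import product

# Toy data-plane verification: partition a 4-bit header space into packet
# equivalence classes (PECs) -- sets of headers treated identically by every
# rule. Rules are (ternary match pattern, action); first match wins.
rules = [("1***", "fwd:portA"), ("01**", "fwd:portB"), ("****", "drop")]

def matches(pattern, header):
    return all(p in ("*", h) for p, h in zip(pattern, header))

def first_action(header):
    return next(a for p, a in rules if matches(p, header))

classes = {}
for bits in product("01", repeat=4):
    header = "".join(bits)
    signature = tuple(matches(p, header) for p, _ in rules)  # rule-match vector
    classes.setdefault(signature, []).append(header)

for headers in classes.values():
    print(first_action(headers[0]), "<-", len(headers),
          "headers, e.g.", headers[0])
```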

  • LI Xue-long
    ACTA ELECTRONICA SINICA. 2024, 52(4): 1041-1082. https://doi.org/10.12263/DZXB.20230698

    Approximately 71% of the Earth's surface is covered by water, such as rivers, lakes, and seas; concurrently, terrestrial imaging contends with the influence of water in the forms of clouds, snow, rain, and fog. Nevertheless, contemporary machine vision research and application systems predominantly concentrate on visual tasks in aerial and vacuum environments, leaving a dearth of systematic investigation into visual tasks in aquatic contexts. Water-related vision, emblematic of water-based optical technology in the realm of vision, is committed to dissecting the scientific intricacies of light-water interactions and inter-medium light propagation, and entails intelligent processing and analysis of visual image signals in aquatic settings. The discipline concurrently addresses the engineering and technical intricacies intrinsic to developing advanced, intelligent water-related vision apparatus. Setting out from the fundamental scientific question, "Why is the ocean blue?", this paper offers an exhaustive survey of the repercussions of seawater's light absorption, scattering, and attenuation mechanisms on underwater visual tasks. Furthermore, current methodologies for processing and refining subaquatic images are systematically examined. Exploiting the optical attributes of water and the factors contributing to image degradation, this paper highlights our team's milestones in pioneering key technologies for underwater imaging and image analysis. Substantial headway has been achieved in devising underwater observation and analytical apparatus, encompassing the full-ocean-depth ultra-high-definition camera "Haitong", the full-ocean-depth 3D camera, and the full-ocean-depth high-definition video camera. These innovations have established a comprehensive and methodical proficiency in optical detection in submerged contexts, covering color, intensity, polarization, and spectral analysis. This collective endeavor effectively bridges the gap in China's full-ocean-depth optical detection technology and propels exploration and technological innovation in the domain of water-related vision, offering remarkable application value and societal benefit.
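    A widely used starting point for the underwater degradation physics the survey discusses is the attenuation-plus-backscatter image formation model; the sketch below synthesizes a degraded image and then inverts it with known parameters, all of which are invented for illustration.

```python
import numpy as np

# Simplified underwater image formation model (per color channel c):
#   I_c(x) = J_c(x) * exp(-beta_c * d(x)) + B_c * (1 - exp(-beta_c * d(x)))
# J: clear scene, d: range, beta: attenuation, B: veiling (backscatter) light.
beta = np.array([0.4, 0.1, 0.06])               # red attenuates fastest in water
B = np.array([0.05, 0.25, 0.35])                # bluish veiling light
J = np.random.default_rng(3).random((64, 64, 3))
d = np.full((64, 64), 5.0)                      # 5 m range everywhere

t = np.exp(-beta * d[..., None])                # per-channel transmission
I = J * t + B * (1 - t)                         # degraded observation
J_hat = (I - B * (1 - t)) / t                   # inversion with known parameters
print("max reconstruction error:", np.abs(J - J_hat).max())
```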

  • WANG Zi-xuan, WANG Shan-hu, WANG Xin, YAO Jia-fei, ZHANG Jun, HU Shan-wen, CAI Zhi-kuang, GUO Yu-feng
    ACTA ELECTRONICA SINICA. 2024, 52(4): 1182-1188. https://doi.org/10.12263/DZXB.20230912

    With the development of ultra-low-power (ULP) Internet of Things (IoT) systems, fast-startup crystal oscillators with energy injection techniques have gained popularity due to their significant impact on the power consumption of IoT systems. Energy injection can significantly reduce the startup time and startup energy of a crystal oscillator but imposes strict requirements on the accuracy of the injection source. To enlarge the injection frequency-mismatch tolerance and achieve high-efficiency injection, this work presents a delay-locked-loop-assisted phase-error correction technique. The injection frequency-mismatch tolerance is enlarged to 2%, and the non-injection duration of the startup is only 4 cycles, achieving high injection efficiency. The proposed crystal oscillator is designed and fabricated in a 40 nm CMOS process. The prototype is tested with a 24 MHz crystal under a 1.0 V supply. It achieves a 7.2 μs startup time and consumes 5.1 nJ of startup energy under 2% injection frequency mismatch. Compared to traditional injection with the same 2% mismatch, this work shortens the startup time by 99.66%.
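    A back-of-envelope sketch of why injection frequency mismatch matters: the phase error between the injection source and the crystal grows by 2π·ε per cycle, so injection stays constructive for only about 0.25/ε cycles before a correction is needed. The numbers below follow the abstract's 24 MHz crystal and 2% mismatch; the correction step is an idealized stand-in for the paper's DLL-assisted phase-error loop.

```python
import math

# With relative frequency mismatch eps, phase error grows by 2*pi*eps per
# crystal cycle; injection stops being constructive near 90 degrees of error.
f0, eps = 24e6, 0.02
cycles_per_correction = 0.25 / eps              # (pi/2) / (2*pi*eps) = 12.5
print(f"constructive injection lasts ~{cycles_per_correction:.1f} cycles "
      f"(~{cycles_per_correction / f0 * 1e9:.0f} ns) before correction is needed")

phase, corrections = 0.0, 0
for cycle in range(1, 173):                     # ~7.2 us at 24 MHz
    phase += 2 * math.pi * eps
    if phase >= math.pi / 2:                    # phase-error detector trips
        phase = 0.0                             # idealized re-alignment
        corrections += 1
print(f"corrections needed during a 7.2 us startup: {corrections}")
```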

  • ZHU Yuan-wei, HUANG Ya-kun, QIAO Xiu-quan
    ACTA ELECTRONICA SINICA. 2024, 52(4): 1144-1154. https://doi.org/10.12263/DZXB.20230788

    Tile-based methods, which use divide-and-conquer and on-demand transmission, are promising for handling 3D holographic video streaming. However, current solutions either lack an adaptive tiling scheme or cannot be applied to mobile real-time scenarios. In this paper, we propose VVSTiler (Volumetric Video Streaming Tiling selector), an adaptive tiling selector for holographic video communications that adaptively maximizes perceived video quality under dynamic and limited computing and bandwidth resources. Specifically, we first conduct a preliminary study on the impacts of different tiling schemes and find that fine-grained tiles improve the utilization of dynamic network resources while coarse tiles ensure coding efficiency and robustness, which motivates us to construct an adaptive tiling optimization based on the predicted viewport, available computing resources, and network bandwidth; we then devise a fast algorithm to enable online tiling decisions. Extensive experiments on the 8iVFB (8i Voxelized Full Bodies) datasets compare VVSTiler with state-of-the-art tiling-based baselines. The results show that VVSTiler achieves up to 60.4% video quality improvement and saves on average 27% bandwidth per frame against the closest competitor, under inaccurate and accurate viewport predictions, respectively.
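    For intuition about viewport-aware tile adaptation, the sketch below greedily upgrades tile quality levels by marginal quality-gain-per-bit under a bandwidth budget; the quality model, viewport weights, and budget are invented, and this is not VVSTiler's actual optimization.

```python
import heapq

# Greedy utility/cost tile selection: given per-tile viewport weights and a
# bandwidth budget, repeatedly take the upgrade with the best marginal
# perceived-quality gain per bit. All numbers are invented.
levels_kbps = [0, 200, 800, 2000]               # bitrate per quality level
gain = [0.0, 1.0, 1.6, 1.9]                     # diminishing quality gain
tiles = {"t0": 0.9, "t1": 0.7, "t2": 0.2, "t3": 0.05}   # viewport weights
budget = 3000.0                                 # kbps

level = {t: 0 for t in tiles}
heap = [(-(w * (gain[1] - gain[0])) / (levels_kbps[1] - levels_kbps[0]), t)
        for t, w in tiles.items()]
heapq.heapify(heap)
while heap:
    _, t = heapq.heappop(heap)
    cur = level[t]
    cost = levels_kbps[cur + 1] - levels_kbps[cur]
    if cost > budget:
        continue                                # cannot afford this upgrade
    budget -= cost
    level[t] = cur + 1
    if level[t] + 1 < len(levels_kbps):         # queue the next upgrade
        dq = tiles[t] * (gain[level[t] + 1] - gain[level[t]])
        dc = levels_kbps[level[t] + 1] - levels_kbps[level[t]]
        heapq.heappush(heap, (-dq / dc, t))
print(level, "remaining kbps:", budget)
```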

  • ZHANG Xin, WANG Guan-cheng, WU Yi-qian, CHEN Yi-fan, LI Tian-chi, ZHANG Yi-fan, XIONG Ying-fei
    ACTA ELECTRONICA SINICA. 2024, 52(4): 1155-1172. https://doi.org/10.12263/DZXB.20230973

    Program analysis plays a critical role in software development and maintenance. However, traditional logic-based program analysis methods exhibit significant limitations when dealing with modern, complex, large-scale, and dynamically rich software systems. The root cause of these limitations lies in the uncertainty present in software systems. To address this issue, researchers have proposed a series of new techniques for specific program analysis problems. These techniques combine probability information with traditional logic analysis to capture the uncertainty inherent in software systems. By summarizing and abstracting existing work in this area, this paper introduces the Bayesian program analysis framework. The core idea of this framework is to integrate program analysis with Bayesian statistical inference. It does so by modeling and updating probability distributions about the program to infer information about program behavior. Bayesian program analysis employs probabilistic logic programming to simultaneously handle both probability and logic information, providing a unified approach that encompasses various existing works. It can also be generalized to non-traditional static program analysis tasks, such as program fault localization and delta debugging. This paper provides a definition of the Bayesian program analysis framework, demonstrates its applications in program analysis and related fields, and outlines future directions for development.
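    A minimal instance of the framework's Bayesian flavor is fault localization: maintain a probability distribution over which statement is faulty and update it with each test outcome via Bayes' rule. The sketch below is a toy illustration; the statements, tests, and likelihood values are invented assumptions, not the paper's model.

```python
# Toy Bayesian fault localization: prior over "statement s is the fault",
# updated by test outcomes via Bayes' rule.
statements = ["s1", "s2", "s3", "s4"]
prior = {s: 1 / len(statements) for s in statements}

# Each test: (set of covered statements, passed?)
tests = [({"s1", "s2"}, False), ({"s2", "s3"}, False), ({"s1", "s4"}, True)]

P_FAIL_IF_COVERED = 0.8        # assumed likelihood parameters
P_FAIL_IF_NOT_COVERED = 0.01

posterior = dict(prior)
for covered, passed in tests:
    for s in statements:
        p_fail = P_FAIL_IF_COVERED if s in covered else P_FAIL_IF_NOT_COVERED
        likelihood = (1 - p_fail) if passed else p_fail
        posterior[s] *= likelihood
    z = sum(posterior.values())                 # renormalize
    posterior = {s: p / z for s, p in posterior.items()}

for s, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({s} is faulty) = {p:.3f}")        # s2 (covered by both failures) ranks first
```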

  • PING Bu, MENG Fan-bo, HUANG Jin
    ACTA ELECTRONICA SINICA. 2024, 52(4): 1173-1181. https://doi.org/10.12263/DZXB.20230968

    Microdroplet jetting has promising applications in printed electronics, where a single device can complete the integrated forming of antenna dielectric substrates and conductive patterns. The surface topography quality of printed functional devices has a significant impact on their electrical properties. To address the difficulty of controlling surface topography quality in microdroplet-jet 3D printing, this paper proposes a predictive-control-based topography compensation method for printed parts. First, based on the layer-by-layer stacking behavior of the droplets, a topography prediction model of the printed part is established; it uses matrix element updates to describe the layer-by-layer evolution of the part topography, so as to accurately predict topographic defects in multilayer printed parts such as edge collapse and large surface roughness. Then, a predictive controller built on this model efficiently compensates topographic defects by adjusting the print pattern of subsequent layers. Comparison experiments between open-loop printing and compensated printing verify the effectiveness of the method: the compensated printing method reduces the surface roughness of the printed part by 66.80% and the edge collapse by 43.22%, effectively compensating the surface morphology defects. Finally, a microstrip patch antenna is fabricated using the microdroplet-jet 3D printing process. The surface roughness of the dielectric substrate fabricated with the proposed compensated printing method is lower than that of the open-loop printed samples, ensuring a high-quality connection of the antenna's RF layer. The measured return loss is closer to the simulation results, demonstrating the significance of this study for printed electronics.
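    The layer-by-layer matrix update lends itself to a toy height-map demonstration: each droplet adds a small bump to a height matrix, and the compensation step deposits only where the predicted surface is below target. The droplet shape, grid, and part geometry below are all invented, not the paper's calibrated model.

```python
import numpy as np

# Toy height-map model of layer-by-layer droplet deposition with a simple
# under-fill compensation rule. All parameters are invented.
N, layer_h, sigma = 40, 1.0, 1.2
yy, xx = np.mgrid[0:N, 0:N]

def deposit(H, i, j, vol=1.0):
    """Add one droplet as a Gaussian bump centered at cell (i, j)."""
    H += vol * np.exp(-((xx - j) ** 2 + (yy - i) ** 2) / (2 * sigma ** 2))

H = np.zeros((N, N))                            # predicted part topography
target = np.zeros((N, N))
mask = (xx > 5) & (xx < 35) & (yy > 5) & (yy < 35)   # square part footprint

for layer in range(5):
    target[mask] += layer_h
    for i, j in zip(*np.nonzero(mask)):
        if H[i, j] < target[i, j]:              # compensation: print only
            deposit(H, i, j)                    # where the part is under-filled
    rough = (H - target)[mask].std()
    print(f"layer {layer}: surface roughness (std) = {rough:.3f}")
```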

  • ZHANG Tao, FEI Jia-xuan, WANG Qi, SHAO Zhi-peng, CAI Xing-pu
    ACTA ELECTRONICA SINICA. 2024, 52(4): 1205-1218. https://doi.org/10.12263/DZXB.20231001

    China's electric power infrastructure has developed into a highly informatized, automated, and intelligent cyber-physical system. The interaction of the cyber and physical sides not only significantly improves the efficiency and performance of the power supply but also introduces new network security threats. Cross-domain attacks that originate in the cyber domain and act on the physical domain can cause systemic breakdown of power infrastructure and lead to large-scale power outages. However, current isolated cyber-side or physical-side defense systems struggle to deal effectively with these cross-domain threats. This paper introduces the cyber-physical cross-domain attack threats currently faced by the power system, elaborates on the shortcomings of traditional defense methods against cross-domain attacks, proposes a cross-domain attack defense architecture based on cyber-physical collaboration, and designs defense methods from the perspectives of perception, identification, and blocking along the attack timeline. A worked example demonstrates that the proposed cyber-physical collaborative defense architecture can ensure the safe and stable operation of the power system.
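    As a toy illustration of the cyber-physical collaboration idea, the sketch below escalates an event only when a cyber-side alert and a physical-side measurement residual coincide within a time window, the cross-domain correlation that single-domain defenses miss; the events, thresholds, and correlation rule are invented and do not represent the paper's architecture.

```python
# Toy cyber-physical collaborative detection: correlate cyber alerts with
# physical measurement residuals in time. All events and thresholds invented.
cyber_alerts = [(10.2, "suspicious SCADA write"), (55.0, "port scan")]
physical_residuals = [(9.8, 0.4), (10.5, 2.9), (54.0, 0.3), (80.1, 3.1)]

WINDOW, THRESH = 1.0, 2.0                       # seconds, residual threshold

for t_c, what in cyber_alerts:
    corroborating = [(t_p, r) for t_p, r in physical_residuals
                     if abs(t_p - t_c) <= WINDOW and r > THRESH]
    if corroborating:
        print(f"[BLOCK] t={t_c}: '{what}' correlated with physical "
              f"residuals {corroborating}")
    else:
        print(f"[WATCH] t={t_c}: '{what}' has no physical-domain corroboration")
```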

  • ZHANG Xiu-she, HU Xiao-quan, YI Kai, WANG Xiao-jun
    ACTA ELECTRONICA SINICA. 2024, 52(4): 1219-1229. https://doi.org/10.12263/DZXB.20231051

    Element-level cooperation is the next stage of cooperative operations, and dynamic reconstruction of the kill chain is its typical feature. Since platform-level cooperation struggles to meet the demands of complex operational tasks, this paper proposes solution ideas and theoretical methods for element-level collaboration across heterogeneous platforms. First, it analyzes the two main difficulties in realizing element-level cooperation: the complexity of element-level cooperative relationships and the uncertainty of cooperative capability. Second, based on a feedback-driven reconfigurable architecture, it proposes, through operational network modeling, a dynamic reconstruction method for the elements of the kill chain operator and a quantitative evaluation method for the effectiveness of collaborative order parameters. Finally, the dynamic construction and reconstruction of the kill chain are demonstrated in a simulation scenario, which can guide the algorithm design and engineering practice of element collaboration.
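    To make kill-chain reconstruction concrete, the sketch below models an operational network as a directed graph and re-searches a sensor-to-shooter path when a node drops out; the nodes, links, and failure are invented for illustration and stand in for the paper's operator-level modeling.

```python
from collections import deque

# Toy operational-network model: the kill chain is a sensor-to-shooter path
# through decision (C2) nodes. If a node is lost, the chain is reconstructed
# by re-searching the surviving network (BFS). All nodes/links are invented.
links = {
    "sensor1": ["c2_A", "c2_B"], "sensor2": ["c2_B"],
    "c2_A": ["shooter1"], "c2_B": ["shooter1", "shooter2"],
    "shooter1": [], "shooter2": [],
}

def find_chain(src, dst, alive):
    """Breadth-first search for a kill chain restricted to alive nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt in alive and nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

alive = set(links)
print("initial chain:      ", find_chain("sensor1", "shooter1", alive))
alive.discard("c2_A")                           # node lost in action
print("reconstructed chain:", find_chain("sensor1", "shooter1", alive))
```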