
Most Downloaded


  • WU Xiang-yu, SHEN Ying, TANG You-xi
    Acta Electronica Sinica. 2015, 43(1): 1-6. https://doi.org/10.3969/j.issn.0372-2112.2015.01.001

    Measurements and characterizations of the indoor self-interference channel of CCFD (co-time co-frequency full-duplex) systems are rarely reported. An extensive self-interference channel measurement campaign was performed at 2.6 GHz in a typical indoor environment using a channel sounder based on a vector network analyzer. By analyzing the measurement data, empirical channel characteristics such as the path loss exponent and RMS (root-mean-square) delay spread were extracted. The results show that the path loss exponent and RMS delay statistics differ with the transmitter-receiver antenna separation. When the separation is larger than 1 m, the path loss exponent is 1.86 and the RMS delay statistics follow a lognormal distribution. When the separation is less than 1 m, the path loss exponent is 1.52, and the RMS delay statistics at different antenna separations follow lognormal distributions whose mean and variance are linear functions of the antenna distance (see the path loss sketch below).
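
    For context on the reported exponents, a minimal sketch of the standard log-distance path loss model; the reference loss PL(d0) and reference distance d0 below are illustrative assumptions, not values from the paper.

    ```python
    import math

    def path_loss_db(d_m, n, pl0_db=40.0, d0_m=0.1):
        """Log-distance model: PL(d) = PL(d0) + 10*n*log10(d/d0).
        pl0_db and d0_m are illustrative placeholders, not values from the paper."""
        return pl0_db + 10.0 * n * math.log10(d_m / d0_m)

    # Exponents reported in the abstract: n = 1.52 below 1 m, n = 1.86 above 1 m.
    for d in (0.3, 0.8, 1.5, 3.0):
        n = 1.52 if d < 1.0 else 1.86
        print(f"d = {d:.1f} m -> PL ~ {path_loss_db(d, n):.1f} dB")
    ```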

  • LI Xiao-xu, LI Rui-fan, FENG Fang-xiang, CAO Jie, WANG Xiao-jie
    Acta Electronica Sinica. 2014, 42(10): 2040-2044. https://doi.org/10.3969/j.issn.0372-2112.2014.10.026

    In this paper, we focus on classification of multi-view data. Considering that ensemble methods can combine weak classifiers into a strong classifier, and that topic models can learn latent representations from complex data, we introduce the ensemble idea into topic modeling so that predictive latent representations can be obtained and a multi-view classifier can be learned. We propose the multi-view supervised latent Dirichlet allocation (multi-view sLDA) model by combining the latent Dirichlet allocation model with the mixture-of-softmax model, an ensemble classification model. Moreover, we derive a parameter estimation algorithm for the proposed model based on a variational expectation maximization (EM) procedure. Experimental results on two real datasets show the effectiveness of the proposed model.

  • Review
    QIAN Zhi-hong;WANG Yi-jun
    Acta Electronica Sinica. 2012, 40(5): 1023-1029. https://doi.org/10.3969/j.issn.0372-2112.2012.05.026
    Based on an analysis of two basic concepts of the IoT, an architecture of the IoT is proposed, which includes underlying network distribution, convergence gateway access, inter-connected network integration and terminal user application. Within this architecture, a protocol structure of the IoT is given, consisting of network protocol layers, a network control platform and an application terminal platform, and the key technologies for the IoT are discussed in terms of hardware and software. Six development conceptions for the future IoT are presented, based on a summary of the existing problems of the IoT in standards, technologies, security and application.
  • Paper
    SHI Guang-ming;LIU Dan-hua;GAO Da-hua;LIU Zhe;LIN Jie;WANG Liang-jun
    Acta Electronica Sinica. 2009, 37(5): 1070-1081.
    Sampling is the bridge between an analog source signal and a digital signal. With the rapid progress of information technologies, the demand for information is increasing dramatically, so existing systems find it very difficult to meet the challenges of high-speed sampling and large-volume data transmission and storage. How to acquire the information in a signal efficiently is an urgent problem in the electronic information field. In recent years, an emerging theory of signal acquisition, compressed sensing (CS), has provided a golden opportunity for solving this problem. This paper reviews the theoretical framework and the key technical problems of compressed sensing and introduces the latest developments in signal sparse representation, measurement matrix design and reconstruction algorithms (the basic measurement model is sketched below). The paper also reviews several open problems in CS theory and discusses the existing difficulties. Finally, the application fields of compressed sensing are introduced.
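
    To make the sampling model concrete, a minimal sketch of the compressed-sensing measurement y = Phi x with a synthetic sparse signal and a random Gaussian measurement matrix; all dimensions are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 8            # signal length, number of measurements, sparsity

    # Synthetic k-sparse signal x.
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)

    # Random Gaussian measurement matrix Phi (m << n) and sub-Nyquist measurements y.
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    y = Phi @ x

    print(y.shape)                  # (64,) -- far fewer samples than the signal length
    ```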
  • Review
    SU Song-zhi;LI Shao-zi;CHEN Shu-yuan;CAI Guo-rong;WU Yun-dong
    Acta Electronica Sinica. 2012, 40(4): 814-820. https://doi.org/10.3969/j.issn.0372-2112.2012.04.031
    Pedestrian detection is an active and challenging area of research in computer vision. This study conducts a detailed survey of state-of-the-art pedestrian detection methods from 2005 to 2011, focusing on the two most important problems: feature extraction, and classification and localization. We divide these methods into categories: pedestrian features are divided into three subcategories (low-level features, learning-based features and hybrid features), while classification and localization is divided into two subcategories (sliding window and beyond sliding window). According to this taxonomy, the pros and cons of the different approaches are discussed. Finally, some experience in how to construct a robust pedestrian detector is presented and future research trends are proposed.
  • JIAO Li-cheng;YANG Shu-yuan;LIU Fang;HOU Biao
    Acta Electronica Sinica. 2011, 39(7): 1651-1662.

    Compressive sensing (CS) is a newly developed theoretical framework for information acquisition and processing, based on matrix analysis, statistical probability theory, topological geometry, optimization, operations research, functional analysis and so on. High-dimensional signals can be recovered from low-dimensional, sub-Nyquist sampling data by exploiting the compressibility of signals. It not only inspires us to re-examine linear problems, but also enriches the optimization approaches for signal recovery and promotes the combination of mathematics with engineering applications. Research on compressive sensing has developed from the earlier stages of concept understanding, numerical simulation, principle verification and preliminary system design to deeper research on theory, development and the application of practical systems. In this paper, we introduce the basic idea of compressive sensing, its development history, and current and future challenges.

  • Paper
    REN Hai-bing;ZHU Yuan-xin;XU Guang-you;LIN Xue-yin;ZHANG Xiao-ping
    Acta Electronica Sinica. 2000, 28(2): 118-121.
    Hand gestures are a natural and intuitive communication mode in human dialog. The ability of computers to visually recognize hand gestures is essential for future human-computer interaction. However, vision-based recognition of hand gestures is an extremely challenging interdisciplinary problem for three reasons: (1) hand gestures are rich in diversity, have multiple meanings, and vary in space and time; (2) human hands are complex non-rigid objects; (3) computer vision itself is an ill-posed problem. This paper presents a survey of visual recognition of hand gestures from the viewpoints of modeling, analysis and recognition techniques.
  • PAN Jing, XU Feng, WANG Yuan, ZHANG Lin, LÜ Jian
    Acta Electronica Sinica. 2006, 34(S1): 2571-2574.
    Most existing trust management systems have two main problems: (1) they can hardly provide security guarantees for web resources since their implementations do not comply with Web Service standards; (2) the central trust engine runs at low efficiency and cannot meet the demands of timely security policies in a constantly changing environment. We use the Java language to design and implement a Web Service-oriented trust engine based on mobile agents to satisfy security demands in an open and dynamic web environment. The main task of the trust engine is to verify whether a web resource can be accessed by a given client. We use mobile agents to perform bidirectional trust chain discovery to improve the efficiency of the trust engine.
  • SUN Dong-mei, QIU Zheng-ding
    Acta Electronica Sinica. 2001, 29(S1): 1744-1748.
    Accurate automatic personal identification is critical to our highly inter-connected information society. Biometrics refers to the automatic identification of an individual based on her/his physiological or behavioral traits. Though biometrics is not an identification panacea, it is beginning to provide very powerful tools for problems requiring positive identification. This paper attempts to summarize important research issues in biometrics.
  • YANG Li-cai, LI Bai-min, LI Guang-lin, JIA Lei
    Acta Electronica Sinica. 2005, 33(7): 1234-1241.
    Brain-computer interfaces provide a direct communication and control channel for sending messages and instructions from the brain to external computers or other electronic devices. Using this non-muscular channel, subjects with severe neuromuscular dysfunction can directly express their thoughts and manipulate external devices without using speech or movement. This can greatly enhance the ability of these subjects to manage external events and improve their quality of life. Brain-computer interface technology is an interdisciplinary technology integrating neurology, signal acquisition, signal processing, pattern recognition and other relevant techniques. In the present paper, we review the development, rationale and methodology of BCI technology and the major limitations in BCI implementation and applications, and then discuss some vital aspects that may be addressed in future investigations.
  • CHEN Hao-wen, LI Xiang, ZHUANG Zhao-wen
    Accepted: 2023-01-18
    Multiple-input multiple-output (MIMO) radar is a new radar system proposed at the beginning of the 21st century, which has attracted much attention from researchers and institutions all over the world. In this paper, MIMO radar theory is first summarized based on the current literature. Then, the potential capabilities in system configuration, signal design, target detection, parameter estimation and high-resolution performance are introduced, and the factors restricting practical application are pointed out. Finally, further work and the technical difficulties in MIMO radar are predicted.
  • PAN Zong-xu, YU Jing, XIAO Chuang-bai, SUN Wei-dong
    Acta Electronica Sinica. 2015, 43(2): 209-216. https://doi.org/10.3969/j.issn.0372-2112.2015.02.001

    Adaptive dictionary learning uses the low-resolution image itself as training samples so that similar patches have a sparse representation over the learned dictionary, allowing extra information to be exploited from structural self-similarity. In this paper, we propose a single-image super-resolution method based on adaptive multi-dictionary learning. To exploit extra information from both the low-resolution image itself and an image database, the proposed method incorporates the idea of global dictionary learning, in which the image database is used to obtain extra information, into the process of adaptive dictionary learning. In the proposed method, all patches in the image pyramid of the low-resolution image are clustered into several groups, each patch in the database that satisfies a certain condition is then classified into one of these groups under the supervision of the clustering results, and multi-dictionary learning is used to learn a corresponding dictionary for each group. Experimental results demonstrate that our method achieves better results than the ScSR, SISR, NLIBP, CSSS and mSSIM methods.

  • Paper
    GAO Ying;XIE Sheng-li
    Acta Electronica Sinica. 2001, 29(8): 1094-1097.
    This paper discusses several variable step-size LMS adaptive filtering algorithms and establishes a new nonlinear functional relationship between μ and e(n). The relationship is not only simpler than the sigmoid function, but also changes slowly when e(n) is near zero, and is therefore superior to the sigmoid function for step-size adaptation in the steady state. On the basis of this relationship, a new variable step-size LMS adaptive filtering algorithm is presented and analyzed for various values of α and β (a sketch of this class of algorithm is given below). The algorithm has lower computational complexity than earlier algorithms as well as good convergence properties. Computer simulation results confirm the theoretical analysis and show that the algorithm performs better than earlier algorithms.
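
    A rough sketch of a variable step-size LMS filter of the kind discussed above, using mu(n) = beta*(1 - exp(-alpha*e(n)^2)) as one common non-sigmoid choice; the paper's exact function, alpha and beta may differ, and the toy channel below is made up.

    ```python
    import numpy as np

    def vss_lms(x, d, order=8, alpha=50.0, beta=0.05):
        """Variable step-size LMS with mu(n) = beta * (1 - exp(-alpha * e(n)^2)),
        one common non-sigmoid mu-e(n) relation (the paper's exact function may differ)."""
        w = np.zeros(order)
        e = np.zeros(len(x))
        for n in range(order - 1, len(x)):
            u = x[n - order + 1:n + 1][::-1]   # current and past input samples
            e[n] = d[n] - w @ u
            mu = beta * (1.0 - np.exp(-alpha * e[n] ** 2))
            w = w + 2.0 * mu * e[n] * u        # LMS weight update
        return w, e

    # Toy system identification: estimate an unknown FIR channel from noisy observations.
    rng = np.random.default_rng(1)
    h = np.array([0.7, -0.4, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])
    x = rng.standard_normal(4000)
    d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    w, _ = vss_lms(x, d)
    print(np.round(w, 2))                      # should be close to h
    ```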
  • Review
    XIAO Zhu;WANG Yong-chao;TIAN Bin;YU Quan;YI Ke-chu
    Acta Electronica Sinica. 2011, 39(1): 133-141.
    Ultra-wideband (UWB) can achieve better ranging and localization precision than current wireless systems. This paper studies the research and application of UWB localization, covering UWB localization methods such as TOA/TDOA (time/time difference of arrival), multipath delay estimation theory, non-line-of-sight localization, cooperative localization, and localization with MB-OFDM (orthogonal frequency division multiplexing) and other ultra-wideband signals. Their development and current status are discussed in detail (a TOA positioning sketch follows below). The existing problems and several valuable future research directions are summarized finally.
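
    As an illustration of TOA-based positioning, a minimal linearized least-squares sketch with made-up anchor coordinates and ranging noise; it is not a method from the paper.

    ```python
    import numpy as np

    def toa_position(anchors, ranges):
        """Linearized least-squares TOA positioning.
        anchors: (N, 2) known anchor coordinates; ranges: (N,) measured distances."""
        a0, r0 = anchors[0], ranges[0]
        A = 2.0 * (anchors[1:] - a0)
        b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
             + r0 ** 2 - ranges[1:] ** 2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Toy example with made-up anchors, true position (3, 4) and small ranging noise.
    rng = np.random.default_rng(0)
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    true_pos = np.array([3.0, 4.0])
    ranges = np.linalg.norm(anchors - true_pos, axis=1) + 0.05 * rng.standard_normal(4)
    print(np.round(toa_position(anchors, ranges), 2))   # ~[3. 4.]
    ```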
  • Academic Paper
    XU Xin-zheng;DING Shi-fei;SHI Zhong-zhi;JIA Wei-kuan
    Acta Electronica Sinica. 2010, 38(2A): 76-82.

    Image segmentation is a classic problem in computer vision and has become a hot topic in the field of image understanding. The research status and recent progress in image segmentation are summarized in this paper. Firstly, the traditional methods of image segmentation are briefly introduced. Then, specific theories for image segmentation, including morphology, fuzzy sets, neural networks, support vector machines, immune algorithms, graph theory and granular computing, are presented in detail. Furthermore, several new representative papers applying each theory are analyzed and discussed. Finally, the development trends of image segmentation methods are discussed.

  • ZHANG Xian-da, BAO Zheng
    Acta Electronica Sinica. 2001, 29(S1): 1766-1771.
    A typical problem in array processing and data analysis is to recover unobserved source signals from their mixtures. Blind source separation (BSS) is a powerful methodology for solving this problem. In recent years, BSS has received considerable attention from the signal processing and neural network communities. This paper presents a survey and review of BSS, focusing on independent component analysis (ICA) and principal component analysis (PCA); a small ICA example is sketched below.
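
    A small ICA example using scikit-learn's FastICA on two synthetic mixtures; the sources and mixing matrix are made up for illustration and are not from the paper.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8, 2000)

    # Two independent sources and an unknown 2x2 mixing matrix.
    s = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t))]
    A = np.array([[1.0, 0.5], [0.7, 1.2]])
    x = s @ A.T                                # observed mixtures, shape (n_samples, 2)

    ica = FastICA(n_components=2, random_state=0)
    s_hat = ica.fit_transform(x)               # recovered sources (up to order and scale)
    print(s_hat.shape)
    ```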
  • Academic Paper
    WEN Xue-zhi;FANG Wei;ZHENG Yu-hui
    Acta Electronica Sinica. 2011, 39(5): 1121-1126.
    An algorithm based on Haar-like features and an AdaBoost classifier is proposed for vehicle recognition, to address the poor recognition performance of SVM (support vector machine) classifiers and cascaded AdaBoost classifiers as well as the long training time of traditional AdaBoost. First, extended Haar-like features are extracted using the integral image method (sketched below); then a small number of critical features is selected from a very large set of Haar-like features while training AdaBoost; finally, two-class classification is performed using the AdaBoost classifier and the selected features. Experimental results demonstrate that the proposed approach outperforms traditional methods in both recognition performance and time consumption, and shows a promising perspective.
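
    As an illustration of the integral image technique mentioned above, a minimal sketch computing a two-rectangle Haar-like feature on a toy window; the window contents, its size and the feature geometry are placeholders, not the paper's configuration.

    ```python
    import numpy as np

    def integral_image(img):
        """Summed-area table with a leading row/column of zeros for easy indexing."""
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
        ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
        return ii

    def rect_sum(ii, r, c, h, w):
        """Sum of the h x w rectangle whose top-left pixel is (r, c)."""
        return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

    def haar_two_rect_horizontal(ii, r, c, h, w):
        """Two-rectangle Haar-like feature: left half minus right half (w must be even)."""
        half = w // 2
        return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)

    # Toy 24x24 grayscale window with random intensities (stand-in for a real image patch).
    rng = np.random.default_rng(0)
    window = rng.integers(0, 256, size=(24, 24))
    ii = integral_image(window)
    print(haar_two_rect_horizontal(ii, r=4, c=2, h=12, w=16))
    ```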
  • Paper
    ZHOU Jie;LU Chun-yu;ZHANG Chang-shui;LI Yan-da
    Acta Electronica Sinica. 2000, 28(4): 102-106.
    Automatic human face recognition is an attractive topic in pattern recognition and image processing. In this paper we give a survey of automatic human face recognition, mainly covering the latest progress.
  • LIN Xiang-hong, WANG Xiang-wen, ZHANG Ning, MA Hui-fang
    Acta Electronica Sinica. 2015, 43(3): 577-586. https://doi.org/10.3969/j.issn.0372-2112.2015.03.024

    Spiking neural networks are shown to be suitable tools for the processing of spatio-temporal information. However, due to their intricately discontinuous and implicit nonlinear mechanisms, formulating efficient supervised learning algorithms for spiking neural networks is difficult, and is an important problem in this research area. In this paper, we introduce the general framework of supervised learning algorithms for spiking neural networks and analyze their performance, including spike train learning ability, offline and online processing ability, the locality of the learning mechanism and the applicability to different network structures. Furthermore, we survey the advances in research on supervised learning algorithms, which can be divided into three categories: gradient descent rules, synaptic plasticity rules, and spike train convolution rules. Finally, we discuss the advantages and disadvantages of these algorithms, and consider open problems in current research and some future research directions in this area.

  • Academic Paper
    XU Bao-guo;SONG Ai-guo;FEI Shu-min
    Acta Electronica Sinica. 2011, 39(5): 1025-1030.
    In the study of brain-computer interfaces (BCI), a novel method of extracting electroencephalography (EEG) features based on the discrete wavelet transform (DWT) and an autoregressive (AR) model is proposed. First, the EEG signal is decomposed into three levels with a Daubechies wavelet and statistics of the wavelet coefficients are computed. In addition, sixth-order AR coefficients of the EEG signal are estimated using Burg's algorithm. The combined features are then used as the input vector for neural network (NN), support vector machine (SVM) and linear discriminant analysis (LDA) classifiers (the feature extraction step is sketched below). The performance of this feature extraction method was tested on the data set from the BCI 2003 competition. The recognition rate was compared with the best result of the competition and the classification results showed the effectiveness of the algorithm. Moreover, applying this pattern recognition algorithm to an online EEG-based robot control system, an average accuracy of 89.5% was obtained. This method provides a new idea for the study of online BCI systems.
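
    A rough sketch of the feature extraction pipeline described above, using PyWavelets for the DWT statistics; the AR coefficients are estimated here with the Yule-Walker equations as a simple stand-in for the Burg estimator used in the paper, and the random "EEG" segment is a placeholder.

    ```python
    import numpy as np
    import pywt
    from scipy.linalg import solve_toeplitz

    def dwt_stats(signal, wavelet="db4", level=3):
        """Mean, standard deviation and energy of the coefficients at each DWT level."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        return np.array([[c.mean(), c.std(), np.sum(c ** 2)] for c in coeffs]).ravel()

    def ar_coeffs_yw(signal, order=6):
        """AR coefficients via the Yule-Walker equations (a simple stand-in for the
        Burg estimator used in the paper)."""
        x = signal - signal.mean()
        r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order] / len(x)
        return solve_toeplitz(r[:order], r[1:order + 1])

    # Toy single-channel "EEG" segment (random stand-in for real data).
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal(512)
    features = np.concatenate([dwt_stats(eeg), ar_coeffs_yw(eeg)])
    print(features.shape)   # combined feature vector fed to an NN/SVM/LDA classifier
    ```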
  • Paper
    Acta Electronica Sinica. 2004, 32(3): 416-420.
    A new adaptive mutation particle swarm optimizer (AMPSO), based on the variance of the population's fitness, is presented. During the run, the mutation probability for the current best particle is determined by two factors: the variance of the population's fitness and the current optimal solution. The mutation greatly improves the ability of the particle swarm optimization algorithm (PSO) to break away from local optima (a sketch is given below). Experimental results show that the new algorithm not only has a clear advantage in convergence over the genetic algorithm and PSO, but can also effectively avoid premature convergence.
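
    A minimal sketch of a PSO with variance-triggered mutation of the global best, in the spirit of the abstract; the inertia and acceleration constants, variance threshold and mutation rule are illustrative assumptions, not the paper's exact formulas.

    ```python
    import numpy as np

    def ampso(f, dim=2, n_particles=30, iters=200, pm_hi=0.3, seed=0):
        """PSO that mutates the global best with higher probability when the variance
        of the population fitness collapses (a sign of premature convergence)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
        gbest = pbest[np.argmin(pbest_f)].copy()
        gbest_f = pbest_f.min()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (gbest - x)
            x = x + v
            fx = np.apply_along_axis(f, 1, x)
            improved = fx < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], fx[improved]
            if pbest_f.min() < gbest_f:
                gbest, gbest_f = pbest[np.argmin(pbest_f)].copy(), pbest_f.min()
            # Adaptive mutation: perturb gbest when the fitness variance is small.
            pm = pm_hi if np.var(fx) < 1e-3 else 0.0
            if rng.random() < pm:
                cand = gbest + rng.normal(scale=0.5, size=dim)
                if f(cand) < gbest_f:
                    gbest, gbest_f = cand, f(cand)
        return gbest, gbest_f

    sphere = lambda z: float(np.sum(z ** 2))
    print(ampso(sphere))     # should approach (0, 0) with fitness near 0
    ```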
  • YU Neng-hai, HAO Zhuo, XU Jia-jia, ZHANG Wei-ming, ZHANG Chi
    Acta Electronica Sinica. 2013, 41(2): 371-381. https://doi.org/10.3969/j.issn.0372-2112.2013.02.026
    With the development of cloud computing in academia and industry, it is inevitable that many security problems arise. This paper summarizes the security requirements of cloud computing, which cover not only traditional requirements such as confidentiality, data integrity, access control and identity authentication, but also new requirements concerning trustworthiness, configuration and virtual machines. We draw conclusions about the security situation of two typical cloud computing products, Amazon Web Services and Windows Azure, and elaborate two attack mechanisms against cloud computing: denial-of-service attacks and side-channel attacks. Based on the security requirements and the attacks against cloud computing, we systematically summarize the current security protection mechanisms and compare them.
  • LIN Hui-min, ZHANG Wen-hui
    Acta Electronica Sinica. 2002, 30(S1): 1907-1912.
    As computer hardware and software systems become more and more complex, how to assure the correctness and reliability of such systems becomes an urgent problem. Among the theories proposed as solutions to this problem, model checking has become a very attractive and appealing approach because of its simplicity and high level of automation. Research on model checking covers the following subjects: modal/temporal logics, model checking algorithms, the efficiency of model checking with respect to time and space (especially space complexity), and the development of model checking tools. These aspects are closely related. The complexity of model checking algorithms varies greatly for different modal/temporal logics, and optimizations are often targeted at certain types of logic formulas. Some new achievements and research directions are also discussed.
  • Paper
    ZHOU Le-zhu;LI Dou;GUO Wen-jia
    Acta Electronica Sinica. 2001, 29(6): 824-828.
    Multiple-beam antennas have found wide use in a variety of applications involving satellite communications. This paper presents a summary of several structural schemes of multiple-beam antennas for satellite communications, and a comparison of their advantages and drawbacks is also made. Some topics for further research are pointed out.
  • Research Communication
    SONG Jin-ze;DAI Bin;SHAN En-zhong;HE Han-gen
    Acta Electronica Sinica. 2010, 38(2A): 225-228.
    Aiming at the application of ALVs (automatic land vehicles), an improved RRT (rapidly-exploring random tree) path planning algorithm is proposed. The algorithm combines the nonholonomic constraints of the vehicle with double extended RRTs, which not only improves the search efficiency but also guarantees the feasibility of the path. The path points are taken as reference points for B-spline basis functions to approximate and create a new smooth route suitable for the vehicle to follow. Simulation results and field tests verify the effectiveness of the proposed method.
  • CHEN Huo-wang, WANG Ji, Dong Wei
    Acta Electronica Sinica. 2003, 31(S1): 1933-1938.
    As software plays an increasingly important role in today's information society, software reliability, safety and security are strongly required. The state of the art in engineering technologies for such high-confidence software and the challenges they face are described in this paper. Based on formal methods, the development trends of high-confidence software and its key technical points are also discussed.
  • Paper
    LIU Ming-liang;CAI Yong-quan;RAO Min;LIU Di
    Acta Electronica Sinica. 2000, 28(5): 111-112.
    The theory, method and an example of deconvolution via convolution operations are briefly introduced first. The ill-posed nature of this technique is then discussed and a processing method is given. Finally, the estimation error and an example of this technique are given.
  • WANG Qiang, LI Jia, SHEN Yi
    Acta Electronica Sinica. 2013, 41(10): 2041-2050. https://doi.org/10.3969/j.issn.0372-2112.2013.10.027
    The measurement matrix, whose performance affects the compression and reconstruction of the original signal, plays a key role in compressive sensing. Most existing measurement matrices are random, which has shortcomings in practical applications, such as large storage requirements, low efficiency and difficulty of hardware implementation. Therefore, constructing deterministic measurement matrices is of great practical significance for the promotion and application of compressive sensing theory. In this paper, the existing construction algorithms for deterministic measurement matrices are reviewed, introduced and classified in detail. Finally, the performance of all the algorithms is summarized in terms of common indicators.
  • Paper
    CHAO Rui;ZHANG Ke;LI Yan-jun
    Acta Electronica Sinica. 2004, 32(5): 750-753.
    A multisensor image fusion algorithm using the wavelet transform is proposed in this paper. The principles for choosing high-frequency and low-frequency coefficients are discussed respectively. For the high-frequency coefficients, we select those with maximal absolute values and verify the consistency of these coefficients. For the low-frequency coefficients, we design three approaches. The first is the averaging method, which decreases the contrast of the images to some degree. The second is the combination of averaging and selection proposed by Burt, which uses averaging and selection alternately according to the match degree of the two images. The third is based on the edges of the images, selecting the pixels that are likely to lie on edges in the original images. The algorithm is used to fuse several multi-focus images (a simplified fusion rule is sketched below). The results show that the algorithm preserves the useful information from the source images and that all targets in the fused image are very clear, with the third low-frequency rule giving the best fusion result.
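
    A simplified sketch of the wavelet fusion rule described above (average the low-frequency band, keep the larger-magnitude detail coefficients), using PyWavelets; the wavelet, decomposition level and random input images are placeholders, and the consistency check and the edge-based rule are omitted.

    ```python
    import numpy as np
    import pywt

    def fuse_wavelet(img_a, img_b, wavelet="db2", level=2):
        """Fuse two registered grayscale images: average the low-frequency (approximation)
        coefficients and keep the detail coefficient with the larger absolute value."""
        ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
        cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                    # low-frequency: simple average
        for da, db in zip(ca[1:], cb[1:]):                 # detail sub-bands (H, V, D)
            fused.append(tuple(np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                               for ha, hb in zip(da, db)))
        return pywt.waverec2(fused, wavelet)

    # Toy multi-focus pair (random stand-ins for two registered source images).
    rng = np.random.default_rng(0)
    a = rng.random((128, 128))
    b = rng.random((128, 128))
    print(fuse_wavelet(a, b).shape)
    ```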
  • Review
    YANG Hai-rong;ZHANG Cheng;DING Da-wei;WEI Sui
    Acta Electronica Sinica. 2011, 39(1): 142-148.
    Compressive sensing reconstructs a signal with high probability from non-adaptive measurements taken at a rate well below the Nyquist rate, by means of optimization methods. In this paper, we introduce the basic theory of compressed sensing and the main reconstruction algorithms, including iterative algorithms as well as our improved MBOOMP algorithm. Simulations on random signals composed of 0s and 1s are used to compare their performance (a basic OMP sketch is given below). The results show that our algorithm performs better than the typical OMP algorithm.
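
    For context, a minimal Orthogonal Matching Pursuit (OMP) sketch on a 0/1 sparse signal, in the spirit of the abstract's experiment; it is plain OMP, not the improved MBOOMP algorithm, and all sizes are illustrative.

    ```python
    import numpy as np

    def omp(Phi, y, k):
        """Orthogonal Matching Pursuit: greedily pick the column most correlated with
        the residual, then re-fit on the selected support by least squares."""
        residual, support = y.copy(), []
        x_hat = np.zeros(Phi.shape[1])
        for _ in range(k):
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x_hat[support] = coef
        return x_hat

    # 0/1 sparse test signal similar in spirit to the abstract's experiment.
    rng = np.random.default_rng(0)
    n, m, k = 128, 48, 5
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = 1.0
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    x_rec = omp(Phi, Phi @ x, k)
    print(np.allclose(x_rec, x, atol=1e-6))
    ```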
  • SUN Xue-hong, LI Qiang, PANG Dan-xu, ZENG Zhi-min
    Acta Electronica Sinica. 2015, 43(11): 2305-2314. https://doi.org/10.3969/j.issn.0372-2112.2015.11.025

    With the rapid development of wireless communication technology, spectrum efficiency and system capacity have approached the Shannon limit. As a new technology, orbital angular momentum (OAM) has the attractive properties of high spectrum utilization and anti-interference, and has aroused widespread attention from academics at home and abroad. Firstly, this paper reviews the applications of OAM and the basic principles of wireless communication systems using it. Then, it focuses on the research progress of OAM in related fields, with an in-depth analysis of the key technologies for generating OAM and a summary of existing methods for receiving OAM. Finally, it looks to the future and proposes some prominent issues to be solved in wireless communication research and applications of OAM, including the generation and reception of electromagnetic vortex waves carrying OAM, the suppression of mutual interference between OAM vortex waves in different modes, OAM coding, and the separation and detection of vortex waves carrying OAM in different modes.

  • XIE Hui, HUANG Zhi-tao, WANG Feng-hua
    Acta Electronica Sinica. 2013, 41(6): 1166-1176. https://doi.org/10.3969/j.issn.0372-2112.2013.06.019
    Blind recognition of channel coding plays an important role in the field of non-cooperative signal processing, which has been extended from the signal level to the information level. It is widely used in communication intelligence, information interception and information countermeasures. Firstly, the recognition algorithms for convolutional codes, BCH codes, RS codes, Turbo codes and scrambling codes, which are commonly used in modern digital communication systems, are summarized and classified. Then the theory behind the algorithms is described, and their computational complexity and performance in noisy environments are analyzed. Finally, the future of blind recognition of channel coding is discussed based on the shortcomings of current algorithms and practical needs.
  • Paper
    NIU Xia-mu;JIAO Yu-hua
    Acta Electronica Sinica. 2008, 36(7): 1405-1411.
    Perceptual hashing is a class of one-way mappings from multimedia presentations to perceptual hash values in terms of their perceptual content. It features perceptual robustness and security. Perceptual hashing can be applied to multimedia content identification, retrieval, authentication, etc. On the basis of the human perceptual model, a definition of perceptual hashing is proposed in this paper, together with its properties and a general description. Typical algorithms, application scenarios and benchmarks are reviewed (a toy example is sketched below). The prospects of perceptual hashing algorithms and techniques are also discussed.
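
    As a toy illustration of the idea, a minimal average-hash (aHash) sketch for images: perceptually similar inputs map to hash values with a small Hamming distance. This is one simple member of the perceptual hashing family, not an algorithm from the paper, and the file paths are placeholders.

    ```python
    import numpy as np
    from PIL import Image

    def average_hash(path, hash_size=8):
        """Average hash: downscale, convert to grayscale, threshold each pixel at the mean.
        Perceptually similar images yield hashes with a small Hamming distance."""
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = np.asarray(img, dtype=float)
        return (pixels > pixels.mean()).flatten()

    def hamming(h1, h2):
        return int(np.count_nonzero(h1 != h2))

    # Usage (paths are placeholders): a small distance indicates the same perceptual content.
    # h1, h2 = average_hash("original.jpg"), average_hash("recompressed.jpg")
    # print(hamming(h1, h2))
    ```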
  • QIAO Shao-jie, HAN Nan, ZHU Xin-wen, SHU Hong-ping, ZHENG Jiao-ling, YUAN Chang-an
    Acta Electronica Sinica. 2018, 46(2): 418-423. https://doi.org/10.3969/j.issn.0372-2112.2018.02.022
    Traditional fitting-based trajectory prediction (TP) algorithms cannot meet the requirements of high accuracy and real-time prediction. A dynamic Kalman filter based TP approach is proposed, which performs state estimation of the dynamic behavior of moving objects and updates the state estimate from the previous and current observations in order to infer the next location of a moving object (a minimal filter is sketched below). Extensive experiments conducted on real moving-object datasets demonstrate that the average prediction error (the root mean square error between the predicted and actual locations) of the Kalman filter based TP algorithm is around 12.5 meters on the GeoLife datasets. The prediction error is reduced by about 555.4 meters compared to fitting-based TP algorithms, and the prediction accuracy is increased by 7.1% on the T-Drive datasets. The dynamic Kalman filter based TP approach addresses the problem of low prediction accuracy while guaranteeing efficient time performance.
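
    A minimal sketch of next-location prediction with a constant-velocity Kalman filter, in the spirit of the approach described above; the state model, noise levels and the synthetic track are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def kalman_predict_next(track, dt=1.0, q=0.5, r=5.0):
        """Constant-velocity Kalman filter over an observed (x, y) track; returns the
        predicted next location. Noise levels q, r are illustrative placeholders."""
        F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
        H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        Q, R = q * np.eye(4), r * np.eye(2)
        s = np.array([track[0][0], track[0][1], 0.0, 0.0])    # state: [x, y, vx, vy]
        P = 100.0 * np.eye(4)
        for z in track[1:]:
            s, P = F @ s, F @ P @ F.T + Q                     # predict
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
            s = s + K @ (np.asarray(z, float) - H @ s)        # update with observation
            P = (np.eye(4) - K @ H) @ P
        return (F @ s)[:2]                                    # one-step-ahead location

    # Toy GPS-like track moving roughly 3 m east and 2 m north per step.
    rng = np.random.default_rng(0)
    truth = np.cumsum(np.tile([3.0, 2.0], (20, 1)), axis=0)
    observed = truth + rng.normal(scale=1.0, size=truth.shape)
    print(np.round(kalman_predict_next(observed), 1))         # close to truth[-1] + [3, 2]
    ```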
  • Review
    LI Jian-jiang;CUI Jian;WANG Dan;YAN Lin;HUANG Yi-shuang
    Acta Electronica Sinica. 2011, 39(11): 2635-2642.
    Through well-defined interfaces and a runtime support library, the MapReduce parallel programming model can automatically execute large-scale computing tasks in parallel, hide the underlying implementation details, and reduce the difficulty of parallel programming. This paper reviews domestic and overseas research on MapReduce, and describes and analyzes the characteristics and shortcomings of typical research achievements on MapReduce at home and abroad. It then provides an in-depth analysis of the key technologies of MapReduce, including model optimization, model implementation on different platforms, task scheduling, load balancing and fault tolerance (the basic programming model is sketched below). Finally, future trends of MapReduce are discussed.
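
    For readers new to the model, a toy word-count sketch of the map/shuffle/reduce flow that a MapReduce runtime automates; it is purely illustrative and not tied to any particular MapReduce implementation.

    ```python
    from collections import defaultdict

    def map_phase(doc_id, text):
        """map: emit (word, 1) pairs for every word in one input split."""
        for word in text.lower().split():
            yield word, 1

    def shuffle(mapped):
        """shuffle/sort: group intermediate values by key (handled by the runtime)."""
        groups = defaultdict(list)
        for key, value in mapped:
            groups[key].append(value)
        return groups

    def reduce_phase(word, counts):
        """reduce: aggregate all values for one key."""
        return word, sum(counts)

    docs = {1: "map reduce hides parallel details", 2: "map tasks run in parallel"}
    mapped = [pair for doc_id, text in docs.items() for pair in map_phase(doc_id, text)]
    print(dict(reduce_phase(w, c) for w, c in shuffle(mapped).items()))
    ```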
  • Paper
    MU Jian-jun;JIAO Xiao-peng;CAO Xun-zhi
    Acta Electronica Sinica. 2009, 37(7): 1571-1577.
    Most network communication based on TCP treats data as an ordered sequence of packets. This ordered-sequence paradigm of TCP is too restrictive when the data is large or is to be distributed to a large number of users. In the data transmission paradigm based on digital fountain codes, obviating the need for ordered data simplifies data delivery. Thus, digital fountain codes are a class of efficient codes suitable for reliable communication and have been widely used in many areas. This paper surveys the principles of several types of digital fountain codes together with their advantages and disadvantages, and investigates their engineering applications. Some key problems that need to be solved in the study of digital fountain codes are pointed out. Finally, the future development and research directions of digital fountain codes are discussed.
  • LIU Jian-wei, CUI Li-peng, LUO Xiong-lin
    Acta Electronica Sinica. 2015, 43(4): 776-782. https://doi.org/10.3969/j.issn.0372-2112.2015.04.021

    Sparsity and group sparsity have important applications in statistics, signal processing and machine learning. This paper summarizes and analyzes the differences and relations between various group sparse models. In addition, we compare different models in terms of variable selection ability, variable group selection ability, variable selection consistency and variable group selection consistency. We also summarize the algorithms for group sparse models and point out their advantages and disadvantages. Finally, we point out future research directions for group sparse models.

  • Paper
    LI Rong;YE Shi-wei;SHI Zhong-zhi
    Acta Electronica Sinica. 2002, 30(5): 745-748.
    A new algorithm that combines the support vector machine (SVM) with K nearest neighbours (KNN) is presented, forming a new classifier. The classifier is based on treating the SVM as a 1NN classifier in which only one representative point is selected for each class. In the classification phase, the algorithm computes the distance from the test sample to the optimal hyperplane of the SVM in feature space. If the distance is greater than a given threshold, the test sample is classified by the SVM; otherwise, the KNN algorithm is used, with every support vector taken as a representative point: the distances between the test sample and the support vectors are compared, and the test sample is classified by its k nearest neighbours (a sketch follows below). Numerical experiments show that the combined algorithm not only improves accuracy compared with the SVM alone, but also alleviates the problem of selecting the kernel parameters of the SVM.
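
    A rough sketch of the SVM-KNN idea using scikit-learn: samples far from the decision boundary are labeled by the SVM, samples near it by KNN over the support vectors. The threshold and k are arbitrary assumptions, and distances are measured in input space here rather than in the kernel feature space used in the paper.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def svm_knn_predict(clf, X_train, y_train, X_test, threshold=0.8, k=5):
        """If a sample lies far from the SVM hyperplane, trust the SVM decision;
        otherwise fall back to KNN over the support vectors only."""
        margins = clf.decision_function(X_test)       # signed (scaled) distance to hyperplane
        sv, sv_labels = clf.support_vectors_, y_train[clf.support_]
        preds = clf.predict(X_test)
        for i, m in enumerate(margins):
            if abs(m) < threshold:                    # near the boundary: use KNN on SVs
                d = np.linalg.norm(sv - X_test[i], axis=1)
                nearest = sv_labels[np.argsort(d)[:k]]
                preds[i] = np.bincount(nearest).argmax()
        return preds

    X, y = make_classification(n_samples=400, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
    print((svm_knn_predict(clf, X_tr, y_tr, X_te) == y_te).mean())
    ```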
  • Paper
    HUANG Xiang-lin;SHEN Lan-sun
    Acta Electronica Sinica. 2002, 30(7): 1065-1071.
    With the rapid increase of image data, an effective and efficient image retrieval system is needed for finding useful images in large databases or on the Internet. Because of the complexity of images, traditional information retrieval techniques, which are based on value and/or text comparison, are not sufficient, so research has shifted to content-based image retrieval (CBIR). In this paper, some key issues in CBIR are discussed: image feature extraction, similarity measures, the expression of query conditions, retrieval techniques in the compressed domain, performance evaluation, and so on. Some future research directions are also given in the conclusion.
  • LI Shun-dong, WANG Dao-shun
    Acta Electronica Sinica. 2013, 41(4): 798-803. https://doi.org/10.3969/j.issn.0372-2112.2013.04.029
    Secure multiparty computation is a key privacy-preserving technology in cyberspace and a research focus in the international cryptographic community. We first present a new encoding scheme for private data. Using this encoding scheme together with a homomorphic encryption scheme, we construct a new scheme for Yao's millionaires' problem and prove its privacy-preserving property. The new scheme is more concise and more general, and can be applied to compare any two objects on which a total order can be defined. Finally, we utilize the new scheme to propose a solution to the coprime problem and prove the privacy-preserving properties of the solution.