Acta Electronica Sinica ›› 2021, Vol. 49 ›› Issue (12): 2479-2489. DOI: 10.12263/DZXB.20200404
GAO Jun-tao1, WANG Mei1, XU Guang-hui1, LIU Cong2
Received: 2020-04-27
Revised: 2021-01-06
Online: 2021-12-25
Published: 2021-12-25
Abstract:
Research on regular language inference starts from finite information about a language and derives a regular language model through induction and reasoning. The technique is widely applied in information extraction, software engineering, pattern recognition, and other fields. This paper first clarifies the concept of language learnability and the criteria for evaluating inference results. It then classifies, summarizes, and compares passive, active, and neural-network-based learning algorithms with respect to inference strategy, data structures, and algorithmic complexity, tracing the technical development of each line of work. Next, it analyzes three generalization effects produced by inference. Finally, it points out shortcomings of current research and discusses future research directions.
GAO Jun-tao, WANG Mei, XU Guang-hui, LIU Cong. A Survey on Regular Language Inference[J]. Acta Electronica Sinica, 2021, 49(12): 2479-2489.
Algorithm | Data structure | Sample condition | Language model | Efficiency measure reported
---|---|---|---|---
TB | APTA | Uniformly complete sample | Minimal DFA | Run-time upper bound
 | | Non-uniformly-complete sample | PTA |
Gold | State characterization matrix (also called the evidence table) | Characteristic sample | Minimal DFA | Time complexity; space complexity
 | | Non-characteristic sample | PTA |
RPNI | PTA | Characteristic sample | Minimal DFA | Time complexity; run-time upper bound
 | | Non-characteristic sample | DFA |
EDSM | APTA | Characteristic sample | Minimal DFA | Run-time upper bound
 | | Non-characteristic sample | DFA |
Blue-Fringe | PTMM | Characteristic sample | Minimal Moore DFA | Run-time upper bound
 | | Non-characteristic sample | Moore DFA |
Zhang | Red-blue state sets | Semantically complete sample | Minimal DFA | Time complexity; space complexity
 | | Non-semantically-complete sample | DFA |
Exbar | APTA | Arbitrary sample | Minimal DFA | -
QSM | PTA | Characteristic sample | Minimal DFA | Time complexity
 | | Non-characteristic sample | DFA |
DeLeTe2 | PTMM | Arbitrary sample | RFSA | Time complexity
Grinchtein | Boolean formulas | Arbitrary sample | Minimal DFA | Encoding length
dfasat | Boolean formulas | Arbitrary sample | Minimal DFA | Encoding length
BFS | Boolean formulas | Arbitrary sample | Minimal DFA | Encoding length
DFA-Inductor2 | Equations and Boolean formulas | Arbitrary sample | Minimal DFA | Encoding length
Table 1 Learning algorithms from positive and negative samples
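The state-merging family in Table 1 (TB, RPNI, EDSM, Blue-Fringe) shares one skeleton: build a prefix tree acceptor (PTA) from the positive sample, then merge states for as long as the merged machine still rejects every negative word. The sketch below illustrates that skeleton; it is a naive greedy variant, not the red-blue merge order of RPNI or EDSM, and every name in it (`build_pta`, `merge_states`, ...) is ours, not from any cited implementation.

```python
from itertools import combinations

def build_pta(positives):
    """Prefix tree acceptor: one state per prefix of the positive sample."""
    states = {""}
    for w in positives:
        for i in range(1, len(w) + 1):
            states.add(w[:i])
    edges = [(q[:-1], q[-1], q) for q in states if q]  # parent --symbol--> child
    return states, edges

def consistent(partition, edges, positives, negatives):
    """Check that the quotient machine induced by `partition` still accepts
    every positive word and rejects every negative one.  The quotient may be
    nondeterministic, so we track the set of reachable blocks."""
    accepting = {partition[w] for w in positives}
    def accepts(word):
        current = {partition[""]}
        for ch in word:
            current = {partition[d] for (s, a, d) in edges
                       if a == ch and partition[s] in current}
        return bool(current & accepting)
    return (all(accepts(w) for w in positives)
            and not any(accepts(w) for w in negatives))

def merge_states(positives, negatives):
    """Greedy consistency-driven state merging, a naive cousin of RPNI:
    keep any pairwise merge after which the machine still separates the
    positive sample from the negative sample."""
    states, edges = build_pta(positives)
    partition = {q: q for q in states}  # every state starts in its own block
    changed = True
    while changed:
        changed = False
        for p, q in combinations(sorted(set(partition.values())), 2):
            trial = {s: (p if b == q else b) for s, b in partition.items()}
            if consistent(trial, edges, positives, negatives):
                partition, changed = trial, True
                break
    return partition
```

For the sample S+ = {a, aa, aaa}, S- = {ε}, the loop collapses the four PTA states into a two-state machine for a+, which is the behaviour Table 1 describes for characteristic samples.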
Algorithm | Language model | Data structure | Counterexample handling | Complexity measure reported
---|---|---|---|---
L* | DFA | Observation table (OT) | Adds all prefixes of the counterexample | Space complexity
Sampling-based L* | DFA | OT | Adds all prefixes of the counterexample | -
Rivest & Schapire | DFA | OT | Adds a single suffix of the counterexample | Space complexity
Kearns & Vazirani | DFA | Discrimination tree (DT) | Adds a single prefix of the counterexample | Space complexity
TTT | DFA | DT | Adds a single suffix of the counterexample | Space complexity
NL* | RNFA | OT | Adds all suffixes of the counterexample |
AL* | AFA, UFA | OT | Adds some suffixes of the counterexample |
AL** | Residual AFA | OT | Adds all suffixes of the counterexample |
Table 2 Active learning algorithms
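The OT (observation table) that most rows of Table 2 rely on can be made concrete with a small sketch of the core of Angluin's L*: rows are access prefixes, columns are distinguishing suffixes, and a closed table yields a hypothesis DFA whose states are the distinct rows. This is a simplified illustration assuming only a membership oracle; the consistency check and the counterexample processing that complete the algorithm are omitted, and the class and method names are ours.

```python
class ObservationTable:
    """Observation table (the OT of Table 2) for an L*-style learner.
    A sketch: membership queries and the closedness check only."""

    def __init__(self, alphabet, member):
        self.A, self.member = alphabet, member
        self.S, self.E = [""], [""]  # access prefixes, distinguishing suffixes

    def row(self, s):
        # The row of prefix s: its behaviour on every suffix in E.
        return tuple(self.member(s + e) for e in self.E)

    def close(self):
        # Closed: every one-step extension s+a of a prefix in S behaves like
        # some existing row of S; otherwise promote the extension into S.
        while True:
            rows = {self.row(s) for s in self.S}
            missing = next((s + a for s in self.S for a in self.A
                            if self.row(s + a) not in rows), None)
            if missing is None:
                return
            self.S.append(missing)

    def hypothesis(self):
        # States of the conjectured DFA are the distinct rows of S.
        self.close()
        states = {self.row(s): s for s in self.S}
        delta = {(self.row(s), a): self.row(s + a)
                 for s in self.S for a in self.A}
        accept = {r for r, s in states.items() if self.member(s)}
        return states, delta, accept, self.row("")

def run(delta, start, accept, word):
    """Execute the hypothesis DFA on a word."""
    q = start
    for a in word:
        q = delta[(q, a)]
    return q in accept
```

For an easy target such as the parity language (an even number of a's over {a, b}), the first closed table already yields the correct two-state DFA, so no counterexample round is needed; harder targets are exactly where the counterexample-handling strategies that distinguish the rows of Table 2 come into play.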
No. | Algorithm | Target system | Sample/query condition
---|---|---|---
1 | TTT | Allows exploratory operations | Equivalence queries can be answered
2 | Sampling-based L* | | Approximate inference is acceptable
3 | dfasat | Does not allow exploratory operations | Small sample size
4 | DFA-Inductor2 | |
5 | Exbar | |
6 | Blue-Fringe | | Large sample size
7 | EDSM | |
8 | RPNI | |
9 | Incremental SAT inference | | Long sample strings
Table 3 Recommended learning algorithms for software model inference
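The SAT-based rows above (dfasat, DFA-Inductor2, incremental SAT inference) all reduce minimal-DFA identification to the same combinatorial question: can the prefix-tree states be coloured with k colours so that the colouring is deterministic and separates accepted from rejected words? The sketch below answers that question by brute-force enumeration rather than by calling a SAT solver, which makes the encoded constraints visible; the function names are ours, and the only symmetry breaking shown is the trivial root-gets-colour-0 rule (the cited tools use much stronger ones).

```python
from itertools import product

def prefixes(words):
    """All prefixes of the sample words; these are the prefix-tree states."""
    ps = {""}
    for w in words:
        for i in range(1, len(w) + 1):
            ps.add(w[:i])
    return sorted(ps)

def min_dfa_size(positives, negatives):
    """Exact minimal consistent DFA size via exhaustive colouring search.
    A colouring of the prefix-tree states with k colours is valid when
    (1) merged states never send the same symbol to different colours
    (determinism) and (2) no colour is both accepting and rejecting."""
    states = prefixes(list(positives) + list(negatives))
    pos, neg = set(positives), set(negatives)
    for k in range(1, len(states) + 1):
        for colouring in product(range(k), repeat=len(states)):
            c = dict(zip(states, colouring))
            if c[""] != 0:  # symmetry breaking: the start state gets colour 0
                continue
            trans, ok = {}, True
            for q in states:
                if not q:
                    continue
                key = (c[q[:-1]], q[-1])  # merged source colour + input symbol
                if trans.setdefault(key, c[q]) != c[q]:
                    ok = False  # two merged states disagree: not deterministic
                    break
            if ok and not ({c[w] for w in pos} & {c[w] for w in neg}):
                return k
    return len(states)
```

SAT encodings express exactly the `trans` and accept/reject constraints as clauses over Boolean colour variables and ask a solver for satisfiability at increasing k, which is what makes them practical far beyond the toy sizes this enumeration can handle.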
References
[1] BEX G J, NEVEN F, SCHWENTICK T, et al. Inference of concise regular expressions and DTDs[J]. ACM Transactions on Database Systems, 2010, 35(2): 1-47.
[2] XIE Y, YU F, ACHAN K, et al. Spamming botnets: signatures and characteristics[J]. SIGCOMM Computer Communication Review, 2008, 38(4): 171-182.
[3] CHIVILIKHIN D, PATIL S, CHUKHAREV K, et al. Automatic state machine reconstruction from legacy programmable logic controller using data collection and SAT solver[J]. IEEE Transactions on Industrial Informatics, 2020, 16(12): 7821-7831.
[4] CHEN Y-F, HSIEH C, LENGÁL O, et al. PAC learning-based verification and model synthesis[A]. DILLON L. Proceedings of the 38th International Conference on Software Engineering[C]. New York, United States: ACM, 2016. 714-724.
[5] GOLD E M. Language identification in the limit[J]. Information and Control, 1967, 10(5): 447-474.
[6] ANGLUIN D. Inductive inference of formal languages from positive data[J]. Information and Control, 1980, 45(2): 117-135.
[7] VALIANT L G. A theory of the learnable[J]. Communications of the ACM, 1984, 27(11): 1134-1142.
[8] ISHIGAMI Y, TANI S. VC-dimensions of finite automata and commutative finite automata with k letters and n states[J]. Discrete Applied Mathematics, 1997, 74(3): 229-240.
[9] GOLD E M. Complexity of automaton identification and control[J]. Information and Control, 1978, 37(4): 302-320.
[10] ANGLUIN D. On the complexity of minimum inference of regular sets[J]. Information and Control, 1978, 39(3): 337-350.
[11] ANGLUIN D. Queries and concept learning[J]. Machine Learning, 1988, 2(3): 319-342.
[12] FREYDENBERGER D, REIDENBACH D. Inferring descriptive generalizations of formal languages[J]. Journal of Computer and System Sciences, 2012, 79(5): 622-638.
[13] WALKINSHAW N, BOGDANOV K, DAMAS C, et al. A framework for the competitive evaluation of model inference techniques[A]. GROZ R, LI K. Proceedings of the First International Workshop on Model Inference In Testing[C]. New York, United States: ACM, 2010. 1-9.
[14] GRACHEV P, BEZBORODOV R, SMETANNIKOV I, et al. Exploring the relationship between the structural and the actual similarities of automata[A]. Proceedings of the 3rd International Conference on Machine Learning and Soft Computing[C]. New York, United States: ACM, 2019. 81-86.
[15] CHAN C-Y, GAROFALAKIS M, RASTOGI R. RE-tree: an efficient index structure for regular expressions[J]. The VLDB Journal, 2003, 12(2): 102-118.
[16] GRÜNWALD P D. The Minimum Description Length Principle[M]. Dordrecht, Netherlands: Springer, 2007.
[17] RISSANEN J. Modeling by shortest data description[J]. Automatica, 1978, 14(5): 465-471.
[18] GOLD E M. Complexity of automaton identification from given data[J]. Information and Control, 1978, 37(3): 302-320.
[19] TRAKHTENBROT B A, BARZDIN Y M. Finite Automata: Behavior and Synthesis[M]. Amsterdam, Netherlands: North-Holland, 1973.
[20] VÁZQUEZ DE PARGA M, GARCÍA P, LÓPEZ D. Minimal consistent DFA revisited[J]. Theoretical Computer Science, 2016, 647: 43-49.
[21] ZHANG C. Minimal consistent DFA from sample strings[J]. Acta Informatica, 2020, 57(3): 657-670.
[22] ONCINA J, GARCÍA P. Inferring regular languages in polynomial update time[A]. SANFELIU A, BLANCA N P D L, VIDAL E. Pattern Recognition and Image Analysis[C]. Singapore: World Scientific, 1992. 49-61.
[23] LANG K J. Random DFA's can be approximately learned from sparse uniform examples[A]. HAUSSLER D. Proceedings of the Fifth Annual Workshop on Computational Learning Theory[C]. New York, United States: ACM, 1992. 45-52.
[24] LANG K J, PEARLMUTTER B A, PRICE R A. Results of the Abbadingo One DFA learning competition and a new evidence-driven state merging algorithm[A]. HONAVAR V G, SLUTZKI G. Proceedings of the 4th International Colloquium on Grammatical Inference[C]. Berlin, Germany: Springer, 1998. 1-12.
[25] LANG K. Faster Algorithms for Finding Minimal Consistent DFAs[R]. Princeton, United States: NEC Research Institute, 1999.
[26] OLIVEIRA A L, SILVA J P M. Efficient algorithms for the inference of minimum size DFAs[J]. Machine Learning, 2001, 44(1-2): 93-119.
[27] ZAKIRZYANOV I, MORGADO A, IGNATIEV A, et al. Efficient symmetry breaking for SAT-based minimum DFA inference[A]. MARTÍN-VIDE C, OKHOTIN A, SHAPIRA D. Language and Automata Theory and Applications[C]. Cham, Switzerland: Springer, 2019. 159-173.
[28] HEULE M J H, VERWER S. Exact DFA identification using SAT solvers[A]. SEMPERE J M, GARCÍA P. Grammatical Inference: Theoretical Results and Applications[C]. Berlin, Germany: Springer, 2010. 66-79.
[29] SMETSERS R, FITERĂU-BROŞTEAN P, VAANDRAGER F. Model learning as a satisfiability modulo theories problem[A]. KLEIN S T, MARTÍN-VIDE C, SHAPIRA D. Language and Automata Theory and Applications[C]. Cham, Switzerland: Springer, 2018. 182-194.
[30] DUPONT P, LAMBEAU B, DAMAS C, et al. The QSM algorithm and its application to software behavior model induction[J]. Applied Artificial Intelligence, 2008, 22(1-2): 77-115.
[31] DENIS F, LEMAY A, TERLUTTE A. Learning regular languages using non deterministic finite automata[A]. Proceedings of the 5th International Colloquium on Grammatical Inference: Algorithms and Applications[C]. Berlin, Germany: Springer, 2000. 39-50.
[32] GRINCHTEIN O, LEUCKER M, PITERMAN N. Inferring network invariants automatically[A]. FURBACH U, SHANKAR N. Automated Reasoning[C]. Berlin, Germany: Springer, 2006. 483-497.
[33] ZAKIRZYANOV I, SHALYTO A, ULYANTSEV V. Finding all minimum-size DFA consistent with given examples: SAT-based approach[A]. CERONE A, ROVERI M. Software Engineering and Formal Methods[C]. Cham, Switzerland: Springer, 2018. 117-131.
[34] GRACHEV P. Reputational genetic model for regular inference[A]. Proceedings of the 3rd International Conference on Advances in Image Processing[C]. New York, United States: ACM, 2019. 185-189.
[35] GRACHEV P. Grammar inference with multiparameter genetic model[A]. Proceedings of the 3rd International Conference on Advances in Image Processing[C]. New York, United States: ACM, 2019. 160-164.
[36] LIU J, BAI R, LU Z, et al. Data-driven regular expressions evolution for medical text classification using genetic programming[A]. 2020 IEEE Congress on Evolutionary Computation[C]. Piscataway, United States: IEEE, 2020. 1-8.
[37] RADHAKRISHNAN V, NAGARAJA G. Inference of regular grammars via skeletons[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1987, 17(6): 982-992.
[38] GARCÍA P, VIDAL E. Inference of k-testable languages in the strict sense and application to syntactic pattern recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990, 12(9): 920-925.
[39] GARCÍA P, VIDAL E, ONCINA J. Learning locally testable languages in the strict sense[A]. ARIKAWA S, GOTO S, OHSUGA S, et al. Proceedings of ALT'90[C]. Tokyo, Japan: JSAI, 1990. 325-338.
[40] AHONEN H. Generating Grammars for Structured Documents Using Grammatical Inference Methods[D]. Helsinki, Finland: University of Helsinki, 1996.
[41] TANIDA N, YOKOMORI T. Polynomial time identification of strictly regular languages in the limit[J]. IEICE Transactions on Information and Systems, 1992, E75-D(6): 125-132.
[42] EMERALD J D, SUBRAMANIAN K G, THOMAS D G. Learning code regular and code linear languages[A]. MICLET L, HIGUERA C D L. Grammatical Interference: Learning Syntax from Sentences[C]. Berlin, Germany: Springer, 1996. 211-221.
[43] MÄKINEN E. Inferring uniquely terminating regular languages from positive data[J]. Information Processing Letters, 1997, 62(2): 57-60.
[44] FERNAU H. Algorithms for learning regular expressions from positive data[J]. Information and Computation, 2009, 207(4): 521-541.
[45] BEX G J, GELADE W, NEVEN F, et al. Learning deterministic regular expressions for the inference of schemas from XML data[J]. ACM Transactions on the Web, 2010, 4(4): 1-32.
[46] FENG Xiao-qiang, ZHENG Li-xiao, CHEN Hai-ming. Inferring algorithm for a subclass of restricted regular expressions[J]. Computer Science, 2014, 41(4): 178-183. (in Chinese)
[47] BIERMANN A W, FELDMAN J A. On the synthesis of finite-state machines from samples of their behavior[J]. IEEE Transactions on Computers, 1972, C-21(6): 592-597.
[48] ZAANEN M M V. Bootstrapping Structure into Language: Alignment-Based Learning[D]. Mahikeng, South Africa: North-West University, 2001.
[49] MIN J-K, AHN J-Y, CHUNG C-W. Efficient extraction of schemas for XML documents[J]. Information Processing Letters, 2003, 85(1): 7-12.
[50] GALASSI U, GIORDANA A. Learning regular expressions from noisy sequences[A]. ZUCKER J-D, SAITTA L. Abstraction, Reformulation and Approximation[C]. Berlin, Germany: Springer, 2005. 92-106.
[51] GAROFALAKIS M, GIONIS A, RASTOGI R, et al. XTRACT: learning document type descriptors from XML document collections[J]. Data Mining and Knowledge Discovery, 2003, 7(1): 23-56.
[52] GAO J, ZHANG Y. Regular expression learning from positive examples based on integer programming[J]. International Journal of Software Engineering and Knowledge Engineering, 2020, 30(10): 1-37.
[53] ANGLUIN D. Learning regular sets from queries and counterexamples[J]. Information and Computation, 1987, 75(2): 87-106.
[54] KEARNS M J, VAZIRANI U. An Introduction to Computational Learning Theory[M]. Cambridge, United States: MIT Press, 1994.
[55] ISBERNER M, HOWAR F, STEFFEN B. The TTT algorithm: a redundancy-free approach to active automata learning[A]. BONAKDARPOUR B, SMOLKA S A. Runtime Verification[C]. Cham, Switzerland: Springer, 2014. 307-322.
[56] DENIS F, LEMAY A, TERLUTTE A. Residual finite state automata[A]. FERREIRA A, REICHEL H. Proceedings of the 18th Annual Symposium on Theoretical Aspects of Computer Science[C]. Berlin, Germany: Springer, 2001. 144-157.
[57] BOLLIG B, HABERMEHL P, KERN C, et al. Angluin-style learning of NFA[A]. KITANO H. Proceedings of the 21st International Joint Conference on Artificial Intelligence[C]. San Francisco, United States: Morgan Kaufmann, 2009. 1004-1009.
[58] ANGLUIN D, EISENSTAT S, FISMAN D. Learning regular languages via alternating automata[A]. YANG Q, WOOLDRIDGE M. Proceedings of the 24th International Conference on Artificial Intelligence[C]. Palo Alto, United States: AAAI Press, 2015. 3308-3314.
[59] BERNDT S, LIŚKIEWICZ M, LUTTER M, et al. Learning residual alternating automata[A]. Proceedings of the 31st AAAI Conference on Artificial Intelligence[C]. Palo Alto, United States: AAAI Press, 2017. 1749-1755.
[60] RIVEST R L, SCHAPIRE R E. Inference of finite automata using homing sequences[A]. JOHNSON D S. Proceedings of the 21st Annual ACM Symposium on Theory of Computing[C]. New York, United States: ACM, 1989. 411-420.
[61] ELMAN J L. Finding structure in time[J]. Cognitive Science, 1990, 14(2): 179-211.
[62] WATROUS R L, KUHN G M. Induction of finite-state automata using second-order recurrent networks[J]. NIPS, 1991, 4(3): 309-316.
[63] GRACHEV P, LOBANOV I, SMETANNIKOV I, et al. Neural network for synthesizing deterministic finite automata[A]. KLIMOVA A, BILYATDINOVA A, KORTELAINEN J, et al. Proceedings of the 6th International Young Scientist Conference on Computational Science[C]. Amsterdam, Netherlands: Elsevier, 2017. 73-82.
[64] MICHALENKO J J, SHAH A, VERMA A, et al. Representing formal languages: a comparison between finite automata and recurrent neural networks[EB/OL]. 2019-05-06/2020-04-27.
[65] OMLIN C W, GILES C L. Extraction of rules from discrete-time recurrent neural networks[J]. Neural Networks, 1996, 9(1): 41-52.
[66] CECHIN A L, SIMON D R P, STERTZ K. State automata extraction from recurrent neural nets using k-means and fuzzy clustering[A]. Proceedings of the XXIII International Conference of the Chilean Computer Science Society[C]. Washington, United States: IEEE Computer Society, 2003. 73-78.
[67] AYACHE S, EYRAUD R, GOUDIAN N. Explaining black boxes on sequential data using weighted automata[A]. UNOLD O, DYRKA W, WIECZOREK W. Proceedings of Machine Learning Research 93[C]. Cambridge, United States: MIT, 2018. 81-103.
[68] WEISS G, GOLDBERG Y, YAHAV E. Extracting automata from recurrent neural networks using queries and counterexamples[A]. DY J, KRAUSE A. Proceedings of the 35th International Conference on Machine Learning[C]. Cambridge, United States: JMLR, 2018. 5247-5256.
[69] AVELLANEDA F, PETRENKO A. FSM inference from long traces[A]. HAVELUND K, PELESKA J, ROSCOE B, et al. Formal Methods[C]. Cham, Switzerland: Springer, 2018. 93-109.
[70] WIEMAN R, ANICHE M, LOBBEZOO W, et al. An experience report on applying passive learning in a large-scale payment company[A]. Proceedings of IEEE International Conference on Software Maintenance and Evolution[C]. Piscataway, United States: IEEE, 2017. 564-573.
[71] BRĀZMA A. Efficient identification of regular expressions from representative examples[A]. PITT L. Proceedings of the 6th Annual Conference on Computational Learning Theory[C]. New York, United States: ACM, 1993. 236-242.
[72] FELDMAN J A, GIPS J, HORNING J J, et al. Grammatical Complexity and Inference[R]. Stanford, United States: Stanford University, 1969.
[73] ITOGA S Y. A new heuristic for inferring regular grammars[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1981, PAMI-3(2): 191-197.
[74] FU K-S, BOOTH T L. Grammatical inference: introduction and survey-part I[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, PAMI-8(3): 343-359.
[75] BRÜGGEMANN-KLEIN A. Regular expressions into finite automata[J]. Theoretical Computer Science, 1993, 120(2): 197-213.
[76] RULOT H, VIDAL E. Modelling (sub)string length based constraints through a grammatical inference method[A]. DEVIJVER P A, KITTLER J. Pattern Recognition Theory and Applications[C]. Berlin, Heidelberg: Springer, 1987. 451-459.