Multi-view active learning can reduce the version space more rapidly than traditional single-view active learning and has been applied to large-scale data analysis. This paper proposes two improvements, one in hypothesis generation and one in the sampling strategy. First, a boosting-like idea is integrated into the active learning framework, which takes a weighted vote over the hypotheses produced by all past queries. Second, a novel adaptive hierarchical competition sampling strategy is presented: when the number of contention samples is large, unsupervised spectral clustering is activated to obtain their coarse distribution in the feature space, and within each cluster both classification uncertainty and redundancy measures are considered to query unlabeled samples in batch mode by solving a quadratic program. We apply multi-view active learning to image classification to demonstrate the effectiveness of these improvements, using different image features as the views from which the corresponding hypotheses are generated. Experiments show that both proposals efficiently improve the performance of multi-view active learning, and that even a random combination of these views converges faster and achieves better classification accuracy than state-of-the-art single-view active learning algorithms.
YAO Tuo-zhong, AN Peng, SONG Jia-tao. Multi-View Active Learning Based on Weighted Hypothesis Boosting and Hierarchical Competition Sampling[J]. Acta Electronica Sinica, 2017, 45(1): 46-53.
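To make the hierarchical competition sampling idea concrete, the following is a minimal Python sketch, not the authors' implementation. The function name `select_batch`, the trade-off weight `lam`, the use of scikit-learn's SpectralClustering and an RBF kernel as the redundancy measure, and the SLSQP-relaxed quadratic program are all illustrative assumptions; the paper's exact uncertainty and redundancy definitions are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel


def select_batch(X_cont, uncertainty, n_clusters=5, batch_size=10, lam=1.0):
    """Sketch of hierarchical competition sampling: cluster the contention
    samples, then in each cluster solve a small relaxed QP that trades off
    classification uncertainty against redundancy."""
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity='rbf').fit_predict(X_cont)
    per_cluster = max(1, batch_size // n_clusters)
    selected = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        if len(idx) == 0:
            continue
        u = uncertainty[idx]          # informativeness of each contention sample
        S = rbf_kernel(X_cont[idx])   # pairwise similarity used as redundancy
        b = min(per_cluster, len(idx))

        # Relaxed QP over continuous query weights q:
        #   min_q  -u^T q + lam * q^T S q   s.t.  0 <= q <= 1,  sum(q) = b
        def obj(q):
            return -u @ q + lam * q @ S @ q

        q0 = np.full(len(idx), b / len(idx))
        res = minimize(obj, q0, method='SLSQP',
                       bounds=[(0.0, 1.0)] * len(idx),
                       constraints=[{'type': 'eq',
                                     'fun': lambda q: q.sum() - b}])
        # Round the continuous solution: query the b highest-weighted samples.
        selected.extend(idx[np.argsort(-res.x)[:b]])
    return np.array(selected)


# Hypothetical usage: features and per-sample uncertainty are placeholders,
# e.g. image features and a disagreement score from the multi-view hypotheses.
X_cont = np.random.rand(200, 64)
uncertainty = np.random.rand(200)
query_idx = select_batch(X_cont, uncertainty, n_clusters=4, batch_size=12)
```

Because the per-cluster budget is an even split of `batch_size`, the total number of queried samples may differ slightly from the requested batch when clusters are small; a production version would redistribute the remaining budget across clusters.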