Abstract: The Generate-to-Adapt model uses a generative adversarial network (GAN) to implement its model structure and has achieved a breakthrough in domain adaptation learning. However, some of its network components lack information interaction, and adversarial learning alone is not sufficient to fully reduce the inter-domain distance, which degrades classification accuracy. To address this, an Unsupervised Domain Adaptation classification model based on GAN (UDAG) is proposed. The model reduces the inter-domain discrepancy by jointly applying generative adversarial training and the multi-kernel maximum mean discrepancy (MK-MMD) criterion, and it fully exploits the information passed between unsupervised adversarial training and supervised classification training to learn features shared by the source-domain and target-domain distributions. Experimental results on four domain adaptation tasks show that the UDAG model learns a better shared feature embedding, achieves domain-adaptive image classification, and significantly improves classification accuracy.
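The MK-MMD criterion used above to measure inter-domain discrepancy can be sketched with a simple sample estimate: average Gaussian kernels over several bandwidths, then apply MMD² = E[k(s,s′)] + E[k(t,t′)] − 2E[k(s,t)]. The following NumPy sketch is illustrative only; the function names, the bandwidth set, and the biased estimator are assumptions, not the paper's implementation:

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    """Gaussian RBF kernel matrix between the rows of x and the rows of y."""
    d2 = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2.0 * x @ y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mk_mmd2(source, target, sigmas=(1.0, 2.0, 4.0)):
    """Biased estimate of squared multi-kernel MMD between two samples.

    Averages Gaussian kernels over several bandwidths (the "multi-kernel"
    part), then applies MMD^2 = E[k(s,s')] + E[k(t,t')] - 2 E[k(s,t)].
    """
    k_ss = k_tt = k_st = 0.0
    for s in sigmas:
        k_ss += gaussian_kernel(source, source, s).mean()
        k_tt += gaussian_kernel(target, target, s).mean()
        k_st += gaussian_kernel(source, target, s).mean()
    n = len(sigmas)
    return k_ss / n + k_tt / n - 2.0 * k_st / n

rng = np.random.default_rng(0)
# Two samples from the same distribution vs. two shifted apart:
same = mk_mmd2(rng.normal(0, 1, (200, 8)), rng.normal(0, 1, (200, 8)))
far = mk_mmd2(rng.normal(0, 1, (200, 8)), rng.normal(3, 1, (200, 8)))
```

Minimizing such an estimate over the embedding network, alongside the adversarial loss, is what drives the source and target feature distributions together; the biased estimate is nonnegative and grows as the two distributions separate.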