[1] WANG X,GUPTA A.Unsupervised learning of visual representations using videos[A].Proceedings of the IEEE International Conference on Computer Vision[C].Santiago,Chile:IEEE,2015.2794-2802.
[2] MAHJOURIAN R,WICKE M,ANGELOVA A.Unsupervised learning of depth and ego-motion from monocular video using 3D geometric constraints[A].Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition[C].Salt Lake City,USA:IEEE,2018.5667-5675.
[3] 刘建伟,孙正康,等.域自适应学习研究进展[J].自动化学报,2014,40(8):1576-1600. LIU Jian-Wei,SUN Zheng-Kang,et al.Review and research development on domain adaptation learning[J].Acta Automatica Sinica,2014,40(8):1576-1600.(in Chinese)
[4] PAN S J,YANG Q.A survey on transfer learning[J].IEEE Transactions on Knowledge and Data Engineering,2010,22(10):1345-1359.
[5] ROZANTSEV A,SALZMANN M,FUA P.Beyond sharing weights for deep domain adaptation[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2018,41(4):801-814.
[6] GANIN Y,LEMPITSKY V.Unsupervised domain adaptation by backpropagation[EB/OL].https://arxiv.org/abs/1409.7495v1,2015-02-27.
[7] GHIFARY M,KLEIJN W B,ZHANG M,et al.Deep reconstruction-classification networks for unsupervised domain adaptation[A].European Conference on Computer Vision[C].Amsterdam,the Netherlands:Springer,2016.597-613.
[8] TZENG E,HOFFMAN J,ZHANG N,et al.Deep domain confusion:Maximizing for domain invariance[EB/OL].https://arxiv.org/abs/1412.3474,2014-12-10.
[9] TZENG E,HOFFMAN J,SAENKO K,et al.Adversarial discriminative domain adaptation[A].Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition[C].Honolulu,USA:IEEE,2017.7167-7176.
[10] GOODFELLOW I,POUGET-ABADIE J,MIRZA M,et al.Generative adversarial nets[A].Advances in Neural Information Processing Systems[C].Montreal,Canada:MIT Press,2014.2672-2680.
[11] 王万良,李卓蓉.生成式对抗网络研究进展[J].通信学报,2018,39(2):135-148. WANG Wan-Liang,LI Zhuo-Rong.Advances in generative adversarial network[J].Journal on Communications,2018,39(2):135-148.(in Chinese)
[12] RUSSO P,CARLUCCI F M,TOMMASI T,et al.From source to target and back:symmetric bi-directional adaptive GAN[A].Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition[C].Salt Lake City,USA:IEEE,2018.8099-8108.
[13] VOLPI R,MORERIO P,SAVARESE S,et al.Adversarial feature augmentation for unsupervised domain adaptation[A].Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition[C].Salt Lake City,USA:IEEE,2018.5495-5504.
[14] HOFFMAN J,TZENG E,PARK T,et al.CyCADA:cycle-consistent adversarial domain adaptation[EB/OL].https://arxiv.org/abs/1711.03213,2017-12-29.
[15] SANKARANARAYANAN S,BALAJI Y,CASTILLO C D,et al.Generate to adapt:aligning domains using generative adversarial networks[A].Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition[C].Salt Lake City,USA:IEEE,2018.8503-8512.
[16] GRETTON A,SEJDINOVIC D,STRATHMANN H,et al.Optimal kernel choice for large-scale two-sample tests[A].Advances in Neural Information Processing Systems[C].Nevada,USA:MIT Press,2012.1205-1213.
[17] LI J,LU K,HUANG Z,et al.Transfer independently together:a generalized framework for domain adaptation[J].IEEE Transactions on Cybernetics,2018 (99):1-12.
[18] GRETTON A,FUKUMIZU K,HARCHAOUI Z,et al.A fast,consistent kernel two-sample test[A].Advances in Neural Information Processing Systems[C].Vancouver,Canada:MIT Press,2009.673-681.
[19] DENG L.The MNIST database of handwritten digit images for machine learning research [Best of the Web][J].IEEE Signal Processing Magazine,2012,29(6):141-142.
[20] CHAABAN I,SCHEESSELA M R.Human performance on the USPS database[R].South Bend:Indiana University South Bend,2007.
[21] NETZER Y,WANG T,COATES A,et al.Reading digits in natural images with unsupervised feature learning[A].NIPS Workshop on Deep Learning and Unsupervised Feature Learning[C].Granada,Spain:MIT Press,2011.
[22] BOUSMALIS K,TRIGEORGIS G,SILBERMAN N,et al.Domain separation networks[A].Advances in Neural Information Processing Systems[C].Barcelona,Spain:MIT Press,2016.343-351.
[23] VAN DER MAATEN L,HINTON G.Visualizing data using t-SNE[J].Journal of Machine Learning Research,2008,9:2579-2605.