[1] HE Qian-hua,WANG Zhi-feng,RUDNICKY A I,et al.A recording device identification algorithm based on improved PNCC feature and two-step discriminative training[J].Acta Electronica Sinica,2014,42(1):191-198.(in Chinese)
[2] ZOU L,YANG J,HUANG T.Automatic cell phone recognition from speech recordings[A].Proceedings of the 5th IEEE China Summit and International Conference on Signal and Information Processing[C].Xi'an,China:IEEE,2014.621-625.
[3] LUO D,KORUS P,HUANG J.Band energy difference for source attribution in audio forensics[J].IEEE Transactions on Information Forensics and Security,2018,13(9):2179-2189.
[4] QIN T,WANG R,YAN D,et al.Source cell-phone identification in the presence of additive noise from CQT domain[J].Information,2018,9(8):Article No.205.
[5] JIANG Y,LEUNG F H F.Mobile phone identification from speech recordings using weighted support vector machine[A].Proceedings of the 42nd Annual Conference of the IEEE Industrial Electronics Society[C].Florence,Italy:IEEE,2016.963-968.
[6] LI Y,ZHANG X,LI X,et al.Mobile phone clustering from speech recordings using deep representation and spectral clustering[J].IEEE Transactions on Information Forensics and Security,2018,13(4):965-977.
[7] VERMA V,KHATURIA P,KHANNA N.Cell-phone identification from recompressed audio recordings[A].Proceedings of the 24th National Conference on Communications[C].Hyderabad,India:IEEE,2018.1-6.
[8] QI S,HUANG Z,LI Y,et al.Audio recording device identification based on deep learning[A].Proceedings of the IEEE International Conference on Signal and Image Processing[C].Beijing,China:IEEE,2016.426-431.
[9] JIN C,WANG R,YAN D,et al.Source cell-phone identification using spectral features of device self-noise[A].Proceedings of the 15th International Workshop on Digital Watermarking[C].Beijing,China:Springer,2016.29-45.
[10] PEI An-shan,WANG Rang-ding,YAN Di-qun.Cell-phone origin identification based on spectral features of device self-noise[J].Telecommunications Science,2017,33(1):85-94.(in Chinese)
[11] PEI An-shan,WANG Rang-ding,YAN Di-qun.Source cell-phone identification from recorded speech using non-speech segments[J].Telecommunications Science,2017,33(7):103-111.(in Chinese)
[12] BALDINI G,AMERINI I,GENTILE C.Microphone identification using convolutional neural networks[J].IEEE Sensors Letters,2019,3(7):Article No.6001504.
[13] BALDINI G,AMERINI I.Smartphones identification through the built-in microphones with convolutional neural network[J].IEEE Access,2019,7:158685-158696.
[14] BAI S,KOLTER J Z,KOLTUN V.An empirical evaluation of generic convolutional and recurrent networks for sequence modeling[OL].https://arxiv.org/abs/1803.01271,2018-04-19.
[15] ABBASIAN H,NASERSHARIF B,AKBARI A,et al.Optimized linear discriminant analysis for extracting robust speech features[A].Proceedings of the 3rd International Symposium on Communications,Control and Signal Processing[C].St Julians,Malta:IEEE,2008.819-824.
[16] CHANG C C,LIN C J.LIBSVM:A library for support vector machines[J].ACM Transactions on Intelligent Systems and Technology,2011,2(3):Article No.27.
[17] MCFEE B,RAFFEL C,LIANG D,et al.librosa:Audio and music signal analysis in Python[A].Proceedings of the 14th Python in Science Conference[C].Austin,Texas,USA:SciPy Organizers,2015.18-25.
[18] IOFFE S,SZEGEDY C.Batch normalization:accelerating deep network training by reducing internal covariate shift[A].Proceedings of the 32nd International Conference on Machine Learning[C].Lille,France:JMLR.org,2015.448-456.
[19] GRANDINI M,BAGLI E,VISANI G.Metrics for multi-class classification:an overview[OL].https://arxiv.org/abs/2008.05756,2020-08-13.
[20] VAN DER MAATEN L,HINTON G.Visualizing data using t-SNE[J].Journal of Machine Learning Research,2008,9:2579-2605.
[21] KASUN L L C,YANG Y,HUANG G,et al.Dimension reduction with extreme learning machine[J].IEEE Transactions on Image Processing,2016,25(8):3906-3918.