ABSTRACT
This paper proposes using autoencoders for nonlinear dimensionality reduction in the anomaly detection task. The authors apply autoencoder-based dimensionality reduction to both artificial and real data, and compare it with linear PCA and kernel PCA to clarify its properties. The artificial data is generated from the Lorenz system, and the real data is spacecraft telemetry data. The paper demonstrates that autoencoders can detect subtle anomalies that linear PCA fails to detect, and that their accuracy improves further when they are extended to denoising autoencoders. Moreover, autoencoders serve as a nonlinear technique without the complex computation that kernel PCA requires. Finally, the authors examine the features learned in the hidden layer of the autoencoders and show that they model the normal state properly and activate differently for anomalous inputs.
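The approach summarized above — train an autoencoder on normal data and flag inputs with a high reconstruction error — can be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation: the paper does not specify its architecture, optimizer, or preprocessing here, so the one-hidden-layer network, the Euler-integrated Lorenz generator, the network sizes, and the noise model used to simulate anomalies are all assumptions.

```python
import numpy as np

def lorenz(n=2000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Generate a Lorenz-system trajectory by simple Euler integration."""
    state = np.array([1.0, 1.0, 1.0])
    out = np.empty((n, 3))
    for i in range(n):
        x, y, z = state
        state = state + dt * np.array(
            [sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        out[i] = state
    return out

class Autoencoder:
    """One-hidden-layer autoencoder: tanh encoder, linear decoder."""

    def __init__(self, d_in, d_hid, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (d_in, d_hid))
        self.b1 = np.zeros(d_hid)
        self.W2 = rng.normal(0.0, 0.1, (d_hid, d_in))
        self.b2 = np.zeros(d_in)
        self.lr = lr

    def forward(self, X):
        H = np.tanh(X @ self.W1 + self.b1)           # nonlinear code
        return H, H @ self.W2 + self.b2              # linear reconstruction

    def fit(self, X, epochs=500):
        """Full-batch gradient descent on mean squared reconstruction error."""
        n = len(X)
        for _ in range(epochs):
            H, Xhat = self.forward(X)
            err = Xhat - X                           # gradient of 0.5 * MSE
            dH = (err @ self.W2.T) * (1.0 - H ** 2)  # backprop through tanh
            self.W2 -= self.lr * (H.T @ err) / n
            self.b2 -= self.lr * err.mean(axis=0)
            self.W1 -= self.lr * (X.T @ dH) / n
            self.b1 -= self.lr * dH.mean(axis=0)

    def score(self, X):
        """Anomaly score: per-sample mean squared reconstruction error."""
        _, Xhat = self.forward(X)
        return np.mean((X - Xhat) ** 2, axis=1)

# Train on standardized "normal" Lorenz points, then score clean points
# against noise-corrupted ones; corrupted points should score higher.
X = lorenz(2000)
X = (X - X.mean(axis=0)) / X.std(axis=0)
ae = Autoencoder(d_in=3, d_hid=2)
ae.fit(X)
rng = np.random.default_rng(1)
X_anom = X[:200] + rng.normal(0.0, 1.0, (200, 3))
normal_score = ae.score(X[:200]).mean()
anomalous_score = ae.score(X_anom).mean()
```

In practice one would fit a threshold on the score distribution of held-out normal data; points scoring above it are flagged as anomalies. The denoising-autoencoder extension mentioned in the abstract corresponds to corrupting the training inputs (but not the reconstruction targets) during `fit`.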