ABSTRACT
We propose a framework for the quantitative security analysis of machine learning methods. Its key components are the formal specification of a deployed learning model and of the attacker's constraints, the computation of an optimal attack, and the derivation of an upper bound on the adversarial impact. As an example, we apply the framework to one specific learning scenario, online centroid anomaly detection, and experimentally verify the tightness of the resulting theoretical bounds.
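To make the analyzed scenario concrete, the sketch below simulates a greedy poisoning attack against an online centroid anomaly detector and compares the resulting centroid displacement with a harmonic-sum growth curve. This is a minimal illustration under assumed choices (a running-mean centroid update, an acceptance radius `r`, a fixed target direction, and an attacker who greedily places points on the acceptance boundary), not the paper's exact model or bound.

```python
import numpy as np

# Sketch of a greedy poisoning attack on online centroid anomaly detection.
# The learner keeps the running mean of all accepted points and accepts a
# point iff it lies within radius r of the current centroid. The attacker
# places each injected point on the acceptance boundary in the direction of
# its target, which maximizes the per-step centroid shift for this update.

r = 1.0                            # acceptance radius (assumed)
target_dir = np.array([1.0, 0.0])  # unit vector toward the attacker's target
centroid = np.zeros(2)             # initial (clean) centroid
n_initial = 1                      # points already averaged into the centroid
n_attack = 100                     # number of injected points

displacements = []
for i in range(1, n_attack + 1):
    x = centroid + r * target_dir              # boundary point toward target
    n = n_initial + i
    centroid = centroid + (x - centroid) / n   # online mean update
    displacements.append(np.linalg.norm(centroid))

# Each injection shifts the centroid by r/n toward the target, so the total
# displacement after n_attack points is the harmonic sum
# r * (H_{n_initial + n_attack} - H_{n_initial}), i.e. logarithmic growth.
harmonic = r * np.sum(1.0 / np.arange(n_initial + 1, n_initial + n_attack + 1))
print(f"empirical displacement: {displacements[-1]:.4f}")
print(f"harmonic-sum value:     {harmonic:.4f}")
```

Because each injected point can move the running mean by at most r/n, the attacker's cumulative displacement grows only logarithmically in the number of attack points; quantifying this kind of upper bound on adversarial impact is what the proposed framework formalizes.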