ABSTRACT
Learning for security applications is an emerging field in which adaptive approaches are needed but are complicated by changing adversarial behavior. Traditional learning approaches assume benign errors in the data and may therefore be vulnerable to adversarial errors. In this paper, we incorporate the notion of adversarial corruption directly into the learning framework and derive a new criterion for classifier robustness to adversarial contamination.
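The contrast between benign and adversarial errors can be made concrete with a small experiment. The sketch below is purely illustrative and is not the paper's criterion: it trains a least-squares linear classifier on two Gaussian classes, then lets a crude adversary flip the training labels of the most confidently classified points and measures how clean test accuracy degrades as the contamination fraction grows. All names and the adversary's flipping rule are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Two Gaussian classes centered at (-1,-1) and (+1,+1) in 2-D."""
    X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(+1.0, 1.0, (n, 2))])
    y = np.hstack([-np.ones(n), np.ones(n)])
    return X, y

def with_bias(X):
    return np.hstack([X, np.ones((len(X), 1))])

def fit_linear(X, y):
    # Least-squares linear classifier (regress labels in {-1,+1} onto features).
    w, *_ = np.linalg.lstsq(with_bias(X), y, rcond=None)
    return w

def accuracy(w, X, y):
    return float(np.mean(np.sign(with_bias(X) @ w) == y))

X_tr, y_tr = make_data(200)
X_te, y_te = make_data(200)

results = {}
for frac in [0.0, 0.1, 0.3]:
    y_adv = y_tr.copy()
    k = int(frac * len(y_adv))
    if k:
        # Crude adversary: flip the labels of the k training points the
        # clean classifier is most confident about (largest margin y * f(x)).
        w0 = fit_linear(X_tr, y_tr)
        margins = y_tr * (with_bias(X_tr) @ w0)
        idx = np.argsort(-margins)[:k]
        y_adv[idx] = -y_adv[idx]
    w = fit_linear(X_tr, y_adv)
    results[frac] = accuracy(w, X_te, y_te)
    print(f"contamination={frac:.1f}  clean test accuracy={results[frac]:.3f}")
```

Because least squares has unbounded loss, a small fraction of adversarially flipped high-margin points can pull the decision boundary far from the benign solution; robust criteria of the kind the paper develops aim to bound exactly this kind of degradation.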
Index Terms
- Understanding the risk factors of learning in adversarial environments