DOI: 10.1145/2046684.2046698
Short paper

Understanding the risk factors of learning in adversarial environments

Published: 21 October 2011

ABSTRACT

Learning for security applications is an emerging field where adaptive approaches are needed but are complicated by changing adversarial behavior. Traditional approaches to learning assume benign errors in the data and thus may be vulnerable to adversarial errors. In this paper, we incorporate the notion of adversarial corruption directly into the learning framework and derive a new criterion for classifier robustness to adversarial contamination.
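The distinction the abstract draws between benign and adversarial errors can be illustrated with a minimal sketch. The following example is not the paper's formulation; it is an illustrative assumption-laden toy, contrasting random mislabeled points with attacker-chosen extreme points for a simple nearest-means (midpoint-threshold) classifier on 1-D data. All data values and the attack strategy are invented for illustration.

```python
# Toy contrast between benign label noise and adversarial contamination.
# Classifier: threshold at the midpoint of the two class means (1-D data).
# NOTE: data values and the attack are illustrative assumptions, not the
# paper's actual model or criterion.

def fit_threshold(xs, ys):
    """Midpoint-of-class-means decision threshold."""
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == -1]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, xs, ys):
    preds = [1 if x > threshold else -1 for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

# Clean training data: positives near +1, negatives near -1.
train_x = [0.8, 0.9, 1.0, 1.1, 1.2, -0.8, -0.9, -1.0, -1.1, -1.2]
train_y = [1] * 5 + [-1] * 5

# Same contamination budget (3 mislabeled points), two threat models:
benign_x = train_x + [0.5, 0.5, 0.5]          # random-looking mislabeled points
adversarial_x = train_x + [-3.0, -3.0, -3.0]  # attacker plants extreme points
contam_y = train_y + [1, 1, 1]                # all three carry the +1 label

test_x = [-1.5, -1.0, -0.6, -0.3, 0.3, 0.6, 1.0, 1.5]
test_y = [-1, -1, -1, -1, 1, 1, 1, 1]

benign_acc = accuracy(fit_threshold(benign_x, contam_y), test_x, test_y)
adv_acc = accuracy(fit_threshold(adversarial_x, contam_y), test_x, test_y)
print(f"benign noise: {benign_acc:.2f}, adversarial: {adv_acc:.2f}")
# → benign noise: 1.00, adversarial: 0.75
```

With an equal contamination budget, the attacker's extreme points drag the positive class mean far enough to shift the decision threshold and misclassify test points, while the same number of benign mislabeled points barely moves it. This is the kind of asymmetry that motivates analyzing adversarial, rather than benign, error models.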


Published in

AISec '11: Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence
October 2011, 124 pages
ISBN: 9781450310031
DOI: 10.1145/2046684
Copyright © 2011 ACM


        Publisher

        Association for Computing Machinery

        New York, NY, United States




        Acceptance Rates

Overall acceptance rate: 94 of 231 submissions, 41%

