DOI: 10.1145/1654988.1654990
research-article

A framework for quantitative security analysis of machine learning

Published: 09 November 2009

ABSTRACT

We propose a framework for quantitative security analysis of machine learning methods. The key parts of this framework are the formal specification of a deployed learning model and the attacker's constraints, the computation of an optimal attack, and the derivation of an upper bound on adversarial impact. As an example, we apply the framework to one specific learning scenario, online centroid anomaly detection, and experimentally verify the tightness of the obtained theoretical bounds.



Published in

AISec '09: Proceedings of the 2nd ACM Workshop on Security and Artificial Intelligence
November 2009, 72 pages
ISBN: 9781605587813
DOI: 10.1145/1654988

            Copyright © 2009 ACM

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

            Publisher

            Association for Computing Machinery

            New York, NY, United States




Acceptance Rates

Overall acceptance rate: 94 of 231 submissions, 41%

