DOI: 10.1145/3088525.3088673
Research article

Verified perceptron convergence theorem

Published: 18 June 2017

ABSTRACT

Frank Rosenblatt invented the perceptron algorithm in 1957 as part of an early attempt to build "brain models", artificial neural networks. In this paper, we apply tools from symbolic logic such as dependent type theory as implemented in Coq to build, and prove convergence of, one-layer perceptrons (specifically, we show that our Coq implementation converges to a binary classifier when trained on linearly separable datasets).
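
As a concrete illustration of the algorithm being verified, the following is a minimal sketch, not taken from the paper, of a fuel-bounded perceptron trainer written in Coq's Gallina, with integer-valued features and labels in {-1, +1}; the names Zvec, dot, epoch, and train are illustrative choices of this sketch rather than the authors' identifiers.

    Require Import List ZArith.
    Import ListNotations.
    Open Scope Z_scope.

    Definition Zvec := list Z.

    (* Inner product of two equal-length vectors. *)
    Fixpoint dot (u v : Zvec) : Z :=
      match u, v with
      | x :: u', y :: v' => x * y + dot u' v'
      | _, _ => 0
      end.

    (* Componentwise sum (truncating to the shorter vector). *)
    Definition add (u v : Zvec) : Zvec :=
      map (fun p => fst p + snd p) (combine u v).

    Definition scale (c : Z) (v : Zvec) : Zvec := map (Z.mul c) v.

    (* One pass over the data: w is updated on each misclassified example;
       the boolean records whether the pass was mistake-free. *)
    Fixpoint epoch (w : Zvec) (data : list (Zvec * Z)) : Zvec * bool :=
      match data with
      | [] => (w, true)
      | (x, l) :: rest =>
          if Z.gtb (l * dot w x) 0
          then epoch w rest
          else let '(w', _) := epoch (add w (scale l x)) rest in (w', false)
      end.

    (* Run at most [fuel] passes over the training set. *)
    Fixpoint train (fuel : nat) (w : Zvec) (data : list (Zvec * Z)) : option Zvec :=
      match fuel with
      | O => None
      | S fuel' =>
          let '(w', done) := epoch w data in
          if done then Some w' else train fuel' w' data
      end.

The explicit fuel argument is one standard way to make the otherwise unbounded training loop structurally recursive; a convergence theorem then amounts to showing that some finite amount of fuel always suffices when the data are linearly separable.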

Our perceptron and proof are extensible, which we demonstrate by adapting our convergence proof to the averaged perceptron, a common variant of the basic perceptron algorithm. We perform experiments to evaluate the performance of our Coq perceptron against both an arbitrary-precision C++ implementation and a hybrid implementation in which separators learned in C++ are certified in Coq. We find that by carefully optimizing the extraction of our Coq perceptron, we can meet -- and occasionally exceed -- the performance of the arbitrary-precision C++ implementation. Our hybrid Coq certifier demonstrates an architecture for building high-assurance machine-learning systems that reuse existing codebases.
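
The hybrid certifier architecture can likewise be sketched, again hypothetically rather than as the paper's code: an external learner (such as the C++ implementation) proposes a weight vector, and a small Coq function only checks that it strictly separates the training set; the names certify and certify_demo are assumptions of this sketch.

    Require Import List ZArith.
    Import ListNotations.
    Open Scope Z_scope.

    Definition Zvec := list Z.

    Fixpoint dot (u v : Zvec) : Z :=
      match u, v with
      | x :: u', y :: v' => x * y + dot u' v'
      | _, _ => 0
      end.

    (* Accept a candidate separator w iff l * <w, x> > 0 for every labelled
       example (x, l).  Checking is linear in the data, independently of how
       long the external learner took to find w. *)
    Definition certify (w : Zvec) (data : list (Zvec * Z)) : bool :=
      forallb (fun xl => Z.gtb (snd xl * dot w (fst xl)) 0) data.

    (* Tiny usage example on a trivially separable dataset. *)
    Example certify_demo :
      certify [1; 1] [([2; 1], 1); ([-1; -2], -1)] = true.
    Proof. reflexivity. Qed.

Only the checker needs to be verified, not the learner; a soundness lemma for certify (if it returns true, then w classifies every example correctly) follows by a routine induction over data.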


Published in
MAPL 2017: Proceedings of the 1st ACM SIGPLAN International Workshop on Machine Learning and Programming Languages
June 2017, 50 pages
ISBN: 9781450350716
DOI: 10.1145/3088525
Copyright © 2017 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

