DOI: 10.1145/167088.167200
STOC Conference Proceedings · Article · Free Access

Efficient noise-tolerant learning from statistical queries

Published: 01 June 1993

References

  1. Dana Angluin and Philip Laird. Learning from noisy examples. Machine Learning, 2(4):343-370, 1988.
  2. Eric B. Baum and Yuh-Dauh Lyuu. The transition to perfect generalization in perceptrons. Neural Computation, 3:386-401, 1991.
  3. Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the Association for Computing Machinery, 36(4):929-965, October 1989.
  4. A. Ehrenfeucht, D. Haussler, M. Kearns, and L. Valiant. A general lower bound on the number of examples needed for learning. In First Workshop on Computational Learning Theory, pages 139-154, Cambridge, Mass., August 1988. Morgan Kaufmann.
  5. Merrick L. Furst, Jeffrey C. Jackson, and Sean W. Smith. Improved learning of AC0 functions. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory, pages 317-325, August 1991.
  6. E. Gardner and B. Derrida. Three unfinished works on the optimal storage capacity of networks. J. Phys. A: Math. Gen., 22:1983-1994, 1989.
  7. Thomas Hancock and Yishay Mansour. Learning monotone kμ DNF formulas on product distributions. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory, pages 179-183, August 1991.
  8. David Haussler. Quantifying inductive bias: AI learning algorithms and Valiant's learning framework. Artificial Intelligence, 36:177-221, 1988.
  9. David Helmbold, Robert Sloan, and Manfred K. Warmuth. Learning integer lattices. SIAM Journal on Computing, 21(2):240-266, 1992.
  10. Michael Kearns. The Computational Complexity of Machine Learning. The MIT Press, 1990.
  11. Michael Kearns and Ming Li. Learning in the presence of malicious errors. In Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing, pages 267-280, May 1988. To appear, SIAM Journal on Computing.
  12. Michael Kearns, Ming Li, Leonard Pitt, and Leslie Valiant. On the learnability of Boolean formulae. In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, pages 285-295, May 1987.
  13. Michael Kearns and Leonard Pitt. A polynomial-time algorithm for learning k-variable pattern languages from examples. In Proceedings of the Second Annual Workshop on Computational Learning Theory, pages 57-71, July 1989.
  14. Michael J. Kearns and Robert E. Schapire. Efficient distribution-free learning of probabilistic concepts. In 31st Annual Symposium on Foundations of Computer Science, pages 382-391, October 1990. To appear, Journal of Computer and System Sciences.
  15. Philip D. Laird. Learning from Good and Bad Data. Kluwer International Series in Engineering and Computer Science. Kluwer Academic Publishers, Boston, 1988.
  16. Nathan Linial, Yishay Mansour, and Noam Nisan. Constant depth circuits, Fourier transform, and learnability. In 30th Annual Symposium on Foundations of Computer Science, pages 574-579, October 1989.
  17. Marvin Minsky and Seymour Papert. Perceptrons: An Introduction to Computational Geometry (Expanded Edition). The MIT Press, 1988.
  18. Leonard Pitt and Leslie G. Valiant. Computational limitations on learning from examples. Journal of the Association for Computing Machinery, 35(4):965-984, October 1988.
  19. Ronald L. Rivest. Learning decision lists. Machine Learning, 2(3):229-246, 1987.
  20. Yasubumi Sakakibara. Algorithmic Learning of Formal Languages and Decision Trees. PhD thesis, Tokyo Institute of Technology, October 1991. Research Report IIAS-RR-91-22E, International Institute for Advanced Study of Social Information Science, Fujitsu Laboratories, Ltd.
  21. Robert E. Schapire. Learning probabilistic read-once formulas on product distributions. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory, August 1991. To appear, Machine Learning.
  22. Robert Elias Schapire. The Design and Analysis of Efficient Learning Algorithms. The MIT Press, 1992.
  23. H. S. Seung, H. Sompolinsky, and N. Tishby. Statistical mechanics of learning from examples. Physical Review A, 45(8):6056-6091, April 1992.
  24. Robert H. Sloan. Types of noise in data for concept learning. In Proceedings of the 1988 Workshop on Computational Learning Theory, pages 91-96, August 1988.
  25. L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, November 1984.
  26. L. G. Valiant. Learning disjunctions of conjunctions. In Proceedings of the 9th International Joint Conference on Artificial Intelligence, pages 560-566, August 1985.
  27. V. N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264-280, 1971.


Published in

STOC '93: Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing
June 1993 · 812 pages
ISBN: 0897915917
DOI: 10.1145/167088

Copyright © 1993 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance Rates

Overall acceptance rate: 1,469 of 4,586 submissions, 32%
