- 1. Dana Angluin and Philip Laird. Learning from noisy examples. Machine Learning, 2(4):343-370, 1988.
- 2. Eric B. Baum and Yuh-Dauh Lyuu. The transition to perfect generalization in perceptrons. Neural Computation, 3:386-401, 1991.
- 3. Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the Association for Computing Machinery, 36(4):929-965, October 1989.
- 4. A. Ehrenfeucht, D. Haussler, M. Kearns, and L. Valiant. A general lower bound on the number of examples needed for learning. In First Workshop on Computational Learning Theory, pages 139-154, Cambridge, Mass., August 1988. Morgan Kaufmann.
- 5. Merrick L. Furst, Jeffrey C. Jackson, and Sean W. Smith. Improved learning of AC0 functions. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory, pages 317-325, August 1991.
- 6. E. Gardner and B. Derrida. Three unfinished works on the optimal storage capacity of networks. J. Phys. A: Math. Gen., 22:1983-1994, 1989.
- 7. Thomas Hancock and Yishay Mansour. Learning monotone kμ DNF formulas on product distributions. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory, pages 179-183, August 1991.
- 8. David Haussler. Quantifying inductive bias: AI learning algorithms and Valiant's learning framework. Artificial Intelligence, 36:177-221, 1988.
- 9. David Helmbold, Robert Sloan, and Manfred K. Warmuth. Learning integer lattices. SIAM Journal on Computing, 21(2):240-266, 1992.
- 10. Michael Kearns. The Computational Complexity of Machine Learning. The MIT Press, 1990.
- 11. Michael Kearns and Ming Li. Learning in the presence of malicious errors. In Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing, pages 267-280, May 1988. To appear, SIAM Journal on Computing.
- 12. Michael Kearns, Ming Li, Leonard Pitt, and Leslie Valiant. On the learnability of Boolean formulae. In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, pages 285-295, May 1987.
- 13. Michael Kearns and Leonard Pitt. A polynomial-time algorithm for learning k-variable pattern languages from examples. In Proceedings of the Second Annual Workshop on Computational Learning Theory, pages 57-71, July 1989.
- 14. Michael J. Kearns and Robert E. Schapire. Efficient distribution-free learning of probabilistic concepts. In 31st Annual Symposium on Foundations of Computer Science, pages 382-391, October 1990. To appear, Journal of Computer and System Sciences.
- 15. Philip D. Laird. Learning from Good and Bad Data. Kluwer International Series in Engineering and Computer Science. Kluwer Academic Publishers, Boston, 1988.
- 16. Nathan Linial, Yishay Mansour, and Noam Nisan. Constant depth circuits, Fourier transform, and learnability. In 30th Annual Symposium on Foundations of Computer Science, pages 574-579, October 1989.
- 17. Marvin Minsky and Seymour Papert. Perceptrons: An Introduction to Computational Geometry (Expanded Edition). The MIT Press, 1988.
- 18. Leonard Pitt and Leslie G. Valiant. Computational limitations on learning from examples. Journal of the Association for Computing Machinery, 35(4):965-984, October 1988.
- 19. Ronald L. Rivest. Learning decision lists. Machine Learning, 2(3):229-246, 1987.
- 20. Yasubumi Sakakibara. Algorithmic Learning of Formal Languages and Decision Trees. PhD thesis, Tokyo Institute of Technology, October 1991. Research Report IIAS-RR-91-22E, International Institute for Advanced Study of Social Information Science, Fujitsu Laboratories, Ltd.
- 21. Robert E. Schapire. Learning probabilistic read-once formulas on product distributions. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory, August 1991. To appear, Machine Learning.
- 22. Robert Elias Schapire. The Design and Analysis of Efficient Learning Algorithms. The MIT Press, 1992.
- 23. H. S. Seung, H. Sompolinsky, and N. Tishby. Statistical mechanics of learning from examples. Physical Review A, 45(8):6056-6091, April 1992.
- 24. Robert H. Sloan. Types of noise in data for concept learning. In Proceedings of the 1988 Workshop on Computational Learning Theory, pages 91-96, August 1988.
- 25. L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, November 1984.
- 26. L. G. Valiant. Learning disjunctions of conjunctions. In Proceedings of the 9th International Joint Conference on Artificial Intelligence, pages 560-566, August 1985.
- 27. V. N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, XVI(2):264-280, 1971.