- 1. T. M. Cover. Behaviour of sequential predictors of binary sequences. In Transactions of the Fourth Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, pages 263-272. Publishing House of the Czechoslovak Academy of Sciences, 1965.
- 2. T. M. Cover and A. Shenhar. Compound Bayes predictors for sequences with apparent Markov structure. IEEE Transactions on Systems, Man and Cybernetics, SMC-7(6):421-424, June 1977.
- 3. A. P. Dawid. Prequential data analysis. Current Issues in Statistical Inference, to appear.
- 4. A. P. Dawid. Statistical theory: The prequential approach. Journal of the Royal Statistical Society, Series A, pages 278-292, 1984.
- 5. A. P. Dawid. Prequential analysis, stochastic complexity and Bayesian inference. Bayesian Statistics 4, to appear.
- 6. A. DeSantis, G. Markowsky, and M. N. Wegman. Learning probabilistic prediction functions. In Proceedings of the 1988 Workshop on Computational Learning Theory, pages 312-328. Morgan Kaufmann, 1988.
- 7. M. Feder, N. Merhav, and M. Gutman. Universal prediction of individual sequences. IEEE Transactions on Information Theory, 38:1258-1270, 1992.
- 8. A. Fiat, D. Foster, H. Karloff, Y. Rabani, Y. Ravid, and S. Vishwanathan. Competitive algorithms for layered graph traversal. In 32nd Annual Symposium on Foundations of Computer Science, pages 288-297, 1991.
- 9. A. Fiat, R. Karp, M. Luby, L. McGeoch, D. Sleator, and N. Young. Competitive paging algorithms. Journal of Algorithms, 12:685-699, 1991.
- 10. A. Fiat, Y. Rabani, and Y. Ravid. Competitive k-server algorithms. In 31st Annual Symposium on Foundations of Computer Science, pages 454-463, 1990.
- 11. J. Galambos. The Asymptotic Theory of Extreme Order Statistics. R. E. Krieger, second edition, 1987.
- 12. J. Hannan. Approximation to Bayes risk in repeated play. In Contributions to the Theory of Games, volume 3, pages 97-139. Princeton University Press, 1957.
- 13. D. Haussler and A. Barron. How well do Bayes methods work for on-line prediction of {+1, -1} values? In Proceedings of the Third NEC Research Symposium on Computational Learning and Cognition. SIAM, to appear.
- 14. D. Haussler, M. Kearns, N. Littlestone, and M. K. Warmuth. Equivalence of models for polynomial learnability. Information and Computation, 95:129-161, 1991.
- 15. D. Haussler, M. Kearns, and R. Schapire. Bounds on the sample complexity of Bayesian learning using information theory and the VC dimension. Machine Learning, to appear.
- 16. D. Haussler, N. Littlestone, and M. Warmuth. Predicting {0,1}-functions on randomly drawn points. Technical Report UCSC-CRL-90-54, University of California Santa Cruz, Computer Research Laboratory, Dec. 1990. To appear, Information and Computation.
- 17. D. Helmbold and M. K. Warmuth. On weak learning. In Proceedings of the Third NEC Research Symposium on Computational Learning and Cognition. SIAM, to appear.
- 18. D. P. Helmbold and M. K. Warmuth. Some weak learning results. In Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory, pages 399-412, 1992.
- 19. M. J. Kearns and R. E. Schapire. Efficient distribution-free learning of probabilistic concepts. In 31st Annual Symposium on Foundations of Computer Science, pages 382-391, 1990.
- 20. M. J. Kearns, R. E. Schapire, and L. M. Sellie. Toward efficient agnostic learning. In Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory, pages 341-352, 1992.
- 21. N. Littlestone. From on-line to batch learning. In Proceedings of the Second Annual Workshop on Computational Learning Theory, pages 269-284. Morgan Kaufmann, 1989.
- 22. N. Littlestone, P. M. Long, and M. K. Warmuth. On-line learning of linear functions. In Proceedings of the Twenty-Third Annual ACM Symposium on Theory of Computing, pages 465-475, 1991.
- 23. N. Littlestone and M. Warmuth. The weighted majority algorithm. In 30th Annual IEEE Symposium on Foundations of Computer Science, pages 256-261, 1989. Long version: UCSC tech. rep. UCSC-CRL-91-28.
- 24. N. Merhav and M. Feder. Universal sequential learning and decision from individual data sequences. In Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory, pages 413-427, 1992.
- 25. J. Rissanen. Modeling by shortest data description. Automatica, 14:465-471, 1978.
- 26. J. Rissanen. Stochastic complexity and modeling. The Annals of Statistics, 14(3):1080-1100, 1986.
- 27. J. Rissanen and G. G. Langdon, Jr. Universal modeling and coding. IEEE Transactions on Information Theory, IT-27(1):12-23, Jan. 1981.
- 28. H. S. Seung, H. Sompolinsky, and N. Tishby. Statistical mechanics of learning from examples. Physical Review A, 45(8):6056-6091, 1992.
- 29. H. Sompolinsky, N. Tishby, and H. Seung. Learning from examples in large neural networks. Physical Review Letters, 65:1683-1686, 1990.
- 30. M. Talagrand. Sharper bounds for Gaussian and empirical processes. Annals of Probability, to appear.
- 31. L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.
- 32. V. Vapnik. Principles of risk minimization for learning theory. In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4. Morgan Kaufmann, 1992.
- 33. V. N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, 1982.
- 34. V. G. Vovk. Aggregating strategies. In Proceedings of the Third Annual Workshop on Computational Learning Theory, pages 371-383. Morgan Kaufmann, 1990.
- 35. V. G. Vovk. Prequential probability theory. Unpublished manuscript, 1990.
- 36. V. G. Vovk. Universal forecasting algorithms. Information and Computation, 96(2):245-277, Feb. 1992.
- 37. K. Yamanishi. A loss bound model for on-line stochastic prediction strategies. In Proceedings of the Fourth Annual Workshop on Computational Learning Theory. Morgan Kaufmann, 1991.