2010 | Original Paper | Book Chapter
Learning without Coding
Authors: Samuel E. Moelius III, Sandra Zilles
Published in: Algorithmic Learning Theory
Publisher: Springer Berlin Heidelberg
Iterative learning is a model of language learning from positive data, due to Wiehagen. When compared to a learner in Gold's original model of language learning from positive data, an iterative learner can be thought of as memory-limited. However, an iterative learner can memorize some input elements by coding them into the syntax of its hypotheses. A main concern of this paper is: to what extent are such coding tricks necessary?
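To make the notion of a "coding trick" concrete, here is a minimal toy sketch of an iterative learner whose only memory is its current hypothesis, and which therefore records the elements it has seen directly in the syntax of that hypothesis. The function names and the representation of hypotheses as finite sets are illustrative assumptions, not the paper's formalism.

```python
# Toy sketch (illustrative only): an iterative learner updates its
# conjecture from (current hypothesis, next datum) alone. The "coding
# trick" is that the hypothesis itself -- here a frozenset -- encodes
# exactly which input elements have been observed so far.

def iterative_learner(hypothesis, datum):
    """One update step: (hypothesis, datum) -> new hypothesis.

    `datum is None` stands in for a pause ('#') in the input text;
    on a pause the learner leaves its conjecture unchanged.
    """
    if datum is None:
        return hypothesis
    return hypothesis | {datum}


def run(text):
    """Feed a finite sequence of positive data to the learner."""
    h = frozenset()  # initial (empty) conjecture
    for x in text:
        h = iterative_learner(h, x)
    return h
```

For example, `run([1, 2, 2, 3])` yields the conjecture `frozenset({1, 2, 3})`: although the learner never stores past data separately, it has effectively memorized every element seen by folding it into its hypothesis. Requiring a 1-1 (redundancy-free) hypothesis space, as the paper does, restricts this kind of encoding.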
One means of preventing some such coding tricks is to require that the hypothesis space used be free of redundancy, i.e., that it be 1-1. By extending a result of Lange & Zeugmann, we show that many interesting and non-trivial classes of languages can be iteratively identified in this manner. On the other hand, we show that there exists a class of languages that cannot be iteratively identified using any 1-1 effective numbering as the hypothesis space.
We also consider an iterative-like learning model in which the computational component of the learner is modeled as an enumeration operator, as opposed to a partial computable function. In this new model, there are no hypotheses, and, thus, no syntax in which the learner can encode what elements it has or has not yet seen. We show that there exists a class of languages that can be identified under this new model, but that cannot be iteratively identified. On the other hand, we show that there exists a class of languages that cannot be identified under this new model, but that can be iteratively identified using a Friedberg numbering as the hypothesis space.
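To contrast with the iterative learner above, here is a toy sketch in the spirit of the enumeration-operator model: the learner never outputs a hypothesis (an index or grammar) at all, but simply enumerates the elements it commits to as belonging to the target language. With no hypothesis syntax, there is no conjecture in which a record of past inputs could be encoded. All names here are illustrative assumptions, not the paper's formal definitions.

```python
# Toy sketch (illustrative only): an operator-style learner maps an
# input enumeration to an output enumeration. Like an enumeration
# operator, it is monotone -- once an element is emitted into the
# output set, it is never retracted.

def operator_learner(text):
    """Enumerate elements of the conjectured language from an input
    enumeration (any iterable; None stands in for a pause '#').

    The `emitted` set is bookkeeping for this simulation of the output
    enumeration, not a hypothesis the learner publishes.
    """
    emitted = set()
    for x in text:
        if x is not None and x not in emitted:
            emitted.add(x)
            yield x
```

For example, `list(operator_learner([1, 2, None, 1, 3]))` enumerates `[1, 2, 3]`. The output is a set under construction rather than a sequence of conjectures, which is the sense in which this model removes the syntax that coding tricks exploit.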