2013 | OriginalPaper | Chapter
Identifiability of Model Properties in Over-Parameterized Model Classes
Author: Manfred Jaeger
Published in: Machine Learning and Knowledge Discovery in Databases
Publisher: Springer Berlin Heidelberg
Classical learning theory is based on a tight linkage between hypothesis space (a class of functions on a domain X), data space (function-value examples (x, f(x))), and the space of queries for the learned model (predicting function values for new examples x). However, in many learning scenarios the three-way association between hypotheses, data, and queries can be much looser. Model classes can be over-parameterized, i.e., different hypotheses may be equivalent with respect to the data observations. Queries may relate to model properties that do not directly correspond to the observations in the data. In this paper we make some initial steps to extend and adapt basic concepts of computational learnability and statistical identifiability to provide a foundation for investigating learnability in such broader contexts. We exemplify the use of the framework in three different applications: the identification of temporal logic properties of probabilistic automata learned from sequence data, the identification of causal dependencies in probabilistic graphical models, and the transfer of probabilistic relational models to new domains.
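The over-parameterization phenomenon described above can be illustrated with a small sketch (hypothetical; not taken from the paper). In the spirit of the probabilistic-automata application, the two hidden-Markov-model parameterizations below are related by a permutation of the hidden states: they induce exactly the same distribution over observation sequences, so no amount of sequence data can distinguish them, yet an internal property such as "the most likely initial state emits symbol 1 with high probability" refers to labels the data cannot pin down.

```python
# Hypothetical illustration: two HMM parameterizations that are
# observationally equivalent (state-relabeling), assuming 2 hidden
# states and a binary observation alphabet {0, 1}.

def seq_prob(init, trans, emit, seq):
    """Probability of an observation sequence via the forward algorithm."""
    n = len(init)
    alpha = [init[s] * emit[s][seq[0]] for s in range(n)]
    for o in seq[1:]:
        alpha = [sum(alpha[s] * trans[s][t] for s in range(n)) * emit[t][o]
                 for t in range(n)]
    return sum(alpha)

# Model A
initA  = [0.3, 0.7]
transA = [[0.9, 0.1], [0.2, 0.8]]
emitA  = [[0.6, 0.4], [0.1, 0.9]]

# Model B: the same model with the two hidden states swapped
initB  = [0.7, 0.3]
transB = [[0.8, 0.2], [0.1, 0.9]]
emitB  = [[0.1, 0.9], [0.6, 0.4]]

# Every observation sequence gets identical probability under both models,
# so the two hypotheses are equivalent with respect to the data.
for seq in [(0,), (1, 0, 1), (0, 0, 1, 1, 0)]:
    pA = seq_prob(initA, transA, emitA, seq)
    pB = seq_prob(initB, transB, emitB, seq)
    assert abs(pA - pB) < 1e-12
```

A query about the observable sequence distribution is answered identically by both hypotheses, whereas a query about a specific hidden state (here, which state is "state 0") is not identifiable from the data alone.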