2003 | Original Paper | Book Chapter
Which First-Order Logic Clauses Can Be Learned Using Genetic Algorithms?
Author: Flaviu Adrian Mărginean
Published in: Inductive Logic Programming
Publisher: Springer Berlin Heidelberg
Included in: Professional Book Archive
In this paper we present and prove both negative and positive theoretical results concerning the representation and evaluation of first-order logic clauses using genetic algorithms. Over the last few years, several approaches have been proposed that aim to combine genetic and evolutionary computation (EC) with inductive logic programming (ILP). The underlying rationale is that evolutionary algorithms, such as genetic algorithms, might mitigate the combinatorial explosions generated by the inductive learning of rich representations, such as those used in first-order logic. In particular, the genetic algorithms approach to ILP presented by Tamaddoni-Nezhad and Muggleton has attracted the attention of both the EC and ILP communities in recent years. Unfortunately, a series of systematic and fundamental theoretical errors renders their framework moot. This paper critically examines the fallacious claims in the mentioned approach. It is shown that, far from restoring completeness to the learner Progol's search of the subsumption lattice, their proposed binary representation is both incomplete and noncompact. It is also shown that their fast evaluation of clauses based on bitwise operations rests on false theorems and is therefore invalid. As an alternative to Tamaddoni-Nezhad and Muggleton's flawed genetic algorithms attempt, we propose a framework based on provably sound and complete binary refinement.
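To make the kind of representation under discussion concrete, the following is a minimal illustrative sketch of the general idea behind bitstring encodings of clauses in genetic-algorithm approaches to ILP: each individual is a bitstring selecting a subset of the body literals of a most-specific ("bottom") clause. All names and the example literals here are hypothetical, not taken from the critiqued paper; note that such an encoding can by construction only reach clauses built from subsets of the bottom clause's literals, which is the kind of limitation the completeness analysis above is concerned with.

```python
# Hypothetical sketch of a bitstring encoding of clauses, assuming a fixed
# bottom clause for a target predicate target(X). One bit per body literal:
# bit i set means literal i of the bottom clause appears in the candidate.
from itertools import compress

# Assumed body literals of an illustrative bottom clause for target(X):
bottom_body = ["parent(X,Y)", "parent(Y,Z)", "female(X)", "male(Y)"]

def decode(bits):
    """Map a bitstring (one bit per body literal) to a candidate clause."""
    chosen = list(compress(bottom_body, bits))
    if not chosen:
        return "target(X)."  # empty body: the most general clause
    return "target(X) :- " + ", ".join(chosen)

# Each individual in the GA population is one such bitstring:
print(decode([1, 0, 1, 0]))  # -> target(X) :- parent(X,Y), female(X)
print(decode([0, 0, 0, 0]))  # -> target(X).
```

Standard bitwise genetic operators (crossover, mutation) then act directly on these bitstrings; whether the resulting search is complete and whether fitness can be evaluated by bitwise operations alone are exactly the questions the paper answers in the negative for the Tamaddoni-Nezhad and Muggleton scheme.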