2005 | Original Paper | Book Chapter
On the Relation Between Answer Set and SAT Procedures (or, Between cmodels and smodels)
Authors: Enrico Giunchiglia, Marco Maratea
Published in: Logic Programming
Publisher: Springer Berlin Heidelberg
Answer Set Programming (ASP) is a declarative paradigm for solving search problems. State-of-the-art systems for ASP include smodels, dlv, cmodels, and assat.
In this paper, our goal is to study the computational properties of such systems from both a theoretical and an experimental point of view. From the theoretical point of view, we start our analysis with cmodels and smodels. We show that, though these two systems are apparently different, they are equivalent on a significant class of programs, called tight. By equivalent, we mean that they explore search trees with the same branching nodes (assuming, of course, the same branching heuristic). Given this result, and given that the cmodels search engine is based on the Davis-Logemann-Loveland procedure (dll) for propositional satisfiability (SAT), we are able to establish that many of the properties holding for dll also hold for cmodels and thus for smodels. On the other hand, we also show that there exist classes of non-tight programs which are exponentially hard for cmodels but "easy" for smodels. We also discuss how our results extend to other systems.
From the experimental point of view, we analyze which combinations of reasoning strategies work best on which problems. In particular, we extended cmodels to obtain a single platform with a variety of reasoning strategies, and conducted an extensive experimental analysis on "small" randomly generated and "large" non-randomly generated programs. On these programs, our results show that the reasoning strategies that work best on the small problems are completely different from those that work best on the large ones. These results indicate, e.g., that we can hardly expect to develop one solver with the best performance on all categories of problems. As a consequence, (i) developers should focus on specific classes of benchmarks, and (ii) benchmarking should take into account whether solvers have been designed for specific classes of programs.