Authors: Howar, Falk M.
Title: Active learning of interface programs
Language (ISO): en
Abstract: Computer systems today are no longer monolithic programs; instead, they usually comprise multiple interacting programs. With the continuous growth of these systems and their integration into systems of systems, interoperability becomes a fundamental issue. Integration of systems is more complex and occurs more frequently than ever before. One solution to this problem could be the automated model-based synthesis of mediators at runtime. This approach, however, has strong prerequisites: it requires adequate models of the systems to be connected. Many systems encountered in practice, on the other hand, do not come with models. In such cases, models have to be constructed ex post (at runtime). Furthermore, adequate models must capture the control as well as the data aspects of a system. In most protocols, for instance, data parameters (e.g., session identifiers or sequence numbers) can influence system behavior. Models of such systems can be thought of as interface programs: rather than covering only the control behavior, they describe explicitly which data values are relevant to the communication and have to be remembered and reused. This thesis addresses the problem of inferring interface programs of systems at runtime using active automata learning techniques. Active automata learning uses a test-based, counterexample-driven approach to inferring models of black-box systems. The method was originally introduced for finite automata (the popular L* algorithm). Extending active learning to interface programs requires research in three directions: first, the efficiency of active learning algorithms has to be optimized to scale in the presence of data parameters; second, techniques are needed for finding counterexamples that drive the learning process in practice; third, active learning has to be extended to models richer than Mealy machines or DFAs, capable of expressing interface programs.
The work presented in this thesis improves the state of the art in all three directions. More concretely, the contributions of this thesis are the following: first, an efficient active learning algorithm for DFAs and Mealy machines that combines the ideas of several known active learning algorithms in a non-trivial way; second, a framework for finding counterexamples in black-box scenarios, leveraging the incremental and monotonic evolution of hypothesis models characteristic of active automata learning; third, and most importantly, the technically involved extension of the partition/refinement-based approach of active learning to interface programs. The impact of extending active learning to interface programs becomes apparent already for small systems. We inferred a simple data structure (a nested stack of overall capacity 16) as an interface program in no more than 20 seconds, using fewer than 45,000 tests and only 9 counterexamples. The corresponding Mealy machine model, on the other hand, would have more than 10^9 states already for a very small finite data domain of size 4 and would require significantly more than 10^9 tests when inferred with the classic L* algorithm.
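The abstract refers to the classic L* algorithm that the thesis builds on. As a point of reference, the following is a minimal illustrative sketch of the L*-style learning loop (observation table, closedness check, counterexample-driven refinement) for DFAs, not the algorithms contributed by the thesis. The target language (words over {a, b} with an even number of a's), the Maler-Pnueli-style counterexample handling (adding all suffixes as distinguishing experiments), and the bounded-exhaustive equivalence check standing in for a real equivalence oracle are all assumptions of this example.

```python
from itertools import product

SIGMA = "ab"

def mq(word):
    # Membership oracle for the black-box system (assumed target for this
    # sketch): accept words containing an even number of 'a's.
    return word.count("a") % 2 == 0

def learn():
    S = [""]  # access prefixes (rows of the observation table)
    E = [""]  # distinguishing suffixes (columns)

    def row(s):
        return tuple(mq(s + e) for e in E)

    while True:
        # Close the table: every one-letter extension of a prefix in S
        # must have a row that already occurs among the rows of S.
        closed = False
        while not closed:
            closed = True
            rows_S = {row(s) for s in S}
            for s in list(S):
                for a in SIGMA:
                    r = row(s + a)
                    if r not in rows_S:
                        S.append(s + a)
                        rows_S.add(r)
                        closed = False

        # Build a hypothesis DFA: one state per distinct row.
        states = {}
        for s in S:
            states.setdefault(row(s), s)
        trans = {(r, a): row(s + a) for r, s in states.items() for a in SIGMA}
        init = row("")
        accept = {r for r in states if r[0]}  # E[0] == "" gives acceptance

        def run(word):
            q = init
            for a in word:
                q = trans[(q, a)]
            return q in accept

        # Equivalence oracle replaced by bounded exhaustive testing.
        cex = None
        for n in range(6):
            for w in map("".join, product(SIGMA, repeat=n)):
                if run(w) != mq(w):
                    cex = w
                    break
            if cex is not None:
                break
        if cex is None:
            return states, run

        # Maler-Pnueli-style refinement: add all suffixes of the
        # counterexample as new distinguishing experiments.
        for i in range(len(cex) + 1):
            if cex[i:] not in E:
                E.append(cex[i:])
```

For this two-state target the loop converges without ever needing a counterexample; the point of the sketch is the query/counterexample structure that the thesis extends from such finite-alphabet models to interface programs with data parameters.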
Subject Headings: automata learning
extended finite state machines
interface synthesis
register automata
register Mealy machines
regular inference
URI: http://hdl.handle.net/2003/29486
http://dx.doi.org/10.17877/DE290R-4817
Issue Date: 2012-06-26
Appears in Collections:LS 05 Programmiersysteme

Files in This Item:
File: Dissertation.pdf (Description: DNB; Size: 519.46 kB; Format: Adobe PDF)


This item is protected by original copyright


