
2012 | Book

Eternal Systems

First International Workshop, EternalS 2011, Budapest, Hungary, May 3, 2011, Revised Selected Papers

Edited by: Alessandro Moschitti, Riccardo Scandariato

Publisher: Springer Berlin Heidelberg

Book series: Communications in Computer and Information Science


About this book

This book constitutes the refereed post-proceedings of the First International Workshop on Eternal Systems, EternalS 2011, held in Budapest, Hungary, in May 2011. The workshop aimed at creating the conditions for mutual awareness and cross-fertilization among broad ICT areas such as learning systems for knowledge management and representation, software systems, networked systems and secure systems, by focusing on their shared objectives such as adaptation, evolvability and flexibility for the development of long living and versatile systems. The 6 revised full papers and 4 short papers presented were carefully reviewed and selected from 15 submissions. They are organized in topical sections on software and secure systems, machine learning for software systems, and ontology and knowledge representations.

Table of Contents

Frontmatter

Software and Secure Systems

Comparing Structure-Oriented and Behavior-Oriented Variability Modeling for Workflows
Abstract
Workflows exist in many different variants in order to adapt the behavior of systems to different circumstances and to arising user needs. Variability modeling is a way of keeping track, at the model level, of the currently supported and used workflow variants. Variability modeling approaches for workflows address two directions: structure-oriented approaches explicitly specify the workflow variants by means of linguistic constructs, while behavior-oriented approaches define the set of all valid compositions of workflow components by means of ontological annotations and temporal logic constraints. In this paper, we describe how both structure-oriented and behavior-oriented variability modeling can be captured in the eXtreme Model-Driven Design (XMDD) paradigm. We illustrate this via a concrete case (a variant-rich bioinformatics workflow realized with the jABC platform for XMDD), and we compare the two approaches in order to identify their profiles and synergies.
Anna-Lena Lamprecht, Tiziana Margaria, Ina Schaefer, Bernhard Steffen
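The behavior-oriented direction described in the abstract can be illustrated with a toy validity check. The function, the component names, and the encoding of temporal constraints as simple "a before b" pairs below are hypothetical stand-ins for the ontological annotations and temporal-logic constraints the paper actually uses:

```python
def valid_variant(workflow, constraints):
    """Behavior-oriented validity check (illustrative sketch only).

    workflow: ordered list of component names forming one variant.
    constraints: pairs (a, b) meaning 'a must occur before b' -- a toy
    stand-in for real temporal-logic constraints over compositions.
    Returns True iff every constraint is satisfied by this ordering.
    """
    pos = {step: i for i, step in enumerate(workflow)}
    return all(a in pos and b in pos and pos[a] < pos[b]
               for a, b in constraints)
```

Under this encoding, a variant is valid exactly when all constrained components are present in the required order.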
Towards Verification as a Service
Abstract
Modern software systems are highly configurable and evolve over time. At the same time, they have high demands on their correctness and trustworthiness. Formal verification techniques are a means to ensure critical system requirements, but still require a lot of computation power and manual intervention. In this paper, we argue that formal verification processes can be cast as workflows known from business process modeling. Single steps in the verification process constitute verification tasks, which can be flexibly combined into verification workflows. The verification tasks can be carried out using designated services provided by highly scalable computing platforms, such as cloud computing environments. Verification workflows share the characteristics of business processes, so well-established results and tool support from workflow modeling, management, and analysis are directly applicable. System evolution causing re-verification is supported by workflow adaptation techniques, so that previously established verification results can be reused.
Ina Schaefer, Thomas Sauer
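The reuse idea in the abstract can be sketched minimally: if a verification task's result is cached by (task, artifact) key, re-running the workflow after system evolution only re-verifies what changed. The function name, task API, and caching scheme below are hypothetical illustrations, not the paper's actual design:

```python
import hashlib

def run_verification_workflow(tasks, artifact, cache=None):
    """Run verification tasks in sequence, reusing cached verdicts.

    tasks: list of (name, fn) where fn(artifact) -> bool verdict.
    Results are cached by (task name, artifact hash), so an unchanged
    artifact is not re-verified on later runs -- a toy sketch of how
    previously established verification results survive evolution.
    """
    cache = {} if cache is None else cache
    digest = hashlib.sha256(artifact.encode()).hexdigest()
    verdicts = {}
    for name, fn in tasks:
        key = (name, digest)
        if key not in cache:
            cache[key] = fn(artifact)  # expensive step, done at most once
        verdicts[name] = cache[key]
    return verdicts
```

A second invocation with the same artifact and a shared cache performs no verification work at all.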
Requirements-Driven Runtime Reconfiguration for Security
Abstract
The boundary between development time and run time of eternal software-intensive applications is fading. In particular, these systems are characterized by the necessity to evolve their requirements continuously, even after deployment. In this paper we focus on the evolution of security requirements and introduce an integrated process to drive runtime reconfiguration from requirements changes. This process relies on two key proposals: systematic strategies to evolve architecture models according to security requirements evolution, and the combination of reflective middleware and models@runtime for runtime reconfiguration.
Koen Yskout, Olivier-Nathanael Ben David, Riccardo Scandariato, Benoit Baudry

Machine Learning for Software Systems

Large-Scale Learning with Structural Kernels for Class-Imbalanced Datasets
Abstract
Much of the success in machine learning can be attributed to the ability of learning methods to adequately represent, extract, and exploit the inherent structure present in the data of interest. Kernel methods represent a rich family of techniques that build on this principle. Domain-specific kernels are able to exploit rich structural information present in the input data to deliver state-of-the-art results in many application areas, e.g. natural language processing (NLP), bioinformatics, computer vision, and many others. The use of kernels to capture relationships in the input data has made the Support Vector Machine (SVM) algorithm the state-of-the-art tool in many application areas. Nevertheless, kernel learning remains a computationally expensive process. The contribution of this paper is to make learning with structural kernels, e.g. tree kernels, more applicable to real-world large-scale tasks. More specifically, we propose two important enhancements of the approximate cutting plane algorithm to train Support Vector Machines with structural kernels: (i) a new sampling strategy to handle the class-imbalance problem; and (ii) a parallel implementation, which makes the training scale almost linearly with the number of CPUs. We also show that the theoretical convergence bounds are preserved for the improved algorithm. The experimental evaluations demonstrate the soundness of our approach and the possibility to carry out large-scale learning with structural kernels.
Aliaksei Severyn, Alessandro Moschitti
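The class-imbalance idea in the abstract can be illustrated with a minimal sketch: instead of sampling a working set uniformly (which mirrors the imbalance), draw half from each class. The function and its interface are hypothetical and are not the paper's cutting-plane sampling strategy:

```python
import random

def balanced_sample(examples, labels, k, seed=0):
    """Draw k examples with equal weight on each class (sketch).

    Uniform sampling from an imbalanced dataset rarely picks minority
    examples; here half the sample is drawn (with replacement) from
    each class, so both classes are represented in the working set.
    Labels are assumed to be +1 / -1.
    """
    rng = random.Random(seed)
    pos = [x for x, y in zip(examples, labels) if y == 1]
    neg = [x for x, y in zip(examples, labels) if y == -1]
    half = k // 2
    sample = rng.choices(pos, k=half) + rng.choices(neg, k=k - half)
    rng.shuffle(sample)
    return sample
```

Even with a 2:8 class ratio, the sampled working set is balanced by construction.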
Combining Machine Learning and Information Retrieval Techniques for Software Clustering
Abstract
In the field of Software Maintenance, the definition of effective approaches to partition a software system into meaningful subsystems is a longstanding and relevant research topic. These techniques are very important as they can significantly support maintainers in their tasks by grouping related entities of a large system into smaller and easier-to-comprehend subsystems.
In this paper we investigate the effectiveness of combining information retrieval and machine learning techniques in order to exploit the lexical information provided by programmers for software clustering. In particular, differently from related work, we employ indexing techniques to explore the contribution of the combined use of six different dictionaries, corresponding to the six parts of the source code where programmers introduce lexical information, namely: class, attribute, method, and parameter names, comments, and source code statements. Moreover, their relevance is estimated on the basis of the project characteristics, by applying a machine learning approach based on a probabilistic model and on the Expectation-Maximization algorithm. To group source files accordingly, two clustering algorithms have been compared, namely K-Medoids and Group Average Agglomerative Clustering, and the investigation has been conducted on a dataset of 9 open source Java software systems.
Anna Corazza, Sergio Di Martino, Valerio Maggio, Giuseppe Scanniello
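One of the two algorithms mentioned in the abstract, Group Average Agglomerative Clustering, can be sketched in a few lines: repeatedly merge the pair of clusters with the highest average pairwise cosine similarity. The feature choice (rows standing in for term vectors over the six dictionaries) and the implementation below are illustrative assumptions, not the paper's setup:

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length numeric vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / (den + 1e-12)

def group_average_clustering(vectors, n_clusters):
    """Group Average Agglomerative Clustering (UPGMA-style) sketch.

    vectors: one vector per source file (e.g. weighted term counts
    drawn from identifiers and comments -- hypothetical features).
    Greedily merges the two clusters with the highest average
    pairwise cosine similarity until n_clusters remain.
    """
    clusters = [[i] for i in range(len(vectors))]
    while len(clusters) > n_clusters:
        best, pair = -1.0, (0, 1)
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                sims = [cosine(vectors[a], vectors[b])
                        for a in clusters[i] for b in clusters[j]]
                avg = sum(sims) / len(sims)
                if avg > best:
                    best, pair = avg, (i, j)
        i, j = pair
        clusters[i] = clusters[i] + clusters.pop(j)  # merge best pair
    return clusters
```

The average-linkage criterion is what distinguishes this variant from single- or complete-linkage agglomerative clustering.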
Reusing System States by Active Learning Algorithms
Abstract
In this paper we present a practical optimization to active automata learning that reduces the average execution time per query as well as the number of actual tests to be executed. Key to our optimization are two observations: (1) establishing well-defined initial conditions for a test (reset) is a very expensive operation on real systems, as it usually involves modifications to the persisted state of the system (e.g., a database); (2) in active learning, many of the (sequentially) produced queries are extensions of previous queries. We exploit these observations by using the same test run on a real system for multiple “compatible” queries. We maintain a pool of runs on the real system (system states), and execute only suffixes of queries on the real system whenever possible. The optimizations allow us to apply active learning to an industry-scale web application running on an enterprise platform: the Online Conference Service (OCS), an online service-oriented manuscript submission and review system.
Oliver Bauer, Johannes Neubauer, Bernhard Steffen, Falk Howar
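The state-pool idea from the abstract can be sketched with a prefix lookup: given the pool of already-executed runs, find the longest stored prefix of a new query and execute only the remaining suffix on the real system. The function and the pool representation below are hypothetical illustrations:

```python
def split_reusable_prefix(pool, query):
    """Find the longest previously executed prefix of `query` (sketch).

    pool: maps an already-executed action sequence (as a tuple) to a
    saved system state; the empty tuple can map to a fresh reset state.
    Returns (state, suffix): resume from `state` and run only `suffix`
    on the real system, avoiding an expensive reset when possible.
    """
    for cut in range(len(query), -1, -1):  # longest prefix first
        prefix = tuple(query[:cut])
        if prefix in pool:
            return pool[prefix], query[cut:]
    return None, query
```

A query that extends a previous one reuses its saved state entirely; an unrelated query falls back to the reset state and runs in full.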

Ontology and Knowledge Representations

Inferring Affordances Using Learning Techniques
Abstract
Interoperability among heterogeneous systems is a key challenge in today’s networked environment, which is characterised by continual change in aspects such as mobility and availability. Automated solutions then appear to be the only way to achieve interoperability with the needed level of flexibility and scalability. While necessary, the techniques used to achieve interaction, working from the highest application level to the lowest protocol level, come at a substantial computational cost, especially when checks are performed indiscriminately between systems in unrelated domains. To overcome this, we propose to use machine learning to extract the high-level functionality of a system and thus restrict the scope of detailed analysis to systems likely to be able to interoperate.
Amel Bennaceur, Richard Johansson, Alessandro Moschitti, Romina Spalazzese, Daniel Sykes, Rachid Saadi, Valérie Issarny
Predicting User Tags Using Semantic Expansion
Abstract
Manually annotating content such as Internet videos is an intellectually expensive and time-consuming process. Furthermore, keywords and community-provided tags lack consistency and present numerous irregularities. Addressing the challenge of simplifying and improving the process of tagging online videos, which is potentially not bounded to any particular domain, in this paper we present an algorithm for predicting user tags from the associated textual metadata. Our approach is centred around extracting named entities by exploiting complementary textual resources such as Wikipedia and WordNet. More specifically, to facilitate the extraction of semantically meaningful tags from a largely unstructured textual corpus, we developed a natural language processing framework based on the GATE architecture. Extending the functionality of the built-in GATE named entities, the framework integrates a bag-of-articles algorithm for effectively searching through Wikipedia articles and extracting the relevant ones. The proposed framework has been evaluated against the MediaEval 2010 Wild Wild Web dataset, which consists of a large collection of Internet videos.
Krishna Chandramouli, Tomas Piatrik, Ebroul Izquierdo
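The bag-of-articles search mentioned in the abstract can be illustrated by a toy scoring function: rank candidate Wikipedia articles by token overlap with a video's textual metadata. The function, scoring rule, and data shapes below are hypothetical simplifications, not the paper's algorithm:

```python
def rank_articles(metadata_tokens, articles):
    """Rank candidate articles against video metadata (toy sketch).

    articles: {title: set_of_tokens} for each candidate article.
    Scores each article by how many metadata tokens it shares with
    the video's textual metadata, returning titles best-first
    (ties broken alphabetically for determinism).
    """
    meta = set(metadata_tokens)
    scored = {title: len(meta & tokens) for title, tokens in articles.items()}
    return sorted(scored, key=lambda t: (-scored[t], t))
```

The top-ranked articles would then feed named-entity extraction to propose candidate tags.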
LivingKnowledge: A Platform and Testbed for Fact and Opinion Extraction from Multimodal Data
Abstract
In this paper, we describe the work we are undertaking in producing a truly multimedia platform for the analysis of facts and opinions on the web. The system integrates the analysis of multimodal data (images, text and page layout) into a distributable platform that can be built upon for various applications. We give an overview of the natural language processing tools that have been developed for extracting facts and opinions from the textual content of articles, the image analysis techniques used to extract facts and to help support the opinions found in the contextually related written information, as well as other multimodal tools developed for the analysis of online articles. We describe two applications that have been developed as part of ongoing work of the LivingKnowledge project: the News Media Analysis application for the semi-automation of the work of a media analysis company and the Future Predictor application which allows exploration of claims that are made through time.
David Dupplaw, Michael Matthews, Richard Johansson, Paul Lewis
Behaviour-Based Object Classifier for Surveillance Videos
Abstract
In this paper, a study on the effective exploitation of geometrical features for classifying surveillance objects into a set of pre-defined semantic categories is presented. The geometrical features correspond to an object’s motion, spatial location, and velocity, and their extraction is based on the object’s trajectory, which captures its temporal evolution. These geometrical features are used to build a behaviour-based classifier that assigns semantic categories to the individual blobs extracted from surveillance videos. The proposed classification framework has been evaluated against conventional object classifiers based on visual features, using the semantic categories defined on the AVSS 2007 surveillance dataset.
Virginia Fernandez Arguedas, Krishna Chandramouli, Ebroul Izquierdo
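The kind of geometrical features the abstract describes can be sketched from a blob's centroid trajectory. The function and the particular feature subset (path length, net displacement, mean speed) are illustrative assumptions, not the paper's feature set:

```python
import math

def trajectory_features(points):
    """Geometrical features from an object trajectory (sketch).

    points: list of (x, y) blob centroids over consecutive frames.
    Returns (path_length, net_displacement, mean_speed_per_frame),
    a hypothetical subset of motion/velocity features a
    behaviour-based classifier could consume.
    """
    steps = [math.dist(a, b) for a, b in zip(points, points[1:])]
    path_length = sum(steps)                       # total distance travelled
    displacement = math.dist(points[0], points[-1])  # start-to-end distance
    mean_speed = path_length / max(len(steps), 1)  # distance per frame
    return path_length, displacement, mean_speed
```

A stationary object yields near-zero speed while a vehicle yields a long, fast, mostly straight trajectory, which is what makes such features discriminative between categories.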
Backmatter
Metadata
Title
Eternal Systems
Edited by
Alessandro Moschitti
Riccardo Scandariato
Copyright year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-28033-7
Print ISBN
978-3-642-28032-0
DOI
https://doi.org/10.1007/978-3-642-28033-7