2013 | Book

Software and Data Technologies

5th International Conference, ICSOFT 2010, Athens, Greece, July 22-24, 2010. Revised Selected Papers

Editors: José Cordeiro, Maria Virvou, Boris Shishkov

Publisher: Springer Berlin Heidelberg

Book Series: Communications in Computer and Information Science


About this book

This book constitutes the thoroughly refereed post-conference proceedings of the 5th International Conference on Software and Data Technologies, ICSOFT 2010, held in Athens, Greece, in July 2010. The 15 revised full papers presented together with 1 invited lecture were carefully reviewed and selected from a total of 410 submissions in two rounds of reviewing and improvement. The papers cover a wide range of topics and are organized in four topical sections on software engineering, distributed systems, data management, and knowledge-based systems.

Table of Contents

Frontmatter

Invited Paper

Frontmatter
E-Business and Social Networks: Tapping Dynamic Niche Markets Using Language-Action and Artificial Intelligence
Abstract
“That social networks are hard to penetrate” is a reason often given for recent decreases in corporate investment in this technology. The cause is the traditional mass-market perspective: firms use social networks to connect with, and eavesdrop on, customers. The result: online ads weakly aligned with customer conversations. One alternative: adopt a Language–Action Perspective and identify the market created by a conversation in a social network. The result: online ads strongly aligned with that conversation. One measure of this alignment is the mental distance between the intent of a social network conversation and the intent of the online ads that surround it. This paper will explore the nature of “intention” in social networks, introduce its elements, show how these elements can be used to create data and software architectures, and show how to create better online ads.
David A. Marca

Part I: Software Engineering

Frontmatter
Modelling the Requirements of Rich Internet Applications in WebRe
Abstract
In recent years, several Web methodological approaches have been defined to support the systematic building of Web software. In step with constant technological advances, these methods must be continuously improved to deal with a myriad of new feasible application features, such as those involving rich interaction. Rich Internet Applications (RIA) are Web applications exhibiting interaction and interface features that are typical of desktop software, and specific methodological resources are required to deal with these characteristics. This paper presents a solution for the treatment of Web requirements in RIA development. To this aim we present WebRE+, a requirements metamodel that incorporates RIA features into the modelling repertoire. We illustrate our ideas with a meaningful example of a business intelligence application.
Esteban Robles Luna, M. J. Escalona, G. Rossi
A Survey on How Well-Known Open Source Software Projects Are Tested
Abstract
In this paper, we survey a set of 33 well-known Open Source Software (OSS) projects to understand how, as of 2010, developers perform quality assurance activities for their OSS projects. We compare our results with the data published in a previous survey by L. Zhao and S. Elbaum. Our results are in line with the previous work and confirm that OSS is usually not validated thoroughly enough, and therefore its quality is not sufficiently revealed.
To simplify the task of quality assurance, the paper suggests the use of a testing framework that can support most of the phases of a well-planned testing activity, and describes the use of Aspect-Oriented Programming (AOP) to collect and expose dynamic quality attributes of OSS projects.
Davide Tosi, Abbas Tahir
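The abstract above mentions using Aspect-Oriented Programming to collect dynamic quality attributes of OSS projects. Purely as an illustration of that idea (the paper itself targets AOP tooling, not Python), the following sketch uses a decorator as a crude stand-in for "around" advice, recording call counts and execution times for a hypothetical function; every name here is invented for the example.

import time
from collections import defaultdict
from functools import wraps

# Collected "dynamic quality attributes": per-function call counts and run time.
metrics = defaultdict(lambda: {"calls": 0, "total_seconds": 0.0})

def instrumented(func):
    # Stand-in for AOP "around" advice: wrap a function and record how often
    # and how long it runs, without touching its body.
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            entry = metrics[func.__qualname__]
            entry["calls"] += 1
            entry["total_seconds"] += time.perf_counter() - start
    return wrapper

@instrumented
def parse_config(text):  # hypothetical function under observation
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

if __name__ == "__main__":
    parse_config("debug=true\nlevel=3")
    parse_config("name=demo")
    print(dict(metrics))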
Systematic Review on Software Product Line Testing
Abstract
This article presents a systematic review of the literature on testing in Software Product Lines (SPL). The objective is to analyze the existing approaches to testing in software product lines, discussing the significant issues related to this area of knowledge and providing an up-to-date state of the art that can serve as a basis for innovative research activities. The paper also includes an analysis of how SPL research can help stimulate research in software testing.
Beatriz Pérez Lamancha, Macario Polo, Mario Piattini
A Programming Language That Combines the Benefits of Static and Dynamic Typing
Abstract
Dynamically typed languages have recently turned out to be suitable for developing specific scenarios where dynamic adaptability or rapid prototyping are important issues. However, statically typed programming languages commonly offer more opportunities for compiler optimizations and earlier type error detection. Due to the benefits of both approaches, some programming languages such as C# 4.0, Boo, Visual Basic or Objective-C provide both static and dynamic typing. We describe StaDyn, a programming language that supports both type systems within the same language. The main contribution of StaDyn is that it keeps gathering type information at compile time even over dynamically typed references, obtaining better runtime performance, earlier type error detection, and an intuitive combination of statically and dynamically typed code.
Francisco Ortin, Miguel Garcia
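StaDyn's own syntax is not reproduced in the abstract, so the sketch below only illustrates the underlying trade-off through Python's gradual typing, assuming a static checker such as mypy is run over the file: a statically typed reference lets errors be caught before execution, while a dynamically typed reference defers them to run time. This is an analogy, not StaDyn code.

from typing import Any

def length_static(items: list[str]) -> int:
    # Statically typed reference: a checker such as mypy can verify every call
    # site and report type errors before the program runs.
    return len(items)

def length_dynamic(items: Any) -> int:
    # Dynamically typed reference: the checker accepts any argument, so a bad
    # call is only detected at run time.
    return len(items)

if __name__ == "__main__":
    print(length_static(["a", "b"]))  # 2; mypy would reject length_static(42)
    try:
        length_dynamic(42)            # passes the checker, fails when executed
    except TypeError as err:
        print("runtime type error:", err)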
Main Principles on the Integration of SOC and MDD Paradigms to Business Processes: A Systematic Review
Abstract
Over the last few years, organizations have been dealing with the integration of their business processes with the software and technologies that support their realization. One challenge is to unite the visions of the business and software areas so as to design and implement business processes in a way that allows organizations to react agilely to change. New paradigms have appeared to support this vision: Business Process Management (BPM), Service-Oriented Computing (SOC) and Model-Driven Development (MDD). BPM deals with managing the business process lifecycle, from modeling, implementation and execution to evaluation, in order to find improvement opportunities. SOC bases the design and implementation of software on services, which are defined to support business processes. MDD focuses on models, allowing the definition of automatic transformations between them along with code generation for different platforms. In this article we present the main principles for the integration of these paradigms, as found in a systematic review carried out with the objective of establishing the basis for our research.
Andrea Delgado, Francisco Ruiz, Ignacio García-Rodríguez de Guzmán, Mario Piattini
A Model-Based Simulation Environment for Structured Textual Use Cases
Abstract
Although use cases are nowadays one of the most widespread techniques for the specification of system behavior, low-quality use case descriptions regularly cause serious problems in later phases of the development process. The simulation of use-case-based descriptions can be an important technique for overcoming these issues, because it enables stakeholders, especially non-technical ones, to assess the quality of use cases. In this paper we present a model-based simulation approach for semi-formal textual use cases. We motivate the core requirements of a simulation environment and an underlying execution model. Additionally, we describe our technical solution for a model-based simulation environment and present first experiences.
Veit Hoffmann, Horst Lichter
Automatic Co-evolution of Models Using Traceability
Abstract
Model Driven Engineering allows models to be considered as data and used as first-class entities in dedicated transformation languages. As a result, recurring problems linked to software production are emerging in this new development context. One such problem is maintaining inter-model consistency during the execution of a process based on models and model transformations. When several related models must co-evolve, what happens when different transformations are applied separately to each of these models? To prevent inconsistencies, we assume that one of these models is the master model and we propose an automatic co-evolution of the other models based on the traceability of the main transformation. The contribution of this paper is thus a conceptual framework in which the necessary repercussion transformations can be easily deployed.
Bastien Amar, Hervé Leblanc, Bernard Coulette, Philippe Dhaussy
FocalTest: A Constraint Programming Approach for Property-Based Testing
Abstract
Property-based testing is the process of selecting test data from user-specified properties for testing a program. Current automatic property-based testing techniques adopt direct generate-and-test approaches for this task, consisting of first generating test data and then checking whether a property is satisfied or not: test data are generated at random and rejected when they do not satisfy the selected coverage criteria. In this paper, we propose a technique and tool called FocalTest, which adopts a test-and-generate approach through the use of constraint reasoning. Our technique utilizes the property to prune the search space during the test data generation process. A particular difficulty is the generation of test data satisfying MC/DC on the precondition of a property when it contains function calls with pattern matching and higher-order functions. Our experimental results show that a non-naive implementation of constraint reasoning on these constructs outperforms traditional generation techniques when used to find test data for testing properties.
Matthieu Carlier, Catherine Dubois, Arnaud Gotlieb
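As a point of reference for the abstract's distinction between the two approaches, the sketch below implements the naive generate-and-test baseline in Python: random data are produced, most are rejected by the precondition, and only the survivors exercise the property. FocalTest's contribution, constraint reasoning that generates satisfying data directly, is not shown; the property and precondition here are invented for illustration.

import random

def precondition(xs):
    # Only non-empty, already sorted lists are valid inputs for the property.
    return len(xs) > 0 and xs == sorted(xs)

def property_holds(xs):
    # Invented property under test: prepending the minimum keeps the list sorted.
    ys = [min(xs)] + xs
    return ys == sorted(ys)

def generate_and_test(trials=10_000):
    # Naive rejection-based generation: most random data fail the precondition
    # and are discarded, the waste that constraint-based generation avoids.
    kept = violations = 0
    for _ in range(trials):
        xs = [random.randint(0, 9) for _ in range(random.randint(0, 5))]
        if not precondition(xs):
            continue  # rejected: generation effort wasted
        kept += 1
        if not property_holds(xs):
            violations += 1
    return kept, violations

if __name__ == "__main__":
    kept, violations = generate_and_test()
    print(f"usable test data: {kept} of 10000, property violations: {violations}")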
HARM: Hacker Attack Representation Method
Abstract
Current security requirements engineering methods tend to take an atomic, single-perspective view of attacks, treating them as threats, vulnerabilities or weaknesses from which security requirements can be derived. This approach may cloud the big picture of how many smaller weaknesses in a system contribute to an overall security flaw. The proposed Hacker Attack Representation Method (HARM) combines well-known and recently developed security modeling techniques in order to represent complex and creative hacker attacks diagrammatically, from multiple perspectives. The purpose is to facilitate overviews of intrusions on a general level and to make it possible to involve different stakeholder groups in the process, including non-technical people who prefer simple, informal representations. The method is tied together by a metamodel. Both the method and the metamodel are illustrated with a security attack reported in the literature.
Peter Karpati, Andreas L. Opdahl, Guttorm Sindre
An Architecture Based Deployment of Component Distributed Systems
Abstract
Software deployment encompasses all post-development activities that make an application operational. The development of component-based systems has made it possible to better delineate this part of the global software lifecycle, as illustrated by numerous industrial and academic studies. However, deployment systems are generally developed ad hoc and are consequently platform-dependent. Deployment systems supported by middleware environments (CCM, .NET and EJB) provide mechanisms and tools tied to pre-specified deployment strategies. For distributed component-based software applications, our goal is to define a unified meta-modeling architecture for deployment. To illustrate the feasibility of the approach, we introduce a tool called UDeploy (Unified Deployment architecture) which manages, firstly, the planning process, starting from meta-information related to the application, the infrastructure and the deployment strategies; secondly, the generation of specific deployment descriptors related to the application and the environment (i.e. the machines connected to a network where a software system is deployed); and finally, the execution of the deployment plan produced by means of the deployment strategies.
Noureddine Belkhatir, Mariam Dibo

Part II: Distributed Systems

Frontmatter
A Heuristic Algorithm for Finding Edge Disjoint Cycles in Graphs
Abstract
The field of data mining provides techniques for new knowledge discovery. Distributed mining offers the miner a larger dataset, with the possibility of finding stronger and perhaps novel association rules. This paper addresses the role of Hamiltonian cycles in mining distributed data while respecting privacy concerns. We propose a new heuristic algorithm for discovering edge-disjoint Hamiltonian cycles. We use synthetic data to evaluate the performance of the algorithm and compare it with a greedy algorithm.
Renren Dong, Ray Kresman
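To make the problem statement concrete, the following Python sketch peels edge-disjoint Hamiltonian cycles off a graph by exhaustive backtracking, removing each found cycle's edges before searching again. This is a simple exponential baseline for small graphs, not the paper's heuristic, and it ignores the privacy-preserving distributed-mining setting.

def find_hamiltonian_cycle(n, edges):
    # Backtracking search for one Hamiltonian cycle on vertices 0..n-1, using
    # only the undirected edges given as a set of frozensets. Exponential in the
    # worst case; the paper proposes a heuristic precisely to avoid this cost.
    adj = {v: set() for v in range(n)}
    for e in edges:
        u, w = tuple(e)
        adj[u].add(w)
        adj[w].add(u)

    path, visited = [0], {0}

    def extend():
        if len(path) == n:
            return path[-1] in adj[0]  # can the cycle be closed back to 0?
        for w in adj[path[-1]]:
            if w not in visited:
                path.append(w)
                visited.add(w)
                if extend():
                    return True
                visited.remove(w)
                path.pop()
        return False

    return path[:] if extend() else None

def edge_disjoint_hamiltonian_cycles(n, edges):
    # Greedily peel off cycles: find one, delete its edges, search again.
    remaining, cycles = set(edges), []
    while True:
        cycle = find_hamiltonian_cycle(n, remaining)
        if cycle is None:
            return cycles
        remaining -= {frozenset((cycle[i], cycle[(i + 1) % n])) for i in range(n)}
        cycles.append(cycle)

if __name__ == "__main__":
    # The complete graph K5 contains two edge-disjoint Hamiltonian cycles.
    k5 = {frozenset((u, w)) for u in range(5) for w in range(u + 1, 5)}
    print(edge_disjoint_hamiltonian_cycles(5, k5))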

Part III: Data Management

Frontmatter
Distortion-Free Authentication Watermarking
Abstract
In this paper we introduce a distortion-free watermarking technique that strengthens the verification of integrity of relational databases, using a public zero-distortion authentication mechanism based on the Abstract Interpretation framework.
Sukriti Bhattacharya, Agostino Cortesi
FRIDAL: A Desktop Search System Based on Latent Interfile Relationships
Abstract
Desktop search is a must-have feature for modern operating systems, because retrieving desired files from a massive number of files is a major problem. Several desktop search tools using full-text search techniques have been developed. However, files lacking any given keywords, such as picture files and the source data of experiments, cannot be found by tools based on full-text search, even if they are related to the keywords. In this paper, we propose a search method based on latent interfile relationships derived from file access logs. Our method allows us to retrieve files that lack keywords but do have an association with them, based on the concept that files opened by a user in a particular time period are related. We have implemented a desktop search system, FRIDAL, based on the proposed method, and evaluated its effectiveness by experiment. The evaluation results indicate that the proposed method has superior precision and recall compared with full-text and directory-search methods.
Tetsutaro Watanabe, Takashi Kobayashi, Haruo Yokota
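The abstract's core idea, that files opened close together in time are related, can be sketched directly from an access log. The Python example below scores file pairs by co-access within a time window and uses the scores to expand a keyword hit list with keyword-less files; the scoring scheme and all function and file names are illustrative assumptions, not FRIDAL's actual algorithm.

from collections import defaultdict

def co_access_scores(access_log, window_seconds=300):
    # Score interfile relationships: every pair of files opened within the same
    # time window gets one point. `access_log` is a list of
    # (timestamp_in_seconds, file_path) pairs.
    log = sorted(access_log)
    scores = defaultdict(float)
    start = 0
    for i, (t_i, f_i) in enumerate(log):
        while log[start][0] < t_i - window_seconds:
            start += 1  # slide the window so log[start:i] stays in range
        for _, f_j in log[start:i]:
            if f_j != f_i:
                scores[frozenset((f_i, f_j))] += 1.0
    return scores

def related_files(keyword_hits, scores, top_k=3):
    # Expand full-text search hits with related files that lack the keywords.
    candidates = defaultdict(float)
    for pair, weight in scores.items():
        a, b = tuple(pair)
        if a in keyword_hits and b not in keyword_hits:
            candidates[b] += weight
        elif b in keyword_hits and a not in keyword_hits:
            candidates[a] += weight
    return sorted(candidates, key=candidates.get, reverse=True)[:top_k]

if __name__ == "__main__":
    log = [(0, "report.tex"), (30, "fig1.png"), (60, "data.csv"),
           (4000, "notes.txt"), (4020, "report.tex")]
    print(related_files({"report.tex"}, co_access_scores(log)))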
Fine Grained Access Control for Relational Databases by Abstract Interpretation
Abstract
In this paper, we propose an observation-based fine-grained access control (OFGAC) mechanism in which data are made accessible at various levels of abstraction according to their sensitivity level. In this setting, unauthorized users are not able to infer the exact content of a data cell containing confidential information, while they are allowed to obtain partial information from it, according to their access rights. Traditional fine-grained access control (FGAC) can be seen as a special case of the OFGAC framework.
Raju Halder, Agostino Cortesi
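As a rough illustration of observation-based access control, the Python sketch below attaches a different abstraction function to each role, so the same column is observed exactly, as a coarse range, or only as present/absent. The roles, column and abstraction levels are invented for the example and do not reflect the paper's abstract-interpretation formalization.

# Each role observes the same column through a different abstraction function:
# exact value, a coarse range, or mere presence. All names are hypothetical.
ABSTRACTIONS = {
    "admin":   lambda v: v,                                      # exact value
    "analyst": lambda v: (v // 10_000 * 10_000, v // 10_000 * 10_000 + 9_999),
    "public":  lambda v: "defined" if v is not None else "undefined",
}

def observe(table, column, role):
    # The view a role gets of one column: every cell is filtered through the
    # abstraction attached to that role's access rights.
    abstract = ABSTRACTIONS[role]
    return [abstract(row[column]) for row in table]

if __name__ == "__main__":
    employees = [{"name": "A", "salary": 52_300},
                 {"name": "B", "salary": 48_750}]
    for role in ("admin", "analyst", "public"):
        print(role, observe(employees, "salary", role))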

Part IV: Knowledge-Based Systems

Frontmatter
“Facets” and “Prisms” as a Means to Achieve Pedagogical Indexation of Texts for Language Learning: Consequences of the Notion of Pedagogical Context
Abstract
Defining pedagogical indexation of texts for language learning as an indexation that allows users to query for texts in order to use them in language teaching requires taking into account the influence of the properties of the teaching situation, which we define as the “pedagogical context”.
We propose to justify the notions of prisms and facets, on which our model relies, through the description of material selection in the task of planning a language class, viewed as an adaptation of Yinger’s model of planning. This interpretation of Yinger’s model is closely intertwined with the elaboration of the notion of pedagogical context. The latter provides sounder bases on which to build our model, resulting in improvements to the model’s potential compared with its first published version.
Mathieu Loiseau, Georges Antoniadis, Claude Ponton
Backmatter
Metadata
Title
Software and Data Technologies
Editors
José Cordeiro
Maria Virvou
Boris Shishkov
Copyright Year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-29578-2
Print ISBN
978-3-642-29577-5
DOI
https://doi.org/10.1007/978-3-642-29578-2
