
2010 | Book

Advances in Software Engineering

International Conference, ASEA 2010, Held as Part of the Future Generation Information Technology Conference, FGIT 2010, Jeju Island, Korea, December 13-15, 2010. Proceedings

Edited by: Tai-hoon Kim, Haeng-Kon Kim, Muhammad Khurram Khan, Akingbehin Kiumi, Wai-chi Fang, Dominik Ślęzak

Publisher: Springer Berlin Heidelberg

Book series: Communications in Computer and Information Science


About this book

Welcome to the Proceedings of the 2010 International Conference on Advanced Software Engineering and Its Applications (ASEA 2010) – one of the partnering events of the Second International Mega-Conference on Future Generation Information Technology (FGIT 2010). ASEA brings together researchers from academia and industry as well as practitioners to share ideas, problems and solutions relating to the multifaceted aspects of software engineering, including its links to computational sciences, mathematics and information technology. In total, 1,630 papers were submitted to FGIT 2010 from 30 countries, including 175 papers submitted to ASEA 2010. The submitted papers went through a rigorous reviewing process: 395 of the 1,630 papers were accepted for FGIT 2010, while 40 papers were accepted for ASEA 2010. 640 papers were selected for the special FGIT 2010 volume published by Springer in the LNCS series. 32 papers are published in this volume, and 2 papers were withdrawn for technical reasons.

We would like to acknowledge the great effort of the ASEA 2010 International Advisory Board and the members of the International Program Committee, as well as all the organizations and individuals who supported the idea of publishing this volume of proceedings, including SERSC and Springer. Also, the success of the conference would not have been possible without the huge support from our sponsors and the work of the Chairs and the Organizing Committee.

Table of contents

Frontmatter
Effective Web and Desktop Retrieval with Enhanced Semantic Spaces

We describe the design and implementation of the NETBOOK prototype system for collecting, structuring and efficiently creating semantic vectors for concepts, noun phrases, and documents from a corpus of free full-text ebooks available on the World Wide Web. Automatic generation of concept maps from correlated index terms and extracted noun phrases is used to build a powerful conceptual index of individual pages. To ensure the scalability of our system, dimension reduction is performed using Random Projection [13]. Furthermore, we present a complete evaluation of the relative effectiveness of the NETBOOK system versus the Google Desktop [8].

Amjad M. Daoud
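The dimension reduction mentioned in the abstract relies on Random Projection, whose core idea is easy to sketch. The following minimal Python example (with hypothetical toy vectors, not the NETBOOK corpus) illustrates how a Gaussian random matrix reduces dimensionality while approximately preserving pairwise distances:

```python
import math
import random

def random_projection(rows, k, seed=0):
    """Project each d-dimensional row into k dimensions using a Gaussian
    random matrix (Johnson-Lindenstrauss style). Pairwise distances are
    approximately preserved, which is what makes the reduction safe for
    similarity search over semantic vectors."""
    rnd = random.Random(seed)
    d = len(rows[0])
    # One random direction per output dimension; 1/sqrt(k) keeps norms unbiased.
    R = [[rnd.gauss(0.0, 1.0) / math.sqrt(k) for _ in range(k)] for _ in range(d)]
    return [[sum(x * R[i][j] for i, x in enumerate(row)) for j in range(k)]
            for row in rows]

# Hypothetical toy "document vectors": 4 documents, 500 dimensions.
docs = [[random.Random(i).random() for _ in range(500)] for i in range(4)]
reduced = random_projection(docs, k=64)
print(len(reduced), len(reduced[0]))  # 4 64
```

Because only a random matrix and a matrix product are needed, the reduction has no training step and scales linearly with corpus size, which is what makes it attractive for large document collections.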
Considering Patterns in Class Interactions Prediction

Impact analysis has been defined as the activity of assessing the potential consequences of making a set of changes to software artifacts. Several approaches have been developed, including performing impact analysis on a reflected model of class interactions using class interactions prediction. One of the important elements in developing the reflected model is the consideration of any design pattern that the software employs. In this paper we propose a new class interactions prediction approach that includes a basic pattern analysis, i.e., the Boundary-Controller-Entity (BCE) pattern, in its prediction process. To demonstrate the importance of pattern consideration in the prediction process, a comparison between the new approach (with pattern consideration) and two selected current approaches (without pattern consideration) was conducted. The contributions of the paper are two-fold: (1) a new class interactions prediction approach; and (2) evaluation results showing that the new approach gives better accuracy in class interactions prediction than the selected current approaches.

Nazri Kama, Tim French, Mark Reynolds
Design of an Unattended Monitoring System Using Context-Aware Technologies Based on 3 Screen

Recently, the importance of multimedia-based monitoring systems has kept growing for the purpose of preventing crimes, and thus various systems are being released. Nevertheless, most systems lack an independent detection feature and simply save a video or image. In order to overcome these problems, this study has developed an unattended monitoring system based on a 3 Screen service environment by combining image tracking and analysis technologies, which can extract a human figure from images, with context-aware technologies that use the behavior patterns of the people tracked and data on the surrounding environment, fully utilizing the increasingly available wireless network infrastructure.

Seoksoo Kim
Requirements Elicitation Using Paper Prototype

Requirements engineering is both the hardest and the most critical part of software development, since errors at this early stage propagate through the development process and are the hardest to repair later. This paper proposes an improved approach to requirements elicitation using paper prototypes. The paper then assesses the new approach using student projects developed for various organizations, and discusses the scope of paper prototyping and its advantages.

Jaya Vijayan, G. Raju
Quality-Driven Architecture Conformance

This paper describes the Quality-driven Architecture Conformance (QAC) discipline. Conformance is a particular type of assessment in which the architecture is compared against a standard; the conformance process determines the degree of fulfillment of the architecture with respect to that standard. QAC guarantees the compatibility and transferability of a system. The main contributions of this paper are the definition of the QAC conceptual model, a workflow for the application of QAC, and a summary of QAC techniques and tools. A case study has been carried out in order to validate the QAC discipline.

Jose L. Arciniegas H., Juan C. Dueñas L.
Trends in M2M Application Services Based on a Smart Phone

M2M, which stands for communication between machines, offers various services today thanks to advanced communication networks and sensor systems. Moreover, a powerful terminal such as a smart phone provides a sufficient physical environment, so no special device is required for the services. However, the smart phone M2M environment involves various complex technologies, and there have been no clear policies or standards for them. This study therefore analyzes the current status of M2M service introduction and the trends in M2M application services using a smart phone.

Jae Young Ahn, Jae-gu Song, Dae-Joon Hwang, Seoksoo Kim
Using ERP and WfM Systems for Implementing Business Processes: An Empirical Study

Software systems mainly considered by enterprises for business process automation belong to the following two categories: Workflow Management Systems (WfMSs) and Enterprise Resource Planning (ERP) systems. The wider diffusion of ERP systems tends to favour this solution, but most ERP systems have several limitations in automating business processes. This paper reports an empirical study comparing the ability of ERP systems and WfMSs to implement business processes. Two different case studies have been considered in the empirical study, which evaluates and analyses the correctness and completeness of the process models implemented using ERP and WfM systems.

Lerina Aversano, Maria Tortorella
Mining Design Patterns in Object Oriented Systems by a Model-Driven Approach

In this paper we present an approach to automatically mine Design Patterns in existing Object Oriented systems and to trace a system's source code components to the roles they play in the Patterns. The approach defines and exploits a model representing a Design Pattern by its high-level structural properties. Pattern variants can be detected by adequately overriding the Pattern's structural properties. The approach was validated by applying it to some open-source systems.

Mario Luca Bernardi, Giuseppe Antonio Di Lucca
Exploring Empirically the Relationship between Lack of Cohesion and Testability in Object-Oriented Systems

The study presented in this paper aims at exploring empirically the relationship between lack of cohesion and the testability of classes in object-oriented systems. We investigated testability from the perspective of unit testing. We designed and conducted an empirical study using two Java software systems for which JUnit test cases exist. To capture the testability of classes, we used different metrics to measure characteristics of the corresponding JUnit test cases. We also used several lack-of-cohesion metrics. In order to evaluate the capability of lack-of-cohesion metrics to predict testability, we performed statistical tests using correlation. The results provide evidence that (lack of) cohesion may be associated with (low) testability.

Linda Badri, Mourad Badri, Fadel Toure
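The abstract does not list the exact metrics used, but a classic lack-of-cohesion metric such as LCOM1 conveys the idea; the sketch below (with a hypothetical class layout, not one of the studied Java systems) counts method pairs that share no attributes:

```python
from itertools import combinations

def lcom1(method_attrs):
    """LCOM1: number of method pairs sharing no attribute minus pairs
    sharing at least one, floored at zero -- one classic lack-of-cohesion
    metric. Higher values mean a less cohesive class."""
    p = q = 0
    for a, b in combinations(method_attrs.values(), 2):
        if set(a) & set(b):
            q += 1   # pair shares at least one attribute
        else:
            p += 1   # pair shares nothing
    return max(p - q, 0)

# Hypothetical classes: each method mapped to the attributes it touches.
cohesive = {"get_x": ["x"], "set_x": ["x"], "scale_x": ["x"]}
scattered = {"a": ["x"], "b": ["y"], "c": ["z"]}
print(lcom1(cohesive), lcom1(scattered))  # 0 3
```

In a study like the one described, such per-class values would then be correlated against metrics of the corresponding JUnit test cases (size, number of asserts, etc.).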
The Study of Imperfection in Rough Set on the Field of Engineering and Education

Owing to its characteristics, rough set theory overlaps with many other theories, especially fuzzy set theory, evidence theory and Boolean reasoning methods. The rough set methodology has found many real-life applications, such as medical data analysis, finance, banking, engineering, voice recognition, image processing and others. Until now, however, there has been little research on the imperfection of rough sets. Hence, the main purpose of this paper is to study the imperfection of rough sets in the fields of engineering and education. First, we review the mathematical model of rough sets and give two examples to illustrate our approach: the weighting of influence factors in a muzzle noise suppressor, and the weighting of evaluation factors in English learning. We also apply Matlab to develop a complete human-machine-interface toolbox to support the complex calculations and the verification of the huge amount of data. Finally, some suggestions for future research are given.

Tian-Wei Sheu, Jung-Chin Liang, Mei-Li You, Kun-Li Wen
The Software Industry in the Coffee Triangle of Colombia

The so-called "Coffee Triangle" is located in the Andean Region of central Colombia, South America, and is composed of the Departments of Caldas, Quindío and Risaralda. The region has been characterized by the production of coffee as a worldwide industry supported by high quality and research standards; these components have become the key bastions for competing in international markets. After the decline of the coffee industry, it is necessary to consider alternatives, encouraged by the global success of the Software Industry. The strengthening of the Software Industry in the Coffee Triangle seeks to establish a productive alternative for regional growth in a visionary way, where knowledge, a fundamental input of the Software Industry, is emerging as one of the greatest assets present in this geographical area of Colombia.

Albeiro Cuesta, Luis Joyanes, Marcelo López
Towards Maintainability Prediction for Relational Database-Driven Software Applications: Evidence from Software Practitioners

The accurate maintainability prediction of relational database-driven software applications can improve the project management for these applications, thus benefitting software organisations. This paper presents an up-to-date account of the state of practice in maintainability prediction for relational database-driven software applications. Twelve semi-structured interviews were conducted with software professionals. The results provide both an account of the current state of practice in that area and a list of potential maintainability predictors for relational database-driven software applications.

Mehwish Riaz, Emilia Mendes, Ewan Tempero
Software and Web Process Improvement – Predicting SPI Success for Small and Medium Companies

This study revisits our previous work on SPI success factors for small and medium companies [26] in order to investigate separately the SPI success factors for Web companies that develop only Web applications (12 companies, 41 respondents) and for Web companies that develop both Web and software applications (8 companies, 31 respondents). We replicated Dyba's theoretical model of SPI success factors [12] in [26], and do so again in this paper. However, the study described herein differs from [12] and [26] in that Dyba used data from both software and Web companies, whereas Sulayman and Mendes used data from Web companies that developed Web applications and sometimes also software applications, employing quantitative assessment techniques. The comparison was also performed against the existing theoretical model of SPI success factors replicated in this study. The results indicate that SPI success factors contribute in different proportions for the two types of companies.

Muhammad Sulayman, Emilia Mendes
Test Prioritization at Different Modeling Levels

Validation of real-life software systems often leads to a large number of tests which, due to time and cost constraints, cannot be run exhaustively. It is therefore essential to prioritize the test cases in accordance with their importance as perceived by the tester. This paper introduces a model-based approach for ranking tests according to their preference degrees, which are determined indirectly, through event classification. The Gustafson-Kessel clustering algorithm is used to construct the event groups. Prioritization is performed at different granularity levels in order to assess the effectiveness of the clustering algorithm used. A case study demonstrates and validates the approach.

Fevzi Belli, Nida Gökçe
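As a rough illustration of clustering-driven prioritization, the sketch below uses plain k-means as a simplified stand-in for the Gustafson-Kessel algorithm (which additionally adapts a covariance-based norm per cluster); event feature values are hypothetical, and smaller (rarer) event groups are given higher priority:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain 1-D k-means: a simplified stand-in for Gustafson-Kessel
    fuzzy clustering, used here only to group events by a feature."""
    rnd = random.Random(seed)
    centers = rnd.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: abs(p - centers[c]))
            groups[i].append(p)
        # Recompute centers; keep the old center if a group went empty.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical events characterized by one numeric feature (e.g. frequency).
events = [1.0, 1.1, 0.9, 5.0, 5.2, 9.7]
centers, groups = kmeans(events, k=3)
priority = sorted(groups, key=len)  # smallest (rarest) event group first
print([len(g) for g in priority])
```

The real approach derives preference degrees from the clustering; this sketch only shows the grouping step and one simple ranking rule.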
Adoption of Requirements Engineering Practices in Malaysian Software Development Companies

This paper presents exploratory survey results on the Requirements Engineering (RE) practices of some software development companies in Malaysia. The survey attempted to identify patterns in the RE practices the companies are implementing. The required information was obtained through mailed self-administered questionnaires distributed to project managers and software developers working at software development companies operating across the country. The results showed that the overall adoption of RE practices in these companies is strong. However, the results also indicated that fewer companies in the survey use appropriate CASE tools or software to support their RE process and practices, define traceability policies, or maintain traceability manuals in their projects.

Badariah Solemon, Shamsul Sahibuddin, Abdul Azim Abd Ghani
Minimum Distortion Data Hiding

In this paper a new steganographic method is presented with minimum distortion and better resistance against steganalysis. There are two ways to control the detectability of a stego message: less distortion and less modification. Concerning distortion, this paper focuses on the DCT rounding error and optimizes it in a simple way, so the resulting stego image has less distortion than those of other existing methods. The proposed method was compared with other existing popular steganographic algorithms and achieved better performance, taking the DCT rounding error into account for lower distortion with possibly higher embedding capacity.

Md. Amiruzzaman, M. Abdullah-Al-Wadud, Yoojin Chung
Model-Based Higher-Order Mutation Analysis

Mutation analysis is widely used as an implementation-oriented method for software testing and test adequacy assessment. It is based on creating different versions of the software by seeding faults into its source code and constructing test cases to reveal these changes. However, when the source code of the software is not available, mutation analysis is not applicable. In such cases, the approach introduced in this paper suggests the alternative use of a model of the software under test. The objectives of this approach are (i) the introduction of a new technique for first-order and higher-order mutation analysis using two basic mutation operators on graph-based models, (ii) a comparison of the fault detection ability of first-order and higher-order mutants, and (iii) an assessment of the validity of the coupling effect.

Fevzi Belli, Nevin Güler, Axel Hollmann, Gökhan Suna, Esra Yıldız
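The two basic mutation operators are not spelled out in the abstract; assuming edge insertion and edge omission on a graph model (a common choice in model-based mutation), first- and higher-order mutant generation can be sketched as:

```python
from itertools import combinations

def first_order_mutants(edges, nodes):
    """Generate first-order mutants of a graph model with two assumed
    basic operators: omit one existing edge, or insert one absent edge."""
    mutants = []
    for e in sorted(edges):
        mutants.append(("omit", e, edges - {e}))
    for a in sorted(nodes):
        for b in sorted(nodes):
            if a != b and (a, b) not in edges:
                mutants.append(("insert", (a, b), edges | {(a, b)}))
    return mutants

def second_order_mutants(edges, nodes):
    """Higher-order mutants combine several first-order mutations;
    here, every pair of first-order mutants applied together."""
    firsts = first_order_mutants(edges, nodes)
    return list(combinations(firsts, 2))

# Hypothetical event-based model of a tiny media player.
nodes = {"start", "play", "stop"}
edges = {("start", "play"), ("play", "stop")}
m1 = first_order_mutants(edges, nodes)
print(len(m1))  # 2 omissions + 4 insertions = 6
```

Comparing the fault detection ability of first- and higher-order mutants then amounts to running a test suite against each mutated model.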
ISARE: An Integrated Software Architecture Reuse and Evaluation Framework

Quality is an important consideration in the development of today's large, complex software systems. Software architecture and quality play a vital role in the success or failure of any software system. Likewise, software architecture is necessary to maintain the qualities of a software system during development and to adapt the quality attributes as the software requirements change. This paper discusses software quality attributes and the support provided by software architecture to achieve the desired quality. A novel Integrated Software Architecture Reuse and Evaluation (ISARE) framework is proposed on the basis of existing software architecture evaluation methods with respect to quality requirements. A case study is used for experimental validation of ISARE. The results show that ISARE ensures the required level of quality requirements in the software architecture and is automatable.

Rizwan Ahmad, Saif ur Rehman Khan, Aamer Nadeem, Tai-hoon Kim
Cognitive Informatics for New Classes of Economic and Financial Information Systems

This publication discusses intelligent systems for cognitive data categorisation, with particular emphasis on image-analysis systems used to analyse economic data. Systems of this type, which interpret, analyse and reason, work according to the operating principles of cognitive information systems. Cognitive systems interpret complex data by extracting semantic levels from it, which they use to determine the meaning of the analysed data, to cognitively understand it, and to reason about and forecast changes in the phenomena studied. Thus the course of the human processes of learning about the described phenomenon becomes the foundation for developing automatic cognitive systems, called cognitive data analysis systems.

Lidia Ogiela, Marek R. Ogiela
An Automated Approach to Testing Polymorphic Features Using Object-Z

Formal methods have proven their worth in different fields, especially software engineering. Although object-oriented design features like inheritance and polymorphism improve the quality of a design, these features may introduce new types of faults. However, current research on formal specification based testing focuses primarily on unit-level testing; there is very little work on formal specification based testing of inheritance and polymorphism. This paper describes a novel approach for testing polymorphic relationships using an Object-Z specification. The proposed approach is based on the idea of coupling-based testing. Tool support for the proposed technique has also been provided, and the approach is empirically evaluated on a real-life example.

Mahreen Ahmad, Aamer Nadeem, Tai-hoon Kim
IDMS: A System to Verify Component Interface Completeness and Compatibility for Product Integration

The growing adoption of Component-Based Software Development has had a great impact on today's system architectural design. However, the design of subsystems that lack interoperability and reusability can cause problems during product integration; at worst, this may result in project failure. The literature suggests that the verification of interface descriptions and the management of interface changes are factors essential to the success of the product integration process. This paper thus presents an automated approach to facilitate reviewing component interfaces for completeness and compatibility. The Interface Descriptions Management System (IDMS) has been implemented to ease and speed up the interface review activities, using UML component diagrams as input. Interface compatibility is verified by traversing a component dependency graph called the Component Compatibility Graph (CCG), a visualization in which each node represents a component and each edge represents communication between associated components. Three case studies were conducted to subjectively evaluate the correctness and usefulness of IDMS.

Wantana Areeprayolkij, Yachai Limpiyakorn, Duangrat Gansawat
Software Framework for Flexible User Defined Metaheuristic Hybridization

Metaheuristic algorithms have been widely used for solving Combinatorial Optimization Problems (COPs) over the last decade. These algorithms can produce amazing results in solving complex real-life problems such as scheduling, timetabling, routing and task allocation. We believe that many researchers will find COP methods useful for solving problems in many different domains. However, there are technical hurdles such as the steep learning curve, the abundance and complexity of the algorithms, programming skill requirements and the lack of a user-friendly platform for algorithm development. As new algorithms are being developed, there are also those that come in the form of hybridizations of multiple existing algorithms. We reckon that there is also a need for an easy, flexible and effective development platform for user-defined metaheuristic hybridization. In this article, a comparative study has been performed on several metaheuristic software frameworks. The results show that the available software frameworks are not adequately designed to enable users to easily develop hybridization algorithms. At the end of the article, we propose a framework design that will help bridge the gap. We foresee the potential of a scripting language as an important element that will help improve existing software frameworks with regard to ease of use and rapid algorithm design and development. Thus, our efforts are now directed towards the study and development of a new scripting language suitable for enhancing the capabilities of existing metaheuristic software frameworks.

Suraya Masrom, Siti Zaleha Zainal Abidin, Puteri Norhashimah Megat Abdul Rahman, Abdullah Sani Abd. Rahman
Program Visualization for Debugging Deadlocks in Multithreaded Programs

Debugging deadlocks in multithreaded programs is a notoriously difficult task. A key reason for this is the high behavioral complexity resulting from the inherent nondeterminism of multithreaded programs. We present a novel visualization technique which abstracts the nested patterns of locks and represents the happens-before relation between the patterns. We implement the technique in a prototype tool for Java and demonstrate its power on a number of multithreaded Java programs. The experimental results show that this graph provides a simple yet powerful representation for reasoning about deadlocks in an execution instance.

Byung-Chul Kim, Yong-Kee Jun
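The authors' visualization abstracts nested lock patterns; a related, much simpler representation that also supports reasoning about deadlocks is a lock-order graph, sketched below (the trace format is hypothetical, not the tool's input format):

```python
from collections import defaultdict

def lock_order_graph(traces):
    """Build a lock-order graph from per-thread traces: an edge L1 -> L2
    means some thread acquired L2 while holding L1. A cycle in this
    graph indicates a potential deadlock."""
    graph = defaultdict(set)
    for trace in traces:  # trace = sequence of ("acquire"/"release", lock)
        held = []
        for op, lock in trace:
            if op == "acquire":
                for h in held:
                    graph[h].add(lock)
                held.append(lock)
            else:
                held.remove(lock)
    return graph

def has_cycle(graph):
    """DFS three-color cycle detection over the lock-order graph."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def dfs(u):
        color[u] = GREY
        for v in graph[u]:
            if color[v] == GREY or (color[v] == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False
    return any(color[u] == WHITE and dfs(u) for u in list(graph))

# Classic AB/BA deadlock pattern between two threads.
t1 = [("acquire", "A"), ("acquire", "B"), ("release", "B"), ("release", "A")]
t2 = [("acquire", "B"), ("acquire", "A"), ("release", "A"), ("release", "B")]
print(has_cycle(lock_order_graph([t1, t2])))  # True
```

The paper's happens-before view adds ordering information between the lock patterns, which rules out false positives that a plain lock-order graph can report.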
A Fast PDE Algorithm Using Adaptive Scan and Search for Video Coding

In this paper, we propose an algorithm that reduces unnecessary computations while keeping the same prediction quality as the full search algorithm. The proposed algorithm reduces unnecessary computations efficiently by calculating an initial matching error point from the first 1/N partial errors, increasing the probability of hitting the minimum error point as early as possible. Our algorithm decreases the computational amount by about 20% compared with the conventional PDE algorithm, without any degradation of prediction quality. The algorithm should be useful in real-time video coding applications using the MPEG-2/4 AVC standards.

Jong-Nam Kim
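The core of Partial Distortion Elimination (PDE) is to abandon a matching-error computation as soon as the partial sum exceeds the best error found so far; the paper's adaptive scan and search order only affects how quickly that happens. A minimal sketch, with hypothetical toy blocks:

```python
def sad_with_early_exit(block_a, block_b, best_so_far):
    """Accumulate the sum of absolute differences (SAD) row by row and
    stop as soon as it reaches the best (minimum) error found so far:
    the remaining rows cannot change the outcome of a minimum search."""
    total = 0
    for row_a, row_b in zip(block_a, block_b):
        total += sum(abs(x - y) for x, y in zip(row_a, row_b))
        if total >= best_so_far:  # early exit: cannot beat the minimum
            return total, False
    return total, True

# Hypothetical 2x2 pixel blocks.
ref = [[10, 10], [10, 10]]
cand = [[90, 90], [90, 90]]
err, completed = sad_with_early_exit(ref, cand, best_so_far=100)
print(err, completed)  # 160 False
```

A good initial candidate (a low `best_so_far` early on) maximizes the number of early exits, which is why the choice of the starting point matters.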
Evolvability Characterization in the Context of SOA

Service-Oriented Architecture (SOA) is an architectural style which promotes the reuse of self-contained services. These self-contained services allow a better consideration of software quality characteristics, as they can be analyzed independently. In our work, the evolvability quality characteristic has been considered, due to its impact on the Maintenance and Evolution (M&E) stages for software enterprises. Three goals are pursued in this paper: first, the relationship between SOA and quality characteristics, focusing on a precise definition of the evolvability of a software product from the SOA perspective; second, an M&E model for SOA; and finally, some experiences assessing evolvability in real software products. Two case studies have been carried out: the first analyzes the evolvability of the OSGi framework; in the second, the model is used in local Small and Medium Enterprises (SMEs), where an improvement process has been executed.

Jose L. Arciniegas H., Juan C. Dueñas L.
Design and Implementation of an Enterprise Internet of Things

Since the notion of the "Internet of Things" (IoT) was introduced about 10 years ago, most IoT research has focused on higher-level issues, such as strategies, architectures, standardization, and enabling technologies, while studies of real cases of IoT are still lacking. In this paper, a real case of the Internet of Things, called ZB IoT, is introduced. It combines the Service Oriented Architecture (SOA) with EPCglobal standards in the system design, and focuses on the security and extensibility of IoT in its implementation.

Jing Sun, Huiqun Zhao, Ka Wang, Houyong Zhang, Gongzhu Hu
Aggregating Expert-Driven Causal Maps for Web Effort Estimation

Reliable Web effort estimation is one of the cornerstones of good Web project management; hence the need to fully understand which factors affect a project's outcome and their causal relationships. The aim of this paper is to provide a wider understanding of the fundamental factors affecting Web effort estimation and their causal relationships by combining six different Web effort estimation causal maps from six independent local Web companies, representing the knowledge elicited from several domain experts. The methodology used to combine these maps extends previous work by adding a mapping scheme to handle complex domains (e.g. effort estimation) and by using an aggregation process that preserves all the causal relations in the original maps. The resultant map contains 67 factors and reveals commonalities amongst Web companies relating to factors and causal relations, thus providing the means to better understand which factors have a causal effect upon Web effort estimation.

Simon Baker, Emilia Mendes
Bug Forecast: A Method for Automatic Bug Prediction

In this paper we present an approach and a toolset for automatic bug prediction during software development and maintenance. The toolset extends the Columbus source code quality framework, which is able to integrate into the regular builds, analyze the source code, calculate different quality attributes such as product metrics and bad code smells, and monitor the changes of these attributes. The new bug forecast toolset connects to the bug tracking and version control systems and assigns the bugs reported and fixed in the past to the corresponding source code classes. It then applies machine learning methods to learn which values of which quality attributes typically characterize buggy classes. Based on this information, it is able to predict bugs in current and future versions of the classes.

The toolset was evaluated on an industrial software system developed by a large software company called evosoft. We studied the behavior of the toolset over a 1.5-year development period during which 128 snapshots of the software were analyzed. The toolset reached an average bug prediction precision of 72%, many times reaching 100%. We concentrated on high precision, as the primary purpose of the toolset is to aid software developers and testers by pointing out the classes which contain bugs with high probability, while keeping the number of false positives relatively low.

Rudolf Ferenc
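The abstract does not name the learning algorithm; as an illustration of the general idea, the sketch below learns a single-metric threshold (a decision stump) from hypothetical historical data and uses it to flag probably-buggy classes:

```python
def train_stump(samples):
    """Learn a one-metric threshold ('decision stump') separating buggy
    from clean classes -- a deliberately tiny stand-in for the machine
    learning step. Each sample is (metric_value, is_buggy); predict
    'buggy' when the metric value is at or above the threshold."""
    best = None
    for t in sorted(v for v, _ in samples):
        correct = sum((v >= t) == buggy for v, buggy in samples)
        if best is None or correct > best[1]:
            best = (t, correct)
    return best[0]

# Hypothetical history: (weighted-methods-per-class, bug reported?).
history = [(5, False), (8, False), (12, False),
           (30, True), (45, True), (28, True)]
threshold = train_stump(history)
predict = lambda wmc: wmc >= threshold
print(threshold, predict(40), predict(6))  # 28 True False
```

A real toolset would combine many metrics and a stronger learner, and would tune the decision boundary for precision rather than accuracy, matching the paper's emphasis on few false positives.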
TCD: A Text-Based UML Class Diagram Notation and Its Model Converters

Among the several diagrams defined in UML, the class diagram is particularly useful throughout the entire software development process, from early domain analysis stages to later maintenance stages. However, conventional UML environments are often inappropriate for collaborative modeling across physically remote locations, such as exchanging models on a public mailing list via email. To overcome this issue, we propose a new diagram notation, called "TCD" (Text-based uml Class Diagram), for describing UML class diagrams in ASCII text. Since text files can easily be created, modified and exchanged anywhere on any computing platform, TCD facilitates collaborative modeling among an unspecified number of people. Moreover, we implemented model converters for converting in both directions between UML class diagrams in XMI form and those in TCD form. By using the converters, the reusability of models can be significantly improved, since many UML modeling tools support XMI for importing and exporting modeling data.

Hironori Washizaki, Masayoshi Akimoto, Atsushi Hasebe, Atsuto Kubo, Yoshiaki Fukazawa
SQL-Based Compound Object Comparators: A Case Study of Images Stored in ICE

We introduce a framework for storing and comparing compound objects. The implemented system is based on the RDBMS model, which – unlike other approaches in this area – enables access to the most detailed data about the objects considered. It also contains ROLAP cubes designed for specific object classes and appropriately abstracted modules that compute object similarities, referred to as comparators. In this paper, we focus on a case study related to images. We show specific examples of fuzzy logic comparators, together with their corresponding SQL statements executed at the level of pixels. We examine several open-source database engines with respect to their capabilities of storing and querying large amounts of image data represented this way. We conclude that the performance of some of them is comparable to standard techniques of image storage and processing, with far better flexibility in defining new similarity criteria and analyzing larger image collections.

Dominik Ślęzak, Łukasz Sosnowski
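The flavor of a pixel-level SQL comparator can be shown with SQLite from Python; the similarity measure below (normalized mean absolute difference) is a hypothetical stand-in for the fuzzy-logic comparators the paper describes:

```python
import sqlite3

# Store two tiny greyscale images at pixel granularity and compute a
# similarity score directly in SQL, in the spirit of the comparators
# described (the real system uses fuzzy membership functions; here a
# simple normalized absolute-difference score stands in for them).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pixels (img TEXT, x INT, y INT, val INT)")

img_a = [(0, 0, 10), (0, 1, 200), (1, 0, 30), (1, 1, 250)]
img_b = [(0, 0, 12), (0, 1, 190), (1, 0, 35), (1, 1, 255)]
con.executemany("INSERT INTO pixels VALUES ('a', ?, ?, ?)", img_a)
con.executemany("INSERT INTO pixels VALUES ('b', ?, ?, ?)", img_b)

# Join the two images pixel by pixel; similarity = 1 - mean(|diff|)/255.
(sim,) = con.execute("""
    SELECT 1.0 - AVG(ABS(p.val - q.val)) / 255.0
    FROM pixels p JOIN pixels q ON p.x = q.x AND p.y = q.y
    WHERE p.img = 'a' AND q.img = 'b'
""").fetchone()
print(round(sim, 3))  # 0.978
```

Keeping pixels as rows is what lets new similarity criteria be expressed as plain SQL, at the cost of row volume, which is exactly the storage/flexibility trade-off the paper examines.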
Intermediate Structure Reduction Algorithms for Stack Based Query Languages

Data processing often results in the generation of many temporary structures, which increase processing time and resource consumption. This especially concerns databases, since their temporary data are huge and often must be dumped to secondary storage. This situation has a serious impact on the query engine. An interesting technique, program fusion, has been proposed for functional programming languages; its goal is to reduce the size of, or entirely eliminate, intermediate structures. In this paper we show how this technique can be used to generate robust execution plans for aggregate and recursive queries in query languages based on the Stack-Based Approach. We use SBQL as an exemplary language.

Marta Burzańska, Krzysztof Stencel, Piotr Wiśniewski
Service-Oriented Software Framework for Network Management

Network management is important in managing network elements. While hundreds of new network-related services have been created, the network management functions have had to be re-coded to adapt to new business rules. In addition, redundant information can be transferred repeatedly between inter-operating systems because they request similar information. This environment incurs unnecessary system development cost and increases redundancy in the inter-operating functions. To reduce the cost and the redundancy, we propose a service-oriented software framework for network management. To this end, we identified common services for network management, produced service specifications and service flows, and carried out service realization. We also present four types of services as case studies: an authentication service, an SMS service, a previous-alarm inquiry service, and a current-alarm management service. Furthermore, we implemented the framework at KT to manage IP backbone networks. After adopting the framework, the systems' performance and flexibility improved, and duplicated functionalities were reduced.

Dongcheul Lee, Byungjoo Park
Backmatter
Metadata
Title
Advances in Software Engineering
Edited by
Tai-hoon Kim
Haeng-Kon Kim
Muhammad Khurram Khan
Akingbehin Kiumi
Wai-chi Fang
Dominik Ślęzak
Copyright year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-17578-7
Print ISBN
978-3-642-17577-0
DOI
https://doi.org/10.1007/978-3-642-17578-7