2012 | Book

Computational Science and Its Applications – ICCSA 2012

12th International Conference, Salvador de Bahia, Brazil, June 18-21, 2012, Proceedings, Part IV

Edited by: Beniamino Murgante, Osvaldo Gervasi, Sanjay Misra, Nadia Nedjah, Ana Maria A. C. Rocha, David Taniar, Bernady O. Apduhan

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

The four-volume set LNCS 7333-7336 constitutes the refereed proceedings of the 12th International Conference on Computational Science and Its Applications, ICCSA 2012, held in Salvador de Bahia, Brazil, in June 2012. The four volumes contain papers presented in the following workshops: 7333 - advances in high performance algorithms and applications (AHPAA); bioinspired computing and applications (BIOCA); computational geometry and applications (CGA); chemistry and materials sciences and technologies (CMST); cities, technologies and planning (CTP); 7334 - econometrics and multidimensional evaluation in the urban environment (EMEUE); geographical analysis, urban modeling, spatial statistics (Geo-An-Mod); 7335 - optimization techniques and applications (OTA); mobile communications (MC); mobile computing, sensing and actuation for cyber physical systems (MSA4CPS); remote sensing (RS); 7336 - software engineering processes and applications (SEPA); software quality (SQ); security and privacy in computational sciences (SPCS); soft computing and data engineering (SCDE). The topics of the fully refereed papers are structured according to the four major conference themes: 7333 - computational methods, algorithms and scientific applications; 7334 - geometric modelling, graphics and visualization; 7335 - information systems and technologies; 7336 - high performance computing and networks.

Table of Contents

Frontmatter

Workshop on Software Engineering Processes and Applications (SEPA 2012)

Modeling Road Traffic Signals Control Using UML and the MARTE Profile

The problem of modeling and designing software for road traffic signal control has long been considered, and a variety of modeling languages have been applied in this field. However, no single modeling language can yet be considered a standard for modeling distributed real-time systems such as traffic signal systems. Thus, further evaluation is necessary. In this article, MARTE, a UML profile created for designing real-time systems, is applied to model a traffic signal control system. MARTE is compared with UML and SPT, a former UML profile. The result is that with MARTE, UML models are more specific, but also more complex.

Eduardo Augusto Silvestre, Michel dos Santos Soares
Analysis of Techniques for Documenting User Requirements

A number of approaches have been proposed in past years to document user requirements. Choosing the most suitable one is difficult and frequently based on ad-hoc decisions. In order to assist requirements engineers in making this choice, an evaluation is necessary. The purpose of this paper is to analyze methods and languages used for user requirements documentation considering a number of criteria. This analysis was performed after extensive literature research and action research at companies that develop software-intensive systems. The objective is not to show how to find the best technique, the one that will perfectly suit all software projects. Instead, our purpose is to propose a critical view of a number of chosen techniques that might be useful for practitioners when choosing which technique to use on a specific project. The assumption is that stakeholders can benefit from knowing which techniques better fit a number of pre-determined evaluation criteria.

Michel dos Santos Soares, Daniel Souza Cioquetta
Predicting Web Service Maintainability via Object-Oriented Metrics: A Statistics-Based Approach

The Service-Oriented Computing paradigm enables the construction of distributed systems by assembling loosely coupled pieces of software called services, which have clear interfaces to their functionalities. Service interface descriptions have many aspects, such as complexity and quality, all of which can be measured. This paper presents empirical evidence showing that service interface maintainability can be predicted by applying traditional software metrics to service implementations. A total of 11 source code level metrics and 5 service interface metrics have been statistically correlated using 154 real-world services.

José Luis Ordiales Coscia, Marco Crasso, Cristian Mateos, Alejandro Zunino, Sanjay Misra
Early Automated Verification of Tool Chain Design

Tool chains are expected to increase the productivity of product development by providing automation and integration. If, however, the tool chain does not have the features required to support the product development process, it falls short of this expectation. Tool chains could reach their full potential if it could be ensured that the features of a tool chain are aligned with the product development process. As part of a systematic development approach for tool chains, we propose a verification method that measures the extent to which a tool chain design conforms to the product development process and identifies misalignments. The verification method can be used early in tool chain development, when it is relatively easy and cheap to perform the necessary corrections. Our verification method is automated, which allows for quick feedback and enables iterative design. We apply the proposed method to an industrial tool chain, where it is able to identify improvements to the design of the tool chain.

Matthias Biehl
Using UML Stereotypes to Support the Requirement Engineering: A Case Study

In this paper we discuss the transition of an educational process to real-life use. Specifically, a Requirements Engineering (RE) process was tailored and improved to comply with the organization's business goals. We discuss the challenges faced and the proposed solutions, focusing on automation and integration support for RE activities. We use stereotypes to enhance UML diagram clarity, to store additional element properties, and to develop automated RE process support. Stereotypes are one of the core extension mechanisms of the Unified Modeling Language (UML). The benefits found in their use in a software development organization support the claims that stereotypes play a significant role in model comprehension, reduce errors and increase productivity during the software development cycle.

Vitor A. Batista, Daniela C. C. Peixoto, Wilson Pádua, Clarindo Isaías P. S. Pádua
Identifying Business Rules to Legacy Systems Reengineering Based on BPM and SOA

Legacy systems include information and procedures which are fundamental to the organization. However, maintaining a legacy system is a complex and expensive task. Current research proposes reengineering legacy systems using BPM and SOA. The benefits of reengineering with BPM and SOA are software reuse and documentation of the business processes. However, practical experience demonstrates that reengineering using BPM and SOA is not easy to apply, because there are no tools that help developers understand the legacy system behavior. This paper presents a tool to extract the legacy system behavior. Based on the business rules concept, we propose a tool to identify the business rules contained in legacy source code. In addition, our technique also enables the discovery of the partial execution order of the business rules at runtime.

Gleison S. do Nascimento, Cirano Iochpe, Lucinéia Thom, André C. Kalsing, Álvaro Moreira
Abstraction Analysis and Certified Flow and Context Sensitive Points-to Relation for Distributed Programs

This paper presents a new technique for pointer analysis of distributed programs executed on parallel machines with hierarchical memories. One motivation for this research is the languages whose global address space is partitioned. Examples of these languages are Fortress, X10, Titanium, Co-Array Fortran, UPC, and Chapel. These languages allow programmers to adjust threads and data layout and to write to and read from memories of other threads.

The techniques presented in this paper have the form of simply structured type systems. The proposed technique is demonstrated on the while language enriched with basic commands for pointer manipulation and with commands vital for the distributed execution of programs. An abstraction analysis that, for a given statement, calculates the set of function abstractions that the statement may evaluate to is introduced in this paper. The abstraction analysis is needed in the proposed pointer analysis. The mathematical soundness of all techniques presented in this paper is discussed. The soundness is proved against a new operational semantics presented in this paper.

Our work has two advantages over related work. In our technique, each analysis result is associated with a correctness proof in the form of type derivation. The hierarchical memory model used in this paper is in line with the hierarchical character of concurrent parallel computers.

Mohamed A. El-Zawawy
An Approach to Measure Understandability of Extended UML Based on Metamodel

Since UML does not provide any guidance for users to select a proper extension pattern, users cannot assure the quality of extended UMLs, such as understandability, when they focus on their expressive power. A metric of understandability for extended UMLs is proposed, which is based on measuring the deviation in understandability between the extended UMLs and the standard UML at the metamodel level. Our proposal can be used to compare different extended UMLs with the same expressive power on the understandability characteristic. Moreover, the proposal can guide users to select an appropriate extension pattern to achieve their goal. We give the definition of the understandability metric and an empirical validation of the proposed metric. A case from a real project is used to illustrate the application of the proposed metric.

Yan Zhang, Yi Liu, Zhiyi Ma, Xuying Zhao, Xiaokun Zhang, Tian Zhang
Dealing with Dependencies among Functional and Non-functional Requirements for Impact Analysis in Web Engineering

Due to the dynamic nature of the Web as well as its heterogeneous audience, web applications are likely to evolve rapidly, leading to inconsistencies among requirements during the development process. To deal with these inconsistencies, web developers need to know the dependencies among requirements, considering that an understanding of these dependencies helps in better managing and maintaining web applications. In this paper, an algorithm has been defined and implemented to analyze dependencies among functional and non-functional requirements (in a goal-oriented approach) in order to understand the impact derived from a change during the Model-Driven Web Engineering process. This impact analysis supports web developers in selecting requirements to be implemented, ensuring that web applications finally satisfy the audience.

José Alfonso Aguilar, Irene Garrigós, Jose-Norberto Mazón, Anibal Zaldívar
Assessing Maintainability Metrics in Software Architectures Using COSMIC and UML

Software systems are exposed to constant change in a short period of time. The evolution of these systems demands a trade-off among several attributes to keep the software quality acceptable. It requires highly maintainable systems and makes maintainability one of the most important quality attributes. This paper approaches system evolution through the analysis of potential new architectures using an evaluation of the maintainability level. The goal is to relate maintainability metrics applied to the source code of OO systems, in particular CCC, to notations defined by COSMIC methods, and to propose metrics-based models to assess CCC in software architectures.

Eudisley Gomes dos Anjos, Ruan Delgado Gomes, Mário Zenha-Rela
Plagiarism Detection in Software Using Efficient String Matching

String matching refers to the problem of finding occurrence(s) of a pattern string within another string or body of text. It plays a vital role in plagiarism detection in software code, where it is required to identify similar programs in large populations. String matching has also been used as a tool in software metrics, which are used to measure the quality of the software development process. Many algorithms exist for solving the string matching problem. Among them, the Berry-Ravindran algorithm was found to be fairly efficient. Further refinements of this algorithm were made in the TVSBS and SSABS algorithms. However, these algorithms do not give the best possible shift in the search phase. In this paper, we propose an algorithm which gives the best possible shift in the search phase and is faster than the previously known algorithms. This algorithm behaves like Berry-Ravindran in the worst case. A further extension of this algorithm has been made for parameterized string matching, which is able to detect plagiarism in software code.
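
To illustrate the family of shift-based matchers the abstract refers to, here is a minimal sketch of a Horspool-style bad-character shift table in Python; it is not the authors' refined algorithm (Berry-Ravindran consults the two characters following the window), only the general mechanism of skipping ahead on mismatches.

```python
# Minimal sketch of shift-based exact string matching (Horspool-style bad-character
# shifts), illustrating the general idea behind Berry-Ravindran-like algorithms.
# This is NOT the paper's refined algorithm; shift values here depend on a single
# character, whereas Berry-Ravindran consults the two characters after the window.

def build_shift_table(pattern):
    """For each character, how far the window may jump when a mismatch occurs."""
    m = len(pattern)
    shift = {}
    for i, ch in enumerate(pattern[:-1]):      # last character excluded
        shift[ch] = m - 1 - i
    return shift, m

def find_all(pattern, text):
    """Return the start indices of every occurrence of pattern in text."""
    shift, m = build_shift_table(pattern)
    occurrences, i = [], 0
    while i + m <= len(text):
        if text[i:i + m] == pattern:
            occurrences.append(i)
        # jump according to the character aligned with the pattern's last position
        i += shift.get(text[i + m - 1], m)
    return occurrences

print(find_all("abab", "abababab"))   # [0, 2, 4]
```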

Kusum Lata Pandey, Suneeta Agarwal, Sanjay Misra, Rajesh Prasad
Dynamic Software Maintenance Effort Estimation Modeling Using Neural Network, Rule Engine and Multi-regression Approach

The dynamic business environment of software projects typically involves a large number of technical, demographic and environmental variables. This, coupled with imprecise data on human, management and dynamic factors, makes the objective estimation of software development and maintenance effort a very challenging task. Currently, no single estimation model or tool has been able to coherently integrate and realistically address the above problems. This paper presents a multi-fold modeling approach using a neural network, a rule engine and multi-regression for dynamic software maintenance effort estimation. The system dynamics modeling tool, developed using quantitative and qualitative inputs from real-life projects, is able to successfully simulate and validate the dynamic behavior of a software maintenance estimation system.

Ruchi Shukla, Mukul Shukla, A. K. Misra, T. Marwala, W. A. Clarke

Workshop on Software Quality (SQ 2012)

New Measures for Maintaining the Quality of Databases

Integrity constraints are a means to model the quality of databases. Measures that quantify the amount of constraint violations are a means to monitor and maintain the quality of databases. We present and discuss new violation measures that refine and go beyond previous inconsistency measures. They serve to check updates for integrity preservation and to repair violations in an inconsistency-tolerant manner.
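
As a rough illustration of what a violation-counting measure looks like in practice (not the refined measures of the paper), the sketch below counts the rows of a table that falsify a declared constraint; the table and constraint are invented for the example.

```python
# Hedged illustration of a simple violation-counting measure: the number of rows
# that violate a declarative integrity constraint ("every employee's salary is
# positive").  The paper's refined measures go beyond such counts; this only
# shows the basic idea of sizing inconsistency instead of merely detecting it.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (name TEXT, salary REAL)")
con.executemany("INSERT INTO employee VALUES (?, ?)",
                [("ana", 3000), ("bob", -50), ("eve", 0)])

# Violation measure: how many rows falsify the constraint salary > 0.
violations = con.execute(
    "SELECT COUNT(*) FROM employee WHERE NOT (salary > 0)").fetchone()[0]
print("violation count:", violations)   # 2 -> an update would be acceptable only
                                         # if it does not increase this measure
```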

Hendrik Decker
A New Way to Determine External Quality of ERP Software

Today many production systems plan to use Enterprise Resource Planning (ERP) software to gain competitive advantage. ERP promises to improve the efficiency of business processes. However, inappropriate software selection turns the implementation of ERP software into a very complex managerial problem. To reduce the relevant risk, ERP software purchasers need to determine the conformance of the software to their requirements. This study aims to define the requirement levels of external quality characteristics and to provide a guide for production systems to evaluate ERP software in a systematic manner. Within this frame of reference, a way of evaluating the external quality of ERP software is put forward to reduce the risk taken before purchasing it.

Ali Orhan Aydin
Towards a Catalog of Spreadsheet Smells

Spreadsheets are considered to be the most widely used programming language in the world, and reports have shown that 90% of real-world spreadsheets contain errors.

In this work, we try to identify spreadsheet smells, a concept adapted from software engineering, consisting of a surface indication that usually corresponds to a deeper problem. Our smells have been integrated into a tool and were computed for a large spreadsheet repository. Finally, the analysis of the results we obtained led to the refinement of our initial catalog.

Jácome Cunha, João P. Fernandes, Hugo Ribeiro, João Saraiva
Program and Aspect Metrics for MATLAB

In this paper we present the main concepts of a domain-specific aspect language for specifying cross-cutting concerns of MATLAB programs, together with a suite of metrics that is capable of assessing the overall advantage of introducing aspects in the development cycle of MATLAB software. We present the results of using our own suite to quantify the advantages of using aspect oriented programming, both in terms of programming effort and code quality. The results are promising and show a good potential for aspect oriented programming in MATLAB, while our suite proves to be capable of analyzing the overall characteristics of MATLAB solutions and providing interesting results about them.

Pedro Martins, Paulo Lopes, João P. Fernandes, João Saraiva, João M. P. Cardoso
A Suite of Cognitive Complexity Metrics

In this paper, we propose a suite of cognitive metrics for evaluating the complexity of object-oriented (OO) code. The proposed metric suite evaluates several important features of OO languages. Specifically, the proposed metrics measure method complexity, message complexity (coupling), attribute complexity and class complexity. We also propose a code complexity metric that considers the complexity due to inheritance for the whole system. All the proposed metrics (except attribute complexity) use the cognitive aspect of the code in terms of cognitive weight. All the metrics have been critically examined through theoretical and empirical validation processes.

Sanjay Misra, Murat Koyuncu, Marco Crasso, Cristian Mateos, Alejandro Zunino
Complexity Metrics for Cascading Style Sheets

Web applications are becoming important for small and large companies since they are integrated with their business strategies. Cascading Style Sheets (CSS) are an integral part of contemporary Web applications; however, they are perceived as complex by users, and this hampers their widespread adoption. The factors responsible for CSS complexity include size, variety in its rule block structures, rule block reuse, cohesion and attribute definition in rule blocks. In this paper, we propose a relevant metric for each of the complexity factors. The proposed metrics are validated through a practical framework. The outcome shows that the proposed metrics satisfy most of the parameters required by the practical framework, hence establishing them as well structured.
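
For readers unfamiliar with CSS size factors, the sketch below computes a few naive size-oriented counts (rule blocks, declarations); these are illustrative stand-ins, not the validated metrics proposed in the paper.

```python
# Minimal sketch (not the paper's validated metrics) of size-oriented CSS measures:
# number of rule blocks, total declarations, and average declarations per block,
# obtained with a naive brace-based parse that ignores comments and @media nesting.
def css_size_metrics(css_text):
    blocks = []
    for chunk in css_text.split("}"):
        if "{" in chunk:
            selector, body = chunk.split("{", 1)
            declarations = [d for d in body.split(";") if d.strip()]
            blocks.append((selector.strip(), len(declarations)))
    total_decls = sum(n for _, n in blocks)
    return {
        "rule_blocks": len(blocks),
        "declarations": total_decls,
        "avg_declarations_per_block": total_decls / len(blocks) if blocks else 0.0,
    }

sample = "h1 { color: red; font-size: 2em; } .nav a { text-decoration: none; }"
print(css_size_metrics(sample))
# {'rule_blocks': 2, 'declarations': 3, 'avg_declarations_per_block': 1.5}
```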

Adewole Adewumi, Sanjay Misra, Nicholas Ikhu-Omoregbe
A Systematic Review on the Impact of CK Metrics on the Functional Correctness of Object-Oriented Classes

The Chidamber and Kemerer (CK) metrics suite is one of the most popular and highly cited suites for measuring Object-Oriented (OO) designs. A great number of empirical studies have been conducted to evaluate these metrics as indicators of the functional correctness of classes in OO systems. However, there has been no attempt to systematically review and report this empirical evidence. To identify the relation of CK metrics to functional correctness, we have performed a systematic review of empirical evidence published in the literature that supports or rejects CK metrics as indicators of functional correctness. Our search strategy identified 20 papers that contain relevant empirical evidence. Our results conclude that the WMC, CBO, RFC and LCOM metrics are good indicators of the functional correctness of OO classes. The inheritance metrics, DIT and NOC, are however not useful indicators of functional correctness.

Yasser A. Khan, Mahmoud O. Elish, Mohamed El-Attar

Workshop on Security and Privacy in Computational Sciences (SPCS 2012)

Pinpointing Malicious Activities through Network and System-Level Malware Execution Behavior

Malicious programs pose a major threat to Internet-connected systems, increasing the importance of studying their behavior in order to fight against them. In this paper, we propose definitions for the different types of behavior that a program can present during its execution. Based on those definitions, we define suspicious behavior as the group of actions that change the state of a target system. We also propose a set of network and system-level dangerous activities that can be used to denote the malignity of suspicious behaviors, which were extracted from a large set of malware samples. In addition, we evaluate the malware samples according to their suspicious behavior. Moreover, we developed filters to translate from lower-level execution traces to the observed dangerous activities and evaluated them in the context of actual malware.

André Ricardo Abed Grégio, Vitor Monte Afonso, Dario Simões Fernandes Filho, Paulo Lício de Geus, Mario Jino, Rafael Duarte Coelho dos Santos
A Malware Detection System Inspired on the Human Immune System

Malicious programs (malware) can cause severe damage to computer systems and data. The mechanism that the human immune system uses to detect and protect against organisms that threaten the human body is efficient and can be adapted to detect malware attacks. In this paper we propose a system to perform distributed malware collection, analysis and detection, the last of which is inspired by the human immune system. After malware samples are collected from the Internet, they are dynamically analyzed so as to provide execution traces at the operating system level and network flows, which are used to create a behavioral model and to generate a detection signature. Those signatures serve as input to a malware detector, acting as the antibodies in the antigen detection process. This allows us to understand the malware attack and aids in the infection removal procedures.

Isabela Liane de Oliveira, André Ricardo Abed Grégio, Adriano Mauro Cansian
Interactive, Visual-Aided Tools to Analyze Malware Behavior

Malicious software attacks can disrupt information systems, violating security principles of availability, confidentiality and integrity. Attackers use malware to gain control, steal data, keep access and cover traces left on the compromised systems. The dynamic analysis of malware is useful to obtain an execution trace that can be used to assess the extent of an attack, to do incident response and to point to adequate counter-measures. An analysis of the captured malware can provide analysts with information about its behavior, allowing them to review the malicious actions performed during its execution on the target. The behavioral data gathered during the analysis consists of filesystem and network activity traces; a security analyst would have a hard time sieving through a maze of textual event data in search of relevant information. We present a behavioral event visualization framework that allows for an easier realization of the malicious chain of events and for quickly spotting interesting actions performed during a security compromise. Also, we analyzed more than 400 malware samples from different families and showed that they can be classified based on their visual signature. Finally, we distribute one of our tools to be freely used by the community.

André Ricardo Abed Grégio, Alexandre Or Cansian Baruque, Vitor Monte Afonso, Dario Simões Fernandes Filho, Paulo Lício de Geus, Mario Jino, Rafael Duarte Coelho dos Santos
Interactive Analysis of Computer Scenarios through Parallel Coordinates Graphics

A security analyst plays a key role in tackling unusual incidents, which is a strenuous task to do properly: a single service can generate a massive amount of log data in a single day. The analysis of such data is a challenge. Among several available techniques, parallel coordinates have been widely used for the visualization of high-dimensional datasets and are also highly suited to plotting graphs with a huge number of data points. Unusual conditions and rare events may be revealed in a parallel coordinates graph when it is interactively visualized, which is a good feature for the analyst to count on. To address that, we developed the Picviz-GUI tool, adding interactivity to the visualization of parallel coordinates graphs. With Picviz-GUI one can shape a graph to reduce visual clutter and to help find patterns. With a set of simple actions, such as filtering, changing line thickness and color, and selections, the user can highlight the desired information and search through the variables for subtle data correlations. Picviz-GUI visualization helps the security analyst to understand complex and innovative attacks, in order to later tune automated classification systems. This article shows how features on top of parallel coordinates graphs can be effective in uncovering complex security issues.
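
As a quick illustration of the underlying visualization technique (not of Picviz-GUI itself), the snippet below plots a handful of invented network events with pandas' parallel-coordinates helper; each record becomes a polyline, and outliers separate visually from the bulk of the traffic.

```python
# A small illustration (not Picviz-GUI itself) of how parallel coordinates render
# multi-dimensional log records so that outliers stand out.  The column names and
# values below are invented for the example.
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

events = pd.DataFrame({
    "src_port":   [51234, 51240, 51301, 443],
    "dst_port":   [80, 80, 80, 31337],
    "bytes_sent": [1200, 980, 1500, 250000],
    "duration_s": [0.4, 0.3, 0.5, 120.0],
    "label":      ["normal", "normal", "normal", "suspicious"],
})

# Each record becomes one polyline crossing a vertical axis per variable;
# the "suspicious" line visibly departs from the cluster of normal traffic.
parallel_coordinates(events, class_column="label", colormap="coolwarm")
plt.title("Network events in parallel coordinates")
plt.show()
```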

Gabriel D. Cavalcante, Sebastien Tricaud, Cleber P. Souza, Paulo Lício de Geus
Methodology for Detection and Restraint of P2P Applications in the Network

P2P networks are consuming more and more Internet resources; it is estimated that approximately 70% of all traffic carried on the Internet is composed of packets from these networks. Moreover, they still represent the main infection vector for various types of malware and can be used as a command and control channel for P2P botnets, besides being notoriously used to distribute a range of pirated files (movies, music, games, ...). In this paper we present some typical characteristics of P2P networks and propose a new architecture based on filters to detect hosts running P2P applications. We also provide a methodology on how to prevent the communication of those hosts in order to avoid undesirable impacts on the operation of the network as a whole.

Rodrigo M. P. Silva, Ronaldo M. Salles

Workshop on Soft Computing and Data Engineering (SCDE 2012)

Text Categorization Based on Fuzzy Soft Set Theory

In this paper, we propose a new method for text categorization based on fuzzy soft set theory, called the fuzzy soft set classifier (FSSC). We use a fuzzy soft set representation derived from the bag-of-words representation and define each term as a distinct word in the set of words of the document collection. The FSSC categorizes each document by using a fuzzy c-means formula for classification, and uses fuzzy soft set similarity to measure the distance between two documents. We perform experiments with the standard Reuters-21578 dataset, using three kinds of weighting (boolean, term frequency, and term frequency-inverse document frequency) to compare the performance of FSSC with four other classifiers: kNN, Bayesian, Rocchio, and SVM. We use precision, recall, F-measure, return size, and running time for performance evaluation. The results show that there is no absolute winner. The FSSC has lower precision, recall, and F-measure than SVM and kNN, but works faster than both. When compared with Bayesian and Rocchio, the FSSC works more slowly but has higher precision and F-measure.
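
The exact FSSC formulas are not reproduced in the abstract; the sketch below only illustrates two of the ingredients it mentions, tf-idf weighting of a bag-of-words representation and a generic min/max similarity on the resulting membership vectors, under the assumption of a Jaccard-like fuzzy similarity.

```python
# Hedged sketch of ingredients the abstract mentions: a bag-of-words representation
# with tf-idf weighting, plus a generic min/max (Jaccard-like) similarity on the
# resulting vectors.  The paper's exact FSSC formulas are not reproduced here;
# this only illustrates the general pipeline.
import math
from collections import Counter

def tfidf_vectors(documents):
    """documents: list of token lists -> list of {term: weight} dicts."""
    n = len(documents)
    df = Counter(term for doc in documents for term in set(doc))
    vectors = []
    for doc in documents:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def fuzzy_similarity(u, v):
    """Sum of minimum memberships over sum of maximum memberships."""
    terms = set(u) | set(v)
    num = sum(min(u.get(t, 0.0), v.get(t, 0.0)) for t in terms)
    den = sum(max(u.get(t, 0.0), v.get(t, 0.0)) for t in terms)
    return num / den if den else 0.0

docs = [["oil", "price", "rises"], ["oil", "market", "price"], ["football", "match"]]
v = tfidf_vectors(docs)
print(round(fuzzy_similarity(v[0], v[1]), 3), round(fuzzy_similarity(v[0], v[2]), 3))
```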

Bana Handaga, Mustafa Mat Deris
Cluster Size Determination Using JPEG Files

Files can often be recovered simply by using traditional recovery means. However, a technique is required to distinguish one file from another when dealing with a hard disk whose filesystem metadata is corrupted. In a computer file system, a cluster is the smallest allocation of disk space that can hold a file, so information about the cluster size can help in determining the start of a file, which can be used to distinguish one file from another. This paper introduces a method for acquiring the cluster size, using data sets from DFRWS 2006 and DFRWS 2007. A tool called PredClus was developed to automatically display the predicted cluster size with its probabilistic percentage. By using PredClus, the cluster size used in both DFRWS 2006 and DFRWS 2007 can be determined. Thus, JPEG images that are not located at the starting address of any cluster are most probably thumbnails or embedded files.
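
A plausible reading of the underlying idea (the abstract does not detail PredClus internals) is that JPEG start-of-image markers written by the filesystem should fall on cluster boundaries, so the candidate size whose multiples cover the most markers is the likely cluster size; the sketch below follows that assumption, with a hypothetical image path.

```python
# Hedged sketch of the underlying idea (not the PredClus tool itself): JPEG files
# written by a filesystem start at cluster boundaries, so the candidate cluster
# size whose multiples coincide with the most JPEG SOI markers (FF D8 FF) is the
# most likely cluster size of the disk image.  Offsets not on any boundary then
# hint at thumbnails or embedded JPEGs.
SOI = b"\xff\xd8\xff"

def jpeg_offsets(image_path):
    """Byte offsets of every JPEG start-of-image marker in a raw disk image."""
    data = open(image_path, "rb").read()
    offsets, pos = [], data.find(SOI)
    while pos != -1:
        offsets.append(pos)
        pos = data.find(SOI, pos + 1)
    return offsets

def predict_cluster_size(offsets, candidates=(512, 1024, 2048, 4096, 8192)):
    """Score each candidate size by the fraction of markers aligned to it.
    Among equally good candidates the largest is preferred, since alignment to
    a big cluster implies alignment to all of its divisors."""
    scores = {c: sum(off % c == 0 for off in offsets) / max(len(offsets), 1)
              for c in candidates}
    best = max(candidates, key=lambda c: (scores[c], c))
    return best, scores

# Example (path is hypothetical):
# best, scores = predict_cluster_size(jpeg_offsets("dfrws2006.raw"))
# print(best, scores)
```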

Nurul Azma Abdullah, Rosziati Ibrahim, Kamaruddin Malik Mohamad
Semantic Web Search Engine Using Ontology, Clustering and Personalization Techniques

Data accuracy and reliability have been a serious issue with the vast emergence of information on the web. Advanced web searching has assisted in knowledge retrieval. However, most knowledge on the Web is presented in natural-language text that is understandable by humans but difficult for computers to interpret. Therefore, the Semantic Web approach is widely used to build more reliable applications. This paper presents a framework for enhancing knowledge retrieval processes using Semantic Web technologies. Instead of using ontology and categorization alone, we inject a personalization concept from a Relational Database (RDB) to ensure that more reliable data are obtained. The proposed framework is discussed in detail. A case study is presented to show the viability of the proposed framework in retrieving meaningful information.

Noryusliza Abdullah, Rosziati Ibrahim
Granules of Words to Represent Text: An Approach Based on Fuzzy Relations and Spectral Clustering

The amount of data available in semi-structured or unstructured format grows exponentially. The area of text mining aims at discovering knowledge from data of this type. Most work in this area uses the model known as bag of words to represent the texts. This form of representation, although effective, minimizes the quality of knowledge discovered because it is not able to capture essential characteristics of this type of data such as semantics and context. The paradigm of granular computing has been shown effective in the treatment of complex problems of information processing and can produce significant results in large-scale environments such as the Web. This paper explores the granulation process of words with a view to its application in the subsequent improvement in text representation. We use fuzzy relations and spectral clustering in this process and present some results.

Patrícia F. Castro, Geraldo B. Xexéo
Multivariate Time Series Classification by Combining Trend-Based and Value-Based Approximations

Multivariate time series data often have a very high dimensionality. Classifying such high dimensional data poses a challenge because a vast number of features can be extracted. Furthermore, the meaning of the normally intuitive term “similar to” needs to be precisely defined. Representing the time series data effectively is an essential task for decision-making activities such as prediction, clustering and classification. In this paper we propose a feature-based classification approach to classify real-world multivariate time series generated by drilling rig sensors in the oil and gas industry. Our approach encompasses two main phases: representation and classification.

For the representation phase, we propose a novel representation of time series which combines trend-based and value-based approximations (we abbreviate it as TVA). It produces a compact representation of the time series which consists of symbolic strings that represent the trends and the values of each variable in the series. The TVA representation improves both the accuracy and the running time of the classification process by extracting a set of informative features suitable for common classifiers.

For the classification phase, we propose a memory-based classifier which takes into account the antecedent results of the classification process. The inputs of the proposed classifier are the TVA features computed from the current segment, as well as the predicted class of the previous segment.

Our experimental results on real-world multivariate time series show that our approach enables highly accurate and fast classification of multivariate time series.
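
In the spirit of the TVA representation described above, the sketch below encodes each fixed-length segment of a series as one trend symbol (up/down/steady from the slope sign) and one value symbol (low/medium/high bucket of the segment mean); the segment length, thresholds and alphabet are illustrative assumptions, not the paper's exact scheme.

```python
# Hedged illustration of a trend+value symbolic encoding in the spirit of TVA
# (the paper's exact segmentation and alphabets are not reproduced): each fixed
# length segment of a sensor channel is mapped to one trend symbol (U/D/S from
# the slope sign) and one value symbol (low/medium/high bucket of the mean).
import numpy as np

def tva_encode(series, segment_len=5, value_edges=(0.33, 0.66)):
    series = np.asarray(series, dtype=float)
    lo, hi = series.min(), series.max()
    norm = (series - lo) / (hi - lo) if hi > lo else np.zeros_like(series)
    symbols = []
    for start in range(0, len(norm) - segment_len + 1, segment_len):
        seg = norm[start:start + segment_len]
        slope = np.polyfit(np.arange(segment_len), seg, 1)[0]    # linear trend
        trend = "U" if slope > 0.01 else "D" if slope < -0.01 else "S"
        mean = seg.mean()
        value = "L" if mean < value_edges[0] else "H" if mean > value_edges[1] else "M"
        symbols.append(trend + value)
    return symbols

print(tva_encode([1, 2, 3, 4, 5, 5, 5, 5, 5, 5, 4, 3, 2, 1, 0]))  # ['UM', 'SH', 'DM']
```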

Bilal Esmael, Arghad Arnaout, Rudolf K. Fruhwirth, Gerhard Thonhauser

General Track on High Performance Computing and Networks

Impact of pay-as-you-go Cloud Platforms on Software Pricing and Development: A Review and Case Study

One of the major highlights of cloud computing concerns the pay-as-you-go pricing model, where one pays according to the amount of resources consumed. Some cloud platforms already offer the pay-as-you-go model, and this creates a new scenario in which rational computing resource consumption gains in importance. In this paper, we address the impact of this new approach on software pricing and software development. Our hypothesis is that hardware consumption may directly impact the software vendor's profit and thus it can be necessary to adapt some software development practices. In this direction, we discuss the need to revise well-established models such as COCOMO II and some aspects related to requirements engineering and benchmarking tools. We also present a case study pointing out that disregarding the rational consumption of resources can generate waste that may impact the software vendor's profit.

Fernando Pires Barbosa, Andrea Schwertner Charão
Resilience for Collaborative Applications on Clouds
Fault-Tolerance for Distributed HPC Applications

Because e-Science applications are data intensive and require long execution runs, it is important that they feature fault-tolerance mechanisms. Cloud and grid computing infrastructures often support system and network fault-tolerance. They repair and prevent communication and software errors. They also allow checkpointing of applications and duplication of jobs and data to prevent catastrophic hardware failures. However, only preliminary work has been done so far on application resilience, i.e., the ability to resume normal execution following application errors and abnormal executions. This paper is an overview of open issues and solutions for the detection and management of such errors. It also overviews the implementation of a workflow management system to design, deploy, execute, monitor, restart and resume distributed HPC applications on cloud infrastructures in cases of failures.

Toàn Nguyên, Jean-Antoine Désidéri
T-DMB Receiver Model for Emergency Alert Service

This paper presents a method of emergency warning system operation based on T-DMB and the design of a T-DMB AEAS (Automatic Emergency Alert Service) receiver model. The proposed receiver model compares the geographical location of the emergency with the location of the DMB transmitting station derived from the T-DMB broadcasting signal, classifies the receiver location into alert region, neighboring region and non-alert region, and displays the emergency alert message according to each region. The geographical location of the emergency can be obtained from the FIG (Fast Information Group) 5/2 EWS (Emergency Warning Service) data field for the AEAS message, and the location of the DMB transmitting station can be estimated either from the latitude and longitude in the main identifier and sub identifier of the FIG 0/22 data field for TII (Transmitter Identification Information) or from a TII distribution database. Thus, the proposed receiver model consists of the checking process for the AEAS message from the received DMB signal, the verifying process for the DMB transmitting location, and the displaying process for the AEAS message according to the classified region. In our experiment, we implemented the proposed receiver model with a display section, a storage section, a DMB module for receiving the broadcasting signal and a control section, and performed test emergency alert broadcasting using a T-DMB signal generator. The experimental results verified that the AEAS message is displayed on receivers located in the alert region and neighboring region and is not displayed on receivers located in the non-alert region.

Seong-Geun Kwon, Suk-Hwan Lee, Eung-Joo Lee, Ki-Ryong Kwon
A Framework for Context-Aware Systems in Mobile Devices

Pervasive and ubiquitous computing is a computing paradigm that aims to integrate the real world with the virtual world in such a way that the integration is not perceived by users. One of the areas of study within this paradigm is context-aware systems, which are systems that perform some action after collecting information that characterizes a given context. The objective of this paper is to describe the specification of a framework that can be utilized for the development of new context-sensitive applications.

Eduardo Jorge, Matheus Farias, Rafael Carmo, Weslley Vieira
A Simulation Framework for Scheduling Performance Evaluation on CPU-GPU Heterogeneous System

Modern PCs are equipped with multi- and many-core capabilities which enhance their computational power and raise important issues related to the efficiency of the scheduling process of the modern operating system in such hybrid architectures.

The aim of our work is to implement a simulation framework devoted to the study of the scheduling process in hybrid systems in order to improve system performance. Through the simulator we are able to model events and to evaluate scheduling policies for heterogeneous systems. As a use case, we implemented a simple scheduling discipline, a non-preemptive priority queue.

Flavio Vella, Igor Neri, Osvaldo Gervasi, Sergio Tasso
Influence of Topology on Mobility and Transmission Capacity of Human-Based DTNs

Casual encounters among people have been studied as a means to deliver messages indirectly, using delay tolerant networks (DTNs). This work analyses message forwarding in human-based DTNs, focusing on the topology of the mobility area. Using simulations, we evaluate the influence of environment connectivity on network performance. Several parameters have also been considered: network density, forwarding algorithm and storage capacity. In general, considering the already limited capacity of mobile devices and a reduced network density, the interconnectivity of the mobility environment seems to have a relevant effect on message delivery rates.

Danilo A. Moschetto, Douglas O. Freitas, Lourdes P. P. Poma, Ricardo Aparecido Perez de Almeida, Cesar A. C. Marcondes
Towards a Computer Assisted Approach for Migrating Legacy Systems to SOA

Legacy system migration to Service-Oriented Architectures (SOA) has been identified as the right path to the modernization of enterprise solutions needing agility to respond to changes and high levels of interoperability. However, one of the main challenges of migrating to SOA is finding an appropriate balance between migration effort and the quality of the resulting service interfaces. This paper describes an approach to assist software analysts in the definition of the produced services, which is based on the fact that poorly designed service interfaces may be due to bad design and implementation decisions present in the legacy system. Besides automatically detecting common design pitfalls, the approach suggests refactorings to correct them. The resulting services have been compared with those that resulted from migrating a real system by following two classic approaches.

Gonzalo Salvatierra, Cristian Mateos, Marco Crasso, Alejandro Zunino
1+1 Protection of Overlay Distributed Computing Systems: Modeling and Optimization

The development of the Internet and the growing amount of data produced in various systems have triggered the need to construct distributed computing systems to process the data. Since in some cases the results of computations are of great importance (e.g., analysis of medical data, weather forecasts, etc.), survivability of computing systems, i.e., the capability to provide continuous service after failures of network elements, becomes a significant issue. Most previous work in the field of survivable computing systems considers the case when a special dedicated optical network is used to connect computing sites. The main novelty of this work is that we focus on overlay-based distributed computing systems, i.e., systems in which the computing system works as an overlay on top of an underlying network, e.g., the Internet. In particular, we present a novel protection scheme for such systems. The main idea of the proposed protection approach is based on the 1+1 protection method developed in the context of connection-oriented networks. A new ILP model for the joint optimization of task allocation and link capacity assignment in survivable overlay distributed computing systems is introduced. The objective is to minimize the operational (OPEX) cost of the system, including processing costs and network capacity costs. Moreover, two heuristic algorithms are proposed and evaluated. The results show that providing protection to all tasks increases the OPEX cost by 110% and 106% for 30-node and 200-node systems, respectively, compared to the case when tasks are not protected.
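
To make the 1+1 idea concrete, here is a toy ILP sketch (using the PuLP solver) that places each task on one primary and one disjoint backup node under node capacities while minimizing processing cost; the paper's actual model also assigns link capacity and uses different data, so everything below is an invented illustration.

```python
# Toy ILP in the spirit of 1+1 protection (NOT the paper's full model, which also
# sizes link capacity): every task must be placed on one primary and one disjoint
# backup node, node capacities may not be exceeded, and total processing cost is
# minimized.  Demands, capacities and costs below are invented for the example.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

tasks = {"t1": 2, "t2": 3, "t3": 1}            # task -> processing demand
nodes = {"n1": 5, "n2": 5, "n3": 4}            # node -> processing capacity
cost  = {"n1": 1.0, "n2": 1.5, "n3": 2.0}      # node -> cost per demand unit

prob = LpProblem("one_plus_one_protection", LpMinimize)
p = LpVariable.dicts("primary", (tasks, nodes), cat=LpBinary)
b = LpVariable.dicts("backup",  (tasks, nodes), cat=LpBinary)

# Objective: pay for both the primary and the backup copy of every task.
prob += lpSum(tasks[t] * cost[n] * (p[t][n] + b[t][n]) for t in tasks for n in nodes)

for t in tasks:
    prob += lpSum(p[t][n] for n in nodes) == 1          # exactly one primary
    prob += lpSum(b[t][n] for n in nodes) == 1          # exactly one backup
    for n in nodes:
        prob += p[t][n] + b[t][n] <= 1                  # primary and backup disjoint
for n in nodes:
    prob += lpSum(tasks[t] * (p[t][n] + b[t][n]) for t in tasks) <= nodes[n]

prob.solve()
print("OPEX:", value(prob.objective))
```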

Krzysztof Walkowiak, Jacek Rak
Scheduling and Capacity Design in Overlay Computing Systems

Parallel to new developments in the fields of computer networks and high performance computing, effective distributed systems have emerged to answer the growing demand to process huge amounts of data. Compared to traditional network systems, aimed mostly at sending data, distributed computing systems are also focused on data processing, which introduces additional requirements on system performance and operation. In this paper we assume that the distributed system works in an overlay mode, which enables fast, cost-effective and flexible deployment compared to the traditional network model. The objective of the design problem is to optimize task scheduling and network capacity in order to minimize the operational cost and to realize all computational projects assigned to the system. The optimization problem is formulated in the form of an ILP (Integer Linear Programming) model. Due to the problem complexity, four heuristics are proposed, including evolutionary algorithms and a Tabu Search algorithm. All methods are evaluated against optimal results yielded by the CPLEX solver. The best performance is obtained by the Tabu Search method, which provides average results only 0.38% worse than the optimal ones. Moreover, for larger problem instances with a 20-minute limit on execution time, the Tabu Search algorithm outperforms CPLEX in some cases.

Krzysztof Walkowiak, Andrzej Kasprzak, Michał Kosowski, Marek Miziołek
GPU Acceleration of the caffa3d.MB Model

This work presents a study of porting the Strongly Implicit Procedure (SIP) solver to GPU in order to improve its computational efficiency. The SIP heptadiagonal linear system solver was evaluated to be the most time consuming stage in the finite volume flow solver caffa3d.MB. The experimental evaluation of the proposed implementation of the solver demonstrates that a significant runtime reduction can be attained (acceleration values up to 10×) when compared with a CPU version, and this improvement significantly reduces the total runtime of the model. These results evidence a promising prospect for a full GPU-based implementation of finite volume flow solvers like caffa3d.MB.

Pablo Igounet, Pablo Alfaro, Gabriel Usera, Pablo Ezzatti
Security-Effective Fast Authentication Mechanism for Network Mobility in Proxy Mobile IPv6 Networks

This paper reinforces security in wired/wireless integrated networks supporting mobility, based on NEMO (NEtwork MObility) and network-based PMIPv6 (Proxy Mobile IPv6). It also proposes SK-L2AM (Symmetric Key-Based Local-Lighted Authentication Mechanism), based on a simple key, which reduces code calculation and authentication delay costs. Moreover, a fast handoff technique was adopted to reduce handoff delay time in PMIPv6, and X-FPMIPv6 (eXtension of Fast Handoff for PMIPv6) was used to support global mobility. In addition, AX-FPMIPv6 (Authentication eXtension of Fast Handoff for PMIPv6) is proposed, which integrates SK-L2AM and X-FPMIPv6 by applying a piggyback method to reduce the overhead of authentication and signaling. Performance analysis shows that the AX-FPMIPv6 technique suggested in this paper outperforms existing schemes in terms of authentication and handoff delay.

Illkyun Im, Young-Hwa Cho, Jae-Young Choi, Jongpil Jeong
An Architecture for Service Integration and Unified Communication in Mobile Computing

Mobile communication devices often present different wireless network interfaces which allow users to access networked services from anywhere and at any time. Existing services, however, commonly require specific interfaces and infrastructure to operate, and often fail to convey different data flows to a device. Little context information is used in adapting contents to differing devices. This work presents a new context aware architecture for using the Bluetooth interface as a unifying communication channel with mobile devices. Content adaptation is provided as well as a mechanism for offloading services to the support infrastructure. The obtained results show the developed communication model is viable and effective.

Ricardo Aparecido Perez de Almeida, Hélio Crestana Guardia
Task Allocation in Mesh Structure: 2Side LeapFrog Algorithm and Q-Learning Based Algorithm

This paper concerns the problem of task allocation on a mesh structure of processors. Two newly created algorithms, 2Side LeapFrog and a Q-learning Based Algorithm, are presented. These algorithms are evaluated and compared to known task allocation algorithms. To measure the algorithms' efficiency we introduce our own evaluation function: the average network load. Finally, we implemented an experimentation system to test these algorithms on different sets of tasks to allocate. The paper includes a short analysis of a series of experiments conducted on three different categories of task sets: small tasks, mixed tasks and large tasks. The results of the investigations confirm that the created algorithms seem to be very promising.
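
The abstract does not spell out the state/action encoding of the Q-learning based allocator, so the sketch below only shows the generic tabular Q-learning machinery such an algorithm relies on, with placeholder states, actions and a reward that could stand for the negative average network load.

```python
# Generic tabular Q-learning update (the rule a Q-learning based allocator relies
# on).  The state/action encoding for mesh task allocation used in the paper is
# not reproduced here; states and actions below are opaque placeholders.
import random
from collections import defaultdict

class QLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)           # (state, action) -> estimated value
        self.actions, self.alpha, self.gamma, self.epsilon = actions, alpha, gamma, epsilon

    def choose(self, state):
        """Epsilon-greedy: mostly exploit the best known action, sometimes explore."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)])

# Example: the reward could be the negative average network load after an allocation.
learner = QLearner(actions=["place_left", "place_right", "place_center"])
a = learner.choose("free_mesh")
learner.update("free_mesh", a, reward=-0.25, next_state="one_task_placed")
```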

Iwona Pozniak-Koszalka, Wojciech Proma, Leszek Koszalka, Maciej Pol, Andrzej Kasprzak
Follow-Us: A Distributed Ubiquitous Healthcare System Simulated by MannaSim

The monitoring of assisted living requires tools that capture vital signs from the user as well as environment data. Ubiquitous healthcare aims to use technologies such as wireless sensor networks, context awareness, low-power electronics, and so on. This work employs Information and Communication Technologies (ICT) to provide a technologically and socially acceptable solution to help elderly and disabled people live a more independent life for as long as possible, and to maintain their regular everyday activities at their own home. As a case study, we have implemented a simulation of a smart home, called Follow-Us, comprised of networked computational devices. Follow-Us can be used in different environments such as hospitals, houses, rest homes, gyms, and so on. In particular, we present the simulation on a house plan.

Maria Luísa Amarante Ghizoni, Adauto Santos, Linnyer Beatrys Ruiz
Adaptive Dynamic Frequency Scaling for Thermal-Aware 3D Multi-core Processors

3D integration technology can provide significant benefits of reduced interconnection delay and low power consumption in designing multi-core processors. However, 3D integration technology magnifies the thermal challenges in multi-core processors due to the high power density caused by stacking multiple layers vertically. For this reason, the 3D multi-core architecture cannot be practical without proper solutions to thermal problems, such as Dynamic Frequency Scaling (DFS). This paper investigates how DFS handles the thermal problems in 3D multi-core processors from the perspective of the function-unit level. We also propose an adaptive DFS technique to mitigate the thermal problems in 3D multi-core processors by assigning different DFS levels to each core based on the corresponding cooling efficiency. Experimental results show that the proposed adaptive DFS technique reduces the peak temperature of 3D multi-core processors by up to 10.35°C compared to the conventional DFS technique, leading to improved reliability.

Hong Jun Choi, Young Jin Park, Hsien-Hsin Lee, Cheol Hong Kim
A Context-Aware Service Model Based on the OSGi Framework for u-Agricultural Environments

In ubiquitous environments, many services and devices may be heterogeneous and independent of each other, so a service framework for ubiquitous environments has to be able to handle the heterogeneous applications and devices organically. Furthermore, because demand changes in ubiquitous environments tend to occur very dynamically and frequently, the service architecture also has to handle changes in service demand well. Services and devices in ubiquitous agricultural environments likewise have differing characteristics. Therefore, we need a service framework which can negotiate the heterogeneous characteristics of the services and devices for context-aware services in ubiquitous agricultural environments. In this paper, we propose an OSGi framework-based context-aware service model for ubiquitous agricultural environments. The proposed service model is based on the OSGi framework and can support various context-aware applications based on RFID/USN in ubiquitous agricultural environments regardless of which sensors and devices are deployed. Therefore, the proposed service model can quickly reorganize and easily reuse existing service resources for a new agricultural service demand without large changes to the established system architecture of the agricultural environment. In particular, the proposed service model can be greatly helpful in the development of context-aware agricultural services in various cultivation environments equipped with different sensors.

Jongsun Choi, Sangjoon Park, Jongchan Lee, Yongyun Cho
A Security Framework for Blocking New Types of Internet Worms in Ubiquitous Computing Environments

In ubiquitous computing environments, many services may communicate valuable data with each other through networking based on the Internet. Therefore, a service in a ubiquitous computing environment has to be invulnerable to any type of Internet worm or attack. Internet worms can compromise vulnerable servers in a short period of time. Security systems using signatures cannot protect servers from new types of Internet worms. In this paper, we propose a security framework for blocking new types of Internet worms, using Snort and Honeytrap. Honeytrap is installed on every client and detects Internet worm attacks. All interactions with Honeytrap are regarded as attacks because it is installed to lure Internet worms. In the proposed framework, when Honeytraps detect Internet worm attacks, Snort blocks the attacks.

Iksu Kim, Yongyun Cho
Quality Factors in Development Best Practices for Mobile Applications

Smart mobile devices (hereafter, SMDs) are becoming pervasive and their applications have some particular attributes. Software Engineering deals with process and product quality not only for traditional applications but also for this new application class. Models of software quality can help to better understand the software characteristics that affect its quality. In this paper, we review some models of software quality factors and the best practices for SMD application development proposed by UTI and W3C, and we discuss some of their relationships. We also discuss some deficiencies of the development best practices.

Euler Horta Marinho, Rodolfo Ferreira Resende
ShadowNet: An Active Defense Infrastructure for Insider Cyber Attack Prevention

The ShadowNet infrastructure for insider cyber attack prevention is comprised of a tiered server system that is able to dynamically redirect dangerous/suspicious network traffic away from production servers that provide web, ftp, database and other vital services to cloned virtual machines in a quarantined environment. This is done transparently from the point of view of both the attacker and normal users. Existing connections, such as SSH sessions, are not interrupted. Any malicious activity performed by the attacker on a quarantined server is not reflected on the production server. The attacker is provided services from the quarantined server, which creates the impression that the attacks performed are successful. The activities of the attacker on the quarantined system are able to be recorded much like a honeypot system for forensic analysis.

Xiaohui Cui, Wade Gasior, Justin Beaver, Jim Treadwell
Backmatter
Metadata
Title
Computational Science and Its Applications – ICCSA 2012
Edited by
Beniamino Murgante
Osvaldo Gervasi
Sanjay Misra
Nadia Nedjah
Ana Maria A. C. Rocha
David Taniar
Bernady O. Apduhan
Copyright year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-31128-4
Print ISBN
978-3-642-31127-7
DOI
https://doi.org/10.1007/978-3-642-31128-4