
About this book

This book constitutes a selection of the best papers from the proceedings of the 14th International Conference on Intelligent Software Methodologies, Tools and Techniques, SoMeT 2015, held in Naples, Italy, in September 2015.

The 47 full papers presented together with one short paper were carefully reviewed and selected from 118 submissions. The papers are organized in topical sections on embedded and mobile software systems, theory and application; real-time systems; requirement engineering, high-assurance and testing system; social networks and big data; cloud computing and semantic web; artificial intelligence techniques and intelligent system design; software development and integration; security and software methodologies for reliable software design; new software techniques in image processing and computer graphics; software applications systems for medical health care.

Table of contents

Frontmatter

Embedded and Mobile Software Systems, Theory and Application

Frontmatter

Towards Automated UI-Tests for Sensor-Based Mobile Applications

Mobile devices have changed human-computer interaction, created the need for specialized software engineering methods, and opened new business opportunities. The mobile app market is highly competitive, and software developers need to maintain high software quality standards for long-lasting economic success. While powerful software development kits support developers in creating mobile applications, testing them is still cumbersome, time-consuming and error-prone. In particular, interaction methods that depend on sensor input, such as device motion gestures, prevent automated UI testing: developers and testers are forced to test all the different aspects manually. We present an approach to integrate sensor information into user acceptance tests and use a sensor simulation engine to enable automatic test case execution for mobile applications.
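As a rough illustration of the idea, the sketch below replays synthetic accelerometer samples into an app under test instead of requiring a tester to physically shake the device. Everything here is an assumption for illustration: the `FakeApp` stand-in, the shake-detection threshold, and the `SensorSimulator` API are invented, not the authors' actual engine.

```python
import math

class FakeApp:
    """Stand-in for an instrumented app under test (assumption)."""
    def __init__(self):
        self.dialog = None
        self._energy = 0.0
    def on_accelerometer(self, x, y, z):
        self._energy += abs(x)
        if self._energy > 200:           # crude shake detector (assumed)
            self.dialog = "Undo typing?"

class SensorSimulator:
    """Replays synthetic accelerometer samples into the app."""
    def __init__(self, app):
        self.app = app
    def play_shake_gesture(self, seconds=1.0, hz=50, amplitude=12.0):
        for i in range(int(seconds * hz)):
            t = i / hz
            self.app.on_accelerometer(amplitude * math.sin(40 * t), 0.0, 9.81)

app = FakeApp()
SensorSimulator(app).play_shake_gesture()
assert app.dialog == "Undo typing?"      # automated check, no manual shaking
print("gesture test passed")
```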

Tobias Griebe, Marc Hesenius, Volker Gruhn

Indoor Position Detection Using BLE Signals Based on Voronoi Diagram

Bluetooth Low Energy (BLE) is a Bluetooth standard with low energy consumption. Beacons using BLE transmit signals that can be received by smartphones running iOS or Android. Demonstration experiments are currently being conducted. An indoor position detection method using an ordered order-k Voronoi diagram is proposed. Beacons were installed in a building of Tokai University, and experiments were conducted to investigate position detection using the proposed approach. The proposed system yields two results: (1) a floor decision success rate of 99.6 %; and (2) indoor position detection success rates of 85.5 % (first neighbor) and 48.9 % (second neighbor). Finally, we present some ideas for improving the proposed approach.
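A minimal sketch of the underlying geometry: the device lies in the order-k Voronoi cell generated by its k nearest beacons, so ranking beacons by estimated distance identifies that cell, and the strongest beacon can decide the floor. The beacon layout, floor labels, and the RSSI-to-distance model below are illustrative assumptions, not the paper's calibration.

```python
# Hypothetical beacon map: id -> ((x, y) position in metres, floor).
beacons = {
    "b1": ((0.0, 0.0), 3), "b2": ((5.0, 0.0), 3),
    "b3": ((0.0, 4.0), 3), "b4": ((5.0, 4.0), 4),
}

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model (assumed), in metres."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def order_k_cell(readings, k=2):
    """Return the k beacons nearest to the device: the device lies in the
    order-k Voronoi cell generated by exactly this set of beacons."""
    ranked = sorted(readings, key=lambda b: rssi_to_distance(readings[b]))
    return ranked[:k]

readings = {"b1": -62.0, "b2": -71.0, "b3": -80.0, "b4": -85.0}
nearest = order_k_cell(readings, k=2)   # e.g. ['b1', 'b2']
floor = beacons[nearest[0]][1]          # floor decided by strongest beacon
print(nearest, floor)
```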

Kensuke Onishi

Mobile Application Testing in Industrial Contexts: An Exploratory Multiple Case-Study

Recent empirical studies in the area of mobile application testing indicate the need for specific testing techniques and methods for mobile applications. This is because mobile applications are significantly different from traditional web and desktop applications, particularly in terms of the physical constraints of mobile devices and the very different features of their operating systems. In this paper, we present a multiple case-study involving four software development companies in the area of mobile and smartphone applications. We aimed to identify the testing techniques currently applied by developers and the challenges they are facing. Our principal results are that many industrial teams seem to lack sufficient knowledge of how to test mobile applications, particularly in the areas of mobile application life-cycle conformance, context-awareness, and integration testing. We also found that there is no formal testing approach or methodology that helps a development team to systematically test a critical mobile application.

Samer Zein, Norsaremah Salleh, John Grundy

An Efficient Reconfiguration-Based Approach for Improving Smart Grid Performance

In this research paper, we deal with power grid modeling and reconfiguration to improve electrical network performance, for which we propose a new definition in terms of automatic recovery rate. We propose a novel methodology for intelligently optimizing the use of power energy in smart grids, based on a Multi-Agent System composed of static and mobile agents. Our work presents an assistance-design approach, as it proposes new and efficient reconfigurations to automatically resolve as many failures as possible, with the aim of improving power system reliability and performance. To optimize the cost of the automatic actions to be taken and to reduce the costs of human interventions, we identify a pertinent short list of failures based on proposed dominance and equivalence relations between failures. The efficiency of the proposed strategy is shown by an experimental study.

Syrine Ben Meskina, Narjes Doggaz, Mohamed Khalgui

Real Time Systems

Frontmatter

PEDASA: Priority, Energy and Deadline Aware Scheduling Algorithm

We present a new approach for scheduling workloads containing periodic tasks in soft real-time systems. The proposed algorithm consists in finding a new set of priorities depending on the three main criteria identified in a real-time system: the fixed priority initially assigned by the user, the deadline, and energy efficiency. Our proposal involves a computational procedure responsible for extracting the new priority values from the importance of the three factors mentioned above. An eventual re-adjustment of the deadlines is also handled, along with the recharging of the system's power at specified instants. The resulting system is, therefore, feasible and effectively schedulable compared to mono-criterion algorithms. This contribution also allows the definition of precise recharging instants, which supports the new concept of extending the lifetime of the system.
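To make the multi-criteria idea concrete, here is a hedged sketch of how a new priority could blend the three factors the abstract names. The weights, normalization, and scoring rule are purely illustrative assumptions; PEDASA's actual procedure is not described at this level of detail in the abstract.

```python
def blended_priority(tasks, w_fixed=0.4, w_deadline=0.4, w_energy=0.2):
    """tasks: list of dicts with 'prio' (higher = more important),
    'deadline' (s) and 'energy' (J per job); returns tasks ranked by a
    weighted blend of the three criteria (weights are assumptions)."""
    max_p = max(t["prio"] for t in tasks)
    max_d = max(t["deadline"] for t in tasks)
    max_e = max(t["energy"] for t in tasks)
    def score(t):
        return (w_fixed * t["prio"] / max_p
                + w_deadline * (1 - t["deadline"] / max_d)  # sooner = higher
                + w_energy * (1 - t["energy"] / max_e))     # cheaper = higher
    return sorted(tasks, key=score, reverse=True)

tasks = [{"id": "t1", "prio": 3, "deadline": 50, "energy": 2.0},
         {"id": "t2", "prio": 1, "deadline": 10, "energy": 0.5},
         {"id": "t3", "prio": 2, "deadline": 30, "energy": 1.0}]
print([t["id"] for t in blended_priority(tasks)])  # e.g. ['t2', 't3', 't1']
```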

Maroua Gasmi, Olfa Mosbahi, Mohamed Khalgui, Luis Gomes

New Pack Oriented Solutions for Energy-Aware Feasible Adaptive Real-Time Systems

This paper addresses the management of task execution for battery-powered real-time reconfigurable systems. In this context, one of the major problems concerns the management of battery life between two recharges. For this type of system, a reconfiguration scenario means the addition, removal or update of tasks in order to manage the whole system at the occurrence of hardware/software faults, or to improve its performance at run-time. When such a scenario is applied, the system risks a fatal increase in energy consumption, a violation of real-time constraints, or memory saturation. To prevent these problems during execution, a new scheduling strategy is necessary. Our proposal is based on the definition of packs of tasks and the management of the different parameters of these packs. For each reconfiguration scenario, modifications are performed on pack/task parameters in order to respect the memory, real-time and energy constraints.

Aymen Gammoudi, Adel Benzina, Mohamed Khalgui, Daniel Chillet

New Solutions for Useful Execution Models of Communicating Adaptive RA2DL

The paper deals with adaptive component-based control systems following the Reconfiguration Architecture Analysis and Design Language (denoted RA2DL). A system is assumed to be composed of a network of RA2DL components in coordination. When a fault occurs in the plant, an RA2DL component has several problems to solve, such as the management of the reconfiguration flow, the correction of execution, the synchronization of reconfiguration with the other RA2DL components, and the coordination between them. A correction is therefore proposed to improve RA2DL with three layers: the first is the Middleware Reconfiguration (MR) layer, which manages the reconfiguration of RA2DL; the second is the Execution Controller (EC), which describes the executable and reconfigurable part of RA2DL; and the third is the Middleware Synchronization (MS) layer for synchronous reconfigurations. When the system is distributed over a network of RA2DL components, we propose a coordination method between them using well-defined matrices to allow feasible and coherent reconfigurations. A tool was developed to simulate our approach. All the contributions of this work are applied to a case study dealing with IEEE 802.11 Wireless LAN.

Farid Adaili, Olfa Mosbahi, Mohamed Khalgui, Samia Bouzefrane

Requirement Engineering, High-Assurance and Testing System

Frontmatter

Architectural Specification and Analysis of the Aegis Combat System

Software architecture is nowadays considered a highly important design activity, as it enables the analysis of system behaviours and the detection of design errors before they propagate into the implementation. Many architecture description languages have been developed so far that focus on analysing software architectures. However, these languages require the use of process algebras for specifying system behaviours, which practitioners generally find unfamiliar. XCD (Connector-centric Design) is one of the most recent languages, and is instead based on the well-known Design-by-Contract approach. In this paper, XCD is illustrated in architectural modelling and analysis via the Aegis Combat System case-study. With the Aegis system, the aim is to show how one of the most common design errors, i.e., deadlocking components, can be caught in XCD and prevented in a modular way. In the paper, XCD is also compared with Wright, one of the most influential architecture description languages, with which Aegis has also been specified and analysed for deadlock.

Mert Ozkaya

Visualization of Checking Results for Graphical Validation Rules

Graphically represented Business Process Models (BPMs) are common artifacts in documentation as well as in the early phases of (software) development processes. The Graphical Computation Tree Logic (G-CTL) is a notation for defining formal graphical validation rules on the same level of abstraction as the BPMs, allowing the specification of high-level requirements regarding the content level of the BPMs. The research tool Business Application Modeler (BAM) enables the automatic validation of BPMs against G-CTL rules. While details of the validation procedure are hidden from the user, the checking results need to be presented adequately. In this contribution, we present and discuss methods for the visualization and analysis of checking results in the context of G-CTL-based validations. We elaborate on how artifacts generated during a validation procedure may be used to derive different visualizations, and we show how these methods can be combined into more expressive visualizations.

Sören Witt, Sven Feja, Christian Hadler, Andreas Speck, Elke Pulvermüller

Processor Rescue: Safe Coding for Hardware Aliasing

What happens if a Mars lander takes a cosmic ray through the processor and thereafter 1 + 1 = 3? Coping with the fault is feasible but requires the numbers 2 and 3 to be treated as indistinguishable for the purposes of arithmetic, while as memory addresses they continue to access different memory cells. If a program is to run correctly in this altered environment it must be prepared to see address 2 sporadically access data in memory cell 3, which is known as 'hardware aliasing'. This paper describes a programming discipline that allows software to run correctly in a hardware aliasing context, provided the aliasing is underpinned by hidden determinism.
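A toy model of the 'hidden determinism' assumption, as I read it: which cell an address reaches depends deterministically on how the address was computed, not only on its numeric value. The discipline is then to derive each object's address once and reuse that exact computation. The classes below are an illustrative sketch, not the paper's formalism.

```python
class AliasingMemory:
    """Toy memory where the accessed cell depends on an address's value
    AND its computational provenance (hidden determinism)."""
    def __init__(self):
        self.cells = {}
    def _key(self, addr):
        # Same computation -> same provenance -> same cell, every time.
        return (addr.value, addr.provenance)
    def store(self, addr, v): self.cells[self._key(addr)] = v
    def load(self, addr): return self.cells.get(self._key(addr))

class Addr:
    def __init__(self, value, provenance):
        self.value, self.provenance = value, provenance
    def offset(self, n):  # deriving an address extends its provenance
        return Addr(self.value + n, self.provenance + (("+", n),))

mem = AliasingMemory()
base = Addr(0, ("base",))
p = base.offset(2)            # address 2, computed as base+2
q = base.offset(1).offset(1)  # also address 2, but computed differently
mem.store(p, "hello")
print(mem.load(p))   # 'hello' -- same computation, same cell
print(mem.load(q))   # None    -- equal value, different cell (aliasing)
```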

Peter T. Breuer, Jonathan P. Bowen, Simon Pickin

Automatic Test Data Generation Targeting Hybrid Coverage Criteria

Software used in safety-critical domains such as aviation and automotive has to be rigorously tested. Since exhaustive testing is not feasible, Modified Condition/Decision Coverage (MC/DC) has been introduced as an effective structural coverage alternative. However, studies have shown that complementing test cases satisfying MC/DC so that they also satisfy Boundary Value Analysis (BVA) increases the bug finding rate. Hence, the industry adapted its testing processes to accommodate both. Satisfying these coverage requirements manually is very expensive, and as a result many efforts have been made to automate this task. Genetic algorithms (GA) have shown their effectiveness in this area so far. We propose an approach employing GA techniques and targeting hybrid coverage criteria to increase BVA coverage in addition to MC/DC.
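A hedged sketch of what a hybrid fitness might look like: a branch-distance term drives the search toward satisfying a condition (as in search-based MC/DC test generation), while a boundary term rewards values sitting exactly on the boundary (BVA). The predicate, weights, and the random search standing in for a full GA are all illustrative assumptions.

```python
import random

def branch_distance(a, b):
    """Distance to making 'a < b' true (0 when already true)."""
    return 0.0 if a < b else (a - b) + 1.0

def boundary_distance(a, b):
    """Distance to the boundary of 'a < b' (0 when a == b - 1, for ints)."""
    return abs((b - a) - 1)

def hybrid_fitness(a, b, w_bva=0.5):
    # Lower is better: satisfy the branch AND sit on its boundary.
    return branch_distance(a, b) + w_bva * boundary_distance(a, b)

best = min(
    ((random.randint(-100, 100), random.randint(-100, 100))
     for _ in range(10_000)),
    key=lambda ab: hybrid_fitness(*ab),
)
print(best, hybrid_fitness(*best))  # ideally a boundary pair like (k, k+1)
```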

Ahmed El-Serafy, Cherif Salama, Ayman Wahba

Optimization of Generated Test Data for MC/DC

Structural coverage criteria are employed in testing according to the criticality of the application domain. Modified Condition/Decision Coverage (MC/DC) comes highly recommended by multiple standards, including ISO 26262 and DO-178C in the automotive and avionics industries respectively. Yet, it is time- and effort-consuming to construct and maintain test suites that achieve high MC/DC coverage percentages. Search-based approaches have been used to automate this task due to the problem's complexity. Our results show that the generated test data can be minimized while maintaining the same coverage by considering that a single test datum can satisfy multiple MC/DC test targets. This improves the maintainability of the generated test suite and saves the resources required to define the expected outputs and any part of the testing process that is repeated per test case.
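The observation that one test datum can satisfy several MC/DC targets makes minimization a set-cover problem. Below is a minimal greedy sketch under that framing; the test-to-target mapping is invented, and the paper's actual minimization procedure may differ.

```python
def minimize_suite(coverage):
    """coverage: dict test_id -> set of MC/DC targets it satisfies.
    Greedy set cover: repeatedly keep the test covering the most
    still-uncovered targets."""
    remaining = set().union(*coverage.values())
    chosen = []
    while remaining:
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        gained = coverage[best] & remaining
        if not gained:
            break
        chosen.append(best)
        remaining -= gained
    return chosen

suite = {
    "t1": {"c1/T", "c1/F"}, "t2": {"c1/T", "c2/T"},
    "t3": {"c2/F"},         "t4": {"c2/T", "c2/F"},
}
print(minimize_suite(suite))  # e.g. ['t1', 't4'] keeps all four targets
```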

Ghada El-Sayed, Cherif Salama, Ayman Wahba

Social Networks and Big Data

Frontmatter

Hybridized Feature Set for Accurate Arabic Dark Web Pages Classification

Security informatics and computational intelligence are gaining importance in detecting terrorist activities, as extremist groups misuse many of the available Internet services to incite violence and hatred. However, the inadequate performance of statistics-based computational intelligence methods reduces the efficiency of intelligent techniques in supporting counterterrorism efforts, and limits the opportunities for early detection of potential terrorist activities. In this paper, we propose a feature set hybridization method, based on feature selection and extraction methods, for accurate content classification of Arabic dark web pages. The proposed method hybridizes the feature sets so that the generated feature set contains fewer features while achieving higher classification performance. A dataset selected from the Dark Web Forum Portal (DWFP) is used to test the performance of the proposed method, which is based on Term Frequency - Inverse Document Frequency (TFIDF) as the feature selection method on the one hand, and Random Projection (RP) and Principal Component Analysis (PCA) feature extraction methods on the other. Classification results using the Support Vector Machine (SVM) classifier show that high classification performance is achieved by the hybridization of TFIDF and PCA, with F1 and accuracy reaching 99 %.
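One simple way to chain the stages named above (TF-IDF selection, PCA extraction, SVM classification) is sketched below with scikit-learn. This is a plain pipeline, not necessarily the paper's exact hybridization scheme, and the toy corpus stands in for the DWFP data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

# Placeholder documents and labels (1 = extremist content, 0 = benign).
docs = ["dark web forum post one", "benign page text",
        "another forum post", "ordinary web content"]
labels = [1, 0, 1, 0]

pipeline = make_pipeline(
    TfidfVectorizer(max_features=1000),           # TF-IDF feature selection
    FunctionTransformer(lambda x: x.toarray()),   # densify sparse matrix
    PCA(n_components=2),                          # PCA feature extraction
    SVC(kernel="linear"),                         # SVM classifier
)
pipeline.fit(docs, labels)
print(pipeline.predict(["new forum post"]))
```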

Thabit Sabbah, Ali Selamat

Accelerating Keyword Search for Big RDF Web Data on Many-Core Systems

Resource Description Framework (RDF) is the commonly used format for Semantic Web data. Nowadays, huge amounts of data on the Internet in the RDF format are used by search engines to provide answers to users' queries. Querying big data needs suitable search methods supported by very high processing power, because traditional, sequential keyword matching on a semantic web server may take a prohibitively long time. In this paper, we aim at accelerating search in big RDF data by exploiting modern many-core architectures based on Graphics Processing Units (GPUs). We develop several implementations of the RDF search for many-core architectures using two programming approaches: OpenMP for systems with CPUs and CUDA for systems comprising CPUs and GPUs. Experiments show that our approach is 20.5 times faster than the sequential search.
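For intuition only, here is a CPU-parallel analogue of the data-parallel pattern the paper accelerates with OpenMP and CUDA: the triple store is split into chunks that are scanned in parallel. The triples and keyword are placeholders; this sketch makes no claim about the paper's kernels.

```python
from concurrent.futures import ProcessPoolExecutor

def scan_chunk(args):
    """Scan one chunk of triples for a keyword (data-parallel unit)."""
    chunk, keyword = args
    return [t for t in chunk if any(keyword in field for field in t)]

def parallel_search(triples, keyword, workers=4):
    n = max(1, len(triples) // workers)
    chunks = [triples[i:i + n] for i in range(0, len(triples), n)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(scan_chunk, [(c, keyword) for c in chunks])
    return [t for part in results for t in part]

if __name__ == "__main__":
    triples = [("ex:alice", "foaf:knows", "ex:bob"),
               ("ex:bob", "rdf:type", "foaf:Person")] * 1000
    print(len(parallel_search(triples, "foaf:Person")))
```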

Chidchanok Choksuchat, Chantana Chantrapornchai, Michael Haidl, Sergei Gorlatch

Finding Target Users Interested in Regional Areas Using Online Advertising and Social Network Services

There is a shift in the Japanese population from rural areas to urban areas. As a result, the economic power of rural areas is decreasing. Various combinations of information technology and tourism have been applied to preserve the historic and cultural heritage of rural areas, with limited success. One reason for this lack of success is that the relevant government or regional community does not concentrate on the target users who are interested in the area. This study proposes methods to discover the target users who are interested in regional areas. We find the target users via online advertising and social network services. The results show that our proposed methods are effective.

Jun Sasaki, Shizune Takahashi, Li Shuang, Issei Komatsu, Keizo Yamada, Masanori Takagi

Semi-automatic Detection of Sentiment Hashtags in Social Networks

Communication in social networks is often carried out in messages of limited size, and in some cases, as with Twitter, the limit is imposed by the social network itself. Therefore, it can be hard, even for a human expert, to determine the correct sentiment based on the message alone. Sentiment hashtags present one way to help determine tweet polarity better. In this paper we present a way to semi-automatically detect sentiment hashtags from an initial tweet sentiment analysis and then recalculate tweet polarity to improve accuracy. The presented methodology helps jumpstart sentiment analysis research.
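A minimal sketch of the semi-automatic idea as described: score each hashtag by the polarities of the tweets it appears in, promote strongly skewed hashtags to sentiment hashtags, and then reuse them to re-score tweets. The thresholds and toy data are assumptions, not the paper's parameters.

```python
from collections import defaultdict

def detect_sentiment_hashtags(tweets, min_count=2, threshold=0.6):
    """tweets: (text, polarity) pairs with polarity in [-1, 1]."""
    scores = defaultdict(list)
    for text, polarity in tweets:
        for tok in text.split():
            if tok.startswith("#"):
                scores[tok].append(polarity)
    # Keep hashtags that are frequent enough and strongly polarized.
    return {tag: sum(ps) / len(ps) for tag, ps in scores.items()
            if len(ps) >= min_count and abs(sum(ps) / len(ps)) >= threshold}

tweets = [("great game #win", 0.8), ("so happy #win", 0.7),
          ("awful day #fail", -0.9), ("missed the bus #fail", -0.6)]
print(detect_sentiment_hashtags(tweets))
# {'#win': 0.75, '#fail': -0.75} -> reuse these to re-score new tweets
```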

Gajo Petrovic, Hamido Fujita

Cloud Computing and the Semantic Web

Frontmatter

Ontology-Based Technology for Development of Intelligent Scientific Internet Resources

The paper discusses the main features of a technology for the development of subject-based Intelligent Scientific Internet Resources (ISIR), which provide content-based access to systematized scientific knowledge and information resources related to a certain knowledge area, and to facilities for their intelligent processing. An important merit of an ISIR is its ability to appreciably reduce the time required to access and analyze information, thanks to the accumulation, directly in the ISIR content, of semantic descriptions of the basic entities of the knowledge area being modeled, the Internet resources relevant to this area, and the information processing facilities (including web services) used in it. The specific features of the technology are the use of ontology and semantic network formalisms and its orientation toward experts, i.e. specialists in the knowledge areas for which ISIRs are built.

Yury Zagorulko, Galina Zagorulko

A Science Mapping Analysis of the Literature on Software Product Lines

To compete in the global marketplace, manufacturers try to differentiate their products by focusing on individual customer needs. Fulfilling this goal requires companies to shift from mass production to mass customization. In the context of software development, software product line engineering has emerged as a cost-effective approach to developing families of similar products by supporting high levels of mass customization. This paper analyzes the literature on software product lines from its beginnings to 2014. A science mapping approach is applied to identify the most researched topics and how the interest in those topics has evolved along the way.

Ruben Heradio, Hector Perez-Morago, David Fernandez-Amoros, Francisco Javier Cabrerizo, Enrique Herrera-Viedma

Asymmetry Theory and Asymmetry Based Parsing

We consider the properties of an integrated competence-performance model where the grammar generates the asymmetrical relations underlying linguistic expressions and the parser recovers these asymmetries. This model relates Asymmetry Theory, which is a theory of the Language Faculty, and asymmetry-based parsing, which is a theory of language use. We discuss the derivation and parsing of morphological and syntactic argument structure dependencies below and above the word level in order to show that the grammar generates these dependencies and that the parser recovers them. The integrated competence-performance model is sensitive to the configurational and featural asymmetries underlying linguistic expressions and contributes to reducing complexity. Lastly, we draw consequences for natural language understanding.

Anna Maria Di Sciullo

Reuse of Rules in a Mapping-Based Integration Tool

In the Internet of Things scenario, the integration of devices with business application systems requires bridging the differences between the schemas of transmitted and received data. Further, different device configurations may introduce variation into the data schema of a single device. Currently, mitigating this schema variation problem requires a manual adaptation of the data transformations between the devices and business application systems. In this paper, we propose an algorithm that uses previously created transformations to automatically adjust new ones to schema variations. The algorithm only considers isolated schema element information in order to find possible candidates in a transformation repository. Schema elements can be compared using multiple comparators, and the results are combined into a final similarity metric. Both the algorithm and the repository are implemented as a module of AnyMap, a mapping-based integration tool. We also present a case study on which we evaluated the approach.

Vladimir Dimitrieski, Milan Čeliković, Nemanja Igić, Heiko Kern, Fred Stefan

Artificial Intelligence Techniques and Intelligent System Design

Frontmatter

An Arbitrary Heuristic Room Matching Algorithm in Obtaining an Enhanced Initial Seed for the University Course Timetabling Problem

The curriculum-based course timetabling problem is a subset of the university course timetabling problem, which is often regarded as both NP-hard and NP-complete. The problem concerns the assignment of lecturer-course pairs to the available teaching space in an academic institution. The curriculum-based course timetabling problem confronts a multi-dimensional search space and matrices of high conflict density, which impede the search for an improved solution. In this paper, the authors propose an arbitrary heuristic room matching algorithm that attempts to improve the initial seed of the curriculum-based course timetabling problem. The objective is to provide a reasonably advantageous starting point for any subsequent improvement phase, and the results obtained indicate that the proposed matching algorithm provides very promising results, as the fitness score of the solution is significantly enhanced within a short period of time.

Teoh Chong Keat, Habibollah Haron, Antoni Wibowo, Mohd. Salihin Ngadiman

Evaluating Extant Uranium: Linguistic Reasoning by Fuzzy Artificial Neural Networks

This paper aims at estimating extant uranium with a soft computing approach. The rising contribution of this resource to the energy cycle is the motivation for this research. Untidy relations and uncertain values in geological data increase the complexity of estimating extant uranium, and thus a proper approach is required. This paper applies artificial neural networks (ANNs), in both crisp and fuzzy forms, with the use of genetic algorithms (GAs). ANNs trace the untidy relations, and fuzzy artificial neural networks (FANNs) do so even under uncertain circumstances, while GAs search for the best performance of these networks. We use type-3 FANNs against conventional ANNs to compare the results, and the Lilliefors and Pearson statistical tests validate them on two geological datasets. The results show that type-3 FANNs are preferable for obtaining the desired outcome with uncertain values, while ANNs are unable to deliver it.

M. Reza Mashinchi, Ali Selamat, Suhaimi Ibrahim

A Method for Class Noise Detection Based on K-means and SVM Algorithms

One of the techniques for improving the accuracy of an induced classifier is noise filtering. A classifier's prediction performance is affected by noisy datasets used in its induction. Therefore, it is very important to detect and remove the noise in order to increase the classification accuracy. This paper proposes a model for noise detection in datasets using k-means and support vector machine (SVM) techniques. The proposed model has been tested using datasets from the University of California, Irvine machine learning repository. Experimental results reveal that the proposed model can improve data quality and increase classification accuracy.
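One plausible reading of such a model, sketched with scikit-learn: flag an instance as class noise when both its k-means cluster majority and a cross-validated SVM disagree with its given label. The choice of k, the agreement rule, and the synthetic data are assumptions, not the paper's exact design.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=200, centers=2, random_state=0)
y_noisy = y.copy()
y_noisy[:10] = 1 - y_noisy[:10]                 # inject label noise

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# Majority label of each k-means cluster.
maj = {c: np.bincount(y_noisy[clusters == c]).argmax() for c in set(clusters)}
cluster_vote = np.array([maj[c] for c in clusters])

svm_vote = cross_val_predict(SVC(), X, y_noisy, cv=5)
noise_mask = (cluster_vote != y_noisy) & (svm_vote != y_noisy)
print(f"flagged {noise_mask.sum()} suspected noisy labels")
```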

Zahra Nematzadeh, Roliana Ibrahim, Ali Selamat

GDM-VieweR: A New Tool in R to Visualize the Evolution of Fuzzy Consensus Processes

With the incorporation of Web 2.0 frameworks, the complexity of decision making situations has increased exponentially, in many cases involving many experts and a potentially huge number of different alternatives. In the literature we can find a great number of methodologies to assist multi-person decision making. However, these classical approaches are not prepared to deal with environments of huge complexity such as the Web 2.0, and there is a lack of tools that support decision processes by providing graphical information. This is the context in which data visualization plays a key role. Therefore, the main objective of this contribution is to present an open source tool developed in R to provide quick insight into the evolution of the decision making process by means of meaningful graphical representations. The tool allows its users to convey ideas effectively, providing insights into rather sparse and complex data sets by communicating their key aspects in a more intuitive way and contributing to the decision makers' engagement in the process.

Raquel Ureña, Francisco Javier Cabrerizo, Francisco Chiclana, Enrique Herrera-Viedma

Swarm Intelligence in Evacuation Problems: A Review

In this paper, the authors introduce swarm intelligence algorithms (ACO and PSO) to determine the optimum path during an evacuation process. Different PSO algorithms are compared when applied to an evacuation process, and the results reveal important aspects, as detailed in the following.

Guido Guizzi, Francesco Gargiulo, Liberatina Carmela Santillo, Hamido Fujita

A Framework for a Decision Tree Learning Algorithm with Rough Set Theory

In this paper, we improve the conventional decision tree learning algorithm using rough set theory. First, our approach obtains the upper approximation for each class. Next, it generates a decision tree from each upper approximation. Each decision tree indicates whether a data item is in its class or not. Our approach classifies an unlabeled data item using every decision tree and integrates their outputs to decide the class of the unlabeled data item. We evaluated our method using mechanically prepared datasets that differ in the proportion of overlap between classes. The experimental results show that our approach is better than the conventional approach when the dataset has a high proportion of class overlap and few data items sharing the same set of attributes. We expect that better classification rules can be obtained from uncertain and dispersed datasets using our approach. However, we did not use enough datasets to demonstrate this advantage in this experiment. In order to evaluate and enhance our approach, we will analyze various large datasets with it.
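A sketch of the first step described above: the rough-set upper approximation of a class is the union of all indiscernibility classes (groups of rows with identical attribute values) that intersect the target class. Training one decision tree per upper approximation, as the abstract describes, would follow this step and is omitted here.

```python
from collections import defaultdict

def upper_approximation(rows, labels, target):
    """rows: list of attribute tuples; labels: class per row.
    Returns indices of the upper approximation of `target`."""
    groups = defaultdict(list)              # indiscernibility classes
    for i, row in enumerate(rows):
        groups[row].append(i)
    upper = set()
    for members in groups.values():
        if any(labels[i] == target for i in members):
            upper.update(members)           # group intersects the class
    return sorted(upper)

rows = [("a", 1), ("a", 1), ("b", 2), ("b", 3)]
labels = ["yes", "no", "no", "yes"]
print(upper_approximation(rows, labels, "yes"))  # [0, 1, 3]
```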

Masaki Kurematsu, Jun Hakura, Hamido Fujita

Software Development and Integration

Frontmatter

Description and Implementation of Business Logic for End-User-Initiative Development

The development of Web applications should be supported by business professionals, since Web applications must be modified frequently based on their needs. In our recent studies, utilizing the three-tier architecture of user interfaces, business logic and databases, Web applications are developed using a domain-specific application framework and visual modeling technologies. The description and implementation of business logic is a key technology. In this paper, an approach in which business logic is implemented as stored procedures within the database is introduced. The four basic stored procedures {select, insert, update and delete} for each table of a reuse support system are prepared. The other stored procedures are generated by modifying these basic stored procedures.
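In the spirit of the approach above, the sketch below generates the four basic stored procedures for a table, from which derived procedures could be produced by modification. The SQL dialect, naming scheme, and uniform INT typing are assumptions for brevity, not the authors' actual templates.

```python
def basic_procedures(table, columns, key):
    """Emit select/insert/update/delete stored-procedure templates
    for one table (all parameters typed INT for brevity; assumption)."""
    cols = ", ".join(columns)
    params = ", ".join(f"IN p_{c} INT" for c in columns)
    sets = ", ".join(f"{c} = p_{c}" for c in columns if c != key)
    return {
        "select": f"CREATE PROCEDURE {table}_select(IN p_{key} INT)\n"
                  f"  SELECT {cols} FROM {table} WHERE {key} = p_{key};",
        "insert": f"CREATE PROCEDURE {table}_insert({params})\n"
                  f"  INSERT INTO {table} ({cols}) VALUES "
                  f"({', '.join('p_' + c for c in columns)});",
        "update": f"CREATE PROCEDURE {table}_update({params})\n"
                  f"  UPDATE {table} SET {sets} WHERE {key} = p_{key};",
        "delete": f"CREATE PROCEDURE {table}_delete(IN p_{key} INT)\n"
                  f"  DELETE FROM {table} WHERE {key} = p_{key};",
    }

for name, sql in basic_procedures("item", ["id", "owner", "state"], "id").items():
    print(f"-- {name}\n{sql}\n")
```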

Takeshi Chusho, Jie Xu

Combining of Kanban and Scrum Means with Programmable Queues in Designing of Software Intensive Systems

The existing problem of the extremely low success rate in the development of software intensive systems (SISs) is a reason for the ongoing search for innovations in software engineering. One of the promising areas of this search is the continuous improvement of agile project management methods, the most popular of which are bound to the Kanban and Scrum approaches. The paper presents a way of combining Kanban and Scrum means with programmable queues of project tasks that are implemented by designers in real time. In any queue, the elements represent the states of the corresponding tasks in their conceptually-algorithmic solutions. Interactions of designers with the queues provide parallel and pseudo-parallel work with tasks. Such a way of working promotes increased reliability of operational planning. The specialized toolkit WIQA (Working In Questions and Answers) supports the offered version of project management.

Petr Sosnin

Efficient Supply Chain Management via Federation-Based Integration of Legacy ERP Systems

The development of SCM systems is a difficult activity, since it involves integrating critical business flows both within and among participating companies. The inherent difficulty of the problem is exacerbated by the business constraint (which almost invariably applies in the real world) that the investments made by individual companies throughout the years must be preserved. This maps to major design constraints, since SCM systems must be built around the pre-existing ICT infrastructures of the individual companies and, also importantly, without affecting local policies. We propose a federation-based approach to the seamless and effective integration of legacy enterprise information systems into a unified SCM system. The proposed solution is implemented using a combination of open source BPM and ERP products, and validated with respect to a real-world use case taken from a research activity (namely, the GLOB-ID project) conducted cooperatively by academic and industrial parties.

Luigi Coppolino, Salvatore D’Antonio, Carmine Massei, Luigi Romano

BizDevOps: Because DevOps is Not the End of the Story

DevOps, the shared service responsibility of software development and IT operations within the IT department, promises faster delivery and fewer conflicts of competence within software development processes, and is currently being implemented by many companies. However, the increasing business responsibility of IT, the increasing IT competence in business departments and the standardization of IT operations require a restructuring that goes beyond the boundaries of the IT department.

The logical consequence is the BizDevOps concept: Business, Development and Operations work together in software development and operations, creating a consistent responsibility from business over development to operations.

In this paper we draft a BizDevOps approach by extending the existing DevOps approach with techniques from the area of End User Software Engineering. We present a software platform to support this approach. Based on a case study at a large reinsurance company, we share our experiences from using both the approach and the platform in practice.

Volker Gruhn, Clemens Schäfer

Modeling Tools for Social Coding

In recent years, the social coding paradigm has become commonly used in software development, taking advantage of version control systems and tracking functions. However, most social coding platforms do not provide modeling tools that support the creation of documents for the corresponding products. In the present paper, we propose modeling tools for social coding. The tools are based on hybrid editors, where different experts on a project team can use appropriate input methods to modify features of software components. These editors allow users to manipulate both a visual construct in a high-level representation and the corresponding texts in the low-level format. The advantages of these approaches are also discussed through a case study and its evaluation.

Mirai Watanabe, Yutaka Watanobe, Alexander Vazhenin

Security and Software Methodologies for Reliable Software Design

Frontmatter

A Change Impact Analysis Tool: Integration Between Static and Dynamic Analysis Techniques

Accepting too many software change requests could contribute to expense and delay in project delivery. On the other hand, rejecting changes may increase customer dissatisfaction. Software project management might use a reliable estimation of potentially impacted artifacts to decide whether to accept or reject the changes. In the software development phase, the assumption that all classes in the class artifact are completely developed is impractical compared to the software maintenance phase, because some classes in the class artifact are still under development or only partially developed. This paper is a continuation of our previous work on combining static and dynamic analysis techniques for impact analysis. We have converted the approach into an automated tool called CIAT (Change Impact Analysis Tool). The significant achievements of the tool are demonstrated through an extensive experimental validation using several case studies. The experimental analysis shows an improvement in accuracy over current impact analysis results.

Nazri Kama, Saiful Adli Ismail, Kamilia Kamardin, Norziha Megat Zainuddin, Azri Azmi, Wan Shafiuddin Zainuddin

On the Probabilistic Verification of Time Constrained SysML State Machines

The software and hardware design of complex systems is becoming difficult to maintain, and more time and effort are spent on verification than on construction. One of the reasons is the number of constraints that must be satisfied by the system. Recently, formal methods such as probabilistic approaches have gained great importance in real-time systems verification, including avionic systems and industrial process controllers. In this paper, we propose a probabilistic verification framework for SysML state machine diagrams extended with time and probability features. The approach consists in mapping SysML state machine diagrams to the PRISM input language. To ensure the correctness of the proposed approach, we capture the semantics of both SysML state machine diagrams and their generated PRISM code. We demonstrate the approach's efficiency by analyzing PCTL temporal logic properties on an ATM case study.

Abdelhakim Baouya, Djamal Bennouar, Otmane Ait Mohamed, Samir Ouchani

HMAC Authentication Mechanisms in a Grid Computing Environment Using Gridsim Toolkit

Recently, authentication in grid computing environments has aimed to ensure the security of local and remote entities in the surroundings. The existing authentication mechanisms mostly use a public key infrastructure (PKI), but the potential PKI vulnerabilities and implementation constraints still cannot be avoided. In order to satisfy the security needs of grid computing environments, this paper proposes an authentication mechanism using HMAC (a hash-based message authentication code) to secure the user identification, which bears the characteristic of resistance to public key replacement attacks; the experiments are run in the GridSim toolkit simulator.
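A minimal sketch of HMAC-based user identification using Python's standard library: the client tags each request with an HMAC over its identity and a timestamp; the verifier recomputes and compares in constant time. Key provisioning, replay windows, and GridSim integration are out of scope, and the scheme below is a generic illustration rather than the paper's exact protocol.

```python
import hmac, hashlib, time

SHARED_KEY = b"per-user secret provisioned out of band"  # assumption

def make_token(user_id: str) -> tuple[str, str, str]:
    ts = str(int(time.time()))
    msg = f"{user_id}|{ts}".encode()
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    return user_id, ts, tag

def verify_token(user_id: str, ts: str, tag: str, max_age=300) -> bool:
    if abs(time.time() - int(ts)) > max_age:      # reject stale requests
        return False
    expected = hmac.new(SHARED_KEY, f"{user_id}|{ts}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)     # constant-time compare

uid, ts, tag = make_token("grid-user-42")
print(verify_token(uid, ts, tag))   # True
print(verify_token(uid, ts, "00"))  # False
```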

Saiful Adli Ismail, Md Asri Ngadi, Johan Mohd Sharif, Nazri Kama, Othman Mohd Yusop

Hermes: A Targeted Fuzz Testing Framework

Security assurance cases (security cases) are used to represent claims for evidence-based assurance of security properties in software. A security case uses evidence to argue that a particular claim is true, e.g., that buffer overflows cannot happen. Evidence may be generated with a variety of methods. Random negative testing (fuzz testing) has become a popular method for creating evidence for the security of software. However, traditional fuzz testing is undirected and provides only weak evidence for specific assurance concerns, unless significant resources are allocated for extensive testing. This paper presents a method to apply fuzz testing in a targeted way to more economically support the creation of evidence for specific security assurance cases. Our experiments produced results with target code coverage comparable to an exhaustive fuzz test run while significantly reducing test execution time compared to exhaustive methods. These results provide specific evidence for security cases and thus improved assurance.
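To illustrate the general shape of targeted fuzzing (not Hermes itself), the sketch below mutates inputs but only keeps candidates that make progress toward a target location, so effort concentrates on the assurance claim of interest. The `distance_to_target` function is a stand-in for real coverage instrumentation, and the magic-bytes goal is invented.

```python
import random

def mutate(data: bytes) -> bytes:
    b = bytearray(data)
    i = random.randrange(len(b))
    b[i] = random.randrange(256)
    return bytes(b)

def distance_to_target(data: bytes) -> int:
    """Stand-in for instrumentation: how far execution stops short of the
    target code region (0 means the target was reached)."""
    magic = b"FUZZ"
    return sum(x != y for x, y in zip(data[:4], magic)) + abs(len(data) - 8)

def targeted_fuzz(seed: bytes, budget=20_000):
    best, best_d = seed, distance_to_target(seed)
    for _ in range(budget):
        cand = mutate(best)
        d = distance_to_target(cand)
        if d < best_d:                 # keep only progress toward target
            best, best_d = cand, d
            if d == 0:
                return best            # evidence: target is reachable
    return None

print(targeted_fuzz(b"AAAAAAAA"))
```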

Caleb Shortt, Jens Weber

New Middleware for Secured Reconfigurable Real-Time Systems

The paper deals with secured reconfigurable real-time embedded systems that should adapt to their environment according to user requirements. For various reasons, a reconfiguration scenario is a run-time automatic operation that allows the addition, removal or update of operating system tasks. These tasks are secured and should meet real-time constraints; security mechanisms are assumed to be executed to protect them. Nevertheless, when a reconfiguration is applied to add new tasks and their related mechanisms, some deadlines can be violated. We propose a new middleware that constructs a new execution model of the system after reconfiguration scenarios. It reduces the number of executions of mechanisms that are shared by several tasks, or reduces the security level of others. The middleware is based on a multi-agent model and is composed of a Reconfiguration Agent that controls reconfigurations, a Security Agent that controls the security of the system, a Scheduling Agent that constructs the new execution model, and an Execution Agent that applies the new model. The paper's contribution is implemented and applied to a case study.

Rim Idriss, Adlen Loukil, Mohamed Khalgui

New Software Techniques in Image Processing and Computer Graphics

Frontmatter

Real-Time Light Shaft Generation for Indoor Rendering

Realistic natural phenomena such as light shafts for indoor and outdoor environments are used in various applications. The main issue with existing light shaft algorithms is insufficient quality in real-time rendering. In this paper, we address this issue by proposing a hybrid technique based on widely used shadow generation techniques. Shadow maps are used to recognize the silhouette of the occluders geometrically. Then, shadow volumes are employed to generate the volume of the light shaft, i.e., the volume between the occluder and the shadow receivers. Finally, light scattering is employed to create the light shaft in real-time rendering. The results are suitable for any indoor rendering environment.

Hoshang Kolivand, Mohd Shahrizal Sunar, Ali Selamat

Motif Correlogram for Texture Image Retrieval

In this paper we present a novel, compact and effective method for extracting texture information from an image. We call this method the Motif Correlogram (MC); it computes the correlation between motif pairs of the same type. The proposed method was evaluated using different metrics commonly used in image retrieval, such as ARP (Average Retrieval Precision), ARR (Average Retrieval Rate) and ANMRR (Average Normalized Modified Retrieval Rank). The proposed scheme was also compared with other texture descriptors, such as Steerable Filters, the Edge Histogram Descriptor (EHD) and two co-occurrence-matrix-based algorithms: the Motif Co-Occurrence Matrix (MCM) and Directional Local Motif XoR Patterns (DLMXoRP). The performance of the proposed method was evaluated using the Kylberg Dataset. The evaluation results show that the proposed texture descriptor improves texture image retrieval performance.

Atoany Nazareth Fierro-Radilla, Gustavo Calderon-Auza, Mariko Nakano-Miyatake, Héctor Manuel Pérez-Meana

Face Recognition Under Bad Illumination Conditions

Accurate face recognition in variable illumination environments has attracted the attention of researchers in recent years, because there are many applications in which these systems must operate under uncontrolled lighting conditions. To this end, several face recognition algorithms have been proposed that include an image enhancement stage before performing the recognition task. However, although the image enhancement stage may improve performance, it also increases the computational complexity of face recognition algorithms. Because this may limit their use in some practical applications, algorithms have recently been developed that intend to provide enough robustness under variable illumination conditions without requiring an image enhancement stage. Among them, the local binary pattern and eigenphases-based schemes are two of the most successful. This paper presents an analysis of the recognition performance of these approaches under varying illumination conditions, with and without image enhancement preprocessing stages. Evaluation results show the robustness of both approaches when they are required to operate in illumination-varying environments.

Daniel Toledo de los Santos, Mariko Nakano-Miyatake, Karina Toscano-Medina, Gabriel Sanchez-Perez, Hector Perez-Meana

A Prototype for Anomaly Detection in Video Surveillance Context

Security has been tightened at major public buildings in the most famous and crowded cities all over the world following the terrorist attacks of recent years, the latest at the Bardo museum in the centre of Tunis. For that reason, video surveillance systems have become more and more essential for detecting and hopefully even preventing dangerous events in public areas. In this paper, we present a prototype for anomaly detection in the video surveillance context. The whole process is described, starting from the video frames captured by sensors/cameras up to the final application of some well-known reasoning algorithms for finding potentially dangerous activities. The conducted experiments confirm the efficiency and effectiveness achieved by our prototype.

F. Persia, D. D’Auria, G. Sperlí, A. Tufano

A Facial Expression Recognition with Automatic Segmentation of Face Regions

This paper proposes a facial expression recognition algorithm which automatically detects and segments the face regions of interest (ROIs), such as the forehead, eyes and mouth. The proposed scheme initially detects the face in the image and segments it into two regions: forehead/eyes and mouth. Next, each of these regions is segmented into N × M blocks, which are characterized using 54 Gabor functions that are correlated with each one of the N × M blocks. Then principal component analysis (PCA) is used for dimensionality reduction. Finally, the resulting feature vectors are fed into a proposed classifier based on clustering techniques, which provides recognition results close to those of the support vector machine (SVM) with much lower computational complexity. The experimental results show that the proposed system provides a recognition rate of about 98 % when only one ROI is used. This recognition rate increases to about 99 % when the feature vectors of all ROIs are concatenated. This allows recognition rates higher than 97 % to be achieved even when one of the two ROIs is totally occluded.
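A hedged sketch of the feature pipeline just described: an ROI is split into blocks, each block is filtered with a bank of Gabor kernels, and PCA reduces the resulting vector. The bank size (the paper uses 54 Gabor functions), block grid, filter parameters, and random ROIs below are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

def gabor_bank(orientations=6, scales=3, ksize=15):
    """Build a small bank of Gabor kernels (parameters assumed)."""
    return [cv2.getGaborKernel((ksize, ksize), sigma=4.0,
                               theta=np.pi * o / orientations,
                               lambd=8.0 / (s + 1), gamma=0.5, psi=0)
            for o in range(orientations) for s in range(scales)]

def block_features(roi, bank, grid=(4, 4)):
    """Split an ROI into grid blocks; one Gabor response mean per kernel."""
    h, w = roi.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = roi[i*bh:(i+1)*bh, j*bw:(j+1)*bw].astype(np.float32)
            feats += [cv2.filter2D(block, -1, k).mean() for k in bank]
    return np.array(feats)

bank = gabor_bank()
rois = [np.random.rand(64, 64).astype(np.float32) for _ in range(10)]
X = np.stack([block_features(r, bank) for r in rois])
X_reduced = PCA(n_components=5).fit_transform(X)   # dimensionality reduction
print(X.shape, "->", X_reduced.shape)
```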

Andres Hernandez-Matamoros, Andrea Bonarini, Enrique Escamilla-Hernandez, Mariko Nakano-Miyatake, Hector Perez-Meana

Automatic Estimation of Illumination Features for Indoor Photorealistic Rendering in Augmented Reality

In this paper, a fast and practical algorithm is presented to estimate multiple light sources from a single indoor scene image in an Augmented Reality environment. The algorithm provides a way to accurately estimate the position, direction, and intensity properties of the light sources in a scene. Unlike other state-of-the-art algorithms, it is able to give accurate results without any substantial analysis of the objects in the scene; it uses an analysis of the saturation channel of the HSV data. The evaluation is done by testing a ground truth dataset of synthetic and real images with known light properties and then comparing the results with other studies in the field.

Hasan Alhajhamad, Mohd Shahrizal Sunar, Hoshang Kolivand

Improving the Efficiency of a Hospital ED According to Lean Management Principles Through System Dynamics and Discrete Event Simulation Combined with Quantitative Methods

The Emergency Department of a hospital has both exogenous and endogenous management problems. The first concern the relationship with the other departments, for which the Emergency Department is a noise element in the planned activities, as it generates unplanned bed occupation. The second strictly depend on the Department's organizational model.

After an intensive study of the Emergency Department of a medium-size Italian hospital, the authors illustrate how, through the use of the Lean Management philosophy combined with quantitative methods, the current situation can be analyzed, and they propose a set of corrective actions. These allow the system efficiency to be increased and both the number of waiting patients and their stay to be reduced. At the same time, the activities are reallocated among the staff, improving their utilization coefficient. Thus the management can assess the validity of the proposed strategies before their possible implementation in the field.

Ilaria Bendato, Lucia Cassettari, Roberto Mosca, Fabio Rolando

Software Applications Systems for Medical Health Care

Frontmatter

Robust Psychiatric Decision Support Using Surrogate Numbers

Decision analytical methods have been utilized and demonstrated to be of use for a broad range of applications in medical contexts, from regular diagnostic strategies and treatment to the evaluation of diagnostic tests, prediction models and benefit-risk assessments. However, a number of issues still remain to be clarified, for instance ease of use, realism of the input data, long-term outcomes and integration into routine clinical work. In particular, many people are unaccustomed or unwilling to express input information with the preciseness and correctness most methods require, i.e., the values need to be "true" in some sense. The common lack of complete information naturally aggravates this problem significantly, and several attempts have been made to resolve this issue. This is not least the case within psychiatric emergency care, where the available information is often of a highly qualitative nature. In this article we suggest the use of so-called surrogate numbers, which have proliferated for a while in the form of ordinal ranking methods for multi-criteria decision making, and show how they can be adapted for use in probability elicitation.
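For readers unfamiliar with surrogate numbers, one common scheme is rank order centroid (ROC) weights: a decision maker only ranks items, and precise surrogate values are derived from the ranking via w_i = (1/N) * sum_{j=i..N} 1/j. The sketch below computes these standard weights; how the paper adapts them to probability elicitation is not reproduced here.

```python
def roc_weights(n: int) -> list[float]:
    """Rank order centroid weights for n ranked items (standard formula)."""
    return [sum(1.0 / j for j in range(i, n + 1)) / n
            for i in range(1, n + 1)]

w = roc_weights(4)
print([round(x, 3) for x in w])  # [0.521, 0.271, 0.146, 0.062]
print(round(sum(w), 6))          # 1.0 -- usable as normalized probabilities
```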

Mats Danielson, Love Ekenberg, Kristina Sygel

Supporting Active and Healthy Ageing by Exploiting a Telepresence Robot and Personalized Delivery of Information

Supporting active and healthy ageing represents an opportunity for improving the quality of life of older citizens while reducing the unsustainable pressure on health systems. The GiraffPlus project aims at improving health quality by offering personalized services for end users on top of a non-invasive, state-of-the-art continuous data-gathering infrastructure. Specifically, the system collects elderly people's daily behaviour and physiological measures from distributed sensors in living environments. In addition, GiraffPlus organizes the gathered information so as to provide customizable visualization and monitoring services to the different users of the system. This paper describes the latest results achieved within the project, focusing specifically on the interactive services for the users.

Amedeo Cesta, Gabriella Cortellessa, Riccardo De Benedictis, Domenico M. Pisanelli

A Conceptual Model of Human Behaviour in Socio-technical Systems

The growing percentage of incidents connected with human error in several industries has led to the investigation of the factors influencing failures, so that many methods, the so-called Human Reliability Analysis (HRA) methods, have been developed in order to quantify human error probabilities. In line with the needs of the latest generation of these methods (i.e. dynamic HRA methods), and in order to overcome their shortcomings in modelling the dynamic nature of human performance, this paper adopts a System Dynamics approach to highlight the relationships among the factors that influence human performance. Starting from a literature review of human error taxonomies, a Causal Loop Diagram is proposed to represent graphically the interrelations between the variables which most influence, and are most influenced by, human behaviour within a socio-technical system.

Mario Di Nardo, Mosè Gallo, Marianna Madonna, Liberatina Carmela Santillo

A System Dynamics Model for Bed Management Strategy in Health Care Units

Hospital overcrowding is a universal problem. Currently, Lean thinking focuses on removing process waste and variation. However, the high level of complexity and uncertainty inherent in healthcare makes it incredibly challenging to remove variability and achieve the stable process rates necessary for lean redesign efforts to be effective. In this work, we therefore propose the Agile logic, which was developed in manufacturing to optimize product delivery in volatile demand environments with highly variable customer requirements. Agile redesign focuses on increasing system responsiveness to customers through improved resource coordination and flexibility. Furthermore, we propose a new method of bed management through the use of DRGs. System dynamics simulation is used to explore the impact of following an agile redesign approach in healthcare on service access and on the diagnostic system, and the effect of applying bed management to the hospital.

Giuseppe Converso, Sara Di Giacomo, Teresa Murino, Teresa Rea

A Simulation Approach for Agile Production Logic Implementation in a Hospital Emergency Unit

Hospitals offer sweeping views of management ideas, as they internally comprise complex organizational structures that can be improved in order to reduce waste and increase the effectiveness of the services provided. An analysis of the hospital system reveals two characteristics: demand variability and lead time variability. Lean is often referred to as an innovative management policy for the health system, even though these characteristics do not allow achieving the continuous flow that is the strategic goal of Lean, obtainable only with leveled demand and certain, repeatable production lead times. A management system that seems to accommodate these characteristics is Agile Manufacturing applied to the service sector, even though it has never been adapted to healthcare. Thus arises the need to simulate both policies in order to determine which best suits health facilities.

Giuseppe Converso, Giovanni Improta, Manuela Mignano, Liberatina C. Santillo

Backmatter
