
About this Book

The five-volume set LNCS 9155-9159 constitutes the refereed proceedings of the 15th International Conference on Computational Science and Its Applications, ICCSA 2015, held in Banff, AB, Canada, in June 2015. The 232 revised full papers presented in 22 workshops and a general track were carefully reviewed and selected from 780 initial submissions for inclusion in this volume. They cover various areas in computational science ranging from computational science technologies to specific areas of computational science such as computational geometry and security.



Workshop on Software Quality (SQ 2015)


Code Ownership: Impact on Maintainability

Software systems erode during development, which results in high maintenance costs in the long term. Is it possible to narrow down where exactly this erosion happens? Can we infer the future erosion based on past code changes?
In this paper we investigate code ownership and show that a further decrease in code quality is more likely to result from changes to source files modified by several developers in the past than from changes to files with clear ownership. We estimate the level of code ownership and the maintainability change for every commit of three open-source and one proprietary software system. Using the Wilcoxon rank test, we compare the ownership values of the files in commits that increase maintainability with those in commits that decrease it. Three of the four tests gave strong results, and the fourth did not contradict them. The conclusion of this study generalizes the already known fact that common code is more error-prone than code developed by fewer developers.
This result could be utilized to identify the “hot spots” of the source code from a maintainability point of view. An IDE plug-in that indicates the risk of decreasing the maintainability of the source code could help the architect and warn the developers.
Csaba Faragó, Péter Hegedűs, Rudolf Ferenc
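The statistical comparison described in the abstract above can be sketched as follows. This is a minimal pure-Python illustration, not the authors' code: the data values are hypothetical, and the rank-sum (Mann-Whitney U) statistic shown here is one common form of the Wilcoxon rank test the paper names.

```python
# Minimal sketch (hypothetical data) of comparing ownership values of two
# commit groups with a Wilcoxon rank-sum (Mann-Whitney U) statistic.

def rank_sum_u(sample_a, sample_b):
    """Return the Mann-Whitney U statistic for sample_a vs. sample_b."""
    combined = sorted(sample_a + sample_b)
    # Assign average 1-based ranks, handling ties.
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    r_a = sum(ranks[x] for x in sample_a)
    return r_a - len(sample_a) * (len(sample_a) + 1) / 2

# Hypothetical ownership values (e.g. share of the most active contributor)
# for commits that increased vs. decreased maintainability.
increasing = [0.9, 0.8, 0.85, 0.95, 0.7]
decreasing = [0.4, 0.5, 0.3, 0.6, 0.45]

u = rank_sum_u(increasing, decreasing)
print(u)  # U near len(a)*len(b)=25 indicates clear separation of the groups
```

An extreme U (close to 0 or to the product of the sample sizes) suggests the two commit groups draw their ownership values from different distributions.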

Adding Constraint Building Mechanisms to a Symbolic Execution Engine Developed for Detecting Runtime Errors

Most runtime failures of a software system can be revealed only during test execution, which has a very high cost. The symbolic execution engine developed at the Software Engineering Department of the University of Szeged is able to detect runtime errors (such as null pointer dereference, bad array indexing, division by zero) in Java programs without running the program in a real-life environment.
In this paper we present a constraint system building mechanism that improves the accuracy of runtime error detection by the symbolic execution engine mentioned above. We extend the original principles of symbolic execution by tracking the dependencies of the symbolic variables and substituting them with concrete values if the built constraint system unambiguously determines their value.
The extended symbolic execution checker was tested on real-life open-source systems as well.
István Kádár, Péter Hegedűs, Rudolf Ferenc
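The substitution idea from the abstract above can be sketched in a few lines. This is an illustrative toy, not the engine's actual implementation: constraints are modeled (our assumption) as shrinking sets of possible values, and a variable becomes concrete once exactly one value remains.

```python
# Illustrative sketch: track constraints on symbolic variables and replace a
# variable with a concrete value once the constraints determine it uniquely.

class SymbolicState:
    def __init__(self):
        self.constraints = {}   # var -> set of still-possible values
        self.concrete = {}      # var -> concrete value, once determined

    def constrain(self, var, allowed):
        """Intersect the possible values of `var` with `allowed`."""
        current = self.constraints.get(var)
        self.constraints[var] = set(allowed) if current is None else current & set(allowed)
        # If exactly one value remains, the variable becomes concrete.
        if len(self.constraints[var]) == 1:
            self.concrete[var] = next(iter(self.constraints[var]))

    def eval(self, var):
        """Return the concrete value if determined, else the symbolic name."""
        return self.concrete.get(var, var)

state = SymbolicState()
state.constrain("x", {0, 1, 2})
state.constrain("x", {2, 3})     # only 2 remains, so x becomes concrete
print(state.eval("x"))           # -> 2
print(state.eval("y"))           # -> 'y' (still symbolic)
```

Once a variable is concrete, downstream checks (array indexing, division) can operate on a real value instead of a symbol, which is what improves detection accuracy.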

Comparison of Software Quality in the Work of Children and Professional Developers Based on Their Classroom Exercises

There is a widely accepted belief that education has a positive impact on the improvement of expertise in software development. The studies on this topic mainly focus on the product, more specifically on the functional requirements of the software. Besides these, they often pay attention to individual so-called basic skills such as abstract and logical thinking. We could not find any references where the final products of classroom exercises were compared using non-functional properties such as software quality. In this paper, we introduce a case study where several children’s works are compared to works created by professional developers and unqualified adults. These numerical properties are difficult to measure and compare objectively. The model used to measure the various aspects of software quality is also known in the industrial sector; hence it provides a well-established base for our research. Finally, we analyse and evaluate the results and briefly introduce further research plans.
Gergő Balogh

Characterization of Source Code Defects by Data Mining Conducted on GitHub

In software systems, coding errors are unavoidable due to frequent source changes, tight deadlines and inaccurate specifications. Therefore, it is important to have tools that help us find these errors. One way of supporting bug prediction is to analyze the characteristics of previous errors and identify unknown ones based on these characteristics. This paper aims to characterize the known coding errors.
Nowadays, the popularity of source code hosting services like GitHub is increasing rapidly. They provide a variety of services, among which the most important ones are the version and bug tracking systems. Version control systems store all versions of the source code, and bug tracking systems provide a unified interface for reporting errors. Bug reports can be used to identify the faulty and the previously fixed source code parts, so the bugs can be characterized by static source code metrics or by other quantitatively measured properties using the gathered data.
We chose GitHub as the base of data collection and selected 13 Java projects for analysis. As a result, a database was constructed that characterizes the bugs of the examined projects and thus can be used, inter alia, to improve the automatic detection of software defects.
Péter Gyimesi, Gábor Gyimesi, Zoltán Tóth, Rudolf Ferenc
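The labeling step described in the abstract above can be illustrated with a small sketch. All names and values here are hypothetical, not the paper's actual dataset: files touched by bug-fixing commits are marked as buggy and paired with static metrics, yielding records usable for defect prediction.

```python
# Hypothetical sketch: mark files touched by bug-fixing commits as buggy and
# join them with static source code metrics to build a defect dataset.

bug_fix_commits = [
    {"sha": "a1b2c3", "files": ["src/Parser.java"]},
    {"sha": "d4e5f6", "files": ["src/Parser.java", "src/Lexer.java"]},
]

# Hypothetical static metrics per file (e.g. logical lines of code, complexity).
metrics = {
    "src/Parser.java": {"lloc": 420, "complexity": 35},
    "src/Lexer.java":  {"lloc": 180, "complexity": 12},
    "src/Main.java":   {"lloc": 60,  "complexity": 4},
}

def build_dataset(commits, metrics):
    buggy = {f for c in commits for f in c["files"]}
    return [dict(metrics[f],
                 file=f,
                 bug_count=sum(f in c["files"] for c in commits),
                 buggy=f in buggy)
            for f in sorted(metrics)]

for record in build_dataset(bug_fix_commits, metrics):
    print(record)
```

Each record combines a label (`buggy`, `bug_count`) with predictor metrics, which is the general shape of a bug characterization database regardless of which hosting service supplies the commits.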

Software Component Score: Measuring Software Component Quality Using Static Code Analysis

Static code analysis is a software verification method that analyzes software source code in terms of quality, security and reliability. Unlike other verification activities, static analysis can be automated; thus it can be applied without running the software or creating special test cases. Software metrics are widely used by many companies and researchers to evaluate their software. In this study, the software component quality measurement method developed in an embedded software team will be described. The method is based on automatically collected metrics and a predetermined set of rules. First, the metrics measured and calculated under this method will be defined and the reasons for selecting them will be described. Then, the software quality score calculation method using these metrics will be explained. Finally, the gains obtained with this method and the future plans will be presented.
Berkhan Deniz
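A score of the kind the abstract above describes can be sketched as a weighted penalty scheme. The thresholds, weights and the subtraction-from-100 form are our assumptions for illustration, not the paper's actual formula:

```python
# Illustrative sketch (assumed weights/thresholds, not the paper's formula):
# a component score combining static metrics and coding-rule violations.

def component_score(metrics, violations, weights):
    """Score in [0, 100]: start from 100 and subtract weighted penalties."""
    penalty = 0.0
    for name, value in metrics.items():
        limit, weight = weights[name]
        if value > limit:                      # penalize only above the threshold
            penalty += weight * (value - limit) / limit
    penalty += weights["violation"][1] * violations
    return max(0.0, 100.0 - penalty)

# Hypothetical (threshold, penalty weight) pairs per metric.
weights = {
    "complexity": (10, 20.0),
    "duplication_pct": (5, 10.0),
    "violation": (0, 2.0),       # flat penalty per coding-rule violation
}

score = component_score({"complexity": 15, "duplication_pct": 4},
                        violations=3, weights=weights)
print(score)  # -> 84.0
```

The design point such schemes share is that a single comparable number is derived automatically per component, so teams can rank components and track quality over releases.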

Validation of the City Metaphor in Software Visualization

The rapid developments in computer technology have made it possible to handle large amounts of data. New algorithms have been invented to process data and new ways have emerged to store their results.
However, the final recipients of these are still the users themselves, so we have to present the information in such a way that it can be easily understood. One of the many possibilities is to express the data in a graphical form. This conversion is called visualization. Various kinds of methods exist, ranging from simple charts through compound curves and splines to complex three-dimensional scene rendering. However, they all have one point in common; namely, all of these methods use some underlying model, a language to express their content.
The improved performance of graphics units and processors has made it possible, and data-processing technologies have made it necessary, to renew and reinvent these visualization methods. In this study, we focus on the so-called city metaphor, which represents information as buildings, districts, and streets.
Our main goal is to find a way to map the data to the entities in the fictional city. To allow the users to navigate freely in the artificial environment and to understand the meaning of the objects, we have to learn the difference between a realistic and an unrealistic city. To do this, we have to measure the similarity to reality, the city-likeness, of our virtual creations. Here, we present three computable metrics which express various features of a city. These metrics are compactness for measuring space consumption, connectivity for showing the low-level coherence among the buildings, and homogeneity for expressing the smoothness of the landscape. These metrics will be defined in a formal and an informal way and illustrated by examples. The connections between the high-level city-likeness and these low-level metrics will be analyzed. Our preliminary assumptions about these metrics will be compared to the opinions of users collected by an on-line survey. Lastly, we will summarize our results and propose a way to compute the city-likeness metric.
Gergő Balogh
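One of the three metrics named in the abstract above can be sketched concretely. The paper's formal definitions are not reproduced here; this approximates compactness (our assumption) as the share of the city's bounding box covered by building footprints, assuming axis-aligned, non-overlapping footprints.

```python
# Minimal sketch: compactness of a virtual city as covered area divided by
# bounding-box area (assumes axis-aligned, non-overlapping footprints).

def compactness(buildings):
    """buildings: list of footprints (x, y, width, depth)."""
    covered = sum(w * d for _, _, w, d in buildings)
    min_x = min(x for x, _, _, _ in buildings)
    min_y = min(y for _, y, _, _ in buildings)
    max_x = max(x + w for x, _, w, _ in buildings)
    max_y = max(y + d for _, y, _, d in buildings)
    bbox = (max_x - min_x) * (max_y - min_y)
    return covered / bbox   # 1.0 = fully packed, near 0 = sprawling

# Two 1x1 buildings at opposite corners of a 2x2 area: half the area covered.
print(compactness([(0, 0, 1, 1), (1, 1, 1, 1)]))  # -> 0.5
```

Connectivity and homogeneity would be computed analogously over building adjacency and height differences; all three reduce a rendered city to numbers that can be correlated with surveyed city-likeness.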

A Systematic Mapping on Agile UCD Across the Major Agile and HCI Conferences

Agile User-Centered Design is an emergent and extremely important theme, but what exactly does it mean? Agile User-Centered Design is the use of user-centered design (UCD) in Agile environments. We build on previous work to provide a systematic mapping of Agile UCD publications at the two major agile and human-computer interaction (HCI) conferences. The analysis presented in this paper allows us to answer primary research questions such as: what is Agile UCD; what types of HCI techniques have been used to integrate agile and UCD; what types of studies on Agile UCD have been published; what types of research methods have been used in Agile UCD studies; and what benefits do these publications offer? Furthermore, we explore topics such as: who are the major authors in this field, and is the field driven by academics, practitioners, or collaborations? This paper presents our analysis of these topics in order to better structure future work in the field of Agile UCD and to provide a better understanding of what this field actually entails.
Tiago Silva da Silva, Fábio Fagundes Silveira, Milene Selbach Silveira, Theodore Hellmann, Frank Maurer

Boosting the Software Quality of Parallel Programming Using Logical Means

Parallel programming is a principal tool for improving performance. A main model for programming parallel machines is the single instruction multiple data (SIMD) model of parallelism.
This paper presents an axiomatic semantics for SIMD programs. This semantics is useful for specifying and proving partial correctness properties of SIMD programs and is a generalization of separation logic (designed for sequential programs). The second contribution of this paper is an operational framework that conventionally defines the semantics of SIMD programs. This framework has two sets of inference rules: one for running a program on a single machine and one for running a program on many machines concurrently. A detailed correctness proof for the presented logical system is given using the proposed operational semantics. The paper also presents a detailed example of a specification derivation in the proposed logical system.
Mohamed A. El-Zawawy
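The SIMD execution idea underlying the two rule sets mentioned above can be made concrete with a toy interpreter. This is our own construction for illustration, not the paper's calculus: a single instruction stream is applied step by step to many data lanes, i.e. every "machine" runs the same instruction on its own local state.

```python
# Toy SIMD sketch: one instruction stream, many local states ("machines").

def step_all(instruction, lanes):
    """Run one instruction on every lane (one local state per machine)."""
    return [instruction(state) for state in lanes]

def run(program, lanes):
    for instruction in program:
        lanes = step_all(instruction, lanes)
    return lanes

# Each lane state is a dict of variables; the program doubles x, then adds 1.
program = [
    lambda s: {**s, "x": s["x"] * 2},
    lambda s: {**s, "x": s["x"] + 1},
]
print(run(program, [{"x": 1}, {"x": 2}, {"x": 3}]))  # -> x values 3, 5, 7
```

An axiomatic semantics for such programs must relate a single shared program text to assertions over the whole vector of lane states, which is where the concurrent rule set comes in.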

Analysis of Web Accessibility in Social Networking Services Through Blind Users’ Perspective and an Accessible Prototype

It is not just in architectural contexts that the concern for accessibility is present: accessibility in web portals is also a right guaranteed by law to people with any kind of disability in many countries, and Brazil is part of this group. Although extensive research focused on web accessibility has been conducted recently, not all areas have been covered by these efforts. Social networking services are a prime example of sites that demand more attention in this field. They are important tools for the integration of disabled people, but they still lack web accessibility features. This paper presents an evaluation of the web accessibility of three major social networking services – Facebook, LinkedIn and Twitter – from the perspective of blind users, applying the WCAG 2.0 success criteria. The analysis demonstrated that there are many issues to be addressed in order to improve web accessibility for the visually impaired in this domain. In addition, a prototype of an accessible social networking service was proposed, based on an instance of the Elgg framework, in which the web accessibility experience gathered in this study was applied.
Janaína Rolan Loureiro, Maria Istela Cagnin, Débora Maria Barroso Paiva

Systematic Mapping Studies in Modularity in IT Courses

Modularity is one of the most important quality attributes in system development. Its concepts are commonly used in disciplines of information technology courses, mainly in subjects such as software design, software architecture, and others. However, certain groups of students notably fail to absorb this topic in a practical way. Although some researchers and practitioners have approached themes like this, there is still a lack of research on how modularity can be taught in IT courses. This paper presents a systematic mapping study on how modularity is addressed in education. The main objective is to understand the main areas in this field and to find the most promising points of research to improve the practice of modularity in IT disciplines.
Pablo Anderson de L. Lima, Gustavo da C. C. Franco Fraga, Eudisley G. dos Anjos, Danielle Rousy D. da Silva

Mobile Application Verification: A Systematic Mapping Study

Mobile devices and applications have proliferated at an unprecedented rate in recent years. Application domains of mobile systems range from personal assistants to point-of-care health informatics systems. Software development for such diverse application domains requires a stringent and well-defined development process. Software testing is a type of verification required to achieve a more reliable system. Even though the Software Engineering literature contains many research studies that address challenging issues in mobile application development, we could not identify a comprehensive literature review on this subject. In this paper, we present a systematic mapping of software verification in the field of mobile applications. We provide definitive metrics and publications about mobile application testing, which we believe will allow fellow researchers to identify gaps and research opportunities in this field.
Mehmet Sahinoglu, Koray Incki, Mehmet S. Aktas

Markov Analysis of AVK Approach of Symmetric Key Based Cryptosystem

In the symmetric key cryptography domain, the Automatic Variable Key (AVK) approach is in its inception phase because of the unavailability of reversible XOR-like operators. The Fibonacci-Q matrix has emerged as an alternative solution for secure transmission with a varying key for different sessions [3, 10]. This paper attempts to analyze a symmetric key cryptography scheme based on the AVK approach. Due to its key variability, the AVK approach is assumed to be more powerful, efficient and optimal, but this paper demonstrates its analysis from the hackers’ point of view. The paper also considers various situations under which mining of future keys can be achieved, and discusses the concept of key variability and the reduced probability of key extraction under various scenarios with different degrees of difficulty in key mining.
Shaligram Prajapat, Ramjeevan Singh Thakur
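The key-variability idea discussed above can be illustrated with a deliberately simple toy. This is not a secure construction and not the paper's scheme: it merely shows a key that evolves from session to session, here (our assumption) by XOR-ing the previous key with the previous plaintext block.

```python
# Toy AVK illustration (NOT secure, not the paper's scheme): the session key
# is not fixed but evolves as a function of earlier traffic.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def avk_sessions(initial_key, messages):
    """Encrypt each message with a key derived from the previous session."""
    key, out = initial_key, []
    for msg in messages:
        out.append(xor_bytes(msg, key))
        key = xor_bytes(key, msg)   # next session's key depends on past traffic
    return out

key = b"\x13\x37\x42\x24"
ciphertexts = avk_sessions(key, [b"abcd", b"efgh"])
print(ciphertexts)
```

Because each session key depends on earlier plaintext, an attacker who recovers one session's data gains leverage on future keys, which is exactly the key-mining perspective the paper analyzes.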

Comparison of Static Analysis Tools for Quality Measurement of RPG Programs

The RPG programming language is a popular language widely employed on IBM i mainframes nowadays. Legacy mainframe systems that evolved and survived the past decades are usually data-intensive and often business-critical applications. Recent, state-of-the-art quality assurance tools mostly focus on popular languages like Java, C++ or Python. In this work we compare two source-code-based quality management tools for the RPG language. The study focuses on the data obtained using static analysis, which is then aggregated into higher-level quality attributes. SourceMeter is a command-line tool-chain capable of measuring various source attributes such as metrics and coding rule violations. SonarQube is a quality management platform with RPG language support. To facilitate an objective comparison, we used the SourceMeter for RPG plugin for SonarQube, which seamlessly integrates into the framework, extending its capabilities. The evaluation covers analysis success and depth, source code metrics, coding rules and code duplications. We found that SourceMeter is more advanced in analysis depth, product metrics and finding duplications, while the tools' performance on coding rules and analysis success is rather balanced. Since both tools were introduced recently on the quality assurance tool market, we expect additional releases in the future with more mature analyzers.
Zoltán Tóth, László Vidács, Rudolf Ferenc

Novel Software Engineering Attitudes for Business-Oriented Information Systems

Modern ICT technologies should and can be more business- and businessmen-friendly than they are now. We discuss a variant of service-oriented architecture that addresses this challenge. The architecture is a network of analogues of real-life services communicating by the exchange of business documents via a network of organizational (architecture) services. The architecture is especially useful in small-to-medium enterprises, and generally wherever sourcing and dynamic business processes are used. It is also useful for e-government systems. It has many business as well as technical/engineering advantages. The architecture enables a smooth collection of transparent data for management and research. It further enables business-document-oriented communication of business services and simplifies the collaboration of users with developers.
Jaroslav Král, Michal Žemlička

Data Consistency: Toward a Terminological Clarification

‘Consistency’ and ‘inconsistency’ are ubiquitous terms in data engineering. Their relevance to quality is obvious, since ‘consistency’ is a commonplace dimension of data quality. However, their connotations are vague or ambiguous. In this paper, we address semantic consistency, transaction consistency, replication consistency, eventual consistency and the new notion of partial consistency in databases. We characterize their distinguishing properties, and also address their differences, interactions and interdependencies. Partial consistency is an entry door to living with inconsistency, which is an unavoidable necessity in the age of big data.
Hendrik Decker, Francesc D. Muñoz-Escoí, Sanjay Misra

Workshop on Virtual Reality and its Applications (VRA 2015)


Challenges and Possibilities of Use of Augmented Reality in Education

Case Study in Music Education
This paper aims to discuss the difficulties and possibilities of using augmented reality in education, especially for musical education. Among the difficulties addressed are the following types of issues: physical, technological, sociocultural, pedagogical and managerial. The possible solutions presented involve the use of authoring tools that are easily usable by teachers. An augmented reality application to teach musical perception was developed using an authoring tool, and tests with children are presented and discussed.
Valéria Farinazzo Martins, Letícia Gomes, Marcelo de Paiva Guimarães

A Relational Database for Human Motion Data

Motion capture data have been widely used in applications ranging from video games and animations to simulations and virtual environments. Moreover, all data-driven approaches for the analysis and synthesis of motion depend on motion capture data. Although multiple large motion capture data sets are freely available for research, no system provides centralized access to all of them in an organized manner. In this paper we show that using a relational database management system (RDBMS) to store the data not only provides such centralized access, but also allows other sensor modalities (e.g. accelerometer data) and various semantic annotations to be included. We present two applications of our system: a motion capture player in which motion sequences can be retrieved from large datasets using SQL queries, and the automatic construction of statistical models that can further be used for complex motion analysis and motion synthesis tasks.
Qaiser Riaz, Björn Krüger, Andreas Weber
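The SQL-based retrieval described in the abstract above can be sketched with an in-memory SQLite database. The schema and data are assumptions for illustration, not the authors' actual database design:

```python
# Minimal sketch (SQLite, assumed schema) of retrieving motion sequences
# from a relational database by semantic annotation, as a player might.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE motion (
                    id INTEGER PRIMARY KEY,
                    subject TEXT,
                    annotation TEXT,      -- semantic label, e.g. 'walk'
                    frames INTEGER)""")
conn.executemany(
    "INSERT INTO motion (subject, annotation, frames) VALUES (?, ?, ?)",
    [("S01", "walk", 240), ("S01", "jump", 90), ("S02", "walk", 310)],
)

# Retrieve all walking sequences longer than 200 frames.
rows = conn.execute(
    "SELECT subject, frames FROM motion WHERE annotation = ? AND frames > ?",
    ("walk", 200),
).fetchall()
print(rows)  # -> [('S01', 240), ('S02', 310)]
```

The frame data itself could live in a separate table or as BLOBs keyed by `id`; the point is that one declarative query replaces ad-hoc scanning of per-dataset file formats.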

Immersive and Interactive Simulator to Support Educational Teaching

Visualization and manipulation of a discipline’s content, integrated with educational planning, can enhance teaching and facilitate the learning process. This paper presents a set of tools developed to explore educational content: a multi-projection system with high immersion and interaction support for group learning, and a version that runs in Internet browsers for individual learning. The visualized objects are enriched with multimedia content such as video, audio and text, and can be used in different educational settings. In this work, these tools were populated with content for teaching computer architecture to computer science undergraduate students.
Marcelo de Paiva Guimarães, Diego Colombo Dias, Valéria Farinazzo Martins, José Remo Brega, Luís Carlos Trevelin

Unity Cluster Package – Dragging and Dropping Components for Multi-projection Virtual Reality Applications Based on PC Clusters

Virtual Reality applications created using game engines allow developers to quickly produce a prototype that runs on a wide variety of systems, achieves high-quality graphics, and supports multiple devices easily. This paper presents a component set (Unity Cluster Package) for the Unity game engine that facilitates the development of immersive and interactive Virtual Reality applications. This drag-and-drop component set allows Unity applications to run on a commodity PC cluster with passive stereoscopy support, perspective correction according to the user’s viewpoint, and access to special servers that provide device-independent features. We present two examples of multi-projection Unity applications running in a mini CAVE (Cave Automatic Virtual Environment)-like (three-screen) system ported using this component set.
Mário Popolin Neto, Diego Roberto Colombo Dias, Luis Carlos Trevelin, Marcelo de Paiva Guimarães, José Remo Ferreira Brega

