2016 | Book

Computational Science and Its Applications -- ICCSA 2016

16th International Conference, Beijing, China, July 4-7, 2016, Proceedings, Part IV

Edited by: Osvaldo Gervasi, Beniamino Murgante, Sanjay Misra, Ana Maria A.C. Rocha, Carmelo M. Torre, David Taniar, Bernady O. Apduhan, Elena Stankova, Shangguang Wang

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science

About this book

The five-volume set LNCS 9786-9790 constitutes the refereed proceedings of the 16th International Conference on Computational Science and Its Applications, ICCSA 2016, held in Beijing, China, in July 2016.

The 239 revised full papers and 14 short papers presented at 33 workshops were carefully reviewed and selected from 849 submissions. They are organized in five thematic tracks: computational methods, algorithms and scientific applications; high performance computing and networks; geometric modeling, graphics and visualization; advanced and emerging applications; and information systems and technologies.

Table of contents

Frontmatter
Where the Streets Have Known Names

Street names provide important insights into the local culture, history, and politics of places. Linked open data provide a wealth of knowledge that can be associated with street names, enabling novel ways to explore cultural geographies. This paper presents a three-fold contribution. We present (1) a technique to establish a correspondence between street names and the entities that they refer to. The method is based on Wikidata, a knowledge base derived from Wikipedia. The accuracy of this mapping is evaluated on a sample of streets in Rome. As this approach reaches limited coverage, we propose to tap local knowledge with (2) a simple web platform. Users can select the best correspondence from the calculated ones or add another entity not discovered by the automated process. As a result, we design (3) an enriched OpenStreetMap web map where each street name can be explored in terms of the properties of its associated entity. Through several filters, this tool is a first step towards the interactive exploration of toponymy, showing how open data can reveal facets of the cultural texture that pervades places.
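As an illustration of the kind of name-to-entity matching described above, the following Python sketch queries Wikidata's public wbsearchentities endpoint for a street's namesake. The prefix list, function name and output format are our own simplifications, not the authors' pipeline.

```python
import requests

def match_street_name(street_name: str, lang: str = "it"):
    """Look up candidate Wikidata entities for a street's namesake.

    A minimal sketch of name-to-entity matching; the paper's actual
    cleaning, ranking and Rome-specific filtering are not reproduced.
    """
    # Strip common Italian street-type prefixes before searching.
    for prefix in ("Via ", "Viale ", "Piazza ", "Corso ", "Largo "):
        if street_name.startswith(prefix):
            street_name = street_name[len(prefix):]
            break
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": street_name,
            "language": lang,
            "format": "json",
        },
        timeout=10,
    )
    # Each hit carries an entity id (e.g. Q539) plus label and description.
    return [(e["id"], e.get("description", "")) for e in resp.json()["search"]]

print(match_street_name("Via Giuseppe Garibaldi"))
```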

Paulo Dias Almeida, Jorge Gustavo Rocha, Andrea Ballatore, Alexander Zipf
Functions and Perspectives of Public Real Estate in the Urban Policies: The Sustainable Development Plan of Syracuse

This study deals with the problem of negotiation between public and private actors in urban planning, in the case study of the Plan for the Sustainable Development of Syracuse (Italy). The contribution focuses on the modalities of execution of the Plan, which envisages the tool of the Public-Private Partnership (PPP). The study intends to verify the equity of the negotiation mechanism and the advantage gained by the public actor from conferring two large buildings to a Real Estate Fund. The contribution is structured in three parts. The first part provides the general programmatic and valuation frame referring to the features of the area; it describes the overall development perspectives and therefore the whole process of real estate development that would be supported by means of the contribution of the fund. The second part describes the implementation of a cash flow analysis based on a hypothesis of use of the buildings previously outlined. The third part provides the elements of the analysis of the investment, which feed back into the design hypotheses, converging on the value assessed to determine the quota of participation of the Municipality in the Real Estate Fund.

Laura Gabrielli, Salvatore Giuffrida, Maria Rosa Trovato
Soil Loss, Productivity and Cropland Values: GIS-Based Analysis and Trends in the Basilicata Region (Southern Italy) from 1980 to 2013

This paper concerns the assessment of trends in the productivity values and cropland values of specific crops (cereals (arable cereal land), vineyards, olive-growing lands) in the Basilicata region, at regional scale, from 1980 to 2013, in relation to the soil loss evaluated through the USPED method. The comparative analysis shows the interrelations between soil loss by erosion and the loss of economic value deriving from the erosive phenomenon affecting the croplands considered.

Antonella Dimotta, Mario Cozzi, Severino Romano, Maurizio Lazzari
Fair Planning and Affordability Housing in Urban Policy. The Case of Syracuse (Italy)

Equalization can be implemented in the planning process by means of several tools. Syracuse's Master Plan has used "urban negotiation" to obtain land for facilities and public infrastructure in different urban areas, based on the rule of transferring a portion of land in return for building permission for the remaining part of each property to be developed. The Master Plan also aimed at providing social housing, because the economic crisis has amplified the gap between housing market prices and household income. This study proposes an equalization and compensation model to support the urban negotiation by providing the indexes of a fair and convenient development of several interstitial urban areas. Different scenarios, based on an equalization pattern, are prefigured to provide affordable housing for low-income households.

Grazia Napoli, Salvatore Giuffrida, Maria Rosa Trovato
Cap Rate and the Historic City. Past and Future of the Real Estate of Noto (Italy)

A real estate market survey has been carried out within the old town of Noto, in order to describe the progressive increase in property market prices over the last decade. A sample of 56 properties was analyzed by considering a large number of characteristics grouped in a frame of five main features, progressively detailed. Due to the architectural, historical and landscape qualities of this urban-land context, many expectations influence prices and the kind of use and exploitation of these properties, so a further study concerning the costs of renovation, the rents and the cap rates of the surveyed properties has been carried out in order to identify the economic, financial and monetary profile of each of them and some recursive rules for investment layouts. A multi-layer clustering analysis was carried out to integrate two approaches: an empirical one sorting by characteristics, and an analytic one based on the scored attributes.
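For context, the cap rate at the centre of this survey is the standard ratio of income to value (a general definition, not a formula taken from the paper):

$$r_{\text{cap}} = \frac{\text{NOI}}{V},$$

where NOI is the property's net annual operating income (rents less operating costs) and V its market value.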

Salvatore Giuffrida, Salvatore Di Mauro, Alberto Valenti
Industrial Areas and the City. Equalization and Compensation in a Value-Oriented Allocation Pattern

This study deals with the allocation of firms in a large industrial area of Quarto, a town in the Naples district subject to a "Piano di Insediamenti Industriali" – PIP (Industrial Settlement Masterplan). The main concern of the Municipality is the fair integration of environmental issues, economic development and urban identity. Therefore, a structured evaluation process, based on a survey of the company profiles and their geographical location, has been carried out in order to make the plan meet the needs of the firms, and to select the best companies to settle in the planned area. A "generative" MAVT pattern has been designed to outline several layout options and to select the best ones. The model also includes equalization and compensation elements, by means of which it is possible to determine the extraordinary planning permission fees for the different areas where the firms are located.

Salvatore Giuffrida, Grazia Napoli, Maria Rosa Trovato
Environmental Noise Sensing Approach Based on Volunteered Geographic Information and Spatio-Temporal Analysis with Machine Learning

In this paper a methodology for analyzing the behavior of environmental noise pollution is proposed. It consists of a mobile application called 'NoiseMonitor', which senses the environmental noise with the microphone sensor available on the mobile device. The georeferenced noise data constitute Volunteered Geographic Information that composes a large geospatial database of urban information on Mexico City. In addition, a Web-GIS is proposed in order to perform spatio-temporal analysis based on a prediction model, applying Machine Learning techniques to generate acoustic noise maps with contextual information. According to the obtained results, a comparison between support vector machines and artificial neural networks was performed in order to evaluate the model and the behavior of the sensed data.

Miguel Torres-Ruiz, Juan H. Juárez-Hipólito, Miltiadis Demetrios Lytras, Marco Moreno-Ibarra
A Knowledge-Based Approach for the Implementation of a SDSS in the Partenio Regional Park (Italy)

The paper recommends a methodology for data gathering and processing through spatial analysis techniques and the combinatorial multi-criteria procedure of Weighted Linear Combination (WLC). The purpose concerns spatial problem structuring in a complex decisional context lacking a geographical dataset. The processing of data and information provided by VGI and Open Systems is crucial for the enrichment of spatial datasets in these circumstances, but it is advisable to pay attention to data reliability and to the known problems of geographic datasets, e.g. the Modifiable Areal Unit Problem (MAUP). The method was tested in a case study of 27 Municipalities around the Partenio Regional Park, in the South of Italy. Within the SDSS, multidimensional landscape indicators were combined with data gathered in the field, in order to build an evolving informative system. A multidimensional approach, focused on the recognition of environmental, social, economic and cultural resources, was chosen, providing some strategies of enhancement for the overviewed landscape of the Park. The evaluation of policies and actions for the examined regions generated scenario maps through multi-criteria procedures and GIS tools.

Maria Cerreta, Simona Panaro, Giuliano Poli
Factors of Perceived Walkability: A Pilot Empirical Study

We present preliminary results of a pilot empirical study designed to examine factors associated with pedestrians' perception of walkability, i.e. the perception of the quality, comfort and pleasantness of streets, and their conduciveness to walking. Through a contingent field survey we collected 18 observable street attributes (independent variables), and a synthetic subjective perception of walkability (dependent variable), for the entire street network (408 street segments) of the city of Alghero in Italy. Regression analysis yields high goodness of fit (R-squared = 0.60 using all 18 variables), and points at 9 of the 18 as the most significant factors of perceived walkability ("useful sidewalk width"; "architectural, urban and environmental attractions"; "density of shops, bars, services, economic activities"; "vehicles-pedestrians separation"; "cyclability"; "opportunities to sit"; "shelters and shades"; "car roadway width"; "street lighting"; R-squared = 0.59). Among those, the first five factors in particular show to be jointly the most important predictors of perceived walkability.
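A minimal sketch of the regression setup described above, on synthetic stand-in data (the real survey attributes and perception scores are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical stand-in for the survey: 408 street segments,
# 18 observed attributes, one subjective walkability score each.
rng = np.random.default_rng(0)
X = rng.random((408, 18))                                   # street attributes
y = X[:, :5].sum(axis=1) + 0.3 * rng.standard_normal(408)   # perceived walkability

model = LinearRegression().fit(X, y)
print("R-squared:", model.score(X, y))   # the paper reports ~0.60 with all 18

# Coefficient magnitudes hint at which attributes drive the perception,
# analogous to the paper's selection of 9 significant factors.
for i, coef in enumerate(model.coef_):
    print(f"attribute {i:2d}: {coef:+.3f}")
```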

Ivan Blečić, Dario Canu, Arnaldo Cecchini, Tanja Congiu, Giovanna Fancello
Evaluating the Effect of Urban Intersections on Walkability

This study proposes an analytical and evaluative method for the performance of urban intersections from the perspective of pedestrians. We further present a case study assessing the walkability of crossings and their conduciveness to walking. Implications for integrated urban and transport planning practice are emphasized, as the method is suited to support decision makers involved in urban road management in identifying major spatial and operational problems and prioritizing improvement interventions.

Ivan Blečić, Arnaldo Cecchini, Dario Canu, Andrea Cappai, Tanja Congiu, Giovanna Fancello
Coupling Surveys with GPS Tracking to Explore Tourists’ Spatio-Temporal Behaviour

Position tracking technologies developed in the last decade are a valuable addition to the traditional toolbox for data collection, as they offer the opportunity to gather a great amount of unprecedented information on tourists' behaviour. In particular, they allow the collection of detailed information on spatial and temporal behaviour with respect to different categories/profiles of tourists. We present the results of a survey of tourists' spatial behaviour coupling GPS movement tracking and questionnaires, and furthermore discuss how this kind of study may prove useful in providing guidelines for territorial, tourism and transportation policies.

Ivan Blečić, Dario Canu, Arnaldo Cecchini, Tanja Congiu, Giovanna Fancello, Stefania Mauro, Sara Levi Sacerdotti, Giuseppe A. Trunfio
Countryside vs City: A User-Centered Approach to Open Spatial Indicators of Urban Sprawl

The interplay between land take and climate change is reviving the debate on the environmental impacts of urbanization. Monitoring and evaluation of land-cover and land-use changes have secured political commitment worldwide, and in the European Union in particular – following the agreement on a “no net land take by 2050” target. This paper addresses the ensuing challenges by investigating how open data services and spatial indicators may help manage urban sprawl more effectively. Experts, scholars, students and local government officials were engaged in a living lab exercise centered around the uptake of geospatial data in planning, policy making and design processes. Main findings point to a great potential, and pressing need, for open spatial data services in mainstreaming sustainable land use practices. However, urban sprawl’s elusiveness calls for interactive approaches, since the actual usability of proposed tools needs to be carefully investigated and planned for.

Alessandro Bonifazi, Valentina Sannicandro, Raffaele Attardi, Gianluca Di Cugno, Carmelo Maria Torre
Integrating Financial Analysis and Decision Theory for the Evaluation of Alternative Reuse Scenarios of Historical Buildings

Expected utility theory assumes that the advantage of an agent under conditions of uncertainty can be calculated as a weighted average of the utilities in each possible state, using as weights the likelihood of the occurrence of the individual states. The expected utility is thus an expected value (in the terminology of probability theory). In order to determine the utility according to this method, the decision maker is supposed to be able to order their preferences with regard to the consequences of different decisions. The experiment described in the paper shows that the different actors in a decision process tend "to move" the centre of gravity of the decision towards their own preference. Arrow's theorem teaches that there is no unanimity. The starting point of the case study is an analysis of the probability of different ways of use of a heritage property, related to tourism and leisure activities: catering, conferences and hosteling. The different actors have different preferences over each of the three activities. Their vision partially contrasts with the likelihood of generating income through the activities. Each activity can create income as a function of the use of the fabric (for one use or for the other). The coexistence of the three forces of the combined use has a limitation, so some use combinations generate more income than others, according to a probability curve. Each actor will attempt to shift the business mix toward the equilibrium they appreciate most. As an element for interpreting multi-actor behavior, Kahneman's approach considers that each subject will see the advantage linked to a combination different from the one with the most likely utility; this different combination is affected by the expectations of the actors, and is described by prospect theory.
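The weighted average the abstract opens with is the textbook expected-utility formula (a general definition with our own notation, not taken from the paper):

$$EU(a) = \sum_{i} p_i \, u(x_i),$$

where p_i is the probability of state i and u(x_i) the utility of the outcome of action a in that state.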

Carmelo M. Torre, Raffaele Attardi, Valentina Sannicandro
Spatial Analysis for the Study of Environmental Settlement Patterns: The Archaeological Sites of the Santa Cruz Province

The aim of this work is to use spatial analysis to study relationships between environmental parameters and archaeological sites in the Santa Cruz province (Patagonia, Argentina) and to develop an analysis protocol that could reveal including and excluding factors, useful for the research and discovery of new archaeological sites in the territory. Consequently, a model that considers interactions between sites and parameters such as elevation, slope, aspect, landforms, land use and distance from water was constructed. Moreover, a final sensitivity map was produced, intended to help archaeologists know, at every point in space, how many and which parameters could increase the probability of finding new archaeological sites, and which areas are survey priorities in the study region.

Maria Danese, Gisela Cassiodoro, Francisco Guichón, Rafael Goñi, Nicola Masini, Gabriele Nolè, Rosa Lasaponara
Self-renovation in Rome: Ex Ante, in Itinere and Ex Post Evaluation

In Europe, self-construction/self-renovation are innovative and additional tools to meet the needs of a part of "disadvantaged" social groups that cannot buy or rent dwellings at market prices. At the end of the 1990s, the Municipality of Rome set up the first trial at the national level (still not completed, and which has remained almost unique) of self-renovation of disused buildings (especially school buildings). The text shows the results of a still ongoing research effort aimed at the ex post evaluation of the factors that have prevented these interventions from being concluded on time and as planned and, consequently, from meeting the housing needs for which they had been started. This highlights how assessment tools, in the different phases of the development process of these initiatives (ex ante, ongoing and ex post), may help to reduce the risks of "failure" of self-construction/self-renovation initiatives.

Maria Rosaria Guarini
Enhancing an IaaS Ontology Clustering Scheme for Resiliency Support in Hybrid Cloud

When an undesirable situation occurs in a hybrid cloud computing environment, vital issues arise when searching for the IaaS cloud services that best match the user's requirements. These include the different descriptions/naming of IaaS cloud services, i.e., CPU, memory, and others, adopted by different companies, which makes it difficult and ambiguous to select the best-matching cloud services. Initially, we considered utilizing ontology technology and typical clustering methods to narrow down the selection process. In this paper, we propose an improved ontology clustering scheme and describe the methodology. Preliminary experiments show promising results, with a fair gathering of related elements in each cluster and a speedup of processing, depicting viable resiliency support for the hybrid cloud.

Toshihiro Uchibayashi, Bernady Apduhan, Kazutoshi Niiho, Takuo Suganuma, Norio Shiratori
A Simple Stochastic Gradient Variational Bayes for Latent Dirichlet Allocation

This paper proposes a new inference for the latent Dirichlet allocation (LDA) [4]. Our proposal is an instance of the stochastic gradient variational Bayes (SGVB) [9, 13]. SGVB is a general framework for devising posterior inferences for Bayesian probabilistic models. Our aim is to show the effectiveness of SGVB by presenting an example of SGVB-type inference for LDA, the best-known Bayesian model in text mining. The inference proposed in this paper is easy to implement from scratch. A special feature of the proposed inference is that the logistic normal distribution is used to approximate the true posterior. This is counterintuitive, because we obtain the Dirichlet distribution by taking the functional derivative when we lower bound the log evidence of LDA after applying a mean field approximation. However, our experiment showed that the proposed inference gave a better predictive performance, in terms of test set perplexity, than the inference using the Dirichlet distribution for posterior approximation. While the logistic normal is more complicated than the Dirichlet, SGVB makes the manipulation of the expectations with respect to the posterior relatively easy. The proposed inference was better even than collapsed Gibbs sampling [6] for many, though not all, of the settings considered in our experiment. It will be worthwhile future work to devise new SGVB-based inferences for other Bayesian models.
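The core SGVB ingredient the abstract relies on, a reparameterized draw from the logistic normal so that gradients flow through the variational parameters, can be sketched as follows (a toy fragment with our own variable names, not the paper's implementation):

```python
import numpy as np

def sample_logistic_normal(mu, log_sigma, rng):
    """Reparameterized draw from a logistic normal over the simplex.

    Sample eps ~ N(0, I) and transform deterministically, so the draw is
    differentiable in mu and log_sigma; the softmax output can stand in
    for LDA's per-document topic proportions.
    """
    eps = rng.standard_normal(mu.shape)
    h = mu + np.exp(log_sigma) * eps      # Gaussian in unconstrained space
    e = np.exp(h - h.max())               # stable softmax
    return e / e.sum()                    # point on the probability simplex

rng = np.random.default_rng(0)
theta = sample_logistic_normal(np.zeros(10), np.zeros(10), rng)
print(theta, theta.sum())  # non-negative components summing to 1
```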

Tomonari Masada, Atsuhiro Takasu
On Optimizing Partitioning Strategies for Faster Inverted Index Compression

The inverted index is a key component that enables search engines to manage billions of documents and respond quickly to users' queries. While substantial effort has been made to trade off space occupancy against decoding speed, what has been overlooked is the encoding speed when constructing the index. VSEncoding is a powerful encoder that works by optimally partitioning a list of integers into blocks which are efficiently compressed using simple encoders; however, these partitions are found using a dynamic programming approach, which is inefficient. In this paper, we introduce compression speed as one criterion for evaluating compression techniques, and thoroughly analyze the performance of different partitioning strategies. A linear-time optimization is also proposed to give VSEncoding faster compression speed and more flexibility in partitioning an index. Experiments show that our method offers a far better compression speed, while retaining excellent space occupancy and decompression speed.
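To make the cost of the dynamic-programming partitioning concrete, here is a toy version under a simplified cost model (a fixed per-block header plus the bit width of the block's largest value; the cost constants and block limit are our assumptions, not VSEncoding's actual encoders):

```python
def optimal_partition(values, max_block=64):
    """DP partitioning in the spirit of VSEncoding.

    best[i] holds the minimum number of bits needed to encode
    values[:i]; each candidate block pays a 32-bit header plus
    (block length x bit width of its largest value).
    """
    n = len(values)
    INF = float("inf")
    best = [INF] * (n + 1)
    cut = [0] * (n + 1)
    best[0] = 0
    for i in range(1, n + 1):
        width = 0
        for j in range(i - 1, max(i - max_block, 0) - 1, -1):
            width = max(width, values[j].bit_length())
            cost = best[j] + 32 + (i - j) * width
            if cost < best[i]:
                best[i], cut[i] = cost, j
    # Recover the chosen block boundaries.
    bounds, i = [], n
    while i > 0:
        bounds.append((cut[i], i))
        i = cut[i]
    return best[n], bounds[::-1]

bits, blocks = optimal_partition([3, 1, 4, 1, 5, 9, 2, 6, 500, 2, 1])
print(bits, blocks)  # the outlier 500 tends to be isolated in its own block
```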

Xingshen Song, Kun Jiang, Yu Jiang, Yuexiang Yang
The Analysis for Ripple-Effect of Ontology Evolution Based on Graph

Ontologies are an important foundation for the Semantic Web and ontology-driven applications. Real networks and application requirements are constantly changing, and in order to adapt to these changes, ontologies need to be updated in time. There are various types of semantic relationships between the elements of an ontology, whose strength varies, as do their impacts on the process of evolution. To reflect them in the analysis of the ripple-effect of ontology evolution, the common types of semantic relationships in ontologies are extracted and, on this basis, an SRG (Semantic Relationship Graph) model is proposed, in which the property-related semantic type substitutes for the property; the adjacency matrix and reachability matrix of the relationships among the elements in the ontology corresponding to this graph model are then established. The ripple-effect during the process of ontology evolution is analyzed using matrix operations to calculate the comprehensive influence of each node in the ontology. Experimental results show that the proposed method can provide accurate quantitative analysis of the ripple-effect of ontology evolution while remaining efficient.
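A reachability matrix of the kind used here is the transitive closure of the graph's adjacency matrix; a compact Warshall-style sketch on a toy graph (our own encoding, not the paper's SRG) is:

```python
import numpy as np

def reachability(adjacency: np.ndarray) -> np.ndarray:
    """Boolean transitive closure (Warshall): entry (i, j) is True when
    a change to element i can ripple to element j through any path."""
    r = adjacency.astype(bool)
    n = len(r)
    for k in range(n):
        # Allow paths that pass through intermediate node k.
        r |= np.outer(r[:, k], r[k, :])
    return r

# Toy graph: 0 -> 1 -> 2, node 3 isolated; a change at 0 reaches 1 and 2.
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 0]])
print(reachability(adj).astype(int))
```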

Qiuyao Lv, Yingping Zhang, Jinguang Gu
Linearizability Proof of Stack Data

In order to guarantee the linearizability of stack data, linearization points are necessary. However, searching for linearization points is very difficult. We propose a simpler method which constrains stack operations to guarantee the linearizability of stack data, instead of using linearization points. First of all, based on the description of the problem, it is shown that if stack operations violate three basic properties, then data inconsistency will occur during push/pop operations. After that, it is proved that stack data is linearizable if and only if stack operations satisfy the three basic properties in the situation where a complete history is available. Lastly, extending the method to the general condition, it is shown that stack data is linearizable if a purely-blocking history can be completed and then meets the three properties.

Jun-Yan Qian, Guo-Qing Yao, Guang-Xi Chen, Ling-Zhong Zhao
Anonymous Mutual Authentication Scheme for Secure Inter-Device Communication in Mobile Networks

An anonymous authentication scheme provides privacy-preserving communication. Recently, many anonymous authentication schemes have been proposed, and various cryptanalyses and improvements have followed. Some schemes use high-cost functions, such as symmetric and asymmetric cryptographic functions, while others use low-cost functions such as hash functions and simple bitwise operations. Previous schemes are based on the client-server model, in which a server anonymously authenticates clients. In this paper, we propose an anonymous authentication scheme for communication between client-side devices. The proposed scheme preserves secure inter-device communication by providing anonymous mutual authentication and fair key agreement. Since the proposed scheme uses only low-cost functions, it can be applied to mobile networks, where lower energy consumption matters.

Youngseok Chung, Seokjin Choi, Dongho Won
Recommending Books for Children Based on the Collaborative and Content-Based Filtering Approaches

According to a study conducted by the National Institute of Child Health and Human Development, reading is the single most important skill necessary for a happy, productive, and successful life. A child who is an excellent reader is often confident, has a high level of self-esteem, and can easily make the transition from learning to read to reading to learn. Promoting good reading habits among children is essential, given the enormous influence of reading on students' development as learners and members of society. Unfortunately, very few (children's) websites or online applications recommend books to children, even though they could play a significant role in encouraging children to read. Popular book websites, such as goodreads.com, commonsensemedia.org, and readanybook.com, suggest books to children based on the popularity of books or rankings of books, which are not customized/personalized for each individual user and are likely to recommend books that users do not want or like. We have integrated the collaborative filtering (CF) approach and the content-based approach, in addition to predicting the grade levels of books, to recommend books for children. The user-based CF approach filters books appealing to each user based on users' ratings, whereas the content-based filtering method analyzes the descriptions of books rated by a user in the past and constructs a user profile to capture the user's preferences. Recent research has demonstrated that a hybrid approach, which combines content-based filtering and CF, is more effective in making recommendations. An empirical study has verified the effectiveness of our proposed children's book recommender.

Yiu-Kai Ng
Ontology Evaluation Approaches: A Case Study from Agriculture Domain

The quality of an ontology very much depends on its validity. Therefore, ontology validation and evaluation is a very important task. However, according to the current literature, there is no agreed method or approach to evaluate an ontology. The choice of a suitable approach very much depends on the purpose of validation or evaluation, the application in which the ontology is to be used, and the aspect of the ontology we are trying to validate or evaluate. We have developed a large user-centered ontology to represent agricultural information and relevant knowledge, in user context, for Sri Lankan farmers. In this paper, we describe the validation and evaluation procedures we applied to verify the content and examine the applicability of the developed ontology. We obtained expert suggestions and assessments for the criteria used to develop the ontology, as well as user feedback, especially from farmers, to measure the ontological commitment. The Delphi Method, the Modified Delphi Method and the OOPS! web-based tool were used to validate the ontology in terms of accuracy and quality. The implemented ontology was evaluated internally and externally to identify the deficiencies of the artifact in use. An online knowledge base with a SPARQL endpoint was created to share and reuse the domain knowledge; it was also used in the evaluation process. A mobile-based application was developed to check user satisfaction with the knowledge provided by the ontology. Since there is no single best or preferred method for ontology evaluation, we reviewed the various approaches used to evaluate the ontology and finally identified a classification of ontology evaluation approaches based on our work.

Anusha Indika Walisadeera, Athula Ginige, Gihan Nilendra Wikramanayake
Effort Estimation for Program Modification in Object Oriented Development

One of the major problems faced by software developers and managers is the estimation of the effort needed for the development and maintenance of a programming system. In this paper, estimation of the effort needed to update programs according to a given requirement change is discussed for the Object Oriented (OO) environment. Since the demand for these changes has to be met quickly, a method is required to estimate the effort of making the required changes. Methods exist for estimating effort in the OO environment, but none of them caters to the needs of updating requirements. We propose an upgraded approach to effort estimation which makes use of certain characteristics of the OO paradigm, specifically Inheritance and Encapsulation. We found that the degree of inheritance has to be considered for effort estimation, because it plays an important role in identifying which methods need to be modified and which can be reused as-is.

Yashvardhan Sharma
Populational Algorithm for Influence Maximization

Influence maximization is one of the most challenging tasks in network analysis and consists in finding a set of k seed nodes that maximizes the number of reached nodes, given a propagation model. This work presents a Genetic Algorithm for influence maximization in networks, considering the Spreading Activation model for influence propagation. Four strategies for constructing the initial population were explored: a random strategy, a PageRank-based strategy, and two strategies which consider the community structure and the communities to which the seeds belong. The results show that the GA was able to significantly improve the quality of the seed set, increasing the number of reached nodes by about 25 %.
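A bare-bones version of such a GA could look like the following sketch, with a toy spread function standing in for the Spreading Activation simulation; the population size, mutation rate and operators are our assumptions, not the paper's:

```python
import random

def ga_influence_max(nodes, spread, k=5, pop_size=30, generations=50):
    """Evolve sets of k seed nodes to maximize spread(seeds).

    `spread` is assumed to simulate the propagation model and return
    the number of reached nodes."""
    def crossover(a, b):
        # Merge the parents' seeds and draw k distinct ones.
        return random.sample(list(set(a) | set(b)), k)

    def mutate(ind):
        if random.random() < 0.2:          # assumed mutation rate
            ind = ind[:]
            replacement = random.choice(nodes)
            if replacement not in ind:     # keep the k seeds distinct
                ind[random.randrange(k)] = replacement
        return ind

    population = [random.sample(nodes, k) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=spread, reverse=True)
        elite = population[: pop_size // 2]
        offspring = [mutate(crossover(*random.sample(elite, 2)))
                     for _ in range(pop_size - len(elite))]
        population = elite + offspring
    return max(population, key=spread)

# Toy demo: "spread" just counts distinct neighbourhoods of 10 nodes reached.
nodes = list(range(100))
print(ga_influence_max(nodes, spread=lambda s: len({n // 10 for n in s})))
```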

Carolina Ribeiro Xavier, Vinícius da Fonseca Vieira, Alexandre Gonçalves Evsukoff
Software Variability Composition and Abstraction in Robot Control Systems

Control systems for autonomous robots are concurrent, distributed, embedded, real-time and data-intensive software systems. A real-world robot control system is composed of tens of software components. For each component providing robotic functionality, tens of different implementations may be available. The difficult challenge in robotic system engineering consists in selecting a coherent set of components which provide the functionality required by the application requirements, taking into account their mutual dependencies. This challenge is exacerbated by the fact that robotics system integrators and application developers are usually not specifically trained in software engineering. Current approaches to variability management in complex software systems consist in explicitly modeling variation points and variants in software architectures in terms of Feature Models. The main contribution of this paper is the definition of a set of models and modeling tools that allow the hierarchical composition of Feature Models, which use specialized vocabularies for robotics experts with different skills and expertise.

Davide Brugali, Mauro Valota
A Methodological Approach to Identify Type of Dependency from User Requirements

Agents exhibit a high degree of inter-agent cooperation to achieve designated goals. The success of a Multi-Agent System (MAS) depends on how well these agents cooperate with each other. Modeling cooperation, with a focus on finding an appropriate agent for delegating a task, is a challenging area of MAS, as it requires thorough analysis of the dependencies that exist among agents. In MAS, an agent may exhibit various dependencies, viz. goal, task, soft goal or resource, that may complicate the final system. To handle the intricacy of such a system, it is crucial to identify the various inter-agent dependencies from user requirements during the early phases of requirements engineering. This work employs the notion of a user story, composed of users' requirements, in modeling the types of dependencies in an Agent Oriented System. A methodological approach using Fuzzy set theory, lexical analysis and the Vector Model is used to identify the Type of Dependency (ToD) from user requirements well before the development of the final system. Lexical analysis is applied to obtain index terms from User Stories, which eventually are classified into various categories, viz. (i) Quality requirements, (ii) Supplementary guidelines and (iii) want of a Resource or Information. These index terms are analyzed on the basis of their physical occurrences in User Stories, as well as the importance perceived by users, to identify inter-agent dependencies. The inter-agent dependencies, if identified at the requirements stage, assist developers in addressing inter-agent coordination issues, reducing trivial dependencies and thereby unnecessary communication overheads. A case study using a Materials e-Procurement System is presented to illustrate the proposed approach.
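The step of extracting weighted index terms from user stories can be approximated with a standard TF-IDF pass; the stories below are hypothetical, and the paper's actual corpus, weighting scheme and fuzzy classification differ:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical user stories for an e-procurement system.
stories = [
    "As a buyer I want to search the materials catalogue quickly",
    "As a manager I need a report of pending purchase orders",
    "As a supplier I want secure access to tender documents",
]
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(stories)

# Top-weighted index terms per story, a crude stand-in for the lexical
# analysis that precedes dependency-type classification.
terms = vec.get_feature_names_out()
for i, row in enumerate(tfidf.toarray()):
    top = sorted(zip(row, terms), reverse=True)[:3]
    print(f"story {i}:", [t for w, t in top if w > 0])
```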

Anuja Soni, Vibha Gaur
Evolution of XSD Documents and Their Variability During Project Life Cycle: A Preliminary Study

During a software system's life cycle, project modifications occur for different reasons. Regarding web services, modifications of communication contracts are equally common, which induces the need for adaptation in every system node. To help reduce the impact of contract changes on software source code, it is necessary to understand how these contract changes occur. This paper presents a preliminary study on the change history of different open-source projects that define XSD documents, specifying metrics for such files, extracting them by mining software repositories and analyzing their evolution during the project life cycle. Based on the results, and considering that Web Service Definition Language (WSDL) contracts use XSD, a deeper study focused on web service projects only is further proposed to assess what exactly is changed at each contract revision, possibly revealing change tendencies that could support easy-to-adapt web service development.

Diego Benincasa Fernandes Cavalcanti de Almeida, Eduardo Martins Guerra
Efficient, Scalable and Privacy Preserving Application Attestation in a Multi Stakeholder Scenario

Measurement and reporting of the dynamic behavior of a target application is a pressing issue in the Trusted Computing paradigm. Remote attestation is a part of trusted computing which allows monitoring and verification of a complete operating system or a specific application by a remote party. Several remote attestation techniques have been proposed in the past, but most of the feasible ones are static in nature. However, such techniques cannot cater to dynamic attacks such as the infamous Heartbleed bug. Dynamic attestation offers a solution to this issue but is impractical due to the infeasibility of measuring and reporting enormous amounts of runtime data. To an extent, it is possible to measure and report the dynamic behavior of a single application, but not of the complete operating system. The contribution of this paper is the design and implementation of a scalable dynamic remote attestation mechanism that can measure and report multiple applications from different stakeholders simultaneously while ensuring the privacy of each stakeholder. We have implemented our reference monitor and tested it on the Linux kernel. We show through empirical results that this design is highly scalable and feasible for a large number of stakeholders.

Toqeer Ali, Jawad Ali, Tamleek Ali, Mohammad Nauman, Shahrulniza Musa
An Approach for Code Annotation Validation with Metadata Location Transparency

The use of metadata in software development, especially through code annotations, has emerged to complement some limitations of object-oriented programming. A recent study revealed that a lack of validation of the configured metadata can lead to bugs that are hard to identify and correct. There are approaches to optimize metadata configuration that add the annotation outside the target code element, such as its definition on the enclosing code element or indirectly inside other annotations. Annotation validation rules that rely on the presence of other annotations are especially hard to enforce when it is possible to configure them outside the target element. Available approaches for annotation validation in the literature consider their presence only in the target element. This paper presents an approach for validating code annotations in object-oriented software with location transparency, where definitions can occur in different parts of the source code related to the target element. An evaluation with a meta-framework supports our hypothesis that the approach is capable of decoupling the annotation location from the validation rules.

José Lázaro de Siqueira Jr., Fábio Fagundes Silveira, Eduardo Martins Guerra
Towards a Software Engineering Approach for Cloud and IoT Services in Healthcare

In this work a tailored approach for requirements engineering for risk management solutions is presented. The approach focuses on cross-disciplinary risk paradigms in inpatient healthcare and aims to provide a novel methodology that accounts for specific viewpoints of relevant expert groups. The presented results demonstrate the feasibility and usefulness of the approach in the context of the current evaluation within the German market.

Lisardo Prieto-Gonzalez, Gerrit Tamm, Vladimir Stantchev
Smaller to Sharper: Efficient Web Service Composition and Verification Using On-the-fly Model Checking and Logic-Based Clustering

Model checking (MC) is an emerging approach recently suggested for the problem of Web Service Composition (WSC), since it can ensure both soundness and completeness when verifying whether a WSC solution fulfills a formally described goal. However, as the number of web services to be considered in practice is often very large, the MC-based approach suffers from the state space explosion problem. Clustering has naturally been considered for reducing the number of candidates in the WSC problem. However, as typical clustering techniques are mostly semi-formal in terms of cluster representation, they pose a dilemma for maintaining both soundness and completeness. In this paper, we handle this problem by suggesting a logic-based approach to clustering. This work makes twofold contributions: we propose a logic-based similarity between web services, which results in more reasonable clustering results; and we represent the generated clusters as logical formulas, enjoying a seamless integration between web service clustering and MC. This approach eventually brings a significant improvement in WSC performance when applied to a real and relatively large repository of web services.

Khai Huynh, Tho Quan, Thang Bui
MindDomain: An Interoperability Tool to Generate Domain Models Through Mind Maps

Requirements engineering establishes that a requirements definition process must be applied to obtain, validate and maintain one or more requirements documents. This process handles different stakeholders' expectations and viewpoints, among them the software designer, whose responsibility is to create software models from information provided by domain experts and business specialists. However, due to knowledge differences between stakeholders' technical dialects, communication problems are constant, generating inconsistencies between the conceptual model and the problem to be solved. To help solve these issues, an agile and cognitive-modeling-based approach supported by MDA-based tools is proposed, promoting better consistency between requirements and the conceptual models, guaranteed by specifying a mind map that serves as the basis for translating requirements to domain models, represented by UML class diagrams and feature models. Thus, the main contribution of this work is to provide an interoperability tool to generate software models (e.g. class diagrams and feature models) from mind maps. This tool provides the capability of transformation between different industrial mind map tools (including cloud tools - SaaS) and different domain modelling tools, for both class diagrams and feature models. Finally, a case study was carried out to verify this feasibility and check this interoperability assessment. The main contribution is that MindDomain can be used in small projects for agile requirements modeling solutions.

Alejo Ceballos, Fernando Wanderley, Eric Souza, Gilberto Cysneiros
Using Scrum Together with UML Models: A Collaborative University-Industry R&D Software Project

Conducting research and development (R&D) software projects, in an environment where both industry and university collaborate, is challenging due to many factors. In fact, industrial companies and universities have generally different interests and objectives whenever they collaborate. For this reason, it is not easy to manage and negotiate the industrial companies’ interests, namely schedules and their expectations. Conducting such projects in an agile framework is expected to decrease these risks, since partners have the opportunity to frequently interact with the development team in short iterations and are constantly aware of the characteristics of the system under development. However, in this type of collaborative R&D projects, it is often advantageous to include some waterfall practices, like upfront requirements modeling using UML models, which are not commonly used in agile processes like Scrum, in order to better prepare the implementation phase of the project. This paper presents some lessons learned that result from experience of the authors in adopting some Scrum practices in a R&D project, like short iterations, backlogs, and product increments, and simultaneously using UML models, namely use cases and components.

Nuno Santos, João M. Fernandes, M. Sameiro Carvalho, Pedro V. Silva, Fábio A. Fernandes, Márcio P. Rebelo, Diogo Barbosa, Paulo Maia, Marco Couto, Ricardo J. Machado
Rockfall Hazard Assessment in an Area of the “Parco Archeologico Storico-Naturale Delle Chiese Rupestri” of Matera (Basilicata Southern–Italy)

This work contains the results of the methodology used for the evaluation of the possible trajectories of unstable blocks and their location along a slope that could destroy valuable rupestrian testimonies. The software used is RocFall, developed by Rocscience (2002); it is an important tool for rockfall risk assessment. In the study area various simulations have been performed; they have led to the evaluation of the different parameters of the blocks' movements (trajectories, maximum bounce heights, propagation distances and energies of the blocks), in order to obtain a mapping of areas with different susceptibility to the transit and invasion of blocks. In order to describe the blocks' movement, the RocFall software applies the parabolic equation of motion of a body in free fall and the principle of total energy conservation. This work led to the result that the southern side of the site "Belvedere delle Chiese Rupestri" presents high criticality with respect to collapse phenomena, which affects the archaeological assets therein.
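The parabolic free-fall equation and energy-conservation principle the abstract cites take the standard form (notation assumed, not taken from the paper):

$$y(x) = y_0 + x\tan\alpha - \frac{g\,x^{2}}{2\,v_0^{2}\cos^{2}\alpha}, \qquad \tfrac{1}{2}\,m v^{2} + m g h = \text{const},$$

where v_0 and α are the launch speed and angle of the detached block, and g is the gravitational acceleration.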

Lucia Losasso, Stefania Pascale, Francesco Sdao
Integrating Computing to STEM Curriculum via CodeBoard

Introductory programming has always suffered from low performance rates. These low performance rates are closely tied to high failure rates and low retention in introductory programming classes. The goal of this research is to develop models and instrumentation capable of giving insight into STEM students' performance, learning patterns and behavior. This insight is expected to shed some light on low performance rates and also pave the way for formative measures to be taken. CodeBoard is a programming platform capable of managing and assessing student programming using a functional test-driven approach. Instructors develop programming assignments along with corresponding test cases, which are then used as grading templates to evaluate student programs. The second phase of this research involves developing models for measuring and capturing events relevant to student performance over time. The preliminary results show that CodeBoard is promising.

Hongmei Chi, Clement Allen, Edward Jones
Trusted Social Node: Evaluating the Effect of Trust and Trust Variance to Maximize Social Influence in a Multilevel Social Node Influential Diffusion Model

The use of social networking sites has been very successful for large-scale information sharing. Hence, a vast range of possible applications for different people and organizations has emerged. Although the use of social networking sites for large-scale information sharing and the spreading of messages on these platforms is considerably effective, this research hypothesizes that trust is able to increase the rate of successfully influenced social nodes. Trust is the fundamental motivation that leads people to cooperate towards a common purpose. This paper discusses trust - a measure of belief and disbelief - using experimental simulation to evaluate and compare the rate of successfully influenced social nodes based on the Trusted Social Node (TSN). The paper incorporates trust variance and a social node impact factor into the Genetic Algorithm Diffusion Model (GADM) to analyze the rate of successful influence with and without the presence of trust in the algorithm. The results produced are a set of influence diffusion time graphs showing an increased rate of successfully influenced social nodes in the presence of trust metrics.

Hock-Yeow Yap, Tong-Ming Lim
Enhanced Metaheuristics with the Multilevel Paradigm for MAX-CSPs

As many real-world optimization problems become increasingly complex and hard to solve, better optimization algorithms are always needed. Nature-inspired algorithms such as genetic algorithms and simulated annealing, which belong to the class of evolutionary algorithms, are regarded as highly successful when applied to a broad range of discrete as well as continuous optimization problems. This paper introduces the multilevel paradigm combined with genetic algorithms and simulated annealing for solving the maximum constraint satisfaction problem. The promising performance achieved by the proposed approach is demonstrated by comparisons on conventional random benchmark problems.

Noureddine Bouhmala, Mikkel Syse Groesland, Vetle Volden-Freberg
Processing of a New Task in Conditions of the Agile Management with Using of Programmable Queues

The paper deals with the unpredictable appearance of a new project task during the real-time design of a software-intensive system. Usually, this occurs at a time when the designer is working in a multitasking mode and other members of the designers' team are also working on delegated tasks. For regulating such activity, the use of Agile management alone is not sufficient. In the described case study, Agile means are combined with means of interruption management, adjusted to process the interruption reason "New task". In processing the new task, the designer applies a "model of precedent" framework, iteratively filling it with content using figurative-semantic support. The combination of these means is implemented in the OwnWIQA toolkit.

P. Sosnin
Using CQA History to Improve Q&A Experience

Social query is the practice of sharing questions through collaborative environments. In order to receive help, askers usually broadcast their request to the entire community. However, the prerequisite for receiving help is to have the problem noticed by someone able and available to answer. Some works have found a correlation between the characteristics of questions and the outcome of receiving an answer or not. These findings suggest that some characteristics are more likely to attract the attention of helpers. Our proposal is to analyse CQA history to identify the shared characteristics of previously asked questions that were answered. We believe that adding these characteristics to new questions will affect whether answers are received. We evaluate our proposal using real-world data and a real-world experiment. Our results indicate that including "good characteristics" in a question reduces the time to first response and improves answer quality.

Cleyton Souza, Franck Aragão, José Remígio, Evandro Costa, Joseana Fechine
The Use of Computer Technology as a Way to Increase Efficiency of Teaching Physics and Other Natural Sciences

The use of computer technologies for increasing the efficiency of physics teaching is discussed. The proposed approach includes the development of professionally oriented teaching, the use of multimedia lecture courses, the use of Learning Management Systems (such as Blackboard Learn and MOODLE), webinars, etc. It is proposed to organize a distributed educational system in the form of a Grid infrastructure joining the educational facilities of four St. Petersburg universities. This Grid infrastructure can provide equal access to all kinds of resources for students and lecturers, in order to teach and study physics more efficiently.

E. N. Stankova, A. V. Barmasov, N. V. Dyachenko, M. N. Bukina, A. M. Barmasova, T. Yu. Yakovleva
A Nonlinear Multicriteria Model for Team Effectiveness

The study of team effectiveness has received significant attention in recent years. Team effectiveness is an important subject, since teams play an increasingly decisive role in modern organizations. Its study is inherently a multicriteria problem, as different criteria are typically required to assess team effectiveness. Among the different aspects of interest in the study of team effectiveness, one of the utmost importance is to acknowledge, as accurately as possible, the relationships that team resources and team processes establish with team effectiveness. Typically, these relationships are studied using linear models, which fail to explain the complexity inherent to group phenomena. In this study we propose a novel approach using radial basis functions to construct a multicriteria nonlinear model that more accurately captures the relationships between team resources/processes and team effectiveness. By combining principal component analysis, radial basis function interpolation, and cross-validation for model parameter tuning, we obtained a data fitting method that generated an approximate response with reliable trend predictions between the given data points.
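A minimal sketch of the nonlinear fitting step, using SciPy's radial basis function interpolator on hypothetical two-predictor data (the actual study uses principal components of many team variables and tunes parameters by cross-validation):

```python
import numpy as np
from scipy.interpolate import Rbf

# Hypothetical team data: two aggregate predictors (e.g. principal
# components of resources and processes) and an effectiveness score.
rng = np.random.default_rng(1)
x1, x2 = rng.random(40), rng.random(40)
effectiveness = np.sin(3 * x1) + x2**2 + 0.05 * rng.standard_normal(40)

# Gaussian RBF interpolant; smooth > 0 trades exactness for stability
# and would be tuned by cross-validation, as the abstract describes.
model = Rbf(x1, x2, effectiveness, function="gaussian", smooth=1e-3)
print(model(0.5, 0.5))  # nonlinear prediction at a new point
```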

Isabel Dórdio Dimas, Humberto Rocha, Teresa Rebelo, Paulo Renato Lourenço
Assessment of the Code Refactoring Dataset Regarding the Maintainability of Methods

Code refactoring has a solid theoretical background while also being used in development practice. However, previous works found controversial results on the nature of code refactoring activities in practice. Both their application context and their impact on code quality need further examination. Our paper encourages the investigation of code refactoring in practice by providing an extensive open dataset of source code metrics and applied refactorings through several releases of 7 open-source systems. We already demonstrated the practical value of the dataset by analyzing the quality attributes of the refactored source code classes and the values of the source code metrics improved by those refactorings. In this paper, we have gone one step deeper and explored the effect of code refactorings at the level of methods. We found that, similarly to the class level, lower maintainability indeed triggers more code refactorings in practice at the level of methods, and these refactorings significantly decrease size, coupling and clone metrics.

István Kádár, Péter Hegedűs, Rudolf Ferenc, Tibor Gyimóthy
A Public Bug Database of GitHub Projects and Its Application in Bug Prediction

Detecting defects in software systems is an evergreen topic, since there is no real-world software without bugs. Many different bug locating algorithms have been presented recently that can help to detect hidden and newly occurring bugs in software. Papers trying to predict the faulty source code elements or code segments in a system always use experience from the past. In most cases these studies construct a database for their own purposes and do not make the gathered data publicly available. Public datasets are rare; however, a well-constructed dataset could serve as a benchmark test input. Furthermore, open-source software development is rapidly increasing, which also gives an opportunity to work with public data. In this study we selected 15 Java projects from GitHub from which to construct a public bug database. We matched the already known and fixed bugs with the corresponding source code elements (classes and files) and calculated a wide set of product metrics on these elements. After creating the desired bug database, we investigated whether the built database is usable for bug prediction. We used 13 machine learning algorithms to address this research question and finally achieved F-measure values between 0.7 and 0.8. Besides the F-measure values, we calculated the bug coverage ratio on every project for every machine learning algorithm. We obtained very high and promising bug coverage values (up to 100 %).
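A sketch of the kind of experiment the paper runs, on synthetic stand-in data (the real dataset holds class/file-level product metrics from the 15 GitHub projects; the classifier chosen here is one plausible example of the 13, picked by us):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the bug dataset: rows are classes/files,
# columns are product metrics, labels mark elements that had fixed bugs.
rng = np.random.default_rng(42)
X = rng.random((1000, 20))                    # product metrics
y = (X[:, 0] + X[:, 1] > 1.1).astype(int)     # synthetic "buggy" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("F-measure:", f1_score(y_te, clf.predict(X_te)))  # paper reports 0.7-0.8
```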

Zoltán Tóth, Péter Gyimesi, Rudolf Ferenc
Towards an Intelligent System for Monitoring Health Complaints

Users' dissatisfaction with healthcare institutions in Portugal has increased in recent years. This fact can be seen in the increase in formal complaints, which the responsible regulatory entity in this country has been receiving daily. More and more technical efforts have been made to understand and analyze this tendency. In this paper, the authors intend to present a group of studies about the distribution and use of words in the formulation of a formal complaint, aimed at finding a characteristic pattern. This paper presents an analysis of the words used in complaints and a data analysis platform with dashboards, offering the opportunity to visualize the information in different ways.

André Oliveira, Filipe Portela, Manuel Filipe Santos, José Neves
Proposal to Reduce Natural Risks: Analytic Network Process to Evaluate Efficiency of City Planning Strategies

Natural hazards have a greater social and economic impact in urban areas, because urbanization and economic development increase the concentration of people and assets in high-risk-prone areas: hazards generate risks in relation to the population's exposure and its physical and economic assets. Advanced urban planning is one of the disciplines involved in the process of reducing human exposure and risks: by defining strategic actions, it can reduce losses following natural disasters and, at the same time, ensure a flexible design able to absorb external impacts, to transform and to adapt itself, increasing urban resilience. The paper defines two possible strategies of intervention in the city as risk mitigation methods: Areal Change and Functional Change. The authors describe the use of scenario planning and multicriteria evaluation (the ANP method) to deepen suitable strategies at the urban level. Moreover, the authors introduce the first instruments useful for the future application of the presented method: ideal city modeling and evaluation criteria definition.

Roberto De Lotto, Veronica Gazzola, Silvia Gossenberg, Cecilia Morelli di Popolo, Elisabetta Maria Venco
Data Processing for a Water Quality Detection System on Colombian Rio Piedras Basin

Freshwater is considered one of the planet's most important renewable natural resources. In this sense, it is vital to study and evaluate water quality in rivers and basins. One study area is the Rio Piedras Basin, which is the main water supply source for 9 rural communities in Colombia. Nevertheless, these communities do not perform water quality control. Different research has been conducted to develop water quality detection systems through supervised learning algorithms. However, these research approaches set aside the data processing needed to improve the outcomes of supervised learning algorithms. This paper presents an improvement of data processing techniques for a water quality detection system based on supervised learning and data quality techniques for the Rio Piedras Basin.

Edwin Castillo, David Camilo Corrales, Emmanuel Lasso, Agapito Ledezma, Juan Carlos Corrales
Validation of Coffee Rust Warnings Based on Complex Event Processing

Coffee rust is the main coffee crop disease in the world. In Colombian and Brazilian plantations, the damage leads to yield reductions of 30 % and 35 % respectively in regions where the meteorological conditions are propitious to the disease. Recently, researchers have focused on detecting the coffee rust disease starting from climate monitoring and crop control parameters; however, most monitoring systems lack the ability to process information from multiple sources and analyse it in order to identify abnormal situations and validate the generated warnings. In this paper, we propose the integration of a CEP engine and a prediction system for early warning systems applied to coffee rust detection, capable of analysing multiple incoming events from the monitoring system and validating the detected warnings; an experimental prototype was evaluated in a field test with satisfactory results.

Julián Eduardo Plazas, Juan Sebastián Rojas, David Camilo Corrales, Juan Carlos Corrales
Backmatter
Metadata
Title
Computational Science and Its Applications -- ICCSA 2016
Edited by
Osvaldo Gervasi
Beniamino Murgante
Sanjay Misra
Ana Maria A.C. Rocha
Carmelo M. Torre
David Taniar
Bernady O. Apduhan
Elena Stankova
Shangguang Wang
Copyright Year
2016
Electronic ISBN
978-3-319-42089-9
Print ISBN
978-3-319-42088-2
DOI
https://doi.org/10.1007/978-3-319-42089-9