
2021 | Book

Enterprise Information Systems

22nd International Conference, ICEIS 2020, Virtual Event, May 5–7, 2020, Revised Selected Papers

Edited by: Prof. Dr. Joaquim Filipe, Michał Śmiałek, Alexander Brodsky, Slimane Hammoudi

Publisher: Springer International Publishing

Book series: Lecture Notes in Business Information Processing

About this book

This book constitutes extended, revised and selected papers from the 22nd International Conference on Enterprise Information Systems, ICEIS 2020, held online during May 5-7, 2020.

The 41 papers presented in this volume were carefully reviewed and selected for inclusion in this book from a total of 255 submissions. They were organized in topical sections as follows: database and information systems integration; artificial intelligence and decision support systems; information systems analysis and specification; software agents and internet computing; human-computer interaction; and enterprise architecture.

Table of Contents

Frontmatter
Correction to: A Modern Approach to Personalize Exergames for the Physical Rehabilitation of Children Suffering from Lumbar Spine

In the originally published version of chapter 35, the special characters in the names of Cristian Gómez-Portes and Santiago Sánchez-Sobrino had been left out initially due to conversion errors. This has been corrected.

Cristian Gómez-Portes, Carmen Lacave, Ana I. Molina, David Vallejo, Santiago Sánchez-Sobrino

Databases and Information Systems Integration

Frontmatter
Anonimisation, Impacts and Challenges into Big Data: A Case Studies

In a context in which privacy is increasingly demanded by citizens and by various institutions, reflected in protection laws, anonymisation emerges as an essential tool. Both the General Data Protection Regulation (GDPR) in the EU and the Brazilian General Data Protection Law (LGPD) provide lighter regulation for anonymised data than for personal data. Despite the legal advantages of their use, anonymisation tools have limits that should be considered, especially in massive data contexts. This work analyzes whether anonymisation techniques can satisfactorily ensure privacy in big data environments without taking other measures in favor of privacy. Based on two hypothetical cases, we find that anonymisation techniques, even when well implemented, must be combined with governance techniques to avoid latent breaches of privacy. Besides that, we point out some guidelines identified in the case studies for the use of anonymised data in big data environments.
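
To make the limits of naive anonymisation concrete, the sketch below checks k-anonymity, one standard anonymisation property, over a toy dataset (a minimal illustration assuming pandas; the column names and the choice of quasi-identifiers are hypothetical, and the paper's case studies may rely on different safeguards):

```python
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list) -> int:
    """k of a dataset: the size of the smallest group of records sharing
    the same quasi-identifier values; k == 1 means re-identification."""
    return int(df.groupby(quasi_identifiers).size().min())

# Hypothetical records: zip code, age band and gender are quasi-identifiers.
records = pd.DataFrame({
    "zip":       ["70000", "70000", "70001", "70001"],
    "age_band":  ["20-30", "20-30", "20-30", "30-40"],
    "gender":    ["F", "F", "M", "M"],
    "diagnosis": ["A", "B", "A", "C"],  # sensitive attribute
})
print(k_anonymity(records, ["zip", "age_band", "gender"]))  # -> 1
```

Even a dataset with k > 1 can be re-identified by linking several releases, which is one reason the abstract argues for pairing anonymisation with governance.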

Artur Potiguara Carvalho, Edna Dias Canedo, Fernanda Potiguara Carvalho, Pedro Henrique Potiguara Carvalho
Streaming Set Similarity Joins

We consider the problem of efficiently answering set similarity joins over streams. This problem is challenging both in terms of CPU cost, because similarity matching is computationally much more expensive than equality comparisons, and in terms of memory requirements, due to the unbounded nature of streams. This article presents SSTR, a novel similarity join algorithm for streams of sets. We adopt the concept of temporal similarity and exploit its properties to improve efficiency and reduce memory usage. Furthermore, we propose a sampling-based technique for ordering set elements that increases the pruning power of SSTR and thus further reduces the number of similarity comparisons and memory consumption. We provide an extensive experimental study on several synthetic as well as real-world datasets. Our results show that the proposed techniques significantly reduce memory consumption, improve scalability, and lead to substantial performance gains over the baseline approach.
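
As a rough illustration of the problem setting (not the SSTR algorithm itself, whose ordering and pruning techniques are the paper's contribution), the following Python sketch runs a naive streaming self-join under one plausible reading of temporal similarity, namely Jaccard similarity damped by the age gap between sets:

```python
import math
from collections import deque

def temporal_jaccard(a, b, t_a, t_b, decay=0.01):
    """Jaccard similarity damped by the age gap between two sets (one way
    to instantiate temporal similarity; the paper's definition may differ)."""
    return len(a & b) / len(a | b) * math.exp(-decay * abs(t_a - t_b))

def stream_join(stream, threshold=0.4, horizon=500.0):
    """Naive streaming self-join: match each arriving set against every set
    still inside the time horizon. SSTR improves on such a baseline with
    prefix pruning over sampled element orderings."""
    window = deque()  # (timestamp, set) pairs, oldest first
    for t, s in stream:
        while window and t - window[0][0] > horizon:
            window.popleft()  # evict sets too old to ever match again
        for t_old, s_old in window:
            if temporal_jaccard(s, s_old, t, t_old) >= threshold:
                yield (t_old, s_old), (t, s)
        window.append((t, s))

pairs = list(stream_join([(0, {1, 2, 3}), (10, {1, 2, 4}), (600, {1, 2, 3})]))
```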

Lucas Pacífico, Leonardo Andrade Ribeiro
Flexible OPC UA Data Load Optimizations on the Edge of Production

Recent trends like the (Industrial) Internet of Things and Industry 4.0 lead to highly integrated machines and thus to greater challenges in dealing with data, mostly with respect to its volume and velocity. It is impossible to collect all data available, both at maximum breadth (number of values) and maximum depth (frequency and precision). The goal is to achieve an optimal trade-off between bandwidth utilization and information transmitted. This requires optimized data collection strategies, which can profit extensively from involving the domain expert's knowledge about the process. In this paper, we build on our previously presented optimized data load methods, which leverage process-driven data collection. These enable data providers (i) to split their production process into phases, (ii) to precisely define, for each phase, what data to collect and how, and (iii) to model transitions between phases via a data-driven method. This paper extends the previous approach in both breadth and depth and focuses especially on making its benefits, like the demonstrated 39% savings in bandwidth, accessible to domain experts. We propose a novel, user-friendly assistant that enables domain experts to define, deploy and maintain a flexible data integration pipeline from the edge of production to the cloud.
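
A minimal sketch of what such a process-driven collection configuration could look like (all names, node ids and thresholds are hypothetical; the paper's assistant generates far richer pipelines):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Signal:
    node_id: str      # OPC UA node to read, e.g. "ns=2;s=Piston.Velocity"
    sampling_ms: int  # collection depth: sampling interval in this phase

@dataclass
class Phase:
    name: str
    signals: list                 # collection breadth within the phase
    done: Callable[[dict], bool]  # data-driven transition out of the phase

# Hypothetical die-casting process split into two phases that differ in
# both breadth (which signals) and depth (how often they are sampled).
phases = [
    Phase("filling", [Signal("ns=2;s=Piston.Velocity", 10)],
          done=lambda sample: sample["piston_pos"] >= 0.95),
    Phase("solidification", [Signal("ns=2;s=Die.Temperature", 1000)],
          done=lambda sample: sample["die_temp"] < 200.0),
]
```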

Johannes Lipp, Michael Rath, Maximilian Rudack, Uwe Vroomen, Andreas Bührig-Polaczek
Using Image Mining Techniques from a Business Process Perspective

Business process modeling is an established method to improve business procedures and to provide more insights into internal workflows. Once the process is visualized in a business process model, future process executions correspond to the workflow prescribed by the process model. Process details like input specifications or the order of internal sub-steps are only considered during process execution if contained in the process model. These details may be decisive since they can have an impact on the success of the overall process. In some cases, such important process details are not modeled for different reasons, e.g., modeling at a high degree of abstraction to preserve traceability. Nevertheless, it is necessary to identify missing but essential process details that reduce the success rate of a process. In this paper, we present a conceptual approach that uses image mining techniques to analyze and extract process details from image data recorded during process executions. We propose to redesign business process models considering the analysis results to ensure successful process executions. We discuss different requirements regarding the image analysis output and present an exemplary prototype.

Myriel Fichtner, Stefan Schönig, Stefan Jablonski

Artificial Intelligence and Decision Support Systems

Frontmatter
A Machine Learning Based Framework for Enterprise Document Classification

Enterprise Content Management (ECM) systems store large amounts of documents that have to be conveniently labelled for easy management and searching. The classification rules behind the labelling process are informal and tend to change, which complicates the labelling even more. To address this challenge, we propose a machine learning based document classification framework (Framework) that allows for continuous retraining of the classification bots, for easy analysis of the bot training results, and for tuning of further training runs. Documents and metadata fields typical for ECM systems are used in the research. The Framework comprises selection of classification and vectorization methods, configuration of the methods' hyperparameters, and general learning-related parameters. The model provides users with visual tools to analyze the classification performance and to tune the further steps of the learning. Two particular challenges are addressed: handling informal and changing criteria for document classification, and dealing with imbalanced data sets. A prototype of the proposed Framework is developed, and a short analysis of the prototype's performance is presented in the article.
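
One way such a Framework's method selection and hyperparameter configuration step might be instantiated, sketched with scikit-learn (the actual system, its bots and its visual tools are more elaborate; docs and labels are placeholders):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Vectorizer and classifier as swappable pipeline stages, mirroring the
# Framework's selection of vectorization and classification methods.
pipeline = Pipeline([
    ("vectorize", TfidfVectorizer()),
    # class_weight="balanced" is one common answer to imbalanced labels.
    ("classify", LogisticRegression(class_weight="balanced", max_iter=1000)),
])

search = GridSearchCV(
    pipeline,
    param_grid={
        "vectorize__ngram_range": [(1, 1), (1, 2)],
        "classify__C": [0.1, 1.0, 10.0],
    },
    scoring="f1_macro",  # macro-averaged F1 is fairer on rare labels
    cv=5,
)
# docs: document texts plus ECM metadata fields; labels: the ECM labels.
# search.fit(docs, labels)
```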

Juris Rāts, Inguna Pede, Tatjana Rubina, Gatis Vītols
Improving Corporate Support by Predicting Customer e-Mail Response Time: Experimental Evaluation and a Practical Use Case

Customer satisfaction is an important aspect of any corporation's customer support process. One important factor is keeping the time customers wait for a reply at acceptable levels. By utilizing learning models based on the Random Forest algorithm, we investigate the extent to which it is possible to predict e-mail time-to-respond (TTR). This is investigated both for customers and for customer support agents: the former focusing on how long until customers reply, and the latter on how long until a customer receives an answer. The models are trained on a data set consisting of 51,682 customer support e-mails covering various topics from a large telecom operator. The models are able to predict the time-to-respond for customer support agents with an AUC of 0.90, and for customers with an AUC of 0.85. These results indicate that it is possible to predict the TTR for both groups. The approach was also implemented in an initial trial in a live environment. How the predictions can be applied to improve communication efficiency, e.g., by anticipating staffing needs in customer support, is discussed in more detail in the paper. Further, insights gained from the initial implementation are provided.

Anton Borg, Jim Ahlstrand, Martin Boldt
A Mixed Approach for Pallet Building Problem with Practical Constraints

We study a pallet building problem that originates from a case study in a company that produces robotized systems for freight transportation and logistics. We generalize the problem by including the concept of family of items, which allows us to consider specific constraints such as visibility and contiguity. We solve the problem with an algorithm based on a two-step strategy: an Extreme Points heuristic is used to group items into horizontal layers and an exact method is invoked to stack layers one over the other to form pallets. The performance of the algorithm is assessed through extensive computational tests on real-world instances. The results show that the exact model considerably increases the solution quality, creating very compact packings with a limited computational effort.
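
The second, exact step can be pictured as a small knapsack-style model: select which heuristic layers to stack on a pallet of bounded height so as to maximize packed volume. The sketch below uses PuLP with invented numbers; the paper's model also covers multiple pallets and the family constraints such as visibility and contiguity:

```python
import pulp

layer_height = [14, 12, 18, 10, 16]    # cm, one entry per candidate layer
layer_volume = [90, 70, 120, 55, 100]  # packed volume per layer
MAX_HEIGHT = 40                        # pallet height limit

prob = pulp.LpProblem("stack_layers", pulp.LpMaximize)
use = [pulp.LpVariable(f"use_{i}", cat="Binary")
       for i in range(len(layer_height))]
# Objective: maximize the total packed volume of the stacked layers.
prob += pulp.lpSum(layer_volume[i] * use[i] for i in range(len(use)))
# Constraint: the chosen layers must fit within the pallet height.
prob += pulp.lpSum(layer_height[i] * use[i] for i in range(len(use))) <= MAX_HEIGHT

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [i for i in range(len(use)) if use[i].value() > 0.5]
```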

Manuel Iori, Marco Locatelli, Mayron C. O. Moreira, Tiago Silveira
A Reference Process and Domain Model for Machine Learning Based Production Fault Analysis

Early detection of errors in production processes is of crucial importance for manufacturing companies. With the advent of machine learning (ML) methods, the interaction of ML and human expertise offers the opportunity to develop a targeted understanding of error causes and thus to proactively avoid errors. The power of such models can only be used if relevant domain knowledge is taken into account and applied correctly. When using ML methods, a systematic failure analysis that does not require deep ML knowledge is crucial for efficient quality management. Focusing on these two aspects, we develop an updated holistic solution by expanding and detailing our previously proposed approach to support production quality.

Christian Seiffer, Alexander Gerling, Ulf Schreier, Holger Ziekow
An Investigation of Problem Instance Difficulty for Case-Based Reasoning and Heuristic Search

For managing the ever increasing variability of hardware/software interfaces (HSIs), e.g., in automotive systems, there is a need for the reuse of already existing HSIs. This reuse should be automated, and we (meta-)modeled the HSI domain for design space exploration. These models, together with additionally defined transformation rules that lead from a model of one specific HSI to another, facilitate automatic adaptations of HSI instances in these models and, hence, both case-based reasoning (CBR) and (heuristic) search. When using these approaches to solve concrete problem instances, estimating their difficulty really matters, but there is not much theory available. This work compares different approaches to estimating problem instance difficulty (similarity metrics, heuristic functions). It also shows that even measuring problem instance difficulty depends on the ground truth available and used. In order to avoid finding only domain-specific insights, we also employed sliding-tile puzzles for our experiments. The experimental results in both domains show how the different approaches statistically correlate. Overall, this paper investigates problem instance difficulty for CBR and heuristic search. This investigation led to the insight that admissible functions guiding heuristic search may also be used for retrieving cases for CBR.
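
The closing insight, reusing an admissible search heuristic as a case-retrieval measure, can be sketched in the sliding-tile domain mentioned above (a minimal illustration; the HSI domain, metrics and case structure in the paper are richer):

```python
def manhattan(state, goal, width=3):
    """Admissible sliding-tile heuristic: summed tile distances to their
    positions in `goal` (the blank, encoded as 0, is ignored)."""
    goal_pos = {tile: divmod(i, width) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = divmod(i, width)
        gr, gc = goal_pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

def retrieve_case(case_base, problem):
    """CBR retrieval that reuses the heuristic as a distance: pick the stored
    case whose start state the heuristic judges closest to the new problem."""
    return min(case_base, key=lambda case: manhattan(problem, case[0]))

# Hypothetical 8-puzzle case base of (start state, stored solution) pairs.
cases = [((1, 2, 3, 4, 5, 6, 7, 0, 8), "move blank right"),
         ((1, 2, 3, 4, 0, 6, 7, 5, 8), "move 5 up, blank right twice")]
print(retrieve_case(cases, (1, 2, 3, 4, 5, 6, 0, 7, 8)))
```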

Hermann Kaindl, Ralph Hoch, Roman Popp, Thomas Rathfux, Franz Lukasch
Opti-Soft: Decision Guidance on Software Release Scheduling to Minimize the Cost of Business Processes

Many approaches have been developed to increase the return on a software investment, but each one has drawbacks due to subjectivity or imprecision. This paper proposes a novel approach that addresses this problem for a particular class of information systems that improve business processes. The approach, called Opti-Soft, models a software development project as a mixed integer linear programming problem, where the objective function maximizes the return on investment and the corresponding decision variables produce an optimal release schedule. The uniqueness of Opti-Soft is in accurately modeling the business process and its improvement due to new software features, which leads to cost reduction. The approach includes a formal model, a methodology and a decision-guidance system.

Fernando Boccanera, Alexander Brodsky
Fast and Efficient Parallel Execution of SARIMA Prediction Model

Mathematical models for predicting values in time series are powerful tools for the process of knowledge discovery and decision making in several areas. However, the choice of the predictive model and its configuration are not trivial tasks, requiring a long processing time to obtain results due to the high complexity of the models and the uncertainty about the best parameter values. Calculations performed by these approaches use sampling from the dataset, which can present discrepancies and variations that can directly impact the final result. Therefore, this work presents a new approach based on the SARIMA model for the prediction of values in time series. The proposal performs the predictive calculation through multiple executions of SARIMA in parallel, configured with predefined order and seasonal-order parameters and applied to values already known in a time series. Thus, from the results obtained on past observations, it is possible to determine the precision that each parameter set achieved and, in this way, to determine the parameters that are more likely to yield accurate values for future observations, eliminating the need for specific algorithms to estimate them. The proposed approach achieves results with greater precision and performance compared to the traditional SARIMA execution, reaching up to 10.77% better accuracy and better processing times, without the need for the validation and parameter adjustments required by settings obtained from functions such as ACF and PACF.
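
The core of the proposal, fitting many predefined SARIMA configurations in parallel on known history and keeping the best-scoring one, might be sketched as follows (assuming statsmodels and a MAPE-style score; series.txt and the small grid are placeholders):

```python
from itertools import product
from multiprocessing import Pool

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_one(args):
    """Fit one SARIMA configuration on the known history and score its
    forecasts against a held-out tail of the series."""
    y_train, y_test, order, seasonal_order = args
    fitted = SARIMAX(y_train, order=order,
                     seasonal_order=seasonal_order).fit(disp=False)
    forecast = fitted.forecast(steps=len(y_test))
    mape = float(np.mean(np.abs((y_test - forecast) / y_test)))
    return mape, order, seasonal_order

if __name__ == "__main__":
    y = np.loadtxt("series.txt")           # placeholder time series
    y_train, y_test = y[:-12], y[-12:]     # last season held out
    grid = [(y_train, y_test, (p, d, q), (P, D, Q, 12))
            for p, d, q, P, D, Q in product(range(2), repeat=6)]
    with Pool() as pool:                   # all configurations in parallel
        scores = pool.map(fit_one, grid)
    best_mape, best_order, best_seasonal = min(scores)
```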

Tiago Batista da Silveira, Felipe Augusto Lara Soares, Henrique Cota de Freitas
An Approach to Intelligent Control Public Transportation System Using a Multi-agent System

Traffic congestion has increased globally during the last decade, representing an undoubted menace to the quality of urban life. A significant contribution can be made by the public transport system in reducing the problem intensity if it provides high-quality service. However, public transportation systems are highly complex because of the modes involved, the multitude of origins and destinations, and the amount and variety of traffic. They have to cope with dynamic environments where many complex and random phenomena appear and disturb the traffic network. To ensure good service quality, a control system should be used in order to maintain the public transport scheduled timetable. Service quality should be measured in terms of public transport key performance indicators (KPIs) for the wider urban transport system and its issues. In fact, in the absence of a set of widely accepted performance measures and transferable methodologies, it is very difficult for public transport to objectively assess the effects of a specific regulation system and to make use of lessons learned from other public transport systems. Moreover, vehicle traffic control tasks are distributed geographically and functionally, and disturbances might influence many itineraries and occur simultaneously. Unfortunately, most existing traffic control systems consider only a part of the performance criteria and propose a solution without managing its influence on neighboring areas of the network. This paper sets the context of performance measurement in the field of public traffic management and presents the regulation support system of public transportation (RSSPT). The aim of this regulation support system is (i) to detect traffic perturbation by distinguishing a critical performance variation of the current traffic, and (ii) to find the regulation action by optimizing the performance of the service quality of the public transportation. We adopt a multi-agent approach to model the system, as its distributed nature allows managing several disturbances concurrently. The validation of our model is based on the data of an entire journey of the New York City transport system in which two perturbation scenarios occur. This network has the nation's largest bus fleet and more subway and commuter rail cars than all other U.S. transit systems combined. The obtained results show the efficiency of our system, especially when many performance indicators are needed to regulate a disturbance situation. They also demonstrate the advantage of the multi-agent approach and show how the agents of different neighboring zones on which the disturbance has an impact coordinate, adapt their plans, and solve the issue.

Nabil Morri, Sameh Hadouaj, Lamjed Ben Said
Comparative Evaluation of the Supervised Machine Learning Classification Methods and the Concept Drift Detection Methods in the Financial Business Problems

Machine Learning methods are key tools for aiding in the decision making of financial business problems, such as risk analysis, fraud detection, and credit-granting evaluations, reducing time and effort and increasing accuracy. Supervised machine learning classification methods learn patterns in data to improve prediction. In the long term, the data patterns may change in a process known as concept drift, with the changes requiring retraining of the classification methods to maintain their accuracy. We conducted a comparative study using twelve classification methods and seven concept drift detection methods. The evaluated methods are Gaussian and Incremental Naïve Bayes, Logistic Regression, Support Vector Classifier, k-Nearest Neighbors, Decision Tree, Random Forest, Gradient Boosting, XGBoost, Multilayer Perceptron, Stochastic Gradient Descent, and Hoeffding Tree. The analyzed concept drift detection methods are ADWIN, DDM, EDDM, HDDMa, HDDMw, KSWIN, and Page-Hinkley. We used the next-generation hyperparameter optimization framework Optuna and applied the non-parametric Friedman test to infer hypotheses and the Nemenyi post-hoc test to validate the results. We used five datasets in the financial domain. With F1 and AUROC scores as performance metrics for classification, XGBoost outperformed the other methods in the classification experiments. In the data stream experiments with concept drift, using accuracy as the performance metric, Hoeffding Tree and XGBoost showed the best results with the HDDMw, KSWIN, and ADWIN concept drift detection methods. We conclude that XGBoost with HDDMw is the recommended combination for financial datasets that exhibit concept drift.
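
For concreteness, below is a minimal, self-contained version of one of the seven detectors compared, the Page-Hinkley test (an editorial sketch; libraries such as scikit-multiflow and river ship production implementations of all seven):

```python
class PageHinkley:
    """Minimal Page-Hinkley drift detector, one of the seven methods the
    paper compares. It signals drift when the cumulative deviation of a
    monitored stream (e.g. the model's error rate per batch) from its
    running mean exceeds the threshold lambda_."""

    def __init__(self, delta=0.005, lambda_=50.0):
        self.delta, self.lambda_ = delta, lambda_
        self.n, self.mean, self.cum, self.min_cum = 0, 0.0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n      # incremental mean
        self.cum += x - self.mean - self.delta     # cumulative deviation
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.lambda_  # True => drift

detector = PageHinkley(lambda_=5.0)
# for err in per_batch_error_rates:     # placeholder stream
#     if detector.update(err):
#         pass                          # retrain the classifier here
```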

Victor Ulisses Pugliese, Renato Duarte Costa, Celso Massaki Hirata
Sammon Mapping-Based Gradient Boosted Trees for Tax Crime Prediction in the City of São Paulo

With the currently vast volume of data available, several institutions, including the public sector, benefit from information, aiming to improve decision-making. Machine Learning enhances data-driven decision-making with its predictive power. In this work, our principal motivation was to apply Machine Learning to improve fiscal audit planning for São Paulo's municipality. In this study, we predicted crimes against the service tax system of São Paulo using Machine Learning. Our methodology comprised the following steps: data extraction; data preparation; dimensionality reduction; model training and testing; model evaluation; model selection. Our experimental findings revealed that Sammon Mapping (SM) combined with Gradient Boosted Trees (GBT) outperformed other state-of-the-art works, classifiers, and dimensionality reduction techniques in classification performance. We believe that the ensemble of classifiers in GBT, combined with SM's ability to identify relevant dimensions in data, contributed to producing higher prediction scores. These scores enable São Paulo's tax administration to rank fiscal audits according to the highest probabilities of tax crime occurrence, leveraging tax revenue.

André Ippolito, Augusto Cezar Garcia Lozano
Extraction of Speech Features and Alignment to Detect Early Dyslexia Evidences

Specific reading disorders are conditions caused by neurological dysfunctions that affect the linguistic processing of printed text. Many people go untreated due to the lack of specific tools and the high cost of proprietary software; however, new audio signal processing technologies can help identify genetic pathologies. The methodology developed by medical specialists extracts characteristics from the reading of a text aloud and returns evidence of dyslexia. This work improves on the research presented in [25], extracting new features and serving as an efficient tool for indicating dyslexia. The analysis is done on recordings of school-age children reading pre-defined texts. Direct and indirect characteristics of the audio signal are extracted: the direct ones are obtained through the separation of pauses and syllables, while the indirect ones are extracted through the alignment of audio signals, a Hidden Markov Model, and some improvement heuristics. The indication of the probability of dyslexia is performed using a machine learning algorithm. The tests were compared with the specialists' classification, obtaining high accuracy on the evidence of dyslexia. The difference between the values of the characteristics collected automatically and manually was below 20% for most features. Finally, the results show a promising research area for audio signal processing concerning the aid to specialists in decision making related to language pathologies.
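
Energy-based silence detection is one standard way to obtain such "direct" pause features; the sketch below illustrates the idea (the paper's exact pause and syllable segmentation may differ, and the thresholds here are arbitrary):

```python
import numpy as np

def detect_pauses(signal, sr, frame_ms=25, rel_threshold=0.1, min_pause_ms=200):
    """Return (start_s, end_s) pauses: runs of low-energy frames at least
    min_pause_ms long, with 'low' defined relative to the peak energy."""
    frame = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame
    energy = np.array([np.mean(signal[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n_frames)])
    silent = energy < rel_threshold * energy.max()
    pauses, start = [], None
    for i, is_silent in enumerate(silent):
        if is_silent and start is None:
            start = i                               # a pause begins
        elif not is_silent and start is not None:
            if (i - start) * frame_ms >= min_pause_ms:
                pauses.append((start * frame / sr, i * frame / sr))
            start = None                            # the pause ended
    return pauses
```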

Fernanda M. Ribeiro, Alvaro R. Pereira Jr., Débora M. Barroso Paiva, Luciana M. Alves, Andrea G. Campos Bianchi

Information Systems Analysis and Specification

Frontmatter
An Extended Secondary Study to Characterize the Influence of Developers Sentiments on Practices and Artifacts in Open Source Software Projects

Context: Sentiment Analysis applies computational techniques for both automated and semi-automated identification of human behavior. There is a trend towards using such techniques in Sentiment Analysis tasks in the Software Engineering context. Objective: Characterize the influence of developers' sentiments on software practices and artifacts in open source software projects. Methods: We conducted a Systematic Literature Review (SLR) to identify references in the literature related to the influence of developers' sentiments on software practices and artifacts. Results: Evidence showed an increasing number of studies in this theme, shedding light on issues related to the influence of developers' sentiments on software practices. Practices focusing on developers' productivity and collaboration, as well as source code, are the most vulnerable to sentiment variation. Conclusions: Based on the results provided in this SLR, we intend to present an updated and comprehensive overview regarding how the sentiments of developers can positively or negatively impact software practices and artifacts.

Rui Santos Carigé Júnior, Glauco de Figueiredo Carneiro
Improving Quality of Use-Case Models by Correlating Defects, Difficulties, and Modeling Strategies

Use case (UC) models play an essential role in software specification since they describe system functional requirements. A UC model should be free of defects due to its relevance and impact throughout the software development life cycle. However, inspections of UC models frequently identify defects related to modelers' difficulties in different activities during the modeling process. The quality of a UC model is usually analyzed based on quality criteria such as ambiguity and inconsistency. Several strategies in the literature assist use case modeling in mitigating defects, but these strategies do not identify which potential defects they aim to prevent or eliminate. In this context, we proposed a correlation between UC modeling difficulties and strategies to mitigate these difficulties, based on UC quality criteria. In this paper, we describe each strategy contained in the correlation and present, in detail, the controlled experiment that assesses the correlation's effectiveness, including the discrete data collected in analyzing the participants' models and the statistical analysis performed on these data. In addition, we propose a mechanism to guide the elaboration of checklists to identify defects in UC models focusing on quality criteria. Through a controlled experiment, we evaluated the Antipattern strategy, and the results showed a clear indication that this strategy mitigates the difficulties to which it is related according to the correlation. Moreover, the UC models developed in the experiment were evaluated using the checklist generated based on the proposed mechanism.

Cristiana Pereira Bispo, Ana Patrícia Magalhães, Sergio Fernandes, Ivan Machado
iVolunteer - A Platform for Digitization and Exploitation of Lifelong Volunteer Engagement

Volunteering is an important cornerstone of our society, from social welfare to disaster relief, supported by a variety of volunteer management systems (VMS). These VMS focus primarily on centralized task management within non-profit organizations (NPOs) but generally do not provide mechanisms that allow volunteers to privately digitize and exploit their engagement assets, in terms of digital badges for activities accomplished or competences acquired, in a trustful manner. This lack of sovereignty hampers volunteers in the exploitation of their engagement assets with respect to self-exploration, but also with regard to possible transfers of assets to other NPOs and beyond, e.g., to the education or labor market. We put volunteers at the center of concern by investigating “how engagement can be digitized and exploited in a lifelong way”, thus adhering to the idea of human-centric personal data management. First, we propose a conceptual architecture for a web-based volunteer platform based on a systematic identification of the requirements for trustworthy digitization and exploitation of engagement assets. Second, to address the massive heterogeneity that prevails in different areas of volunteering, a generic and extensible engagement asset model is proposed. Third, a reification-based configuration mechanism is proposed so that each NPO can adapt the proposed model to its specific needs. Finally, a prototypical web application is presented which allows to »blockchainify« lifelong volunteer engagement in order to establish trust between all stakeholders.

Elisabeth Kapsammer, Birgit Pröll, Werner Retschitzegger, Wieland Schwinger, Markus Weißenbek, Johannes Schönböck, Josef Altmann, Marianne Pührerfellner
An Investigation of Currently Used Aspects in Model Transformation Development

In model-driven development, a transformation chain is responsible for the conversion of high-abstraction-level models into other models until code generation. It comprises a set of transformation programs that automates a software development process. The development of a transformation chain is not a trivial task, as it involves metamodeling, specific languages, knowledge of specific tools, and other issues that are not common in traditional software project development. Therefore, the adoption of software engineering facilities such as development processes, modeling languages, and patterns, among others, has been proposed to assist in transformation development. This paper discusses facilities currently used to develop model transformations, based on the results of a systematic literature review. To better organize the aspects found, we structure the proposals according to a classification of software engineering development technologies in order to help developers and researchers in searching for solutions and challenges.

Ana Patrícia Magalhães, Rita Suzana P. Maciel, Aline Maria S. Andrade
Software Evolution and Maintenance Using an Agile and MDD Hybrid Processes

The growing usage of software in modern society has motivated several new development process proposals aiming to increase productivity, reduce software delivery time, and improve the final product quality. In this context, while some software development processes emphasize source code production, such as agile processes, others focus on modeling activities, such as Model-Driven Development (MDD). Hybrid processes, which integrate different approaches into a single one, aim to mitigate weaknesses and at the same time take advantage of the involved approaches' strengths. This paper discusses the benefits of hybridizing the Agile and MDD approaches concerning evolution and maintenance aspects through a controlled experiment. The influence of modeling tasks on development agility, as well as the consequences of model specification for product quality, are some of the issues discussed. The experiment results showed that the adopted hybrid agile MDD process led participants to produce a more correct and complete software release than the participants who developed the same release directly in code. Thus, we have important indications that the hybrid process can assist developers during software evolution. We hope that by sharing our results with the community, new replications can be performed towards new findings and generalizations.

Elton Figueiredo da Silva, Ana Patrícia F. Magalhães, Rita Suzana Pitangueira Maciel
DSL Based Approach for Building Model-Driven Questionnaires

Surveys are pervasive in the modern world, with usage ranging from customer satisfaction measurement to global economic trend tracking. Data collection is at the core of survey processes and, usually, is computer-aided. The development of data collection software involves the codification of questionnaires, which vary from simple, straightforward questions to complex questionnaires in which validations, derived data calculus, triggers used to guarantee consistency, and dynamically created objects of interest are the rule. Questionnaire specification is part of what is called survey metadata and is a key factor for collected data and survey quality. Survey metadata establishes most of the requirements for survey support systems, including data collection software. This article proposes a Domain Specific Language (DSL) for modeling questionnaires, presents a prototype, and evaluates DSL use as a strategy to reduce the gap between survey domain experts and software developers, improve reuse, eliminate redundancy, and minimize rework.
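
As a flavour of what a questionnaire DSL has to express, namely validation, consistency rules and skip logic, here is a minimal embedded-DSL sketch in Python (the paper proposes a dedicated external DSL; the question codes and rules are hypothetical):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Question:
    code: str
    text: str
    # Validation and skip logic: the features that make survey
    # questionnaires more than flat lists of questions.
    validate: Callable[[dict], bool] = lambda answers: True
    ask_if: Callable[[dict], bool] = lambda answers: True

# Hypothetical household-survey fragment expressed in the embedded DSL.
questionnaire = [
    Question("AGE", "How old are you?",
             validate=lambda a: 0 <= a["AGE"] <= 120),
    Question("OCCUP", "What is your occupation?",
             ask_if=lambda a: a["AGE"] >= 14),               # skip for children
    Question("HOURS", "Hours worked last week?",
             ask_if=lambda a: a.get("OCCUP") is not None,
             validate=lambda a: 0 <= a["HOURS"] <= 7 * 24),  # consistency rule
]
```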

Luciane Calixto de Araujo, Marco A. Casanova, Luiz André P. P. Leme, Antônio L. Furtado
Modeling of Robot Interaction in Coalition Through Smart Space and Blockchain: Precision Agriculture Scenario

Modelling the interaction of intelligent agents is one of the urgent topics in verifying interaction models for joint task solving. It includes platform selection or new platform development to provide functions for coalition formation, task decomposition and distribution, winnings sharing, and implementation of the proposed techniques and models. This work focuses on modeling and visualizing the interaction of intelligent robots using the open software Gazebo and the Robot Operating System, with information exchange between coalition members through distributed ledger technology and smart contracts on the Hyperledger Fabric platform. To adjust robot actions, ontologies of the robot and of the context are used. The context ontology combines environmental characteristics with robot and task descriptions to provide the full context of a situation. The robot ontology provides a description of the main robot functions and characteristics. The architecture of the modeling environment is described as the result of an overview of existing solutions and the task requirements, together with an example of modeling and visualization based on a precision agriculture scenario.

Alexander Smirnov, Nikolay Teslya
A Capability Based Method for Development of Resilient Digital Services

Capability Driven Development (CDD) is a capability-based method for developing context-aware and adaptive systems. This paper proposes to extend CDD to address security and resilience concerns in organizational networks. A method extension defining modeling concepts and development procedure is elaborated. It includes development of a data-driven digital twin, which represents the security and resilience concerns of the network and is used to diagnose security incidents and to formulate a resilient response to these incidents. Application of the proposed method extension is illustrated using examples of secure computer network governance and secure supplier onboarding.

Jānis Grabis, Janis Stirna, Jelena Zdravkovic
Adopting Agile Software Development Combined with User-Centered Design and Lean Startup: A Systematic Literature Review on Maturity Models

The use of Agile in the software development industry in the past two decades revealed that it is lackluster in some aspects, such as in guaranteeing user involvement and assuring that the right software is being built. There are reports that combining Agile with Lean Startup and User-Centered Design (UCD) helps in overcoming these shortcomings while also yielding several other benefits. However, there is not much documentation on how to use this “combined approach”, and adapting existing organizations to use it is a challenge in and of itself, in which the use of an instrument to guide or assess such transformations is typically pivotal to success. As such, in this paper we seek to identify maturity models that assess the use of Agile, Lean Startup, and UCD. We conducted a systematic literature review of maturity models for these three methods published between 2001 and 2020. We characterized the maturity models and determined how they see maturity, how they are applied, and how they were evaluated. As an extended version of a previous paper, we augmented our analysis criteria and further classified the models by how they interpret maturity and what strategy they suggest for an improvement process, in addition to providing new insight on various aspects of the models. We found 35 maturity models, of which 23 were for Agile, 5 for Lean thinking, 5 for UCD, and 2 for Agile and UCD combined. No models for the combination of the three methods were found (nor for Lean Startup), as expected due to the novelty of the approach. Existing models mostly focus on practice adoption and acquiring continuous improvement capabilities, and are typically developed with a specific context in mind. We also note a lack of proper evaluation procedures for the majority of models, which could be due to the lack of well-established maturity model development methods and guidelines.

Maximilian Zorzetti, Cassiano Moralles, Larissa Salerno, Eliana Pereira, Sabrina Marczak, Ricardo Bastos
Evaluating a LSTM Neural Network and a Word2vec Model in the Classification of Self-admitted Technical Debts and Their Types in Code Comments

Context: Software development teams constantly opt for faster, lower-quality solutions to solve current problems without planning for the future. This situation has a negative long-term impact and is called technical debt. Similar to financial debt, technical debt requires interest payments and must be managed and detected so that the team can evaluate the best way to deal with it. One way to detect technical debt is through the classification of source code comments. Developers often insert comments warning of the need to improve their own code in the future. This is known as Self-Admitted Technical Debt (SATD). Objective: Combine Word2vec for word embedding with a Long Short-Term Memory (LSTM) neural network model to identify SATD from comments in source code, and compare it with other studies and with an LSTM without word embedding. Method: We planned and executed an experimental process with validation of the model's effectiveness. Results: In general, classification improves when all SATD types are grouped under a single label. In relation to other studies, the LSTM model with Word2vec achieved better recall and F-measure. The LSTM model without word embedding achieves greater recall, but performs worse in precision and F-measure. Conclusion: We found evidence that LSTM models combined with word embedding are promising for the development of a more effective SATD classifier.
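
A compact sketch of the Word2vec plus LSTM combination (assuming gensim >= 4 and TensorFlow/Keras; the tiny corpus, sequence length and layer sizes are placeholders, not the paper's configuration):

```python
import numpy as np
import tensorflow as tf
from gensim.models import Word2Vec

# Placeholder corpus: tokenized code comments, 1 = SATD, 0 = not SATD.
comments = [["todo", "fix", "this", "ugly", "hack"],
            ["updates", "the", "internal", "cache"]]
labels = np.array([1, 0])

w2v = Word2Vec(comments, vector_size=100, window=5, min_count=1)
vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}  # 0 = padding
emb = np.zeros((len(vocab) + 1, 100))
for word, idx in vocab.items():
    emb[idx] = w2v.wv[word]            # pre-trained vectors into the layer

x = tf.keras.preprocessing.sequence.pad_sequences(
    [[vocab[w] for w in c] for c in comments], maxlen=50)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab) + 1, 100,
        embeddings_initializer=tf.keras.initializers.Constant(emb)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # SATD probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Recall()])
# model.fit(x, labels, epochs=5)
```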

Rafael Meneses Santos, Israel Meneses Santos, Methanias Colaço Júnior, Manoel Mendonça

Software Agents and Internet Computing

Frontmatter
From Simple to Structural Clones: Tapping the Benefits of Non-redundancy

Similarities are common in software. We observe them at all levels, from software requirements, to design, to program code, and to other software artifacts such as models or test cases. Similar program parts are termed code clones. Detection of clones in code, as well as methods to achieve non-redundancy in programs, have been an active area of research for the last decades. In this chapter, I discuss sources of redundancies, the variety of forms of their manifestation, the software productivity benefits that can be gained by avoiding clones, and the difficulties of realizing these benefits with conventional programming languages and design techniques. I point to generative techniques as a promising approach to tap the benefits of non-redundancy. Research so far has mainly focused on the detection of similar code fragments, so-called simple clones. The knowledge and possible unification of simple clones can help in maintenance. Still, further gains can be obtained by elevating the level of software similarity analysis to the design level: larger-granularity similar program structures, such as recurring architecture-level patterns of collaborating components. Recurring patterns of simple clones often indicate the presence of interesting higher-level similarities that we call structural clones. Detection of structural clones can help in understanding the design of a system for quality assessment, better maintenance and re-engineering, opening new options for design recovery (reverse engineering), which has been an active area of research for the last 25 years, with only limited impact on software practice. In this paper, I broadly discuss the landscape of clone research, with particular emphasis on structural clones. (This study was supported by grant WZ/WI-IIT/3/2020 from Bialystok University of Technology and funded from the resources for research of the Ministry of Science and Higher Education, Poland.)

Stan Jarzabek
SACIP: An Agent-Based Constructionist Adaptive System for Programming Beginners

Brazilian universities have a high dropout rate in Computing courses. We believe personalized e-learning solutions can help to reduce this problem. This paper presents an architectural model for an adaptive system called SACIP that uses learning paths to deliver personalized assistance to students learning to program. Constructivist and constructionist theories were used as guidelines for the system modeling, and a collaborative multi-agent system was developed to assist students in their choice of paths. Details of the SACIP implementation on different platforms are described, as well as its benefits and advantages over similar adaptive systems that use learning paths in a distinct manner. The application of SACIP with programming beginners aims to facilitate learning, allow curricular flexibility, and help reduce dropout rates in Computing courses.

Adson M. da S. Esteves, Aluizio Haendchen Filho, André Raabe, Rudimar L. S. Dazzi
An Adaptive and Proactive Interface Agent for Interactivity and Decision-Making Improvement in a Collaborative Virtual Learning Environment

According to a report published by the Brazilian Association of Maintainers of Higher Education, the number of places offered in distance learning courses (aka distance education) in 2018 exceeded the places offered in classroom courses. In the same year, there was an increase of 20.6% in enrollments in distance education, with a forecast that it will exceed classroom attendance by 2023. The same report showed concerns about the high dropout rates and low graduation rates in distance learning. Research indicates that possible reasons for this are students' feelings of loneliness, isolation, and demotivation. This paper presents an Interface Agent, in the context of collaborative agents, whose main responsibility is to make the Virtual Learning Environment (VLE) more interactive, proactive, and able to adapt to the student's profile. This research is expected to contribute to making VLEs more attractive, improving the teaching process, motivating students and, consequently, reducing dropout rates.

Aluizio Haendchen Filho, Karize Viecelli, Hercules Antonio do Prado, Edilson Ferneda, Jeferson Thalheimer, Anita Maria da Rocha Fernandes

Human-Computer Interaction

Frontmatter
Classification and Synthesis of Emotion in Sign Languages Using Neutral Expression Deviation Factor and 4D Trajectories

3D avatars are an efficient solution to complement the representation of sign languages in computational environments. A challenge, however, is to generate facial expressions realistically without high computational cost in the synthesis process. This work synthesizes facial expressions automatically, with precise control through spatio-temporal parameters. With parameters compatible with gesture synthesis models for 3D avatars, complex expressions and interpolations of emotions can be built through the presented model. The method uses independent regions that allow the optimization of the animation synthesis process, reducing the computational cost and allowing independent control of the main facial regions. This work contributes a definition of non-manual markers for 3D avatar facial expression and its synthesis process. Also, a dataset with the base expressions was built, containing the 4D information of the geometric control points of the avatar constructed for the experiments presented. The generated outputs are validated in comparison with other expression classification approaches using spatio-temporal data and machine learning, presenting superior accuracy for the base expressions. This result is reinforced by evaluations conducted with the deaf community, showing a positive acceptance of the synthesized facial expressions and emotions.
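
One plausible reading of the neutral-deviation idea, sketched with NumPy (an editorial illustration; the paper's exact factor definition, control-point layout and facial regions may differ):

```python
import numpy as np

def deviation_factor(expression, neutral, regions):
    """Per-region mean displacement of the avatar's control points from the
    neutral pose, normalized to [0, 1] by the largest observed deviation."""
    diff = np.linalg.norm(expression - neutral, axis=1)  # per control point
    factors = {name: float(diff[list(idx)].mean())
               for name, idx in regions.items()}
    peak = max(factors.values()) or 1.0
    return {name: value / peak for name, value in factors.items()}

# Hypothetical frame of 4D data: 30 control points in 3D at one instant.
neutral = np.zeros((30, 3))
frame = neutral + np.random.normal(scale=0.02, size=(30, 3))
regions = {"brows": range(0, 10), "eyes": range(10, 20), "mouth": range(20, 30)}
print(deviation_factor(frame, neutral, regions))
```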

Diego Addan Gonçalves, Maria Cecília Calani Baranauskas, Eduardo Todt
How Communication Breakdowns Affect Emotional Responses of End-Users on Progressive Web Apps

Progressive Web App (PWA) is a recent approach comprising a set of techniques from both web and native apps. End-User Development (EUD) is an approach through which end-users are allowed to express themselves. The impact of associating EUD and PWAs has been little explored. With this in mind, we proposed the PWA-EU approach in previous work. In this paper, we compare communication breakdowns and users' emotional responses with the aim of finding out whether the two aspects are correlated. We conducted a study with 18 participants who interacted with Calendar, a mobile app based on the PWA-EU approach. We carried out a qualitative analysis of the communication breakdowns and emotional responses in the participants' interaction. Our findings point out that common issues occurred and affected both the emotional responses and the breakdowns of the participants. Still, how these common issues affected the participants differed between individuals.

Giulia de Andrade Cardieri, Luciana A. M. Zaina
Visualizing Deliberation and Design Rationale: A Case Study in the OpenDesign Platform

The open phenomenon coming from the free-software movement has spread to several fields, including services and digital and physical products. Nevertheless, some authors point out the limited availability of supporting methods and online tools to face the challenges of the distributed collaboration of volunteers with diverse backgrounds and motivations. In this paper, we present the OpenDesign Platform and its potential to support distributed co-creation. A case study conducted with 22 participants attending a conference on Organizational Semiotics illustrates their use of the platform to clarify the tensions and ideas towards the conception of a community-driven solution to a given design challenge. The results of their participation through the platform, analyzed through graphical representations based on concepts of Actor-Network Theory, provided a visual representation of the network constituted by both the participants and the artifacts (boundary objects). These analyses, corroborated by the participants' perception of their use of the platform, have shown the effectiveness of the OpenDesign Platform in affording online deliberation and communicating elements of the design rationale between participants. The QUID tool, used for the network visualization, revealed its representational power as an instrument for visualization and analysis. Further studies include investigating how the integration of the visualization tool into the OpenDesign platform may increase awareness of others' contributions during the (open) design process.

Fabrício Matheus Gonçalves, Alysson Prado, Maria Cecília Calani Baranauskas
Tabletop Interface Design: Developing and Using Usability Heuristics

Tabletops enable the development of applications with high interactivity and simultaneous collaboration between users. Due to their size, these devices differ considerably from other touchscreen devices such as tablets and smartphones. Therefore, it is necessary to understand these differences in order to define appropriate heuristics for the evaluation of tabletop interface designs. This paper presents a set of heuristics to be considered in tabletop interface design and a checklist, composed of a set of questions for each proposed heuristic, that aims to assist in the evaluation of tabletop interfaces. Based on the literature, we analyzed different proposals of heuristics for specific applications to identify the main concepts involved in defining a heuristic and how to write heuristics for specific applications. We also researched the particular features of tabletop applications that need to be considered in interaction design. From these characteristics, Nielsen's ten heuristics were adapted for use with tabletop devices and two new heuristics were defined. A case study was carried out in the SIS-ASTROS project: the proposed heuristics were considered in the design of a virtual tactical simulator, and an evaluation of the interface by the military was performed. Observing the military personnel using the simulator, we gathered evidence that these heuristics can help designers think about essential interface characteristics that support users in realizing and understanding the interface goal and how to interact with it.

Vinícius Diehl de Franceschi, Lisandra Manzoni Fontoura, Marcos Alexandre Rose Silva
User Stories and the Socially-Aware Design Towards the OpenDesign Platform

Successful design projects require the involvement of stakeholders to adequately capture their needs, desires and objectives. However, the literature lacks studies and open software platforms to support this task and help solve wide-ranging design problems. In this paper, we explore user stories articulated with Socially-aware Design artifacts for the ideation and construction of the OpenDesign platform. We present how the platform was conceptualized and how its key features were obtained. Our study explored participatory practices in the elaboration of user stories and involved stakeholders to achieve problem clarification from a Socially-aware Design perspective. Our results illustrate how the developed platform supports open design processes.

Julio Cesar dos Reis, Andressa Cristina dos Santos, Emanuel Felipe Duarte, Fabrício Matheus Gonçalves, Breno Bernard Nicolau de França, Rodrigo Bonacin, M. Cecilia C. Baranauskas
Electronic Coupon Management System for Regional Vitalization

We propose an electronic coupon system to support regional vitalization. This system is designed for a number of events held in Japan, where local shopping districts offer special menus so that customers rediscover the shops and their attractions. The system has a web application for customers that incorporates a mechanism for interaction between customers and shop clerks, and a function for event organizers to visualize the success of the event. The proposed system was used in an actual event and its effectiveness was confirmed using a user survey.

Hiroki Satoh, Toshiomi Moriki, Yuuichi Kurosawa, Tasuku Soga, Norihisa Komoda
A Modern Approach to Personalize Exergames for the Physical Rehabilitation of Children Suffering from Lumbar Spine

Physical rehabilitation of people with injuries or illnesses related to the lumbar spine involves intensive treatment to reduce pain or improve mobility. Research studies have evidenced the benefits of complementing the patient's regular treatment with exercise routines at home. However, in the case of children and adolescents, there is a risk of abandoning the exercise routine if it is not motivating enough. Currently, there is a trend of using games for rehabilitation exercises, called exergames, as a possible solution for motivating patients while they perform physical rehabilitation. However, both customizing and creating them is still a task that requires considerable investment in time and effort. Thus, this paper presents a language, along with a system, for the physical rehabilitation of children suffering from some sort of lower back pain, which enables the customization and automatic generation of exergames. We have conducted an experiment with children to evaluate the capabilities of our approach. The obtained results show that the tool is fun, interesting and easy to use.

Cristian Gómez-Portes, Carmen Lacave, Ana I. Molina, David Vallejo, Santiago Sánchez-Sobrino

Enterprise Architecture

Frontmatter
Digital Innovation and Transformation to Business Ecosystems

Digital technologies have been penetrating every aspect of business. Such pervasive deployment of digital technologies enables organisations to reinvent themselves in defining and conducting business. Leveraging the value of information as the key resource, through digital innovation such as digitisation and servitisation of products and services, becomes necessary for a business to survive and to remain competitive. Business ecosystems of organisations and their partners have been formed through digital connectivity and digital platforms, which offer clear strategic advantages and competitiveness. Interconnectivities between entities in the ecosystem are realised through multiple flows such as goods, finance, and information. For a business organisation to gain competitive advantages, a successful transformation of the organisation in many dimensions is essential to allow value co-creation with other members of the ecosystem. These dimensions include mindset, culture, values, leadership, structure, process and IT systems. This keynote will discuss the notions of digital innovation and transformation, and the prerequisites and readiness for the transformation towards business ecosystems. By examining the current practice of successful examples, key findings will lead to principles and models of organisational transformation for value co-creation and optimising benefits in business ecosystems. This keynote will hopefully inspire practitioners and researchers to benefit from existing theoretical lenses and methods and to derive their own guidelines and models to support organisations in digital transformation.

Kecheng Liu, Hua Guo
Reference Architecture for Project, Program and Portfolio Governance

This paper presents a reference architecture for project, program and portfolio (PPP) governance models. Usually, organizations have projects organized into programs that in turn are organized into portfolios. Within organizations it is necessary to establish project, program and portfolio management policies, so that all the expected benefits of projects are achieved and the previously imposed constraints (time, costs, etc.) are met. It is therefore also important for organizations to have a governance model for projects, programs and portfolios. A governance model entails how decisions are made and how actions are carried out, considering organizational values. The required roles, responsibilities, and performance criteria should be an integral part of the governance model for projects, programs, and portfolios. However, given the permeability of projects, programs, and portfolios to several factors (e.g., regulatory factors and project management maturity), several roles can be considered in governance models, thus promoting the existence of different project, program and portfolio governance models. This research proposes a reference architecture that allows the comparison of governance models to verify deviations and to detect which model best suits the context of an organization.

Gonçalo Cordeiro, André Vasconcelos, Bruno Fragoso
Pre-modelled Flexibility for the Control-Flow of Business Processes: Requirements and Interaction with Users

In process-aware information systems (PAIS), it is sometimes necessary to deviate from the predefined process; otherwise, users are restricted too much. This paper presents an approach that allows predictable flexibility to be pre-modeled already at build-time. An advantage, compared to completely dynamic changes at run-time, is that the effort necessary for end users to trigger a deviation is reduced significantly. Furthermore, process safety is increased since, for instance, it can be predefined which users are allowed to perform which modifications. The corresponding requirements for the control-flow perspective are presented in this paper, with a special focus on the kind of information that shall be predefined at build-time. Examples from practice illustrate the necessity of the requirements. Furthermore, the interaction with users is explained in order to show that triggering a flexible deviation requires only little effort at run-time.

Thomas Bauer
On Enterprise Architecture Patterns: A Tool for Sustainable Transformation

Organizations across the world today face similar problems; thus, solutions to these problems could also be re-used across many organizations. Climate change is one such common problem, where any solution implemented by one organization could be re-used by many others. These re-usable solutions could be Enterprise Architecture Patterns, but there is a lack of guidance on how to use them in practice. This study proposes an extension to the commonly used TOGAF to better leverage these re-usable solutions, enabling organizations using this framework to enhance it with Enterprise Architecture Patterns. As part of the proposed methodology, re-usable patterns supporting sustainable characteristics are built. The resulting methodology is validated with an expert panel, gathering positive comments.

Roberto García-Escallón, Adina Aldea, Marten van Sinderen
Security Architecture Framework for Enterprises

Security is a complex issue for organisations, with its management now a fiduciary responsibility as well as a moral one. Without a holistic, robust security structure that considers human, organisational and technical aspects to manage security, the assets of an organisation are at critical risk. Enterprise architecture (EA) is a strong and reliable structure that has been tested and used effectively for at least 30 years in organisations globally. It relies on a holistic classification structure for organisational assets. Grouping security with EA promises to leverage the benefits of EA in the security domain. We conduct a review of existing security frameworks to evaluate the extent to which they employ EA. We find that while the idea of grouping security with EA is not new, there is a need for developing a comprehensive solution. We design, develop, and demonstrate a security EA framework for organisations regardless of their industry, budgetary constraints or size, and survey professionals to analyse the framework and provide feedback. The survey results support the need for a holistic security structure and indicate benefits including reduction of security gaps, improved security investment decisions, clear functional responsibilities, a complete security nomenclature, and international security standard compliance, among others.

Michelle Graham, Katrina Falkner, Claudia Szabo, Yuval Yarom
GDPR Compliance Tools: Best Practice from RegTech

Organisations can be complex entities, performing heterogeneous processing on large volumes of diverse personal data, potentially using outsourced partners or subsidiaries in distributed geographical locations and jurisdictions. Many organisations appoint a Data Protection Officer (DPO) to assist them with their demonstration of compliance with the GDPR Principle of Accountability. The challenge for the DPO is to monitor these complex processing activities and to advise and inform the organisation with regard to its demonstration of compliance with the Principle of Accountability. A review of GDPR compliance software solutions shows that organisations are being greatly challenged in meeting compliance obligations as set out under the GDPR, despite the myriad of software tools available to them. Many organisations continue to take a manual and informal approach to GDPR compliance. Our analysis shows significant gaps on the part of GDPR tools in their ability to demonstrate compliance, in that they lack interoperability features and are not supported by published methodologies or evidence to support their validity or even utility. In contrast, RegTech has brought great success to financial compliance, using technological solutions to facilitate compliance with, and the monitoring of, regulatory requirements. A review of the state of the art identified the four success features of a RegTech system to be strong data governance, automation through technology, interoperability of systems, and a proactive regulatory framework. This paper outlines a set of requirements for GDPR compliance tools based on the RegTech experience and evaluates how these success features could be applied to improve GDPR compliance. A proof-of-concept prototype GDPR compliance tool was explored using the four success factors of RegTech, in which RegTech best practice was applied to a regulator-based self-assessment checklist to establish whether the demonstration of GDPR compliance could be improved. The application of the RegTech success factors provides opportunities for demonstrable and validated GDPR compliance, in addition to the risk reductions and cost savings that RegTech can deliver, and can facilitate organisations in meeting their GDPR compliance obligations.

Paul Ryan, Martin Crane, Rob Brennan
Backmatter
Metadata
Title
Enterprise Information Systems
Edited by
Prof. Dr. Joaquim Filipe
Michał Śmiałek
Alexander Brodsky
Slimane Hammoudi
Copyright year
2021
Electronic ISBN
978-3-030-75418-1
Print ISBN
978-3-030-75417-4
DOI
https://doi.org/10.1007/978-3-030-75418-1