
2018 | Book | 1st Edition

Trends and Advances in Information Systems and Technologies

Volume 1

Edited by: Álvaro Rocha, Hojjat Adeli, Luís Paulo Reis, Sandra Costanzo

Publisher: Springer International Publishing

Book series: Advances in Intelligent Systems and Computing


About this book

This book includes a selection of papers from the 2018 World Conference on Information Systems and Technologies (WorldCIST'18), held in Naples, Italy, on March 27-29, 2018. WorldCIST is a global forum for researchers and practitioners to present and discuss recent results and innovations, current trends, professional experiences and the challenges of modern information systems and technologies research together with their technological development and applications.

The main topics covered are: A) Information and Knowledge Management; B) Organizational Models and Information Systems; C) Software and Systems Modeling; D) Software Systems, Architectures, Applications and Tools; E) Multimedia Systems and Applications; F) Computer Networks, Mobility and Pervasive Systems; G) Intelligent and Decision Support Systems; H) Big Data Analytics and Applications; I) Human–Computer Interaction; J) Ethics, Computers & Security; K) Health Informatics; L) Information Technologies in Education; M) Information Technologies in Radiocommunications; N) Technologies for Biomedical Applications.

Table of Contents

Frontmatter
Erratum to: Validity Issues of Digital Trace Data for Platform as a Service: A Network Science Perspective
Mehmet N. Aydin, Dzordana Kariniauskaite, N. Ziya Perdahci

Information and Knowledge Management

Frontmatter
A Comparison of Small Area Estimation and Kriging in Estimating Rainfall in Sri Lanka

Accurate prediction of rainfall is crucial for a country like Sri Lanka, as its economy is mainly based on agriculture. Many statistical techniques have been used in the past to estimate rainfall. The main purpose of this study is to evaluate and compare the accuracy of two techniques in estimating seasonal rainfall in Sri Lanka: Small Area Estimation and Kriging. The four rainfall seasons considered are the First Inter Monsoon, South West Monsoon, Second Inter Monsoon and North East Monsoon. Monthly rainfall data collected at 75 rain gauges during the year 2011 were used in this study. Data withheld from 14 stations were then used to compare the predictions of seasonal rainfall from both techniques. Root Mean Squared Error, the correlation coefficient and scatter plots of observed and fitted seasonal rainfall values were used to compare the two techniques. The comparison revealed that Kriging provided better predictions for rainfall in the First Inter Monsoon and South West Monsoon. Neither technique was particularly successful in estimating rainfall in the Second Inter Monsoon. Small Area Estimation yielded more accurate predictions for rainfall in the North East Monsoon.

Kalum Udagepola, Mithila Perera
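
The abstract above compares the two estimation techniques on withheld stations using RMSE and the correlation coefficient. A minimal sketch of that comparison step, assuming the observations and the two sets of predictions are already aligned as NumPy arrays (illustrative only, not the authors' code):

```python
import numpy as np

def rmse(observed: np.ndarray, predicted: np.ndarray) -> float:
    # Root Mean Squared Error between observed and fitted seasonal rainfall.
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

def compare_methods(observed, kriging_pred, sae_pred):
    # Return RMSE and correlation for Kriging and Small Area Estimation.
    return {
        "kriging": {"rmse": rmse(observed, kriging_pred),
                    "corr": float(np.corrcoef(observed, kriging_pred)[0, 1])},
        "small_area_estimation": {"rmse": rmse(observed, sae_pred),
                                  "corr": float(np.corrcoef(observed, sae_pred)[0, 1])},
    }
```

Lower RMSE and higher correlation at the 14 withheld stations indicate the better-performing technique for a given season.
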
Managing Telework from an Austrian Manager’s Perspective

Telework is a management technique proven to improve results and morale. The key to success in telework is an effective leader. This paper is designed to provide an understanding of telework in Austria, through face-to-face interviews, from the manager's perspective in the private sector. The manager determines which employees are permitted to telework, monitors progress, provides support and encouragement, and determines whether each teleworker is productive at work. Becoming effective at managing teleworkers means a change in management style and technique. The key issue for results to be most effective in the virtual workplace is that managers accomplish effective managerial communication. Included in this is the importance of feedback, with trust being seen as one of the most important managerial tasks at a distance. An effective style of management can lead to stunning improvements in productivity, profits and customer service.

Michal Beno
Soft Innovation in the Video Game Industry

This paper reviews the literature on soft innovation, with a focus on soft innovation in the video game industry. A systematic review of books, reports and papers, 39 works in all, forms the basis of this paper. Four streams were identified from the literature: management, design, market and network. Based on the findings, some key insights were drawn: a holistic view of hard and soft innovation is necessary, the product becomes valuable only through experience, market pressure increases the need for soft innovation, and networking increases knowledge, which generates intangible and tangible value.

Hrafnhildur Jónasdóttir
Does the Interactivity and Participation of the Audience in the Iberoamerican Cyber-Media Advance?

Interactivity, together with multimediality, is a defining element of cyber-media. The purpose of this investigation is to assess the progress of Web 2.0 in the cyber-media of Ibero-America. On the basis of the evaluation model proposed by Rodríguez-Martínez, Codina and Pedraza-Jiménez (2012), 60 websites were measured between June and July 2017. The hypothesis is that audience participation has been incorporated in a broad way and that, unlike in previous investigations, compliance with the parameters "Publication of contents created by users" and "Information access" can be appreciated. The methodology is complemented with 22 semi-structured interviews conducted with experts from Colombia, the USA, Ecuador, Mexico and Uruguay. It is concluded that the cyber-media do not provide conditions for dialogue. The participation of the audience is limited. It is necessary to integrate users through interactivity. The trend seen in previous studies persists.

Abel Suing, Kruzkaya Ordóñez, Lilia Carpio
Knowledge Sharing in Industrialization Project Management Practices

Facing today's complex and dynamic business environments, many organizations, especially those working in the new product development field, adopt a project-based approach to become more effective and flexible in managing change. Nevertheless, managing complex projects requires well identified, defined, yet customized practices. This becomes even more significant when the project managers are newcomer employees, so that they get rapidly acquainted with the company's specific practices. The case study research methodology was applied, using direct observation and document analysis, to identify the industrialization project management practices at a plant of a first-tier automotive industry supplier. Core processes of the Product Engineering Process and Project Management, the main tools used, and the activities performed by industrialization project managers were identified, systemized and compiled in a workbook, with the purpose of being a guide and a work tool for new industrialization project managers.

Ana Almeida, Anabela Tereso, João Faria, Teresa Ruão
Mexican Spanish Affective Dictionary

In the study of Affective Computing, the lexicon-based approach represents a useful mechanism that consists of using rated words to understand their affective role in a conversation. One of the most used lists of affectively rated words is the Affective Norms for English Words (ANEW), which evaluates the dimensions of pleasure, arousal and dominance for the English language. This list has been translated into other languages such as German or Spanish with effective results; however, there is no affective lexicon for Mexican Spanish rated by Mexicans. Based on the ANEW methodology, but using the most frequent words in the Mexican Spanish language, emoticon-like figures for the evaluation and an ad hoc app to collect the data, a list with means and standard deviations for Mexican Spanish words was obtained. Results and the main differences with the ANEW study are discussed here.

Adriana Peña Pérez Negrón, Luis Casillas, Graciela Lara, Mario Jiménez
Measuring Knowledge Management Project Performance

Knowledge management (KM) is recognized as contributing significantly to organizational performance. Researchers have worked hard on KM performance measurement at the enterprise level: they investigate KM performance determinants and provide numerous KM performance models. Meanwhile, little research has been undertaken to assess the performance of knowledge management projects. In fact, the growing number of KM projects confirms the need for a performance measurement model able to assess, each time a KM project is introduced, the performance of that project in order to rationalize its use, evaluate its effectiveness and justify its consequent financial costs. This research aims to fill the gap by proposing a generic model that carries out the performance measurement of KM projects. KM activities, KM success factors and KM outcomes are the main constructs of this model. For the operationalization of the model constructs, a literature review was conducted for KM outcomes and KM factors, whereas a particular emphasis was placed on the KM activity construct and a KM flow model was designed to this end. The measurement properties of the research model constructs are investigated using confirmatory factor analysis. This study formulates the foundations for the understanding of KM project performance constructs. It is believed to be a stepping stone for further in-depth theoretical and empirical studies.

Latifa Oufkir, Ismail Kassou
Feasibility and Development Analysis of P2P Online Lending Platforms in Taiwan

After the financial crisis, governments became stricter in regulating financial systems, leading banks to raise their lending thresholds. Unmet demand for loans, coupled with maturing technology, impelled the emergence of Peer-to-Peer (P2P) lending. The first online P2P lending platform, Zopa, was established in 2005. The Peer-to-Peer Finance Association (P2PFA) and the Financial Conduct Authority (FCA) together promote the steady growth of the UK P2P lending marketplace. Prosper and Lending Club are two major P2P lending platforms in the US, where, after the financial crisis, the Securities and Exchange Commission (SEC) strengthened regulations on P2P lending corporations. In this paper, we discuss the development and supervision of P2P lending in the UK and US, as well as the Taiwanese lending marketplace from environmental and legal perspectives. Further, we examine the impact on four interested parties - borrowers, investors, banks and P2P lending platforms - and whether the development of P2P lending is suitable for Taiwan. In conclusion, the development of P2P lending in Taiwan partly benefits the interested parties. Hence, this paper suggests that P2P lending platforms should readjust their operational strategies to fit Taiwan's social culture, encourage the government to extend the related regulations, and partner or compete healthily with banks in order to develop sustainably.

Bih-Huang Jin, Yung-Ming Li, Tsai-Wei Liu
Analysis of Interoperability Attributes for Supplier Selection in Supply Chain Segmentation Strategy

Because of the competitive market, companies have felt the need to adjust and rethink their systems, processes and organizational methods to meet the increasingly stringent demands imposed at speed by the environment in which they operate. The delay in processing information in different parts of the company, as well as with suppliers, represents a major challenge for supply chains. One way for companies to adjust and rethink their organizational processes, allowing them to assess the characteristics of their relations with partner companies in their productive processes, is through the company's interoperability. This paper performs an integrative literature review verifying the attributes necessary for implementing interoperability in companies. The motivation for the paper lies in vetting the requirements for implementing and evaluating interoperability, in order to contribute to the selection of suppliers with the ideal configuration of company environments when studying supply chain segmentation.

Laion Xavier Pereira, Eduardo Alves Portela Santos, Eduardo de Freitas Rocha Loures
Solvency II – A BPM Application

Solvency II brings challenging requirements for insurance companies concerning the amount and periodicity of the reported information. Hence, it is difficult for an insurer to comply with such requirements without mapping and organizing the flow of information involved in the Solvency II process. This paper contributes to the literature in several ways: we provide an assessment of the Information Management needs of a medium-sized Portuguese insurance firm - CA Seguros - regarding Solvency II; in addition, we map and document the whole process of Information Management for quantitative reporting (Pillars 1 and 3) under Solvency II. Hence, we assess the possibility of mapping a firm's Solvency II process with a business process model using Event-driven Process Chains.

Tiago Gonçalves, João Silva, Nuno Silva
Legal and Economic Aspects of Virtual Organizations

The paper focuses on the issue of virtual organizations, which can also be called virtual offices, enterprises registered at virtual addresses, etc. The paper addresses several goals. First, this contribution supports other findings that the number of this kind of business is significant in the Czech Republic. Second, the legal environment is discussed. Then, different reasons for using a virtual address are mentioned. These reasons are presented as legal and economic aspects. The paper tries to uncover enterprises' incentives for choosing a virtual address. The classical incentives are presented in the literature; some incentives may be valid only under specific national conditions. This paper uses available data to support its statements, which should explain the current importance of virtual addresses in the Czech Republic.

Luboš Smrčka, Dagmar Čámská
An Approach for Knowledge Extraction from Source Code (KNESC) of Typed Programming Languages

Knowledge extraction is the discovery of knowledge from structured and/or unstructured sources. This knowledge can be used to build or enrich a domain ontology. Source code is rarely used, yet implementation platforms evolve faster than business logic, and these evolutions are usually integrated directly into source code without updating the conceptual model. In this paper, we present a generic approach for knowledge extraction from the source code of typed programming languages using Hidden Markov Models (HMMs). This approach consists of defining the HMM so that it can be used to extract any type of knowledge from the source code. The method is tested on EPICAM and GeoServer, developed in Java, and on MapServer, developed in C/C++. Structural evaluation shows that source code contains a structure that permits building a domain ontology, and functional evaluation shows that source code contains more knowledge than that contained in both databases and meta-models.

Azanzi Jiomekong, Gaoussou Camara
Measuring the Quality of Humanitarian Information Products: Insights from the 2015 Nepal Earthquake

Information plays a critical role in humanitarian assistance. It has become a product that is shared for multiple purposes such as situational awareness, decision-making, coordination, reporting, and attracting funding. In the aftermath of sudden-onset disasters, humanitarians are constrained by huge workloads, time pressure, and uncertainties; thus, information products are often criticized with respect to quality issues. In this paper, we aim at developing an empirically grounded framework that can measure the quality of information products through accuracy, objectivity, completeness, and consistency. We validate the framework with the help of practitioners and apply it to the information products of the UN WFP for the 2015 Nepal earthquake response. Our analysis shows that the quality of the studied information products could be improved with respect to consistency, accuracy, and objectivity. We discuss the implications of our study and propose future research directions.

Hossein Baharmand, Tina Comes
Assessing Review Reports of Scientific Articles: A Literature Review

Computational support has been applied to automate different stages of the peer review process, such as reviewer assignment, review of the content of the scientific article, and detection of plagiarism and bias, all applying Machine Learning (ML) techniques. However, there is a lack of studies identifying the instruments used to evaluate the reviewers' reports. This systematic literature review aims to find evidence about which techniques have been applied in the assessment of reviewers' reports. Six online databases were searched, in which 55 articles published since 2000 were identified as meeting the inclusion criteria of this review. The result is 6 relevant studies that address models for assessing scientific article reviews. Nevertheless, the use of ML was not identified in any case. Our findings therefore demonstrate that only a few instruments are used to assess reviewers' reports and, furthermore, that they cannot be reliably used to extensively automate the review process.

Amanda Sizo, Adriano Lino, Álvaro Rocha
Differences Between Urgent and Non Urgent Patients in the Paediatric Emergency Department: A 240,000 Visits’ Analysis

The volume of non-urgent attenders has been a problem in the emergency department for several decades, leading to overcrowding, unnecessary exposure to the hospital environment and unnecessary costs to the National Health System. This study aims to search for differences between urgent and non-urgent patients. Considering information available at the time of admission, this study identifies referrals and previous visits within 24 h as deterrents for non-urgent visits. The further away a patient lives from the paediatric emergency department, the less likely they are to be admitted as a non-urgent visit. This study also identifies Ophthalmology and Stomatology as the discharge physician's specialities most likely to receive a non-urgent visit. The cost of non-urgent visits amounts to 2,500,000€ per year in this paediatric emergency department alone. This burden could be greatly reduced by profiling these patients and implementing measures for them to find alternative and more appropriate means of health care.

João Viana, Almeida Santos, Alberto Freitas
Methodology for the Implementation of Knowledge Management Systems in the University

This paper describes a methodology for the implementation of knowledge management systems in the university context, based on the study and adaptation of the most recognized methodologies in this area, as well as on strategic planning and a systemic approach, which facilitates the development of technological support systems. The methodology aims to integrate and standardize the different phases and activities required to manage this type of system in the university, strategically establishing the most important activities and adopting actions for their development, in order to facilitate organizational, academic, scientific and technological strengthening. Its main structure emphasizes a spiral approach that represents the ongoing recurrent development between its different phases and activities, and that leads to improvement through an iterative and incremental process, since new plans or activities can be added to achieve the objectives.

Víctor Hugo Medina García, Lina María Medina Estrada, Joaquín Javier Meza Alvarez
Adapting a Multi-SOM Clustering Algorithm to Large Banking Data

In recent years, Big Data (BD) has attracted researchers in many domains as a new concept providing opportunities to improve research applications in business, science and engineering. Big Data Analytics is becoming a practice that many researchers adopt to construct valuable information from BD. This paper presents BD technologies and how BD is useful in cluster analysis. Then, a clustering approach named multi-SOM is studied. In doing so, a banking dataset is analyzed by integrating the R statistical tool with BD technologies that include the Hadoop Distributed File System, HBase and MapReduce. Hence, we aim to decrease the execution time of the multi-SOM clustering method in determining the number of clusters using R and Hadoop. Results show the performance of integrating R and Hadoop to handle big data using the multi-SOM clustering algorithm and to overcome the weaknesses of R.

Imèn Khanchouch, Mohamed Limam
Redundant Independent Files (RIF): A Technique for Reducing Storage and Resources in Big Data Replication

Most cloud computing storage systems use a distributed file system (DFS) to store big data, such as the Hadoop Distributed File System (HDFS) and the Google File System (GFS). The DFS replicates data and stores it as multiple copies to achieve high reliability and availability; on the other hand, that technique increases storage and resource consumption. This paper addresses these issues by presenting a decentralized hybrid model. That model, called CPRIF, is a combination of a cloud provider (CP) and a suggested service that we call Redundant Independent Files (RIF). The CP provides HDFS without replicas, and the RIF acts as a service layer that splits data into three parts and uses the XOR operation to generate a fourth part as parity. These four parts are stored in HDFS as independent files on the CP. The generated parity file not only guarantees the security and reliability of data but also reduces storage space, resource consumption and operational costs. It also improves writing and reading performance. The suggested model was implemented on a cloud computing storage system that we built using three physical servers (Dell T320) running a total of 12 virtual nodes. The TeraGen benchmark tool and Java code were used to test the model. Implementation results show that the suggested model decreased storage space by 35% compared to other models and improved data writing and reading by about 34%.

Mostafa R. Kaseb, Mohamed H. Khafagy, Ihab A. Ali, ElSayed M. Saad
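
The RIF idea described above rests on splitting a block into three parts and deriving a fourth parity part with XOR, so that any single lost part can be rebuilt. A minimal sketch of that parity scheme (illustrative only, not the authors' CPRIF implementation; padding handling is an assumption):

```python
def split_with_parity(data: bytes):
    # Pad so the buffer divides evenly into three parts.
    pad = (-len(data)) % 3
    data += b"\x00" * pad
    n = len(data) // 3
    parts = [data[i * n:(i + 1) * n] for i in range(3)]
    # Fourth part: byte-wise XOR of the three data parts.
    parity = bytes(a ^ b ^ c for a, b, c in zip(*parts))
    return parts, parity, pad

def recover_missing(available_parts, parity, ):
    # Rebuild one missing data part by XOR-ing the two remaining parts
    # with the parity part.
    a, b = available_parts
    return bytes(x ^ y ^ z for x, y, z in zip(a, b, parity))

if __name__ == "__main__":
    parts, parity, pad = split_with_parity(b"some block of HDFS data")
    rebuilt = recover_missing([parts[0], parts[2]], parity)
    assert rebuilt == parts[1]
```

Storing four parts of size n/3 instead of three full replicas is what drives the storage reduction reported in the abstract.
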
Improving Employee Recruitment Through Data Mining

Companies have always struggled with recruiting suitable candidates. In this age of data, we believe that the process of recruiting candidates is broken. This paper presents our efforts to improve the process by introducing data analytics and smart decision making. Recruiters and recruiting companies can benefit from such findings by analyzing key performance indicators and recommendation systems when recruiting new candidates. Furthermore, we propose an approach of identifying employment trends as well as new skills that are required by the job market. The procedure is fully automatic and relies on machine learning approaches.

Visar Shehu, Adrian Besimi
Where to go in Brooklyn: NYC Mobility Patterns from Taxi Rides

Urban centers attractive to local citizens commonly house local cuisine restaurants or commercial areas. Local authorities are interested in discovering patterns to explain why city residents go to different areas of the city at a given time of the day. We explore a massive dataset of taxi rides, 69 million records in New York City, to uncover attractive places for local residents when going to Brooklyn. First, we obtain the origin-destination matrix for New York boroughs. Second, we apply a density-based clustering algorithm to detect popular drop-off locations. Next, we automatically find the closest venue, using the Foursquare API, to the most popular destination in each cluster. Our methodology lets us uncover popular destinations in urban areas in any city for which taxi ride information is available.

Juan Carlos Garcia, Allan Avendaño, Carmen Vaca
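
The clustering step of the abstract above can be sketched with a standard density-based algorithm over drop-off coordinates. A minimal sketch using DBSCAN from scikit-learn (the radius and minimum-samples values are assumptions, not the authors' parameters):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def popular_dropoffs(latlon: np.ndarray, eps_km: float = 0.2, min_samples: int = 50):
    """latlon: array of shape (n, 2) with [latitude, longitude] in degrees."""
    # The haversine metric works on (lat, lon) in radians; eps is an angle,
    # so the kilometre radius is divided by the Earth's radius.
    earth_radius_km = 6371.0
    db = DBSCAN(eps=eps_km / earth_radius_km, min_samples=min_samples,
                metric="haversine", algorithm="ball_tree").fit(np.radians(latlon))
    labels = db.labels_  # -1 marks noise points
    centroids = {label: latlon[labels == label].mean(axis=0)
                 for label in set(labels) - {-1}}
    return labels, centroids
```

Each cluster centroid could then be passed to a venue API (the paper uses the Foursquare API) to name the closest point of interest.
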
Supporting IoT Data Similarity at the Edge Towards Enabling Distributed Clustering

Hundreds of billions of things are expected to be integrated into heterogeneous Internet-of-Things (IoT) applications, which promise to drive the Future Internet. This variety of IoT data demands intelligent solutions that make sense of current data in real time, closer to the data origin. Clustering physically distributed data would enable efficient utilization, where finding similarity becomes the central issue. To this end, Jaro-Winkler- and Jaccard-like algorithms have been proposed and extended into a distributed protocol to enable distributed clustering at the edge. A performance study, on a scalable IoT platform and an edge device, shows the feasibility and effectiveness of the approach with respect to efficiency and applicability.

Hasibur Rahman
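
As a minimal illustration of the Jaccard-like similarity mentioned in the abstract (the token-set representation is an assumption, not the paper's protocol), two IoT data records can be scored by the overlap of their attribute sets:

```python
def jaccard_similarity(a: set, b: set) -> float:
    # Ratio of shared attributes to all attributes across both records.
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Example: two sensor descriptions sharing most attributes score close to 1.
s1 = {"temperature", "celsius", "indoor", "room-12"}
s2 = {"temperature", "celsius", "indoor", "room-14"}
print(jaccard_similarity(s1, s2))  # 0.6
```

A distributed clustering step would use such pairwise scores to decide whether records held on different edge nodes belong to the same group.
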
Uncertainty in Context-Aware Systems: A Case Study for Intelligent Environments

Data used by context-aware systems is naturally incomplete and does not always reflect real situations. The dynamic nature of intelligent environments leads to the need to analyse and handle uncertain information. Users can change their acting patterns within a short space of time. This paper presents a case study for a better understanding of concepts related to context awareness and the problem of dealing with inaccurate data. Through the analysis and identification of elements that result in the construction of unreliable contexts, it aims to identify patterns to minimize incompleteness. Thus, it will be possible to deal with flaws caused by undesired execution of applications.

Leandro Oliveira Freitas, Pedro Rangel Henriques, Paulo Novais
An Incident Handling Guide for Small Organizations in the Hospitality Sector

Security threats to small organizations are on the rise, and many small organizations do not have the aptitude to address and properly respond to these incidents within their organizations. According to a 2016 Ponemon Institute survey of 600 small businesses, fifty percent (50%) had experienced a data breach and fifty-five percent (55%) had experienced a cyber attack in the past twelve months. Having an incident response plan is the most noteworthy cost-saving measure. The 2017 IBM and Ponemon study found that organizations that can contain a breach in less than 30 days can save about $1 million. Hence, a small organization without an incident response plan is very likely to face great reputational damage and financial losses. The research methodology reviews current incident response frameworks, identifies relevant incident response guidelines and tailors the current and relevant frameworks into a small-business-centric incident response guide that tackles the threats small hotels and casinos face.

Oluwadamilola Ogunyebi, Bobby Swar, Shaun Aghili
The Moderating Influence of Socially-Inspired Trust on the Connection Between Persuasive Governmental Social Media Quality and Success

The connection between governmental social media quality and success needs more verification. While some empirical studies back this direct link, others have failed to do so. This research-in-progress investigation analyzes the moderating role of socially-inspired trust in the relationship between citizens' perceptions of social media persuasive quality and their perceived success of the same. The statistical analyses of responses collected via a field survey provided partial support for the importance of socially-inspired trust for social media success.

Adel M. Aladwani
Persuasive Website Quality and Knowledge Sharing Success: A Preliminary Examination

Persuasive website quality evaluation is a technique concerned with testing the robustness and appeal of an online environment from a user's perspective. The topic has recently gained some attention, and in this study we try to re-examine it in a knowledge sharing setting, e.g., digital libraries of academic institutions. The objective of the current article is to analyze the connection between the persuasive quality of Kuwait University's library portal and knowledge sharing success, and to report the results in the hope of informing relevant discussion on the topic.

Adel M. Aladwani
Human Post-editing in Hybrid Machine Translation Systems: Automatic and Manual Analysis and Evaluation

This study assesses, automatically and manually, the performance of two hybrid machine translation (HMT) systems, via a text corpus of questions in the Spanish and English languages. The results show that human evaluation metrics are more reliable when evaluating HMT performance. Further, there is evidence that MT can streamline the translation process for specific types of texts, such as questions; however, it does not yet rival the quality of human translation, for which post-editing is key.

Juncal Gutiérrez-Artacho, María-Dolores Olvera-Lobo, Irene Rivera-Trigueros
LGBT Tourism: The Competitiveness of the Tourism Destinations Based on Digital Technology

In this article, we present a literature review on the influence of LGBT consumers on tourism. Several authors demonstrate the clear relationship between countries' progressive policies towards LGBT people and the economic benefits for their tourism sector, as well as the increasing social benefits resulting from the associated brand image of tolerance, inclusiveness and diversity. The LGBT community has a strong sense of identity, constantly shares experiences and information, is in constant virtual interaction, and uses all available means of communication, especially the communication channels and online platforms developed specifically for this community, such as online associations and forums, specialized websites and apps, and several social networks. Those responsible for marketing tourism products and services have been increasingly committed to reaching this target through the several online communication channels. Nonetheless, international publications point to the need for even more products and websites designed for and directed at this type of customer. Information and communication technologies modify the interaction with individual or institutional clients and enable the adoption of innovative business models and electronic sales channels for tourism products. The role of ICT in tourism has become essential in today's world of quick information, especially regarding LGBT tourists' options.

Pedro Liberato, Dália Liberato, António Abreu, Elisa Alén, Álvaro Rocha
Facial Emotion Detection in Massive Open Online Courses

Recently, the Massive Open Online Course (MOOC) has appeared as a new emerging method of online teaching with the advantages of low cost and unlimited participation as well as open access via the web. However, the use of facial emotion detection in MOOCs is still unexplored and challenging. In this paper, we propose a new innovative approach for facial emotion detection in MOOCs, which provides adaptive learning content based on students' emotional states and their profiles. Our approach is based on three principles: (i) modeling the learner using the MOOC; (ii) using pedagogical agents during the learning activities; (iii) capturing and interpreting the facial emotions of the students. The proposed approach was implemented and tested in a case study on the MOOC.

Mohamed Soltani, Hafed Zarzour, Mohamed Chaouki Babahenini
Integration of Process Mining and Simulation: A Survey of Applications and Current Research

Process mining is a field that uses elements from data mining and business process modeling to perform tasks such as process discovery, conformance checking, and process enhancement. This article presents a study about the application of process mining techniques in simulation and the integration between the two disciplines. Specifically, it shows a series of developments in the field that illustrate the possible applications. Also, the current main challenges of integrating process mining and simulation are exposed, one of the main issues being the lack of compatibility between tools for process mining and simulation. The objective of this article is to show the importance and practical utility of applying process mining approaches in simulation. The literature review shows that while there have been developments towards an integrated framework for process mining and simulation, the development of a standard unified methodology is still an open problem.

Brian Keith Norambuena
Improving Regression Models Using Simulated Annealing for Stock Market Speculation

The Forex market enables the exchange of so-called convertible currencies from one specific currency to another worldwide. Currency exchange rates rise and fall over time through trading between various participants (private investors, central banks and enterprises). The main pillar of the Forex market is the temporal prediction of the currency exchange rate; it must be well forecasted to invest in currencies and to maximize profits, which makes speculation more flexible. In the literature, many papers discuss combining two methods to improve the prediction of currency exchange rates. In this paper we propose a hybrid model combining a regression algorithm with the simulated annealing algorithm in order to predict the daily exchange rates of the USD/EUR pair. Finally, the experiments validate that the hybrid model of the regression algorithm and the simulated annealing algorithm can be beneficial for the prediction of exchange rates.

Hana Jamali, Omar Bencharef, Abdellah Nabaji, Khalid El Housni, Zahra Asebriy
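
To make the regression/simulated-annealing combination concrete, here is a minimal sketch in which simulated annealing perturbs the coefficients of a simple linear regression to reduce squared prediction error. It is illustrative only; the hybrid model in the paper, its features and its cooling schedule are not specified here, so all parameters are assumptions:

```python
import numpy as np

def fit_regression_sa(X, y, iters=5000, temp0=1.0, cooling=0.999, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # add intercept column
    w = rng.normal(size=Xb.shape[1])                 # initial coefficients
    cost = np.mean((Xb @ w - y) ** 2)
    temp = temp0
    for _ in range(iters):
        candidate = w + rng.normal(scale=step, size=w.shape)
        c_cost = np.mean((Xb @ candidate - y) ** 2)
        # Simulated annealing acceptance rule: always accept improvements,
        # accept worse candidates with a temperature-dependent probability.
        if c_cost < cost or rng.random() < np.exp((cost - c_cost) / temp):
            w, cost = candidate, c_cost
        temp *= cooling
    return w, cost
```

The annealing step allows the coefficient search to escape poor local solutions that a plain gradient-free perturbation scheme would get stuck in.
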
Machine Learning Based Sentiment Analysis on Spanish Financial Tweets

Nowadays, financial data on social networks play an important role in predicting the stock market. However, the exponential growth of financial information on social networks such as Twitter has led to a need for new technologies that automatically collect and categorise large volumes of information in a fast and easy manner. The Natural Language Processing (NLP) and sentiment analysis areas can solve this problem. In this respect, we propose a supervised machine learning method to detect the polarity of financial tweets. The method employs a set of lexico-morphological and semantic features, which were extracted with the UMTextStats tool. Furthermore, we have conducted a comparison of the performance of three classification algorithms (J48, BayesNet, and SMO). The results showed that SMO provides better results than the BayesNet and J48 algorithms, obtaining an F-measure of 73.2%.

José Antonio García-Díaz, María Pilar Salas-Zárate, María Luisa Hernández-Alcaraz, Rafael Valencia-García, Juan Miguel Gómez-Berbís
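
A minimal sketch of the classifier comparison described above, using scikit-learn analogues of the Weka algorithms named in the abstract (SMO ≈ linear SVM, J48 ≈ decision tree, BayesNet ≈ naive Bayes) and plain TF-IDF features instead of the paper's UMTextStats features; the data layout is hypothetical:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import MultinomialNB

def compare_classifiers(tweets, labels):
    """tweets: list of tweet strings; labels: their polarity annotations."""
    results = {}
    for name, clf in [("SMO (linear SVM)", LinearSVC()),
                      ("J48 (decision tree)", DecisionTreeClassifier()),
                      ("BayesNet (naive Bayes)", MultinomialNB())]:
        pipe = make_pipeline(TfidfVectorizer(), clf)
        scores = cross_val_score(pipe, tweets, labels, cv=10, scoring="f1_macro")
        results[name] = scores.mean()   # mean F-measure across folds
    return results
```

The macro-averaged F-measure is the same kind of score the abstract reports (73.2% for SMO).
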
Software-Based Brand Ambassador Selection — A Celebrity-Branding Assessment Framework in Action

Premium consumer product manufacturers today frequently cooperate with celebrities as brand ambassadors to generate PR leverage, enhance emotional brand appeal and create unique customer experiences. In this context, the selection of the right celebrity brand ambassador is crucial. The related decision-making process involves the strategic evaluation of celebrities and the subsequent selection of brand ambassadors, which are human-centric, knowledge-intensive tasks; therefore, they are frequently performed in a rather informal way. While celebrity endorsement has been extensively studied in the literature, a more formal, process-oriented approach to brand ambassador selection as the basis for an appropriate decision support system is still lacking. Therefore, in this paper a celebrity-branding assessment framework is proposed. On the basis of the framework, a celebrity evaluation software tool is developed and evaluated. The evaluation verifies the feasibility of the proposed approach; however, the implementation of brand-specific adaptations proves to be essential.

Selina Görgner, Philipp Brune
Field Measurements of Electrical Consumption in a Multi-purpose Educational Building

The energy demand of non-domestic buildings includes a wide variety of uses and energy services. Researchers at the Polytechnic University of Madrid and the American University of Ras Al Khaimah have developed a smart metering system to report and analyze the electrical consumption in buildings. The system is non-intrusive, can be easily deployed in electric boards and sends data to a central base station located away from the metering device. In this article the system is tested in an educational facility. The lighting, power outlet and HVAC circuits are analyzed, and finally the results are discussed.

Fernando del Ama Gonzalo, Jose A. Ferrandiz, David Fonseca Escudero, Juan A. Hernandez
Role of Data Properties on Sentiment Analysis of Texts via Convolutions

Dense and low dimensional word embeddings opened up the possibility to analyze text polarity with highly successful deep learning techniques like Convolutional Neural Networks. In this paper we utilize pretrained word vectors in combination with simple neural networks of stacked convolution and max-pooling layers to explore the role of dataset size and document length in sentiment polarity prediction. We experiment with song lyrics and reviews of products or movies and find that the convolution-pooling combination is very fast and yet quite effective. We also find interesting relations between dataset size, text length, feature map length and classification accuracy. Our next goal is the design of a generic neural architecture for analyzing the polarity of various text types, with high accuracy and few hyper-parameter changes.

Erion Çano, Maurizio Morisio
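
A minimal sketch of the stacked convolution and max-pooling architecture described above, written with Keras; filter counts, kernel sizes and vocabulary size are assumptions, not the authors' exact configuration, and the embedding is learned from scratch rather than loaded from pretrained vectors for brevity:

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, embed_dim = 20000, 100

model = keras.Sequential([
    # In the paper, pretrained word vectors initialise this layer.
    layers.Embedding(vocab_size, embed_dim),
    layers.Conv1D(128, 5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),   # polarity: positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, ...) on padded integer word-index sequences
```

Because pooling discards position, the same network can be applied to short reviews and long lyrics, which is what makes the dataset-size and text-length comparison in the abstract possible.
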
A Comparison of Feature Selection Methods to Optimize Predictive Models Based on Decision Forest Algorithms for Academic Data Analysis

Nowadays, Feature Selection (FS) methods are essential (1) to create easy-to-explain predictive models in shorter periods of time, (2) to reduce overfitting and (3) to avoid data sparsity. The suitability of using these techniques is studied in this paper. Furthermore, a comparison of some widely adopted techniques is performed to determine which one is more appropriate for creating predictive models using decision forest algorithms. For this comparison, experiments are conducted in which predictive models for each FS method are built to foresee whether students will finish their degree after finishing their first year in college. A real dataset with students' data provided by the University of Almería is used to generate the predictive models. By comparing the accuracy of the built models, we can measure the effectiveness of each FS method, with the Chi-Square statistic being the method that leads to the best results in our experimental study.

Antonio Jesús Fernández-García, Luis Iribarne, Antonio Corral, Javier Criado
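
As a minimal sketch of the combination the study found most effective, chi-square feature selection can be chained with a decision-forest classifier in scikit-learn (the number of selected features and trees are assumptions, and the student feature matrix is hypothetical):

```python
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

def build_model(k_features: int = 20):
    # chi2 requires non-negative features (e.g. counts or one-hot encodings).
    return make_pipeline(SelectKBest(chi2, k=k_features),
                         RandomForestClassifier(n_estimators=200, random_state=0))

# model = build_model()
# model.fit(X_train, y_train)      # X: first-year student features, y: degree completed?
# accuracy = model.score(X_test, y_test)
```

Comparing such pipelines with different selectors plugged into the first step is the kind of experiment the abstract describes.
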
Building Sustainable NRENs in Africa - A Technological and e-Education Perspective

Nowadays, it is consensual that Information and Communication Technologies play a crucial role in universal access to education and science, with National Research and Education Networks (NRENs) being one major example of their successful application in the academic and research context. Although in developed countries NREN infrastructures and services are widely adopted and mature, in developing countries many challenges need to be overcome to guarantee NREN deployment and sustainability. In this context, this paper discusses the NREN panorama in Africa, with particular focus on Mozambique, covering the technological steps behind NREN deployment and the challenges of providing relevant services to the enrolled community in an encompassing and sustainable way. For this purpose, the technological and scientific ecosystems of Africa and Mozambique are analysed, and the national/international synergies at the NREN and higher education levels are identified. Finally, a service model for fostering e-learning and science sharing in Mozambique is proposed.

Marangaze Munhepe Mulhanga, Solange Rito Lima
Augmenting SMT with Semantically-Generated Virtual-Parallel Corpora from Monolingual Texts

Several natural languages have undergone a great deal of processing, but the problem of limited textual linguistic resources remains. The manual creation of parallel corpora by humans is rather expensive and time consuming, while the language data required for statistical machine translation (SMT) do not exist in adequate quantities for their statistical information to be used to initiate the research process. On the other hand, applying known approaches to build parallel resources from multiple sources, such as comparable or quasi-comparable corpora, is very complicated and provides rather noisy output, which later needs to be further processed and requires in-domain adaptation. To optimize the performance of comparable corpora mining algorithms, it is essential to use a quality parallel corpus for training of a good data classifier. In this research, we have developed a methodology for generating an accurate parallel corpus (Czech-English) from monolingual resources by calculating the compatibility between the results of three machine translation systems. We have created translations of large, single-language resources by applying multiple translation systems and strictly measuring translation compatibility using rules based on the Levenshtein distance. The results produced by this approach were very favorable. The generated corpora successfully improved the quality of SMT systems and seem to be useful for many other natural language processing tasks.

Krzysztof Wołk, Agnieszka Wołk
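
The abstract above measures how closely the outputs of several MT systems agree using rules based on the Levenshtein distance. A minimal sketch of such a compatibility score (the normalization and averaging are illustrative assumptions, not the authors' exact thresholds):

```python
def levenshtein(a: str, b: str) -> int:
    # Standard dynamic-programming edit distance with a rolling row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def compatibility(candidates: list[str]) -> float:
    """Mean pairwise normalized similarity of at least two candidate translations."""
    pairs = [(a, b) for i, a in enumerate(candidates) for b in candidates[i + 1:]]
    sims = [1 - levenshtein(a, b) / max(len(a), len(b), 1) for a, b in pairs]
    return sum(sims) / len(sims)
```

Sentence pairs whose candidate translations score above a chosen compatibility threshold would be kept for the virtual-parallel corpus, while divergent translations are discarded as unreliable.
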
Assessing the Impact of Internet of Everything Technologies in Football

The Internet of Things has been one of the hottest technology concepts of recent years. It started with wearable devices and any digital device connected online, and evolved into a web-connected network linking everything from devices, sensors, machines, people and processes to companies, creating the Internet of Everything concept. There are many application areas, but one stands out due to its popularity and industrial importance: sports, specifically football. Football has been reinventing itself through the implementation of technology, recreating the formula used in the major US sports, where technology helps enhance the spectacle experience, expand game analysis by coaches, players and media, support live refereeing, and improve health recovery and the detection of injuries. This research surveys the state of technology in football, recognizing the technologies presently used and those that could be implemented, and ultimately measuring the impact of these devices on football.

Marcelo Pires, Vítor Santos
A Model for Knowledge Management and Information Systems at the Faculty of Education in Kuwait University

No one can deny the role of knowledge management and information systems in improving the quality of services being provided by educational institutions. The aim of the current research paper is to propose a model of knowledge management and information systems at the Faculty of Education in Kuwait University, in an attempt to improve outputs obtained from Kuwaiti higher education institutions. To achieve the objective of the study, the researcher will depend on reviewing relevant literature related to knowledge management and information systems to determine the critical elements of the proposed model.

Sultan Ghaleb Aldaihani
Who Are Your Design Heroes? Exploring User Roles in a Co-creation Community

Co-creation with users in online communities has proved to be a powerful means of product innovation. Crowdsourcing ideas in a contest setting within a community represents an effective method to gather a variety of ideas within a short time and with reasonable financial investment. Users benefit as well: they can be part of industrial value creation, enjoy interacting with a company and socializing with other users, and win a prize. Interestingly, many users not only compete for prizes, but also collaborate with others by giving feedback and exchanging ideas. Thus, we find high heterogeneity among users, which calls for adequate community management (incentives, facilitation, communication, etc.). In this study, we explore user roles and communication patterns in an industrial design contest community by applying cluster analysis based on network measures and content analysis. Four user roles were found that differ in communication and contribution behavior.

Manuel Moritz, Tobias Redlich, Jens Wulfsberg
Digital Transformation: A Literature Review and Guidelines for Future Research

The aim of this paper is to provide insights regarding the state of the art of Digital Transformation and to propose avenues for future research. Using a systematic literature review of 206 peer-reviewed articles, this paper provides an overview of the literature. Among other things, the findings indicate that managers should adapt their business strategy to a new digital reality. This mainly results in the adaptation of processes and operations management. Scholars, on the other side, are also facing challenges, as prior research may not have identified all the opportunities and challenges of Digital Transformation. Furthermore, while Digital Transformation has expanded to all sectors of activity, some areas have more prospects of being developed in the future than others.

João Reis, Marlene Amorim, Nuno Melão, Patrícia Matos
Evaluation of the Maturity Level of the IT Services in Higher Education Institutions: Portugal and Russia

This work presents an evaluation of the maturity level of IT service management at two higher education institutions, from Portugal and Russia. We identify the correspondences between process capability levels and organizational system maturity levels. One of the options for measuring IT maturity is to compare the IT service team's operations, planning and processes with international models. In this context, we decided to use the Information Technology Infrastructure Library (ITIL) framework to analyze the selection, planning, delivery and support of IT services by both institutions. This framework is an example of best practice that covers the control and management of all aspects of IT-related operations. For the development of this study, we used the survey technique, as it yields clear, direct and objective answers. With the results from these studies, we can identify policies that can be adapted and used to improve efficiency and achieve predictable service levels, and help in the implementation of improvements for the management of IT services.

João Paulo Pereira, Iuliia Zubar, Efanova Natalya
Improving Project Management Practice Through the Development of a Business Case: A Local Administration Case Study

The identification and implementation of the best practices of project management are preponderant and decisive factors for the success of companies, regardless of their area of intervention. This emphasis arises from the need for companies to respond quickly, efficiently and in an integrated manner to the challenges that an ever-changing environment offers. In a public transportation company, the challenges should focus on project management improvement initiatives, considering its organizational context and the low level of organizational maturity in project management. The purpose of the research was to develop a Business Case template intended to be integrated into the project management life cycle of the company under study, transversal to all the knowledge areas described by the PMBoK and drawing on several inputs provided by PRINCE2 and BABOK. The aim of the research work was thus to develop a solution to justify initiatives that lead to projects and to ensure correct management throughout the life cycle of each project.

Fernando Martins, Pedro Ribeiro, Francisco Duarte
An Online Focus Group Approach to e-Government Acceptance and Use

The emergence and widespread use of Information and Communication Technologies has enabled government agencies to develop their capacity for citizen interaction, hence providing better public services. These electronic services, typically known as e-Government, have been undergoing significant development over the last few years. Information technologies have become one of the central elements in electronic governance today, and they will probably remain so in the future. Despite the clear advantages inherent in the use of e-Government services, the interaction rate between citizens and governments is still low and considered unsatisfactory. In order to understand which determinants may influence and trigger the adoption of e-Government services by civil society, a research methodology consisting of a theory-based qualitative validation has been defined. This validation is performed by an Online Focus Group, in which several experts were involved with the goal of reaching a set of noteworthy considerations about the potential determinants that can influence the use of e-Government services.

Soraia Nunes, José Martins, Frederico Branco, Mijail Zolotov
Archetype Development Process: A Case Study of Support Interoperability Among Electronic Health Record in the State of Minas Gerais, Brazil

Interoperability among electronic medical records requires a standard that guarantees the semantic persistence of information. This study proposes an archetype development process to support the Electronic Health Record (EHR) in the State of Minas Gerais (MG), Brazil. It is a case study with a qualitative analysis of an applied nature and exploratory methodological purposes. For this, a literature review on archetype development processes was carried out and the selected studies had their processes compared. Then, a dedicated archetype development process was proposed, also considering the legislation of the Unified Health System in Brazil (SUS). The process was tested in a proof of concept, a practical test of the theoretical proposal. The proposed governance model was considered adequate for the organization of the EHR at SUS in MG. It is expected that, with its effective implementation, the proposed process will support interoperability among clinical data arising from different levels of health care services.

Thais Abreu Maia, Cristiana Fernandes De Muylder, Zilma Silveira Nogueira Reis
Towards Information Warehousing: A Case Study for Tweets

In this paper, we introduce the paradigm of information warehousing and provide a generic information warehouse infrastructure for social media to enable the storage and analysis of massive information volumes generated by users daily through these platforms. We illustrate the implementation of the proposed framework in the case of Twitter by giving a multidimensional model which consists of fact and dimension tables. The extracted Twitter stream is exploited to perform clustering analysis using the BSO-CLARA algorithm in order to discover topics. The obtained results are very promising and the information warehouse is expected to be applicable for other types of information such as scientific articles.

Hadjer Moulai, Habiba Drias
Predictive Maintenance in the Metallurgical Industry: Data Analysis and Feature Selection

As a consequence of increasing competitiveness in the current economic environment, Proactive Maintenance practices are gradually becoming more common in industrial environments. In order to implement these practices, large amounts of heterogeneous information must be analysed, such that knowledge about the status of the equipment can be acquired. However, for this data to be of use, and before it can be processed by machine learning algorithms, it must go through an exploratory phase. During this step, relationships in the data, redundancy of features and possible meanings can be assessed. In this paper, a number of these procedures are employed, resulting in the discovery of meaningful information. Moreover, a subset of features is selected for future analysis, allowing for the reduction of the feature space from 47 to 32 features.

Marta Fernandes, Alda Canito, Verónica Bolón, Luís Conceição, Isabel Praça, Goreti Marreiros
Model for Sharing Knowledge in a Co-creation Process with High Ability Students

A way to motivate high ability students in their learning process is by involving them in the co-creation of educational material. In doing so, the knowledge gained from the ensuing interaction between the high ability students, their parents and teachers can be very useful because it not only enhances the co-creation process itself, but also the subsequent activities as they are able to be adjusted to the students’ particular needs and goals. This paper describes two parts of a Knowledge Management System that focuses on utilizing students’ personal interests and encouraging their active participation to make improvements to the co-creation process by sharing the responsibility for their learning materials. To achieve this, we propose and develop a knowledge model that can identify the knowledge used in a co-creation process. Based on this knowledge model, we present a Knowledge Management System design which considers processing, support and general informatic system components.

Juan Pablo Meneses-Ortegón, Teodor Jové, Ramon Fabregat, Mery Yolima Uribe-Rios
Evaluation of Information Systems Curriculum in Portugal and Russia

The importance of Information Technology (IT) and Information Systems (IS) to organizations and the need for skilled professionals in the field is one of the most important challenges for universities. With the technological and organizational changes, IS education has been under continuous adaptation, and higher education institutions have several difficulties in keeping their bachelor degree curricula updated. Several international organizations (ACM, AIS, BCS, IFIP, etc.) have proposed curriculum guidelines over the last 40 years, which are important for redesigning curricula for survival in the current economic environment. The main purpose of this work is to compare Portuguese and Russian bachelor degrees with several standard Information Systems curricula proposed by recognized international organizations. The results obtained show the differences that exist between the international curriculum guidelines and the bachelor degrees, and give us a perspective on the adequacy of the Portuguese and Russian curricula to current requirements.

João Paulo Pereira, Ivlev Aleksandr, Elena Popova
Implementation of Paperless Office in the Classroom

Paper is a necessity in our daily lives and a basic requirement in fields such as the office and education. Students constantly need paper during the learning process: a student generally uses paper for books, pre-test and post-test activities, reports, portfolios and homework, and may use up to one hundred sheets per semester. After all of this use, $792 is spent every semester just on printing these tasks, without reusing the paper, which ends up in the trash. Based on these issues, this study gives lecturers an opportunity to assess the external and internal conditions for implementing a paperless classroom in the learning process. Technology acts as the mediator of the paperless classroom, and the measures proposed on the basis of the identified SWOTs, for resolving weaknesses and dealing with threats, are significant for addressing learning problems. This paper does not modify the basic Paperless Office concept; rather, it uses the concept as a mediator between the traditional and the digital learning process in the classroom. The authors used a systematic literature review approach, starting with a literature review, followed by problem identification, a selection process, assessing, synthesizing and writing down the proposed ideas, and finally drawing conclusions. The output of this research is a new model (schematic and technical) for transferring the learning process from traditional to digital in the classroom.

Richardus Eko Indrajit, Saide, Rizqi Wahyuningsih, Lisa Tinaria
Towards a Mathematical Knowledge Management System: Ontology to Model Linear Equations

Knowledge management systems based on ontologies are an important software tool to maintain the knowledge of experts; however, the mathematical area needs improvements in aspects such as creating repositories of formalized mathematics, mathematical search and retrieval, and implementing math assistants. This article proposes an ontology to be used in a Mathematical Knowledge Management System (MKMS) with the objective of storing and retrieving systems of linear equations; these equations will serve to teach students to solve problems with an example-based approach. We built the ontology in the following phases: specification, conceptualization, formalization, and implementation. In addition, the ontology was evaluated before incorporating it into the MKMS. Finally, the article shows a general architecture of the MKMS to explain its theoretical operation. Although the ontology focuses on modeling a single topic, it defines the basis for modeling other mathematics topics and for being applied in an MKMS.

Alan Ramírez-Noriega, Yobani Martínez-Ramírez, José Armenta, Sergio Miranda, J. Francisco Figueroa Pérez, José Mendivil-Torres, Samantha Jiménez
A Conceptual Research Model Proposal of Digital Marketing Adoption and Impact on Low Density Tourism Regions

Nowadays, tourism faces the challenge of technological progress. Tourists are changing the way they search for information and the way they buy tourism products and services. Therefore, it becomes important to analyze the influence of relevant digital marketing tools on the success of low density tourism regions, measured through destination image, tourists' satisfaction and loyalty. The main aim of this article is to demonstrate the theoretical support of a model of the impact of digital marketing tools and technologies on low density tourism regions. To achieve this purpose, a literature review is used as the methodological basis. This study also intends to contribute to the scientific debate by improving knowledge about digital marketing tools applied to tourism, and to support this industry's stakeholders.

Filipa Jorge, Mário Sérgio Teixeira, Ricardo Jorge Correia, Ramiro Gonçalves, José Martins, Maximino Bessa
A Conceptual Model Proposal for Characterizing Discount and Outlet Platforms Adoption

The importance of e-commerce continues to grow in retail, providing companies with a critical tool to improve their marketing and commercial strategies. In this context, understanding the distribution channels and the new business models becomes a fundamental issue for both researchers and business managers. This paper has two main objectives. The first is a focused review of the recent literature on the adoption of e-commerce platforms, which supports the next step. The second is to propose an adoption model that characterizes Discount and Outlet Platforms (DOP) adoption. The final contribution is presented in the form of practical and theoretical implications, as well as future lines of action for possible investigations.

Carlos Peixoto, José Martins, Ramiro Gonçalves, Frederico Branco, Manuel Au-Yong-Oliveira
IoT Semantic Modeling Using the GWE (Generalized World Entities) Paradigm

We present here an overview of the Generalized World Entities (GWEs) paradigm, to be used to add a semantic/conceptual dimension to IoT/WoT procedures. GWEs offer a unified way to model seamlessly, at the conceptual level, both the digital counterparts of elementary entities like physical objects, humans and robots, and the semantic representations of higher levels of abstraction corresponding to structured situations/behaviors.

Gian Piero Zarri
Search in Collections of Mathematical Articles

In this paper we analyze an approach to semantic search of mathematical expressions, which allows users to query and retrieve mathematical formulae by the textual names of the variables they contain. We propose a method for establishing relations between the textual definitions of variables and the formulae containing these variables. Marked-up formulae are related, via noun phrases, to the concepts of a mathematical ontology. We describe the software implementation of semantic search in mathematical documents. We also discuss ways to increase search efficiency by improving the accuracy of noun phrase extraction and of the relations established between these entities and formulae. The research showed the general efficiency of the method and a high percentage of relevant relations established.
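
A minimal sketch of the core idea of relating textual variable definitions to the formulae that contain them, assuming formulae are marked up with $...$ and definitions follow a "<var> is the <noun phrase>" pattern; the actual system relies on noun-phrase extraction and a mathematical ontology rather than these toy regular expressions.

    import re

    # Toy document: formulae are marked up with $...$ and variable definitions
    # follow "<var> is/denotes the <noun phrase>" patterns.
    text = ("The kinetic energy is given by $E = m v^2 / 2$, "
            "where m is the mass of the body and v is the velocity.")

    # 1. Formulae and the single-letter variables they contain.
    formulae = re.findall(r"\$([^$]+)\$", text)
    formula_vars = {f: set(re.findall(r"\b([a-zA-Z])\b", f)) for f in formulae}

    # 2. Textual definitions of variables.
    definitions = dict(re.findall(
        r"\b([a-zA-Z])\s+(?:is|denotes)\s+the\s+([\w\s]+?)(?:,|\.| and )", text))

    # 3. Relate each formula variable to the concept named in its definition.
    for formula, variables in formula_vars.items():
        for var in sorted(variables):
            if var in definitions:
                print(f"{var} in '{formula}' -> concept '{definitions[var].strip()}'")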

Eugeny Birialtsev, Alexander Gusenkov, Olga Zhibrik, Polina Gusenkova, Yana Palacheva
Extending PythonQA with Knowledge from StackOverflow

Question and Answering (QA) systems provide a platform where users can ask questions in natural language and get answers retrieved from a knowledge base. The work proposed in PythonQA created a Question and Answering system for the Python programming language, whose knowledge base is built from the Python Frequently Asked Questions (PyFAQ). In this paper, we extend the PythonQA system by enriching the knowledge base with question-answer pairs from the StackExchange Python question answering community site. Tests were performed to analyze the impact of a richer knowledge base on the PythonQA system, which increases the number of answer candidates.
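
One possible way to harvest such question-answer pairs is to parse the Posts.xml file of the Stack Exchange data dump and pair each question with its accepted answer; the paper does not specify its extraction pipeline, so the sketch below is only illustrative.

    import xml.etree.ElementTree as ET

    # Posts.xml from the Stack Exchange data dump: PostTypeId 1 = question,
    # 2 = answer; a question points at its accepted answer via AcceptedAnswerId.
    questions, answers = {}, {}
    for _, row in ET.iterparse("Posts.xml"):
        if row.tag != "row":
            continue
        if row.get("PostTypeId") == "1" and row.get("AcceptedAnswerId"):
            questions[row.get("AcceptedAnswerId")] = row.get("Title")
        elif row.get("PostTypeId") == "2":
            answers[row.get("Id")] = row.get("Body")
        row.clear()

    # Question-answer pairs ready to be loaded into the PythonQA knowledge base.
    qa_pairs = [(title, answers[aid]) for aid, title in questions.items() if aid in answers]
    print(f"extracted {len(qa_pairs)} question-answer pairs")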

Renato Preigschadt de Azevedo, Pedro Rangel Henriques, Maria João Varanda Pereira
Using Probabilistic Topic Models to Study Orientation of Sustainable Supply Chain Research

Even though the notion of sustainable development calls for an equilibrium among social, environmental and economic dimensions, several studies have suggested that there is an imbalance in the attention given to the three dimensions. Nonetheless, few contributions have demonstrated such an imbalance. In this article, we propose a method based on the LDA topic model, conceived to speed up the analysis of the sustainability orientation of a corpus. To test the procedure, we compared the results obtained using our method against those from a manual coding procedure performed on about ten years of literature from top-tier journals dealing with Sustainable Supply Chain issues. Our results confirm the previously reported imbalance in research in this field: most research is oriented towards environmental and economic aspects, leaving aside social issues.
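
A minimal sketch of the underlying technique, using scikit-learn's LDA implementation on a toy corpus; the documents, topic count and labelling step are illustrative and not taken from the paper.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Toy corpus standing in for abstracts of sustainable supply chain papers.
    abstracts = [
        "carbon emissions and green logistics reduce environmental impact",
        "supplier cost efficiency improves economic performance and profit",
        "labour conditions and community welfare shape social sustainability",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(abstracts)

    lda = LatentDirichletAllocation(n_components=3, random_state=0)
    doc_topics = lda.fit_transform(X)          # document-topic proportions

    # Top words per topic; a manual step would then label each topic as
    # environmental, economic or social to quantify the corpus orientation.
    terms = vectorizer.get_feature_names_out()
    for k, component in enumerate(lda.components_):
        top_words = [terms[i] for i in component.argsort()[-4:][::-1]]
        print(f"topic {k}: {top_words}")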

Carlos Montenegro, Edison Loza-Aguirre, Marco Segura-Morales
A Vulnerability Study of Mhealth Chronic Disease Management (CDM) Applications (apps)

The mhealth applications industry has witnessed significant growth, in both revenue and popularity, since its inception. The introduction of mhealth CDM apps has improved the management of chronic diseases, as it gives physicians the opportunity to monitor their patients' symptoms more efficiently and effectively. With the benefits of mhealth CDM apps, however, also come vulnerabilities that can lead to unauthorized access to, and manipulation of, patients' health information. The presence of these vulnerabilities can harm patients' health and reputations, and there is currently a lack of security assurance frameworks tailored to mhealth CDM apps. In this regard, the objective of the research was to conduct a vulnerability study on mhealth CDM apps and to provide a set of security assurance recommendations tailored to them. To achieve this objective, thirty mhealth CDM apps were tested for vulnerabilities using vulnerability scanners; after the vulnerabilities were identified, mobile application frameworks and guidelines were reviewed to derive the security assurance recommendations for mhealth CDM apps.

Tolulope Mabo, Bobby Swar, Shaun Aghili
Specialized Case Tools for the Development of Expert Systems

This report presents an approach to building specialized computer-aided software engineering (CASE) tools for the development of expert systems. These tools form an integrated development environment allowing the computer-aided development of different applications in the appropriate field. The integrated environment considered in this report combines SWI-PROLOG and the PostgreSQL Database Management System (DBMS). SWI-PROLOG provides the most appropriate tools for solving the logical tasks in expert systems; however, it cannot manage large amounts of data. Therefore, an appropriate database management system is needed to extend the capability of the knowledge base, and for this purpose we used the advanced open source PostgreSQL tools. As a result of our research we have created tools enabling the compatibility of SWI-PROLOG and the PostgreSQL DBMS within the integrated development environment.
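
One simple way to bridge a relational store and a Prolog knowledge base, in the spirit of the integration described, is to export PostgreSQL rows as Prolog facts that SWI-PROLOG can consult; the table, columns and credentials below are hypothetical, and the authors' actual integration mechanism may differ.

    import psycopg2

    # Export rows stored in PostgreSQL as Prolog facts for the SWI-PROLOG
    # inference engine; table, columns and credentials are illustrative only.
    connection = psycopg2.connect(dbname="expertdb", user="expert", password="secret")
    cursor = connection.cursor()
    cursor.execute("SELECT name, category, severity FROM symptoms")

    with open("symptoms.pl", "w") as facts:
        for name, category, severity in cursor.fetchall():
            facts.write(f"symptom('{name}', '{category}', {severity}).\n")

    cursor.close()
    connection.close()
    # In SWI-PROLOG:  ?- consult('symptoms.pl'), symptom(Name, Category, Severity).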

Rustam A. Burnashev, Albert V. Gubajdullin, Arslan I. Enikeev
A BPMN Extension for the Annotation of HACCP Plans in Hospital Protocols

This paper introduces a Business Process Model and Notation (BPMN) extension designed to enrich the flowcharts included in the HACCP (Hazard Analysis and Critical Control Points) plans of hospital protocols. In previous work by the authors, a number of issues were identified regarding the application of BPMN to actual hospital HACCP plans, and those drawbacks guided the extension presented here. In this line, the main aim of this work is to transfer certain pieces of information, usually expressed only in natural language in attached documents, into graphic-based models that include machine-understandable information. The provided extension makes it possible to adopt advanced analysis mechanisms for the traceability of HACCP systems. A real-world example taken from an actual hospital HACCP deployment is used to show the benefits of the proposal.

Mateo Ramos Merino, Luis M. Álvarez Sabucedo, Juan M. Santos Gago, Víctor M. Alonso Rorís
An Evaluation of Data Model for NoSQL Document-Based Databases

NoSQL databases offer flexibility in the data model. Document-based databases may have data models built with embedded documents, and others built with referenced documents; the challenge lies in choosing the structure of the data. This paper proposes a study to analyze whether different data models can have an impact on the performance of database queries. To this end, we created three data models: embedded, referenced, and hybrid. We ran experiments on each data model in a MongoDB cluster, comparing the response times of three different queries in each model. The results showed a disparity in performance between the data models. We also evaluated the use of indexes in each data model; the results showed that, depending on the type of query and the field searched, some types of indexes presented higher performance than others. Additionally, we carried out an analysis of the space occupied on the storage disk, which shows that the choice of model also affects the disk space used to store data and indexes.
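
The difference between the embedded and referenced models can be illustrated with a small pymongo sketch; the collection and field names are hypothetical, not the schemas evaluated in the paper.

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["example"]   # illustrative database, not the paper's dataset

    # Embedded model: comments live inside the post document.
    db.posts_embedded.insert_one({
        "title": "NoSQL data models",
        "comments": [{"author": "ana", "text": "nice"}, {"author": "joao", "text": "+1"}],
    })

    # Referenced model: comments are separate documents pointing at the post.
    post_id = db.posts_ref.insert_one({"title": "NoSQL data models"}).inserted_id
    db.comments.insert_many([
        {"post_id": post_id, "author": "ana", "text": "nice"},
        {"post_id": post_id, "author": "joao", "text": "+1"},
    ])

    # One query suffices in the embedded model ...
    embedded = db.posts_embedded.find_one({"title": "NoSQL data models"})
    # ... while the referenced model needs a second query (or a $lookup stage).
    referenced = list(db.comments.find({"post_id": post_id}))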

Debora G. Reis, Fabio S. Gasparoni, Maristela Holanda, Marcio Victorino, Marcelo Ladeira, Edward O. Ribeiro
Information Sharing as a Coordination Tool in Supply Chain Using Multi-agent System and Neural Networks

An accurate understanding of future demand in a supply chain is a crucial key to enhancing commercial competitiveness. Indeed, for any member of the supply chain system, a clear vision of future demand affects its planning, performance, and profit. However, supply chains usually suffer from issues of coordination among their members and from the uncertain character of customer demand. To address these two problems, this paper examines the combination of two concepts, neural networks and multi-agent systems, in order to model information sharing as a coordination mechanism in the supply chain and to implement a daily demand-prediction tool. The proposed approach achieved an MSE of 0.002 on the training set and 0.0086 on the test set, and is applied to a real dataset provided by a supermarket in Morocco.
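
A minimal sketch of the demand-prediction part, using a small multilayer perceptron on a sliding window of past daily demand; the synthetic series, window size and network configuration are illustrative, and the paper's combination with a multi-agent system is not reproduced here.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    # Synthetic daily-demand series standing in for the supermarket data.
    rng = np.random.default_rng(0)
    demand = 50 + 10 * np.sin(np.arange(400) * 2 * np.pi / 7) + rng.normal(0, 2, 400)

    # Sliding window: the previous 7 days predict the next day's demand.
    window = 7
    X = np.array([demand[i:i + window] for i in range(len(demand) - window)])
    y = demand[window:]
    X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)
    print("test MSE:", mean_squared_error(y_test, model.predict(X_test)))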

Halima Bousqaoui, Ilham Slimani, Said Achchab
CaVa: An Example of the Automatic Generation of Virtual Learning Spaces

In order to construct web Learning Spaces (LS), more than collecting and digitizing information is needed: a powerful data extraction and querying engine and a sophisticated web publishing mechanism are required. In this paper, a system to automatically construct such learning spaces from a digital repository is presented. The system takes XML files from repositories and populates an ontology (representing the knowledge base, the core of our system) to create the internal triple representation. A Domain Specific Language (CaVaDSL) is used to specify the learning spaces based on that ontology. The formal description, written in that DSL, is processed by the Cavagen engine to generate the final LS.

Ricardo G. Martini, Cristiana Araújo, Pedro Rangel Henriques, Maria João Varanda Pereira
Using Social Network Analysis to Identify User Preferences for Cultural Events

The subject of this paper is social network analysis and its possible application in the field of culture. The main goal is to investigate the potential of social network analysis for detecting user profiles and preferences regarding certain types of cultural events. Data was collected through a mobile application and a survey, and the results were analyzed using the Ucinet and Vosviewer tools. The results reveal three clusters of events, as well as the most influential users and their preferences towards certain types of cultural events. The obtained results can be used to define future marketing activities, such as the customization of offers according to the identified user preferences.

Stevan Milovanović, Zorica Bogdanović, Aleksandra Labus, Dušan Barać, Marijana Despotović-Zrakić
Validity Issues of Digital Trace Data for Platform as a Service: A Network Science Perspective

Data validity has become a prominent research area in the context of data-science-driven research in recent years. In this study, we consider application development on a cloud computing platform as a promising setting in which to examine digital trace data, i.e., the records of the development activity undertaken. Trace data are found data, not produced specifically for research; they are event-based and longitudinal, i.e., they occur over a period of time. These characteristics underlie many validity issues. We use two application development trace datasets to articulate validity issues along an iterative four-phase research cycle. We demonstrate that when working with digital trace data, data validity issues must be addressed; otherwise the research can yield misleading results.

Mehmet N. Aydin, Dzordana Kariniauskaite, N. Ziya Perdahci
The Role of Big Data in University Strategy and Performance Eco System: A Case Study

Faculty members are the beating heart of any university around the world. They are one of the pillars of any university strategy, which usually includes excellence in teaching, research activities, and both university and community services. Over their academic service lifetime, faculty members produce huge amounts of information and knowledge as a result of these activities. Unfortunately, for cultural, technical and management reasons, this information is rarely captured and maintained, and hence the opportunity to create value and knowledge is lost. On the other hand, a lot of effort is wasted in gathering and aggregating such information every time it is needed by management, either to measure the progress and achievements of the university or when a faculty member applies for promotion. This paper describes the architecture of a knowledge management system that has been implemented to help the university harness knowledge from the big data generated by faculty members but scattered around the university. The system, which was successfully piloted in the College of IT, enables faculty members to set their objectives, track their academic performance throughout their academic lifetime, and be guided towards promotion through the promotion dashboard. The system, which was built on the promotion bylaws, follows a point-system concept, making it easy to measure the performance of a faculty member at any time and in real time. Upon full implementation, the system will enable the university's top management to monitor the university's performance at different levels (personal, department, college and management), thus moving them, in a systematic and structured way, towards the common goals of the university.

Ali AlSoufi
A P2P Algorithm for a Smart Information System

Latest-generation applications, such as those based on the Internet of Things (IoT) paradigm, often generate huge amounts of data that are also characterized by high variety and velocity. Big data analytics increasingly drives decision making in several fields, and new approaches and methodologies are therefore needed to improve management operations. Mobile, social and ubiquitous services, comprising data sources and middleware, can be considered dynamic content, and Content Delivery Networks (CDNs) are an effective solution for managing such content. However, CDNs show limits in dynamic and large systems in which a large amount of data is managed. To deal with these weaknesses while exploiting their benefits, new algorithms and approaches have to be designed and employed. This paper introduces RadixOne, a distributed and self-organizing algorithm for building an information system in pervasive and dynamic environments. Thanks to autonomous, local operations performed by the hosts of a distributed system, a logically organized overlay network emerges, making resource discovery operations faster and more efficient. Preliminary experimental results show the effectiveness of our approach.

Agostino Forestiero, Giuseppe Papuzzo
Proposal of a BI/SSBI System for Knowledge Management of the Traffic of a Network Infrastructure – A University of Trás-os-Montes e Alto Douro Case Study

The volume of data in organizations has grown at an ever-increasing rate, and part of it is associated with the operation of the network infrastructure used to support systems and applications. Given the importance of this infrastructure for organizations and the large amount of data its operation originates, it is fundamental to manage and monitor it so that it can perform well. This concern also applies to higher education institutions, where the research team considered it important to develop a BI/SSBI system for the Informatics and Communications Services of the institution where it operates (UTAD). Such a system allows the whole data volume to be managed and transformed into information and knowledge, which are fundamental resources to support decision-making processes. The purpose of this article is to demonstrate the usefulness of a BI/SSBI system in the described context; therefore a system of this type is presented, along with the adopted technologies, the tests performed and the results obtained.

José Bessa, Frederico Branco, António Rio Costa, Ramiro Gonçalves, Fernando Moreira
Application of High Performance Computing Techniques to the Semantic Data Transformation

The growth of the Life Science Semantic Web is illustrated by the increasing number of resources available in the Linked Open Data Cloud. Our SWIT tool supports the generation of semantic repositories, and it has been successfully applied in the field of orthology resources, helping to achieve objectives of the Quest for Orthologs consortium. However, our experience with SWIT reveals that, although the computational complexity of the algorithm is linear in the size of the dataset, the time required for the generation of the datasets is longer than desired. The goal of this work is the application of High Performance Computing techniques to speed up the generation of semantic datasets using SWIT. For this purpose, the SWIT kernel was reimplemented and its algorithm was adapted to facilitate the application of parallelization techniques, which were then designed and implemented. An experimental analysis of the speed-up of the transformation process has been performed using the orthologs database InParanoid, which provides many files of orthology relations between pairs of species. The results show that we have been able to obtain accelerations of up to 7000x. The performance of SWIT has been greatly improved, which will certainly increase its usefulness for creating large semantic datasets, and the results show that HPC techniques should play an important role in increasing the performance of semantic tools.
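
A simplified sketch of one common parallelization strategy for this kind of workload: transforming the many per-species-pair input files concurrently with a process pool. The function body stands in for the SWIT kernel, and the file layout is assumed rather than taken from the paper.

    from multiprocessing import Pool
    from pathlib import Path

    def transform_file(path: Path) -> int:
        """Stand-in for the reimplemented SWIT kernel: transforms one input file."""
        records = 0
        with path.open() as source:
            for record in source:
                # ... map one orthology record to RDF triples here (omitted) ...
                records += 1
        return records

    if __name__ == "__main__":
        files = sorted(Path("inparanoid").glob("*.txt"))   # one file per species pair
        with Pool(processes=8) as pool:
            counts = pool.map(transform_file, files)       # transform files in parallel
        print("records processed:", sum(counts))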

José Antonio Bernabé-Díaz, María del Carmen Legaz-García, José M. García, Jesualdo Tomás Fernández-Breis
A Semantic MatchMaking Framework for Volunteering MarketPlaces

Volunteering is an omnipresent cornerstone of our society. Currently, new forms of volunteering, such as crowd workers, engagement hoppers and patchwork volunteers, are arising. These next-generation volunteers demand, more than ever, volunteering marketplaces that provide adequate MatchMaking capabilities. This paper proposes a semantic MatchMaking framework that computes a ranked list of tasks or volunteers whose profiles match "as closely as possible". For this, an ontology-based vocabulary is established which explicitly captures the multifaceted nature of profiles for both tasks and volunteers. Each of these facets is associated with adequate similarity measures and meta-information explicitly capturing domain expertise. The feasibility of the approach is demonstrated by a simple example and a first prototype.

Johannes Schönböck, J. Altmann, E. Kapsammer, E. Kimmerstorfer, B. Pröll, W. Retschitzegger, W. Schwinger
A Candidate Generation Algorithm for Named Entities Disambiguation Using DBpedia

Word Sense Disambiguation (WSD) is the process of assigning one sense to an ambiguous word in a given context; ambiguity refers to the fact that a word can have different meanings. One form of lexical ambiguity is polysemy (Apple may refer to the company or to the fruit). State-of-the-art approaches generally extract named entities (NE), generate candidate entities from a Knowledge Base (KB), and apply a comparison method to select the correct one. As a complement to the majority of those approaches, which do not use the NE categories, we propose a disambiguation algorithm that uses those categories to reduce the number of candidates. Categories include, for instance, person, location and organization, and we show that considering them considerably reduces the number of resulting candidates. In this paper, we focus on the step of generating the candidate entities from a KB and propose an algorithm that uses DBpedia to link NE categories to the values of the rdf:type property. The obtained results are very promising.
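
A sketch of the candidate generation step, querying the public DBpedia SPARQL endpoint and restricting candidates by rdf:type according to the NE category; the category-to-type mapping and the label-matching filter are illustrative assumptions, not the authors' exact algorithm.

    from SPARQLWrapper import SPARQLWrapper, JSON

    # Illustrative mapping from NE categories to DBpedia ontology types.
    CATEGORY_TO_TYPE = {"person": "dbo:Person",
                        "location": "dbo:Place",
                        "organization": "dbo:Organisation"}

    def candidates(mention: str, category: str, limit: int = 20):
        """Return DBpedia resources whose label contains the mention and whose
        rdf:type matches the named-entity category, shrinking the candidate set."""
        sparql = SPARQLWrapper("https://dbpedia.org/sparql")
        sparql.setReturnFormat(JSON)
        sparql.setQuery(f"""
            PREFIX dbo:  <http://dbpedia.org/ontology/>
            PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
            PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
            SELECT DISTINCT ?entity WHERE {{
                ?entity rdf:type {CATEGORY_TO_TYPE[category]} ;
                        rdfs:label ?label .
                FILTER (lang(?label) = "en" &&
                        CONTAINS(LCASE(STR(?label)), LCASE("{mention}")))
            }} LIMIT {limit}
        """)
        rows = sparql.query().convert()["results"]["bindings"]
        return [row["entity"]["value"] for row in rows]

    print(candidates("apple", "organization"))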

Wissem Bouarroudj, Zizette Boufaida
Higher Education Analytics: New Trends in Program Assessments

End-of-course evaluation technologies can provide critical analytics that can be used to improve the academic outcomes of almost any university. This paper presents key findings from a study conducted on more than twenty different academic degree programs regarding their use of end-of-course evaluation technology. Data was collected from an online survey instrument, in-depth interviews with academic administrators, and two case studies, one in the US and another in the UAE. The study reveals new trends including sectioning and categorization; question standardization and benchmarking; alignment with key performance indicators and key learning outcomes; and grouping by course, program outcome, program, college, etc. In addition to those vertical structures, higher education institutions are also examining specific questions across them.

Adam Marks, Maytha AL-Ali
Visualization on Decision Support Systems Models: Literature Overview

Information visualization (InfoVis) is defined as "the use of interactive, visual representations supported by computer to increase cognition". InfoVis tools and methods help us accelerate our understanding and action in a world of ever-increasing data volumes, and data visualization improves understanding, particularly with multidimensional data sets. Visual analysis methods allow decision makers to combine their human flexibility, creativity and background knowledge with the enormous storage and processing capabilities of today's computers when dealing with complex problems. Using advanced visual interfaces, humans can interact directly with the data analysis. In this article, we review the literature on the subject of "Visualization on Decision Support Systems Models", with a main focus on the architectures used and the approaches to visualizing the information extracted from existing data, with particular emphasis on the last few years.

Carlos Manuel Oliveira Alves, Manuel Pérez Cota
An Efficient Real-Time Monitoring to Manage Home-Based Oxygen Therapy

IoT-based systems are significantly different from other systems mainly due to the multitude of parameters they can monitor and/or control. In medical home care, these systems can make the whole service supply chain more efficient and safer. We discuss the case study of home-care oxygen therapy, where continuous monitoring systems allow real-time access to critical data of the clinical process and logistics, resulting in new levels of knowledge about the oxygen delivery process: actual oxygen consumption, system maintenance needs, and deviations from the Therapeutic Plan defined by the physician. This paper presents the architecture developed to manage the oxygen therapy service efficiently, focusing on the oxygen cylinder. The cylinder is able to communicate differently with the different actors, depending on its status and position, and conveys the information to the person or company that needs accurate and timely information to ensure the success of the entire medical process.

Vincenza Carchiolo, Lucio Compagno, Michele Malgeri, Natalia Trapani, Maria Laura Previti, Mark Philip Loria, Marco Toja
Video Analytics on a Mixed Network of Robust Cameras with Processing Capabilities

Public safety is, to a greater or lesser extent, a significant concern in most modern cities. In many of these cities, video surveillance is employed to prevent and deter crime, often building systems with hundreds of cameras and sensors. Such systems have proved effective in crime fighting and prevention, but they have high bandwidth requirements in order to provide real-time monitoring. In countryside areas, where only low-speed connections are available, the installation of such systems is not feasible and a new approach is required. In this context, this project proposes a platform, based on open source libraries, for video analysis techniques such as motion detection, object tracking and object classification on a low-bandwidth network. The proposed architecture is open and scalable as a result of performing the image processing on the camera. The platform runs on robust distributed smart cameras that are ready to be installed in harsh locations, and the system is prepared to deal with power failures in order to increase reliability. This work describes the platform design, the smartCAM layout and components, the algorithms currently used for object tracking and classification, and the results regarding the efficiency of the solution.
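
As an example of the kind of on-camera processing involved, the sketch below uses OpenCV background subtraction for motion detection; the thresholds and video source are illustrative, and the project's actual pipeline (tracking and classification stages) is not reproduced.

    import cv2

    # On-camera motion detection with background subtraction; only frames with
    # significant motion would be analysed or transmitted, keeping bandwidth low.
    capture = cv2.VideoCapture(0)            # or an RTSP stream on the smartCAM
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        motion_pixels = cv2.countNonZero(mask)
        if motion_pixels > 0.01 * mask.size:  # more than 1% of the image changed
            # hand the frame to the tracking / classification stage here
            print("motion detected:", motion_pixels, "pixels")

    capture.release()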

Juan Pablo D‘Amato, Alejandro Perez, Leonardo Dominguez, Aldo Rubiales, Rosana Barbuzza, Franco Stramana
Increasing Authorship Identification Through Emotional Analysis

Writing style is the manner in which an author expresses his or her thoughts, influenced by the language characteristics of an individual, period, school, or nation, and it can often identify the author. One of the most famous examples comes from 1914 in Portuguese literature, with Fernando Pessoa and his heteronyms Alberto Caeiro, Álvaro de Campos and Ricardo Reis, who had completely different writing styles and led people to believe they were different individuals. The discussion about authorship identification has thus existed for over a century. Currently, there are several approaches to identifying the authors of texts; however, these solutions do not consider the emotion contained in the text as a source of information about writing style. This paper describes a process to analyse the emotion contained in social media messages, such as Facebook (http://www.facebook.com) posts, in order to identify the author's emotional profile and use it to improve the ability to predict the authors of the messages. Using preprocessing techniques, lexicon-based approaches and machine learning, we achieved an authorship identification improvement of around 5% on the whole dataset and of more than 50% for specific authors when the emotional profile is considered in the writing style.

Ricardo Martins, José Almeida, Pedro Henriques, Paulo Novais
Modeling an Information Visualization Engine for Situational-Awareness in Health Insurance

Nowadays, private health insurance is experiencing a substantial transformation. The number of chronic patients has increased greatly because of unhealthy behaviors, and chronic medical conditions are responsible for two-thirds of the current increase in healthcare costs. Health insurance players face new challenges regarding how to prevent disease burdens and how to act preventively by promoting healthy behaviors that improve quality of life and wellbeing. Promoting a collaborative dataflow that keeps all actors aware of events requiring their immediate attention is therefore key to enabling the active participation of healthcare professionals in monitoring patients' health behaviors, while the implementation of self-awareness mechanisms addresses the responsibility of patients for their own actions. Following a user-centric approach and service design thinking principles, this paper presents an information visualization model for situational awareness that streamlines the visual representation of events in a collaborative decision-making environment, as a way to proactively monitor unhealthy risk behaviors and risk situations within the health insurance domain. The work presents the outcomes of a prototype covering two operational scenarios, demonstrating how the proposed model can be applied to health insurance.

Flávio Epifânio, Gabriel Pestana
Learning Ecosystem Metamodel Quality Assurance

The learning ecosystem metamodel is a framework to support the Model-Driven Development of learning ecosystems based on Open Source software. The metamodel must be validated in order to provide a robust solution for the development of this type of technological solution. The first phase of the validation process was done manually, but to ensure the quality of the metamodel, the last phase should be carried out using a tool. The first version of the metamodel is an instance of MOF, the standard defined by the Object Management Group. Since there are no stable tools that support the definition and mapping of metamodels and models using this standard, it is necessary to transform the metamodel from MOF to Ecore in order to use the tools provided by Eclipse. This work describes the transformation process and the measures taken to ensure the quality of the learning ecosystem metamodel in Ecore.

Alicia García-Holgado, Francisco J. García-Peñalvo
Statistical Approach to Noisy-Parallel and Comparable Corpora Filtering for the Extraction of Bi-lingual Equivalent Data at Sentence-Level

Text alignment and text quality are critical to the accuracy of Machine Translation (MT) systems and other text processing tasks requiring bilingual data. In this study, we propose a language-independent bi-sentence filtering approach based on Polish-to-English translation. The approach was developed using a noisy TED Talks corpus and tested on a Wikipedia-based comparable corpus; however, it can be extended to any text domain or language pair. The proposed method uses various statistical measures for sentence comparison and can also be used for in-domain data adaptation tasks. Minimization of data loss was ensured by parameter adaptation. An improvement in the MT system score obtained with text processed by our tool is discussed, and in-domain data adaptation results are presented. We also discuss measures to improve performance, such as bootstrapping and comparison model pruning. The results show a significant improvement in filtering in terms of MT quality.
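
A toy sketch of statistical bi-sentence filtering: a candidate Polish-English pair is kept only if simple measures (length ratio and lexicon coverage) pass thresholds. The lexicon, measures and thresholds are illustrative; the paper uses richer statistical measures tuned by parameter adaptation.

    import re

    # Toy bilingual lexicon; a real system would use statistically trained
    # translation probabilities as one of several measures.
    LEXICON = {"konferencja": "conference", "neapolu": "naples", "odbyła": "held"}

    def length_ratio(src: str, tgt: str) -> float:
        return min(len(src), len(tgt)) / max(len(src), len(tgt))

    def lexical_coverage(src: str, tgt: str) -> float:
        tokens = re.findall(r"\w+", src.lower())
        hits = sum(1 for w in tokens if w in LEXICON and LEXICON[w] in tgt.lower())
        return hits / len(tokens) if tokens else 0.0

    def keep(src: str, tgt: str) -> bool:
        # Thresholds are illustrative; the paper tunes them by parameter adaptation.
        return length_ratio(src, tgt) > 0.5 and lexical_coverage(src, tgt) >= 0.3

    pairs = [("Konferencja odbyła się w Neapolu.", "The conference was held in Naples."),
             ("Losowy szum bez znaczenia.", "The quick brown fox jumps over the dog.")]
    print([keep(source, target) for source, target in pairs])   # [True, False]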

Krzysztof Wołk, Emilia Zawadzka, Agnieszka Wołk
Empirical-Evolution of Frameworks Supporting Co-simulation Tool-Chain Development

Co-simulation has been proposed as a method for facilitating the integrated simulation of multi-domain models of Cyber-Physical Systems (CPS). To ensure that co-simulations are well managed, concerns beyond the technical mechanisms for co-simulation also need to be addressed during tool-chain development. In this paper, an evolution of two frameworks supporting co-simulation tool-chain development is first introduced. Drawing upon the empirical findings from an initial framework, SPIT, developed using model-driven techniques, we develop a service-oriented framework, SPIRIT, based on model-driven and tool-integration techniques. Moreover, we propose a 3D-viewpoint-based method to formalize concept models of co-simulation tool-chains. In order to evaluate the evolution, we use visualizations of the related concept models to compare tool-chains developed with the two frameworks.

Jinzhi Lu, Didem Gürdür, De-Jiu Chen, Jian Wang, Martin Törngren
Trust and Reputation Modelling for Tourism Recommendations Supported by Crowdsourcing

Tourism crowdsourcing platforms have a profound influence on tourist behaviour, particularly in terms of travel planning. Not only do they hold the opinions shared by other tourists concerning tourism resources but, with the help of recommendation engines, they are also the pillar of personalised resource recommendation. However, since prospective tourists are unaware of the trustworthiness or reputation of crowd publishers, they are in fact taking a leap of faith when they rely on the wisdom of the crowd. In this paper, we argue that modelling publisher Trust & Reputation improves the quality of tourism recommendations supported by crowdsourced information. Therefore, we present a tourism recommendation system which integrates: (i) user profiling using multi-criteria ratings; (ii) k-Nearest Neighbours (k-NN) prediction of user ratings; (iii) Trust & Reputation modelling; and (iv) incremental model updating, i.e., providing near real-time recommendations. In terms of contributions, this paper provides two different Trust & Reputation approaches: (i) general reputation, employing the pairwise trust values of all users; and (ii) neighbour-based reputation, employing the pairwise trust values of the common neighbours. The proposed method was evaluated using crowdsourced datasets from the Expedia and TripAdvisor platforms.
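
A small numerical sketch of the idea: pairwise trust derived from rating agreement, a general reputation obtained by aggregating trust over all users, and a reputation-weighted k-NN rating prediction. The rating matrix and formulas are illustrative, not the paper's exact definitions.

    import numpy as np

    # Toy user x item rating matrix (0 = unrated), standing in for the
    # crowdsourced multi-criteria ratings from Expedia / TripAdvisor.
    R = np.array([[5, 4, 0, 3],
                  [4, 4, 2, 0],
                  [1, 0, 5, 4],
                  [5, 5, 1, 3]], dtype=float)

    def pairwise_trust(u, v):
        """Trust of u in v: 1 minus the mean normalised rating gap on co-rated items."""
        both = (R[u] > 0) & (R[v] > 0)
        if not both.any():
            return 0.0
        return 1.0 - np.mean(np.abs(R[u, both] - R[v, both])) / 4.0   # ratings in 1..5

    n_users = R.shape[0]
    trust = np.array([[pairwise_trust(u, v) for v in range(n_users)] for u in range(n_users)])

    # General reputation of a publisher: average trust placed in it by all other users.
    reputation = (trust.sum(axis=0) - trust.diagonal()) / (n_users - 1)

    # Reputation-weighted k-NN prediction of user 1's rating for item 3.
    user, item, k = 1, 3, 2
    rated = [v for v in range(n_users) if v != user and R[v, item] > 0]
    neighbours = sorted(rated, key=lambda v: trust[user, v] * reputation[v], reverse=True)[:k]
    weights = np.array([trust[user, v] * reputation[v] for v in neighbours])
    print("predicted rating:", round(float(np.dot(weights, R[neighbours, item]) / weights.sum()), 2))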

Fátima Leal, Benedita Malheiro, Juan Carlos Burguillo
A Context-Awareness System to Promote Cooperation in Implanting High-Speed Electronic Communications Network

A high-quality digital infrastructure underpins virtually all sectors of a modern and innovative economy and is of strategic importance to social and territorial cohesion. Therefore, all citizens, as well as the private and public sectors, must have the opportunity to be part of this emerging digital economy. Optimizing the information flow triggered by announcements of construction works related to the deployment of high-speed electronic communications networks is a way to reduce costs and promote cooperation among the entities interested in that kind of information. This is particularly relevant to the public sector, increasing the efficiency of the services provided to citizens. The paper presents how such an approach was implemented in Portugal using the SIIA system, a geographical web-based platform specifically developed to collect data about high-speed electronic communications networks and to provide that information in an open data format. The paper also presents a business scenario concerning the efficient coordination of civil works, with simpler and more transparent permit-granting procedures. Using a BPMN diagram, a special focus is given to the process of construction works and infrastructure expansion, outlining the message flow between the different participants in the process.

Gabriel Pestana, Ana Isaias, Manuel Barros
FAMAP: A Framework for Developing m-Health Apps

Cutting-edge mobile technologies have allowed the expansion of m-health applications for both patients and doctors. However, the variety of technologies, platforms and general-purpose development frameworks makes developers and researchers spend a considerable amount of time developing m-health apps from scratch. This paper presents an ongoing research project on the creation of a framework, called FAMAP, for assisting developers and researchers in creating m-health apps; the framework is presented for the first time in the current article. Among others, it contains components for (1) collecting data, (2) visualizing data analytics, (3) automating the definition and management of questionnaires, (4) implementing agent-based decision support systems and (5) supporting multi-modal communication. To show the utility of the proposed framework, this article presents some well-known and in-progress m-health apps developed with it. The work is assessed by considering (a) the usage data showing the commitment of users in one of the apps, and (b) the downloads and store rankings of another of the apps.

Iván García-Magariño, Manuel Gonzalez Bedia, Guillermo Palacios-Navarro

Organizational Models and Information Systems

Frontmatter
PMBOK as a Reference Model for Academic Research Management

Nowadays, different institutions of higher education show a tendency towards quality, even more so after the publication of the new University Law 30220, which reformed education to promote formative research. Managing the research and formative quality processes requires management tools, models and methods to achieve the academic management goals. This paper presents the process of selecting a project management model to restructure the academic environment. PMBOK was selected as the proposed method, and this framework was adapted to manage the research process at two universities.

Sussy Bayona, Jose Bustamante, Nemias Saboya
Classifying Cloud Computing Challenges: A Taxonomy Proposal

The rapid growth of Cloud Computing technology has made it possible to conduct numerous studies and investments to support the business models of organizations. However, this adoption has been accompanied by several challenges which, although they have been studied, still lack a consensus on how they should be classified within the Information Systems field. This paper proposes a taxonomy to contribute to the solution of this problem, so that professionals and researchers can rely on a common basis to focus on the challenges with greater priority, eliminating ambiguities in their solution proposals.

Bastian Ferrer, Aldo Quelopana
Development of a Grooming Process for an Agile Software Team in the Automotive Domain

Given today's unpredictable technical evolution, the market demands increasing flexibility from companies to adapt to the pace of change in what customers want. The present research was developed in an automotive company where software teams are adopting Agile methodologies to cope with these challenges. The teams use the Scrum framework; however, a lack of efficient communication among team members results in poor performance of the product owner and the development team. In an attempt to solve this issue, and according to the needs shared by the teams, this paper proposes a grooming process for a Scrum team. It provides a step-wise approach to work breakdown, from customer requirements elicitation to the development of ready work entities using the user story format. This paper describes how agile methods can support requirements engineering in a software project.

Francisca Ribeiro, André L. Ferreira, Anabela Tereso, Deborah Perrotta
Characterization of an Evaluation Success Model of an IS Project, Focused on Stakeholders

Typically, the success of an Information Technology project is measured through time, budget and quality. However, over the years it has become clear that stakeholders also have something to say about success evaluation. Each stakeholder has an impact on the project and their own interests and expectations about its development. Therefore, what some consider a success, others may consider a failure, because each one has a different perspective on the same project. This short paper proposes a flexible success evaluation model for Information Technology projects, called the Success Breakdown Structure (SBS). In this model, the various stakeholders present in a project are identified, as well as their success assessment criteria at three different points in time: after the completion of the project, after a few months and, finally, after a few years. From the results obtained, we can conclude that the evaluation of a project's success is not the concern of project managers only, but is transversal to all stakeholders.

Luís Barros, Pedro Ribeiro
Development of an Interface for Managing and Monitoring Projects in an Automotive Company

In a competitive world, companies lose customers and opportunities due to lack of organisation, failure to meet deadlines, and bad planning, among other reasons. Project management is presented as a way to guarantee compliance with deadlines, cost reduction, sales increases, revenue growth and client satisfaction, among other benefits. This paper presents the development of interfaces for managing and monitoring projects at Bosch Car Multimedia in Braga, Portugal. With the increasing number of projects under the supervision of CM/MFT3, the section had growing difficulties in monitoring project status. In order to tackle this situation, tools for project status control and project status overview (cockpit chart) were proposed, so that project management practices could be improved.

Andreia Reis, Anabela Tereso, Cláudio Santos, Jorge Coelho
Evaluation of “Method” as IT Artifacts in Soft Design Science Research: Development of Community Based E-Museum Framework Towards Sustainable Cultural Heritage Information System

This paper reports work on the evaluation of IT artifacts in soft design science research. We believe that socio-technical concerns are a vital component in the development of a sustainable cultural heritage information system as a socio-technical system. Therefore, with these concerns in mind, the soft design science methodology was refined and IT artifacts for a community-based e-museum framework were constructed. The IT artifacts resulting from this research include constructs, methods, models and instantiations of the community-based e-museum framework components. However, the evaluation of the IT artifacts presented in this paper focuses on evaluation criteria concerning the completeness of the methods that represent the components of the community-based e-museum framework.

Suriyati Razali
Business Process Reengineering Using Enterprise Social Network

Organizations use business process reengineering methods to improve their products and services in terms of quality, speed and cost. In this study, a business process reengineering method was used to enhance the services of the Innovation and Support Center. The method specifies the process to be reengineered, proposes alternative automated processes generated from the Enterprise Social Network used by the Center, and simulates the newly proposed reengineered business process. The results show that reengineering the current business process using the Enterprise Social Network improves the Center's services by delivering faster service at lower cost.

Amjed Al-Thuhli, Mohammed Al-Badawi
Information Technology Determinants of Organizational Performance in the Context of a Cameroonian Electricity Company

Increased organizational dependence on information technology (IT) drives management attention towards improving information system quality. Given that IT quality is a multidimensional measure, it is important to determine which of its aspects are critical to an organization's performance, so that chief information officers (CIOs) may make more informed choices when selecting technologies for their organizations. This research investigates the relationships between system quality (SQ) and organizational performance, system quality and system use, user satisfaction, system use and organizational performance, and, finally, user satisfaction and organizational performance. A total of 140 responses were collected through a questionnaire-based survey at an electricity company, and the data were analyzed using the structural equation modelling partial least squares (SEM-PLS) method. Our results show that system quality significantly influences user satisfaction, which in turn influences organizational performance. Thus, this paper highlights the importance of user satisfaction in organizational performance.

Francis Dany Balie Djong, Jean Robert Kala Kamdjoug, Samuel Fosso Wamba
Evaluation Metrics of the Development of Municipal e-Government

Determining the development index of local governments' e-services yields results that account for the evolution in the use and development of governmental policies. To do this, web portals are used as the observable means to measure advances and setbacks comparatively between e-governments. In the present work, the metrics used to evaluate the presence of e-services in municipal governments are analysed through a literature review. It was determined that there is no uniform or complete set of evaluation metrics for web portals, since each model or author proposes a different methodology to determine the level of development of local e-government. Therefore, it is necessary to unify the models into an integrated model, at least when they refer to the same level of e-government.

Vicente Morales, Sussy Bayona
The Importance of the Dimensions of a Smart City – A Survey in Small Cities

The pursuit of territorial rebalancing through investment in new development models for cities is adapting them to new challenges (from demographic to environmental), with a duty to turn them into smart cities without taking away their identity. However, the definition of smart cities and the measurement of their levels of sustainability, quality of life and wellbeing are still low-consensus topics, as are the dimensions and sub-dimensions which should carry most weight when classifying a city as a "smart city". In this research work, by means of a survey, we assess, as far as possible, which dimensions (Governance, Innovation, Sustainability, Inclusion and Connectivity) are seen as the most important for a city to be considered a smart city. The results are discussed in the light of the literature, and future work is identified with the aim of shedding some light on such an emerging, promising and current field as that of "smart cities".

Isabel Maria Lopes, Pedro Oliveira
The Rise of the Unicorn: Shedding Light on the Creation of Technological Enterprises with Exponential Valuations

The ambition to create start-ups with exponential valuations is the goal of many entrepreneurs. How may one create the so-called unicorns (start-ups with a valuation in excess of one billion US dollars)? Our study involved interviews and a focus group and a theoretical model was created. Business plans are surprisingly seen to be less important than is normally perceived, as is a higher education degree. The research also points to variables such as one’s DNA, experience, implementation capacity, intrinsic motivation, vision and timing, and the ability to seek feedback and learn from mistakes and not just from success.

Manuel Au-Yong-Oliveira, João Pedro Costa, Ramiro Gonçalves, Frederico Branco
Mobile Sharing Platform Operation Model and System Dynamic Analysis: Uber and Taiwan Taxi as Examples

It is inevitable that the emergence of the sharing economy will impact traditional industries. In Taiwan, the Taiwan Taxi Company might have been expected to be significantly affected, yet this has not necessarily been the case, and the trend has now even reversed. This study found that traditional taxi industry customers will not totally switch to the new platform. The study deconstructed the two-sided relationships of the Uber and Taiwan Taxi platform services and explores the following questions: (a) What factors lead the service demand side (passengers) to choose Uber or Taiwan Taxi? (b) What factors lead service providers (drivers) to choose Uber or Taiwan Taxi, and what is the difference between the services the two platforms offer to drivers? (c) What are the reasons for the expansion of both Uber and Taiwan Taxi in 2013–2017, and can the new Uber business model survive in the Taiwan taxi market?

Ting-Kai Hwang, Bih-Huang Jin, Yung-Ming Li, Shin-Jui Lee
Improving Work Allocation Practices in Business Processes Supported by BPMS

BPMS (Business Process Management Systems) are responsible for the execution of business process models, delivering work activities to suitable agents (human or automated) that execute them. At design time, modelers have to specify the potential performers of a work activity according to their organizational position or role. Since several workers may share the same role, at run time any of them can be assigned by the BPMS to execute a work activity. However, distinct people have different personality traits and, for a specific piece of work (for instance, one requiring special teamwork skills), some of them may perform better than others. Addressing a gap in the theory and practice of BPMS, in this paper we present a new approach that enables a BPMS to assign, at run time, the most suitable workers to specific work activities, grounded on the concept of the psychological profile and taking into account technical, human and social aspects.

Robbie Uahi, José Luís Pereira, João Varajão
Integrated Framework for the Civil Construction Projects Management by Mean PMBOK, ISO 21500 and ITIL V3

Today, countless benchmarks and definitions of best practice have emerged in Information Technology and various business areas. Civil construction is a service that revolves around the hiring and provision of other services: a civil construction work is a construction service, and it becomes a construction project when a start, an end and a deliverable are defined. This article presents an integrated model proposal for managing the projects and services of companies providing civil construction services. The model was generated by mapping the service management lifecycle described in ITIL v3 onto the process groups described in PMBOK and ISO 21500. The results of applying this proposal to the management of a real project are presented.

Eduardo Isacás-Ojeda, Monserrate Intriago-Pazmiño, Hernán Ordoñez-Calero, Elizabeth Salazar-Jácome, Wilson Sánchez-Ocaña
Cognitive Determinants of IT Professional Belief and Attitude Towards Green IT

This study builds on the behavioural perspective of Green IT to investigate the cognitive determinants of IT professional Green IT beliefs and the implications of such beliefs on their attitude towards Green IT practices. We posited that an individual’s belief about Green IT could be informed by one’s knowledge and awareness of the adverse effect of non-sustainable practices on the environment. Also, we hypothesized beliefs as the determinant of one’s attitude towards Green IT practices. The outcome of the empirical investigation on a sample of data collected from IT professionals in Malaysia provided support for our hypotheses. Furthermore, the implications of the ensuing findings for the existing literature and the successful management of Green IT practices were discussed.

Adedapo Oluwaseyi Ojo, Murali Raman, Rameswaran Vijayakumar
Corporate Digital Learning – Proposal of Learning Analytics Model

This paper presents research on the effectiveness of digital learning in organisations and how it can be measured. The aim of this research is to bridge the gap between scientific and practice-oriented research on digital learning. The focus is on learning analytics, with the goal of identifying the potential of digital learning for organisations. This research is based on a systematic literature review to answer the following research questions: In which contexts can digital learning take place? And what are the main metrics that allow the efficiency of digital learning in organisational contexts to be measured? The results show that digital learning analytics can enhance the impact of the learning process in organisations, and that learning contexts supported by mobile technologies, such as tablet and smartphone applications, are becoming more and more popular among employees. Additionally, the results helped to create a model of learning analytics with four dimensions: Participants, Learning Contexts, Learning Processes, and Learning Facilitators.

Maria José Sousa, Álvaro Rocha
Freemium Project Management Tools: Asana, Freedcamp and Ace Project

Companies have to manage a large number of projects, human resources and supplies. To be competitive, companies have to reduce their costs, accelerate product development, and focus on satisfying their customers. Project management knowledge focuses on core skills in the areas of budgeting, scheduling, and resource allocation. Project management tools are critical for assisting in this process, as they provide planning, scheduling, resource allocation, communication and documentation features. In this paper, we study the following popular project management tools: Asana, Freedcamp and Ace Project. This analysis may help project managers choose an appropriate tool according to the purpose of the company.

Tânia Ferreira, Juncal Gutiérrez-Artacho, Jorge Bernardino
Agile Analytics: Applying in the Development of Data Warehouse for Business Intelligence System in Higher Education

Most Higher Learning Institutions have implemented information management systems for all their core activities, ranging from admission and registration to alumni, graduate and academic operations. The data generated by these integrated information systems are transactional in nature and have been increasing exponentially. However, the use and value of these data have not been fully explored for driving decision-making processes. This paper explores the importance of Agile Analytics in Business Intelligence and Data Warehouse development among Higher Learning Institutions. The paper concludes by outlining future directions for the development and implementation of an institutional project on data analytics.

Reynaldo Joshua Salaki, Kalai Anand Ratnam
Satisfaction with e-participation: A Model from the Citizen’s Perspective, Expectations, and Affective Ties to the Place

The diffusion and adoption of e-participation contributes to better democracy and more participative societies. Nevertheless, despite the potential benefits of e-participation, the level of citizen satisfaction regarding the use of e-participation and its effects on the continued intention to use have not been widely assessed yet in the literature. This article proposes a conceptual model that integrates the DeLone & McLean success model, that assesses the citizen satisfaction regarding the perception of the e-participation system quality; the expectation-confirmation model for the continued intention to use, which evaluates satisfaction based on the confirmation of ex-post experience on e-participation use and the perceived usefulness; and the dimensions of sense of place, which play a moderator role between the citizen satisfaction and the e-participation use.

Mijail Naranjo Zolotov, Tiago Oliveira, Frederico Cruz-Jesus, José Martins
Coalition-OrBAC: An Agent-Based Access Control Model for Dynamic Coalitions

In various collaborative environments, autonomous domains form coalitions to achieve shared goals. In most cases, these coalitions are dynamic in nature, as domains leave and new ones join the coalition. Normally, the coalition members have internal access control policies in place. Secure sharing of data requires that members can exercise fine-grained access control over the shared resources, governed by their own security policies. This paper presents an agent-based access control model for dynamic coalitions, which layers coalition management on top of an OrBAC model, and describes our proposal for implementing access control to manage the sharing of resources between sector-agnostic monitoring multi-agent system platforms.
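
A minimal sketch of an OrBAC-style decision extended with a coalition check: a concrete request is granted only if it maps onto an abstract permission of the resource owner and both domains currently belong to the coalition. The entity names and the rule base are illustrative, not taken from the paper.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Permission:
        org: str        # coalition member that owns the resource
        role: str       # abstract role, e.g. "analyst"
        activity: str   # abstract activity, e.g. "read"
        view: str       # abstract view over objects, e.g. "incident-reports"
        context: str    # e.g. "coalition-active"

    POLICY = {Permission("orgA", "analyst", "read", "incident-reports", "coalition-active")}
    ROLE_OF = {("orgB", "alice"): "analyst"}          # Empower(org, subject, role)
    VIEW_OF = {"report-42": "incident-reports"}       # Use(org, object, view)
    ACTIVITY_OF = {"read": "read"}                    # Consider(org, action, activity)
    COALITION = {"orgA", "orgB"}                      # current coalition members

    def is_permitted(subject_org, subject, action, obj, owner_org, context):
        """Concrete access decision derived from the abstract policy, granted
        only while both domains belong to the coalition."""
        if {subject_org, owner_org} - COALITION:
            return False
        rule = Permission(owner_org, ROLE_OF.get((subject_org, subject)),
                          ACTIVITY_OF.get(action), VIEW_OF.get(obj), context)
        return rule in POLICY

    print(is_permitted("orgB", "alice", "read", "report-42", "orgA", "coalition-active"))  # True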

Iman Ben Abdelkrim, Amine Baina, Christophe Feltus, Jocelyn Aubert, Mostafa Bellafkih, Djamel Khadraoui
Health Data Analytics: A Proposal to Measure Hospitals Information Systems Maturity

In the last five decades, maturity models have been introduced as reference frameworks for Information System (IS) management in organizations within different industries. In the healthcare domain, maturity models have also been used to address a wide variety of challenges and the high demand for hospital IS (HIS) implementations. The increasing volume of data has exceeded the ability of health organizations to process it in order to improve clinical and financial efficiency and quality of care. It is believed that the careful and attentive use of Data Analytics (DA) in healthcare can transform data into knowledge that improves patient outcomes and operational efficiency. A maturity model, in this context, is a way of identifying strengths and weaknesses of HIS maturity and thus finding a path for improvement and evolution. This paper presents a proposal to measure the maturity of Hospital Information Systems with regard to DA. The outcome of this paper is a maturity model, which includes six stages of HIS growth and maturity progression.

João Vidal Carvalho, Álvaro Rocha, José Vasconcelos, António Abreu
Supply Chain Challenges with Complex Adaptive System Perspective

At first glance, a supply chain is a complex system, since a slight change in one activity may cause tremors everywhere. In fact, the system is a set of interconnected autonomous entities that make choices to survive, evolve, and self-organize over time. Within such a dynamic environment, several disciplines have adopted the Complex Adaptive System (CAS) perspective. Hence, the main purpose of this paper is to explore the supply chain as a CAS. In addition, drawing on complexity theory, the knowledge gained from this mapping can help the supply chain field move from a static and isolated view to a dynamic and connected one.

Abla Chaouni Benabdellah, Imane Bouhaddou, Asmaa Benghabrit
Understanding the Adoption of Business Analytics and Intelligence

Our work addresses the factors that influence the adoption of business analytics and intelligence (BAI) among firms. Grounded in some of the most prominent adoption models for technological innovations, we developed a conceptual model especially suited to BAI. Based on this, we propose an instrument from which relevant hypotheses will be derived and tested by means of statistical analysis. We hope that the findings derived from our analysis may offer important insights for practitioners and researchers regarding the drivers that lead to BAI adoption in firms. Although other studies have already focused on the adoption of technological innovations by firms, research on BAI is scarce, hence the relevance of our research.

Frederico Cruz-Jesus, Tiago Oliveira, Mijail Naranjo
The Planning Process Integration Influence on the Efficiency of Material Flow in Production Companies

In this article, the authors present the results of research on the influence of planning process integration on the efficiency of material flow in production companies. In order to identify this influence, the authors developed their own planning process integration model based on the Sales and Operations Planning (SOP) approach. Four planning process integration levels were qualitatively defined at the tactical level and modelled in a simulation environment. The levels were applied to plan material flow in a production company. Flow efficiency was measured by means of the following indicators: customer service level and return on sales. The simulation study results confirmed that planning process integration has a positive influence on the efficiency of material flow in production companies.
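
As a point of reference for the two indicators named above, the following is a minimal Java sketch using their commonly used definitions; the authors' exact operationalization in the simulation model may differ. Customer service level is taken here as the share of orders fulfilled completely and on time, and return on sales as operating profit over revenue.

    public class FlowEfficiencyIndicators {
        // Customer service level: orders fulfilled completely and on time / all orders
        static double customerServiceLevel(int ordersFulfilledOnTime, int totalOrders) {
            return totalOrders == 0 ? 0.0 : (double) ordersFulfilledOnTime / totalOrders;
        }

        // Return on sales: operating profit / net sales revenue
        static double returnOnSales(double operatingProfit, double revenue) {
            return revenue == 0.0 ? 0.0 : operatingProfit / revenue;
        }

        public static void main(String[] args) {
            System.out.printf("CSL = %.2f%%%n", 100 * customerServiceLevel(940, 1000));
            System.out.printf("ROS = %.2f%%%n", 100 * returnOnSales(120_000, 1_500_000));
        }
    }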

Michal Adamczak, Piotr Cyplik
Time Difference of Arrival Enhancement with Ray Tracing Simulation

A hybrid technique is proposed to improve Time Difference of Arrival (TDoA) localization systems in non-line-of-sight situations. A ray tracing simulation tool is used to extract the time-of-arrival difference characteristics of the environment and to combine them with a TDoA multilateration sensor scheme to estimate the position of an electromagnetic emitter. The idea is to improve TDoA sensor performance and overcome the multipath imprecision typical of urban environments, where different rays sum up at each sensor and increase the position error. The proposed technique uses time-domain Channel Impulse Response estimation and a ray tracing propagation tool to build a geographic time-difference fingerprint that is used to enhance performance. A measurement campaign was carried out to validate the technique and showed an improvement in localization precision.
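
As background for the multilateration step, the following is a minimal Java sketch of TDoA-based position estimation via a brute-force grid search over candidate emitter positions; it is not the authors' algorithm, which additionally exploits the ray-tracing fingerprint, and the sensor layout, search area, and grid step are hypothetical.

    public class TdoaGridSearch {
        static final double C = 299_792_458.0;  // propagation speed (m/s)

        static double dist(double x, double y, double[] s) {
            return Math.hypot(x - s[0], y - s[1]);
        }

        // sensors[i] = {x, y} in metres; tdoa[i] = measured arrival-time difference
        // t_i - t_0 in seconds, relative to reference sensor 0 (tdoa[0] is unused).
        static double[] locate(double[][] sensors, double[] tdoa) {
            double bestX = 0, bestY = 0, bestErr = Double.MAX_VALUE;
            for (double x = 0; x <= 1000; x += 5) {          // hypothetical 1 km x 1 km area
                for (double y = 0; y <= 1000; y += 5) {
                    double err = 0;
                    for (int i = 1; i < sensors.length; i++) {
                        double predicted =
                                (dist(x, y, sensors[i]) - dist(x, y, sensors[0])) / C;
                        err += Math.pow(predicted - tdoa[i], 2);   // squared residual
                    }
                    if (err < bestErr) { bestErr = err; bestX = x; bestY = y; }
                }
            }
            return new double[] {bestX, bestY};
        }
    }

In a non-line-of-sight urban scenario the measured time differences are biased by multipath, which is exactly the imprecision the paper's ray-tracing fingerprint is intended to compensate.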

Marcelo N. de Sousa, Eduardo F. S. Corrêa, Reiner S. Thomä
E-Mail Client Multiplatform for the Transfer of Information Using the SMTP Java Protocol Without Access to a Browser

At present, the use of email has increased significantly; it is one of the easiest and fastest forms of asynchronous communication, especially over long distances, and it is convenient and secure. This paper describes a client-server application, simpler than traditional browser-based clients, that sends emails to several addresses at the same time using SMTP, the standard email transfer protocol. The application was developed in the Java programming language, which makes the client multiplatform, using the NetBeans IDE and several libraries including javax. In the experiments, the application transmitted information with and without attachments, sending it to three different addresses at the same time and interacting reliably with different mail servers, while providing a user-friendly interface that allows effective communication.
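
For illustration, the following is a minimal sketch of sending one message to several recipients with the JavaMail (javax.mail) API mentioned in the abstract; the SMTP host, port, and addresses are hypothetical, and authentication and attachments are omitted.

    import java.util.Properties;
    import javax.mail.Message;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class SmtpSender {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.example.org");   // hypothetical SMTP server
            props.put("mail.smtp.port", "587");
            props.put("mail.smtp.starttls.enable", "true");

            Session session = Session.getInstance(props);
            Message msg = new MimeMessage(session);
            msg.setFrom(new InternetAddress("sender@example.org"));
            // one message addressed to three recipients at the same time
            msg.setRecipients(Message.RecipientType.TO,
                    InternetAddress.parse("a@example.org, b@example.org, c@example.org"));
            msg.setSubject("Multiplatform SMTP client test");
            msg.setText("Plain-text body sent directly over SMTP, without a browser.");
            Transport.send(msg);
        }
    }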

Liliana Enciso, Ruben Baez, Alvaro Maldonado, Elmer Zelaya-Policarpo, Pablo Alejandro Quezada-Sarmiento
CRUDi Framework Application – Bank Company Case Study

Top management in organizations tends not to reach consensus on the prioritization of investments in Information Systems, particularly when prioritizing according to their impact on overall performance and to budget constraints. This paper presents the results of applying the CRUDi Framework to a bank, which makes it possible to obtain new indicators to support the decision on, and alignment of, investment priorities in the processes that support the business strategy. The Framework introduces a new method and tools that allow us to gauge the relative importance of Information Systems to organizations' businesses.

Jorge Pereira, Frederico Branco, Manuel Au-Yong-Oliveira, Ramiro Gonçalves
GRAPHED: A Graph Description Diagram for Graph Databases

In recent years, graph database systems have become very popular and are deployed mainly in situations where the relationships between data are significant, such as in social networks. Although they do not require a particular schema design, a data model contributes to their consistency. Designing diagrams is one approach to satisfying this demand for a conceptual data model. While researchers and companies have been developing concepts and notations for graph database modeling, their notations focus on their specific implementations. In this paper, we propose GRAPHED (Graph Description Diagram for Graph Databases), a diagram that addresses this lack of a generic and comprehensive notation for graph database modeling. We verified the effectiveness and compatibility of GRAPHED in a case study on fraud identification in the Brazilian government.

Gustavo Van Erven, Waldeyr Silva, Rommel Carvalho, Maristela Holanda
Capabilities and Work Practices - A Case Study of the Practical Use and Utility

There exists a multitude of approaches, frameworks, and methods used for the analysis, design, and planning of strategic capability systems, military capabilities, and IS/IT systems. These approaches commonly dictate a single capability definition and practice that should be applied across an organisation or project. This paper examines the practical use and utility of the capability concept, with a special focus on differences between the work practices of people with a similar job to be done. The examination was carried out through a case study of a mega-scale programme. It was found that there exist varying common-sense meanings and overlaid practices of the idea of capability. When the concept of capability evolved through learning-by-doing, its usage was considered very valuable, as opposed to when a ready-made enterprise architecture framework was introduced. Furthermore, the analysis revealed that reported uses were many, varied significantly between work practices, and were sometimes incoherent, contradictory, and vague.

Anders W. Tell, Martin Henkel
Systematic Review of the Literature, Research on Blockchain Technology as Support to the Trust Model Proposed Applied to Smart Places

Smart places are vulnerable to corrupted or compromised data, to the false integration of new devices, and to devices with inconsistent firmware versions. These risks worsen with the increasing volume and diversity of data, devices, infrastructures, and users connected to the Web. The systematic review of the literature selected 190 documents, which reveal growing interest in blockchain technology, rising from 14 documents published in 2014 to about 100 already in 2017. The articles focused on the areas of bitcoin (about 40%), IoT (about 30%), finance (about 15%), cryptocurrencies, electronic government (about 12%), smart contracts, smart cities, and business (about 10% each), and health (about 5%). This perspective supports the study of a generic data model based on blockchain technology for smart places, especially when applied to smart cities and to the specific field of the mobility ecosystem, drawing on the new concepts of applying blockchain to IoT, smart contracts, and e-governance.

António Brandão, Henrique São Mamede, Ramiro Gonçalves
An Architecture for a Viable Information System

The present work is motivated by the problems that organizations face with their information systems. Modern information systems are monolithic, complex, and not ready for future challenges, yet they play a key role in the value delivery chain. Most of the time, IS problems are approached not in a systems-thinking way but in a reductionist way. In this work, the Viable System Model is used to address the problems described above; the goal is to apply the Viable System Model to information artifacts. The main outcome is an architecture for a Viable Information System, which achieves the following goals: understanding of complexity, resilience to change, survival in an external environment, and the ability to exist independently of that environment. Systems thinking is taken as the basis for the development of the ideas presented in this research.

Anton Selin, Vitor Santos
Smart Bengali Idiomatic Translator Service Using Morphological Marketing Technique

Bengali is one of the most widely spoken languages in the world, ranked seventh. Research on natural language translation is very important, especially for translating Bengali proverbs into English. Researchers have so far focused only on developing software algorithms for implementation purposes, not on their value. A morphological business plan is one of the important techniques that helps people obtain a service or use a product in the right place. Bengali-to-English idiomatic proverbs are not translated properly by Google, Prompt, Lingue, or BubleFish; Google translates only a little of the information correctly and does not analyse the demand for Bengali idiomatic (proverb) language. The project's main goals are to improve a Bengali idiomatic proverb translator, so that people can learn and gather idiomatic knowledge from an electronic dictionary for communication, and to show estimated profit forecasting for its value analysis. The proposed method designs a Business Model Canvas, a Bengali Idiomatic Payment Model, a morpheme-based implementation of Bengali proverbs, and a business plan for BTEIS describing its current market position. The proposed project used Visual Studio .NET, HTML, and CSS.

Amit Roy, Vitor Duarte dos Santos
Backmatter
Metadata
Title: Trends and Advances in Information Systems and Technologies
Edited by: Álvaro Rocha, Hojjat Adeli, Luís Paulo Reis, Sandra Costanzo
Copyright Year: 2018
Publisher: Springer International Publishing
Electronic ISBN: 978-3-319-77703-0
Print ISBN: 978-3-319-77702-3
DOI: https://doi.org/10.1007/978-3-319-77703-0
