
Cloud Computing

Further book chapters

Open Access

Chapter 3. Addressed Challenges

In this chapter, we discuss the diverse set of challenges, from different perspectives, that we face because of our aim to incorporate knowledge in software and processes tailored for software and systems evolution. Firstly, the discovery and externalization of knowledge about requirements, the recording and representation of design decisions, and the learning from past experiences in evolution form the human perspective, including developers, operators, and users. Secondly, performance and security induce the software quality perspective. Thirdly, round-trip engineering, testing, and co-evolution define the technical perspective. And fourthly, formal methods for evolutionary changes provide the foundation and define the formal perspective.

Reiner Jung, Lukas Märtin, Jan Ole Johanssen, Barbara Paech, Malte Lochau, Thomas Thüm, Kurt Schneider, Matthias Tichy, Mattias Ulbrich

Chapter 5. Performance Analysis and Performance Testing

This chapter covers topics that you need to know if you want to control the performance level in a large product automatically. You will learn different kinds of performance tests, performance anomalies that you can observe, and how to protect yourself from these anomalies. At the end of this chapter, you will find a description of performance-driven development (an approach for writing performance tests) and a general discussion about performance culture.

Andrey Akinshin

Chapter 8. Spring Cloud

In Chapter 7, you were introduced to Spring Boot. You explored several examples built with Spring Boot, so now you have the building blocks for an enterprise-level application, whether the application is medium scale or large scale. In this chapter, you will look into the next indispensable piece of Spring called Spring Cloud, which is built on top of Spring Boot. There is a set of common patterns in distributed microservices ecosystems that can help you integrate core services as a loosely coupled continuum, and Spring Cloud provides many powerful tools that enhance the behavior of Spring Boot applications to implement those patterns. You will learn only the major and critical blocks required to keep the discussion going in subsequent chapters, again with the help of concrete code samples. The samples in this chapter are built incrementally over the preceding samples, so please don't skip any section or you may not get the succeeding samples to run smoothly.

Binildas Christudas

Chapter 4. Microservices Architecture

After the initial three chapters, you have now built a solid knowledge base for distinguishing between the microservices style of software architecture and the architecture of a traditional monolith. You learned the technique of breaking down the monolith into multiple small, logically and physically separate groupings called microservices, thereby improving scale-out capability in a flexible manner. While in the traditional monolithic schema of architecture you have one single, big application to manage, the same application redesigned into a microservices architecture becomes more than one single deployment, and hence many more concerns, such as inter-microservice communication, pop up. You will explore the details of this new set of architectural concerns in this chapter. You will also explore a few relevant trends in the software paradigm that have compelled software architects to move away from traditional architectural styles.

Binildas Christudas

Reflections on Bernhard Steffen’s Physics of Software Tools

Many software tools have been developed to implement the concepts of formal methods, sometimes with great success, but also with an impressive tool mortality and an apparent dispersion of efforts. There has been little analysis so far of such tool development as a whole that would make it more coherent, efficient, and useful to society. Recently, however, Bernhard Steffen published a paper entitled "The Physics of Software Tools: SWOT Analysis and Vision" that precisely proposes such a global vision. We highlight the key ideas of this paper and review them in light of our own experience in designing and implementing the CADP toolbox for the specification and analysis of concurrent systems.

Hubert Garavel, Radu Mateescu

Efficient Hardware/Software Co-design for NTRU

The fast development of quantum computers represents a risk for secure communications. Current traditional public-key cryptography will not withstand attacks performed on quantum computers. In order to prepare for such a quantum threat, electronic systems must integrate efficient and secure post-quantum cryptography that is able to meet the different application requirements and to resist implementation attacks. The NTRU cryptosystem is one of the main candidates for practical implementations of post-quantum public-key cryptography. The standardized version of NTRU (IEEE-1363.1) provides security against a large range of attacks through a special padding scheme. So far, both hardware and software NTRU solutions have been proposed. However, the hardware solutions either do not include the padding scheme or use optimized architectures that degrade the security level. In addition, NTRU software implementations are flexible but usually offer lower performance than hardware solutions. In this work, for the first time, we present a hardware/software co-design approach compliant with the IEEE-1363.1 standard. Our solution takes advantage of the flexibility of the software NTRU implementation and of the speedup due to the hardware accelerator specially designed in this work. Furthermore, we provide a refined security reduction analysis of an optimized NTRU hardware implementation presented in a previous work.
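The core arithmetic that such accelerators speed up is multiplication in the truncated polynomial ring Z_q[x]/(x^N - 1), i.e., a cyclic convolution. The sketch below shows only that textbook operation with toy parameters; it is not the authors' co-design, and the values are not an IEEE-1363.1 parameter set.

```python
# Sketch: schoolbook multiplication in Z_q[x]/(x^N - 1), the core NTRU
# operation that hardware accelerators typically parallelize. Toy parameters
# only; NOT an IEEE-1363.1 parameter set and NOT the authors' design.

def ring_mul(a, b, q):
    """Cyclic convolution of two length-N coefficient lists modulo q, O(N^2)."""
    n = len(a)
    c = [0] * n
    for i in range(n):
        if a[i] == 0:
            continue
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % q
    return c

# Tiny example with N = 7, q = 32 (illustrative values)
f = [1, 0, -1, 1, 0, 0, 1]
g = [0, 1, 1, 0, -1, 0, 1]
print(ring_mul(f, g, 32))
```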

Tim Fritzmann, Thomas Schamberger, Christoph Frisch, Konstantin Braun, Georg Maringer, Johanna Sepúlveda

V

According to Otfried Höffe, responsibility can be divided into primary responsibility (which one bears), secondary responsibility (to which one is held), and tertiary responsibility (to which one is held and which is linked to a sanction). With primary and secondary responsibility, the human being becomes visible as a subject of morality; with tertiary responsibility, also as a subject (and object) of law and order. The prerequisite is primary responsibility, which belongs only to (mature, judicious) persons. Redress is particularly difficult in the information society, for instance when false claims have spread and taken on a life of their own in virtual space; this problem is dealt with in information ethics.

Oliver Bendel

S

"Key qualification" is a collective term for all those abilities and competencies that are necessary or desirable for successfully mastering the recurring demands of a particular context.

Oliver Bendel

B

Cash is printed money (banknotes) or minted money (coins). It is a cash asset (alongside bank balances and cheques), a means of payment put into circulation by the state, and part of payment transactions. The notes and coins often depict famous buildings and persons or symbols of countries and unions. They thus also serve, incidentally, as a means of identification.

Oliver Bendel

C

A candystorm is a wave of approval in virtual space, e.g. in social networks, microblogs, and blogs, as well as in the comment sections of online newspapers and magazines. It is evoked by the moralism of the information society and the empathy and euphoria of netizens. Persons or organizations are showered with words of approval and terms such as "Flausch" (fluff). Its opposite is the shitstorm.

Oliver Bendel

D

Data glasses are a miniature computer, supplemented with peripheral devices, that is worn on the head and controlled or operated with the eyes and hands. They process data from the Internet and from the surroundings, above all in the sense of augmented reality (hence also AR glasses). Things, plants, animals, and humans, as well as situations and processes, are registered, analyzed, and enriched with virtual information.

Oliver Bendel

E

An e-book is an electronic book. It is read and viewed on a mobile phone, smartphone, tablet, e-reader, or another electronic device equipped with a display. It can be given multimedia features and supplemented with links, turning it into an enhanced or enriched e-book. A classic e-book, for instance in PDF or EPUB format, retains its book-like character; the book no longer exists as a physical object, but it does as a work.

Oliver Bendel

Chapter 1. Information Science and Technology: A New Paradigm in Military Medical Research

The escalating pace of technologies such as computers and mobile communications systems, along with major advances in neurobiology, increases opportunities for military medical problem solving. This convergence of information technology with medicine was new as a core funded program in military medical research, but foundational research had been conducted by the Telemedicine and Advanced Technology Research Center (TATRC) through special congressional interest projects, small business innovative research programs (SBIRs), and other special funding programs in the Department of Defense (DoD) totaling $500 M/year for more than a decade before its inception. Five main thrust areas formed the new funded program supported by the Joint Program Committee 1 (JPC1) to transform military health care into a safer, predictive, preventative, evidence-based, and participatory system. These focus areas included medical simulation and training, mobile health (m-Health), open electronic health record and medical systems interoperability, computational biology and predictive models, and knowledge engineering. This modest investment in transformational research stands to produce huge benefits in cost savings in military medicine through improved efficiencies provided with everyday technologies.

Karl E. Friedl, Thomas B. Talbot, Steve Steffensen

Integrating Multi-agent Simulations into Enterprise Application Landscapes

To cope with increasingly complex business, political, and economic environments, agent-based simulations (ABS) have been proposed for modeling complex systems such as human societies, transport systems, and markets. ABS enable experts to assess the influence of exogenous parameters (e.g., climate changes or stock market prices), as well as the impact of policies and their long-term consequences. Despite some successes, the use of ABS is hindered by a set of interrelated factors. First, ABS are mainly created and used by researchers and experts in academia and specialized consulting firms. Second, the results of ABS are typically not automatically integrated into the corresponding business process. Instead, the integration is undertaken by human users who are responsible for adjusting the implemented policy to take into account the results of the ABS. These limitations are exacerbated when the results of the ABS affect multi-party agreements (e.g., contracts) since this requires all involved actors to agree on the validity of the simulation, on how and when to take its results into account, and on how to split the losses/gains caused by these changes. To address these challenges, this paper explores the integration of ABS into enterprise application landscapes. In particular, we present an architecture that integrates ABS into cross-organizational enterprise resource planning (ERP) processes. As part of this, we propose a multi-agent systems simulator for the Hyperledger blockchain and describe an example supply chain management scenario type to illustrate the approach.

Timotheus Kampik, Amro Najjar

Collective Behavior of Large Teams of Multi-agent Systems

In this paper, we study the conditions under which collective behavior of agents emerges in large multi-agent systems. Agents act in a two-dimensional (2D) Cellular Automata (CA) space, where each of them takes part in a spatial Prisoner's Dilemma (PD) game. The system modeled by the 2D CA evolves in discrete moments of time, where each cell-agent changes its state according to the rule currently assigned to it. Rules are initially assigned to cell-agents at random, but during the iterated game agents may replace their current rules with rules used by their neighbors. While each agent aims to maximize its own profit in the game, we are interested in answering the question of whether and when global cooperation in a large set of agents is possible. We present results of an experimental study showing the conditions for, and the degree of, such emergent cooperation.
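A minimal flavor of such a model is the classic spatial PD on a toroidal lattice, where each cell plays its 8 neighbors and then imitates its best-scoring neighbor. The sketch below (Nowak-May style) only illustrates the setting; the authors' rule set and update scheme may differ.

```python
import numpy as np

# Minimal spatial Prisoner's Dilemma on a 2D cellular automaton (toroidal).
# State 1 = cooperate, 0 = defect. Illustrative payoffs and imitation rule;
# not the paper's exact rule assignment mechanism.

N = 50                              # lattice size (assumption)
T, R, P, S = 1.4, 1.0, 0.0, 0.0     # temptation, reward, punishment, sucker

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(N, N))

def payoffs(g):
    """Total payoff of each cell against its 8 neighbors."""
    pay = np.zeros_like(g, dtype=float)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nb = np.roll(np.roll(g, dx, 0), dy, 1)
            pay += np.where(g == 1, np.where(nb == 1, R, S),
                                    np.where(nb == 1, T, P))
    return pay

for _ in range(100):
    pay = payoffs(grid)
    best, best_pay = grid.copy(), pay.copy()
    # synchronous update: each cell adopts its best-scoring neighbor's strategy
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nb_pay = np.roll(np.roll(pay, dx, 0), dy, 1)
            nb_strat = np.roll(np.roll(grid, dx, 0), dy, 1)
            better = nb_pay > best_pay
            best[better] = nb_strat[better]
            best_pay[better] = nb_pay[better]
    grid = best

print("fraction of cooperators:", grid.mean())
```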

Franciszek Seredyński, Jakub Gąsior

Data Protection on Fintech Platforms

The security of data has been challenged by the incorporation of new services into the digital world. Data protection has become essential to continue operating in the new financial environment, especially due to the advent of Financial Technology (Fintech). This article reviews how data protection is applied to financial recommendation platforms, identifying current trends in this area. Moreover, it looks at the evolution of computer security in the field of Fintech, given the security level that this field requires. In addition, it examines solution techniques for data storage issues in cloud security and the encryption methods that assure data protection. The European Union's data protection regulation is also considered; it affects not only entities based in European territory but also those outside it that manage the data of European citizens.

Elena Hernández, Mehmet Öztürk, Inés Sittón, Sara Rodríguez

A Survey on Software-Defined Networks and Edge Computing over IoT

The Internet of Things (IoT) has ceased to be a novel technology and has become part of daily life through the millions of sensors, devices and tools that measure, collect, process and transfer data. The need to exchange, process, filter and store this huge volume of data has led to the emergence of Edge Computing (EC). The purpose of this new paradigm is to solve the challenges of IoT such as localized computing, reducing latency in information exchange, balancing data traffic on the network and providing responses in real time. In order to reduce the complexity of implementing EC architectures, Software-Defined Networks (SDNs) and the related concept of Network Function Virtualization (NFV) have been proposed in different approaches. This paper addresses the characteristics and capabilities of SDNs and NFV and why an innovative integration of SDNs and EC can be successful for IoT scenarios.

Ricardo S. Alonso, Inés Sittón-Candanedo, Sara Rodríguez-González, Óscar García, Javier Prieto

Chapter 18. Summary, Conclusions and Further Research

This chapter summarizes the key findings of the book. It also presents some novel issues that can be a subject matter of further inter-disciplinary and intra-disciplinary research.

Kalpana Tyagi

Cultural Heritage Digital Data: Future and Ethics

Current technologies are changing the ways Cultural Heritage is researched, analyzed, conserved and developed, allowing innovative new surveys. Terrestrial and aerial laser scanners, terrestrial and aerial photogrammetric techniques, GIS (Geographic Information System) and remote sensing techniques have become much-needed methods in the Cultural Heritage field. These survey techniques produce different kinds of data that need to be managed in the best way possible. Ethical questions come up about the future of these data: it is necessary to deal with problems like data storage, the hardware/software relationship and data redundancy.

Filippo Diara

Chapter 10. An Architecture to Improve the Security of Cloud Computing in the Healthcare Sector

Technology plays a vast role in all aspects of our lives. In every business, technology is helpful to fulfill the needs of the customers. Cloud computing is a technology that satisfies the demand for dynamic resources and makes it easier for jobs to work on all platforms. Cloud computing platforms on the internet provide rapid access with pay-as-you-go pricing. Many business organizations have deployed their data either fully or partially on a cloud platform. A healthcare cloud is a platform where one can easily find hospitals, doctors, clinics, pharmacies, etc. While one face of this cloud is quite beneficial, the other face is challenging because of security issues. The data are stored somewhere else; hence, they can be an attractive target for cybercriminals. This chapter provides an introduction to cloud computing and the healthcare cloud. Subsequently, security issues in cloud computing, especially in the context of the healthcare cloud, are introduced. Finally, some methods to improve cloud security for healthcare are discussed along with our proposed architecture.

Saleh M. Altowaijri

Chapter 19. Big Data Tools, Technologies, and Applications: A Survey

The outburst of data produced over the last few years in various fields has demanded new processing techniques, novel big data–processing architectures, and intelligent algorithms for effective and efficient exploitation of huge data sets to get useful insights and improved knowledge discovery. The explosion of data brings many challenges to deal with the complexity of information overload. Numerous tools and techniques have been developed over the years to deal with big data challenges. This chapter presents a summary of state-of-the-art tools and techniques for processing of big data applications by critically analyzing their objectives, methodologies, and key approaches to address the challenges associated with big data. Also, we critically analyze some of the core applications of big data and their impacts in improving the quality of human life by primarily focusing on healthcare and smart city applications, genome sequence annotation applications, and graph-based applications. We provide a detailed review and taxonomy of the research efforts within each application domain.

Yasir Arfat, Sardar Usman, Rashid Mehmood, Iyad Katib

Chapter 18. HPC-Smart Infrastructures: A Review and Outlook on Performance Analysis Methods and Tools

High-performance computing (HPC) plays a key role in driving innovations in health, economics, energy, transport, networks, and other smart-society infrastructures. HPC enables large-scale simulations and processing of big data related to smart societies to optimize their services. Driving high efficiency from shared-memory and distributed HPC systems has always been challenging; it has become essential as we move towards the exascale computing era. Therefore, the evaluation, analysis, and optimization of HPC applications and systems to improve HPC performance on various platforms are of paramount importance. This paper reviews the performance analysis tools and techniques for HPC applications and systems. Common HPC applications used by researchers and HPC benchmarking suites are discussed. A qualitative comparison of various tools used for the performance analysis of HPC applications is provided. Conclusions are drawn with future research directions.

Thaha Muhammed, Rashid Mehmood, Aiiad Albeshri, Fawaz Alsolami

Chapter 26. Security Testing of Internet of Things for Smart City Applications: A Formal Approach

This is a work in progress in which we are interested in testing security aspects of the Internet of Things for smart cities. For this purpose we follow a model-based approach which consists in: modeling the system under investigation with an appropriate formalism; deriving test suites from the obtained model; applying some coverage criteria to select suitable tests; executing the obtained tests; and finally collecting verdicts and analyzing them in order to detect errors and repair them. The adopted formalism is based on the model of extended timed automata with inputs and outputs. We propose a conformance testing relation, the so-called extended timed input-output conformance relation (etioco). For test execution, we introduce a cloud-based architecture.
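Relations in the tioco family typically require that, after any timed trace of the specification, the implementation may only produce outputs (and delays) that the specification allows. As a hedged sketch of that shape (the paper's exact etioco definition, which also handles the extended data variables, may differ):

```latex
% Hedged sketch of a tioco-style conformance relation; etioco additionally
% accounts for the extended (data) variables of the automata.
\[
  \mathcal{I} \;\text{etioco}\; \mathcal{S}
  \;\iff\;
  \forall \sigma \in \mathrm{ttraces}(\mathcal{S}) :\;
  \mathrm{out}(\mathcal{I} \ \mathrm{after}\ \sigma)
  \subseteq
  \mathrm{out}(\mathcal{S} \ \mathrm{after}\ \sigma),
\]
% where out(.) contains both output actions and permissible delays.
```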

Moez Krichen, Mariam Lahami, Omar Cheikhrouhou, Roobaea Alroobaea, Afef Jmal Maâlej

Chapter 12. A Mobile Cloud Framework for Context-Aware and Portable Recommender System for Smart Markets

Smart city systems are fast emerging as solutions that provide better and digitized urban services to empower individuals and organizations. Mobile and cloud computing technologies can enable smart city systems to (1) exploit the portability and context-awareness of mobile devices and (2) utilize the computation and storage services of cloud servers. Despite the widespread adoption of mobile and cloud computing technologies, there is still a lack of solutions that provide users with portable and context-aware recommendations based on their localized context. We propose to advance the state of the art on recommender systems, providing portable, efficient, and context-driven digital matchmaking, in the context of smart markets that involve virtualized customers and business entities. We have proposed a framework and algorithms that unify mobile and cloud computing technologies to offer context-aware and portable recommendations for smart markets. We have developed a prototype as a proof of concept to support automation, user intervention, and customization of users' preferences during the recommendation process. The evaluation results suggest that the framework (1) has a high accuracy for context-aware recommendations, and (2) supports computation- and energy-efficient mobile computing. The proposed solution aims to advance the research on recommender systems for smart city systems by providing context-aware and portable computing for smart markets.

Aftab Khan, Aakash Ahmad, Anis Ur Rahman, Adel Alkhalil

Chapter 20. Big Data for Smart Infrastructure Design: Opportunities and Challenges

Big data is at the forefront of many ICT-based developments in all spheres of life, be it business, education, or entertainment. Big data is being generated from many diverse sources, including social media, the Internet of Things (IoT), manufacturing, and operations. Big data technologies allow us to make informed decisions from structured or unstructured data. Management and analysis of heterogeneous data generated by various sources bring numerous challenges and diversity in solutions. The aim of this chapter is to discuss different opportunities, issues, and challenges of big data, with the main focus on Hadoop platforms. We provide a detailed survey of opportunities, challenges, and issues of Hadoop-based big data developments in terms of data locality, load balancing, heterogeneity issues, scheduling issues, in-memory computation, multiple query optimizations, and I/O issues. A taxonomy of these challenges and opportunities is also presented.

Yasir Arfat, Sardar Usman, Rashid Mehmood, Iyad Katib

Chapter 13. Association Rule Mining in Higher Education: A Case Study of Computer Science Students

Data mining (DM) is gaining significant importance these days because of the flow and accumulation of data from various sources and fields; it has been estimated that the world's databases double every 20 months (Tan et al., Introduction to Data Mining, Pearson Addison Wesley, Boston, 2005). The data are growing not only in terms of their volume but also in terms of their complexity and diversity. Thus, DM techniques and tools are required, and many algorithms and techniques have been introduced. These techniques include clustering, association, and prediction via classification and regression. They are used in many fields such as business, healthcare, and education. In this chapter, we explore the application of some of these techniques in education. In the educational field, instructors tend to use their experience and personal judgment to link grades and failures of students between courses on the basis of their knowledge of the courses' content. As a result, major plans may be changed, and academic guidance is offered accordingly. However, such an opinion is not always validated and tested, and we cannot be certain of it. With the existence of DM techniques, along with the vast volumes of data held by education systems, universities and schools can predict students' performance and find associations between many attributes such as course grades. The results of using these techniques can have a profound effect in helping to change program plans and offer guidance. Students can make better-informed decisions when presented with facts that can affect their study. In this chapter, we review DM techniques used to mine student data with a focus on association rule mining. We report our work on association rule mining of Computer Science students' grades and address some related issues. In this work we used lift, Kulczynski (Kulc), and the imbalance ratio (IR) to measure the interestingness of the rules. Our results showed cases of correlation between courses with confidence from 80 to 100%.
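For reference, the three interestingness measures named here have standard textbook definitions (e.g., Han et al., Data Mining: Concepts and Techniques); the small sketch below computes them from itemset supports using those usual formulas, though the chapter's exact usage may differ.

```python
# Standard definitions of the three rule-interestingness measures named in
# the chapter; supports are relative frequencies in [0, 1].

def lift(sup_a, sup_b, sup_ab):
    """Lift of rule A -> B: > 1 positive correlation, < 1 negative."""
    return sup_ab / (sup_a * sup_b)

def kulczynski(sup_a, sup_b, sup_ab):
    """Kulc: average of the confidences P(B|A) and P(A|B); null-invariant."""
    return 0.5 * (sup_ab / sup_a + sup_ab / sup_b)

def imbalance_ratio(sup_a, sup_b, sup_ab):
    """IR: 0 for perfectly balanced itemsets, near 1 for very skewed ones."""
    return abs(sup_a - sup_b) / (sup_a + sup_b - sup_ab)

# Hypothetical example: courses passed by 40% and 30% of students, 25% both
print(lift(0.4, 0.3, 0.25),
      kulczynski(0.4, 0.3, 0.25),
      imbalance_ratio(0.4, 0.3, 0.25))
```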

Njoud Alangari, Raad Alturki

Chapter 15. On Performance of Commodity Single Board Computer-Based Clusters: A Big Data Perspective

In recent times, commodity Single Board Computers (SBCs) have become sufficiently powerful to run standard operating systems and mainstream workloads. In this chapter, we investigate the design and implementation of Single Board Computer (SBC)-based Hadoop clusters. We provide a compact design layout and build two clusters, each using 20 nodes. We extensively test the performance of these clusters using popular performance benchmarks for task execution time, memory/storage utilization, network throughput, and energy consumption. We investigate the cost of operating SBC-based clusters by correlating energy utilization with the execution time of various benchmarks using workloads of different sizes. Although the low-cost benefit of a cluster built with ARM-based SBCs is desirable, these clusters yield comparatively low performance and energy efficiency due to limited onboard capabilities. It is, however, possible to tweak Hadoop configuration parameters to ARM-based SBC specifications to efficiently utilize the available resources. Finally, a discussion on the design implications of these clusters as a testbed for inexpensive and green cloud computing research is presented.

Basit Qureshi, Anis Koubaa

Chapter 21. Software Quality in the Era of Big Data, IoT and Smart Cities

Software quality is the degree to which software conforms to its requirements. The complexity of software is on the rise with the development of smart cities due to the complex nature of these applications and environments. Big data and the Internet of Things (IoT) are driving radical changes in the software systems landscape. Together, big data, IoT, smart cities, and other emerging complex applications have exacerbated the challenges of maintaining software quality. The big data produced by IoT and other sources is used in designing or operating various software machines and systems. One of the challenges of big data is data veracity, which could lead to inaccurate or faulty system behavior. The aim of this paper is to review the technologies related to software quality in the era of big data, IoT, and smart cities. We elaborate on software quality processes, software testing, and debugging. Model checking is discussed with some directions on the role it could play in the big data era and the benefits it could gain from big data. The role of big data in software quality is explored. Conclusions are drawn to suggest future directions.

Fatmah Yousef Assiri, Rashid Mehmood

Chapter 7. A Smart Disaster Management System for Future Cities Using Deep Learning, GPUs, and In-Memory Computing

Natural and manmade disasters have increased significantly over the past few decades. These include, among many others, the recent floods in Japan (June/July 2018) and the Barcelona attack of August 2017. The Japan floods left around 200 people dead, 70 reported missing, and over eight million people ordered to evacuate their homes, with the cost of flood rebuilding estimated at USD 2 billion. Disaster management plays a key role in reducing human and economic losses. Our earlier work focused on developing a disaster management system leveraging technologies including vehicular ad hoc networks (VANETs) and cloud computing to devise city evacuation strategies. The work was later extended to incorporate traffic management plans for smart cities using deep learning techniques. In-memory computations and graphics processing units (GPUs) were used to address the intensive and timely computational demands of deep learning over big data in disaster situations. This paper extends our earlier work and provides extended analysis and results of the proposed system. A system architecture based on in-memory big data management and GPU-based deep learning computations is proposed. We have used road traffic data made publicly available by the UK Department for Transport. The results show the effectiveness of the deep learning approach in predicting traffic behavior in disaster and evacuation situations. This is the first system that brings together deep learning, in-memory data-driven computations, and GPU technologies for disaster management.

Muhammad Aqib, Rashid Mehmood, Ahmed Alzahrani, Iyad Katib

Chapter 1. Enterprise Systems for Networked Smart Cities

The smart city concept redefines the urban planning and development of existing and new cities. It centers on the economic, social, and environmental sustainability of a city and attracts citizens, professionals, and corporations to build sustainable living. It portrays a city that is operationally optimal and provides a space for innovation. This is achieved through state-of-the-art physical, institutional, and digital infrastructure. This chapter addresses the challenges of the digital aspect of the smart city. Enterprise systems technology is widely used in organizations and will be utilized in the conceptualization and implementation of smart city systems. A definition of smart city systems has been derived by analyzing smart city requirements. Enterprise systems technology is explained, and the latest ICT trends are explored to develop the technological foundation of smart city systems. Finally, we introduce partial least squares regression, a structural equation modeling method, to explore interrelationships between different interdisciplinary constructs and show its application to studying smart city systems. From the digital perspective of the smart city, it may be concluded that connectedness leads to integration, integration leads to dynamism, dynamism leads to smartness, and the cycle continues to realize best-in-class smart cities.

Naim Ahmad, Rashid Mehmood

Chapter 9. A Survey of Methods and Tools for Large-Scale DNA Mixture Profiling

DNA typing or profiling is widely used for criminal identification, paternity tests, and diagnosis of genetic diseases. DNA typing is considered one of the hardest problems in the forensic science domain, and it is an active area of research. The computational complexity of DNA typing increases significantly with the number of unknowns in the mixture and has been the major deterring factor holding back its advancement and application. In this chapter, we provide an extended review of DNA profiling methods and tools with a particular focus on their computational performance and accuracy. The process of DNA profiling within the broader context of forensic science and genetics is explained. The various classes of DNA profiling methods, including general methods and those based on maximum likelihood estimators, are reviewed. The reviewed DNA profiling tools include LRmix Studio, TrueAllele, DNAMIX V.3, Euroformix, CeesIt, NOCIt, DNAMixture, Kongoh, LikeLTD, LabRetriever, and STRmix. A review of high-performance computing literature in bioinformatics and HPC frameworks is also given. Faster interpretation of DNA mixtures with a large number of unknowns and higher accuracy are expected to open up new frontiers for this area.

Emad Alamoudi, Rashid Mehmood, Aiiad Albeshri, Takashi Gojobori

An Intelligent-Agent Facilitated Scaffold for Fostering Reflection in a Team-Based Project Course

This paper reports on work adapting an industry-standard team practice referred to as Mob Programming into a paradigm called Online Mob Programming (OMP) for the purpose of encouraging teams to reflect on concepts and share work in the midst of their project experience. We present a study situated within a series of three course projects in a large online course on Cloud Computing. In a 3×3 Latin Square design, we compare students working alone and in two OMP configurations (with and without transactivity-maximization team formation designed to enhance reflection). The analysis reveals the extent to which grading on the produced software rewards teams where highly skilled individuals dominate the work. Further, compliance with the OMP paradigm is associated with greater evidence of group reflection on concepts and greater shared practice of programming.

Sreecharan Sankaranarayanan, Xu Wang, Cameron Dashti, Marshall An, Clarence Ngoh, Michael Hilton, Majd Sakr, Carolyn Rosé

Toward Design of Advanced System-on-Chip Architecture for Mobile Computing Devices

In this paper, we present the design of an advanced system-on-chip architecture for mobile computing devices. The presented architecture facilitates connectivity to an ARM-compatible host processor, high-speed intellectual property (IP) cores, and slower peripherals using the industry-standard Advanced Microcontroller Bus Architecture. The system consists of a standard set of peripherals, such as a clock generator, real-time clock, watchdog timer, interrupt controller, programmable I/O, I2C host, SPI master, UARTs, trusted platform module, NAND flash controller, and USB controllers. Third-party 2D/3D graphics engine, audio/video encoder-decoder, and wireless network controller IPs are also integrated to provide a complete platform architecture for the development of mobile computing devices.

Mohammed S. BenSaleh, Syed Manzoor Qasim, Abdulfattah M. Obeid

Online DAG Scheduling with On-Demand Function Configuration in Edge Computing

Modern applications in mobile computing are becoming increasingly complex and computation-intensive. Task offloading from mobile devices to the cloud is more and more frequent. Edge Computing, which deploys relatively small-scale edge servers close to users, is a promising cloud computing paradigm to reduce network communication delay. Due to their limited capability, each edge server can be configured with only a small number of functions to run the corresponding tasks. Moreover, a mobile application might consist of multiple dependent tasks, which can be modeled and scheduled as Directed Acyclic Graphs (DAGs). When an application request arrives online, typically with a specified deadline, we need to configure the edge servers and assign the dependent tasks for processing. In this work, we jointly tackle on-demand function configuration on edge servers and DAG scheduling to meet as many request deadlines as possible. Based on list scheduling methodologies, we propose a novel online algorithm, named OnDoc, which is efficient and easy to deploy in practice. Extensive simulations on a data trace from Alibaba (including more than 3 million application requests) demonstrate that OnDoc outperforms state-of-the-art baselines consistently across various experiment settings.
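To make the problem concrete, here is a minimal list-scheduling sketch: tasks are taken in topological order and placed on whichever server finishes them earliest, paying a configuration delay whenever the task's function is not yet configured there. This illustrates the setting only; it is not the OnDoc algorithm.

```python
from collections import defaultdict

# List-scheduling sketch for a task DAG on edge servers that must be
# configured with a task's function before running it. Illustrative only;
# NOT the OnDoc algorithm from the paper.

def topo_order(nodes, deps):
    """Kahn-style topological order of the task DAG."""
    indeg = {n: len(deps.get(n, ())) for n in nodes}
    out = defaultdict(list)
    for n, ps in deps.items():
        for p in ps:
            out[p].append(n)
    queue = [n for n in nodes if indeg[n] == 0]
    order = []
    while queue:
        n = queue.pop()
        order.append(n)
        for m in out[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order

def schedule(tasks, deps, servers, run_time, config_delay):
    """tasks: list of (task, function); servers: server -> configured funcs."""
    func_of = dict(tasks)
    finish, plan = {}, {}
    free_at = {s: 0.0 for s in servers}          # server availability times
    for t in topo_order([t for t, _ in tasks], deps):
        ready = max((finish[p] for p in deps.get(t, ())), default=0.0)
        best = None
        for s in servers:
            start = max(free_at[s], ready)
            cost = run_time[t] + (0 if func_of[t] in servers[s] else config_delay)
            if best is None or start + cost < best[0]:
                best = (start + cost, s)
        end, s = best
        servers[s].add(func_of[t])               # on-demand config persists
        free_at[s], finish[t], plan[t] = end, end, s
    return plan, finish

# Example: diamond DAG a -> (b, c) -> d on two servers
tasks = [("a", "f1"), ("b", "f2"), ("c", "f1"), ("d", "f3")]
deps = {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
servers = {"s1": {"f1"}, "s2": {"f2"}}
run_time = {"a": 1.0, "b": 2.0, "c": 1.5, "d": 1.0}
print(schedule(tasks, deps, servers, run_time, config_delay=0.5))
```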

Liuyan Liu, Haoqiang Huang, Haisheng Tan, Wanli Cao, Panlong Yang, Xiang-Yang Li

A New Outsourced Data Deletion Scheme with Public Verifiability

In cloud storage, the data owner loses direct control over his outsourced data, and all operations over the outsourced data, such as deletion, may be executed by the corresponding remote cloud server. However, a selfish cloud server might maliciously retain a copy of the data for financial interests and deliberately send a false deletion result to cheat the data owner. In this paper, we design an IBF-based publicly verifiable cloud data deletion scheme. The proposed scheme enables the cloud server to delete the data and return a proof. The data owner can then check the deletion result by verifying the returned deletion proof. Besides, the proposed scheme realizes public verifiability by applying the primitive of the invertible Bloom filter. Finally, we prove that our proposed protocol not only achieves the expected security properties but is also practical and highly efficient.
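The underlying primitive, the invertible Bloom filter (IBF), supports insertions and deletions that cancel out, and a sufficiently sparse IBF can be decoded ("peeled") to list the items it still contains, which is what lets a verifier see which blocks were genuinely removed. A minimal sketch of the data structure follows; the paper's construction carries additional cryptographic proof material, and production IBFs also store a hash sum to validate pure cells.

```python
import hashlib

# Minimal invertible Bloom filter over integer block IDs. Illustrative only;
# real IBFs also keep a hash sum per cell to confirm that a count-1 cell is
# genuinely "pure" before decoding it.

K, M = 3, 64   # number of hash functions and number of cells (assumptions)

def _cells(item):
    return [int.from_bytes(hashlib.sha256(f"{i}:{item}".encode()).digest()[:4],
                           "big") % M for i in range(K)]

class IBF:
    def __init__(self):
        self.count = [0] * M
        self.key_xor = [0] * M

    def insert(self, item):
        for c in _cells(item):
            self.count[c] += 1
            self.key_xor[c] ^= item

    def delete(self, item):
        for c in _cells(item):
            self.count[c] -= 1
            self.key_xor[c] ^= item

    def peel(self):
        """Recover remaining items by repeatedly decoding pure (count-1) cells."""
        items, changed = [], True
        while changed:
            changed = False
            for c in range(M):
                if self.count[c] == 1:
                    items.append(self.key_xor[c])
                    self.delete(self.key_xor[c])
                    changed = True
        return items

f = IBF()
for block_id in (101, 202, 303):
    f.insert(block_id)
f.delete(202)            # cloud claims block 202 was deleted
print(sorted(f.peel()))  # -> [101, 303]: blocks still present
```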

Changsong Yang, Xiaoling Tao, Feng Zhao, Yong Wang

An Efficient Revocable Attribute-Based Signcryption Scheme with Outsourced Designcryption in Cloud Computing

Sensitive data sharing through cloud storage environments has brought varied and flexible security demands. Attribute-based signcryption (ABSC) is suitable for cloud storage because it provides combined data confidentiality and authentication as well as fine-grained data access control. However, existing ABSC schemes hardly support efficient attribute revocation. In addition, the heavy computational overhead of ABSC limits its application on resource-constrained devices in cloud storage environments. In this paper, to tackle the above problems, we propose an efficiently revocable attribute-based signcryption scheme with decryption outsourcing. The proposed scheme achieves efficient attribute revocation by delegating the cloud server to update ciphertexts without decrypting them. During the decryption phase, it outsources the massive decryption operations to a proxy server so that the computation cost on users' devices is small and constant. The security analysis proves the correctness, confidentiality, collusion resistance, unforgeability, and forward secrecy of our scheme. Furthermore, performance analysis shows that our scheme is efficient in terms of ciphertext size, key size, and computation cost while realizing the desired functions.

Ningzhi Deng, Shaojiang Deng, Chunqiang Hu, Kaiwen Lei

Utility Aware Task Offloading for Mobile Edge Computing

Mobile edge computing (MEC) moves the computation-intensive and delay-sensitive applications of mobile devices to the network edge. Task offloading incurs extra communication latency and energy cost, and extensive efforts have been focused on the offloading scheme. To achieve a satisfactory quality of experience, many metrics of system utility have been defined. However, most existing works overlook the balance between throughput and fairness. This paper investigates the problem of seeking an optimal offloading scheme, where the objective of the optimization is to maximize a system utility that trades off throughput against fairness. Based on the KKT conditions, we analyze the expected time complexity of deriving the optimal scheme. We provide an increment-based greedy approximation algorithm with a 1 + 1/(e - 1) approximation ratio. Experimental results show that the proposed algorithm has better performance.
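The abstract does not spell out the utility function; a common way to trade throughput off against fairness is a logarithmic (proportional-fairness) utility. The increment-based greedy idea then looks like the sketch below, which illustrates the general technique, not the paper's algorithm or its ratio analysis.

```python
import math

# Increment-based greedy sketch for utility-aware allocation. Logarithmic
# per-user utility is an ASSUMPTION standing in for the paper's unspecified
# utility; it rewards throughput while diminishing returns enforce fairness.

def greedy_allocate(capacity_units, users):
    """Hand out edge capacity one unit at a time to the user whose utility
    gain is largest; rate[u] = units currently allocated to user u."""
    rate = {u: 0 for u in users}

    def utility(r):
        return math.log(1 + r)   # assumed proportional-fairness utility

    for _ in range(capacity_units):
        best = max(users, key=lambda u: utility(rate[u] + 1) - utility(rate[u]))
        rate[best] += 1
    return rate

# With identical demands, diminishing returns drive an even split.
print(greedy_allocate(10, ["u1", "u2", "u3"]))
```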

Ran Bi, Jiankang Ren, Hao Wang, Qian Liu, Xiuyuan Yang

Enabling Custom Security Controls as Plugins in Service Oriented Environments

Service-oriented environments such as cloud computing infrastructures aim at facilitating the requirements of users and enterprises by providing services following an on-demand orientation. While the advantages of such environments are clear and lead to wide adoption, the key concern of non-adopters is privacy and security. Even though providers put in place several measures to minimize security and privacy vulnerabilities, users are still in many cases reluctant to move their data and applications to clouds. In this paper an approach is presented that proposes the use of security controls as plugins that can be ingested into service-oriented environments. The latter allows users to tailor the corresponding security and privacy levels by utilizing security measures that have been selected and implemented by themselves, thus alleviating their security and privacy concerns. The challenges and an architecture with the corresponding key building blocks that address these challenges are presented. Furthermore, results in the context of trustworthiness requirements, i.e. dependability, are presented to evaluate the proposed approach.

Dimosthenis Kyriazis

Static and Dynamic Group Migration Algorithms of Virtual Machines to Reduce Energy Consumption of a Server Cluster

To help prevent global warming, it is critical to reduce the electric energy consumed in information systems, especially by servers in clusters like cloud computing systems. In this paper, a process migration approach is discussed to reduce the total energy consumption of clusters by using virtual machines. We propose a pair of migration algorithms, the static SM(v) and the dynamic DM(v), where a group of at most v (≥ 0) virtual machines migrates from a host server to a guest server. A group of virtual machines on a host server is selected to migrate to a guest server so that the total energy consumed by the host and guest servers can be reduced. In the SM(v) algorithm, the total number of virtual machines in a cluster is fixed. In the DM(v) algorithm, virtual machines are resumed and suspended so that the number of processes on each virtual machine is kept small. In the evaluation, we show that the DM(v) algorithm reduces the total energy consumption of servers the most compared with the other algorithms.
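The group-selection step can be pictured as follows: enumerate candidate groups of at most v virtual machines and keep the one minimizing the combined host-plus-guest energy estimate. The brute-force sketch below uses an assumed linear-in-load power model with different slopes per server; it illustrates the selection criterion, not the authors' SM(v)/DM(v) algorithms or their power model.

```python
from itertools import combinations

# Brute-force sketch of the group-selection criterion in an SM(v)-style
# algorithm. The linear power model with per-server (idle, slope) parameters
# is an ASSUMPTION, not the authors' model.

def power(load, idle, slope):
    return idle + slope * load            # assumed power model [W]

def best_group(host_vms, guest_load, v,
               host=(100.0, 30.0), guest=(80.0, 15.0)):
    """host_vms: dict vm -> load. Returns (group, combined power) minimizing
    power(host) + power(guest) after migrating the group of size <= v."""
    host_load = sum(host_vms.values())
    best, best_p = (), power(host_load, *host) + power(guest_load, *guest)
    for k in range(1, v + 1):
        for group in combinations(host_vms, k):
            moved = sum(host_vms[g] for g in group)
            p = (power(host_load - moved, *host)
                 + power(guest_load + moved, *guest))
            if p < best_p:
                best, best_p = group, p
    return best, best_p

# The less efficient host (steeper slope) sheds as much load as v allows.
print(best_group({"vm1": 2.0, "vm2": 1.0, "vm3": 3.0}, guest_load=1.0, v=2))
```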

Dilawaer Duolikun, Tomoya Enokido, Makoto Takizawa

QoS Preservation in Web Service Selection

In the cloud computing domain, service providers often offer services with the same functionality but with varying quality metrics. A suitable service selection method finds the most appropriate solution among the alternatives. The challenge is to deliver a solution satisfying the (quality and other) requirements of a consumer with the minimum possible execution time. Many conflicting QoS objectives increase the complexity of the problem; in fact, it may be formulated as a multi-objective, NP-hard optimization problem. Most existing solutions either satisfy the QoS demands of the consumer or only reduce execution time by considering a subset of the required QoS metrics. Consumer feedback on the choice of required QoS metrics not only helps increase user satisfaction but may also reduce the complexity effectively; however, this depends on the consumer's domain knowledge. In this work, we propose a goodness measure that replaces all QoS metrics with a single one. This new dimension-reduction technique offers a significant improvement over existing works in terms of execution time. Moreover, the solution satisfies all the QoS requirements of a consumer in most cases. The proposed data-driven selection approach has been implemented, and the experimental results substantiate these claims.
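One simple way to realize a single "goodness" score is min-max normalization of each QoS metric (flipping cost metrics such as latency and price) followed by a weighted sum. The sketch below illustrates that generic idea only; the paper's actual goodness measure and dimension-reduction technique are not detailed in the abstract.

```python
import numpy as np

# Generic single-score aggregation of QoS metrics: min-max normalize each
# column, invert cost metrics, and take a weighted sum. An ASSUMED stand-in
# for the paper's goodness measure, for illustration.

def goodness(services, higher_is_better, weights):
    """services: (n_services, n_metrics) raw QoS values; returns one score
    per service, higher = better."""
    x = np.asarray(services, dtype=float)
    lo, hi = x.min(axis=0), x.max(axis=0)
    norm = (x - lo) / np.where(hi > lo, hi - lo, 1.0)
    norm = np.where(higher_is_better, norm, 1.0 - norm)  # flip cost metrics
    return norm @ np.asarray(weights)

# metrics: [availability (benefit), latency in ms (cost), price (cost)]
services = [[0.99, 120, 5.0],
            [0.95,  40, 3.0],
            [0.90,  60, 1.0]]
print(goodness(services, [True, False, False], [0.5, 0.3, 0.2]))
```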

Adrija Bhattacharya, Sankhayan Choudhury

Byzantine Collision-Fast Consensus Protocols

Atomic broadcast protocols are fundamental building blocks used in the construction of many reliable distributed systems. Atomic broadcast and consensus are equivalent problems, but the inefficiency of consensus-based atomic broadcast protocols in the presence of collisions (concurrent proposals) harms their adoption in the implementation of reliable systems, such as those based on state machine replication. In traditional consensus protocols, proposals that are not decided in some instance of consensus (commands not delivered) must be re-proposed in a new instance, delaying their execution. Moreover, when different values (commands) are proposed in the same instance (leading to a collision), some of its phases must be restarted, also delaying the execution of the commands involved in the collision. The CFABCast (Collision-Fast Atomic Broadcast) algorithm uses m-consensus to decide and deliver multiple values in the same instance. However, CFABCast is not byzantine fault-tolerant, a requirement for many systems. Our first contribution is a modified version of CFABCast that handles byzantine failures. Unfortunately, the resulting protocol is not collision-fast due to the possibility of malicious failures. In fact, our second contribution is to prove that there are no byzantine collision-fast algorithms in an asynchronous model as traditionally extended to solve consensus. Finally, our third contribution is a byzantine collision-fast algorithm that bypasses the stated impossibility by means of a USIG (Unique Sequential Identifier Generator) trusted component.

Rodrigo Saramago, Eduardo Alchieri, Tuanir Rezende, Lasaro Camargos

The MASON Simulation Toolkit: Past, Present, and Future

MASON is a widely used open-source agent-based simulation toolkit that has been in constant development since 2002. MASON's architecture was cutting-edge for its time, but advances in computer technology now offer new opportunities for the ABM community to scale models and apply new modeling techniques. We are extending MASON to provide these opportunities in response to community feedback. In this paper we discuss MASON, its history and design, and how we plan to improve and extend it over the next several years. Based on user feedback, we will add distributed simulation, distributed GIS, optimization and sensitivity analysis tools, external language and development environment support, statistics facilities, collaborative archives, and educational tools.

Sean Luke, Robert Simon, Andrew Crooks, Haoliang Wang, Ermo Wei, David Freelan, Carmine Spagnuolo, Vittorio Scarano, Gennaro Cordasco, Claudio Cioffi-Revilla

Practical Software Engineering Capstone Course – Framework for Large, Open-Ended Projects to Graduate Student Teams

For students, the capstone project represents the culmination of their studies and is typically one of the last milestones before graduation. Participating in a capstone project can be an inspiring learning opportunity or, for various reasons, a struggle, yet in either case a very educative learning experience. During an IT capstone project, students practice and develop their professional skills by designing and implementing a solution to a complex, ill-defined real-life problem as a team. This paper reflects on organizing IT capstone projects in computer science and software engineering Master programmes in a Sino-Finnish setup, where the projects are executed within a framework provided by a capstone project course. We describe the course framework and discuss the challenges in finding and providing ill-defined challenges with a meaningful real-life connection as project topics. Based on our observations, complemented with students' feedback, we also propose areas for future development.

Timo Vasankari, Anne-Maarit Majanoja

Design and Implementation of a Research and Education Cybersecurity Operations Center

The growing number and severity of cybersecurity threats, combined with a shortage of skilled security analysts, has led to an increased focus on cybersecurity research and education. In this article, we describe the design and implementation of an education and research Security Operations Center (SOC) to address these issues. The design of a SOC to meet educational goals as well as perform cloud security research is presented, including a discussion of SOC components created by our lab, including honeypots, visualization tools, and a lightweight cloud security dashboard with autonomic orchestration. Experimental results of the honeypot project are provided, including analysis of SSH brute-force attacks (aggregate data over time, attack duration, and identification of well-known botnets), geolocation and attack pattern visualization, and autonomic frameworks based on the observe, orient, decide, act methodology. Directions for future work are also discussed.

C. DeCusatis, R. Cannistra, A. Labouseur, M. Johnson

Clustering Based Cybersecurity Model for Cloud Data

Due to the inexorable popularity of ubiquitous mobile devices and cloud processing, storing data (for example photographs, recordings, messages, and texts) in the cloud has become a pattern among individual and organizational clients. However, cloud service providers cannot be entirely trusted to guarantee the availability or integrity of client data outsourced/uploaded to the cloud. Consequently, to enhance the cybersecurity level of cloud data, a new security model is introduced along with optimal key selection. In the proposed study, we first cluster the secret information using the K-Medoid clustering algorithm based on a data distance measure. Then, the clustered data are encrypted using Blowfish Encryption (BE) and stored in the cloud. To improve the cybersecurity level, the optimal key is chosen based on the maximum key-breaking time; for that, we present a technique called the Improved Dragonfly Algorithm (IDA). The results demonstrate that the optimized Blowfish algorithm improves the cybersecurity of all secret information compared to existing algorithms.
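The pipeline can be pictured as: cluster the records, then encrypt each cluster before upload. The sketch below uses a naive one-dimensional k-medoids step and Blowfish via PyCryptodome (an assumed dependency), with a random key standing in for the IDA-optimized key; it is illustrative, not the authors' implementation.

```python
import json
from Crypto.Cipher import Blowfish          # PyCryptodome (assumed dependency)
from Crypto.Random import get_random_bytes

# Sketch of the cluster-then-encrypt pipeline. The key here is random; the
# paper instead searches for an optimal key with the Improved Dragonfly
# Algorithm, which is omitted. Illustrative only.

def kmedoids_1d(values, k, iters=20):
    """Tiny 1-D k-medoids (assumes k >= 2): assign points to the nearest
    medoid, then move each medoid to the member minimizing in-cluster cost."""
    svals = sorted(values)
    medoids = [svals[i * (len(svals) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = {m: [] for m in medoids}
        for v in values:
            clusters[min(medoids, key=lambda m: abs(v - m))].append(v)
        medoids = [min(c, key=lambda x: sum(abs(x - y) for y in c))
                   for c in clusters.values() if c]
    return clusters

def encrypt_cluster(records, key):
    cipher = Blowfish.new(key, Blowfish.MODE_EAX)   # EAX: no manual padding
    ciphertext, tag = cipher.encrypt_and_digest(json.dumps(records).encode())
    return cipher.nonce, tag, ciphertext

clusters = kmedoids_1d([1, 2, 3, 10, 11, 12, 50, 51], k=3)
key = get_random_bytes(16)
blobs = [encrypt_cluster(c, key) for c in clusters.values()]
print(len(blobs), "encrypted cluster blobs ready for upload")
```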

A. Bhuvaneshwaran, P. Manickam, M. Ilayaraja, K. Sathesh Kumar, K. Shankar

Security and Privacy in Smart City Applications and Services: Opportunities and Challenges

A Smart City can be described as an urbanized town wherein Information and Communication Technology is at the core of its infrastructure. Smart cities provide several innovative and advanced services to their citizens in order to improve the quality of their lives. A Smart City must encompass forthcoming, highly advanced and integrated technology, the essence of which is the Internet of Things (IoT). Smart technologies like smart governance, smart communication, smart environment, smart transportation, smart energy, and waste and water management applications promise the smart growth of the city, but at the same time the city needs to enforce pervasive security and privacy of the large volume of data associated with these smart applications. Special measures are required to cover urbanization trends in the innovative administration of urban mobility and the various smart services offered to residents, visitors and local government to meet ever-expanding and manifold demands. When the city goes urban, its residents may suffer from various privacy and security issues caused by vulnerabilities in smart city applications. This chapter delivers a comprehensive overview of the security and privacy threats, vulnerabilities, and challenges of a smart city project, and suggests solutions in order to facilitate smart city development and governance.

Alka Verma, Abhirup Khanna, Amit Agrawal, Ashraf Darwish, Aboul Ella Hassanien

Chapter 5. Azure Security Center

Azure Security Center is Microsoft’s centralized dashboard solution for all things security, whether in Azure or in a hybrid topology.

Peter De Tender, David Rendon, Samuel Erskine

Designing and Implementing Data Warehouse for Agricultural Big Data

In recent years, precision agriculture, which uses modern information and communication technologies, has become very popular. Raw and semi-processed agricultural data are usually collected through various sources, such as the Internet of Things (IoT), sensors, satellites, weather stations, robots, farm equipment, farmers, and agribusinesses. Moreover, agricultural datasets are very large, complex, unstructured, heterogeneous, non-standardized, and inconsistent. Hence, agricultural data mining is considered a Big Data application in terms of volume, variety, velocity, and veracity. It is a key foundation for establishing a crop intelligence platform, which will enable resource-efficient agronomy decision making and recommendations. In this paper, we design and implement a continental-level agricultural data warehouse by combining Hive, MongoDB, and Cassandra. Our data warehouse provides: (1) a flexible schema; (2) data integration from real, multiple agricultural datasets; (3) data science and business intelligence support; (4) high performance; (5) high storage; (6) security; (7) governance and monitoring; (8) consistency, availability, and partition tolerance; (9) distributed and cloud deployment. We also evaluate the performance of our data warehouse.

Vuong M. Ngo, Nhien-An Le-Khac, M-Tahar Kechadi

4. Vorgehensweise bei der Erstellung der Marktübersicht

The following subchapters describe the basic procedure for identifying and describing the OS enterprise software systems relevant to the market survey, so that a high level of quality can be achieved in the survey for companies.

Alexandra Kees, Dominic Raimon Markowski

6. Marktübersicht OS BI-Software

This chapter shows how 17 practice-proven OS BI software systems were identified and provides, for each of these software systems, three feature-value tables and three morphological feature schemes that describe the respective OS BI software and that companies can use for a software preselection.

Alexandra Kees, Dominic Raimon Markowski

5. Marktübersicht OS ERP-Software

In this chapter, the methodology explained for compiling the market survey is applied to OS ERP software. The 19 practice-proven OS ERP software systems identified in this way are described using the developed feature-value tables and morphological feature schemes. On this basis, companies can make a software preselection.

Alexandra Kees, Dominic Raimon Markowski

10. Fazit

Big data, collaboration, cloud computing, and artificial intelligence (AI)/machine learning are digital technologies that are becoming relevant for companies at a rapid pace. More than ever, corporate management must engage with the transformation of the worlds of living and working and integrate new tasks into its business strategy. Above all, the integration of new technologies serves the goal of making information management more efficient and effective, for digitalization always also means accelerating internal and external communication and rethinking how business processes are handled with regard to their temporal dimension. The goal remains clear: increasing employee productivity and improving value creation have top priority.

Wolfgang Riggert

8. Marktübersicht OS BPM- und Workflowmanagement-Software

Due to the close link between business process modeling and workflow management, it makes sense to combine these two areas in one market survey. Eighteen practice-proven OS BPM/WfM software systems were identified, and they are described with feature-value tables not only for the software but also for the associated partner companies and software communities. The resulting market survey can then be used in companies to make a preselection of OS BPM/WfM software.

Alexandra Kees, Dominic Raimon Markowski

Chapter 6. Blockchain and Its Shariah Compliant Structure

Islamic finance has gained momentum in the world today. Irrespective of faith conviction, it has been accepted as a mode of financing in the world. The development of Islamic finance was gradual in the past. At the initial stage of its development, Islamic finance was concerned more with the Shariah compliance of the transactions and contracts used in it. Subsequently, focus was realigned towards Shariah harmonisation with respect to juristic views and Shariah governance. Islamic finance encompasses some fundamental religious prohibitions and the promotion of certain virtues enshrined in Islam, to be observed in all ramifications of business dealings, including the provision of services. Therefore, Islamic finance works in line with Islamic religious principles such as the ban on usury or interest, gambling, uncertainty and outright speculation.

Aishath Muneeza, Zakariya Mustapha

Chapter 9. A Software Toolkit for Complex Sensor Systems in Fog Environments

Smart things (such as sensors and embedded devices) are going to become an essential source of data within the Internet of Things (IoT). Equipped with sensors and actuators, these devices can gather information not only about their internal states but also about the environment and the entities they interact with. A huge variety and amount of data are generated by various devices and sensors, and they need to be processed and responded to in near real time, with the cloud becoming a dispensable part of that process. These data are used as a basis for various kinds of decision algorithms, machine learning, and artificial intelligence in general. Hence, the amount of data being created and sent over the network is vastly increasing, and the growing traffic becomes a burden for low-bandwidth and high-latency networks. Purely cloud-based solutions are not able to overcome these issues, as the physical distance between the user, the edge devices, and the cloud services is an essential factor in transmission latency and response times. The next logical step is pushing cloud services to the edge of the network: to the devices that collect the actual data and, at the same time, process it. Acquiring data and making decisions locally on those devices instead of on a physically distant cloud server will significantly reduce the amount of data sent through the network, reduce the required bandwidth, and increase data security. These are the main principles of fog computing, which we use as the fundamental paradigm in developing a software toolkit that considers the locality and connectivity of the devices in the underlying infrastructure to facilitate the integration of smart things at the network edge.

Dominik Grzelak, Carl Mai, René Schöne, Jan Falkenberg, Uwe Aßmann

Chapter 7. Power Consumption Minimization of Wireless Sensor Networks in the Internet of Things Era

Wireless Sensor Networks (WSN) are key components of the Internet of Things (IoT) revolution. WSN nodes are generally battery-powered; therefore, efficient usage of their energy budget is of paramount importance to avoid performance degradation in IoT applications. To this end, this chapter proposes techniques to manage the power consumption of WSN nodes. The aim of the first technique is to minimize the transmitted power for a given quality-of-service requirement at the receiver side. To this end, power control is considered at each WSN node, as well as the use of multiple distributed access points at the receiver side. The second technique to reduce WSN energy consumption is energy harvesting (EH). Namely, the use of artificial-light EH is considered to extend the WSN lifetime, and an experimental setup based on a photovoltaic cell, a boost converter and a commercial WSN node is presented. It is shown that, under certain settings, it is possible to extend the WSN node's lifetime without bound when the transmission time period is above a certain threshold.

Jordi Serra, David Pubill, Christos Verikoukis

Chapter 5. Computational Intelligence for Simulating a LiDAR Sensor

Cyber-Physical and Internet-of-Things Automotive Applications

In this chapter, an overview is presented of some of the computational intelligence techniques most commonly used to provide new capabilities to sensor networks in Cyber-Physical and Internet-of-Things environments, and to verify and evaluate the reliability of sensor networks. Nowadays, the on-chip Light Detection and Ranging (LiDAR) concept has brought a great technological challenge to sensor network applications in Cyber-Physical and Internet-of-Things systems. Therefore, the modelling and simulation of LiDAR sensor networks is also included in this chapter, which is structured as follows. First, a brief description of the theoretical modelling of the mathematical principle of operation is outlined. Subsequently, a review of the state of the art of computational intelligence techniques in sensor system simulations is given. Likewise, a use case of applying computational intelligence techniques to LiDAR sensor networks in a Cyber-Physical System environment is presented. In this use case, a model library with four specific artificial-intelligence-based methods is designed based on the sensory information database provided by the LiDAR simulation: a multi-layer perceptron neural network, a self-organizing map, a support vector machine, and a k-nearest neighbour classifier. The results demonstrate the suitability of computational intelligence methods for increasing the reliability of sensor networks when addressing the key challenges of safety and security in automotive applications.

Fernando Castaño, Gerardo Beruvides, Alberto Villalonga, Rodolfo E. Haber

Blockchain Federation for Complex Distributed Applications

Blockchains are immutable distributed ledger systems, usually without a central authority, that enable people to establish trusted applications among untrusted parties. However, blockchain performance is a challenge for massive applications, and much research has aimed at improving it, including side blockchains and the interconnection of blockchains. In practice, a distributed application usually needs computing, storage and transport resources, possibly with permissioned access, which means that no single current blockchain can satisfy all these demands simultaneously. This paper proposes Blockchain Federation, which consolidates several blockchains to support complex distributed applications. Two typical application scenarios are implemented following the proposed concept, combining emerging blockchain technologies to meet the demands of complex peer-to-peer applications. Finally, conclusions are drawn, and a future direction of Blockchain Federation evolution, Federation Blockchain, is discussed as well.

Zhitao Wan, Minqiang Cai, Xianghua Lin, Jinqing Yang

The Development Trend of Intelligent Speech Interaction

Giving computers the capabilities of listening, speaking, understanding and even thinking is the latest development direction of human-computer interaction. As one of the most convenient and natural ways to communicate, speech has become the most promising mode of human-computer interaction for the future, with more advantages than other interaction modes. As one of the most popular artificial intelligence (AI) technologies, intelligent speech interaction has been widely applied in many industries such as electronic commerce, smart homes, intelligent industry and manufacturing. It will change users' behavioral habits and become a new mode of human input and output. In this paper, we describe the current situation of intelligent speech interaction at home and abroad, give several examples to illustrate the application scenarios of speech interaction technology, and finally introduce its future development trend.

Yishuang Ning, Sheng He, Chunxiao Xing, Liang-Jie Zhang

CloudAgora: Democratizing the Cloud

In this paper we present CloudAgora, a platform that enables a democratic and fully decentralized cloud computing market in which participating parties enjoy significant advantages: on one hand, cloud consumers have access to low-cost storage and computation without having to blindly trust any central authority; on the other hand, any individual or company, big or small, can potentially serve as a cloud provider. Idle resources, be it CPU or disk space, are monetized and offered at competitive fees, regulated by the law of supply and demand. At the heart of the platform lies blockchain technology, which is used to record commitment policies, publicly verify off-chain services and trigger automatic micropayments. Our prototype is built on top of the Ethereum blockchain and is provided as an open source project.

Katerina Doka, Tasos Bakogiannis, Ioannis Mytilinis, Georgios Goumas

A WS-Agreement Based SLA Ontology for IoT Services

In the Internet of Things (IoT), billions of physical devices, distributed over a large geographic area, provide a near real-time state of the world. These devices’ capabilities can be abstracted as IoT services and delivered to users in a demand-driven way. In such a dynamic large-scale environment, a service provider who supports a service level agreement (SLA) can have a comprehensive competitive edge in terms of service quality management, service customization, optimized resource allocation, and trustworthiness. However, there is no consistent way of drafting an SLA with respect to describing heterogeneous IoT services, which obstructs automatic service selection, SLA negotiation, and SLA monitoring. In this paper, we propose an ontology, WIoT-SLA, to achieve semantic interoperability. We combine IoT service properties with two prominent web service SLA specifications: WS-Agreement and WSLA, to take advantage of their complementary features. This ontology is used to formalize the SLAs and SLA negotiation offers, which further facilitates the service selection and automatic SLA negotiation. It can also be used by a monitoring engine to detect SLA violations by providing the semantics of service level objectives (SLOs) and quality metrics. To evaluate our work, a prototype is implemented to demonstrate its feasibility and efficiency.

Fan Li, Christian Cabrera, Siobhán Clarke

On Development of Data Science and Machine Learning Applications in Databricks

Databricks is a unified analytics engine that allows rapid development of data science applications using machine learning techniques such as classification, linear and nonlinear regression, clustering, etc. The existence of myriad sophisticated computational options, however, can become overwhelming for designers, as it may not always be clear which choices produce the best predictive model for a specific data set. Further, the sheer dimensionality of big data sets makes it challenging for data scientists to gain a deep understanding of the results obtained by a model. This paper provides general guidelines for utilizing a variety of machine learning algorithms on the cloud computing platform Databricks. Visualization is an important means for users to understand the significance of the underlying data, so it is also demonstrated how graphical tools such as Tableau can be used to efficiently examine the results of classification or clustering. Dimensionality reduction techniques such as Principal Component Analysis (PCA), which help reduce the number of features in a learning experiment, are also discussed. To demonstrate the utility of Databricks tools, two big data sets are used for clustering and classification. A variety of machine learning algorithms are applied to both data sets, and it is shown how to obtain the most accurate learning models using appropriate evaluation methods.

Wenhao Ruan, Yifan Chen, Babak Forouraghi
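As a rough illustration of the kind of workflow the paper describes, the following PySpark sketch chains PCA-based dimensionality reduction with k-means clustering and a silhouette evaluation. The input path, the assumption that all CSV columns are numeric, and the values of k are illustrative, not the paper's settings.

```python
# Minimal sketch: PCA + k-means on a Spark DataFrame, as one might run in a
# Databricks notebook. Path, column assumptions, and k values are invented.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler, PCA
from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator

spark = SparkSession.builder.appName("pca-kmeans-sketch").getOrCreate()
df = spark.read.csv("/data/sample.csv", header=True, inferSchema=True)  # hypothetical path

# Assemble the (assumed all-numeric) columns into a single feature vector.
assembler = VectorAssembler(inputCols=df.columns, outputCol="features")
features = assembler.transform(df)

# Reduce dimensionality before clustering, as discussed in the paper.
pca = PCA(k=3, inputCol="features", outputCol="pca_features").fit(features)
reduced = pca.transform(features)

model = KMeans(k=5, featuresCol="pca_features").fit(reduced)
predictions = model.transform(reduced)

# Silhouette score as a simple evaluation of cluster quality.
score = ClusteringEvaluator(featuresCol="pca_features").evaluate(predictions)
print(f"silhouette = {score:.3f}")
```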

ClientNet Cluster: An Alternative for Transferring Big Data Files by Use of Mobile Code

Big Data has become a nontrivial problem in the field of business as well as in scientific applications, and it becomes more complex with the growth of data and the scaling of data entry points. These points refer to the remote and local sources where huge amounts of data are generated within tiny slots of time; they may also refer to end-user devices, including computers, sensors and wireless gadgets. As far as scientific applications are concerned, geophysics applications or real-time weather forecasting, for example, require heavy data and complex mathematical computations, generating large chunks of data that need to be transferred through conventional computer networks. The problem with Big Data applications emerges when a heavy amount of data is transferred or downloaded (as files or objects) from remote locations: results drawn in real time from large data files become obsolete, because new data keeps being appended to the files while the download by remote machines remains slower than the file growth. This paper addresses this problem and offers a possible solution through the ClientNet Cluster of remote computers, a specialized cluster of computers, as an alternative for real-time data analytics under hard network constraints. The idea is to move the code that performs the analytic processing to the remotely available big files and return the results to the distributed remote locations. The Big Data file thus does not need to move around the network for uploading or downloading whenever processing is required from distributed locations.

Waseem Akhtar Mufti

SMT-Based Modeling and Verification of Cloud Applications

Cloud applications have been evolving rapidly and have gained more and more attention in the past decade. Formal modeling and verification of cloud services are needed to guarantee the correctness and reliability of complex cloud applications. In this paper, we present a formal framework for modeling and verifying cloud applications based on the SMT solver Z3. Simple cloud services are specified as the basis for modeling composed and more complex cloud services. Three classes, Service, Composition and Cloud, representing simple cloud services, composition patterns and composed cloud services, are defined, which facilitates the further development of attributes and methods. We also propose an approach to check refinement and equivalence relations between cloud services, in which counterexamples can be generated automatically when the relation does not hold.

Xiyue Zhang, Meng Sun
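As a rough illustration of how a refinement check of this kind can be phrased for Z3, here is a minimal sketch using the z3-solver Python bindings. The toy services and the encoding are our own assumptions for illustration, not the paper's Service/Composition/Cloud classes.

```python
# Minimal sketch with the Z3 Python bindings (z3-solver): model two toy
# "services" as input/output relations and check refinement by asking the
# solver for a counterexample.
from z3 import Ints, Implies, And, Solver, Not, sat

x, y = Ints("x y")

# Service A doubles the input; service B doubles it but only accepts x >= 0.
service_a = (y == 2 * x)
service_b = And(x >= 0, y == 2 * x)

# B refines A if every behavior of B is also a behavior allowed by A.
s = Solver()
s.add(Not(Implies(service_b, service_a)))
if s.check() == sat:
    print("refinement violated, counterexample:", s.model())
else:
    print("service_b refines service_a")
```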

Maintaining Fog Trust Through Continuous Assessment

Cloud computing continues to provide a flexible and efficient way to deliver services, meeting user requirements and the challenges of the time. Software, infrastructure, and platforms are provided as services in cloud and fog computing in a cost-effective manner. The migration towards fog instigates new avenues of research into security and privacy. Trust depends on the measures taken for the availability, security, and privacy of users' services and data in the fog, as well as on the sharing of these statistics with stakeholders. Any lapse in security and privacy measures shatters users' trust. In order to provide a trustworthy security and privacy system, we have conducted a thorough survey of existing techniques. A generic model for trustworthiness is proposed in this paper. This model yields a comprehensive component-based architecture of a trust management system to aid fog service providers in preserving users' trust in a fog computing environment.

Hasan Ali Khattak, Muhammad Imran, Assad Abbas, Samee U. Khan

Chapter 3. Influencing Factors in the Bangalore Model

In this chapter, Design Thinking, Service-Dominant Logic (S-DL) and digitalization are described as the developments that the authors consider the root causes of the emergence of H2H (human-to-human) marketing. Readers already familiar with the individual concepts do not necessarily need to read these sections. However, we have deliberately opted for a rather detailed description, since we assume, on the one hand, that many readers do not yet know the topics in detail; on the other hand, the discussion reflects our specific perspective, which lays the foundation for understanding H2H marketing. The authors are not aware of any German-language sources that describe Design Thinking in such a condensed yet comprehensive way. Design Thinking is defined in detail and presented as a mindset, a method and a toolbox, summarizing the now extensive literature on the topic. In addition, Design Thinking is discussed in connection with the solution of ill-defined problems ("wicked problems") and with business model development using the lean start-up method and the Business Model Canvas (BMC). The importance of S-DL for marketing plays at best a minor role in Germany, while in the Anglo-American literature of the last ten years it has played an outstanding role in the discussion of marketing as a possible "grand theory". Based on the fundamental premises of S-DL and an understanding of its central concepts, we address the change of mindset as well as the theoretical and integrating impulses of S-DL, which are of great importance for H2H marketing. Finally, the remarks on digitalization represent only a small excerpt of the effects it has and will have on the business world. Our focus is on those aspects of digitalization that, in our view, will lastingly change the behavior of market participants and thus also marketing. Marketing tends to reduce digitalization to the conversion of analog into digital information, which goes hand in hand with reducing the function of marketing, even more than before, to its communication function. Owing to this lack of competencies, the function is now increasingly taken over by IT experts. To reverse this, marketing must tackle the digitalization of its own activities, which will only work if marketing defines its own processes and documents them transparently. Digitalization gives marketing the chance to gain deeper insights into the "homo digitalis" and his changed buying behavior. If marketing uses these insights creatively and in an interdisciplinary way, based on competencies in data management and analysis, for the value proposition of its own company, the importance of marketing will rise again. Marketing can then take on the task of mastering the digital transformation of companies with innovative business models that make sense for the people involved.

Waldemar Pförtsch, Uwe Sponholz

Chapter 3. Theoretical Classification and Background

Theories offer orientation in a complex reality. Picot et al. (2012) compare theories to the tools of a craftsman and describe them as instruments of insight for scientists: depending on the problem at hand, certain factors are taken into account while others are neglected or abstracted away, and a theory's usefulness only becomes apparent in its application. That is, if a theory is used frequently, this indicates its usefulness.

Katrin Coleman

Chapter 2. Basic Frame of Reference and State of Research

In order to fundamentally capture the division of labor in order processing along the transport chain, the following sections outline the relevant background. Where different forms and characteristics exist, they are introduced, with a note on the frame of reference adopted in this work. To this end, order processing based on the division of labor and the multi-stage transport chain are first described separately before being brought together, and the state of research is then established through a systematic literature review.

Katrin Coleman

Chapter 16. Online Anomaly Detection over Big Data Streams

In many domains, high-quality data are used as a foundation for decision-making. An essential component to assess data quality lies in anomaly detection. We describe and empirically evaluate the design and implementation of a framework for data quality testing over real-world streams in a large-scale telecommunication network. This approach is both general—by using general-purpose measures borrowed from information theory and statistics—and scalable—through anomaly detection pipelines that are executed in a distributed setting over state-of-the-art big data streaming and batch processing infrastructures. We empirically evaluate our system and discuss its merits and limitations by comparing it to existing anomaly detection techniques, showing its high accuracy, efficiency, as well as its scalability in parallelizing operations across a large number of nodes.

Laura Rettig, Mourad Khayati, Philippe Cudré-Mauroux, Michał Piorkówski
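As a toy illustration of the windowed, information-theoretic flavor of detection the chapter describes (not the authors' distributed pipeline), the sketch below flags abrupt entropy changes between consecutive sliding windows over a stream. Window size and threshold are invented for the example.

```python
# Minimal sketch of windowed, entropy-based anomaly detection over a stream.
import math
from collections import Counter, deque

def entropy(window):
    """Empirical Shannon entropy (bits) of the events in the window."""
    counts = Counter(window)
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def detect(stream, window_size=100, threshold=0.5):
    """Yield (index, entropy) whenever entropy jumps abruptly between windows."""
    window, previous = deque(maxlen=window_size), None
    for i, event in enumerate(stream):
        window.append(event)
        if len(window) == window_size:
            h = entropy(window)
            if previous is not None and abs(h - previous) > threshold:
                yield i, h
            previous = h

# Usage: a steady stream that suddenly becomes uniform noise.
stream = ["ok"] * 500 + [f"e{i % 50}" for i in range(500)]
for idx, h in detect(stream):
    print(f"anomaly near event {idx}, window entropy {h:.2f} bits")
```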

Chapter 3. Data Scientists

What is a data scientist? How can you become one? How can you form a team of data scientists that fits your organization? In this chapter, we trace the skillset of a successful data scientist and define the necessary competencies. We give a disambiguation to other historically or contemporary definitions of the term and show how a career as a data scientist might get started. Finally, we will answer the third question, that is, how to build analytics teams within a data-driven organization.

Thilo Stadelmann, Kurt Stockinger, Gundula Heinatz Bürki, Martin Braschler

Chapter 15. Security of Data Science and Data Science for Security

In this chapter, we present a brief overview of important topics regarding the connection of data science and security. In the first part, we focus on the security of data science and discuss a selection of security aspects that data scientists should consider to make their services and products more secure. In the second part about security for data science, we switch sides and present some applications where data science plays a critical role in pushing the state-of-the-art in securing information systems. This includes a detailed look at the potential and challenges of applying machine learning to the problem of detecting obfuscated JavaScripts.

Bernhard Tellenbach, Marc Rennhard, Remo Schweizer

Chapter 21. Large-Scale Data-Driven Financial Risk Assessment

The state of data in finance makes near real-time and consistent assessment of financial risks almost impossible today. The aggregate measures produced by traditional methods are rigid, infrequent, and not available when needed. In this chapter, we make the point that this situation can be remedied by introducing a suitable standard for data and algorithms at the deep technological level combined with the use of Big Data technologies. Specifically, we present the ACTUS approach to standardizing the modeling of financial contracts in view of financial analysis, which provides a methodological concept together with a data standard and computational algorithms. We present a proof of concept of ACTUS-based financial analysis with real data provided by the European Central Bank. Our experimental results with respect to computational performance of this approach in an Apache Spark based Big Data environment show close to linear scalability. The chapter closes with implications for data science.

Wolfgang Breymann, Nils Bundi, Jonas Heitz, Johannes Micheler, Kurt Stockinger

Intrusion Detection Systems for Connected Vehicles – Concepts and Challenges for Privacy and Cybersecurity

Increasing connectivity in vehicles raises the risk of cyberattacks with serious consequences both for privacy and for the life and limb of the occupants. Such attacks exploit intrinsic vulnerabilities in the Controller Area Network protocol, the most widespread standard in the automotive industry for implementing in-vehicle communication networks. Intrusion detection systems (IDS) are a solution approach increasingly regarded as effective and practicable in both research and industry. The basic architecture and the intended data processing steps in current proposals for IDS in the context of connected vehicles point to a growing tension between the necessary protection of in-vehicle networks through data analysis (cybersecurity) and the usually absent or insufficient consideration of the principles of data protection by design and data protection by default.

Hervais Simo, Michael Waidner, Christian Geminn

Realizing Fundamental Rights in Connected and Automated Road Traffic

Connectivity and automation are transforming the automobile from a protected private space into a part of the Internet. They increase the safety of road traffic and the convenience of travel. At the same time, however, mobile life is logged and made accessible to many interested parties. These changes touch on many fundamental rights that guarantee mobility, communication and the protection of personality. This contribution examines how the law can balance the realization of these fundamental rights in the connected car.

Alexander Roßnagel

Connected Cars in China: Technology, Data Protection and Regulatory Responses

China will be one of the major players in the future connected car industry because of its still-growing domestic market, increasing technological advantages in ICT, data processing and platform services, strong industrial investment, and a very dedicated industrial strategy with support from the central government. This contribution first reviews recent developments in the connected car industry in China with respect to technology, innovation, major players and some emerging problems. Further, it discusses the major existing problems and difficulties of China's new-generation car industry, including the lack of certain core technologies such as computing chips, complex road conditions, weak data privacy protection, and platform service security, among other things. This is followed by a detailed discussion of the multiple regulatory responses from the Chinese state authorities, which have designated the smart car industry as one of China's future key growth sectors capable of overtaking the West. These responses include clear strategic policy making, tremendous financial support, experimental spirit and practices, and considerable updating of privacy law. In concluding, this contribution offers predictions and advice for both Chinese and foreign players seeking to engage in China's increasingly competitive but promising connected car market.

Bo Zhao

Open Cars and Data Protection in the United States of America

Openness, data access and data protection have become important attributes, features and value factors for cars and, more broadly, mobility services. An open car comes with interoperable interfaces and openly disclosed software and hardware for technology upgrades, aftermarket products, services and security researchers. The open car can protect data privacy and security as well as, or better than, proprietary automotive products do today. The closed car remains controlled by its original manufacturer, which is in most cases a large company with a strong brand and a good safety track record, well capitalized, subsidized or supported by governments, and generally considered more trustworthy than many smaller companies. Owners of closed cars have fewer options and depend on the original manufacturers with respect to data privacy and security protections. Compared to the closed, proprietary car, the open car comes out ahead based on technology, competition, sustainability and environmental policy considerations. Car makers and buyers should start considering, communicating and bargaining about the degree of openness of vehicle interfaces and data access, as well as the data protection and privacy safeguards relating to cars and mobility services.

Lothar Determann, Bruce Perens

Facial Recognition on Cloud for Android Based Wearable Devices

Facial recognition applications for Android Based Wearable Devices (ABWD) can benefit from cloud computing as such devices become easy to acquire and widely available. Facial recognition has several applications in assistance, guidance, security and so on. We can greatly reduce processing time by executing the facial recognition application on the cloud, and clients will not have to store the big data needed for image verification on their local machines (mobile phones, PCs, etc.). Compared to the cost of acquiring an equally powerful server machine, cloud computing increases storage and processing power at much lower cost. The plan in this research is to enhance the user experience of augmented display on Android-based wearable devices. To that end, a system is proposed in which a person wearing Android-based smart glasses sends an image of an object to a cloud server powered by Hadoop (open-source software for scalable, reliable, distributed computing). The facial recognition application on the cloud server recognizes the face against a database already present on the server and then returns the results to the Android-based wearable client devices, which display the detailed result as an augmented display to the person wearing them. By moving the facial recognition process and the database to the cloud server, multiple clients no longer need to maintain local databases, and the devices require less processing power, which reduces cost and processing time.

Zeeshan Shaukat, Chuangbai Xiao, M. Saqlain Aslam, Qurat ul Ain Farooq, Sara Aiman

Heterogeneity-Aware Data Placement in Hybrid Clouds

In next-generation cloud computing clusters, the performance of data-intensive applications will be limited, among other factors, by disk data transfer rates. In order to mitigate performance impacts, cloud systems offering hierarchical storage architectures are becoming commonplace. The Hadoop File System (HDFS) offers a collection of storage policies that exploit different storage types such as RAM_DISK, SSD, HDD, and ARCHIVE. However, developing algorithms that leverage heterogeneous storage through efficient data placement has been challenging. This work presents an intelligent algorithm based on genetic programming which finds an optimal mapping of input datasets to storage types on a Hadoop file system.

Jack D. Marquez, Juan D. Gonzalez, Oscar H. Mondragon
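As a rough stand-in for the evolutionary search described above (the paper names genetic programming; the sketch below is a plain genetic algorithm, and the dataset sizes, tier capacities, access frequencies and fitness function are all invented), one can evolve a mapping of datasets to HDFS storage types:

```python
# Toy genetic algorithm: evolve a mapping of datasets to HDFS storage tiers,
# rewarding fast tiers for frequently accessed data and penalizing overflow.
import random

TIERS = ["RAM_DISK", "SSD", "HDD", "ARCHIVE"]
SPEED = {"RAM_DISK": 10.0, "SSD": 4.0, "HDD": 1.0, "ARCHIVE": 0.2}  # relative speeds (assumed)
CAPACITY = {"RAM_DISK": 4, "SSD": 16, "HDD": 64, "ARCHIVE": 256}    # GB (assumed)
SIZE = [2, 3, 8, 20, 40]   # dataset sizes in GB (assumed)
FREQ = [9, 7, 3, 1, 1]     # access frequency per dataset (assumed)

def fitness(mapping):
    score = sum(SPEED[t] * f for t, f in zip(mapping, FREQ))
    for tier in TIERS:  # heavy penalty for exceeding a tier's capacity
        used = sum(s for t, s in zip(mapping, SIZE) if t == tier)
        if used > CAPACITY[tier]:
            score -= 100 * (used - CAPACITY[tier])
    return score

def evolve(generations=300, pop_size=40, mutation=0.1):
    pop = [[random.choice(TIERS) for _ in SIZE] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < mutation:         # point mutation
                child[random.randrange(len(child))] = random.choice(TIERS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # e.g. hot small datasets on RAM_DISK/SSD, cold large ones on HDD/ARCHIVE
```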

Dynamic Network Anomaly Detection System by Using Deep Learning Techniques

The Internet and computer networks currently suffer from serious security threats, which keep changing and evolve into new, unknown variants. In order to maintain network security, we design and implement a dynamic network anomaly detection system using deep learning methods. We use Long Short-Term Memory (LSTM) to build a deep neural network model and add an Attention Mechanism (AM) to enhance the model's performance. The SMOTE algorithm and an improved loss function are used to handle the class-imbalance problem in the CSE-CIC-IDS2018 dataset. The experimental results show that the classification accuracy of our model reaches 96.2%, higher than that of other machine learning algorithms. In addition, the class-imbalance problem is alleviated to a certain extent, making our method highly practical.

Peng Lin, Kejiang Ye, Cheng-Zhong Xu
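A minimal sketch of two of the named ingredients, SMOTE rebalancing and an LSTM classifier, is shown below using imbalanced-learn and Keras. The synthetic data, shapes and hyperparameters are illustrative, and the paper's attention mechanism and improved loss function are omitted for brevity.

```python
# Minimal sketch: SMOTE oversampling followed by a small LSTM classifier.
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Synthetic stand-in for a flow-feature matrix and binary labels.
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, 1000)

# Rebalance the classes before training, as the paper does with SMOTE.
X_res, y_res = SMOTE().fit_resample(X, y)

# Treat each feature vector as a length-20 sequence of scalars for the LSTM.
X_seq = X_res.reshape((-1, 20, 1))

model = Sequential([
    LSTM(64, input_shape=(20, 1)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_seq, y_res, epochs=3, batch_size=64, verbose=0)
```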

Ultra-Low Power Localization System Using Mobile Cloud Computing

In existing positioning systems based on Bluetooth (BT), interference between positioning device signals, the slow processing speed of positioning data and the high energy consumption of positioning devices degrade positioning accuracy and service quality. In this paper, we propose an ultra-low-power indoor localization system using mobile cloud computing. The mobile cloud server reduces the signal interference of the positioning devices, improves positioning accuracy and reduces system energy consumption by controlling the working mode of the positioning devices. A simultaneous localization and power adaptation scheme is developed. In a real experimental evaluation, our proposed system can localize a terminal within a 3 m area with 98% accuracy and an average positioning error of less than 1.55 m. Compared with other BLE systems, our system reduces average energy consumption by 97%.

Junjian Huang, Yubin Zhao, XiaoFan Li, Cheng-Zhong Xu

A Web-Service to Monitor a Wireless Sensor Network

In recent years, interest in the Internet of Things has been growing, and WSNs are a promising technology that can be applied in many situations. Regardless of the nature of the application, WSNs are often used for data acquisition, obtaining information from an environment of interest, so it is essential to consider how this data will be made available to users. Over the last years, an increasing number of web services have been used to mediate between databases and end users, providing familiar interfaces and multi-platform access to the data. To address this problem, our study proposes a web application based on the MVC architecture to monitor, organize and manage devices and data in a wireless sensor network. A functional evaluation of the proposed system is presented, along with a discussion of the test results.

Rayanne M. C. Silveira, Francisco P. R. S. Alves, Allyx Fontaine, Ewaldo E. C. Santana

A Novel Coalitional Game-Theoretic Approach for Energy-Aware Dynamic VM Consolidation in Heterogeneous Cloud Datacenters

Server consolidation plays an important role in the energy management and load balancing of cloud computing systems. Dynamic virtual machine (VM) consolidation is a promising approach in this direction, which aims at using the fewest active physical machines (PMs) by appropriately migrating VMs to reduce resource consumption. The resulting optimization problem is well acknowledged to be NP-hard. In this paper, we propose a novel merge-and-split-based coalitional game-theoretic approach for VM consolidation in heterogeneous clouds. The proposed approach first partitions PMs into different groups based on their load levels, then employs a coalitional-game-based VM consolidation algorithm (CGMS) to choose members from such groups to form effective coalitions, performs VM migrations among the coalition members to maximize the payoff of every coalition, and closes PMs with low energy efficiency. Experimental results based on multiple cases clearly demonstrate that our proposed approach outperforms traditional ones in terms of energy saving and load fairness.

Xuan Xiao, Yunni Xia, Feng Zeng, Wanbo Zheng, Xiaoning Sun, Qinglan Peng, Yu Guo, Xin Luo

Profit Maximization and Time Minimization Admission Control and Resource Scheduling for Cloud-Based Big Data Analytics-as-a-Service Platforms

Big data analytics typically requires large amounts of resources to process ever-increasing data volumes. This can be time-consuming and result in considerable expenses. Analytics-as-a-Service (AaaS) platforms provide a way to tackle expensive resource costs and lengthy data processing times by leveraging automatic resource management with a pay-per-use service delivery model. This paper explores the optimization of resource management algorithms for AaaS platforms to automatically and elastically provision cloud resources to execute queries with Service Level Agreement (SLA) guarantees. We present admission control and cloud resource scheduling algorithms that serve multiple objectives, including profit maximization for AaaS platform providers and query time minimization for users. Moreover, to enable queries that require timely responses and/or have constrained budgets, we apply data sampling-based admission control and resource scheduling, where accuracy can be traded off for reduced costs and quicker responses when necessary. We conduct extensive experimental evaluations of the algorithms' performance compared to state-of-the-art algorithms. Experiment results show that our proposed algorithms perform significantly better in increasing query admission rates, consuming fewer resources and hence reducing costs, and ultimately provide a more flexible resource management solution for fast, cost-effective, and reliable big data processing.

Yali Zhao, Rodrigo N. Calheiros, Athanasios V. Vasilakos, James Bailey, Richard O. Sinnott

Live Migration of Virtual Machines in OpenStack: A Perspective from Reliability Evaluation

Virtualization technology is widely used in cloud data centers and today's IT infrastructure. A key technology for server virtualization is the live migration of virtual machines (VMs), which allows VMs to be moved from one physical host to another while minimizing service downtime. Cloud providers usually use a cloud operating system for virtual machine management; currently the most widely used open source cloud operating system is OpenStack. In this paper, we investigate the reliability of VM live migration in OpenStack by increasing system pressure and injecting network failures during migration. We analyze the impact of these pressures and failures on the performance of VM live migration. The experimental results can guide data center administrators in migration decisions and fault localization, and can help researchers find bottlenecks and optimization opportunities for live migration in OpenStack.

Jin Hao, Kejiang Ye, Cheng-Zhong Xu

An Approach to Failure Prediction in Cluster by Self-updating Cause-and-Effect Graph

Cluster systems are widely used in cloud computing, high-performance computing, and other fields, and their usage and scale have shown a sharp upward trend. Unfortunately, larger cluster systems are more prone to failures, and the difficulty and cost of repairing failures are unusually high. The importance and necessity of failure prediction in cluster systems are therefore obvious. To address this challenge, we propose an approach to failure prediction in cluster systems based on a self-updating cause-and-effect graph. Different from previous approaches, the most novel point of our approach is that it can automatically mine the causality among log events from cluster systems, and build and update the cause-and-effect graph for failure prediction throughout their life cycle. In addition, we use real logs from the Blue Gene/L system to verify the effectiveness of our approach and compare it to other approaches using the same logs. The results show that our approach outperforms the others, with the best precision and recall reaching 89% and 85%, respectively.

Yan Yu, Haopeng Chen
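As a toy illustration of the self-updating idea (our own simplification, not the authors' algorithm), the sketch below strengthens cause-and-effect edges between log event types that co-occur within a time window and predicts a failure once an edge to a failure event grows strong enough. The window length, threshold and event names are invented.

```python
# Toy self-updating cause-and-effect graph over a stream of (timestamp, event) logs.
from collections import defaultdict

class CauseEffectGraph:
    def __init__(self, window=60.0, threshold=2):
        self.window, self.threshold = window, threshold
        self.edges = defaultdict(int)   # (cause, effect) -> co-occurrence count
        self.recent = []                # (timestamp, event_type) within the window

    def observe(self, ts, event):
        # Drop stale events, then strengthen edges from recent events to this one.
        self.recent = [(t, e) for t, e in self.recent if ts - t <= self.window]
        for _, earlier in self.recent:
            self.edges[(earlier, event)] += 1
        self.recent.append((ts, event))

    def predicts_failure(self, event, failure="FATAL"):
        # Predict a failure when the mined edge is strong enough.
        return self.edges[(event, failure)] >= self.threshold

g = CauseEffectGraph()
for ts, e in [(0, "NET_WARN"), (10, "FATAL"), (100, "NET_WARN"), (105, "FATAL")]:
    g.observe(ts, e)
print(g.predicts_failure("NET_WARN"))  # True once the co-occurrence count reaches 2
```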

Toward Accurate and Efficient Emulation of Public Blockchains in the Cloud

Blockchain is an enabler of many emerging decentralized applications in areas of cryptocurrency, Internet of Things, smart healthcare, among many others. Although various open-source blockchain frameworks are available in the form of virtual machine images or Docker images on public clouds, the infrastructure of mainstream blockchains nonetheless exhibits a technical barrier for many users who want to modify or test out new research ideas in blockchains. To make matters worse, many advantages of blockchain systems can be demonstrated only at large scales, e.g., thousands of nodes, which are not always available to researchers. This paper presents an accurate and efficient emulation system to replay the execution of large-scale blockchain systems on tens of thousands of nodes. In contrast to existing work that simulates blockchains with artificial timestamp injection, the proposed system is designed to execute real proof-of-work workloads along with peer-to-peer network communications and hash-based immutability. In addition, the proposed system employs a preprocessing approach to avoid per-node computation overhead at runtime and thus achieves practical scales. We have evaluated the system by emulating up to 20,000 nodes on Amazon Web Services (AWS), showing both high accuracy and high efficiency with millions of transactions.

Xinying Wang, Abdullah Al-Mamun, Feng Yan, Dongfang Zhao

Multiple Workflow Scheduling with Offloading Tasks to Edge Cloud

Edge computing can realize data locality between the cloud and users, and it can be applied to task offloading, i.e., moving part of the workload on a mobile terminal to an edge or cloud system to minimize response time while reducing energy consumption. Mobile workflow jobs have become widespread as the computational power of mobile terminals has advanced, so how to offload or schedule each task in a mobile workflow is one of the current challenging issues. In this paper, we propose a task scheduling algorithm with task offloading, called priority-based continuous task selection for offloading (PCTSO), to minimize the schedule length while keeping the energy consumption at the mobile client low. PCTSO tries to select dependent tasks such that many tasks are offloaded and many vCPUs in the edge cloud are utilized; in this manner, the degree of parallelism can be maintained. Simulation results demonstrate that PCTSO outperforms other algorithms in schedule length while satisfying the energy constraint.

Hidehiro Kanemitsu, Masaki Hanada, Hidenori Nakazato

Systematic Construction, Execution, and Reproduction of Complex Performance Benchmarks

In this work, we present the next generation of the Elba toolkit available under a Beta release, showing how we have used it for experimental research in computer systems using RUBBoS, a well-known n-tier system benchmark, as example. In particular, we show how we have leveraged milliScope – Elba toolkit’s monitoring and instrumentation framework – to collect log data from benchmark executions at unprecedented fine-granularity, as well as how we have specified benchmark workflows with WED-Make – a declarative workflow language whose main characteristic is to facilitate the declaration of dependencies. We also show how to execute WED-Makefiles (i.e., workflow specifications written with WED-Make), and how we have successfully reproduced the experimental verification of the millibottleneck theory of performance bugs in multiple cloud environments and systems.

Rodrigo Alves Lima, Joshua Kimball, João E. Ferreira, Calton Pu

Min-Edge P-cycles: An Efficient Approach for Computing P-cycles in Optical Data Center Networks

Effective network protection requires that extra resources be used in failure events. Pre-configured protection cycles (P-cycles) are proposed to protect mesh-based networks using few extra resources. A number of heuristic methods have been developed to overcome the complexity of finding optimum P-cycles in dense optical networks. The processing time of existing approaches depends on the number of working wavelengths. As the number of working wavelengths is increasing in modern networks, the processing time of current P-cycle computing approaches will continue to increase. In this paper, we propose an approach, called Min-Edge P-cycle (MEP), that addresses this problem. The core of the proposed approach is an iterative algorithm that uses the minimum-weight edge in each iteration. Our approach provides the same redundancy requirements as the previously known unity cycle method but it does not depend on the number of working wavelengths. The new approach can significantly reduce the processing time of computing P-cycles in large scale optical, server-centric data center networks, e.g., BCube, FiConn, and DCell networks.

Amir Mirzaeinia, Abdelmounaam Rezgui, Zaki Malik, Mehdi Mirzaeinia
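As a rough sketch of the min-edge iteration described above (heavily simplified: it protects only on-cycle edges, runs on an invented toy graph, and uses networkx rather than the authors' MEP implementation), one can repeatedly pick the lightest unprotected edge and close a cycle around it via a shortest path that avoids the edge itself:

```python
# Simplified min-edge cycle construction on a toy mesh topology (networkx).
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 1), ("b", "c", 2), ("c", "d", 1),
    ("d", "a", 3), ("a", "c", 2), ("b", "d", 4),
])

unprotected = set(map(frozenset, G.edges))
cycles = []
while unprotected:
    # Pick the minimum-weight edge that still needs protection.
    u, v = min(unprotected, key=lambda e: G.edges[tuple(e)]["weight"])
    H = G.copy()
    H.remove_edge(u, v)
    try:
        # Close a cycle around (u, v) via a shortest path avoiding the edge.
        path = nx.shortest_path(H, u, v, weight="weight")
    except nx.NetworkXNoPath:
        unprotected.discard(frozenset((u, v)))
        continue
    cycle = path + [u]
    cycles.append(cycle)
    # Every edge on the cycle (including (u, v) itself) is now protected.
    for e in zip(cycle, cycle[1:]):
        unprotected.discard(frozenset(e))
print(cycles)
```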

An Overview of Cloud Computing Testing Research

With the rapid growth in information technology, there is a significant increase in research activities in the field of cloud computing. Cloud testing can be interpreted as (i) testing of cloud applications, which involves continuous monitoring of cloud application status to verify Service Level Agreements, and (ii) testing as a cloud service which involves using the cloud as a testing middleware to execute a large-scale simulation of real-time user interactions. This study aims to examine the methodologies and tools used in cloud testing and the current research trends in cloud computing testing.

Jia Yao, Babak Maleki Shoja, Nasseh Tabrizi

A Parallel Algorithm for Bayesian Text Classification Based on Noise Elimination and Dimension Reduction in Spark Computing Environment

The Naive Bayesian algorithm is one of the ten classical algorithms in data mining and is widely used as the basic theory for text classification. With the high-speed development of the Internet and information systems, huge amounts of data are being produced all the time, and problems are certain to arise when the traditional Bayesian classification algorithm addresses massive amounts of data, especially without a parallel computing framework. This paper proposes an improved Bayesian algorithm, INBCS, for text classification in the Spark computing environment, improving the Naive Bayesian algorithm based on a polynomial model. For data preprocessing, this paper first proposes a parallel noise elimination algorithm, and then another parallel dimension reduction algorithm based on Information Gain and TextRank computation in the Spark environment. Based on these preprocessed data, an improved parallel method is proposed for calculating the conditional probability that comprehensively considers the effects of the feature items in each document, class and training set. Finally, through experiments on different widely used corpora on the Spark computation platform, the results illustrate that INBCS can obtain higher accuracy and efficiency than some current improvements and implementations of the Naive Bayesian algorithm in Spark's ML library.

Zhuo Tang, Wei Xiao, Bin Lu, Youfei Zuo, Yuan Zhou, Keqin Li
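For orientation, the sketch below shows the conventional Spark baseline that an approach like INBCS improves on: a multinomial Naive Bayes text classifier over TF-IDF features in PySpark. The column names and input path are assumptions, and the paper's noise-elimination and dimension-reduction steps are not reproduced here.

```python
# Baseline sketch: multinomial Naive Bayes text classification in PySpark.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.classification import NaiveBayes

spark = SparkSession.builder.appName("nb-text-sketch").getOrCreate()
# Expect columns "text" (string) and "label" (numeric class id); path is hypothetical.
df = spark.read.json("/data/corpus.json")

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="words"),
    HashingTF(inputCol="words", outputCol="tf", numFeatures=1 << 18),
    IDF(inputCol="tf", outputCol="features"),
    NaiveBayes(modelType="multinomial"),
])

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)

# Simple accuracy: fraction of test rows whose prediction matches the label.
accuracy = model.transform(test).where("prediction = label").count() / test.count()
print(f"accuracy = {accuracy:.3f}")
```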

The Case for Physical Memory Pools: A Vision Paper

The cloud is a rapidly expanding and increasingly prominent component of modern computing. Monolithic servers limit the flexibility of cloud-based systems, however, due to static memory limitations. Developments in OS design, distributed memory systems, and address translation have been crucial in aiding the progress of the cloud. In this paper, we discuss recent developments in virtualization, OS design and distributed memory structures with regards to their current impact and relevance to future work on eliminating memory limits in cloud computing. We argue that creating physical memory pools is essential for cheaper and more efficient cloud computing infrastructures, and we identify research challenges to implement these structures.

Heather Craddock, Lakshmi Prasanna Konudula, Kun Cheng, Gökhan Kul

Chapter 2. IoT Architecture

The domain of the internet of things encompasses a wide range of technologies. Thus, a single reference architecture cannot be used as a blueprint for all possible concrete implementations. While a reference model can probably be identified, it is likely that several reference architectures will coexist in the internet of things. In this context, architecture is specifically defined as a framework for specifying the physical components, the functional organization and configuration of a network, operational principles and procedures, as well as the data formats used in its operation. In fact, IoT is like an umbrella over all the possible computing devices around us. Therefore, the IoT architecture should be open, with open protocols, to support a variety of existing network applications, and middleware for scalability, security, and semantic representation should also be included to promote the integration of the data world with the internet. This chapter provides a comprehensive review of internet of things architectures.

Mohammad Ali Jabraeil Jamali, Bahareh Bahrami, Arash Heidari, Parisa Allahverdizadeh, Farhad Norouzi

Chapter 1. The IoT Landscape

During the history of human life, waves 1, 2, and 3 have been the waves of agriculture, industry, and information technology, respectively. These waves have created tremendous changes in the quality of human life. The fourth wave of human life is the emergence of a cyber age in which everything is connected to everyone, in any place, at any time. With the help of this huge evolution, all communication needs will be met at any time, with minimal human intervention, easily through the internet of things.

Mohammad Ali Jabraeil Jamali, Bahareh Bahrami, Arash Heidari, Parisa Allahverdizadeh, Farhad Norouzi

Chapter 3. IoT Security

Even though the internet of things is a hybrid platform built over overlay networks such as the internet, cloud computing, fog, or edge, many security solutions for those networks cannot be directly used on the resource-constrained devices of the IoT, hence the need for new security solutions. Security is one of the most important problems for IoT technologies, applications, and platforms, and it is not an issue that can be treated independently: security has to be designed and built into each layer of an IoT solution (from the device layer to the application layer). IoT security is not only about securing the network and data; it goes beyond that to attacks that can target human health or life. In this chapter, we discuss the security challenges of the IoT. First, we discuss some basic concepts of security and security requirements in the context of the IoT. Then, we consider fundamental security issues in the IoT and thereafter highlight the security issues that need immediate attention.

Mohammad Ali Jabraeil Jamali, Bahareh Bahrami, Arash Heidari, Parisa Allahverdizadeh, Farhad Norouzi

Optimal Design of Smarter Tourism User Experience Driven by Service Design

The rapid development of information technology and the increasing individual demands of consumers have promoted the intelligent development of the tourism market. The service design concept has important research value for optimizing the user experience of "smarter tourism". This article takes Taierzhuang Ancient City as a research sample, focuses on designing and optimizing each contact point in the service blueprint, and derives a complete scenic tour experience model. The results show that "smarter tourism" mainly involves two aspects: the efficient provision of scenic area services and the degree to which individual needs are satisfied; the latter is the key to judging satisfaction with scenic area services. It follows that the "smarter tourism" user experience service still needs to be improved, and more innovations can be launched in the future based on service contact points and service process optimization.

Qing Xu

Study on Interaction Design of SANGFOR Institute Based on Mental Model

Establishing an online learning model compatible with the information-age lifestyle has become a focus of attention. This study takes the interaction design of SANGFOR Institute, the author's ongoing entity project, as its research object. Based on theoretical research on mental models, this study collects user needs, constructs a mental model, and uses it to identify potential design opportunities. The mental model is then used to guide the main functional modules of SANGFOR Institute and build a matching website architecture. The hope is that the results can meet the needs of business activities: an interaction design for SANGFOR Institute that conforms to the user's mental model can effectively draw on the user's existing knowledge and experience, reduce learning cost and enhance the user experience.

Ren Long, Honglei Wang, Chenyue Sun, Hongzhi Pan, Jiali Zhang

Chapter 1. Introduction to Wearout

Over the last decade, CMOS wearout emerged as one of the most critical threats to circuit performance and system reliability. Among recognized wearout mechanisms, bias temperature instability (BTI) and electromigration (EM) appear as two dominant effects that affect transistors and interconnect, respectively. Conventional flat guardband or dynamic margin design approaches address these effects by tolerating them, but they can be both costly and insufficient. Techniques that can take advantage of recovery of the phenomena can be more economic and effective. In this chapter, we present a taxonomy of state-of-the-art BTI and EM mitigation techniques that were developed across the system hierarchy, followed by the introduction of the concept of accelerated active self-healing that will be addressed throughout the rest of the book.

Xinfei Guo, Mircea R. Stan

A Smart Factory in a Laboratory Size for Developing and Testing Innovative Human-Machine Interaction Concepts

This paper describes a model factory for the evaluation of industrial production applications. Besides developments in the information sciences such as cloud computing and machine learning, this model factory is also used for the development and evaluation of human-machine interaction concepts such as virtual engineering, virtual twins, mobile applications, and augmented reality.

Carsten Wittenberg, Benedict Bauer, Nicolaj Stache

Comparison of Information Technology Service Management (ITSM) Practices in e-Infrastructures, Libraries, Public Administration and the Private Sector

This work reviews and compares the literature on ITSM practices in libraries, public administration, the commercial sector and e-Science infrastructures. Six aspects were examined, including the properties or structure of digital components, the sources of finance that support the daily running of business in these sectors, forms of organization, frameworks developed as guidelines to structure the sectors, and the factors that influence the implementation of ITSM guidelines. It was found that although there are similarities between all of these sectors, the case of e-infrastructures is special, and practical implementation of ITSM practices is needed.

Hashim Iqbal Chunpir, Mostafa Ismailzadeh

Energy-Aware Capacity Provisioning and Resource Allocation in Edge Computing Systems

Energy consumption plays a key role in determining the cost of services in edge computing systems and has a significant environmental impact. Therefore, minimizing the energy consumption in such systems is of critical importance. In this paper, we address the problem of energy-aware optimization of capacity provisioning and resource allocation in edge computing systems. The main goal is to provision and allocate resources such that the net profit of the service provider is maximized, where the profit is the difference between the aggregated users’ payments and the total operating cost due to energy consumption. We formulate the problem as a mixed integer linear program and prove that the problem is NP-hard. We develop a heuristic algorithm to solve the problem efficiently. We evaluate the performance of the proposed algorithm by conducting an extensive experimental analysis on problem instances of various sizes. The results show that the proposed algorithm has a very low execution time and is scalable with respect to the number of users in the system.

Tayebeh Bahreini, Hossein Badri, Daniel Grosu
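As a tiny illustration of the mixed-integer-programming flavor of such a formulation (with invented coefficients and only a token subset of constraints, nothing like the paper's actual model), the sketch below uses the PuLP library to assign users to edge servers so that payments minus energy cost are maximized:

```python
# Toy MILP: assign users to edge servers to maximize profit (payment - energy).
import pulp

users, servers = range(3), range(2)
payment = [[5, 4], [6, 3], [4, 4]]   # payment if user u is served on server s (assumed)
energy = [[2, 1], [3, 1], [1, 2]]    # energy cost of serving u on s (assumed)
capacity = [2, 2]                    # max users per server (assumed)

prob = pulp.LpProblem("edge_provisioning", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (users, servers), cat="Binary")

# Objective: aggregated payments minus operating (energy) cost.
prob += pulp.lpSum(
    (payment[u][s] - energy[u][s]) * x[u][s] for u in users for s in servers
)
for u in users:                       # each user is served at most once
    prob += pulp.lpSum(x[u][s] for s in servers) <= 1
for s in servers:                     # server capacity constraint
    prob += pulp.lpSum(x[u][s] for u in users) <= capacity[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(u, s) for u in users for s in servers if x[u][s].value() == 1])
```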

Volunteer Cloud as an Edge Computing Enabler

The rapid increase in the number of devices connected to the Internet, due to the Internet of Things, demands new ways of processing the data these devices produce. Edge Computing is one solution, which tries to process data close to its origin, that is, at the edge of the network. Emerging cloud systems, such as volunteer clouds, can also be used for processing the data produced by IoT devices. This paper proposes a Volunteer Computing as a Service (VCaaS) based Edge Computing infrastructure, and addresses the architectural design of the proposed system together with its research and technical challenges.

Tessema M. Mengistu, Abdullah Albuali, Abdulrahman Alahmadi, Dunren Che

Intrusion Detection at the Network Edge: Solutions, Limitations, and Future Directions

The low-latency, high-bandwidth capabilities promised by 5G, together with the diffusion of applications that require high computing power and, again, low latency (such as video games), are probably the main reasons, though not the only ones, that have led to the introduction of a new network architecture: fog computing, which consists in moving computation services geographically close to where computing is needed. This architectural shift moves security and privacy issues from the cloud to the different layers of the fog architecture. In this scenario, IDSs are still necessary, but they need to be contextualized in the new architecture. Indeed, while fog computing provides intrinsic benefits (e.g., low latency), it also introduces new design challenges. In this paper, we provide the following contributions: we analyze the possible IDS solutions that can be adopted within the different fog computing tiers, together with their related deployment and design challenges; and we propose some promising future directions, taking into account the challenges left uncovered by the considered solutions.

Simone Raponi, Maurantonio Caprolu, Roberto Di Pietro

Characterization of IoT Workloads

Workload characterization is a fundamental step in carrying out performance and Quality of Service engineering studies. The workload of a system is defined as the set of all inputs received by the system from its environment during one or more time windows. The characterization of the workload entails determining the nature of its basic components as well as a quantitative and probabilistic description of the workload components in terms of both the arrival process, event counts, and service demands. Several workload characterization studies were presented for a variety of domains, except for IoT workloads. This is precisely the main contribution of this paper, which also presents a capacity planning study based on one of the workload characterizations presented here.

Uma Tadakamalla, Daniel A. Menascé
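As a small illustration of the arrival-process side of workload characterization (on synthetic data, not the chapter's measurements), the sketch below estimates an arrival rate from event timestamps and tests the interarrival times against an exponential fit:

```python
# Minimal sketch: characterize an arrival process from event timestamps.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic arrivals: cumulative sums of exponential interarrival times.
timestamps = np.cumsum(rng.exponential(scale=2.0, size=5000))

inter = np.diff(timestamps)
rate = 1.0 / inter.mean()
print(f"estimated arrival rate: {rate:.3f} events/s")

# Kolmogorov-Smirnov test of the interarrivals against the fitted exponential,
# as a quick check of whether a Poisson arrival model is plausible.
d, p = stats.kstest(inter, "expon", args=(0, inter.mean()))
print(f"KS statistic = {d:.4f}, p-value = {p:.3f}")
```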

Enabling Technologies of Industry 4.0 and Their Global Forerunners: An Empirical Study of the Web of Science Database

Knowledge management in organizations brings many benefits to the R&D operations of companies and corporations. This empirical study demonstrates the power of large database analyses for industrial strategy and policy. The study is based on the Web of Science database (Core Collection, ISI) and provides an overview of the core enabling technologies of Industry 4.0, as well as the countries and regions at the forefront of the academic landscape within these technologies. The core enabling technologies of Industry 4.0 and Manufacturing 4.0 are: (1) Internet of Things and related technologies, (2) Radio Frequency Identification (RFID), (3) Wireless Sensor Networks (WSN), and (4) ubiquitous computing. They also include (5) cloud computing technologies, including (6) virtualization and (7) Manufacturing as a Service (MaaS), and new (8) cyber-physical systems, such as (9) digital twin technology and (10) Smart & Connected Communities. Finally, important enabling technologies for manufacturing integration in Industry 4.0 are (11) Service Oriented Architecture (SOA), (12) Business Process Management (BPM), and (13) Information Integration and Interoperability. All these key technologies and technology drivers were analysed in this empirical demonstration of knowledge management.

Mikkel Stein Knudsen, Jari Kaivo-oja, Theresa Lauraeus

Digital Twins Approach and Future Knowledge Management Challenges: Where We Shall Need System Integration, Synergy Analyses and Synergy Measurements?

We are in the midst of a significant transformation in the way we produce products and deliver services, thanks to the digitization of manufacturing and new connected supply chains and co-creation systems. This article applies the Digital Twins approach to the current challenges of knowledge management as Industry 4.0 emerges in industry and manufacturing. The Industry 4.0 approach underlines the importance of the Internet of Things and the interactions between social and physical systems. The Internet of Things (together with the Internet of Services and the Internet of Data) is a new Internet infrastructure that marries advanced manufacturing techniques and service architectures with the I-o-T, I-o-S and I-o-D to create manufacturing systems that are not only interconnected, but communicate, analyze, and use information to drive further intelligent action back in the physical world. This paper identifies four critical domains of the synergy challenge: (1) Man-to-Man interaction, (2) Man-to-Machine interaction, (3) Machine-to-Man interaction, and (4) Machine-to-Machine interaction. A key conclusion is that new knowledge management challenges are closely linked to the challenges of synergic interaction among these four interaction types and to accurate measurements of synergic interaction.

Jari Kaivo-oja, Osmo Kuusi, Mikkel Stein Knudsen, Theresa Lauraeus

Construing Microservice Architectures: State-of-the-Art Algorithms and Research Issues

Cloud Computing is one of the leading paradigms in the IT industry. Cloud applications used to be built as single monolithic applications; they are now built using the microservices architectural style. Along with several advantages, the microservices architecture also introduces challenges at the infrastructural level. Five such concerns are identified and analysed in this paper. The paper presents the state of the art in different infrastructural concerns of microservices, namely load balancing, scheduling, energy efficiency, security, and resource management. The paper also suggests some future trends and research domains in the field of microservices.

Amit V. Nene, Christina Terese Joseph, K. Chandrasekaran

Customer Knowledge Management: Micro, Small and Medium-Sized Enterprises in Bogotá, Colombia

The idea of Customer Knowledge Management (CKM) is quite new, especially linked to operations within an organization. In this context, it is necessary to recall worldwide concepts of the 1980s such as Customer Relationship Management (CRM) and Customer Lifetime Value (CLV). In the 1990s, CRMs were complex and focused on large companies. Initially, CRMs ran on on-premises infrastructure; from 2010 onward, cloud computing versions became common. CRM arrived in Colombia first in large companies; it is now available to micro, small, and medium enterprises (MSMEs). Approximately 2.5 million MSMEs operate in a competitive environment, pursuing their market share, and customer loyalty remains a difficult issue as well. Hence, the present paper aims to identify the customer knowledge management strategies developed by Colombian MSMEs. The methodology incorporates primary data collected through an instrument validated by experts. The research results confirm that MSMEs work on customer loyalty strategies without systematization or measurement technology. Thus, an opportunity emerges for MSMEs regarding the use of cloud computing and CKM.

Yasser de Jesús Muriel-Perea, Flor Nancy Díaz-Piraquive, Rubén González-Crespo, Trinidad Cortés Puya

Extending the UTAUT2 Model to Understand the Entrepreneur Acceptance and Adopting Internet of Things (IoT)

The aim of this empirical study is to explore and discuss the factors that affect entrepreneurs' acceptance and adoption of the Internet of Things (IoT) using the UTAUT2 model. The study data was collected using a survey distributed among Omani entrepreneurs over a six-month period. The results showed that the relationship between information technology knowledge and entrepreneurs' acceptance and adoption of IoT was supported, as were most other hypothesized relationships in the study.

Ahmad Abushakra, Davoud Nikbin

Chapter 10. Human Centred Cyber Physical Systems

This chapter presents four prototype systems that have been developed for testing and evaluating the activity recognition approaches investigated in previous chapters. These prototype systems are categorised by the style of their software architecture into a standalone system, a multi-agent system, and two SOA systems. This reflects and closely corresponds to the evolution of the latest technologies in software engineering and smart cyber-physical systems. Given that previous chapters have already described how the systems are used in specific scenarios, this chapter focuses on their implementation details and operation processes, which are described one by one in four sections. Interested researchers can use these systems or follow the implementation methodologies to support their own research. In addition, the performance, strengths and limitations, and future work of these systems are also discussed.

Liming Chen, Chris D. Nugent

The Future Use of LowCode/NoCode Platforms by Knowledge Workers – An Acceptance Study

Knowledge workers have to deal with many different information systems to support their daily work. This leads to massive gaps in companies, caused by the complexity of legacy systems on the one hand and the evolution of business processes on the other. Many knowledge workers build their own shadow IT to get efficient process support, without thinking about compliance, security, and scalability. One possible way to defuse this situation might be LowCode/NoCode platforms. The question is: will knowledge workers use this technology, or will they not accept the new trend? The authors therefore conducted a quantitative study based on an online questionnaire (N = 106) to assess the acceptance of this emerging technology in companies in the DACH region. The result of the study is a statement about knowledge workers' future willingness to use such platforms.

Christian Ploder, Reinhard Bernsteiner, Stephan Schlögl, Christoph Gschliesser

The Diffusion of News Applying Sentiment Analysis and Impact on Human Behavior Through Social Media

The Web is the largest source of information today. Part of this data is news disseminated through social networks, information that needs to be processed in order to understand the role these media play in the dissemination of news. To address this problem, we propose the use of data mining techniques such as sentiment analysis to validate the information that comes from social media. The objective of this research is to propose a systematic mapping method for determining the state of the art of research on the diffusion of news applying sentiment analysis and its impact on human behavior through social media. As a case study, this initial research examined work up to 2017 on news that applies data mining techniques such as sentiment analysis to social media, using the major search engines.

Myriam Peñafiel, Rosa Navarrete, Maritzol Tenemaza, Maria Vásquez, Diego Vásquez, Sergio Luján-Mora

Research on the Construction Method of the Hospital Information System Hourglass Model

The informatization of modern health care has gradually shifted from the "hospital informatization" that began around 1993 to "health care intelligentization". Studies on Hospital Information Systems (HIS) are carried out from multiple perspectives: research reviews, theoretical proposals, and case applications. Based on user experience elements and Actor-Network Theory, a construction method for an HIS "hourglass model" is proposed. The hourglass model consists of three parts: an upper structure composed of hospital ethnographic research and the definition of medical staff's requirements, a connecting pipeline structure composed of HIS architecture design, and a lower structure composed of HIS interaction and visual design and HIS design evaluation. By expounding the complete development process of a nurse station service system, the reproducibility of the hourglass model construction method is demonstrated, and the three structures of the hourglass model are refined, providing reference solutions for the subsequent development and research of agile projects.

Shifeng Zhao, Jie Shen, Zhenhuan Weng

Fourth Industrial Revolution: An Impact on Health Care Industry

The World Economic Forum annual meeting, held in Davos, Switzerland, emphasized the Fourth Industrial Revolution as one of the most cutting-edge innovations of the forthcoming era. It has a major impact on the future of production and on the role of government, business, and academia in all developing technologies and innovation, where industries, communication, and technologies meet. The Fourth Industrial Revolution combines the physical, digital, and biological spaces and is changing the healthcare industry. FCN-32 semantic segmentation was performed on brain tumor images and produced good results in identifying tumors against the ground truth, achieving a best loss of 0.0108 and an accuracy of 0.9964 for the given tumor images. Earlier detection and analysis of disease can improve diagnosis and treatment through artificial intelligence techniques. The healthcare industry can thus provide faster, higher-quality services to remote, rural, and otherwise unreachable areas, thereby reducing the cost of hospitalization.

Prisilla Jayanthi, Muralikrishna Iyyanki, Aruna Mothkuri, Prakruthi Vadakattu

Deep Learning-Based Real-Time Failure Detection of Storage Devices

With the rapid development of cloud technologies, the reliability of data center storage systems has emerged as a critical consideration for cloud-based services, and ensuring such reliability is the primary priority for such centers. A mechanism by which data centers can automatically monitor hard disks and perform predictive maintenance to prevent failures can therefore effectively improve the reliability of cloud services. This study develops an alarm system for self-monitoring hard drives that provides fault prediction for hard disk failure. Combined with big data analysis and deep learning technologies, machine fault pre-diagnosis is used as the starting point for fault warning. A predictive model is then constructed using Long Short-Term Memory (LSTM) recurrent neural networks (RNNs). The resulting monitoring process provides condition monitoring and fault diagnosis for equipment, diagnosing abnormalities before failure and thus ensuring optimal equipment operation.
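
To make the modeling step concrete, here is a minimal sketch of an LSTM failure predictor over SMART-style disk telemetry, assuming Keras; the window length, feature count, layer sizes, and the stand-in data are illustrative assumptions, not the paper's configuration.

    import numpy as np
    import tensorflow as tf

    WINDOW, FEATURES = 30, 12          # e.g. 30 days of 12 SMART attributes per sample

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
        tf.keras.layers.LSTM(64),                        # summarizes the telemetry window
        tf.keras.layers.Dense(1, activation='sigmoid'),  # P(failure in the near term)
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

    X = np.random.rand(256, WINDOW, FEATURES)            # stand-in telemetry windows
    y = np.random.randint(0, 2, size=(256, 1))           # stand-in failure labels
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)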

Chuan-Jun Su, Lien-Chung Tsai, Shi-Feng Huang, Yi Li

The Business Valuation of Start-up Companies

Thanks to companies such as UBER, Airbnb, Spotify, and Hello Fresh, the topic of start-ups and venture capital has entered broad public reporting. Such companies are often referred to as unicorns. Coverage of these unicorns frequently centers on the (company) valuations these firms achieve within a short time.

Ruben Becker

Chapter 3. Recognizing the Problem

Chapter 3 (Recognizing the Problem) shows how a company identifies, differentiates, and describes problems that can arise, or have already arisen, from internal and/or external factors. The chapter gives a complete overview of the exogenous and endogenous factors, explains them, and sharpens the understanding of the analyses to be carried out, using examples from practice. It also introduces terms such as value chain, processes, and agile organization. The exact description of the problem is the starting point of an effective transformation. Therefore, all essential strategic questions that a company must ask when identifying and describing its challenges are summarized in a checklist at the end of the chapter.

Jörg Klasen

Chapter 2. Business Transformation – a Holistic Management Process

Chapter 2 (Business Transformation – a Holistic Management Process) gives a comprehensive overview of the literature on business transformation. Essential approaches developed from the 1990s to the present are presented, compared, and evaluated. The existing approaches and building blocks are merged into a holistic business transformation management, and the business transformation process is defined from problem recognition through strategy development to implementation.

Jörg Klasen

Chapter 1. Introduction

Chapter 1 (Introduction) describes the motivation for and approach to the book. Existing problems of transformation are worked out, the term business transformation is defined, and the essential terms frequently mentioned in connection with transformation are distinguished from one another. In addition, it is shown and justified in which areas and questions the book complements the existing literature. The structure of the book is presented.

Jörg Klasen

Chapter 2. eAssistance

Chapter 2 provides the basic principles of Web-based information and experience exchange. To this end, Sect. 2.1 outlines the most important Internet services. Trends in the Internet, known by the name Web X.Y, are presented in Sect. 2.2, together with a classification of social software. In Sect. 2.3, a list of criteria for municipal Web sites allows their content to be assessed. A high-level architecture for more comprehensive eGovernment portals is presented in Sect. 2.4. The guidelines for barrier-free Web access were created by the W3C and constitute the basis for all public Web sites (Sect. 2.5), with the goal that people with mental or physical handicaps can also profit from Web-based information and services. To assure quality on the Internet, criteria for usability, content, and ethics must be taken into account, as presented in Sect. 2.6. Section 2.7 contains bibliographical notes. Finally, the case study of the Web site of the Technical Secretariat for Inclusive Management in Disability of Ecuador (SETEDIS) is presented, in which different forms of barrier-free access to the Web are considered.

Andreas Meier, Luis Terán

Modeling the Interaction of Computer Errors by Four-Valued Contaminating Logics

Logics based on weak Kleene algebra (WKA) and related structures have recently been proposed as a tool for reasoning about flaws in computer programs. The key element of this proposal is the presence, in WKA and related structures, of a non-classical truth-value that is "contaminating" in the sense that whenever the value is assigned to a formula $\phi$, any complex formula in which $\phi$ appears is assigned that value as well. Under such interpretations, the contaminating states represent occurrences of a flaw. However, since different programs and machines can interact with (or be nested into) one another, we need to account for different kinds of errors, and this calls for evaluation systems with multiple contaminating values. In this paper, we make steps toward such systems by considering two logics, $\mathsf{HYB}_{1}$ and $\mathsf{HYB}_{2}$, whose semantic interpretations account for two contaminating values besides the classical values 0 and 1. In particular, we provide two main formal contributions. First, we give a characterization of their relations of (multiple-conclusion) logical consequence—that is, necessary and sufficient conditions for a set $\Delta$ of formulas to logically follow from a set $\Gamma$ of formulas in $\mathsf{HYB}_{1}$ or $\mathsf{HYB}_{2}$. Second, we provide sound and complete sequent calculi for the two logics.
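
For intuition about "contamination", the following Python sketch implements the connectives of one-value weak Kleene logic, where a single non-classical value e absorbs every connective it touches; the HYB systems studied in the paper add a second contaminating value, which this sketch does not model.

    E = 'e'  # the contaminating value

    def w_not(a):
        return E if a == E else (not a)

    def w_and(a, b):
        return E if E in (a, b) else (a and b)   # any occurrence of e infects the result

    def w_or(a, b):
        return E if E in (a, b) else (a or b)

    # Classically, True or x is True; under contamination the flaw wins:
    print(w_or(True, E))   # -> 'e'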

Roberto Ciuni, Thomas Macaulay Ferguson, Damian Szmuc

A RBAC Model Based on Identity-Based Cryptosystem in Cloud Storage

Most existing ciphertext access control schemes for cloud storage do not support dynamic updates of the access control policy and incur a large computational overhead. To address these shortcomings, we combine an identity-based cryptosystem with role-based access control (using the RBAC1 model of the RBAC96 model family) to build an RBAC model based on identity-based cryptography for cloud storage. This paper presents a formal definition of the scheme; a detailed description of the four-tuple used to represent the access control policy; a hybrid encryption strategy and a re-encrypt-on-write strategy to improve the efficiency of the system; and detailed steps for system initialization, adding and deleting users, permissions, roles, and role inheritance, assigning and removing users and permissions, and the file read and write algorithms.

Jian Xu, Yanbo Yu, Qingyu Meng, Qiyu Wu, Fucai Zhou

Public Auditing of Log Integrity for Cloud Storage Systems via Blockchain

Cloud storage security has received wide attention from industry and academia in recent years. Unlike previous research on cloud data integrity auditing, we pay more attention to the security of the logs generated during the operation of cloud data. When cloud data is damaged or tampered with by security threats (e.g., faulty operations, hacker attacks), tracking incidents through log analysis is one of the most common methods. Ensuring the integrity of the log files is therefore a prerequisite for incident tracking. To this end, this paper proposes a public model for verifying the integrity of cloud logs based on a third-party auditor. To prevent the log data from being tampered with, we aggregate the log block tags using the classic Merkle hash tree structure and store the generated root node in the blockchain. In addition, the proposed scheme does not leak any log content during a public audit. Theoretical analysis and experimental results show that the scheme can effectively implement security audits of cloud logs and outperforms previous work in computational overhead.
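
As a rough illustration of the Merkle-tree aggregation described above, this Python sketch computes a root hash over a list of log blocks; the hash choice and names are assumptions, and anchoring the root in a blockchain is left out.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(log_blocks):
        """Aggregate log-block tags into the single root to be stored on-chain."""
        level = [h(b) for b in log_blocks]          # leaf tags
        while len(level) > 1:
            if len(level) % 2 == 1:                 # duplicate the last node on odd levels
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    root = merkle_root([b'log block 1', b'log block 2', b'log block 3'])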

Jia Wang, Fang Peng, Hui Tian, Wenqi Chen, Jing Lu

A One-Way Variable Threshold Proxy Re-signature Scheme for Mobile Internet

In recent years, the mobile Internet has developed rapidly and is widely used. Mobile terminal devices have weak computing power and limited energy supply, while the complexity of the mobile Internet environment imposes high security requirements. To address these problems, we propose a secure and efficient server-assisted verification threshold proxy re-signature scheme and verify its correctness. The proposed scheme includes a threshold proxy re-signature algorithm and a server-assisted verification protocol. Threshold proxy re-signature applies a threshold to proxy re-signature, which decentralizes the proxy's signing rights. In the scheme, the verifier delegates the complex signature verification operation to a semi-trusted server through the protocol, which effectively reduces the verifier's computational load. The security analysis shows that the new scheme is secure under collusion attacks and adaptive chosen-message attacks in the standard model. The performance analysis shows that the new scheme has a shorter signature length, lower computational cost, higher verification efficiency, and better suitability for the mobile Internet environment.

Yanfang Lei, Mingsheng Hu, Bei Gong, Lipeng Wang, Yage Cheng

Application of Big Data Technology in JD

The arrival of the era of big data has brought changes to, and had an impact on, how people live, work, and think. With the rapid growth of the scale and number of e-commerce businesses in China, e-commerce marketing requires continuous innovation. Big data can tap and utilize the underlying business value behind the data to achieve more precise positioning and marketing and to accurately track changes in the target audience. This article analyzes big data theory and methods and discusses the three major challenges of data holding, data processing, and data security that the era of big data brings to e-commerce. A case study of the e-commerce company JD was then conducted to analyze JD's big data platform and the marketing applications and practices based on it. Inspired by the case study, we identify weaknesses and make suggestions.

Ning Shi, Huwei Liu

A Survey of Trusted Network Trust Evaluation Methods

Trusted networks have been proposed in response to increasingly prominent internal network security threats. At present, research on trusted networks focuses on two aspects: pre-access checks and dynamic evaluation after access. The pre-access check considers the integrity of the terminal and is implemented with encryption and authentication methods. The dynamic evaluation uses the static and dynamic attributes of trust to implement trust evaluation.

An-Sheng Yin, Shun-Yi Zhang

Fog-Enabled Smart Campus: Architecture and Challenges

In recent years, much attention has been paid to the design and realization of the smart campus, a miniature smart city paradigm consisting of its own infrastructures, facilities, and services. Realizing the full vision of a smart campus needs an instrumented, interconnected, and intelligent cyber-physical system leveraging ICTs and the physical infrastructures of the campus. Moreover, the study of a smart campus can pave the way for studying smart cities. In a smart campus, heterogeneous big data is continuously generated by different functional sensing devices. This poses great challenges to the computation, transmission, storage, and energy consumption of the traditional sensor-to-cloud continuum, which typically incurs a huge amount of network transmission, high energy consumption, and long (sometimes intolerable) processing delays. Based on these observations, we propose a fog-enabled smart campus to enhance real-time service provisioning. An architecture for the smart campus is put forward in which multiple fog nodes are deployed to guarantee the real-time performance of services and applications by performing tasks at the network edge. Furthermore, open research issues regarding this architecture are discussed in the hope of inspiring more research activities in this field.

Chaogang Tang, Shixiong Xia, Chong Liu, Xianglin Wei, Yu Bao, Wei Chen

Research on Multi Domain Based Access Control in Intelligent Connected Vehicle

With the development of the Intelligent Connected Vehicle (ICV), the information security problems it faces are becoming increasingly important. Authentication and access control are an important part of ensuring the information security of intelligent connected vehicles. In this paper, we propose a multi-domain based access control (MDBA) model built on the attribute-based access control model. The model addresses access control across the multiple domains of intelligent connected vehicles, thus ensuring their information security.

Kaiyu Wang, Nan Liu, Jiapeng Xiu, Zhengqiu Yang

An Efficient Privacy-Preserving Palmprint Authentication Scheme Based on ElGamal

Biometric credentials have become a popular means of authentication. However, since biometrics are unique and stable, a single data breach might cause a user to lose some of their biometrics permanently, and the stolen biometrics may be used for identity fraud, posing a permanent risk to the user. Many studies have addressed this problem, with the protection of biometric templates as a basic consideration; however, most existing solutions fall short in security or efficiency. In this paper, we use the ElGamal scheme, which performs well in applications, to construct an efficient, privacy-preserving palmprint authentication scheme. We first construct a well-performing palmprint recognition scheme based on palm lines and feature points. Then, we use random projection (RP) to reduce the dimensionality of the extracted palmprint features, which greatly reduces the volume of data to be stored. Finally, we design a confidential comparison process based on the ElGamal scheme to perform efficient comparisons of palmprint features while ensuring provable security. Theoretical analysis and proofs, together with a series of experiments, demonstrate the significance and validity of our work.
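
To illustrate the dimensionality-reduction step, here is a generic Gaussian random projection sketch in Python; the dimensions are assumed, and the authors' exact RP construction may differ.

    import numpy as np

    rng = np.random.default_rng(42)

    def make_projection(d_in, d_out):
        # N(0, 1/d_out) entries approximately preserve pairwise distances
        # (Johnson-Lindenstrauss), which is what template matching needs.
        return rng.normal(0.0, 1.0 / np.sqrt(d_out), size=(d_out, d_in))

    P = make_projection(d_in=4096, d_out=256)   # 4096-dim palmprint features -> 256 dims
    feature = rng.random(4096)                  # stand-in for an extracted feature vector
    reduced = P @ feature                       # stored and compared instead of raw features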

Yong Ding, Huiyong Wang, Zhiqiang Gao, Yujue Wang, Kefeng Fan, Shijie Tang

FIREWORK: Fog Orchestration for Secure IoT Networks

Recent advances in Internet of Things (IoT) connectivity have made IoT devices prone to cyber attacks. Moreover, vendors are eager to provide autonomous and open source devices, which in turn adds further security threats to the system. In this paper, we consider network traffic attacks and provide a Fog-assisted solution, dubbed FIREWORK, that reduces the risk of security attacks by periodically monitoring network traffic and applying traffic isolation techniques to overcome network congestion and performance degradation.

Maryam Vahabi, Hossein Fotouhi, Mats Björkman

Gathering Pattern Mining Method Based on Trajectory Data Stream

A moving object gathering pattern refers to a group of incidents or cases that involve a large congregation of moving objects. Mining gathering patterns in massive, dynamic trajectory data streams can promptly reveal anomalies in group movement. This paper proposes a gathering pattern mining method based on trajectory data streams, which consists of two stages: clustering and crowd mining. In the clustering stage, the MR-GDBSCAN clustering algorithm is proposed; it indexes moving objects with a grid, uses grid cells as clustering objects, and determines the center of each cluster. In the crowd mining phase, a sliding time window is used for incremental crowd mining, and the cluster centers are used to calculate distances between clusters, improving crowd detection efficiency. Experiments show that the proposed method achieves good efficiency and stability.
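
A minimal sketch of the grid-indexing idea: objects are snapped to cells, and non-empty cells (rather than raw points) become the units a density-based clustering such as MR-GDBSCAN would operate on. The cell size and names are assumptions; the algorithm itself is not reproduced.

    import math
    from collections import defaultdict

    CELL = 0.01   # grid cell size in coordinate units (assumed)

    def grid_index(points):
        """points: iterable of (object_id, x, y) -> {cell: [object ids]}"""
        cells = defaultdict(list)
        for oid, x, y in points:
            cells[(math.floor(x / CELL), math.floor(y / CELL))].append(oid)
        return cells

    def cell_center(cell):
        cx, cy = cell   # used when computing distances between cluster centers
        return ((cx + 0.5) * CELL, (cy + 0.5) * CELL)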

Ying Xia, Lian Diao, Xu Zhang, Hae-young Bae

Research on Big Data Platform Security Based on Cloud Computing

Emerging services such as cloud computing, the Internet of Things, and social networking are driving the growth of society's data types and scales at an unprecedented rate; the age of big data has officially arrived. Cloud computing technology brings great convenience to big data processing and overcomes various deficiencies of traditional processing technology, giving big data more application and service value; at the same time, it also introduces new security problems. By analyzing the security threats faced by cloud-based big data platforms, this paper proposes a security system framework for such platforms and gives a security deployment strategy.

Xiaxia Niu, Yan Zhao

Fog Computing Architecture Based Blockchain for Industrial IoT

Industry 4.0, also referred to as the fourth industrial revolution, is the vision of a smart factory built with cyber-physical systems (CPS). The ecosystem of the manufacturing industry is expected to be activated through autonomous and intelligent capabilities such as self-organization, self-monitoring, and self-healing. The Fourth Industrial Revolution begins with the attempt to combine the myriad elements of industrial systems with Internet communication technology to form the future smart factory, and the technologies derived from these attempts are creating new value. However, the existing Internet offers no effective way to solve the cyber security and data protection problems raised by the new technologies of future industry. In a future industrial environment where large numbers of IoT devices will be deployed and used, a true industrial revolution is hard to achieve unless the security problem is resolved. Therefore, in this paper, we propose a new blockchain-based fog system architecture for the Industrial IoT. To guarantee fast performance, a suitable fog-system-based permissioned blockchain is applied, and its performance is evaluated and analyzed.

Su-Hwan Jang, Jo Guejong, Jongpil Jeong, Bae Sangmin

Exploration of Data from Smart Bands in the Cloud and on the Edge – The Impact on the Data Storage Space

Wearable devices used for tracking people's health state usually transmit their data to a remote monitoring data center, often located in the Cloud due to its large storage capacity. However, the growing number of smart bands, fitness trackers, and other IoT devices used for health state monitoring puts pressure on data centers, raising Big Data challenges and causing network congestion. This paper focuses on the consumption of storage space while monitoring people's health state and detecting possibly dangerous situations in the Cloud and on the Edge. We investigate storage space consumption in three scenarios: (1) transmission of all data regardless of the health state and any danger, (2) data transmission after a change in the person's activity, and (3) data transmission on detection of a health-threatening situation. Results of our experiments show that the last two scenarios can bring significant savings in consumed storage space.

Mateusz Gołosz, Dariusz Mrozek

Security-Aware Distributed Job Scheduling in Cloud Computing Systems: A Game-Theoretic Cellular Automata-Based Approach

We consider the problem of security-aware scheduling and load balancing in Cloud Computing (CC) systems. We replace this optimization problem with a game-theoretic approach in which players reach a solution at a Nash equilibrium. We propose a fully distributed algorithm based on the Iterated Spatial Prisoner's Dilemma (ISPD) game and the phenomenon of collective behavior among the participating players. Brokers representing users participate in the game to fulfill three criteria of their own: the execution time of the submitted tasks, their execution cost, and the level of provided Quality of Service (QoS). We show experimentally that the game converges to a solution that provides optimal resource utilization while users meet their applications' performance and security requirements with minimum expenditure and overhead.
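
For intuition, a minimal sketch of one imitation round of a spatial Prisoner's Dilemma on a torus grid; the classic payoff values below are assumptions, and the paper's broker-specific criteria (execution time, cost, QoS) are not modeled.

    import random

    N = 16
    R, S, T, P = 3, 0, 5, 1   # reward, sucker, temptation, punishment (T > R > P > S)
    grid = [[random.choice('CD') for _ in range(N)] for _ in range(N)]

    def payoff(a, b):
        return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(a, b)]

    def neighbors(i, j):
        return [((i + di) % N, (j + dj) % N)
                for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

    def step(grid):
        score = [[sum(payoff(grid[i][j], grid[x][y]) for x, y in neighbors(i, j))
                  for j in range(N)] for i in range(N)]
        new = [row[:] for row in grid]
        for i in range(N):
            for j in range(N):
                # imitate the best-scoring player in the neighborhood (including self)
                bi, bj = max(neighbors(i, j) + [(i, j)], key=lambda p: score[p[0]][p[1]])
                new[i][j] = grid[bi][bj]
        return new

    grid = step(grid)   # one round; iterate until the population stabilizes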

Jakub Gąsior, Franciszek Seredyński

Programming Paradigms for Computational Science: Three Fundamental Models

The widespread adoption of data science programming languages and libraries has raised new interest in teaching computational science coding in ways that leverage the capabilities of both single-computer and cluster-based computation infrastructures. Some programming patterns and idioms are converging, yet there are specialized uses and cases that require learners to switch from one to another. In this paper, we report on action research with more than ten cohorts of mixed-background students in postgraduate data science classes. We first discuss the key mental models found to be essential to understanding solution design, then review the three fundamental paradigms that students must face when coding data manipulation and their interrelations. Finally, we discuss additional elements found to be important in understanding the specificities of current practice in data analysis tasks.

Miguel-Angel Sicilia, Elena García-Barriocanal, Salvador Sánchez-Alonso, Marçal Mora-Cantallops

Analysis and Detection on Abused Wildcard Domain Names Based on DNS Logs

A wildcard record is a type of resource record (RR) in DNS that allows any domain name in the same zone to map to a single record value. Prior works have used DNS zone file data and domain name blacklists to understand the usage of wildcard domain names. In this paper, we analyze wildcard domain names in real network DNS logs and present some novel findings. By analyzing web contents, we found that the proportion of domain names related to pornography and online gambling (referred to as abused domain names in this work) is much higher among wildcard domain names than among non-wildcard domain names. By analyzing registration, resolution, and maliciousness behaviors, we found that abused wildcard domain names carry remarkably higher security risks than normal wildcard domain names. Based on this analysis, we propose the GSCS algorithm to detect abused wildcard domain names. GSCS is based on a domain graph that captures the similarities of abused wildcard domain names' resolution behaviors. By applying a spectral clustering algorithm and seed domains, GSCS can distinguish abused wildcard domain names from normal ones effectively. Experiments on real datasets indicate that GSCS achieves detection rates of about 86% with 5% seed domains, performing much better than the BP algorithm.
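
A sketch of the clustering step such an approach can build on, assuming a precomputed affinity matrix over domains derived from resolution behavior; how GSCS constructs its domain graph and uses seed domains is not reproduced here.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    affinity = np.load('domain_affinity.npy')   # assumed N x N similarity matrix

    labels = SpectralClustering(
        n_clusters=2,            # abused vs. normal, in the simplest reading
        affinity='precomputed',
        random_state=0,
    ).fit_predict(affinity)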

Guangxi Yu, Yan Zhang, Huajun Cui, Xinghua Yang, Yang Li, Huiran Yang

Chapter 10. Modeling with Delay Differential Equations

Although modeling phenomena with differential equations has a long and successful history over a wide range of applications, some situations lend themselves to adaptations that more seamlessly capture the entity being modeled. This chapter studies one such adaptation, called a delay differential equation (DDE). A DDE is an ordinary differential equation that permits dependencies on historical information, and initial conditions are replaced with legacy assumptions that detail the solution's previously observed behavior. A DDE is a welcome framework for processes that naturally depend on historical trajectories and not just initial values.
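
For concreteness, a canonical equation of the form the chapter describes is Hutchinson's delayed logistic equation (an illustrative example, not necessarily the chapter's own):

    y'(t) = r\,y(t)\left(1 - \frac{y(t-\tau)}{K}\right), \quad t \ge 0, \qquad y(t) = \phi(t) \text{ for } t \in [-\tau, 0],

where the history function $\phi$ on $[-\tau, 0]$ plays the role of the "legacy assumptions" that replace a single initial value.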

Allen Holder, Joseph Eichholz

The Application and Development of Smart Clothing

Smart clothing has been a hot research topic in recent years; it is also an application and exploration of intelligent manufacturing in the textile and clothing field. This paper analyzes the design focus and development trends of smart clothing by expounding its applications, its characteristics, and domestic and international research progress.

Jia Lyu, Yue Sui, Dongsheng Chen

Engineering the Right Change Culture in a Complex (GB) Rail Industry

This paper focuses on an interview and observational study of two major change programmes designed to transform workforce safety across Great Britain's railways. The implications of the pace of change and the challenges of user-influenced design are considered in the context of a railway system with rapidly evolving technologies and a need to consider the impact of co-operative work systems and the skills workers will need to engage with them. The study shows how things have changed since the programmes were first introduced, identifying the factors that have influenced this, such as a focus on a continuous improvement culture. Further research directions are proposed, including the need to identify tools to help predict how future interventions in the change programmes might manifest themselves, e.g. the effects of new technology introduction, or factors outside the organisation's control such as Government policy change.

Michelle Nolan-McSweeney, Brendan Ryan, Sue Cobb

Impact of Industry 4.0 on Occupational Health and Safety

Background: The objective of Industry 4.0 is to bring into existence smart, self-regulating, and interconnected industrial value creation through the integration of cyber-physical systems into manufacturing. Industry 4.0 is a new paradigm of production, one that leads to faster and more precise decision-making and an entirely new approach to production, work organization, and the manner of work task performance, which may significantly influence the health and safety of workers. Objectives: To provide an overview of the potential effects (positive and negative) of Industry 4.0 on occupational health and safety and to list some recommendations regarding the integration of OHS into manufacturing in the Industry 4.0 context. Methods: A critical review of the literature currently available on this topic. Results: There are many risks as well as opportunities for occupational health and safety that derive from Industry 4.0. A considerable challenge, especially in the transitional period, is posed by the insufficiency of occupational health and safety initiatives, including standards and regulations, which may render them inadequate in the face of ever newer threats as Industry 4.0 technologies emerge. Furthermore, it may lead to forfeiting the proactive approach towards occupational health and safety that has been established in the most industrialized countries. Further research is required to enhance the integration of occupational health and safety into manufacturing in the context of Industry 4.0. To achieve this, an interdisciplinary approach needs to be adopted, drawing on the expertise of a team comprising engineers, IT experts, psychologists, ergonomists, social and occupational scientists, medical practitioners, and designers. The overview was carried out by the research group IDEAT.

Aleksandra Polak-Sopinska, Zbigniew Wisniewski, Anna Walaszczyk, Anna Maczewska, Piotr Sopinski

Agile Management Methods in an Enterprise Based on Cloud Computing

Globalization and the development of IT systems are conducive to changes in entrepreneurs' approach to carrying out the tasks of running a business. Growing competition and the need to respond quickly to a changing business environment mean that business owners are increasingly willing to switch to agile business management methods. Cloud computing, as a modern form of providing IT solutions, can be successfully used in agile enterprises, and tools provided in the subscription model are increasingly chosen by them. The aim of the article is to present the tools available in cloud computing for managing an agile enterprise and to examine the security aspects of such solutions. Research conducted on several dozen companies using software provided in the subscription model led to the conclusion that most enterprises are not aware of Internet threats and cannot protect themselves against them. In most cases, companies are unaware of the possible leakage of confidential data, business secrets, and management models. Blind confidence in cloud computing and data stored on the Internet makes small and medium enterprises an easy target for cybercriminals. Based on the research, the article presents not only the most frequently chosen software for agile enterprise management but also selected options for increasing the protection of corporate data stored in the cloud.

Michal Trziszka

Security in Plain TXT

Observing the Use of DNS TXT Records in the Wild

The Domain Name System is a critical piece of infrastructure that has expanded into use cases beyond its original intent. DNS TXT records are intentionally very permissive in what information can be stored there, and as a result are often used in broad and undocumented ways to support Internet security and networked applications. In this paper, we identified and categorized the patterns of TXT record use from a representative collection of resource record sets. We obtained the records from a data set containing 1.4 billion TXT records collected over a 2-year period and used pattern matching to identify record use cases present across multiple domains. We found that 92% of these records generally fall into 3 categories: protocol enhancement, domain verification, and resource location. While some of these records are required to remain public, we discovered many examples that unnecessarily reveal domain information or present other security threats (e.g., amplification attacks) in conflict with best practices in security.
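
For a flavor of the pattern matching involved, this sketch classifies TXT values by a few well-known prefixes; the rule list is a small illustrative subset, not the authors' taxonomy.

    import re

    RULES = [
        (re.compile(r'^v=spf1'), 'protocol enhancement (SPF)'),
        (re.compile(r'^v=DKIM1'), 'protocol enhancement (DKIM)'),
        (re.compile(r'^v=DMARC1'), 'protocol enhancement (DMARC)'),
        (re.compile(r'^google-site-verification='), 'domain verification (Google)'),
        (re.compile(r'^MS='), 'domain verification (Office 365)'),
    ]

    def categorize(txt_value):
        for pattern, label in RULES:
            if pattern.match(txt_value):
                return label
        return 'uncategorized'

    print(categorize('v=spf1 include:_spf.example.com ~all'))   # protocol enhancement (SPF)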

Adam Portier, Henry Carter, Charles Lever

Practical Enclave Malware with Intel SGX

Modern CPU architectures offer strong isolation guarantees towards user applications in the form of enclaves. However, Intel's threat model for SGX assumes fully trusted enclaves, and there is doubt about how realistic this is. In particular, it is unclear to what extent enclave malware could harm a system. In this work, we practically demonstrate the first enclave malware which fully and stealthily impersonates its host application. Together with poorly deployed application isolation on personal computers, such malware can not only steal or encrypt documents for extortion but also act on the user's behalf, e.g., send phishing emails or mount denial-of-service attacks. Our SGX-ROP attack uses a new TSX-based memory-disclosure primitive and a write-anything-anywhere primitive to construct a code-reuse attack from within an enclave, which is then inadvertently executed by the host application. With SGX-ROP, we bypass ASLR, stack canaries, and address sanitizer. We demonstrate that instead of protecting users from harm, SGX currently poses a security threat, facilitating so-called super-malware with ready-to-hit exploits. With our results, we demystify the enclave malware threat and lay the ground for future research on defenses against enclave malware.

Michael Schwarz, Samuel Weiser, Daniel Gruss

Chapter 3. Challenges for Multinational Automotive Companies Arising from Global Environmental Trends

Multinational automotive companies face a variety of new challenges. This is shown by studies, forecasts, and discussions at the annual Duisburg "Wissenschaftsforum Mobilität" (Scientific Forum on Mobility). It is also evident in reports and interviews in parent companies and in 90 subsidiaries of 15 German automotive manufacturers and suppliers in the four BRIC countries (Brazil, Russia, India, and China), in Mexico, and in the USA between 2013 and 2017.

Heike Proff

"For Those Who Like It Faster and Love a Bit of Jet Set"

Falko Brinkmann

No More, No Less

A Formal Model for Serverless Computing

Serverless computing, also known as Functions-as-a-Service, is a recent paradigm aimed at simplifying the programming of cloud applications. The idea is that developers design applications in terms of functions, which are then deployed on a cloud infrastructure. The infrastructure takes care of executing the functions whenever requested by remote clients, dealing automatically with distribution and scaling with respect to inbound traffic. While vendors already support a variety of programming languages for serverless computing (e.g., Go, Java, Javascript, Python), as far as we know there is no reference model yet to formally reason on this paradigm. In this paper, we propose the first core formal programming model for serverless computing, which combines ideas from both the $\lambda$-calculus (for functions) and the $\pi$-calculus (for communication). To illustrate our proposal, we model a real-world serverless system. Thanks to our model, we capture limitations of current vendors and formalise possible amendments.
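
This is not the paper's calculus, but for readers unfamiliar with the paradigm being formalized, a toy Python sketch of its two moving parts: developers register functions, and the platform decides when and on which worker to run them.

    import concurrent.futures

    registry = {}                      # the platform's function store

    def deploy(name):
        def bind(fn):
            registry[name] = fn
            return fn
        return bind

    @deploy('thumbnail')
    def thumbnail(event):
        return f"resized {event['image']}"

    def invoke(name, event):
        # The infrastructure, not the developer, picks the worker; scaling out
        # amounts to widening the pool in response to inbound traffic.
        with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
            return pool.submit(registry[name], event).result()

    print(invoke('thumbnail', {'image': 'cat.png'}))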

Maurizio Gabbrielli, Saverio Giallorenzo, Ivan Lanese, Fabrizio Montesi, Marco Peressotti, Stefano Pio Zingaro

Self-organising Coordination Regions: A Pattern for Edge Computing

Design patterns are key in software engineering, for they capture the knowledge of recurrent problems and associated solutions in specific design contexts. Emerging distributed computing scenarios, such as the Internet of Things, Cyber-Physical Systems, and Edge Computing, define a novel and still largely unexplored application context, where identifying recurrent patterns can be extremely valuable to mainstream development of language mechanisms, algorithms, architectures and supporting platforms—keeping a balanced trade-off between generality, applicability, and guidance. In this work, we present a design pattern, named Self-organising Coordination Regions (SCR), which aims to support scalable monitoring and control in distributed systems. Specifically, it is a decentralised coordination pattern for partitioned orchestration of devices (typically on a spatial basis), which provides adaptivity, resilience, and distributed decision-making in large-scale situated systems. It works through a self-organising construction of regions of space, where internal coordination activities are regulated via feedback/control flows among leaders and worker nodes. We present the pattern, provide a template implementation in the Aggregate Computing framework, and evaluate it through simulation of a case study in Edge Computing.

Roberto Casadei, Danilo Pianini, Mirko Viroli, Antonio Natali

Parallelizing Convergent Cross Mapping Using Apache Spark

Identifying the causal relationships between subjects or variables remains an important problem across various scientific fields. This is particularly important but challenging in complex systems, such as those involving human behavior, sociotechnical contexts, and natural ecosystems. By exploiting state space reconstruction via lagged embedding of time series, convergent cross mapping (CCM) serves as an important method for addressing this problem. While powerful, CCM is computationally costly; moreover, CCM results are highly sensitive to several parameter values. While best practice entails exploring a range of parameter settings when assessing causal relationships, the resulting computational burden can raise barriers to practical use, especially for long time series exhibiting weak causal linkages. We demonstrate here several means of accelerating CCM by harnessing the distributed Apache Spark platform. We characterize and report on results of several experiments with parallelized solutions that demonstrate high scalability and a capacity for over an order of magnitude performance improvement over the baseline configuration. Such economies in computation time can speed learning and the robust identification of causal drivers in complex systems.
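
A sketch of the acceleration strategy: the parameter sweep over embedding dimension E, lag tau, and library size L is embarrassingly parallel, so each setting can be mapped over a Spark cluster. Here ccm_score is a placeholder, not an actual CCM implementation.

    from pyspark import SparkContext

    def ccm_score(params):
        E, tau, L = params
        # placeholder: reconstruct the shadow manifold with lag tau and dimension E,
        # cross-map with library size L, and return a cross-map skill score
        return (params, 0.0)

    sc = SparkContext(appName='ccm-sweep')
    grid = [(E, tau, L) for E in range(2, 8)
                        for tau in range(1, 5)
                        for L in range(100, 1001, 100)]
    results = sc.parallelize(grid).map(ccm_score).collect()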

Bo Pu, Lujie Duan, Nathaniel D. Osgood

Massive-Scale Models of Urban Infrastructure and Populations

As the world becomes more dense, connected, and complex, it is increasingly difficult to answer “what-if” questions about our cities and populations. Most modeling and simulation tools struggle with scale and connectivity. We present a new method for creating digital twin simulations of city infrastructure and populations from open source and commercial data. We transform cellular location data into activity patterns for synthetic agents and use geospatial data to create the infrastructure and world in which these agents interact. We then leverage technologies and techniques intended for massive online gaming to create 1:1 scale simulations to answer these “what-if” questions about the future.

Daniel Baeder, Eric Christensen, Anhvinh Doanvo, Andrew Han, Ben F. M. Intoy, Steven Hardy, Zachary Humayun, Melissa Kain, Kevin Liberman, Adrian Myers, Meera Patel, William J. Porter III, Lenny Ramos, Michelle Shen, Lance Sparks, Allan Toriel, Benjamin Wu

An Architectural Framework Proposal for IoT Driven Agriculture

The Internet of Things is paving the way for the transition into the fourth industrial revolution with the rush to connect physical devices and systems to the internet. IoT is a promising technology to drive the agricultural industry, which is the backbone of sustainable development, especially in developing countries such as those in Africa that are experiencing rapid population growth, stressed natural resources, reduced agricultural productivity due to climate change, and massive food wastage. In this paper, we assess the challenges of adopting IoT for agriculture in developing countries. We propose a cost-effective, energy-efficient, secure, reliable, and heterogeneous (independent of the IoT protocol) three-layer architecture for IoT-driven agriculture. The first layer consists of IoT devices and is made up of IoT-driven agriculture systems such as smart poultry, smart irrigation, theft detection, pest detection, crop monitoring, food preservation, and food supply chain systems. The IoT devices are connected to the gateways by a low-power LoRaWAN network. The gateways and the local processing servers co-located with them form the second layer. The cloud layer is the third layer; it exploits the open source FIWARE platform to provide a set of public and free-to-use API specifications that come with open source reference implementations.

Godlove Suila Kuaban, Piotr Czekalski, Ernest L. Molua, Krzysztof Grochla

A Comparison of Request Distribution Strategies Used in One and Two Layer Architectures of Web Cloud Systems

Web Cloud systems are becoming more and more popular. In this article we examine HTTP request distribution strategies that can be used in one- and two-layer architectures of Web cloud systems. In particular, we compare our intelligent solutions with each other and with the solutions most commonly used in Web cloud systems. We describe modern solutions, present the test bed and the results of the conducted experiments, and finally discuss the results and present conclusions.

Krzysztof Zatwarnicki, Anna Zatwarnicka

Chapter 39. Internet of Things: An Opportunity for Advancing Universal Access

IoT enables the worldwide connection of heterogeneous things or objects, which can hence interact with each other and cooperate with their neighbors to reach common goals, using different communication technologies and communication protocol standards. IoT and related technologies can increase or reduce the gap among people. In this respect, this chapter aims to highlight the virtuous use of the IoT paradigm by providing examples of its application for enhancing universal access in different fields.

Federica Cena, Amon Rapp, Ilaria Torre

Chapter 13. Standards, Guidelines, and Trends

The World Wide Web, the Web, is technically a family of open standards that defines the protocols and formats needed for the Web to function. These technical standards are the backbone of Web accessibility. They define critical accessibility features of Web technologies, as well as interoperability with assistive technologies. At the same time, these technical standards are rapidly evolving as the Web continues to expand in volume and in functionality, as different industry and technology sectors continue to converge onto the Web, and as our expectations for the Web continue to expand. Recent advances in Web technologies include enhanced support for mobile content and applications, real-time communication, immersive environments, multimedia, and automotive systems. Concurrently, Web-based applications are increasingly making use of advances in Artificial Intelligence (AI), Internet of Things (IoT), and Open Data. While such technological advances provide immense opportunities for the inclusion of people with disabilities, they require dedicated efforts to understand the diverse accessibility needs and to develop clear accessibility requirements for designers and developers of digital content, tools, and technologies for desktop and mobile devices. The World Wide Web Consortium (W3C) is the leading standards body for the Web and has a long history of commitment to accessibility. The W3C Web Accessibility Initiative (WAI) utilizes a multi-stakeholder consensus approach to pursue the goal of ensuring accessibility for people with disabilities on the Web. This includes designing and implementing particular accessibility features in core Web standards such as HTML and CSS, as well as developing and maintaining a set of Web accessibility guidelines, which are recognized internationally by business and government. This participatory effort involving representation of people with disabilities, industry, research, public bodies, and other experts promises to address evolving trends on the Web to help ensure accessibility for people with disabilities.

Shadi Abou-Zahra, Judy Brewer

Current Situation and Countermeasures of Chinese Street Furniture Design in Intelligent Development Context

This paper first explains the concept and classification of "urban furniture" and analyzes the current situation of Chinese urban furniture, concluding that traditional Chinese urban furniture has been abandoned because it no longer suits the needs of the times. Drawing on recent discussions and reports of Chinese government meetings, and based on the new-era goal of "achieving urban intelligent management", it concludes that in the process of building "smart cities", urban furniture development will follow an "intelligent" trend. Combined with the concept of intelligent products, examples of smart city furniture in developed countries such as the United States and Singapore are presented. Finally, it is concluded that urban furniture design in the context of intelligent development should consider the product, service, and system layers of urban furniture products.

Tongwen Wang, Wuzhong Zhou

Technology Start-Up Firms’ Management of Data Security and Trust in Collaborative Work with Third-Parties in a Developing Economy

This study seeks to provide an understanding of how data security and trust are managed in the collaborative work between technology start-up firms in developing economies and third-party entities. Using qualitative data from Ghana, analyzed thematically, it was found that the measures taken to ensure the security and privacy of client datasets vary in intensity among firms. In collaborating with third parties, the firms largely depend on industry best practices, policies, and service-level agreements, and do not attempt to integrate collaborators' policies with their own except where the inclusion or elimination of certain clauses is a collaborative prerequisite. The firms also have trust criteria which they use to pre-qualify the third-party entities with whom they collaborate. It is concluded that technology start-ups possess attributes that show their core competencies and the quality of their deliveries, but might require a regulatory body to monitor their operations.

Mohammed-Aminu Sanda

Linear Programming and Cloud Computing for Pharmaceutical Supply Chains

Linear Programming (LP) is a well-known, powerful method for solving many problems in industrial and scientific applications. The objective of the present study is to propose the resolution of users' LP problems via cloud computing, i.e., without the usual installation of computer programs for the purpose. We also compare our approach with peers' approaches and online solvers on typical problems (transportation, transshipment, assignment, scheduling) applicable to numerous activities, including pharmaceutical Supply Chains (SC). Thus, instead of providing software for the user to download and install, we offer a website where users can enter their problem data and obtain solutions. Some websites propose comparable solutions, but in impractical ways. The proposed approach provides practical and quick access to the advantages of cloud computing, which will encourage users and practitioners from pharmaceutical SC and many other related areas to delve further into the subject.
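
As an example of the problem class mentioned above, here is a small balanced transportation problem solved with scipy's linprog; a cloud LP service would run such a solver server-side. The costs and capacities are made up for illustration.

    from scipy.optimize import linprog

    # 2 warehouses -> 3 pharmacies; c[3*i + j] is the unit shipping cost from i to j
    c = [4, 6, 9, 5, 3, 8]
    A_eq = [
        [1, 1, 1, 0, 0, 0],   # warehouse 0 ships exactly its supply
        [0, 0, 0, 1, 1, 1],   # warehouse 1 ships exactly its supply
        [1, 0, 0, 1, 0, 0],   # pharmacy 0 receives exactly its demand
        [0, 1, 0, 0, 1, 0],   # pharmacy 1 receives exactly its demand
        [0, 0, 1, 0, 0, 1],   # pharmacy 2 receives exactly its demand
    ]
    b_eq = [50, 70, 40, 45, 35]   # supplies (50 + 70) match demands (40 + 45 + 35)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
    print(res.x, res.fun)         # optimal shipment plan and its total cost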

Miguel Casquilho, João Luís de Miranda, Miguel Barros

Chapter 2. Requirements for the Sales Organization of the Future

Rackham and DeVincentis (Rethinking the Sales Force, The McGraw-Hill Companies, New York, 1999) aptly described the existing discrepancy between developments in sales and in other parts of the company using the example of the fictional sales manager Mr. Winkel. He sleeps for 30 years and, on returning to his job, finds that although the world around him has changed, almost nothing about his job has. He concludes that selling will probably always stay the same.

Marco Wunderlich, Martin Hinsch, Jens Olthoff

Nonsubmodular Optimization

Nonsubmodular optimization is a hot research topic in the study of nonlinear combinatorial optimization. We discuss several approaches to such optimization problems, including the supermodular degree, curvature, algorithms based on DS decomposition, and the sandwich method.
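
For orientation, the sketch below records, in our own notation, the standard definitions these approaches build on: submodularity as diminishing returns, and the DS decomposition of a general set function as a difference of two submodular functions.

```latex
% Submodularity (diminishing returns): for all A \subseteq B and x \notin B,
f(A \cup \{x\}) - f(A) \;\ge\; f(B \cup \{x\}) - f(B).
% DS decomposition: a set function f on a finite ground set can be written as
f(S) \;=\; g(S) - h(S), \qquad g, h \text{ submodular},
% which lets submodular machinery (e.g., the sandwich method's upper and
% lower bounds) be applied to the nonsubmodular objective f.
```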

Weili Wu, Zhao Zhang, Ding-Zhu Du

A Holistic Communication Network for Efficient Transport and Enhanced Driving via Connected Cars

Growing markets and novel technologies for cooperative and integrated vehicular communication are offering excellent opportunities for innovative business and coordinated research and standardization worldwide. Network operators as well as manufacturers of cars and devices for automotive connectivity are heading towards a next-generation ecosystem in the framework of 5G, permitting a range of new applications. These shall contribute to improved traffic safety by reducing the number of accidents or even avoiding them, to a higher level of traffic efficiency by enabling better road utilization and reduced traffic congestion, to a significant reduction in energy consumption and CO2 emissions, and to increased comfort for both drivers and passengers in cars. Such a vision can only be achieved by 5G-enabled connectivity and cooperation between vehicles and infrastructure on the basis of a convergent, reliable, secure, and robust communications network that enables real-time traffic control support. This paper reports on the approach selected by the project 5G NetMobil to enable reliable, secure, and robust connectivity between vehicles, other road users, and infrastructure for real-time applications of a cooperative intelligent transport system, forming a new kind of traffic- and transport-related community.

Dirk von Hugo, Gerald Eichler, Thomas Rosowski

Promoting High-Quality Teachers Resource Sharing and Rural Small Schools Development in the Support of Informational Technology

Rural small schools in China have characteristics including remote location, small scale, and poor conditions in infrastructure, teaching quality, and teachers. Compared with other resources, the shortage of teachers is the biggest problem in rural small schools. To address this problem and help offer sufficient, high-quality courses in rural small schools, sharing high-quality teacher resources is the most significant way forward. In this paper, synchronous interactive hybrid classroom teaching and synchronous interactive special delivery classroom teaching are put forward to facilitate the sharing of high-quality teacher resources in experimental areas. The article introduces a case of sharing high-quality teacher resources through information technology from an urban school to a rural school in Xian’an district, Hubei province, and concludes with a discussion of the findings on sharing teacher resources to provide equal educational opportunities between urban and rural areas.

Mingzhang Zuo, Wenqian Wang, Yang Yang

RenewKube: Reference Net Simulation Scaling with Renew and Kubernetes

When simulating reference nets, the size (places, transitions; memory and CPU consumption) of the simulation is usually not known before actual runtime. This behavior originates from the concept of net instances, which are similar to objects in object-oriented programming. The Renew simulator supports very basic distribution, but the manual infrastructural setup for simulations exceeding the capabilities of one machine has until now been left to the modeler. This work presents the RenewKube tool, a ready-to-use Kubernetes- and Docker-based solution that allows automated scaling of simulation instances to be controlled from within the net running in the Renew simulator.
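
As a hedged sketch of the kind of automated scaling RenewKube controls, the snippet below resizes a simulation-worker Deployment with the official Kubernetes Python client; the deployment name, namespace, and replica count are hypothetical, and this is not RenewKube's actual implementation.

```python
# Scale a (hypothetical) simulation-worker Deployment on Kubernetes.
from kubernetes import client, config

def scale_simulation(replicas: int,
                     name: str = "renew-simulation",    # hypothetical name
                     namespace: str = "default") -> None:
    config.load_kube_config()            # use local kubeconfig credentials
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},   # set the desired pod count
    )

if __name__ == "__main__":
    scale_simulation(4)                  # grow the simulation to 4 worker pods
```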

Jan Henrik Röwekamp, Daniel Moldt

Chapter 9. A Comparative Study in the Application of IoT in Health Care: Data Security in Telemedicine

Internet of Things (IoT) is the backbone of telemedicine, and its data security has become a significant concern that requires further attention. Therefore, this study was conducted with the aim of analyzing telemedicine systems, focusing on data security measures. Thirty peer-reviewed research studies published in 2018 were reviewed and compared according to certain parameters, viz., algorithms, IoT sensors, data encryption ability, communication mechanisms, mobile accessibility, protocols, software, and platforms. The results illustrate that transmitting sensitive medical data over the Internet has been identified as a major threat, and solutions such as ciphertext-policy attribute-based encryption and Secure Better Portable Graphics (SBPG) architecture have been developed to authenticate and protect data by concurrent encryption and watermarking. Furthermore, the comparison reveals that data encryption is the most frequently used secure data transmission method, with 32% of the reviewed studies focusing on it. Regarding the most frequently used technologies, the Raspberry Pi3 Edge platform (with a usage of 60%), the TCP/IP protocol (with a usage of 38%), and ECG and temperature sensors (with a usage of 20%) have been discussed. Additionally, telemedicine has focused on standalone systems, and, in this context, integrated systems with micro-services are yet to be improved. Therefore, this study compares and analyzes the significant technical trends, security trends, widely used IoT sensors, platforms, and protocols; the aim is to help researchers gain better insight into telemedicine to improve healthcare services by maximizing the capabilities of the Internet of Things.

G. A. Pramesha Chandrasiri, Malka N. Halgamuge, C. Subhashi Jayasekara

Chapter 4. Multidisciplinary Intel Fusion Technique for Proactive Cyber-Intelligence Model for the IoT

Cyber-Threat Intelligence (CTI) is an acknowledged concept both by professionals and academia. This well-known notion was synthesized through interdisciplinary and multidisciplinary subspecialties of Cyber-Intelligence (CI). The CI concept focuses on extracting pure intelligence reports and cyber-perspectives through available information sources, including the deep/dark web. It also discloses possible threats, risks, attack campaigns, espionage, and exposure operations. Focusing on the clear and dark side of the Internet is not enough to feed CI; for an accurate and richer stream of information, the Internet of Things (IoT) concept needs to be clarified and integrated into the entire CI lifecycle. The process includes extracting information through various sources using different methodologies and techniques and by applying the proposed aggregation function/methodology models for continuous development of the CI lifecycle. This chapter focuses on the fundamentals of the CI concentrations such as Open-Source Intelligence (OSINT), Human Intelligence (HUMINT), Technical Intelligence (TECHINT), and the IoT vision in order to propose a proactive CYBer-INTelligence (CYBINT) aggregation approach model. The proposed model depends on practical tools and approaches that are part of proactive defense and analysis strategies.

Ugur Can Atasoy, Arif Sari

Chapter 11. A Novel Privacy Preserving Scheme for Cloud-Enabled Internet of Vehicles Users

The Internet of Vehicles (IoV) is the Internet of Things where “things” refer to vehicles. Due to the huge amount of shared data and the various types of offered services, relying on the cloud as infrastructure is fundamental. Most road services, regardless of whether they are of an infotainment or a safety type, use location and identity information. As a consequence, the privacy of road users is threatened, and preserving such privacy and safety has become essential. This chapter presents an overview of existing privacy-preserving strategies and develops a proposal for a novel solution which allows its users to benefit from cloud-enabled IoV location-based services and safety applications anonymously and securely. The performance of the proposed solution is studied by simulating it against a modeled global passive attacker and comparing it to a state-of-the-art solution. The results are promising and outperform those of the compared solution. The scheme ensures that privacy is preserved at more than 70% against semantic, syntactic, observation mapping, and linkage mapping attacks.

Leila Benarous, Benamar Kadri

Chapter 2. Consumer Behaviour and Marketing Fundamentals for Business Data Analytics

This chapter provides the reader with a brief introduction to the basics of marketing. The intention is to help a “non-marketer” understand what is needed in business and consumer analytics from a marketing perspective and to continue “bridging the gap” between data scientists and business thinkers. A brief introduction to the discipline of marketing is presented, followed by several topics that are crucial for understanding marketing and computational applications within the field. A background on market segmentation and targeting strategies is followed by a description of typical bases for segmenting a market. Further, consumer behaviour literature and theory are discussed, as well as current trends for businesses regarding consumer behaviour.

Natalie Jane de Vries, Pablo Moscato

Chapter 13. Memetic Algorithms for Business Analytics and Data Science: A Brief Survey

This chapter reviews applications of Memetic Algorithms in the areas of business analytics and data science. The approach originates from the need to address optimization problems that involve combinatorial search processes, some of which come from operations research, management science, artificial intelligence, and machine learning. The methodology has developed considerably since its beginnings and is now being applied to a large number of problem domains. This work gives a historical timeline of events to explain the current developments and, as a survey, emphasizes the large number of applications in business and consumer analytics published between January 2014 and May 2018.

Pablo Moscato, Luke Mathieson

Chapter 21. Hotel Classification Using Meta-Analytics: A Case Study with Cohesive Clustering

We present a new clustering algorithm for handling complexities encountered in analysing data sets of hotel ratings and analyse its performance in a clustering case study. In the setting we address, business constraints and coordinates (among other individual attributes of objects) are unknown and only distances between objects are available to the clustering algorithm, a situation that arises in a wide range of clustering applications. Our algorithm constitutes an application of meta-analytics, in which we tailor a metaheuristic procedure to address a challenging problem at the intersection of predictive and prescriptive analytics. Our work builds on and extends the ideas of our clustering algorithm introduced in previous work, which employs the Tabu Search metaheuristic to assure clusters exhibit a property we call cohesiveness. The special characteristics of the present hotel classification problem are handled by integrating our previous method with a new form of hierarchical clustering. Our computational analysis discloses that our algorithm obtains clusters that exhibit greater cohesiveness than those produced by the classical K-means method.
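
For contrast with the cohesive Tabu Search procedure (not reproduced here), the sketch below shows the baseline setting the abstract describes: clustering objects from a pairwise distance matrix alone, using standard hierarchical clustering; the 4x4 distances are made up.

```python
# Cluster objects given only pairwise distances (no coordinates).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

D = np.array([[0.0, 1.0, 4.0, 5.0],      # symmetric distances between 4 hotels
              [1.0, 0.0, 4.5, 5.5],
              [4.0, 4.5, 0.0, 1.2],
              [5.0, 5.5, 1.2, 0.0]])

Z = linkage(squareform(D), method="average")     # condensed distances in
labels = fcluster(Z, t=2, criterion="maxclust")  # cut dendrogram into 2 clusters
print(labels)                                    # e.g., [1 1 2 2]
```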

Buyang Cao, Cesar Rego, Fred Glover

Feudalism or Enlightenment? Options for the Digital Society

Digitalization is changing society and bringing about a shift in values that makes a societal negotiation of new boundaries necessary. This contribution places the discourse in the field of tension between fictitious ideal positions, presented as (digital) feudalism and (digital) enlightenment. Feudalism stands symbolically for a form of rule in which individual estates dominate, in this case the providers of digital products, while enlightenment stands for the position of the informed and free individual. Using topics such as data protection, privacy, and surveillance, together with examples from the digital economy, the contribution shows in the form of theses what kind of societal debate is emerging. It is important that the question of the digital society's path be answered by enlightened people, rather than dictated by a development that can afterwards only be taken note of.

Clemens Heinrich Cap

Chapter 12. Future Trends in Supply Chain

In this chapter, we briefly discuss future trends in supply chain management. In the brief history of SCM we have already observed several dramatic changes in practice, and over the next 15 or so years we are likely to see yet more dramatic shifts. One thing is certain: as supply chains become ever more network-based and virtual, and as worldwide mega-trends reshape the trade and business landscape, the role of supply chain and logistics engineering in safeguarding a healthy, environmentally friendly future will become even more critical.

Syed Abdul Rehman Khan, Zhang Yu

4. Sustainable, Smart, and Data-Driven Approaches to Urbanism and their Integrative Aspects: A Qualitative Analysis of Long-Lasting Trends

Smart sustainable/sustainable smart cities, a defining context for ICT for sustainability, have recently become the leading global paradigm of urbanism. With this position, they are increasingly gaining traction and prevalence worldwide as a promising response to the mounting challenges of sustainability and the potential effects of urbanization. In the meantime, the research in this area is garnering growing attention and rapidly burgeoning, and its status is consolidating as one of the most enticing areas of investigation today. A large part of research in this area focuses on exploiting the potentials and opportunities of advanced technologies and their novel applications, especially big data computing, as an effective way to mitigate or overcome the issue of sustainable cities and smart cities being extremely fragmented as landscapes and weakly connected as approaches. In this context, one of the most appealing strands of research in the domain of smart sustainable urbanism is that which is concerned with futures studies related to the planning and development of new models for smart sustainable cities. Not only in the futures studies using a backcasting approach to strategic planning and development, but also in those using other approaches, is trend analysis a necessary step to perform and a critical input to the scenario analysis as part of such studies. With that in regard, this chapter aims to provide a detailed qualitative analysis of the key forms of trends shaping and driving the emergence, materialization, and evolvement of the phenomenon of smart sustainable cities as a leading paradigm of urbanism, as well as to identify the relevant expected developments related to smart sustainable urbanism. It is more likely that these forms of trends reflect a congeries of long-lasting forces behind the continuation of smart sustainable cities as a set of multiple approaches to, and multiple pathways to achieving, smart sustainable urban development. As part of the futures studies related to smart sustainable city planning and development using a backcasting methodology, both the trends and expected developments are key ingredients of, and crucial inputs for, analyzing different alternative scenarios for the future or long-term visions pertaining to desirable sustainable futures in terms of their opportunities, potentials, environmental and social benefits, and other effects. This study serves to provide a necessary material for scholars, researchers, and academics, as well as other futurists, who are in the process of conducting, or planning to carry out, futures research projects or scholarly backcasting endeavors related to the field of smart sustainable urbanism.

Simon Elias Bibri

5. The Underlying Technological, Scientific, and Structural Dimensions of Data-Driven Smart Sustainable Cities and Their Socio-Political Shaping Factors and Issues

We are moving into an era where instrumentation, datafication, and computation are routinely pervading the very fabric of cities, coupled with the interlinking, integration, and coordination of their systems and domains. As a result, vast troves of contextual and actionable data are being produced and used to operate, regulate, manage, and organize urban life. This data-driven approach to urbanism has recently become the mode of production for smart sustainable cities, which are accordingly becoming knowable, tractable, and controllable in new dynamic ways, responsive to the data generated about them by reacting to the analytical outcome of many domains of urban life in terms of enhancing and optimizing operational functioning, planning, design, development, and governance in line with the goals of sustainable development. However, topical studies tend to deal mostly with data-driven smart urbanism while barely exploring how this approach can improve and advance sustainable urbanism under what is labeled ‘data-driven smart sustainable cities’ as a leading paradigm of urbanism. Having a threefold aim, this chapter first examines how data-driven smart sustainable cities are being instrumented, datafied, and computerized so as to improve, advance, and maintain their contribution to the goals of sustainable development through enhanced practices. Secondly, it highlights and substantiates the real potential of big data technology for enabling such contribution by identifying, synthesizing, distilling, and enumerating the key practical and analytical applications of this advanced technology in relation to multiple urban systems and domains with respect to operations, functions, services, designs, strategies, and policies. Thirdly, it proposes, illustrates, and describes a novel architecture and typology of data-driven smart sustainable cities. This chapter intervenes in the existing scholarly conversation by calling attention to a relevant object of study that previous scholarship has neglected and whose significance for the field of urbanism is well elucidated, as well as by bringing new insights to and informing the ongoing debate on smart sustainable urbanism in light of big data science and analytics. This work serves to bring data-analytic thinking and practice to smart sustainable urbanism, and seeks to promote and mainstream its adoption, in addition to drawing special attention to the crucial role and enormous benefits of big data technology and its novel applications as to transforming the future form of such urbanism.

Simon Elias Bibri

2. The Leading Smart Sustainable Paradigm of Urbanism and Big Data Computing: A Topical Literature Review

The big data revolution is set to erupt in both smart cities and sustainable cities throughout the world. This is manifested in bits meeting bricks on a vast scale as instrumentation, datafication, and computation are routinely pervading urban environments. As a result, smart sustainable urbanism is becoming more and more data-driven. Explicitly, big data computing and the underpinning technologies are drastically changing the way both smart cities and sustainable cities are understood, operated, managed, planned, designed, developed, and governed in relation to sustainability in the face of urbanization. This implies that urban systems are becoming much more tightly integrated and urban domains much more highly coordinated while more holistic views and synoptic city intelligence can now be provided thanks to the possibility of drawing together and interlinking urban big data as well as reducing urban life to a form of logic and calculative procedures on the basis of powerful computational algorithms. These data-driven transformations are in turn being directed for improving, advancing, and maintaining the contribution of smart sustainable/sustainable smart cities to the goals of sustainable development. This chapter provides a comprehensive, state-of-the-art review of smart sustainable/sustainable smart cities as a leading paradigm of urbanism in terms of the underlying foundational components and assumptions, research status, issues and debates, research opportunities and challenges, future practices and horizons, and technological trends and developments. As to the findings, this chapter shows that smart sustainable urbanism involves numerous issues that are unsolved, largely ignored, or underexplored from an applied theoretical perspective. And, a large part of research in this area focuses on exploiting the potentials of big data technologies and their novel applications as an effective way to mitigate or overcome the issue of sustainable cities and smart cities being extremely fragmented as landscapes and weakly connected as approaches. The comprehensive overview of and critique on existing work on smart sustainable urbanism provides a valuable reference for researchers and practitioners in related research communities and the necessary material to inform these communities of the latest developments in the area of smart sustainable urban planning and development. The outcome of this topical review will help strategic city stakeholders to understand what they can do more to advance sustainability based on big data technology and its novel applications, and also give policymakers an opportunity to identify areas for further improvement while leveraging areas of strength with regard to the future form of sustainable smart urbanism in the era of big data.

Simon Elias Bibri

8. Advancing Sustainable Urbanism Processes: The Key Practical and Analytical Applications of Big Data for Urban Systems and Domains

Sustainable cities have been the leading global paradigm of urbanism. Undoubtedly, sustainable development has significantly positively influenced city planning and development since the early 1990s. This pertains to the immense opportunities that have been explored and, thus, the enormous benefits that have been realized from the planning and development of sustainable urban forms as an instance of sustainable cities. However, the existing models of such forms, especially compact cities and eco-cities, are associated with a number of problems, issues, and challenges. This mainly involves the question of how such forms should be monitored, understood, and analyzed to improve, advance, and maintain their contribution to sustainability and hence to overcome the kind of wicked problems, intractable issues, and complex challenges they embody. This in turn brings us to the current question related to the weak connection between and the extreme fragmentation of sustainable cities and smart cities as approaches and landscapes, respectively, despite the great potential of advanced ICT for, and also its proven role in, supporting sustainable cities in improving their performance under what is labeled ‘smart sustainable cities.’ This integrated approach to urbanism takes multiple forms of combining the strengths of sustainable cities and smart cities based on how the concept of smart sustainable cities can be conceptualized and operationalized. In this respect, there has recently been a conscious push for cities across the globe to be smarter and thus more sustainable by particularly utilizing big data technology and its applications in the hopes of reaching the optimal level of sustainability. Having a twofold aim, this chapter firstly provides a comprehensive, state-of-the-art review of the domain of sustainable urbanism, with a focus on compact cities and eco-cities as models of sustainable urban forms and thus instances of sustainable cities, in terms of research issues and debates, knowledge gaps, challenges, opportunities, benefits, and emerging practices. It secondly highlights and substantiates the real, yet untapped, potential of big data technology and its novel applications for advancing sustainable cities. In so doing, it identifies, synthesizes, distills, and enumerates the key practical and analytical applications of big data technology for multiple urban domains. This study shows that sustainable urban forms involve limitations, inadequacies, difficulties, fallacies, and uncertainties in the context of sustainability, in spite of what has been realized over the past three decades or so within sustainable urbanism. Nevertheless, as also revealed by this study, tremendous opportunities are available for exploiting big data technology and its novel applications to smarten up sustainable urban forms in ways that can improve, advance, and sustain their contribution to the goals of sustainable development by optimizing and enhancing their operations, functions, services, designs, strategies, and policies across multiple urban domains, as well as by finding answers to challenging analytical questions and transforming the way knowledge can be developed and applied.

Simon Elias Bibri

3. The Theoretical and Disciplinary Underpinnings of Data-Driven Smart Sustainable Urbanism: An Interdisciplinary and Transdisciplinary Perspective

Interdisciplinarity and transdisciplinarity have become a widespread mantra for research within diverse fields, accompanied by a growing body of academic and scientific publications. The research field of smart sustainable/sustainable smart urbanism is profoundly interdisciplinary and transdisciplinary in nature. It operates out of the understanding that advances in knowledge necessitate pursuing multifaceted questions that can only be resolved from the vantage point of interdisciplinarity and transdisciplinarity. Indeed, related research problems are inherently too complex and dynamic to be addressed by single disciplines. In addition, this field does not have a unitary approach in terms of a uniform set of concepts, theories, and disciplines, as it does not represent a specific direction of research but rather multiple directions. These are analytically quite diverse. Regardless, interdisciplinarity and transdisciplinarity as scholarly perspectives apply, by extension, to any conceptual, theoretical, and/or disciplinary foundations underpinning this field. Such perspectives in this chapter represent a rather topical and organizational approach as justified and determined by the interdisciplinary and transdisciplinary nature of the research field of smart sustainable urbanism. In this subject, additionally, theories from academic and scientific disciplines constitute a foundation for action: data-driven smart sustainable urbanism and related urban big data development as informed by data science practiced within the fields of urban science and urban informatics, as well as by sustainability science and sustainable development. In light of this, it is of relevance and importance to develop a foundational approach consisting of the relevant concepts, theories, discourses, and academic and scientific disciplines that underpin smart sustainable urbanism as a field for research and practice. With that in regard, this chapter endeavors to systematize this complex field by identifying, distilling, mixing, fusing, and thematically analytically organizing the core dimensions of this foundational approach. The primary intention of setting out such an approach is to conceptually and analytically relate urban planning and development, sustainable development, and urban science while emphasizing why and the extent to which sustainability and big data computing have particularly become influential in urbanism in modern society. Being interdisciplinary and transdisciplinary in nature, such an approach is meant to further highlight that this scholarly character epitomizes the orientation and essence of the research field of smart sustainable urbanism in terms of its pursuit and practice. Moreover, its value lies in fulfilling one primary purpose: to explain the nature, meaning, implications, and challenges pertaining to the multifaceted phenomenon of smart sustainable urbanism. This chapter provides an important lens through which to understand a set of theories that is of high integration, fusion, applicability, and influence potential in relation to smart sustainable urbanism.

Simon Elias Bibri

7. On the Sustainability and Unsustainability of Smart and Smarter Urbanism and Related Big Data Technology, Analytics, and Application

There has recently been a conscious push for cities across the globe to be smart and even smarter and thus more sustainable by developing and implementing big data technologies and their applications across various urban domains in the hopes of reaching the required level of sustainability and improving the living standard of citizens. Having gained momentum and traction as a promising response to the needed transition toward sustainability and to the challenges of urbanization, smart and smarter cities as urban planning and development strategies (or urbanism approaches) are increasingly adopting the advanced forms of ICT to improve their performance in line with the goals of sustainable development and the requirements of urban growth. One such form that has tremendous potential to enhance urban operations, functions, services, designs, strategies, and policies in this direction is big data computing and its application. It was not until recently that the realization grew about the benefits of exploiting the big data deluge and its extensive sources to better monitor, understand, analyze, and plan smart and smarter cities to improve their contribution to sustainability. However, topical studies on big data applications in the context of smart and smarter cities tend to deal largely with economic growth and the quality of life in terms of service efficiency and betterment, while overlooking and barely exploring the untapped potential of such applications for advancing sustainability. In fact, smart and smarter cities raise several issues and involve significant challenges when it comes to their development and implementation in the context of sustainability. This chapter provides a comprehensive, state-of-the-art review and synthesis of the field of smart and smarter cities in regard to sustainability and related big data analytics and its application in terms of the underlying foundations and assumptions, research issues and debates, opportunities and benefits, technological developments, emerging trends, future practices, and challenges and open issues. This study shows that smart and smarter cities are associated with misunderstanding and deficiencies as regards their incorporation of, and contribution to, sustainability, respectively. Nevertheless, as also revealed by this study, tremendous opportunities are available for utilizing big data applications in smart cities of the future or smarter cities to improve their contribution to the goals of sustainable development through optimizing and enhancing urban operations, functions, services, designs, strategies, and policies, as well as finding answers to challenging analytical questions and advancing knowledge forms. However, just as there are immense opportunities ahead to embrace and exploit, there are enormous challenges ahead to address and overcome in order to achieve a successful implementation of big data technology and its novel applications in such cities. These findings will help strategic city stakeholders understand what they can do more to advance sustainability based on big data applications, and also give policymakers an opportunity to identify areas for further improvement while leveraging areas of strength with regard to the future form of sustainable smart urbanism.

Simon Elias Bibri

Towards Health 4.0: e-Hospital Proposal Based on Industry 4.0 and Artificial Intelligence Concepts

The implementation of the most recent technologies is a requirement in this fast-growing and competitive world, especially for the improvement and development of healthcare, which is fundamental to enhancing quality of life. This article applies AI and Industry 4.0 concepts to the optimization of a hospital, using an Emergency Department (ED) as a case study. The proposal develops a current e-Hospital concept based on Health 4.0 features, in which computational ED models allow bottlenecks in the workflow to be detected and avoided. Those blockages are automatically removed using an improved shift-management proposal based on control theory, AI, and telemedicine. The results show an optimization in the use of resources and a reduction in the length of stay, improving service quality. The simulation tools allow novel e-health proposals to be tested and validated.

Camilo Cáceres, Joao Mauricio Rosário, Dario Amaya

Group ID-Based Encryption with Equality Test

In the era of cloud computing, how to search over encrypted data has been studied extensively. ID-based encryption with equality test (IBEET), a type of searchable encryption, allows a tester (insider) to check whether two ciphertexts encrypted under different identities contain the same message. Due to its equality test functionality, IBEET has many interesting applications, such as personal health record systems. In this paper, we first introduce a group mechanism into IBEET and propose a new primitive, namely group ID-based encryption with equality test (G-IBEET). Through the group mechanism, G-IBEET supports group-granularity authorization: a group administrator, who is trusted by the group users, issues the insider a group trapdoor specifying that it can only compare ciphertexts of the group users and cannot compare them with ciphertexts of any other users. Moreover, the workload of generating and managing trapdoors can be greatly reduced thanks to the group-granularity authorization. Against the insider attack that exists in most IBEET schemes, whose goal is to recover the message from a ciphertext by mounting an offline message recovery attack, G-IBEET provides a nice solution through the group mechanism. We propose a G-IBEET scheme using bilinear pairings, prove its security in the random oracle model, and show that the proposed scheme has a more efficient test algorithm.
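
In our own (hedged) notation, the functionality described above can be summarized as follows: for group members id_1, id_2 with ciphertexts C_i = Enc(id_i, m_i) and a group trapdoor td_G issued by the group administrator,

```latex
\mathsf{Test}\bigl(C_1,\, C_2,\, \mathsf{td}_G\bigr) = 1
\;\Longleftrightarrow\;
m_1 = m_2,
```

while td_G gives the tester no comparison power over ciphertexts of users outside the group. This is a formalization of the abstract's description, not the paper's exact syntax.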

Yunhao Ling, Sha Ma, Qiong Huang, Ru Xiang, Ximing Li

Function-Dependent Commitments from Homomorphic Authenticators

In cloud computing, delegated computing raises the security issue of guaranteeing data authenticity during a remote computation. In this context, the recently introduced function-dependent commitments (FDCs) are the only approach providing fast correctness verification, information-theoretic input-output privacy, and strong unforgeability. Homomorphic authenticators—the established approach to this problem—do not provide information-theoretic privacy and always reveal the computation’s result upon verification, thus violating output privacy. Since many homomorphic authenticator schemes already exist, we investigate the relation between them and FDCs to clarify how existing schemes can be supplemented with information-theoretic output privacy. Specifically, we present a generic transformation turning any structure-preserving homomorphic authenticator scheme into an FDC scheme. This facilitates the design of multi-party computation schemes with full information-theoretic privacy. We also introduce a new structure-preserving, linearly homomorphic authenticator scheme suitable for our transformation. It is the first homomorphic authenticator scheme that is both context hiding and structure-preserving, and the first structure-preserving homomorphic authenticator scheme to achieve efficient verification.

Lucas Schabhüser, Denis Butin, Johannes Buchmann

Chapter 5. The Technology Juggernaut

Any discussion about globalization is incomplete without a discussion of technological evolution. After all, technology acted as a catalyst for globalization and helped it spread at an unprecedented pace. Even today, it is technology that is making the world a smaller place to live in.

Ricardo Ernst, Jerry Haar

Technological Model of Facial Recognition for the Identification of Patients in the Health Sector

The identification of patients within medical institutions is an important issue for providing better care in health centers and avoiding impersonation. The risk of medical identity theft is an important factor in patient safety. Technologies such as fingerprints, atrial biometry, or electrocardiograms are improving safety measures. However, biometric counterfeiting methods have increased and violated the security of these technological models. This article proposes a technological model of facial recognition to efficiently identify patients using cognitive services in medical centers. The technological model was implemented in the UROGINEC clinic as a proof of concept. Patient identification was successful, with a precision of 95.82% in an average of 3 s. This allowed the clinic to prevent identity theft with alert messages and improved the user experience within the medical institution.
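
As an illustrative sketch of face-based patient verification, the snippet below uses the open-source face_recognition library as a stand-in for the cognitive services in the article's model; the file names and the tolerance threshold are assumptions.

```python
# Verify an arriving patient against an enrolled reference photo.
import face_recognition

enrolled = face_recognition.load_image_file("patient_enrolled.jpg")
arriving = face_recognition.load_image_file("patient_at_desk.jpg")

# Assumes exactly one face is visible in each image.
enrolled_enc = face_recognition.face_encodings(enrolled)[0]   # 128-d embedding
arriving_enc = face_recognition.face_encodings(arriving)[0]

match = face_recognition.compare_faces([enrolled_enc], arriving_enc,
                                       tolerance=0.6)[0]
if match:
    print("Identity confirmed: proceed with check-in")
else:
    print("Possible impersonation: raise an alert")   # alert-message path
```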

Diego La Madrid, Martín Barriga, Pedro Shiguihara

Context Analysis of Teachers’ Learning Design Practice Through Activity Theory, Distributed Cognition, and Situated Cognition

The objective of this work was to analyze how different theoretical frameworks encompass teachers’ learning design practice. We compared the use of the Activity Theory, Distributed Cognition, and Situated Cognition theoretical frameworks for data analysis and interpretation. Activity Theory is useful for studying technology in activities, since the unit of analysis is an activity, which is always instrumentally mediated. The value of the Distributed Cognition framework lies in exploring how mental schemas of the mind can be represented by structures outside the mind. Finally, Situated Cognition focuses on analyzing the continuous relationship integrating teachers and environmental instruments acting on class planning, to understand the flow of the activities involved in this real scenario. The analysis from different theoretical points of view gave the designers a better understanding of the activity structure, of how the artifacts support distributed cognitive activities, and of how the artifact is embodied in the planning activities. The present work aims to contribute to a better understanding of teachers’ planning practices, in order to influence the design of lesson-planning software services to be integrated into teaching practice.

Leandro M. Queiros, Carlos J. P. Silva, Alex S. Gomes, Fernando Moreira

Chapter 12. Regulating Artificial Intelligence

Air transport is a technology-intensive and capital-intensive industry. At the same time, one must not ignore the fact that it is an industry responsible for the safety and security of humans, and one which necessarily involves emotional intelligence and empathy for air passengers. As the previous discussions have shown with regard to human trafficking by air and related issues, air transport is no longer the simple carriage by air of a passenger from one point to another, but a composite product that takes care of vulnerable passengers. Through some of its initiatives, ICAO has demonstrated this fact.

Ruwantissa Abeyratne

Chapter 2. Getting Started with Azure Functions

Over the last few decades, the process of innovation has changed drastically due to advancements in technology. Microsoft’s vision of Cloud First, Mobile First has triggered a massive avalanche of ideas from individuals by utilizing the power of the Microsoft Azure cloud. Azure’s IaaS, PaaS, and SaaS offerings have contributed to the success of many researchers, inventors, and developers by supporting their mission-critical domain requirements.

Rami Vemula

Chapter 1. A New Era of Serverless Computing

Modern-day software engineering practices have evolved due to fundamental improvements in automation techniques, computational speed, and operational strategies. The latest technologies and cloud platforms have empowered software organizations and professionals to quickly adapt to new engineering practices without major technical and operational disruptions. The modernization of core software engineering practices like Agile development, customer-focused testing, the Continuous Integration-Validation-Deployment cycle, predictive analytics, and more has not only changed the process of building software but has also served as the foundation of a new software paradigm: serverless computing.

Rami Vemula

Emergence of Collective Behavior in Large Cellular Automata-Based Multi-agent Systems

We study conditions for the emergence of collective behavior of agents acting in a two-dimensional (2D) Cellular Automata (CA) space, where each agent takes part in a spatial Prisoner’s Dilemma (PD) game. The system is modeled by a 2D CA evolving in discrete moments of time, where each cell-agent changes its state according to the rule currently assigned to it. Rules are initially assigned randomly to cell-agents, but during the iterated game agents may replace their current rules with rules used by their neighbors. While each agent is oriented toward maximizing its own profit in the game, we are interested in answering the question of if and when a phenomenon of global cooperation in a large set of agents is possible. We present results of an experimental study showing the conditions for and degree of such cooperation.
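
A minimal sketch of such a system, under assumptions of our own (Moore neighborhood, classic PD payoffs, imitate-the-best-neighbor rule replacement; not the paper's exact settings), is:

```python
# 2D cellular-automaton spatial Prisoner's Dilemma on a toroidal grid.
import numpy as np

rng = np.random.default_rng(0)
N = 50
grid = rng.integers(0, 2, size=(N, N))        # 1 = cooperate, 0 = defect
R, S, T, P = 3.0, 0.0, 5.0, 1.0               # classic PD payoffs (T > R > P > S)

def step(grid):
    payoff = np.zeros_like(grid, dtype=float)
    shifts = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
              if (di, dj) != (0, 0)]
    for di, dj in shifts:                      # play PD with each Moore neighbor
        nb = np.roll(np.roll(grid, di, axis=0), dj, axis=1)
        payoff += np.where(grid == 1,
                           np.where(nb == 1, R, S),
                           np.where(nb == 1, T, P))
    best_payoff, best_strategy = payoff.copy(), grid.copy()
    for di, dj in shifts:                      # imitate the best-scoring neighbor
        nb_pay = np.roll(np.roll(payoff, di, axis=0), dj, axis=1)
        nb_str = np.roll(np.roll(grid, di, axis=0), dj, axis=1)
        better = nb_pay > best_payoff
        best_payoff = np.where(better, nb_pay, best_payoff)
        best_strategy = np.where(better, nb_str, best_strategy)
    return best_strategy

for t in range(50):
    grid = step(grid)
print("cooperation level:", grid.mean())       # fraction of cooperating agents
```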

Franciszek Seredyński, Jakub Gąsior

RDFM: Resilient Distributed Factorization Machines

Factorization Machines have been successfully applied to recommender systems due to their ability to handle data sparsity and the cold-start problem. Their scalability makes them suitable for producing ever-growing complex predictive models based on Big Data without performance degradation. The algorithm has been scaled to distributed and parallel computation contexts, but in general under the strong assumption that those environments are safe and not subject to arbitrary errors, malicious attacks, and hardware failures. In this work, we show that a distributed average consensus strategy is capable of dealing with unsafe and dynamic learning environments.
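
For background, the standard degree-2 Factorization Machine predicts with factorized pairwise interactions:

```latex
\hat{y}(\mathbf{x}) \;=\; w_0 \;+\; \sum_{i=1}^{n} w_i x_i
\;+\; \sum_{i=1}^{n} \sum_{j=i+1}^{n}
\langle \mathbf{v}_i, \mathbf{v}_j \rangle\, x_i x_j ,
\qquad \mathbf{v}_i \in \mathbb{R}^k .
```

Because each pairwise weight is the inner product of learned factor vectors, interactions can be estimated even for feature pairs that never co-occur in training, which is what makes FMs robust to sparsity and cold starts; this is the general formulation, not a detail specific to RDFM.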

André Rodrigo da Silva, Leonardo M. Rodrigues, Luciana de Oliveira Rech, Aldelir Fernando Luiz

Automotive Meets ICT—Enabling the Shift of Value Creation Supported by European R&D

Digitalization is proving to be a game changer in bridging the gap between heterogeneous skills and markets. It increases productivity through optimization over the entire supply chain and lets new services emerge through the convergence of application domains. In this paper, we review the main automotive trends and highlight how digitalization (especially through information and communication technologies, ICT) is supporting, and even pushing, innovation. We map these trends in particular to the IOT4CPS and SCOTT projects to present key results on the Internet of Things supporting the digital transition in the automotive domain.

Eric Armengaud, Bernhard Peischl, Peter Priller, Omar Veledar

The GDPR and Its Application in Connected Vehicles—Compliance and Good Practices

A symbol of the 20th-century economy, the automobile is one of the mass consumer products that has impacted society as a whole. Commonly associated with the notion of freedom, cars are often considered more than just a means of transportation. Indeed, they represent a private area in which people can enjoy a form of autonomy of decision, without encountering any external interference. Today, as connected vehicles move into the mainstream, such a vision no longer corresponds to reality. In-vehicle connectivity is rapidly expanding from luxury models and premium brands to high-volume midmarket models, and vehicles are now massive data hubs. Not only vehicles, but drivers and passengers too are becoming more and more connected. As a matter of fact, many models launched on the market over the past few years integrate sensors and connected on-board equipment, which may collect and record, among other things, engine performance, driving habits, the locations visited, and potentially even the driver’s eye movements or other biometric data for authentication purposes. Such data processing takes place in a complex ecosystem, which is not limited to the traditional players of the automotive industry but is also shaped by the emergence of new players belonging to the digital economy. Aware of the issues at stake for the protection of motorists’ privacy in this ecosystem, the Commission Nationale de l’Informatique et des Libertés (CNIL)—the French data protection authority—developed a reference framework enabling professionals to comply with the General Data Protection Regulation (GDPR), applicable as of 25 May 2018, thus making compliance simpler and ensuring that users enjoy transparency and control in relation to their data.

Félicien Vallet

Using Commercial Software to Create a Digital Twin

In the manufacturing environment, the Industrial Internet of Things (IIoT) allows machines, products, and processes to communicate with each other to achieve more efficient production. With the growing move to Industry 4.0, increased digitalization is bringing its own unique challenges and concerns to manufacturing. An important component of meeting those challenges is the use of a Digital Twin. A digital twin provides a virtual representation of a product, part, system, or process that allows you to see how it will perform, sometimes even before it exists. A digital twin of an entire manufacturing facility performs in a virtual world very similarly to how the facility performs in the physical world. This broad definition of a digital twin may seem unattainable, but it is not—advanced discrete event simulation products and modeling techniques now make it possible. This chapter will describe the importance of a digital twin and how data-driven and data-generated models, real-time communication, and integral risk analysis based on an advanced DES product can solve many of the challenges and help realize the benefits offered by Industry 4.0. We will illustrate by providing a brief tutorial on building a data-generated model using the Simio DES product.

David T. Sturrock

Chapter 4. More Than One—Artistic Explorations with Multi-agent BCIs

In this chapter, the historical context and relevant scientific, artistic, and cultural milieus from which the idea of brain-computer interfaces involving multiple participants emerged is discussed. Additional contextualization includes descriptions of the intellectual climate from which ideas about brain biofeedback led to pioneering applications in music and its allied arts. The chapter then proceeds with more in-depth explanations of what are termed contingent and non-contingent feedback schemes, along with descriptions of early artistic applications and how those might be differentiated. Effects ensuing from the qualitative nature of the feedback signals in brainwave music are also briefly discussed. Following this, substantial space is devoted to describing selected examples of relatively recent musical and artistic pieces that employ multi-agent BCI. These are described with more extensive technical details that illustrate how the ideas, some of which could only have been imagined in earlier times, are now made possible by advances in available technology and new methods for analyzing brain signals from both individuals and groups. These include: implementing biofeedback schemes in which feedback signals depend upon contingent conditions in electroencephalographic features measured among multiple participants, multivariate principal oscillation pattern detection, “hyper-brain” scanning, employing wearable technology, and other related methods. Complex brain-computer music systems are also described in detail. Key artistic concepts explored include the idea of active imaginative listening as performance and cooperative multi-agent artistic productions with BCIs. Some concluding commentary and ideas for future research are also offered.

David Rosenboom, Tim Mullen

Improving Population Diversity Through Gene Methylation Simulation

During the runtime of many evolutionary algorithms, the diversity of the population starts out high and then rapidly diminishes as the algorithm converges. Diversity directly influences the algorithm’s ability to perform effective exploration of the problem space, so if exploration is required in the latter stages of the run, there may be insufficient diversity to allow for it. This paper proposes an algorithm that better maintains diversity throughout the run, which in turn allows for better exploration during its latter portion.
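
As a minimal sketch of the quantity at stake, the snippet below computes a common population-diversity measure (mean pairwise Hamming distance over binary genomes); the paper's methylation-inspired mechanism itself is not reproduced here.

```python
# Mean pairwise Hamming distance of a binary-encoded population.
import numpy as np

def mean_pairwise_hamming(pop: np.ndarray) -> float:
    """pop: (n_individuals, genome_length) array of 0/1 genes."""
    n = pop.shape[0]
    total, pairs = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += np.count_nonzero(pop[i] != pop[j])
            pairs += 1
    return total / (pairs * pop.shape[1])      # normalized to [0, 1]

rng = np.random.default_rng(1)
pop = rng.integers(0, 2, size=(20, 64))
print(mean_pairwise_hamming(pop))              # ~0.5 for a random population
```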

Michael Cilliers, Duncan A. Coulter

Chapter 2. IoT: Privacy, Security, and Your Civil Rights

This chapter delves into the interrelationship between the Internet of Things (IoT) and our civil rights, particularly, the Fourth Amendment to the US Constitution. The goal is to emphasize the immensity of the work to be done to tackle the issues presented with this great ubiquitous asset. We constantly use it, but it can also be the cause of many grave issues detrimental to the safety of our personal data. Also discussed are the fundamentals of privacy, security, and your civil rights and the different obstacles and opportunities that occur along the way, sometimes leading to litigation. We will explain why it is all about the data, who gets hacked and why, and what precautions and actions you can take as an individual and a company.

Cynthia D. Mares

Chapter 12. Secure Distributed Storage for the Internet of Things

The recent popularity of the Internet of Things (IoT) significantly impacts data storage and protection. The heterogeneous nature of the data generated from IoT devices makes it difficult to design a “one-solution-fits-all” storage system. Current storage solutions include object-based storage for large files like images and videos and flash array-based storage for smaller log files from sensors. The massive amounts of data generated by IoT devices make cloud-based storage a perfect solution for such data. This creates a need for a single but layered cloud storage model that would not only be effective for all types of IoT data but would also provide data privacy while guaranteeing high system availability at all times. In this chapter, we describe applications of distributed storage and the possibility of using a multi-cloud or hybrid cloud storage model to securely store data from IoT devices.

Sinjoni Mukhopadhyay

18. Access Controls in Internet of Things to Avoid Malicious Activity

Connecting to and controlling access in the Internet of Things (IoT) are crucial, since the IoT converges and evolves multiple technologies. Traditional embedded systems, wireless sensor networks, control systems, software-defined radio technologies, and smart systems all contribute to the Internet of Things, and deep learning, data analytics, and consumer applications play an essential role in it. The challenges are to store and process the data and present it in a meaningful form so that it is useful to business, government, and customers. The paper discusses computing the data in a cloud; current challenges in storing, retrieving, and processing it; security requirements; and possible solutions. We further provide a trust framework for the user and access control algorithms for data processing in the cloud environment.

Yenumula B. Reddy

17. Biometric System: Security Challenges and Solutions

The concept of biometric authentication is popular in research and industry. Biometric authentication refers to the measurement and statistical analysis of a human’s biological and behavioral features. Biometric technology is mainly used for authentication, identifying individuals based on their biological traits. Regarding biometric applications, security is the key issue, and it still faces many challenges. To address this, the paper gives a background on the steps of the fingerprint matching algorithm. Moreover, the paper presents a brief overview of different attacks and threats that affect the privacy and security of biometric systems. We then discuss the common schemes that have been used to secure biometric systems. Finally, findings and directions for further research on biometric system security are explored.

Bayan Alzahrani, Fahad Alsolami

81. Strategies Reported in the Literature to Migrate to Microservices Based Architecture

Context: Microservice-oriented architecture relies on the implementation and deployment of small and autonomous microservices, rather than implementing the functionalities in one unique module to be deployed. It has been adopted as a solution to the shortcomings of the monolithic architecture, such as lack of flexibility. Goal: This paper discusses lessons learned and challenges reported in the literature regarding the migration of legacy monolithic software systems to microservices-based architecture. Method: We performed an automated search targeting public repositories to accomplish the stated goal. Results: Based on the evidence provided by 12 studies, we classified the main findings into lessons learned related to the migration, as well as associated difficulties and challenges. Conclusions: The guidelines for migrating to microservices-based architecture are maturing and evolving, and the literature has pinpointed issues that deserve further investigation.

Heleno Cardoso da Silva Filho, Glauco de Figueiredo Carneiro

35. Smart Food Security System Using IoT and Big Data Analytics

The agriculture sector is facing major challenges to enhance production in a situation of dwindling natural resources. The growing demand for agricultural products, however, also offers opportunities for producers to sustain and improve productivity and to reduce waste in the supply chain. Recent advancements in information and communication technologies (ICT) show promise in addressing these challenges. This paper proposes a food security architecture for agricultural supply chain efficiency using ICT tools such as the Internet of Things and Big Data analytics. To avoid losses in agriculture, the food security architecture is developed through a Smart Agribusiness Supply Chain Management System. This can uplift the livelihoods of the rural poor through the enhancement of agribusiness.

Sazia Parvin, Sitalakshmi Venkatraman, Tony de Souza-Daw, Kiran Fahd, Joanna Jackson, Samuel Kaspi, Nicola Cooley, Kashif Saleem, Amjad Gawanmeh

3. Detecting and Preventing File Alterations in the Cloud Using a Distributed Collaborative Approach

Cloud Computing is the new trend, and this brings new issues and challenges in cyber security, since a lot of the data from companies and users is available through the Internet. It is not only about the data but also about the applications that run in the Cloud, which could be compromised, affecting the service to thousands or millions of users, who may also have their local systems under siege through the exploitation of security flaws in the Cloud. We propose an algorithm that can detect unauthorized modifications to any of the files kept under custody. It is a collaborative system in which a set of nodes participates, also gaining reputation points. This is a lightweight algorithm requiring a time complexity of only O(n) hashings in total. Our algorithm implements a technique to avoid the Hash Value Manipulation Attack, a kind of man-in-the-middle attack used to replace hash values. Any unauthorized modification of a file is detected and reported without the need for a third-party auditor, which is another advantage.
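
A minimal sketch of the core idea, hashing each file once (O(n) hashings) and comparing against trusted baseline digests, is below; the collaborative, reputation-based protocol and the defense against the Hash Value Manipulation Attack are not reproduced here, and the directory name is hypothetical.

```python
# Detect file alterations by recomputing SHA-256 digests against a baseline.
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()

def baseline(paths):
    return {str(p): digest(p) for p in paths}          # trusted reference digests

def altered(paths, reference):
    return [str(p) for p in paths if digest(p) != reference[str(p)]]

files = list(Path("custody").glob("*"))                # hypothetical directory
ref = baseline(files)                                  # ...stored safely off-node
print(altered(files, ref))                             # [] while nothing changed
```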

José Antonio Cárdenas-Haro, Maurice Dawson Jr.

22. Decentralizing Rehabilitation: Using Blockchain to Store Exoskeletons’ Movement

During the second semester of 2018, at the Brazilian Aeronautics Institute of Technology (Instituto Tecnologico de Aeronautica – ITA), a successful Collaborative Interdisciplinary Problem-Based Learning (Co-IPBL) experience took place. At that time, more than 20 undergraduate and graduate students from 3 different courses, within just 17 academic weeks, had the opportunity to conceptualize, model, develop, and test a Computer System involving multiple actors (Patients, Doctors, Hospitals, and Suppliers) for real-time decision making in the exoskeleton-assisted rehabilitation of patients suffering from Lower Limb Impairment after motorcycle accidents. Unlike other existing products from universities, research centers, governmental agencies, and other public and/or private companies, this product was developed using the best practices of the Agile Scrum Method, along with emerging Information Technologies (ITs) such as Blockchain Hyperledger and the Internet of Things (IoT), among others. This Co-IPBL was performed with the participation of a rehabilitation medical team from the Hospital of Clinics at the Faculty of Medicine of the University of Sao Paulo (HC-FMUSP). The experience described in this paper illustrates a way of dealing with the multiple challenges involved in teaching, learning, designing, and implementing complex intelligent systems to address health care issues, with collaborative work involving multidisciplinary teams facing real-life problems such as exoskeletons applied to the clinical recovery of patients.

Daniela America da Silva, Claudio Augusto Silveira Lelis, Luiz Henrique Coura, Samara Cardoso dos Santos, Leticia Yanaguya, Jose Crisostomo Ozorio Junior, Isaias da Silva Tiburcio, Gildarcio Sousa Goncalves, Breslei Max Reis da Fonseca, Alexandre Nascimento, Johnny Cardoso Marques, Luiz Alberto Vieira Dias, Adilson Marques da Cunha, Paulo Marcelo Tasinaffo, Thais Tavares Terranova, Marcel Simis, Pedro Claudio Gonsales de Castro, Linamara Rizzo Battistella