
Big Data

Further book chapters

Chapter 9. Internet of Things (IoT) and Machine to Machine (M2M) Communication

Kevin Ashton, from the Auto-ID Center at MIT, first coined the term Internet of Things in 1999. IoT is the convergence of wired technologies, wireless communications, micro-electromechanical devices, microservices software systems built from small single-function modules, and Internet connectivity. IoT is essentially a network of connected physical objects that communicate with each other through the Internet.

Ramjee Prasad, Vandana Rohokale

Chapter 16. Artificial Intelligence and Machine Learning in Cyber Security

Artificial Intelligence (AI) is intelligence exhibited by machines, in contrast to the natural intelligence displayed by humans; it is also the study of how we can create intelligent machines that work and react like humans. The term AI is used when a machine behaves like a human in activities such as problem-solving or learning, the latter also known as Machine Learning. There are many applications of AI that we use in our day-to-day lives without even knowing it, such as Siri, Alexa, self-driving cars, robotics, and gaming.

Ramjee Prasad, Vandana Rohokale

Chapter 18. Research Challenges and Future Scope

Cyber security offers several open research challenges as well as future scope. ICT and cyber systems contain various flaws, such as code-level defects, lapses in trustworthy computing, selfish behavior of users and machines, security architecture imperfections, privacy shortcomings, usability issues, and weak security metrics. Overcoming these flaws with effective solutions is a challenging task. Key research challenges and the future scope are discussed subject-wise in this chapter.

Ramjee Prasad, Vandana Rohokale

Chapter 1. Introduction

Digitization is becoming the basis of future development in our society and economy. In today’s world, people, devices, and machines are networked through wired or wireless means. This is the era of the Internet and smart mobile devices. The Internet has touched every human being and has changed the way we perform everyday activities such as working, playing, shopping, watching movies and series, talking on the phone, listening to our favorite music, ordering food, paying bills, making friends, and greeting our friends and relatives on their special occasions. Due to this anytime, anywhere connectivity, every user and object can be tracked through its IP address. At this point, users cannot stop using the Internet, but they expect it to be secure, privacy-preserving, and trustworthy.

Ramjee Prasad, Vandana Rohokale

Chapter 16. Procedures with Incomplete Information

As mentioned at the beginning of the chapter, an obvious shortcoming of the forecasting procedure described so far, when applied to real situations, is that it makes use of complete knowledge of the entire space of agents and their interactions. To test the strength of the results against incomplete information, a first attempt has been to introduce an error into the interaction matrix used for the mean-field treatment. This represents the situation in which an observer has to measure the interactions between agents and does so with an error. This is possibly the biggest problem one would have to overcome when trying to describe real systems. As we will see, the forecasting method has proven itself to be quite robust, yielding similar results in both models even in the presence of non-negligible errors.

Duccio Piovani, Jelena Grujić, Henrik J. Jensen

Efficient Resource Utilization Using Blockchain Network for IoT Devices in Smart City

With the rapid increase in the use of technology, the world is now moving towards smart cities, which require communication and collaboration among Internet-of-Things (IoT) devices. The smart city enhances the use of technology to share information and data among devices. These devices produce a huge volume of data that needs to be handled carefully. Different works have already been proposed to provide communication in a network for IoT devices; however, none has been found more effective in terms of resource utilization. A hybrid network architecture combines centralized and distributed network architectures: the centralized network is used for communication between IoT devices and edge nodes, and the distributed network for communication between miner nodes and edge nodes. In this way, the network utilizes a lot of resources. In this paper, we propose a single network that combines both edge nodes and miner nodes. Blockchain is also implemented in this network to provide secure communication between the devices. The proposed model is evaluated using performance parameters such as time and cost against the number of devices. A limited number of devices is used for this evaluation. Furthermore, the results are obtained using the Proof-of-Work consensus mechanism.

Muhammad Zohaib Iftikhar, Muhammad Sohaib Iftikhar, Muhammad Jawad, Annas Chand, Zain Khan, Abdul Basit Majeed Khan, Nadeem Javaid
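The abstract above evaluates its model under Proof-of-Work. As a rough illustration of what that consensus mechanism involves, here is a minimal, generic Proof-of-Work loop in Python; the difficulty value and block fields are illustrative assumptions, not taken from the paper.

```python
import hashlib
import json
import time

def mine_block(transactions, prev_hash, difficulty=4):
    """Find a nonce such that the block's SHA-256 digest starts with
    `difficulty` hex zeros -- the core idea of Proof-of-Work."""
    header = {"transactions": transactions, "prev_hash": prev_hash,
              "timestamp": time.time()}
    nonce = 0
    while True:
        payload = json.dumps({**header, "nonce": nonce}, sort_keys=True).encode()
        digest = hashlib.sha256(payload).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

# e.g. an IoT reading forwarded by an edge node (illustrative data)
nonce, digest = mine_block(["device42->edge1: reading=17.3"], prev_hash="0" * 64)
print(nonce, digest)
```

Raising `difficulty` by one multiplies the expected mining work by sixteen, which is why the paper's time-versus-device-count evaluation matters.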

Enhanced Decentralized Management of Patient-Driven Interoperability Based on Blockchain

In healthcare, interoperability has recently been a focus, with the Electronic Health Record (EHR) becoming patient-centric. However, patient-centered interoperability brings new challenges and requirements, such as security and privacy, advanced technology, immutability, transparency, and trust among applications. Healthcare data is an asset of the patient that must be controlled and owned by the patient. In this paper, we propose blockchain-based patient-driven interoperability and discuss how blockchain can be leveraged. Blockchain facilitates data liquidity, data immutability, data aggregation, patient identity, digital access rules, incentives, and clinical data volume. Our system provides patients with an immutable log and easy access to their health data across healthcare organizations. Furthermore, patients authorize healthcare organizations to access their health data. Stakeholders (patients and healthcare organizations) of EHRs are also incentivized when an organization wants to access their health data.

Asad Ullah Khan, Affaf Shahid, Fatima Tariq, Abdul Ghaffar, Abid Jamal, Shahid Abbas, Nadeem Javaid

Clustering Analysis and Visualization of TCM Patents Based on Deep Learning

In the process of medicine innovation, pharmaceutical enterprises tend to actively seize the intellectual-property high ground. They engage in research and development independently, apply for patents for core technologies, or take the initiative to acquire patents from others. Before applying for patents through their own efforts or purchasing patents from others, pharmaceutical companies need to search for related patents in the patent pool and analyze them comparatively, in order to find blank technology areas as R&D objectives, or to find valuable patents as potential acquisition targets. In this paper, we use deep learning technology and propose a semantics-based clustering algorithm for Traditional Chinese Medicine (TCM) patents, discarding the traditional literal-based text clustering method. We also give a visualization method for TCM patents, so as to help pharmaceutical enterprises intuitively understand the relevant patents.

Na Deng, Xu Chen, Caiquan Xiong
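The abstract contrasts literal (keyword-based) clustering with semantic clustering. A minimal sketch of the semantic approach: embed each patent abstract as a dense vector and cluster the vectors. The embedding model and cluster count below are illustrative assumptions, not the authors' actual pipeline.

```python
# pip install sentence-transformers scikit-learn
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

patents = [
    "A decoction of honeysuckle and forsythia for clearing heat...",
    "A pill combining ginseng and astragalus for tonifying qi...",
    "A granule of isatis root for treating sore throat...",
]

# Any multilingual sentence-embedding model would do; this choice is an assumption.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
vectors = model.encode(patents)          # semantic vectors, not keyword counts

kmeans = KMeans(n_clusters=2, n_init="auto").fit(vectors)
print(kmeans.labels_)                    # cluster id per patent
```

Because the vectors encode meaning rather than surface wording, two patents using different herb names for the same therapeutic idea can land in the same cluster, which keyword matching would miss.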

Web Version of IntelligentBox (WebIB) and Its Extension for Web-Based VR Applications - WebIBVR

This paper presents a 3D graphics software development system called IntelligentBox and its web version, WebIB. Originally, IntelligentBox was implemented as a development system for desktop 3D graphics applications. It provides various 3D software components called boxes, each of which has a unique functionality and a 3D visible shape. IntelligentBox also provides a dynamic data linkage mechanism called slot connection that allows users to develop interactive 3D graphics applications only by combining already existing boxes through direct manipulation on a computer screen. Ten years ago, the author extended the IntelligentBox system to make possible the development of web-based 3D graphics applications; this extended version is called WebIB. This time, the author has further extended WebIB to make possible the development of web-based VR (Virtual Reality) applications; this version is called WebIBVR. In this paper, the author explains several new functionalities of WebIBVR and introduces use cases of web-based VR applications.

Yoshihiro Okada

Design and Construction of Intelligent Decision-Making System for Marine Protection and Law Enforcement

Marine protection is closely related to the sustainable development of mankind. At present, the China Coast Guard, an important functional department for marine protection in China, relies mainly on manual work in the process of marine protection and law enforcement, which is inefficient and risky. To solve this problem, this paper presents a design framework for an intelligent decision-making system for marine protection and law enforcement. Using the Internet of Things, Artificial Intelligence, Natural Language Processing, and other advanced technologies, the decision-making system can automatically push intelligent punishment measures for marine protection and law enforcement through automatic matching against maritime laws and regulations. The design idea of this paper can also be extended to assistant decision-making in other domains.

Na Deng, Xu Chen, Caiquan Xiong

Trusted Remote Patient Monitoring Using Blockchain-Based Smart Contracts

With the increase in the development of the Internet of Things (IoT), people have started using medical sensors for health monitoring purposes. The huge amount of health data generated by these sensors must be recorded and conveyed securely so that appropriate measures can be taken when patients are in critical condition. Additionally, the privacy of users’ personal information must be preserved, and health records must be stored securely. Possession details of IoT devices must be stored electronically to eradicate counterfeit actions. The emerging blockchain is a distributed and transparent technology that provides a trusted and unalterable log of transactions. We have built a healthcare system using blockchain-based smart contracts that supports the enrollment of patients and doctors in a health center, thereby increasing user participation in remote patient monitoring. Our system monitors patients at distant places and generates alerts in case of emergency. We have used smart contracts for the authorization of devices, providing a legalized and secure way of using medical sensors. Using blockchain technology, forgery and privacy hacks in healthcare settings are reduced, thereby increasing people’s trust in remote monitoring. We have provided a graphical comparison of costs that verifies the successful deployment of the contracts.

Hafiza Syeda Zainab Kazmi, Faiza Nazeer, Sahrish Mubarak, Seemab Hameed, Aliza Basharat, Nadeem Javaid
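The system described above generates alerts when sensor readings indicate an emergency. Below is a minimal Python sketch of such a monitoring rule with made-up thresholds; the paper's actual logic lives in blockchain smart contracts and is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    patient_id: str
    heart_rate: int        # beats per minute
    temperature: float     # degrees Celsius

def check_alert(r: Reading) -> str | None:
    """Return an alert message for out-of-range vitals (thresholds are illustrative)."""
    if not 40 <= r.heart_rate <= 130:
        return f"ALERT {r.patient_id}: abnormal heart rate {r.heart_rate} bpm"
    if r.temperature >= 39.0:
        return f"ALERT {r.patient_id}: high temperature {r.temperature} C"
    return None

print(check_alert(Reading("patient-7", heart_rate=150, temperature=37.0)))
```

In the paper's setting, a rule like this would run inside a contract so that the alert and the reading that triggered it are both written to the tamper-evident log.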

A Multi-sensor Based Physical Condition Estimator for Home Healthcare

According to the WHO (World Health Organization) and UNSD (United Nations Statistics Division) definitions, when the percentage of elderly people (65 years of age or older) in the population exceeds 7%, it becomes an “aging society”; if it exceeds 14%, an “aged society”; and if it exceeds 21%, a “super-aged society”. Some developed countries are becoming super-aged societies, which raises various problems for medical services and health management. To solve these problems, it is desirable for all generations, including the elderly, to take the initiative in maintaining their own health. In this paper, we propose a system aimed at enabling everyone to actively manage their health. The system constantly monitors and accumulates the biological information of the subject using various contact and non-contact sensors. By analyzing these data in an integrated manner, the subject can easily recognize changes in their physical condition. The system also promotes the provision of information to remote healthcare professionals when people receive healthcare at home.

Toshiyuki Haramaki, Hiroaki Nishino
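The WHO/UNSD thresholds quoted above map directly onto a classification rule; a one-function Python rendering:

```python
def society_type(elderly_share: float) -> str:
    """Classify a society by the share of people aged 65+ (WHO/UNSD thresholds)."""
    if elderly_share > 21.0:
        return "super-aged society"
    if elderly_share > 14.0:
        return "aged society"
    if elderly_share > 7.0:
        return "aging society"
    return "below aging-society threshold"

print(society_type(28.0))  # a 28% elderly share -> "super-aged society"
```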

News Collection and Analysis on Public Political Opinions

With the fast development of news media and freedom of speech in Taiwan, some news is not objectively reported. In fact, in order to attract people’s attention and increase the click rates of news, many journalists do not convey the exact meaning of the news, even distorting it or adding subjective criticisms and opinions. As a result, confusing news reports appear one after another. Based on the analysis of political opinion news, this study analyzes certain political figures, such as candidates during a certain period of time, for example, an election period. Last year (2018), the Kaohsiung city mayor election was held in December. We developed a news gathering and analysis scheme, named the Focused News Collection and analytical System (FNCaS), which predicts which candidate might be the winner by analyzing the gathered news with big data analysis techniques and deep learning approaches. The purpose is to reduce the time readers need to absorb news essentials and to deliver the possible results of the analyses immediately, aiming to improve the efficiency with which people access news content and understand the implications behind it. Our conclusion is that the FNCaS is capable of collecting news immediately and analyzing large amounts of news in focused domains efficiently.

Zhi-Qian Hong, Fang-Yie Leu, Heru Susanto

A General Framework for Multiple Choice Question Answering Based on Mutual Information and Reinforced Co-occurrence

As a result of the continuously growing volume of available information, browsing and querying textual information in search of specific facts is currently a tedious task, exacerbated by the reality that data presentation very often does not meet the needs of users. To satisfy these ever-increasing needs, we have designed an adaptive and intelligent solution for the automatic answering of multiple-choice questions based on the concept of mutual information. An empirical evaluation over a number of general-purpose benchmark datasets seems to indicate that this solution is promising.

Jorge Martinez-Gil, Bernhard Freudenthaler, A Min Tjoa
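A minimal sketch of mutual-information-based answer selection in the spirit of the abstract: score each candidate answer by its pointwise mutual information (PMI) with the question's content words, estimated from co-occurrence counts in a corpus. The toy corpus, tokenization, and add-one smoothing are illustrative assumptions, not the authors' method.

```python
import math

corpus = [
    "the sun is a star at the center of the solar system",
    "the moon orbits the earth",
    "a star emits its own light",
]
docs = [set(doc.split()) for doc in corpus]

def pmi(w1: str, w2: str) -> float:
    """Pointwise mutual information from document co-occurrence, add-one smoothed."""
    n = len(docs)
    p1 = (sum(w1 in d for d in docs) + 1) / (n + 1)
    p2 = (sum(w2 in d for d in docs) + 1) / (n + 1)
    p12 = (sum(w1 in d and w2 in d for d in docs) + 1) / (n + 1)
    return math.log(p12 / (p1 * p2))

def answer(question_words, candidates):
    """Pick the candidate with the highest total PMI with the question words."""
    return max(candidates, key=lambda c: sum(pmi(c, q) for q in question_words))

print(answer(["sun"], ["star", "moon"]))  # -> "star"
```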

Chapter 2. Basic Concepts of Urban and Innovative Mobility in the Context of the Sharing Economy

Mobility behavior makes it possible, based on the status quo and retrospective comparisons, to derive scenarios for the mobility of the future. This chapter thereby forms the basis for the subsequent acceptance research, in which mobility requirements and the extent to which mobility needs are satisfied (acceptance) are derived. At the same time, this chapter lays the groundwork for deriving mobility requirements for a sustainable concept and for establishing the need for new, innovative mobility concepts.

Wiebke Geldmacher

Semantically-Enabled Optimization of Digital Marketing Campaigns

Digital marketing is a domain where data analytics is a key factor in gaining competitive advantage and return on investment for companies running and monetizing digital marketing campaigns on, e.g., search engines and social media. In this paper, we propose an end-to-end approach to enrich marketing campaign performance data with third-party event data (e.g., weather event data) and to analyze the enriched data in order to predict the effect of such events on campaign performance, with the final goal of enabling advanced optimization of the impact of digital marketing campaigns. The use of semantic technologies is central to the proposed approach: event data are made available in a format more amenable to enrichment and analytics, and the actual data enrichment technique is based on semantic data reconciliation. The enriched data are represented as Linked Data and managed in a NoSQL database to enable processing of large amounts of data. We report on the development of a pilot to build a weather-aware digital marketing campaign scheduler for JOT Internet Media, a world-leading company in the digital marketing domain that has amassed a huge amount of campaign performance data over the years; the scheduler predicts the best date and region to launch a marketing campaign within a seven-day timespan. Additionally, we discuss the benefits and limitations of applying semantic technologies to deliver better optimization strategies and competitive advantage.

Vincenzo Cutrona, Flavio De Paoli, Aljaž Košmerlj, Nikolay Nikolov, Matteo Palmonari, Fernando Perales, Dumitru Roman
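At its core, the enrichment step described above joins campaign performance records with third-party event data on shared keys such as date and region. A toy pandas rendering of that idea follows; the column names and data are invented, and the actual pipeline uses semantic data reconciliation over Linked Data rather than a plain key join.

```python
import pandas as pd

campaigns = pd.DataFrame({
    "date":   ["2019-06-01", "2019-06-02"],
    "region": ["Madrid", "Madrid"],
    "clicks": [1200, 450],
})
weather = pd.DataFrame({
    "date":   ["2019-06-01", "2019-06-02"],
    "region": ["Madrid", "Madrid"],
    "event":  ["sunny", "storm"],
})

# Enrich performance data with weather events on (date, region); the enriched
# table can then feed a model predicting the effect of events on performance.
enriched = campaigns.merge(weather, on=["date", "region"], how="left")
print(enriched)
```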

Sparklify: A Scalable Software Component for Efficient Evaluation of SPARQL Queries over Distributed RDF Datasets

One of the key traits of Big Data is its complexity in terms of representation, structure, and formats. One existing way to deal with it is offered by Semantic Web standards. Among them, RDF, which models data as triples representing edges in a graph, has achieved great success, and semantically annotated data has grown steadily towards a massive scale. Therefore, there is a need for scalable and efficient query engines capable of retrieving such information. In this paper, we propose Sparklify: a scalable software component for efficient evaluation of SPARQL queries over distributed RDF datasets. It uses Sparqlify as a SPARQL-to-SQL rewriter for translating SPARQL queries into Spark executable code. Our preliminary results demonstrate that our approach is more extensible, efficient, and scalable as compared to state-of-the-art approaches. Sparklify is integrated into the larger SANSA framework, where it serves as the default query engine, and it has been used in at least three external use scenarios.
Resource type: Software Framework. Website: http://sansa-stack.net/sparklify/ Permanent URL: https://doi.org/10.6084/m9.figshare.7963193

Claus Stadler, Gezim Sejdiu, Damien Graux, Jens Lehmann
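Sparklify itself rewrites SPARQL to SQL via Sparqlify on the JVM; the PySpark toy below only illustrates the underlying idea of evaluating a basic graph pattern as SQL over a triples table. All data and names here are invented.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("triples-as-sql").getOrCreate()

# A tiny RDF graph as (subject, predicate, object) rows.
triples = spark.createDataFrame(
    [(":alice", ":knows", ":bob"), (":bob", ":knows", ":carol"),
     (":alice", ":name", '"Alice"')],
    ["s", "p", "o"],
)
triples.createOrReplaceTempView("triples")

# SPARQL pattern { ?x :knows ?y . ?y :knows ?z } rewritten as a self-join in SQL.
result = spark.sql("""
    SELECT t1.s AS x, t2.o AS z
    FROM triples t1 JOIN triples t2 ON t1.o = t2.s
    WHERE t1.p = ':knows' AND t2.p = ':knows'
""")
result.show()  # alice knows someone who knows carol
```

The payoff of the rewriting approach is that the join is then executed by Spark's distributed SQL engine rather than by a bespoke SPARQL evaluator.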

A Scalable Framework for Quality Assessment of RDF Datasets

Over recent years, Linked Data has grown continuously. Today, we count more than 10,000 datasets available online following Linked Data standards. These standards allow data to be machine readable and interoperable. Nevertheless, many applications, such as data integration, search, and interlinking, cannot take full advantage of Linked Data if it is of low quality. A few approaches exist for the quality assessment of Linked Data, but their performance degrades with increasing data size and quickly grows beyond the capabilities of a single machine. In this paper, we present DistQualityAssessment, an open-source implementation of quality assessment for large RDF datasets that can scale out to a cluster of machines. This is the first distributed, in-memory approach for computing different quality metrics for large RDF datasets using Apache Spark. We also provide a quality assessment pattern that can be used to generate new scalable metrics applicable to big data. The work presented here is integrated with the SANSA framework and has been applied to at least three use cases beyond the SANSA community. The results show that our approach is more generic, efficient, and scalable as compared to previously proposed approaches.
Resource type: Software Framework. Website: http://sansa-stack.net/distqualityassessment/ Permanent URL: https://doi.org/10.6084/m9.figshare.7930139

Gezim Sejdiu, Anisa Rula, Jens Lehmann, Hajira Jabeen
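As a flavor of what a distributed quality metric looks like, here is a hedged PySpark sketch computing one simple metric: the share of triples whose subject is an HTTP(S) IRI rather than a blank node. This is not the DistQualityAssessment code, just an illustration of the scale-out pattern over a triples DataFrame.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("rdf-quality").getOrCreate()

triples = spark.createDataFrame(
    [("http://example.org/a", "http://xmlns.com/foaf/0.1/name", "Alice"),
     ("_:b0", "http://xmlns.com/foaf/0.1/name", "Anonymous")],
    ["s", "p", "o"],
)

# Metric: fraction of triples whose subject looks like a dereferenceable IRI.
ratio = triples.select(
    F.avg(F.col("s").rlike("^https?://").cast("double")).alias("http_subject_ratio")
)
ratio.show()  # 0.5 for the toy data
```

Because the metric is an aggregation over independent rows, Spark can evaluate it in parallel across a cluster, which is the property that lets such metrics scale to large datasets.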

Squerall: Virtual Ontology-Based Access to Heterogeneous and Large Data Sources

The last two decades witnessed a remarkable evolution in terms of data formats, modalities, and storage capabilities. Instead of having to adapt one’s application needs to the earlier, more limited storage options, today there is a wide array of options to choose from to best meet an application’s needs. This has resulted in vast amounts of data available in a variety of forms and formats which, if interlinked and jointly queried, can generate valuable knowledge and insights. In this article, we describe Squerall: a framework that builds on the principles of Ontology-Based Data Access (OBDA) to enable the querying of disparate heterogeneous sources using a single query language, SPARQL. In Squerall, original data is queried on the fly without prior data materialization or transformation. In particular, Squerall allows the aggregation and joining of large data in a distributed manner. Squerall supports five data sources out of the box and, moreover, can be programmatically extended to cover more sources and incorporate new query engines. The framework provides user interfaces for the creation of the necessary inputs, as well as for guiding non-SPARQL experts in writing SPARQL queries. Squerall is integrated into the popular SANSA stack and available as open-source software via GitHub and as a Docker image.
Resource type: Software Framework. Website: https://eis-bonn.github.io/Squerall

Mohamed Nadjib Mami, Damien Graux, Simon Scerri, Hajira Jabeen, Sören Auer, Jens Lehmann
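The on-the-fly, no-materialization idea described above can be miniaturized as follows: wrap two differently formatted sources behind one relational view and join them only at query time. This pandas toy stands in for Squerall's distributed machinery; the sources and mappings are invented.

```python
import io
import json
import pandas as pd

# Source 1: a CSV product catalogue (would normally be a file or Cassandra table).
csv_src = io.StringIO("product_id,name\n1,lamp\n2,desk\n")
products = pd.read_csv(csv_src)

# Source 2: JSON order records (would normally be MongoDB documents).
orders = pd.DataFrame(
    json.loads('[{"product_id": 1, "qty": 3}, {"product_id": 2, "qty": 1}]')
)

# Query-time join across heterogeneous sources, with no prior materialization:
# conceptually what a SPARQL join over ontology-mapped entities does in OBDA.
print(products.merge(orders, on="product_id"))
```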

The Microsoft Academic Knowledge Graph: A Linked Data Source with 8 Billion Triples of Scholarly Data

In this paper, we present the Microsoft Academic Knowledge Graph (MAKG), a large RDF data set with over eight billion triples containing information about scientific publications and related entities, such as authors, institutions, journals, and fields of study. The data set is licensed under the Open Data Commons Attribution License (ODC-By). By providing the data as RDF dump files as well as a data source in the Linked Open Data cloud with resolvable URIs and links to other data sources, we bring a vast amount of scholarly data to the Web of Data. Furthermore, we provide entity embeddings for all 210 million represented publications. We facilitate a number of use case scenarios, particularly in the field of digital libraries, such as (1) entity-centric exploration of papers, researchers, affiliations, etc.; (2) data integration tasks using RDF as a common data model and links to other data sources; and (3) data analysis and knowledge discovery of scholarly data.

Michael Färber
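Since the MAKG is published as RDF dumps and Linked Data, it can be queried like any RDF source. The rdflib sketch below runs a vocabulary-agnostic SPARQL query over a small local excerpt; the file name is a placeholder, and no MAKG-specific property IRIs are assumed.

```python
from rdflib import Graph

g = Graph()
g.parse("makg_excerpt.nt", format="nt")   # hypothetical local dump excerpt

# Count entities per rdf:type -- works on any RDF dump without knowing its schema.
query = """
    SELECT ?type (COUNT(?s) AS ?n)
    WHERE { ?s a ?type }
    GROUP BY ?type ORDER BY DESC(?n)
"""
for entity_type, count in g.query(query):
    print(entity_type, count)
```

At the full eight-billion-triple scale one would use a triple store or a distributed engine instead of rdflib; the query itself stays the same.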

Datalog Materialisation in Distributed RDF Stores with Dynamic Data Exchange

Several centralised RDF systems support datalog reasoning by precomputing and storing all logically implied triples using the well-known seminaïve algorithm. Large RDF datasets often exceed the capacity of centralised RDF systems, and a common solution is to distribute the datasets in a cluster of shared-nothing servers. While numerous distributed query answering techniques are known, distributed seminaïve evaluation of arbitrary datalog rules is less understood. In fact, most distributed RDF stores either support no reasoning or can handle only limited datalog fragments. In this paper, we extend the dynamic data exchange approach for distributed query answering by Potter et al. [13] to a reasoning algorithm that can handle arbitrary rules while preserving important properties such as nonrepetition of inferences. We also show empirically that our algorithm scales well to very large RDF datasets.

Temitope Ajileye, Boris Motik, Ian Horrocks
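The seminaïve algorithm mentioned above avoids re-deriving known facts by joining only the newly derived delta with the base relations each round. A minimal single-machine Python version for the transitive-closure rule reach(x, z) <- reach(x, y), edge(y, z); the distributed, dynamic-data-exchange version in the paper is substantially more involved.

```python
def seminaive_closure(edges: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Transitive closure via seminaive evaluation: each round joins only
    the freshly derived facts (the delta) with the base edges."""
    reach = set(edges)
    delta = set(edges)
    while delta:
        derived = {(x, z) for (x, y1) in delta for (y2, z) in edges if y1 == y2}
        delta = derived - reach        # keep only genuinely new facts
        reach |= delta
    return reach

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(seminaive_closure(edges)))  # includes ("a", "d")
```

Restricting the join to the delta is exactly the nonrepetition-of-inferences property the paper works to preserve in the distributed setting.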

Algorithms, Data, and Ethics

A Contribution to Paper Machine Ethics

The information-technological realizations of algorithms have long since penetrated all areas of modern life. Algorithms are praised as a panacea or blamed as the main culprits of “digital immaturity”. We first consider algorithms entirely without digital technology, looking at certain kinds of texts that exert a kind of compulsion on the human mind, the so-called paper machine codes. This consideration of the ethics of algorithms is then combined with an ethical examination of the (input) data, with particular attention to the signature of a person. Remarks on mind machines and useless machines follow, closing with an outlook on the conditions for the possibility of maturity in the age of the Internet of Things.

Stefan Ullrich

Care Robots from the Perspective of Machine Ethics

Surgical and therapy robots are used in many healthcare facilities. Care robots are only gradually spreading. Surgical robots raise hardly any questions from the perspective of machine ethics, since they are mostly telerobots, while this discipline is devoted to semi-autonomous and autonomous machines. Therapy robots pose challenges in theory; however, they are usually extremely limited in their field of application and in their capabilities. Care robots appear as information and entertainment robots, as transport robots, and as assistance robots with direct physical contact with the patient. Especially with regard to the latter variant, it is important to involve machine ethics at an early stage and to investigate whether moralizing is useful, sensible, and necessary. It must be clarified, for instance, how far a care robot may comply with the wishes of patients, particularly when appropriate treatment or life and death are at stake. This contribution explains and classifies care robots and treats them from the perspective of machine ethics.

Oliver Bendel

Machine Ethics and Artificial Intelligence

The field of Artificial Intelligence (AI) has experienced ideological and financial highs and lows. Scientists repeatedly predict the downfall of humanity through the technology it has developed, a paradigm that forms the basis of a constant narrative throughout human history. While the fear of the technological singularity remains fiction today, more realistic questions of high relevance to machine ethics arise: With what moral standards are machines developed? How are these developments morally evaluated? And who bears responsibility for moral problems?

Leonie Seng

Open Access

Chapter 1. Fragility and Innovations in Data Collection

Rapidly changing situations in fragile countries increase the need for up-to-date information to inform decision makers. Yet fragile countries are among the most data-deprived. Moreover, many, though not all, are affected by insecurity, which makes collecting new information particularly challenging. This book presents innovations in data collection developed with decision makers in fragile situations in mind.

Johannes Hoogeveen, Utz Pape

Chapter 1. IoT Technologies and Applications

Internet of Things (IoT) is a technology that aims at providing connectivity for anything, by embedding short-range mobile transceivers into a wide array of additional gadgets and everyday items, enabling new forms of communication between people and things, and between things themselves. This chapter reviews key IoT technologies and several applications, which include not only simple data sensing, collection, and representation, but also complex information extraction and behavior analysis. As 5G mobile networks are beginning to be commercially deployed worldwide, intelligent IoT applications and services are getting more and more important and popular in different business sectors and industrial domains, thanks to more communication bandwidth, better data quality, faster data rate, denser network connectivity, lower transmission latency, and higher system reliability.

Yang Yang, Xiliang Luo, Xiaoli Chu, Ming-Tuo Zhou

Chapter 4. Fog-Enabled Multi-Robot System

Robots are now widely used in many fields, such as earthquake rescue and smart factories. They bring us much convenience in daily life, save enormous manpower in factories, and help to complete tasks that would otherwise be nearly impossible. For these applications, simultaneous localization and mapping (SLAM), efficient management, and collaboration among robots are necessary. However, these robot applications may suffer from high cost, large power consumption, and low efficiency. An effective solution is to employ fog computing. This chapter introduces fog-enabled solutions for robot SLAM, multi-robot smart factory, and multi-robot fleet formation applications, which require large local computing power for timely construction of the map of a working environment, calculation of multiple robots’ exact positions, and tracking of their movement postures and orientations. Through a high-speed wireless network, massive data and images collected by onboard and local sensors are transmitted from the robots and intelligent infrastructure to nearby fog nodes, where intelligent data processing algorithms are responsible for analyzing valuable information and deriving the results in real time.

Yang Yang, Xiliang Luo, Xiaoli Chu, Ming-Tuo Zhou

Chapter 6. Fog-Enabled Intelligent Transportation System

An intelligent transportation system (ITS) helps to improve traffic efficiency and ensure traffic safety. The core of such a system is the collection and analysis of sensor data and vehicle communication technologies. The challenges of ITS mainly concern two aspects, computing and communication, while security and interoperability are prerequisites of the system. Existing network architectures and communication technologies still cannot meet the demand for advanced intelligent driving support and the rapid development of intelligent transportation. As an emerging concept, fog computing has been proposed for various IoT scenarios and can address the challenges in intelligent transportation systems. Fog computing enables the critical functions of ITS by collaborating, cooperating, and utilizing the resources of underlying infrastructures within roads, smart highways, and smart cities. Fog computing will address the technical challenges in ITS and will help scale the deployment environment to billions of personal and commercial smart vehicles. In this chapter, we first introduce the definition and development of ITS, describing the composition of its ecosystem and the respective requirements. Then, we explain the challenges and the state of the art of ITS, focusing mainly on the vehicle station and the communication network. To present fog computing, the architecture of fog-enabled ITS is provided, and we discuss how fog computing can address the technical challenges and provide strong support for ITS. Finally, several use cases of fog-enabled ITS, including autonomous driving, cooperative driving, and shared vehicles, are shown, which further verify the benefits that fog computing can bring to ITS.

Yang Yang, Xiliang Luo, Xiaoli Chu, Ming-Tuo Zhou

Chapter 2. Fog Computing Architecture and Technologies

Fog computing is a horizontal, system-level architecture that distributes computing, storage, control, and networking functions closer to the users along a cloud-to-thing continuum. This chapter introduces the architecture and key enabling technologies of fog computing, as well as the latest developments in standardization bodies and industrial consortia. As the bridge connecting the cloud and things, fog computing plays a crucial role in identifying, integrating, managing, and utilizing multi-tier computing, communication, and storage resources in different IoT systems. Together with feasible AI algorithms, fog nodes can combine various local or regional micro-services and orchestrate more intelligent applications and services with different user preferences and stringent performance requirements. For example, autonomous driving and intelligent manufacturing require high security in data transmission and storage, very low latency in data processing and decision making, and super-high reliability in network connectivity and service provisioning. Further, the challenges of developing more sophisticated services across multiple domains are discussed.

Yang Yang, Xiliang Luo, Xiaoli Chu, Ming-Tuo Zhou

Open Access

5. SWOT Analysis of Current Agricultural Policy from the Perspective of Nature Conservation and Environmental Protection

This chapter presents an analysis of the strengths, weaknesses, opportunities, and threats of the EU’s Common Agricultural Policy from the perspective of nature conservation and environmental protection. The result is ambivalent: on the one hand, the Common Agricultural Policy provides a stable institutional framework with good funding and many environmental policy instruments. On the other hand, the dominance of agricultural policy actors, status-quo thinking, and limited opportunities for participation lead to a systematic weakening of nature and environmental policy approaches, to regulatory and enforcement deficits, to a poor data situation, and to low effectiveness and efficiency. Opportunities arise from the establishment of animal welfare, nature conservation, environmental protection, and consumer protection as a basis of legitimacy for agricultural payments, from European legal requirements for a high level of protection in the internal market, from considerable potential for public mobilization, and from technological developments. Fiscal and international distribution struggles, political polarization and radicalization, and adverse changes to agro-ecosystems, not least through human influence, are important risks.

Peter H. Feindt, Christine Krämer, Andrea Früh-Müller, Alois Heißenhuber, Claudia Pahl-Wostl, Kai P. Purnhagen, Fabian Thomas, Caroline van Bers, Volkmar Wolters

Chapter 11. Information, Knowledge, Competence, and Values Society

Information is the raw material, knowledge the substance, and competence the goal of modern education, with values acting as the ordering principles of self-organized action. Societal development runs from the information society to the knowledge society, and from the knowledge society to the competence and values society, without negating other development goals or other megatrends. The information society expresses the megatrend that, worldwide and above all in industrialized countries, measurable information and large volumes of data, Big Data, are becoming ever more important both quantitatively and qualitatively. The knowledge society marks the megatrend of culturally embedding all information in a web of knowing and opining, verifying, valuing, and exploiting. This paves the way for self-organized competence development. The society of the future is a competence society, and competence development necessarily involves the development of values. A values society is therefore needed alongside the competence society in order to avert the looming competence catastrophe after all.

John Erpenbeck, Werner Sauter

Semantic-Driven Architecture for Autonomic Management of Cyber-Physical Systems (CPS) for Industry 4.0

Today we are living through a new industrial revolution, which has its origin in the vertiginous advance of ICT technologies that have been pervasively deployed at all levels of modern society. This new industrial revolution, known as Industry 4.0, evolves within the context of a totally connected cyber-physical world in which organizations face immense challenges related to the proper exploitation of ICT technologies to create and innovate in order to develop the intelligent products and services of tomorrow’s society. This paper introduces a semantic-driven architecture intended to design, develop, and manage Industry 4.0 systems by incrementally integrating monitoring, analysis, planning, and management capabilities within autonomic processes able to coordinate and orchestrate Cyber-Physical Systems (CPS). This approach is also intended to cope with the integrability and interoperability challenges of the heterogeneous actors of the Internet of Everything (people, things, data, and services) involved in the CPS of Industry 4.0.

Ernesto Exposito
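The autonomic processes mentioned above, integrating monitoring, analysis, planning, and management, correspond to the classic MAPE control loop from autonomic computing. A schematic Python sketch with a made-up CPS sensor, threshold, and actuator, purely to illustrate the loop structure:

```python
import random
import time

def monitor() -> float:
    """Read a (simulated) sensor from the cyber-physical system."""
    return random.uniform(20.0, 90.0)      # e.g. machine temperature in Celsius

def analyze(temperature: float) -> bool:
    return temperature > 75.0               # symptom: overheating (toy threshold)

def plan(overheating: bool) -> str:
    return "throttle" if overheating else "continue"

def execute(action: str) -> None:
    print(f"actuator <- {action}")          # would command the physical asset

for _ in range(3):                          # one MAPE iteration per tick
    execute(plan(analyze(monitor())))
    time.sleep(0.1)
```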

MRI Brain Images Compression and Classification Using Different Classes of Neural Networks

The aim of this paper is to build an automatic system for the compression and classification of magnetic resonance imaging (MRI) brain images. The algorithm segments the images in order to separate regions of medical interest from the background. Only the regions of interest are compressed with a low-ratio scheme, while the rest of the image is compressed with a high-ratio scheme. The system is based on a Convolutional Neural Network (CNN) for classification and a Probabilistic Neural Network (PNN) for image segmentation. Experiments were conducted to evaluate the performance of our approach using different optimizers on a huge dataset of MRI brain images. The results confirmed that the Root Mean Square Propagation (RMSprop) optimizer converges faster and with the highest accuracy compared to the other optimizers, and showed that the proposed preprocessing scheme reduced the execution time.

Abdelhakim El Boustani, Essaid El Bachari
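To make the classification component concrete, here is a minimal Keras CNN compiled with the RMSprop optimizer that the paper found fastest-converging. The input shape, layer sizes, and two-class output are placeholders, not the authors' architecture.

```python
import tensorflow as tf

# Toy CNN classifier for single-channel (grayscale) MRI slices.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 1)),             # placeholder image size
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. normal vs. abnormal
])

model.compile(
    optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```

Swapping the optimizer line (e.g. for `tf.keras.optimizers.Adam` or `SGD`) is all it takes to reproduce the kind of optimizer comparison the paper reports.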

Chapter 6. Dis-imagining Rights, Legitimacy, and the Foundations of Politics

In this chapter, the recent changes in governance are considered from rights- and legitimacy-based perspectives. While not aiming at an exhaustive treatment of the matter, this chapter teases out some emergent issues that merit further consideration. Although it is no longer reasonable to rely on a human rights-based assessment if a posthumanist framework is accepted, it is important to show how some otherwise taken-for-granted assumptions are rendered questionable today. For that reason, some provisions from the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR) are compared against the current trends in algorithmic governance. Meanwhile, a consideration of legitimacy, particularly in its political sense, helps uncover the emergent fault lines within the three-way interplay between the electorates, the public authorities, and code.

Ignas Kalpokas

Chapter 2. Data: The Premise of New Governance

The collection, analysis, and use of data are the defining features of today’s world. In fact, it is safe to say that today’s world has been fundamentally datafied, particularly courtesy of the proliferation of platforms that provide structure to (and, in fact, drive) contemporary social and business practices. No less importantly, this process has encompassed not only public life but also the human body and private environments. To that effect, the observability and predictability of individuals and the capacity to turn their lives into valuable commodities have become highly pervasive, all the more so as the dominant platforms accumulate and strengthen their network effects. As a result, the position of the human person is profoundly altered in a world where there is no longer an outside to commercial(ised) data.

Ignas Kalpokas

Chapter 1. Introduction: I’ll Be Watching You…

Technological changes, particularly those related to the collection, analysis, and application of data, have significantly disrupted the personal, public, and economic domains. With both human persons and their environments knowable to an unprecedented extent, and with trends and correlations derivable from the accumulated data, a new mode of economic organisation, usually referred to as the platform economy, has taken hold. However, the internal logic of this economy is impossible to contain within its own domain and has therefore also spread into the governance of everyday life, not least because an ever-growing part of everyday life is lived through such platforms. Hence, this introductory chapter sets out the context for the book by delving into today’s digital-first transformations.

Ignas Kalpokas

Chapter 3. The Code That Is Law

The abundance of data in today’s world implies the need for algorithms as tools for sorting, ranking, retrieval, interpretation, and decision-making. As a result, algorithms become the moving and driving forces behind today’s life, undergirding private, public, and business environments. As a consequence, algorithms acquire an unparalleled power of governance: As they determine the architecture of everyday life, decisions made and actions performed must conform to digitally coded affordances. Hence, the regulatory function of algorithms is explored in this chapter, particularly through comparing and contrasting them to the traditional regulator—law. This analysis reveals important differences, particularly in relation to power and pervasiveness, opacity, interests served, and the general modus operandi. As a result, algorithmic governance is seen as a new and distinct form of governance.

Ignas Kalpokas

Chapter 4. Personalisation, Emotion, and Nudging

Pleasure is undeservedly excluded from studies of algorithmic governance. Nevertheless, due to the incessant competition over attention prevalent in today’s media environment, the maximisation of pleasure and consumer satisfaction becomes a must in order to be able to exert the power of algorithmic governance in the first place. Therefore, the first part of this chapter is dedicated to a discussion of the importance of enthralling one’s audience and the role of data therein. The second part, meanwhile, is focused on nudging strategies geared towards encouraging individuals to make predefined choices. However, in today’s datafied, pleasurised, and personalised environment, nudging goes beyond mere encouragement: as shown in this chapter, options can be stacked in such a way that individuals simply cannot fail to choose the option intended by the choice architect.

Ignas Kalpokas

Chapter 17. IHK/McKernel

IHK (Interface for Heterogeneous Kernels)/McKernel is a lightweight multi-kernel operating system that is designed for extreme-scale HPC systems. The basic idea of IHK/McKernel is to run Linux and a lightweight kernel (LWK) side by side on each compute node to provide both LWK scalability and full Linux compatibility. IHK/McKernel is one of the first multi-kernels that has been evaluated at large scale and that has demonstrated the advantages of the multi-kernel approach. This chapter describes the architecture of IHK/McKernel, provides insights into some of its unique features, and describes its ability to outperform Linux through experiments. We also discuss our experiences and lessons learned so far.

Balazs Gerofi, Masamichi Takagi, Yutaka Ishikawa

Chapter 1. Introduction to HPC Operating Systems

The fastest computers in the world over the last three decades have been vector machines and then massively parallel, distributed-memory systems. These machines have helped scientists in fields such as astronomy, biology, chemistry, mathematics, medicine, engineering, and physics reach a deeper understanding of natural phenomena through numerical analysis and ever more detailed simulations, from atoms to galaxies.

Balazs Gerofi, Yutaka Ishikawa, Rolf Riesen, Robert W. Wisniewski

Chapter 4. Interim Conclusion: Quo Vadis, Retail?

The sum of the aspects and arguments presented above makes clear how massive and fundamental the current challenges for retailers are today, and how rapidly developments are advancing, particularly in this sector of the economy.

Wolfgang Merkle

Chapter 5. Design Aspects in Developing an Independent and Differentiating Business Model

When considering how brick-and-mortar retail can position itself successfully in the face of comprehensive competition, one first finds many contributions to the discussion that quite generally demand differentiation from online competitors. However, the preceding chapters also made clear that competition is intensifying markedly within the brick-and-mortar environment itself. A sustainably successful business model must therefore hold its own against the competition as a whole.

Wolfgang Merkle

10. China Enables Solid Financing for SMEs: The Role of Alibaba and Alipay

A good supply of liquidity is vital for many SMEs, especially in a demanding and innovative competitive environment. The ability of Chinese firms to pay is very significant for Switzerland, with a current annual export volume of over CHF 11.4 billion (Außenhandelsstatistik 2018). It is therefore worthwhile to examine the financing basis of Chinese firms in order to assess their reliability and payment behavior. The focus here is on modern forms of financing such as peer-to-peer lending, on the role of the Chinese state, and on the companies Alibaba and Alipay. A central element is trust.

Juan Wu, Martin Schnauss, Christoph de Montrichard

1. The Significance of Digital Transformation for Swiss SMEs

One speaks of a transformation in the case of complex and fundamental organizational change. Accordingly, digital transformation stands for complex organizational change through the use of digital technologies with the aim of generating competitive advantages (our own definition). Digital technologies can affect business processes, products, services, or entire business models. Digital transformation is currently a much-discussed term, and there is no shortage of relevant publications on the subject. However, there is a significant gap between the hype around digital transformation and its actual implementation. Particularly among the SMEs that are so important for the Swiss labor market, digitalization appears to be progressing much more slowly than one would have expected. The reasons given for this sluggish implementation include a lack of knowledge about the business and technological possibilities, technical problems, missing standards, inadequate data security, and high costs. Companies also often lack concrete pressure to change because revenues are currently still stable (cf. Saam et al. (2016)). The literature generally prefers to deal with the technological possibilities rather than the obstacles of digital transformation. Technology is undoubtedly an element of digital transformation that should not be underestimated, but technology alone is not sufficient for a digital transformation. It plays only a supporting role: technology is merely the enabler of digital transformation, the tool with which companies can improve their competitiveness, business models, products, or processes.

Axel Uhl, Stephan Loretan

2. Strategic Analysis and Organizational Solutions for the Digital Transformation of a Medium-Sized Company

Case Study: Villeroy & Boch

Villeroy & Boch, a world-famous and renowned lifestyle brand with products in the Bath & Wellness and Tableware segments, is arming itself for the future with digital media. Through innovative digital solutions and processes, the company aims to move up into the digital Champions League. Key topics are e-commerce, the further development of its own online shop, cloud transformation, cyber security, and digital culture change. A tandem consisting of the Senior Director Corporate IT and the Senior Director Digital together contributes the competencies required for the complex and varied topics of digital change.

Peter Domma, Thomas Ochs, Axel Uhl

6. Digital Transformation of a Traditional Company in the Luxury Fashion Segment

Case Study: Zimmerli of Switzerland

The case study of the long-established company Zimmerli of Switzerland shows an approach to the digital transformation of an SME’s business model in the luxury fashion segment. In summary, the following insights emerge: the owners and management, in the case of SMEs steeped in history often driven more by passion and obligation than by short-term profit thinking, must be willing to open the company up to change and to relinquish a certain degree of control. The strong corporate culture, cultivated over decades, must also be taken into account. It is advisable to involve the often long-serving employees in the change process at an early stage and to establish an open culture of communication and feedback; their expertise and willingness to cooperate are particularly important in the luxury segment, given the high quality demands placed on products, services, and experiences. Finally, new know-how should not be forced into existing structures in the form of a “digitalization silo” but should flow organically into the changing organization through the involvement of external experts and the internal build-up of knowledge and skills. And last but not least, besides preserving the valuable brand, customers with their changed needs and expectations should be at the center of the change, with physical and digital touchpoints merging into a consistent and unique brand experience.

Fabio Duma, Florence Labati, Gianluca Brunetti

9. Digitalization of Investment Advice: The Example of Zürcher Kantonalbank

Digitalization does not stop at the banking industry: young technology-oriented companies are bringing competing offerings to the market, and new generations of customers increasingly demand digital offerings and communication channels. However, digitalization also offers traditional banks the opportunity to rework their processes and offerings and make them more attractive. An example of how advisory processes can be digitalized is provided by Zürcher Kantonalbank, which has digitalized its investment advice while explicitly retaining personal consultation. The possibilities of digitalization are used to increase the attractiveness of the advisory experience and to standardize the advisory process. Client advisors have a tablet with an end-to-end process at their disposal. A further innovation is the centrally managed portfolio optimization and monitoring engine, which delivers individual investment proposals in real time during the consultation.

Johannes Höllerich, Robert Fehr

3. FC Bayern’s Digital Journey: In Global Competition and Off the Pitch

Digitalization in Football

Modern professional football is also subject to the pressure of change from global megatrends. To remain at the top of the international game, FC Bayern must both digitalize and internationalize. With its “FC Bayern 4.0” strategy, the most successful German football club is consistently relying on digital technologies in order to succeed in its most important non-European markets, China and the USA. This contribution first presents FC Bayern Munich’s success model to date. It then describes the club’s digital strategy in terms of six digital capabilities: innovation capability, transformation capability, IT excellence, customer centricity, operational excellence, and effective knowledge workers. The remaining sections discuss FC Bayern’s international challenges in China and the USA as well as the competition for the lucrative sports market.

Axel Uhl, Raimond Zenhäusern

Chapter 8. Dataset Construction of Wave Energy Resources in the Maritime Silk Road

The application and sharing of scientific data have become an important indicator of a nation’s science and technology level. With the rapid development of observation methods and numerical models, ocean data has also exploded, and “big data” has entered people’s sight. How to extract useful information for energy assessments from large ocean data with low information density, and how to establish a marine energy dataset, is the key to the rational and efficient development of wave energy. At present, global marine energy datasets are rare, and a wave energy dataset for the Maritime Silk Road is still lacking. Based on previous research results, this chapter proposes to build a wave energy resource dataset for the Maritime Silk Road that is close to actual demand, convenient to query, and complete in its theoretical system. This is the first wave energy resource dataset of the Maritime Silk Road at home or abroad. In the future, the program can be widely applied to the construction of marine new-energy datasets for offshore wind energy and ocean current energy, to provide data support for researchers and engineers actively participating in the marine new-energy development of the Maritime Silk Road.

Dr. Chongwei Zheng, Dr. Jianjun Xu, Chao Zhan, Qing Wang

Chapter 2. Research Progress of Wave Energy Evaluation

The rational development of wave energy will make a positive contribution to easing the resource crisis, protecting the marine environment, improving residents’ quality of life, developing deep-sea tourism, etc. A detailed survey of resources is an important foundation for the orderly and efficient development and utilization of wave energy. Identifying advantageous areas with abundant resources, a high utilization rate, good stability, and a low frequency of severe weather can yield very efficient results. This chapter analyzes the research progress of wave energy resource assessment at home and abroad, focusing especially on current wave energy assessment along the Maritime Silk Road, in the hope of finding methods to improve the exploitation and conversion efficiency of wave energy. According to the data sources used, wave energy assessment can be divided into the following stages: (1) observation stage: wave energy resource assessment based on limited observation data; (2) satellite-based stage: satellite-based wave data are applied to wave energy resource assessment; (3) numerical simulation stage: numerical wave simulation is used in wave energy resource evaluation; (4) reanalysis data stage: reanalysis data are applied to wave energy resource assessment. Finally, the organization of this book is introduced.

Dr. Chongwei Zheng, Dr. Jianjun Xu, Chao Zhan, Qing Wang

Performance Study of Some Recent Optimization Techniques for Energy Minimization in Surveillance Video Synopsis Framework

In the age of the smart city, every activity is under surveillance, and the deployment of plentiful surveillance cameras produces a gigantic amount of redundant video data. For ease of investigation, video synopsis efficiently shrinks the video length while preserving all activities present in the original video. The outcome of video synopsis technology greatly depends on its central module, the optimization framework, and its minimization. This paper evaluates the performance of various optimization techniques, namely simulated annealing (SA), NSGA-II, the cultural algorithm (CA), teaching–learning-based optimization (TLBO), the gray wolf optimizer (GWO), the forest optimization algorithm (FOA), the JAYA algorithm, the elitist-JAYA algorithm, and the self-adaptive multi-population-based JAYA algorithm (SAMP-JAYA), for minimizing the energy in object-based surveillance video synopsis. The experimental results and analysis point to the need for an optimization algorithm that can efficiently and consistently solve the minimization problem in connection with video synopsis.

Subhankar Ghatak, Suvendu Rup
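Of the optimizers listed above, simulated annealing is the simplest to illustrate. The sketch below minimizes a stand-in energy function over a vector of activity time shifts; the energy definition and cooling schedule are illustrative, not the synopsis framework's actual cost.

```python
import math
import random

def energy(shifts: list[float]) -> float:
    """Stand-in for a synopsis energy: penalize shifts far from 3.0 (toy)."""
    return sum((s - 3.0) ** 2 for s in shifts)

def simulated_annealing(n=5, steps=5000, t0=10.0):
    state = [random.uniform(0, 10) for _ in range(n)]
    best, best_e = list(state), energy(state)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
        cand = list(state)
        cand[random.randrange(n)] += random.gauss(0, 1.0)  # perturb one shift
        d = energy(cand) - energy(state)
        if d < 0 or random.random() < math.exp(-d / t):    # Metropolis criterion
            state = cand
            if energy(state) < best_e:
                best, best_e = list(state), energy(state)
    return best, best_e

print(simulated_annealing())   # shifts approach 3.0, energy near 0
```

Accepting some uphill moves at high temperature is what lets SA escape the local minima that make the synopsis energy landscape hard for greedy methods.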

Wirtschaft 4.0: Digitalization in the Economy and Its Consequences for Economic Development Agencies

The term Wirtschaft 4.0 covers a whole series of technical innovations that quite a few have dubbed the fourth industrial revolution. In line with the great scope and reach of these mostly technical innovations, entrepreneurial demands and expectations are changing, as are the framework conditions, in particular demographic change and the shortage of skilled workers. Companies are confronted not only with new digital business models but also, in part, with quite different competitive structures amid advancing internationalization. For local economic development agencies, these processes of change mean that they are dealing with entrepreneurial needs that differ from those of just a few years ago. Today, modern infrastructural prerequisites, especially broadband connections, are in demand, as is the supply of sufficiently qualified skilled workers and flexible, digital advisory services. Whether economic development agencies will be able to meet these new demands successfully and ultimately communicate with companies at the same “digital eye level” remains to be seen. Drawing on a small study, this contribution sets out how economic development agencies are currently preparing for this change and what changes the topic of “Wirtschaft 4.0” entails for them.

Jürgen Stember

Chapter 1. Introduction

Perhaps second only to witnessing the birth of a child, in my opinion there is no experience closer to pure creation than the arduous effort and subsequent yield of software engineering. In regard to such efforts, I have found that, in many cases, the amount of fulfillment an individual perceives is directly proportional to the level of difficulty they were required to overcome. That is to say, the harder something is, the greater the feeling of accomplishment once you succeed. There is no doubt that software engineering is a difficult discipline to learn, and its limits are never-ending. But there is nothing more fruitlessly frustrating than spending tireless hours learning something only to find out that its applications are limited.

Jason Lee Hodges

Chapter 15. Further Study

You should be filled with great pride to have reached the culminating chapter of this book. However, you should also recognize that in doing so, you have not reached the end of your journey into the ever-expanding universe of computer science, but rather simply the end of the beginning. By reading this book, you have been introduced to many of the fundamental ideas and theories surrounding computer science and software engineering. However, it is imperative to understand that, though the core concepts have not changed in decades, software engineering is a rapidly evolving industry. New languages, frameworks, and conglomerations of paradigms and tools are continually being synthesized. To be a successful software engineer, you must strive to keep up with these changes. Some of the most productive engineers are successful because they are chronic autodidacts, or self-learners. In this final chapter, you will be introduced to several areas of specialty in software engineering that you can explore further on your own to deepen your understanding.

Jason Lee Hodges

Omnichannel Analytics

Retail business models have evolved over the years to create a value chain that combines multiple channels to interact with customers and suppliers. At the same time, technological advances have enabled the collection of various forms of data which can be used to support managerial decisions. This chapter provides a constructive framework to understand the practice of retail analytics—the data-driven approach to support decisions based on models and quantitative methods—through the dynamic evolution of various channels of what is now referred to as omnichannel retail. This framework is supported with several research examples that illustrate the differences in terms of data, decisions, and methods used in various retail channels, and also show more recent examples of convergence and integration across channels.

Marcel Goic, Marcelo Olivares

Chapter 10. Big Data Analytics Techniques in Capital Market Use Cases

Having surveyed the applications of Big Data Analytics in the Banking and Financial Services sector in the last chapter, we shall now provide an overview of the possible applications of Big Data Analytics in capital market use cases.

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 7. Social Semantic Web Mining and Big Data Analytics

In the context of Big Data Analytics and Social Networking, Semantic Web Mining is an amalgamation of three scientific areas of research: (1) Social Networking, (2) the Semantic Web, and (3) Big Data Analytics, including Web Mining and Data Mining.

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 6. Machine Learning Algorithms for Big Data

The growth of data from varied sources has created an enormous amount of resources. However, utilizing those resources for any useful task requires a deep understanding of the characteristics of the data. The goal of machine learning algorithms is to learn these characteristics and use them for future predictions. In the context of big data, however, applying machine learning algorithms relies on effective data processing techniques, such as exploiting data parallelism by working with huge chunks of data. Hence, machine learning methodologies are increasingly becoming statistical, and less rule-based, in order to handle data at such scale.
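
As a rough illustration of chunk-wise processing, the following sketch trains a linear classifier incrementally with scikit-learn's partial_fit; the synthetic generator stands in for chunks streamed from a distributed store:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Minimal sketch: out-of-core learning over chunks of a large dataset.
rng = np.random.default_rng(0)
clf = SGDClassifier()  # linear model trained by stochastic gradient descent
classes = np.array([0, 1])

def data_chunks(n_chunks=10, chunk_size=1000, n_features=20):
    """Stand-in generator for chunks streamed from disk or a cluster."""
    for _ in range(n_chunks):
        X = rng.normal(size=(chunk_size, n_features))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic labels
        yield X, y

for X, y in data_chunks():
    clf.partial_fit(X, y, classes=classes)  # update the model one chunk at a time

X_test = rng.normal(size=(100, 20))
y_test = (X_test[:, 0] + 0.5 * X_test[:, 1] > 0).astype(int)
print("held-out accuracy:", clf.score(X_test, y_test))
```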

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 8. Internet of Things (IOT) and Big Data Analytics

Having surveyed in the last chapter how Big Data Analytics is applied to the Social Semantic Web, in this chapter we shall delve into another very important application domain, IoT. We shall examine the interaction between IoT and Big Data Analytics.

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 9. Big Data Analytics for Financial Services and Banking

Having surveyed the application of Big Data Analytics techniques in the context of the Internet of Things (IoT), let us now examine how the application of Big Data Analytics techniques is impacting the financial services and banking sector. In the highly competitive business of financial services, companies vie with each other to grab potential customers. This calls for closely monitoring customer opinions and feedback across all the platforms of the Internet-enabled world, from mortgage applications to Twitter postings, which provide unprecedented data for drawing insights. The Big Data phenomenon has expanded the range of data types that can be processed, enabling banks and financial institutions to better digest, assimilate and respond to their physical and digital interactions with customers.

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 16. Privacy and Big Data Analytics

While personal information in the realm of Big Data is considered the most valuable resource of the twenty-first century, the 'new oil' [2], its analytics, i.e., Big Data Analytics, can be described as the 'new engine' of economic and social value creation. At the same time, the threat of loss of privacy of personal data looms large. Enterprises and other organizations keen to harness the power of Big Data, with its vast potential, are also cautious and hence recognize their responsibility to protect the privacy of the personal data collected and analyzed in the Big Data framework.

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 15. Security in Big Data

In the previous chapters, we have seen how the techniques of Big Data Analytics can be applied to various application domains such as the Social Semantic Web, IoT, Financial Services and Banking, Capital Markets and Insurance. In all these cases, the success of such applications of Big Data Analytics techniques will depend critically on security. In this chapter, we shall examine how, and to what extent, it is possible to ensure security in Big Data.

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 17. Emerging Research Trends and New Horizons

The new horizons and recent research trends in Big Data Analytics frameworks, techniques and algorithms are reflected in research papers recently published at conferences such as the ACM International Conference on Knowledge Discovery and Data Mining (ACM SIGKDD), the SIAM International Conference on Data Mining (SDM), the IEEE International Conference on Data Engineering (ICDE) and the ACM International Conference on Information and Knowledge Management (CIKM). In this chapter, we shall survey these research trends and the possible new horizons coming up in Big Data Analytics.

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 2. Intelligent Systems

In Chap. 1, we presented a total overview of Big Data Analytics. In this chapter, we delve deeper into Machine Learning and Intelligent Systems. By definition, an algorithm is a sequence of steps in a computer program that transforms given input into desired output. Machine learning is the study of artificially intelligent algorithms that improve their performance at some task with experience. With the availability of big data, machine learning is becoming an integral part of various computer systems. In such systems, the data analyst has access to sample data and would like to construct a hypothesis on the data. Typically, a hypothesis is chosen from a set of candidate patterns assumed in the data. A pattern is taken to be the algorithmic output obtained from transforming the raw input. Thus, machine learning paradigms try to build general patterns from known data to make predictions on unknown data.
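
As a minimal sketch of this "hypothesis from sample data" view, the following uses a decision tree as the candidate pattern; the synthetic dataset is hypothetical, not the chapter's own example:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Sample data: the "known" inputs and outputs the analyst has access to.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # hidden pattern to be learned

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# The hypothesis is chosen from a set of candidate patterns
# (here: all decision trees of depth at most 3).
hypothesis = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Performance "with experience": predictions on previously unknown data.
print("accuracy on unknown data:", hypothesis.score(X_test, y_test))
```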

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 3. Analytics Models for Data Science

The ultimate goal of data science is to turn raw data into data products. Data analytics is the science of examining the raw data with the purpose of making correct decisions by drawing meaningful conclusions.

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 4. Big Data Tools—Hadoop Ecosystem, Spark and NoSQL Databases

In Chap. 1, we presented a brief overall survey of Big Data and Hadoop.

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 1. Big Data Analytics

The latest disruptive trends and developments in the digital age comprise social networking, mobility, analytics and cloud, popularly known as SMAC. The year 2016 saw Big Data technologies being leveraged to power business intelligence applications. What do 2020 and beyond hold in store?

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 11. Big Data Analytics for Insurance

In the last few chapters, we have seen the application of Big Data Analytics to various application domains. In this chapter, we shall examine its role in insurance.

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 12. Big Data Analytics in Advertising

Traditionally, advertising was nothing but communicating with a target audience as a whole. But with the advent of the Internet everything changed, especially through behaviorally targeted advertisements. Since 2000, the Internet has become the primary advertising and marketing channel for businesses in all sectors. But even then, click-through rates (CTRs) flattened after a point in time; CTRs increased 62% in 2013 and later. Today, brands have access to a huge quantity of data in the form of reviews, tweets, followers, clicks, likes, etc., which offers great untapped potential.

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 13. Big Data Analytics in Bio-informatics

Bio-informatics is an interdisciplinary science which provides solutions in the disciplines of biology and health care by combining tools from various fields, such as computer science and statistics, with the storage, retrieval and processing of biological data. This interdisciplinary science can provide inputs to diverse sectors such as medicine, health, food and agriculture.

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 14. Big Data Analytics and Recommender Systems

Recommender systems are designed to augment human decision making. The objective of a recommender system is to suggest relevant items for a user to choose from a plethora of options. In essence, recommender systems are concerned with predicting personalized item choices for a user. Recommender systems produce a list of items ranked by their likeability for the user.
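
For illustration, the sketch below produces such a ranked list from a toy user-item rating matrix via item-item cosine similarity; the matrix and the choice of method are stand-ins, not the book's implementation:

```python
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items; 0 = unrated).
R = np.array([
    [5, 4, 0, 1, 0],
    [4, 0, 4, 1, 0],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
], dtype=float)

def cosine_sim(M):
    """Item-item cosine similarity over the columns of M."""
    norms = np.linalg.norm(M, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    return (M.T @ M) / (norms.T @ norms)

S = cosine_sim(R)

def recommend(user, top_n=3):
    scores = R[user] @ S              # predicted affinity for every item
    scores[R[user] > 0] = -np.inf     # mask items the user already rated
    return np.argsort(scores)[::-1][:top_n]  # ranked list, best first

print("top items for user 0:", recommend(0))
```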

Dr. C.S.R. Prabhu, Dr. Aneesh Sreevallabh Chivukula, Dr. Aditya Mogadala, Rohit Ghosh, Dr. L.M. Jenila Livingston

Chapter 1. Introduction: What Is AI?

That wasn’t a science fiction scenario. These were AI technologies that are technically feasible today and are being developed as part of computer science and engineering. Traditionally, AI (Artificial Intelligence) was understood as the simulation of intelligent human thought and action. This definition suffers from the fact that “intelligent human thinking” and “acting” are not defined. Furthermore, man is made the yardstick of intelligence, although evolution has produced many organisms with varying degrees of “intelligence”. In addition, we have long been surrounded in technology by “intelligent” systems that control our civilization independently and efficiently, but often differently from humans.

Klaus Mainzer

Chapter 9. Infrastructures Become Intelligent

The nervous system of human civilization is now the Internet. Up to now, the Internet has only been a ("stupid") database with signs and images whose meaning emerges in the user's mind. In order to cope with the complexity of the data, the network must learn to recognize and understand meanings independently. This is already achieved by semantic networks that are equipped with expandable background information (ontologies, concepts, relations, facts) and logical reasoning rules in order to independently supplement incomplete knowledge and draw conclusions. For example, people can be identified, although the data entered directly only partially describe the person. Here again it becomes apparent that semantics and understanding of meanings do not depend on human consciousness.

Klaus Mainzer

Chapter 10. From Natural and Artificial Intelligence to Superintelligence?

Classical AI research is based on the capabilities of a program-controlled computer which, according to Church's thesis, is in principle equivalent to a Turing machine. Thanks to Moore's law, gigantic computing and storage capacities have been achieved, which is what made the AI services of the WATSON supercomputer possible in the first place (cf. Sect. 5.2). But the power of supercomputers comes at a price: energy consumption on the scale of a small town. All the more impressive are human brains, which realize WATSON-like capabilities (e.g., speaking and understanding a natural language) with the energy consumption of an incandescent lamp. At this point, at the latest, one is struck by the efficiency of the neuromorphic systems that have emerged in evolution. Is there a common principle underlying these evolutionary systems that we can use in AI? Biomolecules, cells, organs, organisms, and populations are highly complex dynamic systems in which many elements interact. Complexity research in physics, chemistry, biology, and ecology deals with the question of how the interactions of many elements of a complex dynamic system (e.g., atoms in materials, biomolecules in cells, cells in organisms, organisms in populations) can lead to order and structure, but also to chaos and decay.

Klaus Mainzer

Chapter 11. How Safe Is Artificial Intelligence?

Machine learning dramatically changes our civilization. We rely more and more on efficient algorithms, because otherwise the complexity of our civilizing infrastructure would not be manageable: Our brains are too slow and hopelessly overwhelmed by the amount of data we have to deal with. But how secure are AI algorithms? In practical applications, learning algorithms refer to models of neural networks, which themselves are extremely complex. They are fed and trained with huge amounts of data. The number of necessary parameters explodes exponentially. Nobody knows exactly what happens in these "black boxes" in detail. A statistical trial-and-error procedure often remains. But how should questions of responsibility be decided in, e.g., autonomous driving or in medicine, if the methodological basics remain dark? In machine learning with neural networks, we need more explainability and accountability of causes and effects in order to be able to decide ethical and legal questions of responsibility!

Klaus Mainzer

Chapter 12. Artificial Intelligence and Responsibility

Artificial intelligence (AI) is an international future topic in research and technology, economy, and society. But research and technical innovation in AI are not enough. AI technology will dramatically change the way we live and work. The global competition between social systems (e.g., Chinese state monopolism, US-American IT giants, the European market economy with individual freedom rights) will depend decisively on how we position our European value system in the AI world.

Klaus Mainzer

Chapter 5. Computers Learn to Speak

Against the background of knowledge-based systems, Turing's famous question, which moved early AI researchers, can be taken up again: Can these systems "think"? Are they "intelligent"? The analysis shows that knowledge-based expert systems, like conventional computer programs, are based on algorithms. Even the separation of knowledge base and problem-solving strategy does not change this, because both components of an expert system must be represented in algorithmic data structures in order to finally become programmable on a computer. This also applies to the realization of natural language by computers. One example is J. Weizenbaum's ELIZA language program. In place of a human expert, ELIZA simulates a psychiatrist talking to a patient, using rules on how to react to certain sentence patterns of the patient with certain sentence patterns of the "psychiatrist". In general, it is about the recognition or classification of rules with regard to their applicability in situations. In the simplest case, the equality of two symbol structures must be determined, as by the EQUAL function in the LISP programming language for symbol lists. An extension exists if terms and variables are included in the symbolic expressions, e.g., (x B C) and (A B y).
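
A minimal Python sketch of this kind of symbolic matching, with lowercase atoms acting as variables in the spirit of the (x B C) / (A B y) example (the representation is illustrative, not Weizenbaum's code):

```python
def match(pattern, expr, bindings=None):
    """Match two symbol lists; lowercase atoms act as variables."""
    bindings = dict(bindings or {})
    if len(pattern) != len(expr):
        return None
    for p, e in zip(pattern, expr):
        if isinstance(p, str) and p.islower():      # variable
            if p in bindings and bindings[p] != e:  # already bound differently
                return None
            bindings[p] = e
        elif p != e:                                # constant mismatch
            return None
    return bindings

# (x B C) matched against (A B C) binds x -> A;
# (A B y) matched against the same list binds y -> C.
print(match(["x", "B", "C"], ["A", "B", "C"]))  # {'x': 'A'}
print(match(["A", "B", "y"], ["A", "B", "C"]))  # {'y': 'C'}
```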

Klaus Mainzer

7. From Genetic Engineering to Gene Editing: Harnessing Advances in Biology for National Economic Development

This chapter examines the nature and adoption of biotechnologies, their socio-economic impacts, regulatory frameworks and concerns for raising farm incomes in a cross-country perspective. Product development in biotech has been moving from mere insect/herbicide resistance to breaking yield barriers, drought tolerance and quality-enhancing traits; from just 3 to 31 crops; to a large share of acreage in developing countries; and to increasing penetration by the public sector. The frontier has been moving forward with the fundamental breakthrough of the CRISPR-Cas9 technique and its wide-ranging applications. A rigorous study of peer-reviewed literature shows that GE crop cultivation has increased yields and net income, reduced pesticide usage and helped conserve tillage. Biosafety laws have been stifling product development, and therefore harnessing biotechnologies necessitates enabling policies such as a legal framework for biosafety, labelling and trans-boundary movement. Developing countries need to put in place regulations for the new plant breeding techniques on par with those for conventional plant breeding techniques. Policy implications are then drawn for seizing the opportunities that advances in biotechnology offer developing-country agriculture.

Chandra Sekhara Rao Nuthalapati

3. Evaluating Government Budget Forecasts

This chapter reviews the literature on the evaluation of government budget forecasts, outlines a generic framework for forecast evaluation, and illustrates forecast evaluation with empirical analyses of different U.S. government agencies’ forecasts of U.S. federal debt. Techniques for forecast evaluation include comparison of mean squared forecast errors, forecast encompassing, tests of predictive failure, and tests of bias and efficiency. Recent extensions of these techniques utilize machine-learning algorithms to handle more potential regressors than observations, a characteristic common to big data. These techniques are generally applicable, including to forecasts of components of the government budget; to forecasts of budgets from municipal, state, provincial, and national governments; and to other economic and non-economic forecasts. Evaluation of forecasts is fundamental to assessing the forecasts’ usefulness, and evaluation can indicate ways in which the forecasts may be improved.
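
Two of the techniques mentioned, comparison of mean squared forecast errors and a bias/efficiency (Mincer-Zarnowitz) regression, can be sketched as follows on synthetic forecasts; this illustrates the idea only, not the chapter's empirical setup:

```python
import numpy as np

rng = np.random.default_rng(7)
actual = rng.normal(size=200).cumsum() + 100                  # synthetic outturns
forecast_a = actual + rng.normal(scale=1.0, size=200)         # unbiased forecaster
forecast_b = actual + 0.8 + rng.normal(scale=1.5, size=200)   # biased, noisier

# Comparison of mean squared forecast errors.
for name, f in [("A", forecast_a), ("B", forecast_b)]:
    print(name, "MSFE:", np.mean((actual - f) ** 2))

# Mincer-Zarnowitz regression: actual = a + b * forecast + error.
# Unbiasedness/efficiency corresponds to a = 0 and b = 1.
def mincer_zarnowitz(actual, forecast):
    X = np.column_stack([np.ones_like(forecast), forecast])
    coef, *_ = np.linalg.lstsq(X, actual, rcond=None)
    return coef  # (intercept a, slope b)

for name, f in [("A", forecast_a), ("B", forecast_b)]:
    a, b = mincer_zarnowitz(actual, f)
    print(f"{name}: intercept={a:.3f}, slope={b:.3f}")
```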

Neil R. Ericsson, Andrew B. Martinez

1. Introduction

The learning objectives of this chapter: Readers know various perspectives on the concept of human resource management. Readers develop an understanding of the strategic importance of human resource management. Readers gain an overview of the relevant framework conditions that influence human resource management activities. Readers know the central target variables of human resource management. Readers gain an overview of the contents of this textbook and can follow the systematic structure of its presentation.

Ruth Stock-Homburg, Matthias Groß

4. Designing Personnel Recruitment

The learning objectives of this chapter: Readers know the concept and goals of personnel recruitment. Readers can classify alternative recruitment strategies with regard to their relevance for companies. Readers know the primary target groups of recruitment. Readers gain an overview of the central phases of the recruitment process. Readers know the central topics and instruments for analyzing the labor market. Readers gain an overview of the most important steps of systematic communication in recruitment. Readers can assess the extent to which various methods are suitable for selecting applicants. Readers know the most important steps of a systematic personnel selection process. Readers know alternative recruitment channels that arise from digitalization.

Ruth Stock-Homburg, Matthias Groß

Sustainable Living? Biodigital Future!

Within the context of this book, entitled "Sustaining Resources for Tomorrow", it can quickly come to mind how and why we got here: why, after millennia of human history, it is so necessary to talk about "Green Energy and Technology", within which this writing is framed. Actually, to raise these issues today would seem rhetorical, because at this point in the twenty-first century everyone knows how and why we have reached this point. Even in the most disadvantaged or remote places one can, in one way or another, get access to a television, a mobile phone, the Internet. The fact that we continue to feel the need and urgency to address these issues makes us realize how little has actually been resolved at the global level. Few are those who act 100% in consequence of this situation. And we can find so many contradictions in our daily life…

Alberto T. Estévez

Chapter 110. Interface Between Industry 4.0 and Sustainability: A Systematic Review

This paper explores the relationship between the concepts of Industry 4.0 and sustainability as a contribution to the development of more sustainable production systems. A systematic literature review shows that this research field is expanding, as is the integration between sustainability and emerging Industry 4.0 technologies.

Lucas Conde Stocco, Luciana Oranges Cezarino

Chapter 98. New Organizational Models and Technology Growth: Ethical Conflicts in Today’s Business Scenario

This research analyzes ethical conflicts in new business scenarios, raising issues about the impact of new organizational models and technologies on professionals' behavior from an ethical perspective. The results show that some professionals are not thinking about ethics, and many confine themselves to consolidated and already known impacts.

Thaís Quinet Villela de Andrade

Chapter 92. Lean Office and Digital Transformation: A Case Study in a Services Company

The purpose of this article is to report the implementation of lean office and digital transformation in a services company. The research method was a qualitative approach, with a literature review and a case study. The comparative results between current and future value stream maps showed consistent improvements in performance indicators.

Juliana das Chagas Santos, Alberto Eduardo Besser Freitag, Oswaldo Luiz Gonçalves Quelhas

Chapter 37. Systematic Literature Reviews in Sustainable Supply Chain—SSC: A Tertiary Study

Using the systematic review (SR) method, this paper cataloged 27 SSC literature reviews in order to identify the main topics, findings, and gaps for further research. Despite past misunderstandings, sustainability in the SC is nowadays viewed more favorably. However, some topics require more attention, especially social ones.

Bruno Duarte Azevedo, Rodrigo Goyannes Gusmão Caiado, Luiz Felipe Scavarda

Chapter 34. Mapping of Humanitarian Operations Literature: A Bibliometric Approach

The purpose is to present a literature review of humanitarian operations/supply chain/logistics based on bibliometric methods for mapping this field of research. Software was used to construct maps and bibliometric networks. The science mapping approach is useful for quickly identifying the main research topics and clusters.

Rodolfo Modrigais Strauss Nunes, Susana Carla Farias Pereira

Chapter 38. A Maturity Model for Manufacturing 4.0 in Emerging Countries

Based on a systematic review and a focus group, this paper proposes a Manufacturing 4.0 maturity model in emerging countries to increase operational efficiency, improve the integration of physical and virtual structures and guide multiple stakeholders through a holistic view of the supply chain with social, environmental and economic implications.

Rodrigo Goyannes Gusmão Caiado, Luiz Felipe Scavarda, Daniel Luiz de Mattos Nascimento, Paulo Ivson, Vitor Heitor Cardoso Cunha

Chapter 17. The Use of Big Data for Researching the Leprosy Healthcare Supply Chain

The use of big data can help to analyze the healthcare variables and their relationships. Through the analysis of some variables, it is possible to measure characteristics related to the quality of care. The present study aims to use big data for analyzing the leprosy healthcare supply chain.

Annibal Scavarda, Maristela Groba Andrés, Tatiana Bouzdine-Chameeva, Narasimhaiah Gorla, Marcio Pizzi de Oliveira

Chapter 77. Lean Manufacturing and Industry 4.0—Are There Interactions? A Multiple Case Study

This study intends to show the interactions between lean manufacturing (LM) and Industry 4.0 (I4.0). We used multiple case studies of Brazilian and German companies. German companies use I4.0 to improve the benefits of LM further. However, Brazilian companies need to consolidate LM before considering the investment in I4.0.

Luiz Reni Trento, Reno Schmidt Junior, Anderson Felipe Habekost

Determining the Standard under Civil Law

The standard is the central interface between medicine and law with regard to the real-life circumstances of medical treatment. Both work closely together in determining it. As set out above, the term "standard" itself, as well as the conceptual framework underlying it, is legal in origin. However, this does not change the fact that liability law, too, relies on medicine and depends on its specialist content to give substance to its concept of the standard.

Christoph Jansen

Standard-Setting under Social Law

As a benefits and care benchmark in social law, the standard matters on two levels: on the one hand, social law faces the task of (normatively) designing a generally applicable catalogue of benefits that covers all services which are part of statutory health insurance (GKV) coverage, whose costs are therefore borne by the health insurance funds, and which the actors of the GKV system can or must use to guide their decisions. Only services that, in abstract and general terms, conform to the social-law standard can be included in this catalogue.

Christoph Jansen

The Medical Concept of the Standard and Medical Standard Formation

The starting point for determining a medical standard can only be the precepts of medicine itself. Standard formation always proceeds from within medicine. Presenting the medical concept of the standard before the legal concepts is unavoidable insofar as the law ties in with the lived reality of medicine and forms its standards from it. Of course, this must not obscure the fact that the term "standard" tends to be foreign to medicine itself. In any case, it is not unambiguously defined.

Christoph Jansen

Prediction of Student Success Through Analysis of Moodle Logs: Case Study

Data mining together with learning analytics are emerging topics because of the huge amount of educational data coming from learning management systems. This paper presents a case study of predicting students' grades using data mining methods. Data obtained from Moodle log files are explored to understand the trends and the effects of students' activities on the Moodle learning management system. Correlations of system activities with student success are found. The data is classified and modeled using decision tree, Bayesian network and support vector machine algorithms. After training the models on one year of course activity data, the next year's grades are predicted. We found that decision tree classification gives the best prediction accuracy on the test data.
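
A minimal sketch of the classification step, with hypothetical per-student features aggregated from Moodle logs (the feature set and data are invented for illustration):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical per-student features aggregated from Moodle log files.
logs = pd.DataFrame({
    "n_logins":        [12, 40, 5, 33, 25, 8, 50, 3],
    "n_forum_posts":   [1, 10, 0, 6, 4, 1, 12, 0],
    "n_quiz_attempts": [2, 8, 1, 6, 5, 2, 9, 0],
    "passed":          [0, 1, 0, 1, 1, 0, 1, 0],  # final success label
})

X = logs.drop(columns="passed")
y = logs["passed"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```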

Neslihan Ademi, Suzana Loshkovska, Slobodan Kalajdziski

Application of Hierarchical Bayesian Model in Ophthalmological Study

The problems with statistical results based on p-values, together with multiple comparisons, have often been criticized in the literature. Many authors argue that this way of reporting scientific research creates unreliable results. This issue is especially important in the era of Big Data, when many tests are run on the same data sets, which are often openly available. A way to overcome these problems is offered by Bayesian analysis. In our previous research we used a traditional statistical approach to conduct multiple hypothesis tests on our data in an ophthalmological study. The goal of this paper is to apply a hierarchical Poisson exponential model to the data and test the dependence between congenital heart disease and Brushfield spots. We give a detailed description of the model, analyze the generated Markov chains and the posterior distributions of the simulated parameters, and discuss the results from a Bayesian perspective. The results are original and have not been published before.
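
A hierarchical Poisson model with exponential priors can be sketched in PyMC roughly as follows; the counts and grouping are synthetic stand-ins, not the study's data:

```python
import numpy as np
import pymc as pm

# Synthetic stand-in: counts of Brushfield spots for patients in two groups
# (with / without congenital heart disease); not the study's actual data.
counts = np.array([3, 5, 2, 4, 6, 1, 0, 2, 1, 3])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 0 = CHD, 1 = no CHD

with pm.Model():
    # Hierarchical Poisson-exponential structure: each group's Poisson rate
    # gets an exponential prior governed by a shared hyperparameter.
    beta = pm.Exponential("beta", 1.0)                   # hyperprior
    lam = pm.Exponential("lam", beta, shape=2)           # group-level rates
    pm.Poisson("y", mu=lam[group], observed=counts)      # likelihood
    trace = pm.sample(2000, tune=1000, chains=2, random_seed=0)

# Posterior means of the two group rates.
print(trace.posterior["lam"].mean(dim=("chain", "draw")).values)
```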

Biljana Tojtovska, Panche Ribarski, Antonela Ljubic

Stock Prices Forecasting with LSTM Networks

An application of deep neural networks was studied in the area of stock price forecasting for the pharmacy chain "36 and 6". The formation of the learning sample in the time series domain was shown and a neural network architecture was proposed. A neural network for exchange trading forecasts was developed and trained using Python's Keras library. The basic parameter settings of the algorithm were established.
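
A minimal sketch of such a setup with tf.keras follows; the windowing, architecture and synthetic series are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np
from tensorflow import keras

# Build supervised samples from a univariate price series:
# each window of `lookback` prices predicts the next price.
def make_windows(series, lookback=10):
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)  # (samples, timesteps, 1)

prices = np.sin(np.linspace(0, 20, 300)) + np.linspace(0, 1, 300)  # stand-in
X, y = make_windows(prices)

model = keras.Sequential([
    keras.layers.Input(shape=(10, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print("next-step forecast:", model.predict(X[-1:], verbose=0).ravel())
```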

Tatyana Vasyaeva, Tatyana Martynenko, Sergii Khmilovyi, Natalia Andrievskaya

Spatial Clustering Based on Analysis of Big Data in Digital Marketing

Analysis and visualization of large volumes of semi-structured information (Big Data) to support decision-making is an important and urgent problem of the digital economy. This article is devoted to solving this problem in the field of digital marketing, e.g. distributing outlets and service centers across a city. We propose a technology for the adaptive formation of spatial segments of an urbanized territory based on the analysis of supply and demand areas and their visualization on an electronic map. The proposed approach to matching supply and demand includes three stages: semantic-statistical analysis, which allows building dependencies between objects generating demand; automated search for a balance between supply and demand; and visualization of solution options. An original concept of data organization using multiple layers, including a digital map, a semantic web (knowledge base) and an overlay network, was developed on the basis of the introduced spatial clustering model. The proposed technology, implemented as an intelligent software solution in a situational center for automated decision-making support, can be used to solve problems of optimizing networks of medical institutions, retail and cultural centers, and social services. The examples given in this paper illustrate the possible benefits of its practical use.
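
As a simple illustration of spatial segmentation, the sketch below clusters hypothetical demand points with k-means; the proposed technology is considerably richer (semantic layers, overlay networks), so this shows only the clustering core:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical demand points: (x, y) coordinates of customer requests
# in an urban area (e.g., extracted from geotagged orders).
rng = np.random.default_rng(1)
demand = np.vstack([
    rng.normal(loc=(2, 3), scale=0.4, size=(100, 2)),
    rng.normal(loc=(7, 8), scale=0.6, size=(120, 2)),
    rng.normal(loc=(5, 1), scale=0.5, size=(80, 2)),
])

# Spatial segments of the territory; cluster centers are candidate
# locations for outlets or service centers.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(demand)
print("candidate outlet locations:\n", kmeans.cluster_centers_)
```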

Anton Ivaschenko, Anastasia Stolbova, Oleg Golovnin

Hybrid Intelligent Systems Based on Fuzzy Logic and Deep Learning

The purpose of this lecture is to establish the fundamental links between two important areas of artificial intelligence: fuzzy logic and deep learning. This approach will allow researchers in the field of fuzzy logic to develop applied systems in the field of strong artificial intelligence, which are also of interest to specialists in machine learning. The lecture also examines how neuro-fuzzy networks make it possible to establish a link between the symbolic and connectionist schools of artificial intelligence. Many methods of rule extraction from neural networks are also investigated.

Alexey Averkin

Intellectual Property Rights in Industrial Bioprocess Engineering

Industrial bioprocess engineering is an aggregation of chemistry, biology, mathematics and industrial design that deals in particular with the various biotechnological processes employed in large-scale production in bio-related industries. It applies the principles of physics, chemistry and allied engineering disciplines to design entities and processes that mimic those found in nature. Since these entities and processes are close to nature yet not obviously natural, and have potential novelty and utility, they inherently qualify for protection by way of intellectual property rights, giving first-time manufacturers an opportunity to enjoy the fruits of their hard work, time and monetary investment for a stipulated period and to exclude others from exploiting the results and products without permission. These rights are territorial in nature; a potential invention therefore has to be reviewed keenly for protection in intellectual property terms, in accordance with domestic and international regulations, for widespread societal acceptance and exclusivity without compromising on quality. International agencies such as the World Intellectual Property Organization strive hard to harmonize intellectual property rules that are acceptable to all for fair trade and negotiations at the global level, supported by the national framework of each nation.

Sripathi Rao Kulkarni

Digitalization will radically change controlling as we know it

Controllers must address eight central challenges resulting from digitalization in the coming years. Their task profile, toolbox, and mindset must be adapted to the new parameters. Looking ahead, the number of controllers will drastically decrease as controlling develops more strongly into a management philosophy.

Utz Schäffer, Jürgen Weber

Digitalization ante portas

The key changes to controlling reflected in the third WHU study on the Future of Controlling

Digitalization has now reached controlling. The third WHU study on the Future of Controlling, like its predecessors, identifies the top ten future trends in controlling and compares its findings with the results of the 2011 and 2014 studies. The 2017 study indicates that the diverse facets of digitalization are gradually dominating the list of the top future trends in controlling and the perceived pressure to change is high. Still, in most companies little change has actually taken place.

Utz Schäffer, Jürgen Weber

XGBoost-Driven Harsanyi Transformation and Its Application in Incomplete Information Internet Loan Credit Game

In theory, the key step of the traditional Harsanyi transformation is that "Nature" assigns types to the real players according to certain probability distributions. In the big data era, obtaining these probability distributions in practice remains a challenge, partly because the historical type data of a new player is still private. Considering that it is easy to access some feature data of a new player, as well as to obtain both the feature and type data of massive numbers of other players, this paper introduces the statistical learning method eXtreme Gradient Boosting (XGBoost) to propose an XGBoost-driven Harsanyi transformation, in which XGBoost is used to predict a new player's type distribution indirectly. To test the effect of the XGBoost-driven Harsanyi transformation, an incomplete information Internet loan credit game (3ILCG) is modeled and analyzed. For a loan interest rate r = 0.2, an empirical analysis is executed on 24,000 training samples and 6,000 test samples. The experiment shows that the accuracy (A) and harmonic mean (F1) of the enterprise loan decision based on p_xgb on the 6,000 test samples are 0.900833 and 0.945864, respectively. The test experiment demonstrates that the XGBoost-driven Harsanyi transformation can help a lending platform make loan decisions scientifically in practice and improves the practical value of game theory.
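
The core idea, using a trained XGBoost model as "Nature" to assign a type distribution to a new player, can be sketched as follows on synthetic data (the features, threshold and data are invented; the paper's actual decision rule depends on the interest rate r):

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Stand-in for the platform's data: features of past borrowers and their
# realized type (1 = creditworthy, 0 = not); values are synthetic.
rng = np.random.default_rng(3)
X = rng.normal(size=(30_000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2]
     + rng.normal(scale=0.5, size=30_000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=3)

# XGBoost stands in for "Nature": it assigns a type distribution to a
# new player based on observable features.
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_train, y_train)

p_type = clf.predict_proba(X_test)[:, 1]  # P(type = creditworthy | features)
decision = (p_type > 0.5).astype(int)     # simplified lending rule
print("A :", accuracy_score(y_test, decision))
print("F1:", f1_score(y_test, decision))
```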

Yi-Cheng Gong, Yan-Na Zhang, Li Yu

The Consensus Games for Consensus Economics Under the Framework of Blockchain in Fintech

The goal of this paper is to introduce a new notion called the "Consensus Game (CG)", motivated by the mechanism design of the blockchain economy under the consensus incentives of Bitcoin ecosystems in financial technology (Fintech). We then establish general existence results for consensus equilibria of consensus games, with a corresponding interpretation from the viewpoint of blockchain consensus in Fintech, by applying the concept of hybrid solutions in game theory. As applications, our discussion of some issues and problems concerning the stability of mining-pool games for miners shows that the concept of consensus equilibria could serve as a fundamental tool for the study of consensus economics under the framework of the blockchain economy in Fintech.

Lan Di, Zhe Yang, George Xianzhi Yuan

Great European Crisis: Shift or Turning Point in Job Creation from Job Destruction

The new century's globalization in trade and finance, both between the euro area and other world economic areas and within the euro zone, quickens the pace of "creative destruction" and thereby speeds the flow of technology across European countries. In a previous work, we proposed the category of labor as a product, opposed to labor as a factor of production. Coexisting forms of labor in societies range from an upper class, the labor-product class, to different types of post-Fordist labor in which, notwithstanding the intensity of technology, labor input is declining. Have the 2008–2014 crises and the delays in reforming the institutional architecture shifted or modified the long-term trends? This paper describes a naïve model of the substitution of diverse forms of labor, developed using certain rules that proved fruitful in describing the substitution of primary energy sources (constrained logistic functions). The model is used to answer this question and to measure the structural gaps between four EU countries (Germany, Spain, France and Italy) compared with the UK economy.

Martino Lo Cascio, Massimo Bagarani

Chapter 10. Suggesting a Hybrid Approach: Mobile Apps with Big Data Analysis to Report and Prevent Crimes

Conventional crime prediction techniques rely on location-specific historical crime data. Yet relying on historical crime data alone has deficiencies, as such data is limited in scope and often fails to capture the full complexity of crimes. This chapter proposes a novel approach to employ mobile applications with big data analysis for crime reporting and prevention using aggregate data from multiple sources, the Hybrid Smart Crime Reporting App (HIVICRA). It is an infographic intelligent crime-reporting analysis application that incorporates crime data sourced from local police, social media and crowdsourcing, including sentiment analysis of Twitter streams in conjunction with historical police crime datasets. An evaluation of the approach suggests that by combining sentiment analysis with smart crime reporting applications, it is possible to improve the forecasting of crime.

Abdi Fidow, Ahmed Hassan, Mahamed Iman, X. Cheng, M. Petridis, Clifford Sule

Chapter 1. Introduction: The Police and Social Media

Social media have become a standard element in police work worldwide. This chapter provides an overview of the ways police forces employ social media from open source intelligence (OSINT) and community engagement to crisis communication. It further addresses a range of operational and ethical issues the use of social media raises when used by law enforcement agencies.

David Waddington

A Serverless Architecture for Wireless Body Area Network Applications

Wireless body area networks (WBANs) have become popular for providing real-time healthcare monitoring services. WBANs are an important subset of cyber-physical systems (CPS). As the number of sensing devices in such healthcare applications grows rapidly, security, scalability, availability and privacy become real challenges. Adoption of cloud computing is growing in the healthcare sector because it can provide high scalability while ensuring availability and affordable healthcare monitoring services. Serverless computing brings a new era to the design and deployment of event-driven applications in cloud computing. Serverless computing also helps developers build large applications using Function as a Service without having to think about the management and scalability of the infrastructure. The goal of this paper is to propose a dependable serverless architecture for WBAN applications. This architecture improves the dependability of WBAN applications by ensuring scalability, availability, security and privacy by design, in addition to being cost-effective. The paper presents a detailed price comparison between two leading cloud service providers. Additionally, it reports findings from a case study that evaluated the security, scalability and availability of the proposed architecture. The evaluation was conducted by load testing and rule-based intrusion detection.

Pangkaj Chandra Paul, John Loane, Fergal McCaffery, Gilbert Regan

Chapter 7. The Societal Perspective

Not least in view of manifold global crises and risk potentials, engagement with how societies cope with complexity has grown in importance in recent years. Against this background, the following subchapters give an overview of different approaches that converge, from different directions, on the still largely unexplored guiding model of a complexity-capable society. The oldest and currently still most dominant discourse associates the concept of a society's "development" with its capability to cope with complexity.

Karim Fathi

Chapter 6. The Organizational Perspective

In the organizational discussion, the question of how coping with complexity can be handled and institutionalized in practice is the subject of lively debate. This is likely connected, among other things, with the finding, confirmed by several studies, that the half-life of companies has decreased significantly over recent decades. Among the most frequently cited references are the companies that, according to Fortune, were among the 500 largest US corporations in the twentieth century.

Karim Fathi

Chapter 3. Transdisciplinary Complexity Management from an Epistemological Perspective

In 2014, a consulting firm in Berlin was mandated by a social-services organization to carry out an organizational development project. The organization employed over 700 staff and was affected by a complex tangle of several problem areas that made professional support necessary. In the mandate discussion with the consulting firm, there was talk of "communication difficulties".

Karim Fathi

Chapter 6. Policies for Protection of Indian Migrant Workers in Middle East

The present chapter highlights the exploitation faced by low-skilled Indian migrants to the Middle East and attempts to enhance understanding of the evolution of the policies and measures undertaken by India for their protection. The chapter revisits a study done by the author on Indian migrant workers, based on first-hand data collected in the host country Lebanon during the late 1990s, regarding migrants' poor living and working conditions and their exploitation. The study had brought out the need for intervention by the governments of sending countries to frame effective policies to protect migrant workers from exploitation and inhuman treatment. Since then, the Indian government has undertaken several measures for the protection of low-skilled migrant workers, especially in the Middle East. Drawing upon a comprehensive literature review and anecdotal evidence, it is observed that the exploitation of low-skilled migrant workers, including Indian workers, in the Middle East still continues. Thereafter, a comprehensive look is taken at the efforts made to protect migrants' rights at various levels, including the steps taken by India as a sending country. A detailed analysis is undertaken of why exploitation continues in spite of the extensive proactive measures taken by India to protect migrant workers. It is found that several factors combine to perpetuate the exploitation of migrant workers in the fiercely competitive labour markets of the Middle East. The chapter concludes with several suggestions, including the adoption of pro-migrant policies by sending as well as destination countries, in order to empower migrants, to ensure that they do not fall prey to unscrupulous agents at home and are protected in host countries, and to harness migration for the benefit of both sending and destination countries.

Seema Gaur

A Possibility of AI Application on Mode-choice Prediction of Transport Users in Hanoi

Mode choice is a significant element of travel demand modelling. The selection of a transport mode depends strongly on the travel behavior of transport users, for instance travel distance, trip cost, trip purpose, household income and so on. This study investigates the possibility of applying Artificial Intelligence (AI) methods to predict mode choice from travel behavior survey data, with a focus on Hanoi, Vietnam. First, a travel interview survey was conducted with 311 transport users across different land-use types. Second, the study applies the Ensemble Decision Trees (EDT) method to predict the mode choice of transport users. Finally, recommendations on the possibilities of applying AI to travel mode choice are proposed. The results of this study might be beneficial for transport planners and transport authorities. The application of AI to parking demand forecasting also contributes to big data applications in transport demand modeling.
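
A minimal sketch of the mode-choice prediction step with an ensemble of decision trees (a random forest here, as a stand-in for the paper's EDT method; the survey records are invented):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical survey records: one row per respondent.
df = pd.DataFrame({
    "distance_km":  [1.2, 8.5, 3.0, 15.0, 0.8, 6.2, 11.5, 2.4],
    "trip_cost":    [0.5, 2.0, 1.0, 3.5, 0.3, 1.8, 2.8, 0.9],
    "income_level": [1, 3, 2, 3, 1, 2, 3, 2],   # ordinal household income
    "mode":         ["walk", "car", "bus", "car",
                     "walk", "motorbike", "car", "bus"],
})

X, y = df.drop(columns="mode"), df["mode"]

# An ensemble of decision trees learns the mapping from trip/household
# attributes to the chosen transport mode.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

new_user = pd.DataFrame(
    {"distance_km": [5.0], "trip_cost": [1.5], "income_level": [2]})
print("predicted mode:", model.predict(new_user)[0])
```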

Truong Thi My Thanh, Hai-Bang Ly, Binh Thai Pham

Innovation Reduces Risk for Sustainable Infrastructure

Society and standards require more and more "risk-informed" decisions. The paper demonstrates the potential for reducing risk by implementing reliability and risk concepts as a complement to conventional analyses. Reliability evaluations can range from qualitative estimates and simple statistical evaluations to full quantitative probabilistic modelling of hazards and consequences. The paper first introduces recent innovative developments that help reduce risk. Risk assessment and risk management are briefly touched upon. An example of the application of the new stress-testing method is given. The usefulness of the seminal (1969) Observational Method is discussed. The need for developing sustainable and holistic civil engineering solutions is also briefly mentioned. The paper concludes that reliability-based approaches provide useful complementary information and enable the analysis of complex uncertainties in a systematic and more complete manner than deterministic analyses alone. There is today a cultural shift in the approach to design and risk reduction in our profession. Reliability and risk-based approaches will assist in preparing sustainable engineering recommendations and making risk-informed decisions.

Suzanne Lacasse

4. Addressing Target Groups: Taking the Bull by the Horns

If you really want to offer the user, and thus the potential applicant, the best possible orientation, it should also be unmistakably clear whom your career website addresses and who is actually being sought. Common sense tells you this, and so do the applicants themselves. And not only that: thanks to information overload, people tend toward selective information perception, and you should counter this with selective satisfaction of their information needs. If you fail to meet this expectation, the potential applicant will quickly leave your website, and will find what they are looking for on your competitor's career site. Essentially, then, like the king's daughter Europa, you must take the bull by the horns: figuratively speaking, approach those who have found their way to your career pages boldly and satisfy their information needs. As a result, clever (!) targeting leads to better-suited and better-fitting applicants.

Henner Knabenreich

Let’s Listen to the Data: Sonification for Learning Analytics

This paper falls within the field of playing analytics. It reports an empirical work dedicated to exploring the potential of data sonification (i.e. the conversion of data into sound that reflects their objective properties or relations). Data sonification is proposed as an alternative to data visualization. We applied data sonification to the analysis of gameplays and players' strategies during a session dedicated to game-based learning. The data of our study (digital traces) were collected from 200 pre-service teachers who played Tamagocours, an online collaborative multiplayer game for learning the rules (i.e. copyright) that govern the use of digital resources in an educational context. For one typical individual (paragon) from each of the 5 categories of players, the collected digital traces were converted into an audio format so that the actions they performed become listenable. A dedicated piece of software, SOnification of DAta for Learning Analytics (SODA4LA), was developed for this purpose. The first results show that different features of the data can be recognized by listening to them. These results also enable the identification of different parameters that should be taken into account for the sonification of diachronic data. We consider that this study opens new perspectives for playing analytics. We therefore advocate new research aiming to explore the potential of data sonification for the analysis of complex and diachronic datasets in the field of educational sciences.
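
As a toy illustration of sonification, the sketch below maps a sequence of activity counts to tone pitches and writes a WAV file using only numpy and the standard library; the mapping is a hypothetical example, unrelated to the SODA4LA software:

```python
import wave
import numpy as np

# Map a sequence of event intensities to tone pitches and write a WAV file.
events = [1, 3, 2, 5, 4, 2, 1]          # hypothetical per-interval action counts
rate, tone_len = 44100, 0.25            # sample rate (Hz), seconds per tone

samples = []
for e in events:
    freq = 220 * 2 ** (e / 5)           # more activity -> higher pitch
    t = np.linspace(0, tone_len, int(rate * tone_len), endpoint=False)
    samples.append(0.5 * np.sin(2 * np.pi * freq * t))
signal = np.concatenate(samples)

with wave.open("trace.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)                   # 16-bit PCM
    f.setframerate(rate)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())
```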

Eric Sanchez, Théophile Sanchez

Applying Epistemic Network Analysis to Explore the Application of Teaching Assistant Software in Classroom Learning

With the rapid development of information technology, teaching assistant software has been steadily entering classroom learning. How to effectively apply this technical resource in classroom learning has become one of the focuses of educational research and practice. In this study, the epistemic network analysis method was used to process interview texts from students using teaching aids, and the effect of applying teaching aids in classroom learning was discussed. The results show that there is a significant difference between high-scoring and low-scoring students in their use of instructional software in classroom learning, especially with regard to their learning motivation towards it. Additionally, the use of epistemic network analysis could improve the accuracy of decision-making references for evaluating the effect of software use and implementing precise teaching.

Lijiao Yue, Youli Hu, Jing Xiao

Using Epistemic Networks with Automated Codes to Understand Why Players Quit Levels in a Learning Game

Understanding why students quit a level in a learning game could inform the design of appropriate and timely interventions to keep students motivated to persevere. In this paper, we study student quitting behavior in Physics Playground (PP), a physics game for secondary school students. We focus on student cognition that can be inferred from their interaction with the game. PP logs meaningful and crucial student behaviors relevant to physics learning in real time. The automatically generated events in the interaction log are used as codes for quantitative ethnography analysis. We examine epistemic networks from five levels to see how the temporal interconnections between events differ between students who quit the game and those who did not. Our analysis revealed that students who quit over-rely on nudge actions and tend to settle on a solution more quickly than students who successfully complete a level, often failing to identify the correct agent and supporting objects to solve the level.

Shamya Karumbaiah, Ryan S. Baker, Amanda Barany, Valerie Shute

Use of Training, Validation, and Test Sets for Developing Automated Classifiers in Quantitative Ethnography

Using automated classifiers to code discourse data enables researchers to carry out analyses on large datasets. This paper presents a detailed example of applying training, validation and test sets frequently utilized in machine learning to develop automated classifiers for use in quantitative ethnography research. The method was applied to two dispositional constructs. Within one cycle of the process, reliable and valid automated classifiers were developed for Social Disposition. However, the automated coding scheme for Inclusive Disposition was rejected during the validation stage due to issues of overfitting. Nonetheless, the results demonstrate the beneficial potential of using preclassified datasets in enhancing the efficiency and effectiveness of the automation process.
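
The three-way split the paper applies can be sketched as follows; the texts, labels and classifier are stand-ins for the coded discourse data and automated coding scheme:

```python
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical pre-coded discourse excerpts (1 = construct present).
texts = ["we should all share ideas", "I built it alone", "let us work together",
         "everyone is welcome here", "this is my own project", "join our team",
         "group effort matters", "solo work suits me"] * 10
labels = [1, 0, 1, 1, 0, 1, 1, 0] * 10

# Split once into train / validation / test (60 / 20 / 20 here).
X_tmp, X_test, y_tmp, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.25, random_state=0)

vec = TfidfVectorizer().fit(X_train)          # fit features on training set only
clf = LogisticRegression().fit(vec.transform(X_train), y_train)

# Tune and accept/reject on the validation set (e.g., detect overfitting).
print("validation accuracy:", clf.score(vec.transform(X_val), y_val))
# Only a classifier that passed validation is evaluated once on the test set.
print("test accuracy:", clf.score(vec.transform(X_test), y_test))
```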

Seung B. Lee, Xiaofan Gui, Megan Manquen, Eric R. Hamilton

Theme Analyses for Open-Ended Survey Responses in Education Research on Summer Melt Phenomenon

Summer melt is a phenomenon when college-intending students fail to enroll in the fall after high school graduation. Previous research on summer melt utilized surveys, typically consisting of Likert scale questions and open-ended response questions. Open-ended responses can elicit more information from students, but they have not been fully analyzed due to the cost, time, and complexity of theme extraction with manual coding. In the present study, we applied the topic modeling approach to extract topics and relevant themes, and evaluated model performance by comparing model-generated topics and categories with the human-identified topics and themes. Results showed that the topic model allows for extracting similar topics as the survey questions that were investigated, but only extracted part of the themes classified by the human. Discussion and implications focus on potential improvements in automated topic and theme classification from open-ended survey responses.
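
A minimal topic-modeling sketch in the spirit of this approach, using LDA from scikit-learn on invented responses (the study's actual model and data differ):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical open-ended survey responses on college enrollment barriers.
responses = [
    "tuition was too expensive and financial aid came late",
    "I could not afford the deposit or the tuition bill",
    "the enrollment paperwork and forms were confusing",
    "I missed the orientation deadline and the forms",
    "my family needed me to work over the summer",
    "I took a job to help my family with money",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(responses)

# Extract 3 latent topics and print their top words.
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
words = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```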

Haiying Li, Joyce Zhou-Yile Schnieders, Becky L. Bobek

Towards Dynamic Data Placement for Polystore Ingestion

Integrating low-latency data streaming into data warehouse architectures has become an important enhancement to support modern data warehousing applications. In these architectures, heterogeneous workloads with data ingestion and analytical queries must be executed with strict performance guarantees. Furthermore, the data warehouse may consist of multiple different types of storage engines (a.k.a. polystores or multi-stores). A paramount problem is data placement: different workload scenarios call for different data placement designs. Moreover, workload conditions change frequently. In this paper, we provide evidence that a dynamic, workload-driven approach is needed for data placement in polystores with low-latency data ingestion support. We study the problem based on the characteristics of the TPC-DI benchmark in the context of an abbreviated polystore that consists of S-Store and Postgres.

Jiang Du, John Meehan, Nesime Tatbul, Stan Zdonik

An Integrated Architecture for Real-Time and Historical Analytics in Financial Services

The integration of historical data has become one of the most pressing issues for the financial services industry: trading floors rely on real-time analytics of ticker data with a very strong emphasis on speed, not scale, yet a large number of critical tasks, including daily reporting and backtesting of models, put the emphasis on scale. As a result, implementers continuously face the challenge of having to meet contradicting requirements and either scale real-time analytics technology at considerable cost, or deploy separate stacks for different tasks and keep them synchronized—a solution that is no less costly. In this paper, we propose Adaptive Data Virtualization (ADV) as an alternative approach to overcome this problem. ADV lets applications use different data management technologies without the need for database migrations or re-configuration of applications. We review the incumbent technology and compare it with the recent crop of MPP databases, and draw up a strategy that, using ADV, lets enterprises use the right tool for the right job flexibly. We conclude the paper by summarizing our initial experience working with customers in the field and outline an agenda for future research.

Lyublena Antova, Rhonda Baldwin, Zhongxian Gu, F. Michael Waas

Multi-engine Analytics with IReS

We present IReS, the Intelligent Resource Scheduler, which is able to abstractly describe, optimize, and execute any batch analytics workflow with respect to a multi-objective policy. Relying on cost and performance models of the required tasks over the available platforms, IReS allocates distinct workflow parts to the most advantageous execution and/or storage engine among the available ones and decides on the exact amount of resources provisioned. Moreover, IReS efficiently adapts to the current cluster/engine conditions and recovers from failures by effectively monitoring the workflow execution in real time. Our current prototype has been tested on a plethora of business-driven and synthetic workflows, proving its potential to yield significant gains in cost and performance compared to statically scheduled, single-engine executions. IReS incurs only marginal overhead to workflow execution performance, managing to discover an approximately Pareto-optimal set of execution plans within a few seconds.

Katerina Doka, Ioannis Mytilinis, Nikolaos Papailiou, Victor Giannakouris, Dimitrios Tsoumakos, Nectarios Koziris

Feature Selection on Credit Risk Prediction for Peer-to-Peer Lending

Lending has played a key role in the economy since early civilization. One of the most important issues in the lending business is to measure the risk that the borrower will default or delay loan payment. This is called credit risk. After the Lehman shock in 2008–2009, big banks increased verification in their lending operations to reduce risk. As borrowing from established financial institutions is getting harder, social lending, also called peer-to-peer (P2P) lending, is becoming a popular alternative. Because client information in P2P lending is not as complete as in the traditional financial system, big data and machine learning have become the default methods for analyzing credit risk. However, the cost of computation and the problem of training the classifier on imbalanced data affect the quality of the results. This paper proposes a machine learning model with feature selection to measure the credit risk of individual borrowers in P2P lending. Based on our experimental results, we show that credit risk prediction for P2P lending can be improved using logistic regression combined with proper feature selection.

Shin-Fu Chen, Goutam Chakraborty, Li-Hua Li
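
As a rough illustration of the modeling recipe the abstract outlines (logistic regression plus feature selection, with some remedy for class imbalance), the following hedged sketch uses scikit-learn on synthetic borrower features; nothing here reproduces the paper's actual data, features, or selection procedure.

```python
# Hedged sketch: credit-risk prediction with logistic regression plus a
# simple univariate feature-selection step. Data is synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # 20 hypothetical borrower features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Keep only the k most informative features, then fit the classifier.
# class_weight="balanced" is one common remedy for imbalanced data.
model = make_pipeline(
    SelectKBest(f_classif, k=5),
    LogisticRegression(class_weight="balanced", max_iter=1000),
)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```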

Multi-View Learning of Network Embedding

In recent years, network representation learning on complex information networks has attracted more and more attention. Scholars usually use matrix factorization or deep learning methods to learn network representations automatically. However, existing methods only preserve a single feature of networks. How to effectively integrate multiple features of a network remains a challenge. To tackle this challenge, we propose an unsupervised learning algorithm named Multi-View Learning of Network Embedding. The algorithm preserves multiple features, including vertex attributes and the global and local topological structure of the network. Features are treated as network views. We use a variant of convolutional neural networks to learn features from these views. The algorithm maximizes the correlation between different views by canonical correlation analysis and learns an embedding that preserves multiple features of the network. Comprehensive experiments are conducted on five real networks. We demonstrate that our method better preserves multiple features and outperforms baseline algorithms in community detection, network reconstruction, and visualization.

Zhongming Han, Chenye Zheng, Dan Liu, Dagao Duan, Weijie Yang
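
The view-correlation step can be illustrated in a few lines: given per-vertex feature matrices learned from two views, canonical correlation analysis finds maximally correlated projections that can be concatenated into one embedding. This is only a sketch of the CCA idea with random stand-in features, not the paper's CNN-based pipeline.

```python
# Illustrative sketch of maximizing cross-view correlation with CCA.
# Random matrices stand in for learned per-vertex view features.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(42)
view_attr = rng.normal(size=(100, 16))   # per-vertex attribute features
view_topo = rng.normal(size=(100, 32))   # per-vertex topology features

cca = CCA(n_components=8)
z_attr, z_topo = cca.fit_transform(view_attr, view_topo)

# Concatenating the correlated projections gives one multi-view embedding.
embedding = np.hstack([z_attr, z_topo])
print(embedding.shape)  # (100, 16)
```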

A Vision Sensor Network to Study Viewers’ Visible Behavior of Art Appreciation

Since empathic processes are essential to the aesthetic experience, empathy-enabling technology for behavioral sensing is gaining popularity to support the study of anonymized viewers’ cognition in art appreciation. Because such behavior is highly dynamic and divergent among viewers, it is a challenge to observe the multiple dynamic features in the streaming data. In this study, we propose a vision sensor network (VSN) to support the visual interpretation of viewers’ appreciation of visual arts. It first annotates the features in the captured frames via a cloud API (here, the Google Cloud Vision API is used), and second, queries on nested documents in MongoDB provide universal access to the annotated features. Compared with traditional approaches based on subjective evidence, such as questionnaires or social listening methods, the proposed VSN can interpret the visible behavior of viewers in real time. In addition, it also has less selection bias because more objective evidence is captured.

Yilang Wu, Luyi Huang, Zhongyu Wei, Zixue Cheng

Legal Question Answering System Using FrameNet

A central issue in yes/no question answering is the use of a knowledge source given a question. While yes/no question answering has been studied for a long time, legal yes/no question answering largely differs from other domains. The most distinguishing characteristic is that legal issues require precise analysis of predicate argument structures and semantic abstraction in the relevant sentences. We have developed a yes/no question answering system for answering questions in the statute law domain. Our system uses a semantic database based on FrameNet, which works with a predicate argument structure analyzer, in order to recognize semantic correspondences rather than surface-string matches between given problem sentences and knowledge source sentences. We applied our system to the COLIEE (Competition on Legal Information Extraction/Entailment) 2018 task. Our frame-based system achieved better scores on average than our previous system in COLIEE 2017, and obtained the second-best score among participants of Task 4. We confirmed the effectiveness of the frame information with the COLIEE training dataset. Our results show the importance of the points described above, revealing opportunities for further work on improving our system’s accuracy.

Ryosuke Taniguchi, Reina Hoshino, Yoshinobu Kano

Question Answering System for Legal Bar Examination Using Predicate Argument Structure

We developed a question answering system for the legal bar exam which can explain how it solves a problem based on underlying logical structures. We focus on the set of subject and object with their predicate, i.e., the predicate argument structure, in order to represent the structure of legal documents. We implemented a couple of modules using different search methods. Our system combines the outputs of these modules by learning each module’s confidence value with an SVM. We manually analyzed the difficulty level of the problems, i.e., whether external knowledge is required or not. We created a structured synonym dictionary specialized to the legal domain, in which predicates are categorized together with their objects. This synonym dictionary can absorb superficial differences between predicates to solve the problems that do not require external knowledge. We confirmed that the system can solve more than 70% of the simple problems. Our system achieved the second-best score in Task 4 of the COLIEE 2018 shared task.

Reina Hoshino, Ryosuke Taniguchi, Naoki Kiyota, Yoshinobu Kano

1. Introduction

This chapter presents the fact that over the last 20 years, paternalistic approaches to health care have gradually given way to patient-oriented approaches that consider the differences, values, and experiences of patients. Around the world, healthcare organizations, institutions, and universities are doubling their efforts to involve patients and make their participation increasingly active, using different modalities of engagement and various means of motivation. In this context, the objective of this monograph is to show whether the ongoing patient revolution, based on patient knowledge, has helped transform, improve, or innovate the ways in which care services are organized and delivered, as well as the culture and practices of healthcare professionals regarding direct patient care.

Marie-Pascale Pomey, Nathalie Clavel, Jean-Louis Denis

11. Future Directions for Patient Knowledge: A Citizen-Patient Reflection

Previous chapters have offered rich and diverse examples of embedding patient knowledge into the content and conduct of every sector of healthcare around the world. Patients and caregivers are influencing every component of healthcare in unique ways that reflect local culture and system readiness. Here we take a brief look ahead at the transformative power of inviting patients and caregivers into partnership for better care. Processes already underway may well shift the job of healthcare systems from an industrial model for delivering standardized disease-focussed treatment, to care partnerships that cultivate individually-defined goals for healthy living. Such a future promises healthier working environments for practitioners, as well as stronger communities for all.

Carolyn Canfield

Leveraging Social Media to Track Urban Park Quality for Improved Citizen Health

In this chapter, we showcase the use of qualitative data available on two “geobrowsers” (i.e., Google Maps and Foursquare) and of a data-mining technique to quantify the sentiment of online reviews about parks. The underlying interest for this study comes from the growing literature suggesting that living near parks or other open spaces contributes to higher levels of physical activity, lower levels of stress, and fewer mental health problems. Mecklenburg County (North Carolina), which encompasses the City of Charlotte, is used as a case study. In a comparison among 97 cities in the USA, The Trust for Public Land ranks Charlotte’s park system at the very bottom and reports its spending per resident on the park system among the lowest 20% of these cities. Considering this lower spending, the city government may be particularly interested in leveraging publicly available data from social media to complement the assessments it already performs about its park system, such as satisfaction surveys or quality assessments. Nevertheless, Charlotte’s low ranking – although unfortunate – indicates an opportunity for the city to improve its park system, which in turn could engage residents in more physical activity and, in doing so, create positive community health outcomes.

Coline C. Dony, Emily Fekete

Extending Volunteered Geographic Information (VGI) with Geospatial Software as a Service: Participatory Asset Mapping Infrastructures for Urban Health

Community asset mapping is an essential step in public health practice for identifying community strengths, needs, and urban health intervention strategies. Community-based Volunteered Geographic Information (VGI) could facilitate customized asset mapping to link free and accessible technologies with community needs in a mutually shared, knowledge-producing process. To address this issue, we demonstrate a participatory asset mapping infrastructure developed with a Chicago community using VGI concepts, participatory design principles, and geospatial Software as a Service (SaaS) using a suite of free and/or open tools. Participatory mapping infrastructures using decentralized system architecture can link data and mapping services, transforming siloed datasets into integrated systems managed and shared across multiple organizations. The final asset mapping infrastructure includes a flexible and cloud-based data management system, an interactive web map, and a community asset data stream. By allowing for a dynamic, reproducible, adaptive, and participatory asset mapping system, health systems infrastructures can further support community health improvement frameworks by facilitating shared data and decision support implementations across health partners. Such “community-engaged VGI” is essential in integrating previously siloed data systems and facilitating means of collaboration with health systems in urban health research and practice.

Marynia Kolak, Michael Steptoe, Holly Manprisio, Lisa Azu-Popow, Megan Hinchy, Geraldine Malana, Ross Maciejewski

Introduction

This chapter provides an overview of the background and content of this book. Starting with a discussion of recent edited volumes on or closely related to urban health, this chapter highlights the need for a book on geospatial technologies for the study of urban health. The uniqueness of geospatial approaches to investigating urban health issues can be attributed to the spatial perspective and the lens of place. This chapter further argues that the continuous development of geospatial technologies, coupled with recent advances in communication and information technologies, portable sensor technologies, and the various social media and open data, has played an essential role in the modelling of environmental exposure and health risk. However, there still exist challenges for urban health studies. These challenges may be rooted in, among multiple causes, a lack of understanding of micro-level health decisions and the methodological limitations in addressing the Uncertain Geographic Context Problem. This chapter finishes with a section-by-section and chapter-by-chapter overview of the empirical studies included in this book volume. This overview is provided to illustrate the organization of this book and to serve as a guide for readers to navigate the book chapters.

Yongmei Lu, Eric Delmelle

Chapter 1. What Is Anomaly Detection?

In this chapter, you will learn about anomalies in general, the categories of anomalies, and anomaly detection. You will also learn why anomaly detection is important, how anomalies can be detected, and the use cases for such a mechanism.

Sridhar Alla, Suman Kalyan Adari

Chapter 3. Introduction to Deep Learning

In this chapter, you will learn about deep learning networks. You will also learn how deep neural networks work and how you can implement a deep learning neural network using Keras and PyTorch.

Sridhar Alla, Suman Kalyan Adari

Applying a Model-Driven Approach for UML/OCL Constraints: Application to NoSQL Databases

Big Data has received a great deal of attention in recent years. Not only is the amount of data on a completely different level than before, but we also have different types of data, varying in format, structure, and source. This has definitely changed the tools we need to handle Big Data, giving rise to NoSQL systems. While NoSQL systems have proven their efficiency in handling Big Data, how to automate the storage of Big Data in NoSQL systems remains an unsolved problem. This paper proposes an automatic approach for implementing UML conceptual models in NoSQL systems, including the mapping of the associated OCL constraints to the code required for checking them. In order to demonstrate the practical applicability of our work, we have realized it in a tool supporting four fundamental OCL expressions: Iterate-based expressions, OCL predefined operations, If expressions, and Let expressions.

Fatma Abdelhadi, Amal Ait Brahim, Gilles Zurfluh

MapSDI: A Scaled-Up Semantic Data Integration Framework for Knowledge Graph Creation

Semantic web technologies have contributed effective solutions for the problems of data integration and knowledge graph creation. However, with the rapid growth of big data in diverse domains, different interoperability issues still need to be addressed, with scalability being one of the main challenges. In this paper, we address the problem of knowledge graph creation at scale and provide MapSDI, a mapping-rule-based framework for optimizing semantic data integration into knowledge graphs. MapSDI allows for the efficient semantic enrichment of large, heterogeneous, and potentially low-quality data. The input of MapSDI is a set of data sources and mapping rules generated by a mapping language such as RML. First, MapSDI pre-processes the sources based on semantic information extracted from the mapping rules by performing basic database operators: it projects out required attributes, eliminates duplicates, and selects relevant entries. All these operators are defined based on the knowledge encoded by the mapping rules, which is then used by the semantification engine (or RDFizer) to produce a knowledge graph. We have empirically studied the impact of MapSDI on existing RDFizers and observed that knowledge graph creation time can be reduced by one order of magnitude on average. It is also shown, theoretically, that the source and rule transformations provided by MapSDI are data-lossless.

Samaneh Jozashoori, Maria-Esther Vidal
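
The pre-processing the abstract lists (projection, duplicate elimination, selection) corresponds to basic relational operators. Below is a small hedged sketch in pandas, with hypothetical column names standing in for the attributes that MapSDI would derive from the mapping rules.

```python
# Sketch of the pre-processing operators applied before RDFization:
# projection, duplicate elimination, and selection of relevant entries.
# Column names are hypothetical, not from the paper.
import pandas as pd

source = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "mutation":   ["KRAS", "KRAS", "EGFR", None],
    "comment":    ["a", "a", "b", "c"],   # not referenced by any rule
})

# 1) project out only the attributes referenced in the mapping rules
projected = source[["patient_id", "mutation"]]
# 2) eliminate duplicate rows
deduped = projected.drop_duplicates()
# 3) select entries that can actually produce triples (non-null values)
relevant = deduped.dropna(subset=["mutation"])

print(relevant)  # this reduced table is what the RDFizer would consume
```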

Modeling a Multi-agent Tourism Recommender System

Today’s design of e-services for tourists means dealing with a large quantity of information and metadata that designers should be able to leverage to generate perceived value for users. In this paper we review the design choices followed to implement a recommender system, highlighting the data processing and architectural points of view, and finally we propose a multi-agent recommender system.

Valerio Bellandi, Paolo Ceravolo, Eugenio Tacchini

Personalised Exploration Graphs on Semantic Data Lakes

Recently, organisations operating in the context of Smart Cities have been spending time and resources on turning large amounts of data, collected from heterogeneous sources, into actionable insights, using indicators as powerful tools for meaningful data aggregation and exploration. Data lakes, which follow a schema-on-read approach, allow for storing both structured and unstructured data and have been proposed as flexible repositories for enabling data exploration and analysis over heterogeneous data sources, regardless of their structure. However, indicators are usually computed based on centralised data storage, according to a less flexible schema-on-write approach. Furthermore, domain experts, who know the data stored within the data lake, are usually distinct from data analysts, who define indicators, and from users, who exploit indicators to explore data in a personalised way. In this paper, we propose a semantics-based approach for enabling personalised data lake exploration through the conceptualisation of proper indicators. In particular, the approach is structured as follows: (i) at the bottom, heterogeneous data sources within a data lake are enriched with Semantic Models, defined by domain experts using domain ontologies, to provide a semantic data lake representation; (ii) in the middle, a Multi-Dimensional Ontology is used by analysts to define indicators and analysis dimensions, in terms of concepts within the Semantic Models and formulas to aggregate them; (iii) at the top, Personalised Exploration Graphs are generated for different categories of users, whose profiles are defined in terms of a set of constraints that limit the indicator instances on which the users may rely to explore data. Benefits and limitations of the approach are discussed through an application in the Smart City domain.

Ada Bagozi, Devis Bianchini, Valeria De Antonellis, Massimiliano Garda, Michele Melchiori

A Conceptual Modelling Approach to Visualising Linked Data

Increasing numbers of Linked Open Datasets are being published, and many possible data visualisations may be appropriate for a user’s given exploration or analysis task over a dataset. Users may therefore find it difficult to identify visualisations that meet their data exploration or analysis needs. We propose an approach that creates conceptual models of groups of commonly used data visualisations, which can be used to analyse the data and users’ queries so as to automatically generate recommendations of possible visualisations. To our knowledge, this is the first work to propose a conceptual modelling approach to recommending visualisations for Linked Data.

Peter McBrien, Alexandra Poulovassilis

Creating a Vocabulary for Data Privacy

The First-Year Report of Data Privacy Vocabularies and Controls Community Group (DPVCG)

Managing privacy and understanding the handling of personal data has turned into a fundamental right, at least within the European Union, with the General Data Protection Regulation (GDPR) being enforced since May 25th, 2018. This has led to tools and services that promise compliance with GDPR in terms of consent management and keeping track of the personal data being processed. The information recorded within such tools, as well as that required for compliance itself, needs to be interoperable to provide sufficient transparency about its usage. Additionally, interoperability is also necessary for addressing the right to data portability under GDPR, as well as for the creation of user-configurable and manageable privacy policies. We argue that such interoperability can be enabled through agreement on vocabularies using linked data principles. The W3C Data Privacy Vocabularies and Controls Community Group (DPVCG) was set up to jointly develop such vocabularies towards interoperability in the context of data privacy. This paper presents the resulting Data Privacy Vocabulary (DPV), along with a discussion of its potential uses and an invitation for feedback and participation.

Harshvardhan J. Pandit, Axel Polleres, Bert Bos, Rob Brennan, Bud Bruegger, Fajar J. Ekaputra, Javier D. Fernández, Roghaiyeh Gachpaz Hamed, Elmar Kiesling, Mark Lizar, Eva Schlehahn, Simon Steyskal, Rigo Wenning

A Linked Open Data Approach for Web Service Evolution

Web services are subject to changes during their lifetime, such as updates in data types, operations, and overall functionality. Such changes may impact the way Web services are discovered, consumed, and recommended. We propose a Linked Open Data (LOD) approach for managing the deployment and updates of Web services. We propose algorithms, based on semantic LOD similarity measures, to infer composition and substitution relationships for both newly deployed and updated services. We introduce a technique that gathers Web service interactions and user feedback to continuously update service relationships. To improve the accuracy of relationship recommendation, we propose an algorithm that learns new LOD relationships from past Web service interactions. We conduct extensive experiments on real-world Web services to evaluate our approach.

Hamza Labbaci, Nasredine Cheniki, Yacine Sam, Nizar Messai, Brahim Medjahed, Youcef Aklouf

Initializing k-Means Efficiently: Benefits for Exploratory Cluster Analysis

Data analysis is a highly exploratory task, where various algorithms with different parameters are executed until a solid result is achieved. This is especially evident for cluster analyses, where the number of clusters must be provided prior to the execution of the clustering algorithm. Since this number is rarely known in advance, the algorithm is typically executed several times with varying parameters. Hence, the duration of the exploratory analysis heavily depends on the runtime of each execution of the clustering algorithm. While previous work shows that the initialization of clustering algorithms is crucial for fast and solid results, it solely focuses on a single execution of the clustering algorithm and thereby neglects previous executions. We propose Delta Initialization as an initialization strategy for k-Means in such an exploratory setting. The core idea of this new algorithm is to exploit the clustering results of previous executions in order to enhance the initialization of subsequent executions. We show that this algorithm is well suited for exploratory cluster analysis, as considerable speedups can be achieved while additionally attaining clustering results superior to state-of-the-art initialization strategies.

Manuel Fritz, Holger Schwarz
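
The abstract states the core idea precisely enough to sketch: reuse the centroids of a previous execution to seed the next one. Below is a hedged illustration with scikit-learn, seeding a k=4 run with the k=3 centroids plus the point farthest from them; the paper's actual Delta Initialization rules may differ from this simplification.

```python
# Warm-starting the next k-Means execution with centroids from the previous
# one. The "previous centroids + farthest point" seed is an illustrative
# simplification of the exploit-previous-results idea.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))

prev = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Distance of every point to its nearest existing centroid
d = np.min(np.linalg.norm(X[:, None] - prev.cluster_centers_[None], axis=2),
           axis=1)
seed = np.vstack([prev.cluster_centers_, X[np.argmax(d)]])

# Warm-started execution with k=4: a single run from the derived seeds
nxt = KMeans(n_clusters=4, init=seed, n_init=1).fit(X)
print(nxt.inertia_)
```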

Enabling Compressed Encryption for Cloud Based Big Data Stores

We propose a secure yet efficient data query system for cloud-based key-value stores. Our system supports encryption and compression to ensure confidentiality and query efficiency simultaneously. To reconcile encryption and compression without compromising performance, we propose a new encrypted key-value storage structure based on the concept of horizontal-vertical division. Our storage structure enables fine-grained access to compressed yet encrypted key-value data. We further combine several cryptographic primitives to build secure search indexes on top of the storage structure. As a result, our system supports rich types of queries, including key-value queries and range queries. We implement a prototype of our system on top of Cassandra. Our evaluation shows that our system increases throughput by up to 7 times and the compression ratio by up to 1.3 times compared to previous work.

Meng Zhang, Saiyu Qi, Meixia Miao, Fuyou Zhang
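
A generic compress-then-encrypt round trip shows why the two steps must be ordered this way: ciphertext is essentially incompressible, so compressing after encryption gains nothing. The sketch below uses zlib and the cryptography package purely as stand-ins; the paper's horizontal-vertical storage structure and searchable indexes are not reproduced here.

```python
# Generic compress-then-encrypt sketch for a key-value entry.
import zlib
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()
f = Fernet(key)

value = b"sensor reading 42.0;" * 100          # highly redundant value
stored = f.encrypt(zlib.compress(value))       # compress first, then encrypt

# Reading the entry back reverses the order: decrypt, then decompress.
assert zlib.decompress(f.decrypt(stored)) == value
print(len(value), "->", len(stored))           # size reduction preserved
```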

Catheter Synthesis in X-Ray Fluoroscopy with Generative Adversarial Networks

Accurate localization of catheters or guidewires in fluoroscopy images is important to improve the stability of intervention procedures as well as the development of surgical navigation systems. Recently, deep learning methods have been proposed to improve performance; however, these techniques require extensive pixel-wise annotations, and the human annotation effort is expensive. In this study, we mitigate this labeling effort using generative adversarial networks (cycleGAN), wherein we synthesize realistic catheters in fluoroscopy from localized guidewires in camera images, whose annotations are cheaper to acquire. Our approach is motivated by the fact that catheters are tubular structures with varying profiles; thus, given a guidewire in a camera image, we can obtain the centerline that follows the profile of a catheter in an X-ray image and create plausible X-ray images composited with such a centerline. In order to generate an image similar to an actual X-ray image, we propose a loss term that includes perceptual loss alongside the standard cycle loss. Experimental results show that the proposed method performs better than the conventional GAN and generates images of consistent quality. Further, we provide evidence supporting the development of methods that leverage such synthetic composite images in supervised settings.

Ihsan Ullah, Philip Chikontwe, Sang Hyun Park
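
The proposed loss term can be sketched generically: a cycle-consistency L1 term plus a perceptual term computed on pretrained VGG features. The PyTorch snippet below is a hedged illustration of that composition, with placeholder weights and 3-channel inputs assumed; it is not the authors' exact formulation.

```python
# Hedged sketch: cycle-consistency L1 loss plus a VGG-feature perceptual loss.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

vgg_feats = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
for p in vgg_feats.parameters():
    p.requires_grad_(False)

def perceptual_loss(fake, real):
    # compare mid-level VGG feature maps instead of raw pixel values
    # (assumes 3-channel inputs normalized the way VGG expects)
    return F.mse_loss(vgg_feats(fake), vgg_feats(real))

def generator_loss(real, reconstructed, fake, target,
                   lam_cyc=10.0, lam_perc=1.0):
    cycle = F.l1_loss(reconstructed, real)   # standard cycle-consistency term
    perc = perceptual_loss(fake, target)     # added perceptual term
    return lam_cyc * cycle + lam_perc * perc
```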

TADPOLE Challenge: Accurate Alzheimer’s Disease Prediction Through Crowdsourced Forecasting of Future Data

The Alzheimer’s Disease Prediction Of Longitudinal Evolution (TADPOLE) Challenge compares the performance of algorithms at predicting the future evolution of individuals at risk of Alzheimer’s disease. TADPOLE Challenge participants train their models and algorithms on historical data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) study. Participants are then required to make forecasts of three key outcomes for ADNI-3 rollover participants: clinical diagnosis, Alzheimer’s Disease Assessment Scale Cognitive Subdomain (ADAS-Cog 13), and total volume of the ventricles – which are then compared with future measurements. Strong points of the challenge are that the test data did not exist at the time of forecasting (it was acquired afterwards), and that it focuses on the challenging problem of cohort selection for clinical trials by identifying fast progressors. The submission phase of TADPOLE was open until 15 November 2017; since then data has been acquired until April 2019 from 219 subjects with 223 clinical visits and 150 Magnetic Resonance Imaging (MRI) scans, which was used for the evaluation of the participants’ predictions. Thirty-three teams participated with a total of 92 submissions. No single submission was best at predicting all three outcomes. For diagnosis prediction, the best forecast (team Frog), which was based on gradient boosting, obtained a multiclass area under the receiver-operating curve (MAUC) of 0.931, while for ventricle prediction the best forecast (team EMC1), which was based on disease progression modelling and spline regression, obtained mean absolute error of 0.41% of total intracranial volume (ICV). For ADAS-Cog 13, no forecast was considerably better than the benchmark mixed effects model (BenchmarkME), provided to participants before the submission deadline. Further analysis can help understand which input features and algorithms are most suitable for Alzheimer’s disease prediction and for aiding patient stratification in clinical trials. The submission system remains open via the website: https://tadpole.grand-challenge.org/ .

Răzvan V. Marinescu, Neil P. Oxtoby, Alexandra L. Young, Esther E. Bron, Arthur W. Toga, Michael W. Weiner, Frederik Barkhof, Nick C. Fox, Polina Golland, Stefan Klein, Daniel C. Alexander

3. Process Model

What exactly is a process? Here is a first answer to this question:

Frank Urlaß

6. Evolution Model for the New IT

Once all the described steps for developing a new IT have been completed successfully and the new IT system is in productive use, the work is not yet done.

Frank Urlaß

Chapter 13. Only for Large Corporations, or Does Recruiting Analytics Also Work in Medium-Sized Businesses?

Poor hiring decisions can be costly, so they should be avoided. Recruiting analytics can be the solution for recognizing the causes of undesirable developments early and proactively bringing about change. Yet medium-sized companies in particular still struggle with its implementation. However, those who can identify the relevant candidate touchpoints, make use of the available data sources, and define the right key performance indicators can quickly see how effective individual measures are, achieve transparency across the entire recruiting process, and use the available budget far more efficiently.

Marcel Rütten

Chapter 11. How the Job Search Becomes a Dream-Job Search – and How HR Tech Helps

Choosing a job is, alongside choosing a partner and a place to live, the most important decision in a person's life. It determines their living circumstances and is an important driver of life satisfaction. It is therefore important that people make their job decisions on the broadest possible basis of information. Usually, however, they cannot: even at the moment of signing a contract, many people do not know what their future workplace looks like, who they will be working with, or whether they will fit into the team. StepStone's goal is to change that. We want to enable people to make the right decision. Using the most innovative technologies, we help in the search for the dream job and accelerate the path to the "perfect match" between talents and companies. But we are also convinced that, in the end, the decision for the right job and the right candidate always remains subjective.

Rudi Bauer

Chapter 3. Big Data in Recruiting

Big Data, as a driving force of advancing digitalization, directly influences the way recruiting is done. With the help of existing datasets and the use of predictive analytics for data processing and decision support, new possibilities arise to identify specific talent markets and trends as quickly as possible, to proactively approach potential candidates, and to make candidate selection more evidence-based and accurate. With the effective implementation of a digital recruiting strategy, a synergy between Big Data, artificial intelligence technologies, and human decision-makers can lead to enormous productivity gains. However, the new technological possibilities also entail risks, since the criteria used by machine learning systems to evaluate candidates often appear opaque, and incorporated biases can lead to the systematic disadvantage of certain groups of people. Roles, requirements, and responsibilities are already changing, so from a business perspective it seems indispensable to face the digital transformation process and to adapt recruiting to this change as quickly as possible.

Maximilian Tallgauer, Marion Festing, Florian Fleischmann

Chapter 1. Introduction

How this book came about and what awaits you here

This chapter gives a brief account of how this book came about, a summary of the main topics that the book and each of its chapters address, and my personal recommendation as to who will gain value from this book and whom I would rather advise against reading it.

Tim Verhoeven

A Probabilistic Approach to Web Waterfall Charts

The purpose of this paper is to propose an efficient and rigorous modeling approach for probabilistic waterfall charts illustrating the timings of web resources, with a particular focus on fitting them on big data. An implementation on real-world data is discussed and illustrated with examples. The technique is based on non-parametric density estimation, and we discuss some subtle aspects of it, such as noisy inputs or singular data. We also investigate optimization techniques for the numerical integration that arises as part of the modeling.

Maciej Skorski
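
At its core, the probabilistic chart rests on estimating a density for each resource's timing and integrating it over intervals. A minimal sketch with SciPy's Gaussian KDE on synthetic timings:

```python
# Fit a kernel density estimate to observed load times of one web resource
# and integrate it over an interval. Timings here are synthetic stand-ins.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
timings = rng.lognormal(mean=5.0, sigma=0.3, size=2000)  # load times (ms)

kde = gaussian_kde(timings)

# Probability that the resource finishes between 100 ms and 200 ms;
# integrate_box_1d performs the numerical integration discussed above.
p = kde.integrate_box_1d(100, 200)
print(f"P(100 ms <= t <= 200 ms) = {p:.3f}")
```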

Cutting the Data Fishers' Nets: Ideas Against the Market Power of the Platforms

While the data corporations Google and Facebook continue on their way to digital dominance, the political discussion about limiting their power is gaining momentum. Here we have compiled important ideas for regulating the platform monopolies.

Ingo Dachwitz, Simon Rebiger, Alexander Fanta

Do we have a Data Culture?

Nowadays, adopting a “data culture” or operating “data-driven” are desired goals for a number of managers. However, what does it mean when an organization claims to have a data culture? A clear definition is not available. This paper aims to sharpen the understanding of data culture in organizations by discussing recent usages of the term. It shows that data culture is a kind of organizational culture. A special form of data culture is a data-driven culture. We conclude that a data-driven culture is defined by following a specific set of values, behaviors, and norms that enable effective data analytics. Besides these values, behaviors, and norms, this paper presents the job roles necessary for a data-driven culture. We include the crucial role of the data steward, who elevates a data culture to a data-driven culture by administering data governance. Finally, we propose a definition of data-driven culture that focuses on the commitment to data-based decision making and an ever-improving data analytics process. This paper helps teams and organizations of any size that strive towards advancing their – not necessarily big – data analytics capabilities by drawing their attention to the often neglected, non-technical requirements: data governance and a suitable organizational culture.

Wolfgang Kremser, Richard Brunauer

Facilitating Public Access to Legal Information

The European legal system is multi-layered and complex, and large quantities of legal documentation have been produced since its inception. This has significant ramifications for European society, whose various constituent actors require regular access to accurate and timely legal information and often struggle with basic comprehension of legalese. The project discussed in this paper proposes to develop a suite of user-centric services that will ensure the real-time provision and visualisation of legal information to citizens, businesses, and administrations, based on a platform supported by the proper environment for semantically annotated Big Open Legal Data (BOLD). The objective of this research paper is to critically explore how current user activity interacts with the components of the proposed project platform through the development of a conceptual model. Model Driven Design (MDD) is employed to describe the proposed project architecture, complemented by the use of the Agent Oriented Modelling (AOM) technique based on UML (Unified Modelling Language) user activity diagrams to develop the proposed platform’s user requirements and show the dependencies that exist between the different components that make up the proposed system.

Shefali Virkar, Chibuzor Udokwu, Anna-Sophie Novak, Sofia Tsekeridou

Using supervised learning to predict the reliability of a welding process

In this paper, supervised learning is used to predict the reliability of manufacturing processes in industrial settings. As an example case, lifetime data has been collected from a special device made of sheet metal. It is known that a welding procedure is the critical step during production. To test the quality of the welded area, End-of-Life tests have been performed on each of the devices. For the statistical analysis, not only the acquired lifetime, but also data specifying the device before and after the welding process, as well as measured curves from the welding step itself, e.g., current over time, are available. Typically, the Weibull and log-normal distributions are used to model lifetime. In our case, too, both are considered appropriate candidate distributions. Although both distributions might fit the data well, the log-normal distribution is selected because the KS test and the Bayes factor indicate slightly better results. To model the lifetime depending on the welding parameters, a multivariable linear regression model is used. To find the significant covariates, a mix of forward selection and backward elimination is utilized. The t-test is used to determine each covariate’s importance, while the adjusted coefficient of determination is used as a global goodness-of-fit criterion. After the model that provides the best fit has been determined, predictive power is evaluated with non-exhaustive cross-validation and the sum of squared errors. The results show that the lifetime can be predicted based on the welding settings. For lifetime prediction, the model yields accurate results when interpolation is used. However, extrapolation beyond the range of available data shows the limits of a purely data-driven model.

Melanie Zumtobel, Kathrin Plankensteiner
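
The distribution-selection step can be sketched with SciPy: fit both candidate lifetime distributions and compare their KS statistics. The simulated lifetimes below merely stand in for the measured End-of-Life data.

```python
# Fit log-normal and Weibull candidates to lifetime data and compare them
# with a one-sample KS test. Lifetimes are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
lifetimes = rng.lognormal(mean=8.0, sigma=0.5, size=200)

# Fit both candidates (location fixed at 0, as is usual for lifetimes)
ln_params = stats.lognorm.fit(lifetimes, floc=0)
wb_params = stats.weibull_min.fit(lifetimes, floc=0)

# One-sample KS test against each fitted distribution
ks_ln = stats.kstest(lifetimes, "lognorm", args=ln_params)
ks_wb = stats.kstest(lifetimes, "weibull_min", args=wb_params)
print("log-normal:", ks_ln.statistic, " Weibull:", ks_wb.statistic)
# The candidate with the smaller KS statistic fits the sample better.
```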

Smart recommendation system to simplify projecting for an HMI/SCADA platform

Modelling and connecting machines and hardware devices of manufacturing plants in HMI/SCADA software platforms is considered time-consuming and requires expertise. A smart recommendation system could help to support and simplify the tasks of the projecting process. In this paper, supervised learning methods are proposed to address this problem. Data characteristics, modelling challenges, and two potential modelling approaches, one-hot encoding and probabilistic topic modelling, are discussed. The methodology for solving this problem is still in progress. First results are expected by the date of the conference.

Sebastian Malin, Kathrin Plankensteiner, Robert Merz, Reinhard Mayr, Sebastian Schöndorfer, Mike Thomas

Surveillable World: Will the Darknet Become the Mainstream of Digital Communication?

This article discusses the extent to which the darknet holds future potential for digital communication. Given state power structures, it can be assumed that surveillance and control capabilities will increasingly be transferred to the digital sphere.

Daniel Moßbrucker

Impact of Anonymization on Sentiment Analysis of Twitter Postings

The process of policy-modelling, and the overall field of policy-making, is complex and confronts decision-makers with great challenges. One of them is the inclusion of citizens in the decision-making process. This can be done via various forms of E-Participation, with active/passive citizen-sourcing as one way to tap into current discussions about topics and issues of relevance to the general public. An increased understanding of the feelings behind certain topics and the resulting behavior of citizens can provide great insight for public administrations. Yet at the same time, it is more important than ever to respect the privacy of citizens, act in a legally compliant way, and thereby foster public trust. While the introduction of anonymization in order to guarantee privacy preservation represents a proper solution to the challenges stated above, it is still unclear if and to what extent the anonymization of data will impact current data analytics technologies. Thus, this research paper investigates the impact of anonymization on the sentiment analysis of social media in the context of smart governance. Three anonymization algorithms are tested on Twitter data, and the results are analyzed regarding changes within the resulting sentiment. The results reveal that the proposed anonymization approaches indeed have a measurable impact on the sentiment analysis, up to a point where results become potentially problematic for further use within the policy-modelling domain.

Thomas J. Lampoltshammer, Lőrinc Thurnay, Gregor Eibl
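
A hedged miniature of the experimental design: score a posting before and after anonymization and compare the sentiment. VADER stands in for the paper's sentiment tooling, and the regex masking is a deliberately naive anonymizer, not one of the three algorithms evaluated.

```python
# Compare sentiment scores of a posting before and after naive anonymization.
# Requires: nltk.download("vader_lexicon")
import re
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
post = "Thanks @mayor_smith, the new bike lanes in Springfield are wonderful!"

# Naive masking of handles and capitalized tokens (a crude NER stand-in)
anonymized = re.sub(r"@\w+", "@user", post)
anonymized = re.sub(r"\b[A-Z][a-z]+\b", "entity", anonymized)

print(sia.polarity_scores(post)["compound"])
print(sia.polarity_scores(anonymized)["compound"])
# A shift in the compound score quantifies the anonymization impact.
```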

Chapter 4. Consumption Pattern as a Turning Point from Material Consumption to Service Consumption

As China enters the late stage of industrialization, the 13th FYP period will see a major trend towards the overall and rapid growth of service consumption by over 1.3 billion people, while the consumption pattern is upgrading from material consumption to service consumption. However, the imbalance between investment and consumption has become a prominent structural contradiction in the current economic operation. By 2020, a new pattern of consumption-oriented economic growth will basically have formed. The key is to promote investment transformation and achieve a dynamic balance between investment and consumption.

Fulin Chi

Chapter 3. Urbanization as a Turning Point from Scale to Population

In the 13th FYP period, China has entered a new stage of population urbanization. New-type urbanization still holds the greatest potential for China’s development. Based on this, new ideas are needed to deepen the reform of the household registration system in the 13th FYP period. The first idea is a change from population control to the service and management of the population. The second is a change from the dual household registration system in urban and rural areas to the comprehensive implementation of the residence permit system. The third is a change in population management from the public security department to the population service department. By 2020, replacing the dual urban-rural household registration system with the residence permit system will be a historic breakthrough.

Fulin Chi

Chapter 9. Structural Reform with Economic Transformation and Upgrading as the Main Line

The 13th FYP period is an important historical node for China to cross the middle-income trap. In this particular context, structural reform should take economic transformation and upgrading as its main line and cope with five major relationships: between speed and structure; between the short term and the long term; between policy and system; between the government and the market; and between top-level design and grassroots innovation.

Fulin Chi

Chapter 2. Industrial Change as a Turning Point from Industry Orientation to Service Orientation

The core of China’s economic transformation and upgrading during the 13th FYP period is industrial structural change. For one thing, industrialization has entered its late stage, in which the general trend is marked by the accelerated development of the service industry. For another, contradictions caused by an irrational industrial structure have become ever more manifest. To form a service-oriented industrial structure by 2020, the key is to propel the opening of the market by breaking monopolies, while making an effort to resolve the contradictions between policy and system during the opening of the service industry.

Fulin Chi

Chapter 8. Promoting Government Reform with a Focus on Regulatory Changes

In the 13th FYP period, market regulation has its own strong particularities. For one thing, with the upgrading of the consumption pattern, the release of domestic demand potential is directly dependent on the effectiveness of market regulation. For another, the lagging transformation of market regulation has become the “biggest short board” in further promoting the reform of streamlining administration and decentralization. In this context, both further streamlining and decentralization, and upgrading the consumption pattern to release the potential of domestic demand, directly depend on the effectiveness of market regulation.

Fulin Chi

Chapter 5. Opening-Up as the Turning Point Is Shifting from Goods Trade to Service Trade

In the 13th FYP period, China’s further opening-up and the new round of global free trade have formed a historical intersection, which provides an important opportunity for China’s “second opening-up” focusing on service trade.

Fulin Chi

Quality-Driven Query Processing over Federated RDF Data Sources

The integration of data from heterogeneous sources is a common task in various domains to enable data-driven applications. Data sources may range from publicly available sources to sources within the data lakes of companies. The added value generated by integrating and analyzing the data greatly depends on the quality of the underlying data. As a result, querying heterogeneous data sources as a way of integrating data for such applications needs to consider quality aspects. Quality-driven query processing over RDF data sources aims to study approaches that consider data quality descriptions of the data sources to determine optimal query plans. In contrast to most federated query approaches, in quality-driven query processing the quality of an optimal plan, and thus of the retrieved data, depends not only on efficiency, typically measured as execution time, but also on other quality criteria. In this work, we present the challenges associated with considering multiple quality criteria in federated query processing and derive our problem statement accordingly. We present our research questions to address the problem and the associated hypotheses. Finally, we outline our approach, including an evaluation plan, and provide preliminary results.

Lars Heling

Using Knowledge Graphs to Search an Enterprise Data Lake

This paper summarizes our research & development activities in building a semantic data management platform for large enterprise data lakes, with a focus on the automotive domain. We demonstrate the use of ontology models to systematically represent, link, and search large amounts of automotive data. Such search capability is an important enabler for Hadoop-based big data analytics and machine learning. These findings are being transferred to a productive system in order to foster advanced engineering and AI at Bosch Chassis Systems Control (CC), especially in the automated driving area.

Stefan Schmid, Cory Henson, Tuan Tran

2. Doomed to Fail

“We have a great ERP system, but it delivers results that do not match reality. We have to do a lot of things manually and constantly double-check the results.” Or: “With growing revenue, our administrative workload is much higher than expected.” Sentences like these can be heard in companies again and again, and we are confronted with the topic of master data almost daily. Each time we ask ourselves why companies acquire an ERP system yet never get it off the ground and pay it little attention. Many of our customers lose sight of the actual goal they were pursuing with an ERP system.

Tobias Hertfelder, Philipp Futterknecht

4. Diagnosis: Self-Knowledge – No Measures Without Goals

“Climbing the summit for once” – don’t you also have this dream of one day seeing your company at the top, whatever that means to you personally? After a brief reverie, back in reality, you are sitting at your desk again and your work presumably proceeds as usual. As a rule, it takes less than 15 minutes in the morning before you have to put out the first fire. You have long wanted to devote yourself to implementing the strategic topics that matter to you, but there are always more urgent things to do first. Mr. Maier has already called twice, plenty of emails need checking, and the first meeting starts in 30 minutes.

Tobias Hertfelder, Philipp Futterknecht

Multi-scale Microaneurysms Segmentation Using Embedding Triplet Loss

Deep learning techniques have recently been used in fundus image analysis and diabetic retinopathy detection. Microaneurysms are an important indicator of diabetic retinopathy progression. We introduce a two-stage deep learning approach for microaneurysm segmentation using multiple scales of the input with selective sampling and an embedding triplet loss. The model first segments on two scales, and the segmentations are then refined with a classification model. To enhance the discriminative power of the classification model, we incorporate a triplet embedding loss with a selective sampling routine. The model is evaluated quantitatively to assess the segmentation performance and qualitatively to analyze the model predictions. This approach yields a 30.29% relative improvement over the fully convolutional neural network.

Mhd Hasan Sarhan, Shadi Albarqouni, Mehmet Yigitsoy, Nassir Navab, Abouzar Eslami
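
The embedding triplet loss in the second stage can be illustrated with PyTorch's built-in TripletMarginLoss; the toy embedder and random patches below are placeholders for the paper's classification network and sampled fundus patches.

```python
# Hedged sketch of an embedding triplet loss: anchor and positive come from
# the same class, the negative from the other class.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64))  # toy embedder
triplet = nn.TripletMarginLoss(margin=1.0)

anchor   = embed(torch.randn(8, 1, 32, 32))  # microaneurysm patches
positive = embed(torch.randn(8, 1, 32, 32))  # other microaneurysm patches
negative = embed(torch.randn(8, 1, 32, 32))  # background patches

# Pulls same-class embeddings together and pushes classes apart, which is
# what makes the refinement classifier more discriminative.
loss = triplet(anchor, positive, negative)
loss.backward()
```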

CS-Net: Channel and Spatial Attention Network for Curvilinear Structure Segmentation

The detection of curvilinear structures in medical images, e.g., blood vessels or nerve fibers, is important in aiding the management of many diseases. In this work, we propose a general, unifying curvilinear structure segmentation network that works on different medical imaging modalities: optical coherence tomography angiography (OCT-A), color fundus images, and corneal confocal microscopy (CCM). Instead of the U-Net based convolutional neural network, we propose a novel network (CS-Net) which includes a self-attention mechanism in the encoder and decoder. Two types of attention modules are utilized, spatial attention and channel attention, to further integrate local features with their global dependencies adaptively. The proposed network has been validated on five datasets: two color fundus datasets, two corneal nerve datasets, and one OCT-A dataset. Experimental results show that our method outperforms state-of-the-art methods; for example, sensitivities of corneal nerve fiber segmentation were at least 2% higher than those of the competitors. As a complementary output, we made manual annotations of the two corneal nerve datasets, which have been released for public access.

Lei Mou, Yitian Zhao, Li Chen, Jun Cheng, Zaiwang Gu, Huaying Hao, Hong Qi, Yalin Zheng, Alejandro Frangi, Jiang Liu

Accelerated ML-Assisted Tumor Detection in High-Resolution Histopathology Images

Color normalization is one of the main tasks in the processing pipeline of computer-aided diagnosis (CAD) systems in histopathology. This task reduces the color and intensity variations that are typically present in stained whole-slide images (WSI) due to, e.g., non-standardization of staining protocols. Moreover, it increases the accuracy of machine learning (ML) based CAD systems. Given the vast amount of gigapixel-sized WSI data, and the need to reduce the time-to-insight, there is an increasing demand for efficient ML systems. In this work, we present a high-performance pipeline that enables big data analytics for WSIs in histopathology. As an exemplary ML inference pipeline, we employ a convolutional neural network (CNN), used to detect prostate cancer in WSIs, with stain normalization preprocessing. We introduce a set of optimizations across the whole pipeline: (i) we parallelize and optimize the stain normalization process, (ii) we introduce a multi-threaded I/O framework optimized for fast non-volatile memory (NVM) storage, and (iii) we integrate the stain normalization optimizations and the enhanced I/O framework in the ML pipeline to minimize the data transfer overheads and the overall prediction time. Our combined optimizations accelerate the end-to-end ML pipeline by 7.2× and 21.2×, on average, for low and high resolution levels of WSIs, respectively. Significantly, it allows for a seamless integration of the ML-assisted diagnosis with state-of-the-art whole slide scanners, by reducing the prediction time for high-resolution histopathology images from ~30 min to under 80 s.

Nikolas Ioannou, Milos Stanisavljevic, Andreea Anghel, Nikolaos Papandreou, Sonali Andani, Jan Hendrik Rüschoff, Peter Wild, Maria Gabrani, Haralampos Pozidis

Ki-GAN: Knowledge Infusion Generative Adversarial Network for Photoacoustic Image Reconstruction In Vivo

Photoacoustic computed tomography (PACT) breaks through the depth restriction of optical imaging and the contrast restriction of ultrasound imaging, which is achieved by receiving the thermoelastically induced ultrasound signal triggered by an ultrashort laser pulse. PA images are usually reconstructed from the raw PA signals with conventional reconstruction algorithms, e.g., filtered back-projection. However, the performance of conventional reconstruction algorithms is usually limited by complex and uncertain physical parameters due to heterogeneous tissue structure. In recent years, deep learning has emerged to show great potential for the reconstruction problem. In this work, for the first time to the best of our knowledge, we propose to infuse classical signal processing and certified knowledge into deep learning for PA image reconstruction. Specifically, we propose a novel Knowledge Infusion Generative Adversarial Network (Ki-GAN) architecture that combines the conventional delay-and-sum algorithm to reconstruct PA images. We train the network on a public clinical database. Our method shows better image reconstruction performance for both fully sampled and sparsely sampled data compared with state-of-the-art methods. Lastly, our proposed approach also shows high potential for other imaging modalities beyond PACT.

Hengrong Lan, Kang Zhou, Changchun Yang, Jun Cheng, Jiang Liu, Shenghua Gao, Fei Gao

Quantifying Confounding Bias in Neuroimaging Datasets with Causal Inference

Neuroimaging datasets keep growing in size to address increasingly complex medical questions. However, even the largest datasets today alone are too small for training complex machine learning models. A potential solution is to increase sample size by pooling scans from several datasets. In this work, we combine 12,207 MRI scans from 15 studies and show that simple pooling is often ill-advised due to introducing various types of biases in the training data. First, we systematically define these biases. Second, we detect bias by experimentally showing that scans can be correctly assigned to their respective dataset with 73.3% accuracy. Finally, we propose to tell causal from confounding factors by quantifying the extent of confounding and causality in a single dataset using causal inference. We achieve this by finding the simplest graphical model in terms of Kolmogorov complexity. As Kolmogorov complexity is not directly computable, we employ the minimum description length to approximate it. We empirically show that our approach is able to estimate plausible causal relationships from real neuroimaging data.

Christian Wachinger, Benjamin Gutierrez Becker, Anna Rieckmann, Sebastian Pölsterl
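
A toy version of the MDL idea can be sketched as follows: fit a simple model in each direction and prefer the direction with the shorter two-part code length. The linear-Gaussian model class and the exact code-length terms are simplifying assumptions; the paper's graphical-model search is more general.

```python
# Minimum-description-length sketch for inferring causal direction.
import numpy as np

def description_length(x, y):
    """Approximate two-part code length for the model x -> y:
    cost of a linear model plus cost of its Gaussian residuals."""
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    var = resid.var() + 1e-12
    # Gaussian code length of residuals plus a small model-cost term.
    return 0.5 * n * np.log2(2 * np.pi * np.e * var) + 2 * np.log2(n)

def causal_direction(x, y):
    # Prefer the direction with the shorter total description length.
    return "x->y" if description_length(x, y) < description_length(y, x) else "y->x"
```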

Confounder-Aware Visualization of ConvNets

With recent advances in deep learning, neuroimaging studies increasingly rely on convolutional networks (ConvNets) to predict diagnosis based on MR images. To gain a better understanding of how a disease impacts the brain, such studies visualize the saliency maps of the ConvNet, highlighting the brain voxels that contribute most to the prediction. However, these saliency maps are generally confounded, i.e., some salient regions are more predictive of confounding variables (such as age) than of the diagnosis. To avoid such misinterpretation, we propose in this paper an approach that aims to visualize confounder-free saliency maps that only highlight voxels predictive of the diagnosis. The approach incorporates univariate statistical tests to identify confounding effects within the intermediate features learned by the ConvNet. The influence of the subset of confounded features is then removed by a novel partial back-propagation procedure. We use this two-step approach to visualize confounder-free saliency maps extracted from a synthetic dataset and two real datasets. These experiments reveal the potential of our visualization in producing unbiased model interpretation.

Qingyu Zhao, Ehsan Adeli, Adolf Pfefferbaum, Edith V. Sullivan, Kilian M. Pohl
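
The first step of the two-step approach, identifying confounded intermediate features with univariate tests, can be sketched as follows; the specific test (Pearson correlation with Bonferroni correction) is an illustrative assumption.

```python
# Flag intermediate ConvNet features that correlate with a confounder.
import numpy as np
from scipy import stats

def confounded_feature_mask(features, confounder, alpha=0.05):
    """features: (n_subjects, n_features) intermediate activations,
    confounder: (n_subjects,) e.g. age. Flags features significantly
    correlated with the confounder after Bonferroni correction."""
    n_feat = features.shape[1]
    mask = np.zeros(n_feat, dtype=bool)
    for j in range(n_feat):
        r, p = stats.pearsonr(features[:, j], confounder)
        mask[j] = p < alpha / n_feat
    return mask  # True = confounded; exclude from back-propagation
```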

COMETA: An Air Traffic Controller’s Mental Workload Model for Calculating and Predicting Demand and Capacity Balancing

In ATM (Air Traffic Management), traffic and environment are not important by themselves. The most important factor is the cognitive work performed by the air traffic controller (ATCo). Because maintaining a detailed mental picture can exceed the ATCo's limited attentional resources (causing mental overload), he or she can use internal strategies, called abstractions, to mitigate the cognitive complexity of the control task. This paper gathers the modelling, automation and preliminary calibration of the Cognitive Complexity concept. The primary purpose of this model is to support ATM planning roles in detecting imbalances and making decisions about the best DCB (Demand and Capacity Balancing) measures to resolve hotspots. The four parameters selected, which provide meaningful operational information for mitigating cognitive complexity, are Standard Flow Interactions, Flights out of Standard Flows, Potential Crossings and Flights in Evolution. The model has been integrated into a DCB prototype within SESAR (Single European Sky ATM Research) 2020 Wave 1 during real-time simulations.

Patricia López de Frutos, Rubén Rodríguez Rodríguez, Danlin Zheng Zhang, Shutao Zheng, José Juan Cañas, Enrique Muñoz-de-Escalona

Use of Edge Computing for Predictive Maintenance of Industrial Electric Motors

The Industrial Internet of Things has become a reality in many kinds of industries. In this paper, we explore the case of a machine generating large quantities of raw data. In this case, it is not viable to store and process the data in a traditional Internet of Things architecture. We therefore use an architecture based on edge computing and Industrial Internet of Things concepts and apply it to a case of machine monitoring for predictive maintenance. The proof of concept shows the potential benefits in real industrial applications.

Victor De Leon, Yira Alcazar, Jose Luis Villa
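
A minimal sketch of the edge-side idea, reducing raw sensor streams to compact features and forwarding only anomalies, could look like this; the window size, threshold, and publish stub are assumptions, not the authors' system.

```python
# Edge-side feature extraction for motor vibration monitoring.
import numpy as np

WINDOW = 4096          # samples per analysis window (assumed)
RMS_THRESHOLD = 1.8    # alarm level, tuned per motor (assumed)

def process_window(samples):
    """Reduce a raw vibration window to a few features at the edge;
    only anomalous summaries are forwarded to the cloud."""
    rms = float(np.sqrt(np.mean(samples ** 2)))
    peak = float(np.max(np.abs(samples)))
    summary = {"rms": rms, "peak": peak, "crest": peak / (rms + 1e-9)}
    if rms > RMS_THRESHOLD:
        publish_to_cloud(summary)
    return summary

def publish_to_cloud(summary):
    print("ALERT", summary)  # stand-in for an MQTT/HTTP publish call
```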

Chapter 5. Digital Thinking in Education

Digital thinking has been considered one of the elements of competence and should therefore be made an integral part of a child's analytical ability; accordingly, it must be incorporated into school learning. The present chapter discusses the ways in which a digital thinking approach can be integrated into the education system. The challenges associated with appropriately defining digital thinking are highlighted, and the aspects associated with its core and peripheral definitions are discussed. Examples are presented to delineate how digital thinking can be used in both formal and informal education settings. The chapter ends with concluding remarks on the future research agenda for this subject.

Kaushik Kumar, Divya Zindani, J. Paulo Davim

Looking Inside the Black Box: Core Semantics Towards Accountability of Artificial Intelligence

Recent advances in artificial intelligence raise a number of concerns. Among the challenges to be addressed by researchers, accountability of artificial intelligence solutions is one of the most critical. This paper focuses on artificial intelligence applications using natural language to investigate whether the core semantics defined for a large-scale natural language processing system could assist in addressing accountability issues. Core semantics aims to obtain a full interpretation of the content of natural language texts, representing both implicit and explicit knowledge, using only 'subj-action-(obj)' structures and causal, temporal, spatial and personal-world links. The first part of the paper offers a summary of the difficulties to be addressed and of the reasons why representing the meaning of a natural language text is relevant for artificial intelligence accountability. In the second part, a proof of concept for applying such a knowledge representation to support accountability is presented, along with a detailed example of the analysis obtained with a prototype system named CoreSystem. While only preliminary, these results give some new insights and indicate that the provided knowledge representation can be used to support accountability, looking inside the box.

Roberto Garigliano, Luisa Mich

Towards Model Checking Product Lines in the Digital Humanities: An Application to Historical Data

Rapid developments in computing techniques and database systems have aided the digitization of, and access to, various historical (big) data, with significant challenges of analysis and interoperability. The Death and Burial Data, Ireland project aims to build a Big Data interoperability framework loosely based on the Knowledge Discovery Data (KDD) process to integrate Civil Registration of Death data with other data types collated in Ireland from 1864 to 1922. For our project, we resort to a Document Type Description (DTD) product line to represent and manage various representations and enrichments of the data. Well-formed documents serve as contracts between a provider (of the data set) and its customers (the researchers who consult them). We adopt Context-Free Modal Transition Systems as a formalism to specify product lines of DTDs. The goal is then to proceed to product line verification using context-free model checking techniques, specifically the M3C checker of [14], to ensure that they are fit for purpose. The goal is to later implement and manage the corresponding family of data models and processes in the DIME framework, leveraging its flexible data management layer to define and efficiently manage the interoperable historical data framework for future use. The resulting hierarchical product line verification will allow our technical platform to act as a high-quality service provider for digital humanities researchers, providing them with a wide range of tailored applications implementing the KDD process, whose essential business rules are easily checked by a standard DTD checker.

Ciara Breathnach, Najhan M. Ibrahim, Stuart Clancy, Tiziana Margaria
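
As a small illustration of DTDs acting as contracts between data providers and consumers, the sketch below validates a document against a toy DTD with lxml; the element structure is an invented placeholder, not the project's actual schema.

```python
# Validate a document against a DTD as a provider/consumer contract.
from io import StringIO
from lxml import etree

# A toy DTD standing in for one member of the project's DTD product
# line; the real element structure of burial records is assumed here.
dtd = etree.DTD(StringIO("""
<!ELEMENT record (name, date, cause?)>
<!ELEMENT name (#PCDATA)>
<!ELEMENT date (#PCDATA)>
<!ELEMENT cause (#PCDATA)>
"""))

doc = etree.fromstring(
    "<record><name>J. Murphy</name><date>1891-03-02</date></record>")

# Well-formedness plus DTD validity is the contract check.
print(dtd.validate(doc))        # True (the optional <cause> is absent)
print(dtd.error_log.filter_from_errors())
```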

Chapter 12. Sports Club 4.0 – An Analysis of the Potential of Digital Transformation for Grassroots Sport

Digitalization is currently on everyone's lips: business, politics and society are grappling with digital topics and are confronted with digital trends every day. The way we communicate, do business, work, consume, and interact with one another is changing as a result. These developments, with their intense dynamics and far-reaching consequences, are also making themselves felt in the everyday life of German sports clubs. What potential do these changes hold for grassroots sport, and what risks stand against it? This contribution first addresses the theoretical foundations of digital transformation before introducing German grassroots sports clubs with their specific characteristics and challenges. It then examines the status quo of how (grassroots) sport has engaged with digital transformation so far, and concludes by specifying the opportunities and risks of digital transformation for grassroots sports clubs.

Linda Volkmann

6. Phase Models of Partnerships

In an ecosystem, the essential success factors are the compatibility of strategies and a consistent set of objectives. Beyond that, all participants deliberately strive for the cultural convergence of their values. The larger the number of parties involved in an ecosystem, the more difficult this task becomes. Solving it requires certain management competencies, such as strategic foresight and planning skills, as well as consistent processes and procedures for implementing strategic measures and goals. In this chapter we discuss, step by step, the individual core competencies necessary for planning and implementing strategic partnerships.

Noah Farhadi

Towards a Deep Learning Approach for Urban Crime Forecasting

This paper presents a deep learning approach for urban crime forecasting. A deep neural network architecture is designed so that it can be trained using geo-referenced data of criminal activity and road intersections to capture relevant spatial patterns. Preliminary results suggest this model is able to identify zones with criminal activity in square areas of 500 × 500 m² at a weekly scale.

Freddy Piraján, Andrey Fajardo, Miguel Melgarejo
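
A minimal sketch of such an architecture, a small ConvNet mapping stacked weekly crime-count grids plus a road-intersection channel to a per-cell activity probability, is shown below; the layer sizes and input encoding are illustrative assumptions, not the paper's design.

```python
# Toy spatial crime-forecasting ConvNet (illustrative architecture).
import torch
import torch.nn as nn

class CrimeNet(nn.Module):
    """Maps a stack of past weekly crime-count grids plus a
    road-intersection density channel to next week's activity map."""
    def __init__(self, in_weeks=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_weeks + 1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),  # P(activity) per cell
        )

    def forward(self, x):          # x: (batch, in_weeks+1, H, W)
        return self.net(x)

model = CrimeNet()
grids = torch.rand(8, 5, 32, 32)   # 4 past weeks + road-density channel
print(model(grids).shape)          # torch.Size([8, 1, 32, 32])
```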

A Study on the Effect of ‘Information Mismatch’ Simulation on Victims’ Quality of Life and Sense of Place in the Post-disaster Period

The purpose of this study was to construct a gaming simulation model based on information scenarios obtained from questionnaire surveys in L'Aquila. A second objective was to discover which quality of life (QOL) factors are affected by disaster information mismatch (DIM) when using a gaming simulation model based on information scenarios. The study conducted an experiment on the variation of QOL factors under DIM using a questionnaire survey of seven students. It clarified that, when DIM occurred, victims' QOL decreased compared to the situation with no DIM in each disaster phase. The results indicate that the construction of disaster information sharing systems based on residents' needs should be harmonized with public specialized information.

Hiroaki Shimizu, Ryoya Tomeno, Quirino Crosta, Micaela Merucuri, Satoru Ono, Hidehiko Kanegae, Paola Rizzi

MELODIC: Selection and Integration of Open Source to Build an Autonomic Cross-Cloud Deployment Platform

MELODIC is an open source platform for autonomic deployment and optimized management of cross-cloud applications. The MELODIC platform is a complete, enterprise-ready solution built only from open source software. The contribution of this paper is a discussion of integration approaches and the various options for large-scale open source projects, together with an evaluation showing that only a combination of an Enterprise Service Bus (ESB) with Business Process Management (BPM) for platform integration and control, and the use of a distributed Event Management Service (EMS) for monitoring state and creating context awareness, provides the required stability and reliability. Consequently, the selection, evaluation, and design process of these three crucial components of the MELODIC platform are described.

Geir Horn, Paweł Skrzypek, Marcin Prusiński, Katarzyna Materka, Vassilis Stefanidis, Yiannis Verginadis

Method of Improving the Cyber Resilience for Industry 4.0 Digital Platforms

Cyber resilience is the most important feature of any cyber system, especially during the transition to the sixth technological stage and the related Industry 4.0 technologies: Artificial Intelligence (AI), cloud and fog computing, 5G+, IoT/IIoT, Big Data and ETL, Q-computing, blockchain, VR/AR, etc. Cyber resilience should even be considered primary, because the mentioned systems cannot exist without it. Indeed, without a sustainable formation of the interconnected components of the critical information infrastructure, it makes no sense to discuss the existence of Industry 4.0 cyber systems. Whereas the cyber security of these systems focuses mainly on assessing the probability of incidents and preventing possible security threats, cyber resilience aims at preserving the targeted behavior and performance of cyber systems under both known (about 45%) and unknown (the remaining 55%) cyber-attacks.

Sergei Petrenko, Khismatullina Elvira

Above the Clouds: A Brief Study

Cloud computing is a versatile technology that can support a broad spectrum of applications. Its low cost and dynamic scaling render it an innovation driver for small companies, particularly in the developing world. Cloud-deployed enterprise resource planning (ERP), supply chain management (SCM), customer relationship management (CRM), medical, business and mobile applications have the potential to reach millions of users. In this paper, we explore the different concepts involved in cloud computing and examine clouds from technical aspects. We highlight some of the opportunities in cloud computing, underlining the importance of clouds and showing why this technology must succeed, and we present additional cloud computing problems that businesses may need to address. Finally, we discuss some of the issues that this area should deal with.

Subham Chakraborty, Ananga Thapaliya

Measurements for Energy Efficient, Adaptable, Mobile Systems - A Research Agenda

Software systems are the enabling technology for the development of sustainable systems. However, such systems consume power on both the client side and the server side. This scenario poses a new challenge to software engineering: developing software for sustainable systems, i.e. systems that explicitly characterize the resources under their control, that dynamically evolve to maintain an acceptable consumption of resources while making the best possible trade-off with user needs, and that are opportunistic and proactive in taking actions that can optimize future resource consumption based on context and past experience. This paper outlines a research agenda in this area.

Vladimir Ivanov, Sergey Masyagin, Andrey Sadovykh, Alberto Sillitti, Giancarlo Succi, Alexander Tormasov, Evgeny Zouev

Document Recommendation Based on Interests of Co-authors for Brain Science

Personalized knowledge recommendation is an effective means of providing individual information services in the field of brain science. Achieving this goal requires a complete understanding of authors' interests and accurate recommendation. In this paper, a collaborative recommendation method based on co-authorship is proposed. In our approach, an analysis of collaborators' interests and the calculation of a collaborative value are used to make recommendations. Finally, experiments using real documents associated with brain science are presented and provide support for collaborative document recommendation in the field.

Han Zhong, Zhisheng Huang
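
One plausible reading of "collaborative value" can be sketched as blending an author's interest vector with co-authors' vectors weighted by collaboration strength, then ranking documents by cosine similarity; the weighting scheme and vectors here are assumptions, not the paper's formula.

```python
# Co-authorship-weighted interest blending for document ranking.
import numpy as np

def collaborative_score(author_vec, coauthor_vecs, weights, doc_vec):
    """Blend an author's interest vector with co-authors' vectors
    (weighted by collaboration strength) and score a document by
    cosine similarity to the blended profile."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    blended = author_vec + sum(w * v for w, v in zip(weights, coauthor_vecs))
    return cos(blended, doc_vec)

author = np.array([1.0, 0.2, 0.0])        # e.g. topics: fMRI, EEG, NLP
coauthors = [np.array([0.4, 0.9, 0.1])]
print(collaborative_score(author, coauthors, [0.5],
                          np.array([0.8, 0.6, 0.0])))
```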

Research on a Blockchain-Based Medical Data Management Model

Medical data plays an important role in the government regulation of resources, in scientific research, and in the precise treatment delivered by medical staff. Because each hospital uses a different data management system, it is difficult to exchange data among them, resulting in a waste of medical resources. In this paper, a medical data management model based on the blockchain is proposed, which takes advantage of the blockchain's characteristics of decentralisation, tamper-proofing and realizability. A data-sharing reward mechanism was designed to maximise the benefits of both a medical data producer (MDP) and a medical data miner (MDM) in the process of data sharing, while reducing the risk of leaking a patient's private information. Finally, the reward mechanism was analysed through experiments, which proved the validity and reliability of the blockchain-based medical data management model.

Xudong Cao, Huifen Xu, Yuntao Ma, Bin Xu, Jin Qi
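
The tamper-proofing property that the model relies on can be illustrated with a minimal hash-chained ledger; this sketch omits consensus, signatures, and the paper's reward mechanism.

```python
# Minimal hash-chained ledger illustrating tamper evidence.
import hashlib, json, time

def make_block(record, prev_hash):
    """Link a medical-data record to the previous block by hash."""
    block = {"ts": time.time(), "record": record, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block({"note": "genesis"}, "0" * 64)
b1 = make_block({"patient": "anon-17", "test": "CBC"}, genesis["hash"])

# Any edit to `genesis` changes its hash and breaks the link stored
# in b1, so tampering is detectable by re-walking the chain.
assert b1["prev"] == genesis["hash"]
```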

A Smart Health-Oriented Traditional Chinese Medicine Pharmacy Intelligent Service Platform

With the national emphasis on traditional Chinese medicine treatments and the development of the modern Internet, people are increasingly showing a strong interest in traditional Chinese medicine, leading to the transformation of traditional Chinese medicine enterprises. The optimisation and innovation of the traditional Chinese medicine pharmacy service has become a hot topic. Therefore, this study combines the advantages of traditional Chinese medicine with Internet technology to build a smart health-oriented traditional Chinese medicine pharmacy intelligent service platform. It integrates hospitals, pharmacies, drug decoction centres, distribution centres and other resources, and forms a traditional Chinese medicine decoction, distribution and traceability system. The system realises the informatisation, automation and standardisation of traditional Chinese medicine pharmacy services. In this study, the platform is implemented using the Internet of Things and the Internet in Nanjing Pharmaceutical Co., Ltd. to provide patients with standard modern drug decoction and distribution services, and to monitor and manage the decoction, distribution and traceability processes.

Lei Hua, Yuntao Ma, Xiangyu Meng, Bin Xu, Jin Qi

Deep Learning in Multimodal Medical Image Analysis

Various imaging modalities (CT, MRI, PET, etc.) encompass abundant information that is distinct from and complementary to one another. It is therefore reasonable to combine images from multiple modalities to make a more accurate assessment. Multimodal medical imaging has shown notable achievements in improving clinical accuracy. Deep learning has achieved great success in image recognition and has also shown huge potential for multimodal medical imaging analysis. This paper gives a review of deep learning in multimodal medical imaging analysis, aiming to provide a starting point for people interested in this field and to highlight gaps and challenges of the topic. After an introduction to the basic ideas of deep learning and medical imaging, the state of the art in multimodal medical image analysis is presented, with emphasis on fusion techniques and deep models for feature extraction. Multimodal medical image applications, especially cross-modality related ones, are also summarized.

Yan Xu

Towards Annotation-Free Segmentation of Fluorescently Labeled Cell Membranes in Confocal Microscopy Images

The lack of labeled training data is one of the major challenges in the era of big data and deep learning. Especially for large and complex images, the acquisition of expert annotations becomes infeasible, and although many microscopy images contain repetitive and regular structures, manual annotation remains expensive. To this end, we propose an approach to obtain image slices and corresponding annotations for confocal microscopy images showing fluorescently labeled cell membranes in an automated and unsupervised manner. Due to their regular structure, cell membrane positions are modeled in silico and respective raw images are synthesized by generative deep learning approaches. The resulting synthesized data set is validated based on the authenticity of generated images and its utility for training an existing deep learning segmentation approach. We show that segmentation accuracy nearly reaches state-of-the-art performance for fluorescently labeled cell membranes in A. thaliana, without the expense of manual labeling.

Dennis Eschweiler, Tim Klose, Florian Nicolas Müller-Fouarge, Marcin Kopaczka, Johannes Stegmaier

Edge AIBench: Towards Comprehensive End-to-End Edge Computing Benchmarking

In edge computing scenarios, the distribution of data and collaboration of workloads on different layers are serious concerns for performance, privacy, and security. Edge computing benchmarking must therefore take an end-to-end view, considering all three layers: client-side devices, the edge computing layer, and cloud servers. Unfortunately, previous work ignores this crucial point. This paper presents the BenchCouncil's coordinated effort on edge AI benchmarks, named Edge AIBench. In total, Edge AIBench models four typical application scenarios: ICU Patient Monitor, Surveillance Camera, Smart Home, and Autonomous Vehicle, with a focus on data distribution and workload collaboration across the three layers. Edge AIBench is publicly available from http://www.benchcouncil.org/EdgeAIBench/index.html . We also build an edge computing testbed with a federated learning framework to address performance, privacy, and security issues.

Tianshu Hao, Yunyou Huang, Xu Wen, Wanling Gao, Fan Zhang, Chen Zheng, Lei Wang, Hainan Ye, Kai Hwang, Zujie Ren, Jianfeng Zhan
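
Since the testbed includes a federated learning framework, a minimal sketch of the basic federated averaging (FedAvg) aggregation step may be useful context; Edge AIBench's actual framework details are not specified in the abstract, so this is a generic illustration.

```python
# Generic FedAvg aggregation: dataset-size-weighted mean of weights.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """client_weights: list of per-client models, each a list of
    numpy arrays; client_sizes: local dataset sizes."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[i] for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Three clients, each holding a two-layer model.
clients = [[np.random.rand(4, 4), np.random.rand(4)] for _ in range(3)]
sizes = [100, 300, 600]
global_model = fed_avg(clients, sizes)
print(global_model[0].shape, global_model[1].shape)   # (4, 4) (4,)
```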

A Survey on Deep Learning Benchmarks: Do We Still Need New Ones?

Deep Learning has recently been gaining popularity. From the micro-architecture field to upper-layer end applications, a lot of research work has been proposed in the literature to advance the knowledge of Deep Learning, and Deep Learning benchmarking is one of the hot spots in the community. A number of Deep Learning benchmarks are already available, and new ones keep coming. However, we find that few survey works give an overview of these useful benchmarks, and there is little discussion of what has been done for Deep Learning benchmarking and what is still missing. To fill this gap, this paper provides a survey of multiple high-impact Deep Learning benchmarks with training and inference support. We share some of our observations and discussions on these benchmarks. We believe the community still needs more benchmarks to capture different perspectives, while these benchmarks need a way of converging to a standard.

Qin Zhang, Li Zha, Jian Lin, Dandan Tu, Mingzhe Li, Fan Liang, Ren Wu, Xiaoyi Lu

DCMIX: Generating Mixed Workloads for the Cloud Data Center

To improve system resource utilization, consolidating multiple tenants' workloads on common computing infrastructure is a popular approach in cloud data centers. A typical deployment in a modern cloud data center co-locates online services and offline analytics applications. However, co-location inevitably brings competition between workloads for system resources, such as CPU and memory. As a result of this competition, the user experience (request latency) of the online services cannot be guaranteed. More and more efforts try to assure the latency requirements of services as well as system resource efficiency. Mixing cloud workloads and quantifying the resulting resource competition is a prerequisite for solving this problem. We propose a benchmark suite, DCMIX, as the cloud mixed workload; it covers multiple application fields and different latency requirements, and mixtures of workloads can be generated by specifying a mixed execution sequence. We also propose the system entropy metric, derived from basic system-level performance monitoring metrics, as a quantitative measure of the disturbance caused by system resource competition. Finally, compared with the Service-Standalone mode (executing only the online service workload), we found that the 99th percentile latency of the service workload under the Mixed mode (mixed workload execution) increased 3.5 times, and node resource utilization under that mode increased 10 times. This implies that mixed workloads can reflect the mixed deployment scene of a cloud data center. Furthermore, the system entropy of the mixed deployment mode was 4 times larger than that of the Service-Standalone mode, which implies that system entropy can reflect the disturbance of system resource competition. We also found that isolation mechanisms have some effect on mixed workloads, especially the CPU-affinity mechanism.

Xingwang Xiong, Lei Wang, Wanling Gao, Rui Ren, Ke Liu, Chen Zheng, Yu Wen, Yi Liang
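
The abstract does not spell out the entropy formula, so the sketch below shows one plausible interpretation: summing Shannon entropies of sampled performance-monitor metrics, so that a more disturbed system yields a higher value. Treat the formula as an assumption, not the paper's definition.

```python
# One plausible "system entropy" over monitored metric time series.
import numpy as np

def system_entropy(metric_samples, bins=16):
    """Sum of Shannon entropies of basic performance-monitor metrics
    (CPU, memory, I/O, ...) sampled over a run."""
    total = 0.0
    for series in metric_samples:          # one 1-D array per metric
        hist, _ = np.histogram(series, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        total += float(-(p * np.log2(p)).sum())
    return total

quiet = [np.random.normal(50, 1, 1000)]    # stable CPU utilisation
mixed = [np.random.normal(50, 15, 1000)]   # contended, fluctuating
print(system_entropy(quiet) < system_entropy(mixed))   # True (typically)
```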

An Open Source Cloud-Based NoSQL and NewSQL Database Benchmarking Platform for IoT Data

The Internet of Things (IoT) is continually expanding, and the information transmitted through IoT is often large-scale in both volume and velocity. With its evolution, IoT raises new challenges for the throughput and scalability of the software and databases working with it, which is why traditional techniques for data management and database operations cannot meet these challenges. We need efficient database systems that can handle, store, and retrieve continuous, high-speed, large-volume data, perform various database operations, and generate quick results. Recent developments in database technologies such as NoSQL and NewSQL provide promising solutions for IoT. This paper proposes an extensible, cloud-based, open-source benchmarking framework for how these databases work with IoT data. Using the framework, we compare the performance of the VoltDB NewSQL and MongoDB NoSQL database systems on IoT data injection, transactional operations, and analytical operations.

Arjun Pandya, Chaitanya Kulkarni, Kunal Mali, Jianwu Wang
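
A minimal sketch of the data-injection part of such a benchmark, timing batched inserts against MongoDB with pymongo, is shown below; the database endpoint, document schema, and batch sizes are assumptions.

```python
# Time bulk IoT-style inserts into MongoDB (records per second).
import time
from pymongo import MongoClient

def bench_injection(uri, n=10_000, batch=1_000):
    """Insert n synthetic sensor readings in batches and report
    throughput; 'iotbench.readings' is an assumed namespace."""
    coll = MongoClient(uri).iotbench.readings
    docs = [{"sensor": i % 64, "value": i * 0.1, "ts": time.time()}
            for i in range(n)]
    start = time.perf_counter()
    for i in range(0, n, batch):
        coll.insert_many(docs[i:i + batch])
    elapsed = time.perf_counter() - start
    return n / elapsed

# Requires a running server, e.g.:
# print(bench_injection("mongodb://localhost:27017"), "inserts/s")
```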

Power Characterization of Memory Intensive Applications: Analysis and Implications

DRAM is a significant source of server power consumption, especially when the server runs memory intensive applications. Current power-aware scheduling assumes that DRAM is as energy proportional as other components. However, the non-energy proportionality of DRAM significantly affects the power and energy consumption of the whole server system when running memory intensive applications. Good knowledge of server power characteristics under memory intensive workloads can therefore enable better workload placement with reduced power consumption. In this paper, we investigate the power characteristics of memory intensive applications on real rack servers of different generations. Through comprehensive analysis we find that (1) server power consumption changes with workload intensity and the number of concurrent execution threads, yet fully utilized memory systems are not the most energy efficient; (2) the number of powered memory modules, i.e. the installed memory capacity per processor core, has a significant impact on the application's performance and server power consumption even if the memory system is not fully utilized; and (3) memory utilization is not always a good indicator of server power consumption when running memory intensive applications. Our experiments show that hardware configuration, workload type, and the number of concurrently running threads have a significant impact on a server's energy efficiency when running memory intensive applications. Our findings provide useful insights and guidance to system designers, as well as data center operators, for energy-efficiency-aware job scheduling and power reduction.

Yeliang Qiu, Congfeng Jiang, Tiantian Fan, Yumei Wang, Liangbin Zhang, Jian Wan, Weisong Shi
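
On Linux servers with Intel RAPL support, DRAM energy for a workload can be sampled from powercap counters roughly as follows; the zone path varies by platform (verify against the zone's name file), counter wrap-around is ignored, and the measurement method is an assumption about how such characterization is typically done, not the authors' exact setup.

```python
# Sample DRAM energy around a workload via Linux powercap (RAPL).
import time

# Typical path for the DRAM subzone of package 0; layout varies.
DRAM_ZONE = "/sys/class/powercap/intel-rapl:0:0/energy_uj"

def dram_energy_joules(workload, *args):
    """Read the cumulative DRAM energy counter before and after."""
    with open(DRAM_ZONE) as f:
        before = int(f.read())
    workload(*args)
    with open(DRAM_ZONE) as f:
        after = int(f.read())
    return (after - before) / 1e6   # microjoules -> joules

def memory_hog():
    data = bytearray(512 * 1024 * 1024)   # touch 512 MiB of memory
    for i in range(0, len(data), 4096):
        data[i] = 1

# Requires root and RAPL support, e.g.:
# print(dram_energy_joules(memory_hog), "J")
```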

Machine-Learning Based Spark and Hadoop Workload Classification Using Container Performance Patterns

Big data Hadoop and Spark applications are deployed on infrastructure managed by resource managers such as Apache YARN, Mesos, and Kubernetes, and run in constructs called containers. These applications often require extensive manual tuning to achieve acceptable levels of performance. While there have been several promising attempts to develop automatic tuning systems, none are currently robust enough to handle realistic workload conditions. Big data workload analysis research performed to date has focused mostly on system-level parameters, such as CPU and memory utilization, rather than higher-level container metrics. In this paper we present the first detailed experimental analysis of container performance metrics in Hadoop and Spark workloads. We demonstrate that big data workloads show unique patterns of container creation, completion, response-time and relative standard deviation of response-time. Based on these observations, we built a machine-learning-based workload classifier with a workload classification accuracy of 83% and a workload change detection accuracy of 74%. Our observed experimental results are an important step towards developing automatically tuned, fully autonomous cloud infrastructure for big data analytics.

Mikhail Genkin, Frank Dehne, Pablo Navarro, Siyu Zhou
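
A minimal sketch of the classification step, container-level features fed to an off-the-shelf classifier, is given below; the feature set mirrors the metrics named in the abstract, but the data, labels, and model configuration are illustrative assumptions.

```python
# Workload classification from container-level performance features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features in the spirit of the paper: container creation rate,
# completion rate, mean response time, and response-time RSD.
rng = np.random.default_rng(0)
X = rng.random((200, 4))             # synthetic stand-in data
y = rng.integers(0, 3, 200)          # e.g. 0=Spark SQL, 1=MLlib, 2=HDFS I/O

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
new_window = rng.random((1, 4))      # features from one monitoring window
print(clf.predict(new_window))
```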

AIBench: Towards Scalable and Comprehensive Datacenter AI Benchmarking

AI benchmarking provides yardsticks for benchmarking, measuring, and evaluating innovative AI algorithms, architectures, and systems. Coordinated by BenchCouncil, this paper presents our joint research and engineering efforts with several academic and industrial partners on the datacenter AI benchmark suite AIBench. The benchmarks are publicly available from http://www.benchcouncil.org/AIBench/index.html . Presently, AIBench covers 16 problem domains: image classification, image generation, text-to-text translation, image-to-text, image-to-image, speech-to-text, face embedding, 3D face recognition, object detection, video prediction, image compression, recommendation, 3D object reconstruction, text summarization, spatial transformer, and learning to rank, plus two end-to-end application AI benchmarks. The AI benchmark suites for high performance computing (HPC), IoT, and Edge are also released on the BenchCouncil website. This is by far the most comprehensive AI benchmarking research and engineering effort.

Wanling Gao, Chunjie Luo, Lei Wang, Xingwang Xiong, Jianan Chen, Tianshu Hao, Zihan Jiang, Fanda Fan, Mengjia Du, Yunyou Huang, Fan Zhang, Xu Wen, Chen Zheng, Xiwen He, Jiahui Dai, Hainan Ye, Zheng Cao, Zhen Jia, Kent Zhan, Haoning Tang, Daoyi Zheng, Biwei Xie, Wei Li, Xiaoyu Wang, Jianfeng Zhan