
Business Intelligence

Further book chapters


This chapter contains the entries for the letter B, from Backend as a Service to By-laws, of the Gabler Kompakt-Lexikon Unternehmensgründung.

Tobias Kollmann


This chapter contains the entries for the letter D, from Dachfonds to Dynamic Pricing, of the Gabler Kompakt-Lexikon Unternehmensgründung.

Tobias Kollmann


This chapter contains the entries for the letter P, from Page Impression to Put/Call-Option, of the Gabler Kompakt-Lexikon Unternehmensgründung.

Tobias Kollmann

Apriori-Backed Fuzzy Unification and Statistical Inference in Feature Reduction: An Application in Prognosis of Autism in Toddlers

Weak Artificial Intelligence (AI) allows the application of machine intelligence in modern health information technology to support medical professionals in bridging physical/psychological observations with clinical knowledge, thus generating diagnostic decisions. Autism, a highly variable neurodevelopmental condition marked by social impairments, reveals symptoms during infancy with no abatement over time due to comorbidities. Genetic, behavioral, and neurological factors play roles in the making of the disease, and this constitutes an ideal pattern recognition task. In this research, the Autism Screening Data (ASD) for toddlers was first analyzed exploratorily to hypothesize impactful features, which were further condensed and inferentially pruned. An interesting application of the business intelligence algorithm Apriori was made on transactions consisting of ten features, constituting a novel preprocessing step derived from market basket analysis. The co-occurring features were fuzzily modeled into a single feature whose membership function evaluates the degree to which a toddler can be called autistic, thus paving the way to the first optimized Neural Network (NN). Features were further eliminated based on statistical t-tests and Chi-squared tests, retaining only features with p-values < 0.05, giving rise to the second and final optimized model. The research showed that the unreduced 16-feature model and the optimized 5-feature model were equivalent in terms of maximum test accuracy, 99.68%, with lower computation in the optimized scheme. The paper follows a 'hard (EDA, inferential statistics) + soft (fuzzy logic) + hard (forward propagation) + soft (backpropagation)' pipeline, and similar systems can be used for similar prognostic problems.
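The statistical pruning step described above can be sketched in a few lines of Python. The toy screening scores, feature names, and the use of the large-sample critical value 1.96 as a stand-in for an exact p < 0.05 test are illustrative assumptions, not the paper's actual dataset or procedure:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Toy screening scores for two groups (hypothetical, not the ASD dataset).
autistic     = {"q1": [3, 4, 4, 5, 4, 5, 3, 4], "q2": [2, 3, 2, 3, 3, 2, 3, 2]}
non_autistic = {"q1": [1, 2, 1, 2, 2, 1, 2, 1], "q2": [2, 3, 3, 2, 2, 3, 2, 3]}

# Keep a feature only if |t| exceeds ~1.96, the two-sided 5% critical
# value in the large-sample approximation (a proxy for p < 0.05).
kept = [f for f in autistic
        if abs(welch_t(autistic[f], non_autistic[f])) > 1.96]
print(kept)  # → ['q1']: q1 separates the groups, q2 does not
```

The same filter-then-retrain pattern applies with any test statistic; the paper additionally uses Chi-squared tests for categorical features.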

Shithi Maitra, Nasrin Akter, Afrina Zahan Mithila, Tonmoy Hossain, Mohammad Shafiul Alam

1. Organisational Buying: Accepted Wisdom

Upon completion of the chapter, you should be able to:

Daniel D Prior

12. Information Technology Developments and Organisational Buying

Upon completion of the chapter, you should be able to:

Daniel D Prior

Marketing and Financial Services in the Age of Artificial Intelligence

Marketing is changing rapidly with the advancements in AI systems. AI presents several opportunities, such as gaining insights, hyper-personalization, enhancing the customer experience, providing customers with better service, reducing operational costs, and increasing efficiency. Artificial intelligence has become a game changer for marketers, as it has for financial services providers. Therefore, it is paramount to build a good understanding of AI in the marketing context and to cover the fundamentals of AI usage in financial services, as AI has emerged as a significant competitive edge in financial marketing in recent years. This study aims to explore AI and marketing on a theoretical basis and to explain the issue with a holistic approach.

Ayşen Akyüz, Korhan Mavnacıoğlu

Chapter 10. Social Impact Assessment: Measurability and Data Management

The growing interest in the issues of sustainability and social innovation opens new opportunities to explore the great potential of technological innovation in favour of social and environmental impact objectives. However, the scientific community dealing with these issues faces new challenges in making the achieved results measurable. This research aims to verify the possibility of overcoming the limits of social impact measurability by integrating social innovation and digital innovation; more specifically, by gathering social impact measurement data and applying business intelligence methods to construct a data warehouse containing a benchmark on the impact chain (Clark 2004). The results show the great potential of the information that can be generated by digital technologies to make social innovation processes more measurable, reliable, and scalable.

Luigi Corvo, Lavinia Pastore

Data-Driven Organizational Structure Optimization: Variable-Scale Clustering

With the continuous improvement of external data acquisition ability and computing power, data-driven optimization of organizational structure is becoming an emerging technique for enterprises to improve business performance and control management costs. This paper focuses on the management scale level discovery problem in the optimization of enterprise organizational structure. First, according to scale transformation theory, the scale level of a multi-scale dataset is defined. Then, a scale level discovery method based on variable-scale clustering (SLD-VSC) is proposed. After management objectives are determined, SLD-VSC is able to recognize the optimal management scale level and the scale characteristics of the management object clusters distributed across different management scale levels. Numerical experimental results illustrate that the proposed SLD-VSC can support enterprises in improving their organizational structure by identifying management scale levels from business data.

Ai Wang, Xuedong Gao

8. Implementing Knowledge Management

This chapter organizes the implementation of knowledge management around five guiding questions. The reader is introduced to the knowledge market concept with its three elements (framework conditions, rules of the game and actors, and processes/structures) with which a knowledge-oriented company can be designed. Many practical examples provide concrete implementation recommendations, and the roles and tasks of knowledge managers are described. In addition, an overall concept for introducing knowledge management, oriented along the knowledge staircase, is presented. Four introduction paths and a twelve-point program pave the way to the knowledge-oriented company.

Klaus North

Chapter 9. Enabling Technologies and Architecture for 5G-Enabled IoT

With the arrival of 3G, the world witnessed the era of the internet boom: surfing became easy on portable devices. When 4G was deployed, it changed everything; uploading and downloading of data became effortless, social media became a habit of retail users, and virtual interaction reached a new height. Now 5G promises to widen internet accessibility in diverse applications worldwide, but many challenges to its successful implementation remain. Its centralized architecture in particular gives cause for concern regarding numerical overhead. Thus, there is a need for a viable distributed access-control framework for Device-to-Device (D2D) communication across the many modern components of IoT-enabled, power-driven automation. The open issues and difficulties of blockchain-focused 5G-enabled IoT for industrial automation are also analyzed in this chapter. Finally, a comparison of the current proposals with respect to different criteria is provided, allowing end users to choose among the proposals on their merits. The chapter describes enabling technologies and architecture for 5G-enabled IoT, presenting a taxonomy and decentralized architectures, and discusses integrating IoE and artificial intelligence with blockchain-based 5G-enabled IoT, which will further help communication and networking in 5G-enabled IoT. It also covers advanced data analytics for cloud-hosted storage in 5G-enabled IoT. The existing proposals suggest that blockchain can transform a large share of current and future industrial applications in major segments by providing fine-grained decentralized access management. 5G, IoT, and blockchain all affect and need one another to prosper in this globalized world.

Parveen Mor, Shalini Bhaskar Bajaj

Chapter 5. Developing a Protective Security Strategy

The origins of "strategy" can be traced back to 1810, meaning the "art of a general" and linked with the Greek strategia, "office or command of a general," from strategos, "general, commander of an army," also the title of various civil officials and magistrates, from stratos, "multitude, army, expedition, encamped army." The term first came into nonmilitary use in 1887.

Jim Seaman

An Overview of Complex Enterprise Systems Engineering: Evolution and Challenges

Enterprise Systems Engineering (ESE) is the application of systems engineering principles, concepts, and methods to the planning, design, improvement, and operation of an enterprise. The development of an enterprise poses new challenges for Systems Engineering (SE) in addressing enterprise architecture, strategic technical planning, enterprise analysis, and organizational behavior. In this paper, we first summarize the evolution of technology product development, discuss the concepts of Enterprise Systems (ES), and review recent research on Enterprise Systems Engineering (ESE) with Enterprise Architecture (EA) and Enterprise Ontology (EO). With the goal of successfully integrating technology and human beings, we then propose an updated ontology of the EA framework, highlight the new trend of Model-Based Enterprise (MBE) and the ES "Vee" model, and present challenges for future research on ESE posed by ES requirements, scope, synthesis, model-based design, integration, and verification and validation.

Xinguo Zhang, Chen Wang, Lefei Li, Pidong Wang

3. Theoretical Considerations of Decisions

What is my decision about, and with which method do I best make it? Some decisions can be made well using heuristics, others through knowledge-based, multi-criteria decision-making. Critically questioning statistical studies helps avoid wrong decisions.

Sebastian Pioch

Chapter 4. Artificial Intelligence

Artificial intelligence (AI) is the most versatile digital technology introduced in this book and among the most frequently used buzzwords in media today. It is often used in conjunction with the related terms machine learning, neural networks, big data, and deep learning, which we will discuss in this chapter, too. We will see that artificial intelligence offers a wide range of applications across all sectors of modern business and society.

Volker Lang

Chapter 1. Digitalization and Digital Transformation

Digitalization is a technological trend that is reshaping all sectors of our industry and society today. It is considered a major and inexorable driving force of innovation and disruption that challenges private and public organizations equally. With all economic and societal sectors being affected, the digital economy is very dynamic and increasingly competitive. It empowers new startup ventures – backed by many billion dollars of venture capital – to create novel value propositions by leveraging digital technologies. Their highly scalable, data-driven, and software-centric operating models increasingly collide with incumbent companies and pose an existential threat to their business. Just think about Apple's iOS and Google's Android smartphones, for example. Built on a consistent digital platform, both companies attracted ever-expanding ecosystems of third-party app developers that ultimately caused Nokia to tumble from a position of phone industry dominance into irrelevance. This story is threatening to repeat everywhere across the economy: The cloud computing services of Amazon, Microsoft, and Google are challenging traditional software and hardware providers, the online marketplaces of Amazon and Alibaba are replacing traditional retailers and challenging companies like Walmart, and the on-demand video services of Netflix and Hulu are about to disrupt traditional pay TV providers. Another very popular example is online booking platforms such as Airbnb, which leverage digital technologies to simplify booking and offer personalized and individually tailored travel experiences while disrupting the business of traditional hotel chains including Marriott, Hilton, and Hyatt.
In order to capture the benefits of digital technologies, established organizations – independently of whether they operate in the public or private sector – are forced to incorporate them into their own ecosystem to advance their product and service portfolios through an organizational change process that is commonly referred to as digital transformation. This process of technology adaptation is particularly challenging for companies that were – in contrast to Amazon, Google, Microsoft, and others – not “born digital” since those companies need to undergo far-reaching business transitions that may well overturn established job designs and internal business processes and require novel ways of thinking and collaboration.

Volker Lang

An Introduction to: Legal-Economic Institutions, Entrepreneurship, and Management: Perspectives on the Dynamics of Institutional Change from Emerging Markets

This introduction serves as an overview of Legal-Economic Institutions, Entrepreneurship, and Management: Perspectives on the Dynamics of Institutional Change from Emerging Markets. The emphasis is mainly on legal-economic institutions and the role of management and entrepreneurship in institutional change in emerging market economies. It describes the contents of the book and the contributed chapters, which reflect research and analysis undertaken by a number of outstanding researchers, experts, and academics who are educating, conducting research, and engaged in addressing and discussing the most recent issues in the interdisciplinary area of institutional change and development. Moreover, the book can contribute to a better comprehension of the corresponding aspects and evidence of institutional change and development dynamics and satisfy scholarly and intellectual interests.

Nezameddin Faghih, Ali Hussein Samadi

The Role of Modern Technologies on Entrepreneurship Dynamics Across Efficiency and Innovation-Driven Countries

The present research paper focuses on global trends in modern technologies (such as blockchain, mechatronics, IT, artificial intelligence, and augmented or virtual reality) that affect entrepreneurial activities in the short and long run. Research on technology's effects on business development is gaining momentum. To examine the role of modern technologies in entrepreneurship dynamics in high-income countries, we conducted semi-structured qualitative interviews with 16 entrepreneurs from 4 countries (4 entrepreneurs per country, of which 8 came from countries that entered the EU in 2004 [Lithuania and Malta] and 8 from innovation-driven Canada and South Korea). We backed the conceptual matrix of modern technologies' effects on business with the GEM data for South Korea and Canada and paired GEM countries to Lithuania and Malta (Poland and Latvia for Lithuania, and Cyprus for Malta). The purpose of the research is to examine the role of modern technologies in entrepreneurship dynamics in high-income countries (in the efficiency- and innovation-driven categories). The research question is how to leverage the economic and social value-added of entrepreneurship activities via modern technologies, create synergy among stakeholders, and reach business sustainability. Our methodology combines primary and secondary data analysis: the literature review and secondary GEM 2018/2019 data were supported by primary, qualitative, semi-structured interview results with technology-driven entrepreneurship experts from four high-income countries (Lithuania, Malta, Canada, and South Korea), which backed the conceptual model created after the scientific literature review and GEM 2018/2019 data analysis.

Mindaugas Laužikas, Aistė Miliūtė

Chapter 8. Environmental Management Systems (EMS)

In 2016, the UN adopted 17 Sustainable Development Goals, which are to be achieved by 2030.

Johannes Kals

Chapter 23. The Online Marketing of Tomorrow

Marketing automation, marketing suites, and cross-channel strategies

Many current developments indicate that marketing in general, and online marketing in particular, will change substantially in the coming years. For some time now, buzzwords such as "digital transformation," "the Internet of Things," and "Industry 4.0" have been circulating in the media. This naturally influences decision-makers in companies as well. Digital change currently occupies the entire economy; no industry, organization, or institution can shut itself off from it. The world is becoming ever more global, more mobile, more connected, faster, and thus more complex. The more complex matters and processes become, the greater the advantages of steering and optimizing them with technical tools. In other words: the technologization of marketing can no longer be stopped.

Erwin Lammenett

Chapter 4. Business Systems and Benchmarks in E-Commerce

The design of the business system plays a key role in online retail. It is also the basis for the channel excellence that distinguishes successful online retailers. These retailers are able to set the benchmark with their e-commerce offerings and exploit all possibilities of modern interaction. In total, eight central success factors must be observed for web excellence in B2C. A growing challenge, however, is the sustainability of these success factors, since competitors adapt ever more quickly.

Gerrit Heinemann

6. Data Science at OTTO

Forecasts based on machine learning methods have a long tradition in German mail-order retail. Almost simultaneously with the arrival of desktop computers in offices in the early 1990s, distance retailers such as OTTO began using the newly available computing capacity to optimize their own business and create added value for their customers. Especially in marketing, new paths were taken with the help of the emerging discipline of data mining, and some of the approaches shaped in that era are still applied in similar form today. The development of big data technologies after 2010, however, drastically improved the possibilities for processing data with artificial intelligence. Today, data science services are used in virtually all areas of the company in which data constitute an important basis for planning and control, and the possibilities are far from exhausted. We expect that company processes will be automated with the help of algorithms to a much greater extent in the coming years.

Timo Christophersen, Juri Pärn

7. The Intelligent Enterprise: Efficient Processes with Artificial Intelligence from SAP – How Companies Can Meet the High Expectations Placed on AI

The use of AI is already bringing pioneers in Germany above-average returns and growth rates, and these companies demonstrate the importance of AI for competitiveness. A major hurdle is still the step from strategy to concrete implementation. This chapter examines these hurdles and uses concrete production examples to show the added value already achieved along the value chain. Based on SAP's AI strategy, the chapter closes with success factors and recommendations for the successful use of AI in companies.

Susanne Vollhardt, Karsten Schmidt, Sean Kask, Markus Noga

Chapter 12. Marketing Intelligence: Between Key Performance and Creativity – Between Digitalization, Digitality, and Digitalism

Marketing intelligence meets marketing practice on the one hand as a statistical-analytical driver of performance marketing and on the other as a driver of creative marketing. This contribution asks whether performance marketing propagates an analytical-methodological optimization potential whose proof is not yet possible. After a framing characterization of the terms digitalization, digitality, and digitalism, the contribution describes a phase of growing digitality. It is shaped by performance marketing with its paradigm of fulfilling key figures, and by the popularity of social media with content marketing and the inbound paradigm driven by it. This polarizing juxtaposition is meant to highlight a methodological split and to warn marketing practice that marketing intelligence in the form of performance marketing is at best in an early phase.

Jan Lies

Towards Expectation-Maximization by SQL in RDBMS

Integrating machine learning techniques into RDBMSs is an important task, since many real applications require modeling (e.g., business intelligence, strategic analysis) as well as querying data in RDBMSs. Without integration, one needs to export the data from the RDBMS to build a model using specialized ML toolkits and frameworks, and then import the trained model back into the RDBMS for further querying. Such a process is undesirable, since it is time-consuming and must be repeated whenever the data changes. In this paper, we provide an SQL solution that has the potential to support different ML models in RDBMSs. We study how to support unsupervised probabilistic modeling, which has a wide range of applications in clustering, density estimation, and data summarization, and focus on the Expectation-Maximization (EM) algorithm, a general technique for finding maximum likelihood estimators. To train a model by EM, the model parameters are updated by an E-step and an M-step in a while-loop, iterating until the process converges to a level controlled by some threshold or a certain number of iterations is reached. To support EM in RDBMSs, we show our solutions to matrix/vector representation in RDBMSs, the relational algebra operations supporting the linear algebra operations required by EM, parameter update by relational algebra, and the support of a while-loop by SQL recursion. It is important to note that SQL '99 recursion cannot be used to handle such a while-loop, since the M-step is non-monotonic. In addition, with a model trained by an EM algorithm, we further design an automatic in-database model maintenance mechanism to maintain the model when the underlying training data changes. We have conducted experimental studies and report our findings in this paper.
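The E-step/M-step while-loop that the paper maps onto SQL recursion can be sketched in ordinary Python for a one-dimensional two-component Gaussian mixture. The synthetic data, initialization scheme, and convergence threshold are illustrative assumptions, not the paper's setup:

```python
import math

def em_gmm_1d(xs, iters=50, tol=1e-6):
    """EM for a two-component 1-D Gaussian mixture."""
    mu = [min(xs), max(xs)]            # crude initial means
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    prev_ll = -math.inf
    for _ in range(iters):             # the while-loop the paper expresses in SQL
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances from responsibilities
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
        # stop when the log-likelihood improvement falls below the threshold
        ll = sum(math.log(sum(pi[k] / math.sqrt(2 * math.pi * var[k])
                              * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                              for k in (0, 1))) for x in xs)
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return mu, var, pi

# Two well-separated clusters around 0 and 5.
data = [-0.3, 0.1, 0.2, -0.1, 0.0, 4.8, 5.1, 5.2, 4.9, 5.0]
mu, var, pi = em_gmm_1d(data)
```

The non-monotonic part is the M-step overwriting `mu`, `var`, and `pi` each pass, which is exactly what plain SQL '99 recursion cannot express.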

Kangfei Zhao, Jeffrey Xu Yu, Yu Rong, Ming Liao, Junzhou Huang

Performance Comparison of Tree-Based Machine Learning Classifiers for Web Usage Mining

Web usage mining plays a very important role in finding new patterns in web server log files. It is the subcategory of web mining that is used to mine data over the web. In this paper, the authors classify the server log file, identify new patterns in the data, and analyze the data with tree-based classification algorithms. The authors apply the tree algorithms J48, RandomTree, RandomForest, and REPTree to classify the weblog data and analyze the results. The paper summarizes the results and compares the classification performance of the tree-based algorithms to determine which gives the better result.

Ruchi Mittal, Varun Malik, Vikas Rattan, Deepika Jhamb

Chapter 2. Selecting a VPS Provider

Often when shopping for any product, we are somewhat bound by a physical or well-delineated space. If you’re purchasing a car, you go to a car lot or find a website that has the word “auto” or “car” in the name. If you’re looking to hire someone to perform a service, you typically have various physical businesses you can visit (e.g., a locksmith, a temporary worker agency) or easy-to-define search terms that you can use on your search engine of choice (e.g., “unlock my car” or “house cleaning”). With selecting a VPS provider, the waters are a bit muddier as services can be offered to a variety of types of clients (e.g., individuals, small businesses, large corporations) and in many different ways (e.g., packages of services vs. a pay-as-you-go a la carte approach). Further complicating the issue are the concepts that you’ll hear thrown around as you shop – the idea of a Terms of Service (TOS) or a Service Level Agreement. Without having some sort of dictionary to help you, you may find yourself confused and ultimately unable to purchase the product you want or, worse, paying for a product you don’t need. In this chapter, we’ll talk about the various types of providers that you can find, what the words they use in their advertisements mean, how to vet them, and ultimately how to engage with one for services.

Jon Westfall

Chapter 13. Replica Sets and Atlas

So far, we have considered performance tuning singleton MongoDB servers – servers that are not part of a cluster. However, most production MongoDB instances are configured as replica sets, since only this configuration provides sufficient high availability guarantees for modern "always-on" applications.

Guy Harrison, Michael Harrison

AdaptationExplore – A Process for Elicitation, Negotiation, and Documentation of Adaptive Requirements

[Context and motivation] Current and future systems have to operate in complex and dynamic environments. An adaptive system addresses these challenges: it monitors its environment and reacts by changing its behavior. [Question/Problem] Representations of adaptive requirements (e.g., at runtime) and strategies for decision-making have gained a lot of interest in past and current research. Yet, there is a lack of support for eliciting requirements and environmental information for adaptive systems. [Principal ideas/results] We suggest applying creativity techniques to elicit adaptation requirements and using situations to negotiate them (a situation represents the state of the system and its environment at a particular instant of time). [Contributions] In this paper, we introduce AdaptationExplore, a process for the development of adaptive systems, which supports engineers in particular during the early phases. The results of a pilot study, in which 37 Master students applied the process to different cases, are reported. The study provides first positive evidence of the effectiveness and applicability of the process.

Fabian Kneer, Erik Kamsties, Klaus Schmid

Specifying Requirements for Data Collection and Analysis in Data-Driven RE. A Research Preview

[Context and motivation] According to Data-Driven Requirements Engineering (RE), explicit and implicit user feedback can be considered a relevant source of requirements, thus supporting requirements elicitation. [Question/problem] Less attention has been paid so far to the role of implicit feedback in RE tasks, such as requirements validation, and to how to specify what implicit feedback to collect and analyse. [Principal idea/results] We propose an approach that leverages goal-oriented requirements modelling combined with Goal-Question-Metric. We explore the applicability of the approach in an industrial project in which a platform for online training was adapted to realise a citizen information service used by hundreds of people during the COVID-19 pandemic. [Contributions] Our contribution is twofold: (i) we present our approach towards a systematic definition of requirements for data collection and analysis, in support of software requirements validation and evolution; (ii) we discuss our ideas using concrete examples from an industrial case study and formulate a research question that will be addressed by conducting experiments as part of our research.

Maurizio Astegher, Paolo Busetta, Anna Perini, Angelo Susi

2. Data Mining Methods for Big Data Analytics

Never before have such vast amounts of data been produced as in recent times. This raises the expectation that interesting information can be found in the petabytes and exabytes of data, if only this enormous volume can be analyzed in a targeted way. Both in science and, increasingly, in practice, methods and technologies are therefore being discussed that can uncover interesting patterns in large datasets and make predictions about future events and circumstances. Many of the methods used for this purpose have long been known under the umbrella term data mining, but have been extended and refined over the years. This contribution aims to present an overview of the essential methods of data analysis and to discuss their basic procedures and potential areas of application.

Peter Gluchowski, Christian Schieder, Peter Chamoni

1. A Tour of Big Data Analytics – Hard & Soft Data Mining

The introductory chapter defines and characterizes various facets of big data analytics and shows which potential benefits arise for business, public administration, and society. After clarifying important terms, the process of mining valuable information and patterns from datasets is explained. Then methods of hard computing, based on classical logic with the two truth values true and false, and of soft computing, with the infinitely many truth values of fuzzy logic, are presented. Using the digital value chain of electronic business, application options for hard and soft data mining are discussed and the corresponding potential benefits for big data analytics are worked out. The outlook calls for a paradigm shift: methods of hard data mining and of soft data mining should be examined equally for big data analytics and, where successful, implemented.

Andreas Meier

3. Digital Analytics in Practice – Developments, Maturity, and Applications of Artificial Intelligence

The professionalization of digital analytics, the automated collection, analysis, and evaluation of web and app data, has increased considerably in recent years as a result of digitalization. The associated possibilities of interacting with customers and understanding their behavior are becoming increasingly important for remaining economically successful. This chapter presents the results of the third digital analytics survey, following those of 2011 and 2016. It shows current trends, the rising maturity level, potential benefits, and AI applications of digital analytics practice, including personalization, price nudging, anomaly detection, predictive analytics, and marketing automation. The biggest challenges are data quality, a lack of knowledge and know-how, and data culture, that is, openness toward data and data-driven processes throughout the company.

Darius Zumstein, Andrea Zelic, Michael Klaas

Process Mining on FHIR - An Open Standards-Based Process Analytics Approach for Healthcare

Process mining has become a research discipline in its own right in recent years, providing ways to analyze business processes based on event logs. In healthcare, the characteristics of organizational and treatment processes, especially regarding heterogeneous data sources, make it hard to apply process mining techniques. This work presents an approach that utilizes established standards for accessing the audit trails of healthcare information systems and provides an automated mapping to an event log format suitable for process mining. It also presents a way to simulate healthcare processes and uses it to validate the approach.
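The automated mapping the abstract describes can be pictured, in a minimal form, as grouping audit records into per-case traces ordered by time. The record fields (`patient`, `action`, `time`) below are illustrative placeholders, not the actual FHIR AuditEvent schema:

```python
# Hypothetical audit-trail records mapped to a minimal process-mining event log
# (case id, activity, timestamp); field names are illustrative, not FHIR AuditEvent.
audit = [
    {"patient": "P1", "action": "admit",     "time": "2021-03-01T08:00"},
    {"patient": "P1", "action": "lab-order", "time": "2021-03-01T09:30"},
    {"patient": "P2", "action": "admit",     "time": "2021-03-01T08:15"},
    {"patient": "P1", "action": "discharge", "time": "2021-03-02T12:00"},
]

def to_event_log(records):
    """Group events by case and sort each trace chronologically."""
    log = {}
    for r in records:
        log.setdefault(r["patient"], []).append((r["time"], r["action"]))
    # ISO-8601 timestamps sort correctly as strings
    return {case: [a for _, a in sorted(events)] for case, events in log.items()}
```

Each resulting trace (e.g. admit → lab-order → discharge) is then suitable input for standard process discovery algorithms.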

Emmanuel Helm, Oliver Krauss, Anna Lin, Andreas Pointner, Andreas Schuler, Josef Küng

How to Catch a Moonbeam: From Model-Ware to Component-Ware Software Product Engineering

Models are an integral part of software systems engineering (SSE). In SSE, modeling is the activity of creating the artifact – the model – according to the method of the modeling approach, and an SSE endeavor comprises multiple modeling techniques supporting multiple models. The recent scientific literature exhibits a plethora of sophisticatedly articulated philosophies, principles, definitions, and theories for models and modeling. However, there is a gap in linking models to software system generation in a way that bypasses programming and allows unskilled programmers to develop software products. Such an endeavor requires organizing software systems engineering as the development of model-ware combined with software product generation from off-the-shelf component-ware. It also requires a systems engineering methodology that is flexible and adapts to the community that practices software product development. This research presents the theory behind the computer-aided method engineering (CAME) approach, which pioneered the engineering of models to generate component-based CASE (computer-aided software engineering) tools – themselves software products – by bypassing programming and code generation, paving the way for unskilled programmers to design and develop software products. It enumerates the achievements of CAME and lays out future directions for truly achieving model-ware to component-ware plug-and-play product engineering as a potential technology for software systems engineering.

Ajantha Dahanayake

6. Voices from the Field – the Interviews

Theoretical considerations alone do not give a complete picture of digitalization in public administration. It therefore makes sense to also capture perceptions, moods, and ideas from the different corners of the administrative world. In the following, people have their say who experience the digital transformation of the public sector first-hand. The nine interviewees range from executives to staff members, from Generation Z to the baby boomers.

Christina Winners

1. Digitalization – and Why It Is So Important

Digitalization can be understood in two ways.

Christina Winners

7. Artificial Intelligence, Big Data, and Cloud Computing

By working through this chapter, you will be able to:

Prof. Dr. Bernd W. Wirtz

9. The Predictive Intelligence Case Studies

In the following section, three different case studies on predictive intelligence are described as examples. These three case studies were selected so as to represent all temporal dimensions of Predictive Intelligence in the most compact form possible. The first case study describes how Predictive Intelligence can be used to optimize short-term inventories and thus the net working capital of an organization. The second shows how precisely business expansion can be designed, planned, and quantified with Predictive Intelligence, and subsequently monitored as part of the continuous process of data-based corporate management. The third illustrates the long-term strategic perspective of Predictive Intelligence and concludes by presenting the application forms of modern predictive intelligence.

Uwe Seebacher

8. The Predictive Intelligence Team

In this chapter, we will take a look at the human resources that are necessary and relevant in the context of predictive intelligence. Different areas of competence are discussed and, based on these, different possible roles and their responsibilities are derived and explained. In this way, a comprehensive picture of future, new areas of knowledge and competence is to be drawn so that managers can ensure and implement sustainable strategic PI workforce management in the course of PI activities. Given that the entire training sector is currently engaged in developing new curricula and courses specifically for these new requirements, first movers must proactively take the HR scepter into their own hands in order to develop the necessary competencies within their own organization in a forward-looking manner.

Uwe Seebacher

7. The Predictive Intelligence TechStack (PITechStack)

In the long term, data-driven corporate management depends on a corresponding Predictive Intelligence TechStack (PITechStack), because only with a well-thought-out and integrated IT infrastructure can a multidimensional, interactive PI system work with big data and artificial intelligence at high performance. In this chapter, the current status as well as trends regarding the PITechStack are therefore critically analyzed and discussed in order to give the reader an overview and guidance on which issues must be considered and which products and solutions are relevant for a successful PITechStack.

Uwe Seebacher

6. The Process Model for Predictive Intelligence

This chapter describes the individual steps by which an organization can develop toward data-based management and subsequently raise it to the level of Predictive Intelligence. The process model for Predictive Intelligence is based on the maturity model for Predictive Intelligence already described in this book as well as on the Self-Assessment for Predictive Intelligence, which is why these two instruments can also be used continuously as a frame of reference and orientation during the entire development process. The process model presented here is based on various projects for the conception and implementation of Predictive Intelligence already realized in practice in various organizations. Against this background, it can be stated that, in general, neither a separate budget nor additional resources are required for the first stages of implementing or applying this process model. Conscious use of organizational resources deepens awareness of the importance of Predictive Intelligence and automatically ensures an implementation speed appropriate for the organization.

Uwe Seebacher

5. The Predictive Intelligence Self-Assessment

In this chapter, the test procedure is described and discussed, on the basis of which both the potential for data-driven management and/or Predictive Intelligence and the respective initial situation in an organization can be assessed in a short time and expressed as a percentage value. The Predictive Intelligence Self-Assessment (PI-SA) is based on the previously described maturity model for predictive intelligence (PIMM) and maps all required dimensions accordingly.

Uwe Seebacher

4. The Predictive Intelligence Maturity Model

This chapter describes the development stages of Predictive Intelligence in organizations. This model forms the basis for the implementation process of Predictive Intelligence in any form of organization. The model was developed based on research work in the area of organization etymology in combination with experiences from different projects in the context of Predictive Intelligence. The Predictive Intelligence Assessment (PIA) presented in the following chapter was also developed on the basis of the Predictive Intelligence maturity model. It can be used initially to evaluate the status quo in an organization, but also the current state of development of Predictive Intelligence (PI). In this way, the PI maturity model, the PI assessment, and the PI procedure model provide a coherent set of instruments for every organization to efficiently and effectively develop toward a data-driven enterprise.

Uwe Seebacher

3. The Predictive Intelligence Ecosystem

In this section, the most important terms in the context of Predictive Intelligence are listed and described in a way that is easy for everyone to understand. This content area is subject to enormous dynamics, so the overview must be considered a snapshot. In the interest of a generally valid overview, the focus is on already established and widely used terms rather than on those used only selectively.

Uwe Seebacher

2. Predictive Intelligence at a Glance

This chapter provides a compact overview and an introductory presentation of the term Predictive Intelligence. It describes how Predictive Intelligence can be established in an organization step by step. To enable readers to analyze the respective initial situation in their organization, the maturity model for Predictive Intelligence is described along with the analysis instrument. At the end of the section, the advantages of Predictive Intelligence (PI) and, above all, its sustainable effects are discussed in compact form.

Uwe Seebacher

Business Intelligence and Social Media Analytics

In this chapter, business intelligence and social media analytics (SMA) are brought into focus to address the strategies, methods, and technologies for managing data analysis processes. The overall goal is to identify differences in categories, tools, and techniques concerning the development of various analytics processes as well as their application and results in cultural heritage businesses. The focus is on how business intelligence and social media analytics support business activities in achieving better results from different perspectives, such as efficiency, interactivity with customers, improvement of internal processes, and wiser information-based decision making. To clarify the theoretical and practical implications for the field, several examples are illustrated of results achieved by firms that have implemented business intelligence and social media analytics in their daily activities.

Tiziana Russo Spena, Marco Tregua, Angelo Ranieri, Francesco Bifulco

Augmented Servicescape: Integrating Physical and Digital Reality

This chapter goes in depth into the analysis of the museum servicescape and discusses it as a new combination of physical and virtual contexts (i.e. mixed reality), including multiple dimensions of visitors’ interactions. The chapter proposes a conceptual model of the augmented museum servicescape to analyse how organisations face the increasing complexity of designing the customer experience in the cultural heritage sector by integrating multisided aspects of digital and physical contexts. The physical, social, and emotional dimensions are integrated and designed to activate users’ roles in an augmented designed environment. Through technologies, the servicescape can better guide users according to their own choices and pace, increase their engagement within the virtual exploration, and improve their understanding of content through direct and augmented experience.

Cristina Caterina Amitrano, Tiziana Russo Spena, Francesco Bifulco


Transformation in the science and technology field brings many changes; one of the biggest challenges is the interaction between multiple skills, tools, and competences. When we talk about the digital transformation, we know that it seeks to produce a better, faster, and more innovative way of pursuing business, social, and economic development. In this book, we discuss how digital transformation is involved in—and changing—the cultural heritage sector. The focus on new technologies implies not merely a matter of data digitization, storage, and use for the elaboration of a new digital strategy, but also how technologies are related to the transformation of cultural heritage sectors and market processes as a whole. There is a need to move to the forefront of cultural heritage efforts to understand and help firms and policymakers respond to the challenges of managing cultural heritage businesses in the new technology era.

Tiziana Russo Spena, Francesco Bifulco

Chapter 33. The Possibility of Using Distance Learning During the Emergency

In the academic year 2010–2011, the Pan-European University Apeiron introduced, alongside classical classroom teaching, a distance learning course known as e-learning. The materials prepared for students who regularly attended classes were the same as for distance learning students, with adapted lectures and other forms of interaction. Since the COVID-19 pandemic arose all over the world, including in Bosnia and Herzegovina, the classical form of teaching has been transformed into distance learning, and the university staff’s experience in providing distance learning has now been applied to all university students. The authors of this paper analyzed the effectiveness of the distance learning system under emergency conditions and the manner and speed with which students who had previously studied in the classical way adjusted to distance learning methods. The most appropriate indicators for measuring the degree of adaptation to distance learning were selected. Data for two semesters of one school year were monitored throughout the university: in the winter semester, classical classes were conducted, while in the summer semester, classes followed the distance learning model. The following indicators were monitored: the number of students’ first accesses to the system by subject, the time spent on the online materials, and other types of possible student-professor and student-student interaction.

Igor Grujić, Vladimir Domazet, Zoran Ž. Avramović

Chapter 4. From Data to Story

Data are the substance of stories. Moreover, the spread of digital channels is producing a new form of storytelling. More than ever, visual elements are becoming the hook and anchor of stories. Especially in a society flooded with stimuli, strong visualizations gain weight. Journalism has shown how stories can be developed from data, and these practices have since arrived in companies. Anyone who wants to turn data into stories needs a team with very different skills. Communications and marketing leaders are therefore well advised to build networks and develop a shared data culture.

Hans-Wilhelm Eckert

Chapter 3. From the Question to the Data

Data sources flow so abundantly that a precise question is essential at the start of every project. In this chapter, I show approaches to how data form the context of stories and help define which target group I address, which topics I set, and why they are relevant at all. The examples illustrate the challenges in developing the question and each provide specific answers to the central communication questions: why we tell something, whom we tell it to, and what we tell.

Hans-Wilhelm Eckert

Chapter 1. The Playing Field: What Is Storytelling with Data?

Humans are seekers of meaning. With their stories, they organize the world, connect with others, and construct relationships. Storytelling is so effective because it is deeply rooted in the structures of our brain. More and more of these stories are now fed by data. Whether data are the new oil, gold, or even plutonium: in the end, it is we humans who ascribe these meanings to data. In storytelling, data play a double role: they are both the content and the context of the message.

Hans-Wilhelm Eckert

5. The Process of Information Systems Theorizing as a Discursive Practice*

Although there has been a growing understanding of theory in the Information Systems (IS) field in recent years, the process of theorizing is rarely addressed with contributions originating from other disciplines and little effort to coherently synthesize them. Moreover, the field’s view of theorizing has traditionally focused on the context of justification with an emphasis on collection and analysis of data in response to a research question with theory often added as an afterthought. To fill this void, we foreground the context of discovery that emphasizes the creative and often serendipitous articulation of theory by emphasizing this important stage of theorizing as a reflective and highly iterative practice. Specifically, we suggest that IS researchers engage in foundational theorizing practices to form the discourse, problematize the phenomenon of interest and leverage paradigms, and deploy generative theorizing practices through analogies, metaphors, myths, and models to develop the IS discourse. To illustrate the detailed workings of these discursive practices, we draw on key examples from IS theorizing.

Nik Rushdi Hassan, Lars Mathiassen, Paul Lowry

8. Pathways to IT-Rich Recontextualized Modifying of Borrowed Theories: Illustrations from IS Strategy*

While indigenous theorizing in IS has clear merits, theory borrowing will not, and should not, cease completely due to its appeal and usefulness. In this chapter, we aim at increasing our understanding of IT-rich recontextualized modifying of borrowed theories. We present a 2 × 2 framework in which we discuss how two recontextualization approaches of specification and distinction help with increasing the IT-richness of borrowed constructs and relationships. In doing so, we use several illustrative examples from IS strategy. The framework can be used by researchers as a tool to explore the multitude of ways in which a theory from another discipline can yield some IT-rich understanding.

Mohammad Moeini, Robert D. Galliers, Boyka Simeonova, Alex Wilson

Business Analytics: Process and Practical Applications

Today, the automation of business processes and devices such as IoT sensors for monitoring and activating services generate massive amounts of raw data; though these data may not look useful in isolation, together they carry domain-specific signatures that are immensely useful for decision making. The problem of deducing strategic information by detecting patterns, analyzing and reasoning over them, and learning business trends is popularly known as business analytics and draws on artificial intelligence and machine learning techniques. This chapter introduces the basic characteristics of business data analytics and presents the types and uses of analytics and standard processes. Further, it includes an approach to designing a recommendation system (with techniques such as content-based filtering, collaborative filtering, and hybrid recommendation methods) and offers a comparative analysis of business analytics processes, their various types, and the choice of recommendation systems.
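Collaborative filtering, one of the recommendation techniques the chapter lists, can be sketched with a small user-based example; the ratings data and function names below are hypothetical, invented purely for illustration:

```python
from math import sqrt

# Hypothetical user-item ratings (user -> {item: rating}).
ratings = {
    "alice": {"A": 5, "B": 3, "C": 4},
    "bob":   {"A": 4, "B": 2, "C": 5},
    "carol": {"A": 1, "B": 5, "D": 4},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = sqrt(sum(u[i] ** 2 for i in common)) * sqrt(sum(v[i] ** 2 for i in common))
    return num / den

def recommend(user, k=1):
    """Rank items the user has not rated, weighted by similarity to other users."""
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for item, r in their.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Content-based filtering would instead score items by their similarity to the user's own profile, and a hybrid method combines both scores.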

Amit Kumar Gupta

Digital Consumption Pattern and Impacts of Social Media: Descriptive Statistical Analysis

Dependence on digital media increased manifold during the COVID pandemic lockdowns across the globe. Most offices and academic institutions started operating in ‘work-from-home’ mode on digital platforms. Even senior citizens, non-working homemakers, and kids spent more time on social (networking and communication) media. The ‘difficult times’ of the COVID pandemic have also shown the world ‘different times’ and the differences in our preferred ways of functioning. The ‘digital consumption pattern’ changed substantially, both in scale and in diversity. The present paper discusses the ‘effectiveness and impact’ of the ‘digital medium’ on the ‘digital life’ of its users (digital consumers), particularly during the extended lockdown. The objective is to discuss issues relating to the effect of extended use of the Internet for academic and professional activities from home during this pandemic. The study is about life in the virtual world, particularly during this extended lockdown. The paper adopts a method of descriptive statistical analysis covering different categories of users in Eastern India. Using a structured questionnaire, descriptive primary data relating to digital consumption were collected from around 1350 respondents from Odisha and its neighbouring states. Results show how different categories prefer and use their favoured social media. The study finds a significant contribution of Internet-based social media (SM) in countering the ‘social isolation’ of senior citizens.

Rabi N. Subudhi

Healthcare Analytics: An Advent to Mitigate the Risks and Impacts of a Pandemic

Healthcare analytics is a broad term for a specific facet of analytics which spans a wide swath of the healthcare industry, providing macro- and micro-level insights on patient data, risk scoring for chronic diseases, hospital management, optimizing costs, expediting diagnosis, and so on. An efficacious way of implementing healthcare analytics in this sector is to combine predictive analysis, data visualization tools, and business suites to obtain valuable information that can both deliver actionable insights to support decisions and reduce variations to optimize utilization. In order to devise a more potent approach to prepare for the next pandemic, clinical data have to be improved to provide more granular information that augments treatment effectiveness and success rates. Confirmatory data analysis combined with predictive modeling can be used to expose unexpected gaps in obtaining the required clinical data, which is vital for the success of this approach. The healthcare analytics domain has the potential to monitor the health of the masses to identify disease trends and can provide enhanced health strategies based on demographics, geography, and socio-economics. Hence, healthcare analytics is indeed an advent to counter the imminent threats of outbreaks such as epidemics and pandemics, provided it receives the necessary course of action to fulfill its promise.

Shubham Kumar

An Implementation of Text Mining Decision Feedback Model Using Hadoop MapReduce

A very large amount of unstructured text data is generated every day on the Internet as well as in real life. Text mining has dramatically lifted the commercial value of these data by pulling out previously unknown potential patterns. Text mining uses algorithms from data mining, statistics, machine learning, and natural language processing for hidden knowledge discovery in unstructured text data. This paper surveys the extensive research done on text mining in recent years. The overall process of text mining is then discussed together with some high-end applications. The entire process is divided into modules for text parsing, text filtering, transformation, clustering, and predictive analytics. A more efficient and sophisticated text mining model with decision feedback is also proposed, which is far more advanced than conventional models, providing better accuracy and addressing broader objectives. The text filtering module is discussed in detail with the implementation of word stemming algorithms such as the Lovins stemmer and the Porter stemmer using MapReduce. The implementation setup uses a single-node Hadoop cluster operating in pseudo-distributed mode. An enhanced implementation technique, Porter stemmer with partitioner (PSP), has also been proposed. A comparative analysis using MapReduce was then performed on the above three algorithms, in which PSP provides better stemming performance than the Lovins and Porter stemmers. Experimental results show that PSP provides 20–25% more stemming capacity than the Lovins stemmer and 3–15% more stemming capacity than the Porter stemmer algorithm.
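Suffix-stripping stemmers of the kind compared in the paper can be illustrated with a simplified, Porter-style first rule set; the full Porter algorithm applies many more measure-conditioned rules, so this is only a sketch:

```python
# Simplified suffix-stripping rules in the spirit of Porter's step 1a.
# The full Porter algorithm has many more steps and conditions; this is a sketch.
RULES = [
    ("sses", "ss"),  # caresses -> caress
    ("ies",  "i"),   # ponies   -> poni
    ("ss",   "ss"),  # caress   -> caress (unchanged)
    ("s",    ""),    # cats     -> cat
]

def stem_1a(word):
    """Apply the first matching suffix rule; leave the word unchanged otherwise."""
    for suffix, repl in RULES:
        if word.endswith(suffix):
            return word[: len(word) - len(suffix)] + repl
    return word
```

In the MapReduce setting the paper describes, a mapper would emit `(stem_1a(word), 1)` pairs and reducers would aggregate counts per stem; the PSP variant additionally controls how stems are partitioned across reducers.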

Swagat Khatai, Siddharth Swarup Rautaray, Swetaleena Sahoo, Manjusha Pandey

Evolution of Sentiment Analysis: Methodologies and Paradigms

With the advent of the digital age, almost everything has come down to a better understanding of data. Natural language processing is equally in pursuit and is among the most researched areas of computer science. After 1980, a major revolution in NLP began with the emergence of machine learning algorithms, enabled by a steady escalation in computational power. Unlike other data, text semantics is more complex, both because of its contextual nature and because language usage evolves daily. While continuous efforts to improve language representation for the interpretation of logical units are still prevalent, traditional, long-established recurrent neural networks, which were supposed to grasp the bidirectional context of language, have been surpassed by attention models in constructing improved embeddings that allow systems to better understand language. Among numerous applications, understanding the sentiment of text has been widespread in fields including, but not limited to, customer reviews, the stock market, elections, healthcare analytics, and online and social media analytics. From binary classification to more challenging cases such as negation handling, sarcasm, toxicity, multiple attitudes, or polarity, this research chapter explores the evolution of sentiment analysis in light of emerging text processing and the transition of text understanding from rule-based to statistical approaches, with a comparison of benchmark performance from state-of-the-art models over various applications and datasets.

Aseer Ahmad Ansari

Exploring Interdisciplinary Data Science Education for Undergraduates: Preliminary Results

This paper reports a systematic literature review on undergraduate data science education followed by semi-structured interviews with two frontier data science educators. Through analyzing the hosting departments, design principles, curriculum objectives, and curriculum design of existing programs, our findings reveal that (1) the data science field is inherently interdisciplinary and requires joint collaborations between various departments. Multi-department administration was one of the solutions to offer interdisciplinary training, but some problems have also been identified in its practical implementation; (2) data science education should emphasize hands-on practice and experiential learning opportunities to prepare students for data analysis and problem-solving in real-world contexts; and (3) although the importance of comprehensive coverage of various disciplines in data science curricula is widely acknowledged, how to achieve an effective balance between various disciplines and how to effectively integrate domain knowledge into the curriculum still remain open questions. Findings of this study can provide insights for the design and development of emerging undergraduate data science programs.

Fanjie Li, Zhiping Xiao, Jeremy Tzi Dong Ng, Xiao Hu

Development of a Decision-Making System for Choosing Software in the Field of Data Management

Understanding the need to apply reliable data management methods requires the company’s management to focus on practical questions. Special attention should be paid to tools that ensure that the company’s data usage goals are achieved. A significant difficulty in implementing a company’s data management practices is caused by the lack of resources or services on the market for selecting software in this area. This paper analyzes the market for technology services covering the main data management knowledge areas. Based on this analysis, we developed a decision support system for selecting software in the field of data management. The system comprises five steps. First, users select the required area of expertise, then mark the functions that must be performed by the software product they need. In the third and fourth steps, the system selects software products using the functional completeness criterion. Finally, users receive the list of software products that meet the requirements for the functions performed. The created solution will allow management interested in data management to get acquainted with the available offers on the market and software product characteristics, and to select the most appropriate functions and the optimal software product for their needs.
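The functional completeness criterion used in the selection steps can be sketched as the share of a user's required functions that each product covers; the catalogue, product names, and function labels below are invented purely for illustration:

```python
# Hypothetical catalogue: product -> set of supported data management functions.
catalogue = {
    "ToolA": {"profiling", "lineage", "catalog"},
    "ToolB": {"profiling", "catalog"},
    "ToolC": {"lineage"},
}

def shortlist(required, min_coverage=1.0):
    """Rank products by functional completeness (share of required functions
    covered) and keep those at or above the coverage threshold."""
    scored = {
        name: len(required & funcs) / len(required)
        for name, funcs in catalogue.items()
    }
    ranked = sorted(scored.items(), key=lambda kv: -kv[1])
    return [name for name, score in ranked if score >= min_coverage]
```

Relaxing `min_coverage` below 1.0 lets users trade completeness for a wider choice of candidate products.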

L. Gadasina, A. Manzhieva

The Interplay of Blockchain and Artificial Intelligence in Logistics – Future Prospects and Potentials

The logistics industry faces numerous challenges, including rising customer demands, changing business areas, and the growing influence of new technologies. Against this background, digitalization is becoming increasingly important in the industry, and innovations such as blockchain and artificial intelligence are being introduced into transportation. The blockchain is a decentralized transaction system that enables secure data exchange without intermediaries.

Alexander Goudz, Yusuf Kücük, Viktor Fuchs

Chapter 10. Cloud Technology

Although there are many aspects to the cloud, the most obvious one to consider is the technical aspect. This has naturally received a lot of attention and is often the focus of analysis and marketing. As a result, there is a plethora of information available about virtually every aspect of cloud technology. The purpose of this chapter is to provide an overview of the main technologies that we meet in the cloud. In the introduction, we saw that it is common to divide the cloud into three types: Infrastructure as a service (IaaS), which provides the most basic technologies; Platform as a Service (PaaS), which contains technologies for developers to create applications; and Software as a Service (SaaS), which offers finished applications to the end user. The emphasis here is on explaining basic structures and functions for the reader to explore in more detail elsewhere.

Anders Lisdorf

Chapter 4. Selected Practical Problems and the Design of Solution Approaches

Many companies proclaim the principle of basing their cost management processes on a pronounced market orientation. A closer look behind these statements, however, shows that this commitment is mainly limited to including the customer dimension, whereas market orientation comprises more dimensions. The problem areas of the classical target costing concept likewise result from the deterministic character of its assumed conditions, the frequently disregarded uncertainties of the environment, and its dominant full-cost character. These problem areas are accompanied by the fact that, as products become more complex and the cost-relevant interdependencies of the overall product system grow, assigning partial responsibility for target cost specifications increasingly loses its motivational effect. Overall, the further development of classical target costing concepts is necessary in order to align them with the changed conditions.

Robert J. Schildmacher

IoT for Diabetes: Smart Monitoring and Control

Nowadays, diabetes has become a very common disease across the globe. People with diabetes cost the healthcare system billions and also cause a great loss of productivity. The root cause of diabetes is still unidentified, and the only way to fight it is to reduce or control its extent. Continuous monitoring of diabetes therefore becomes a core area to be examined. In this paper, we have explored the use of IoT devices to monitor and control diabetes. Smart glucometers in the form of wearable devices can take stock of glucose levels in the blood on a real-time basis. This data is gathered and streamed to multiple systems for further processing. Data analytics and data engineering methods are used to convert the data into useful information, which in turn helps us gain insights. In abnormal situations (glucose level above 400 or below 70 mg/dL), notifications in the form of alerts are sent out to caregivers or doctors, leading to early or timely intervention by a doctor. This timely intervention helps to save costs at both the patient and the payer level.
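The alerting rule the paper describes (notify when a reading falls outside the 70–400 mg/dL band) can be sketched as a simple threshold check; the function name and alert messages below are illustrative, not part of the paper's system:

```python
def glucose_alert(reading_mg_dl):
    """Classify a streamed glucometer reading against the alert thresholds
    stated in the paper: alert above 400 or below 70 mg/dL."""
    if reading_mg_dl > 400:
        return "ALERT: hyperglycemia - notify caregiver"
    if reading_mg_dl < 70:
        return "ALERT: hypoglycemia - notify caregiver"
    return "OK"
```

In the streamed setting, this check would run on each incoming reading before the notification system dispatches alerts to caregivers or doctors.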

Shweta Taneja, Mohit Chandna

Chapter 12. Building a Dataverse Database

Microsoft Dataverse is a premium, cloud-based database. As a data store, it is far superior to SharePoint, and it is Microsoft's preferred database for products in the Power Platform. Dataverse is the database that Microsoft Dynamics 365 uses, and it is the foundation on which model-driven and portal apps are based. Without a Dataverse database, we cannot create these types of apps, and this is the reason why Dataverse is so important.

Tim Leung

Economic and Food Safety: Optimized Inspection Routes Generation

Data-driven decision support systems rely on increasing amounts of information that need to be converted into actionable knowledge in business intelligence processes. The latter have been applied to diverse business areas, including governmental organizations, where they can be used effectively. The Portuguese Food and Economic Safety Authority (ASAE) is one example of such organizations. Over its years of operation, a rich dataset has been collected which can be used to improve its preventive activity in the areas of food safety and economic enforcement. ASAE needs to inspect Economic Operators all over the country, and the efficient and effective generation of optimized and flexible inspection routes is a major concern. The focus of this paper is, thus, the generation of optimized inspection routes, which can then be flexibly adapted towards their operational accomplishment. Each Economic Operator is assigned an inspection utility – an indication of the risk it poses to public health and food safety, to business practices and intellectual property, as well as to security and the environment. Optimal inspection routes are then generated, typically by seeking to maximize the utility gained from inspecting the chosen Economic Operators. The need to incorporate constraints such as Economic Operators' opening hours and multiple departure/arrival spots has led to modeling the problem as a Multi-Depot Periodic Vehicle Routing Problem with Time Windows. Exact and meta-heuristic methods were implemented to solve the problem, and the Genetic Algorithm showed high performance with realistic solutions usable by ASAE inspectors. The hybrid approach combining the Genetic Algorithm with Hill Climbing also proved a good way to enhance solution quality.
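The hill-climbing component of such a hybrid can be illustrated with a minimal sketch: a route is scored by the utility collected before a time budget runs out, and random swaps of visits are kept only when they improve the score. All names and data here are hypothetical; the real ASAE formulation additionally handles time windows, periodicity, and multiple depots:

```python
import random

def route_utility(route, utility, travel_time, time_budget):
    """Sum the utilities of operators visited before the time budget runs out."""
    total, elapsed = 0.0, 0.0
    for prev, cur in zip(route, route[1:]):
        elapsed += travel_time[(prev, cur)]
        if elapsed > time_budget:
            break
        total += utility[cur]
    return total

def hill_climb(route, utility, travel_time, time_budget, iters=1000, seed=0):
    """Improve a route by repeatedly swapping two visits and keeping gains."""
    rng = random.Random(seed)
    best = list(route)
    best_u = route_utility(best, utility, travel_time, time_budget)
    for _ in range(iters):
        i, j = rng.sample(range(1, len(best)), 2)  # keep the depot at index 0
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]
        u = route_utility(cand, utility, travel_time, time_budget)
        if u > best_u:
            best, best_u = cand, u
    return best, best_u
```

In the hybrid scheme described above, a genetic algorithm would supply the starting routes and this local search would polish each of them.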

Telmo Barros, Alexandra Oliveira, Henrique Lopes Cardoso, Luís Paulo Reis, Cristina Caldeira, João Pedro Machado

Artificial Intelligence and Entrepreneurship in Bahrain

Bahrain Economic Vision 2030 is a comprehensive vision for Bahrain's economy, providing a clear image of its continuous development. Moreover, it is reflected in a substantial common goal: providing a good life for residents. The main strategy the government of Bahrain is adopting is shifting reliance from oil wealth to a productive, globally competitive economy. One of the main factors for increasing diversity in the market is the existence of entrepreneurship. Therefore, the Kingdom of Bahrain has lately been encouraging the private sector by creating a favorable environment for investment and providing competitive incentives to make Bahrain ideal for entrepreneurship. Coinciding with the Fourth Industrial Revolution (4IR), one way to increase the efficiency of an entrepreneur's organization is to adopt artificial intelligence, whether in processes or production. The aim of this research is to define the concepts of entrepreneurship and artificial intelligence (AI), and to examine how artificial intelligence can increase the efficiency of entrepreneurship in Bahrain. Furthermore, it presents several case studies of AI and its application in various sectors in the context of business entrepreneurship. Based on academic insights, the major potential challenges faced by entrepreneurs while integrating artificial intelligence (AI) into their venture process are discussed, and solutions to those challenges are proposed.

Khawla Mohamed Khaifa, Allam Hamdan, Bahaaeddin Alareeni

An Enriched Framework for CRM Success Factors Outlining Data Analytics Capabilities’ Dimension

A Case Study from the Retail Industry

The evolution of Big Data has been attracting many researchers who emphasize the growing role of data analytics capabilities (DAC) in enhancing marketing decisions within enterprises. Although various hypotheses and theories have been generated previously, case studies validating them with real-life examples have been scarce. This research paper serves as an exploratory qualitative study, based on a single case study with a retail company established in Lebanon: The United Company for Central Markets SARL (UCCM). The grocery retail industry is a good example because it is changing faster than most other complex industries, making it harder to detect and keep any competitive advantage. The study at hand aims to inductively develop theory further, and reflects UCCM's results by illustrating an enriched conceptual framework for understanding how efficient CRM systems, when integrated with DAC, can enhance marketing decisions during times of crisis management. UCCM's research design and decision-making process allow addressing DAC as a new key dimension for the enriched conceptual framework. UCCM's results also clarify how DAC plays a crucial integrative role in adding value to the three factors of a successful CRM implementation: People, Technology and Process. The enriched conceptual framework proves to be a valuable heuristic tool that highlights the importance of establishing and acquiring a new skill set of competencies combining marketing with technologies and data analytics, especially during times of crisis management.

Roula Jabado, Rim Jallouli

Increasing the Investment Attractiveness of the Financial Reporting Indicators by Applying Innovations

The paper considers approaches to determining the main investment attractiveness indicators according to approved methods. It is established that the investment attractiveness of financial statements in the modern world increasingly depends on innovative, i.e. intangible, assets. The main indicators characterizing investment attractiveness are determined to be those of financial stability, business activity, assets, liquidity of assets and profitability. The purpose of the study is to identify the impact of intellectual property on the level of investment attractiveness of financial statements and to improve approaches to their balance reporting. A methodology for identifying, evaluating and balancing intellectual property assets has been developed. Its effectiveness is established on a simulated balance using the example of the financial statements of the Plant Production Institute nd. a. V. Ya. Yuryev of the National Academy of Agrarian Sciences of Ukraine.

Yu. Kyrylov, N. Stoliarchuk, M. Kisil, H. Khioni, V. Hranovska, R. Korobenko

A Performance Benchmark for Stream Data Storage Systems

Modern business intelligence relies on efficient processing of very large amounts of stream data, such as various event logs and data collected by sensors. To meet the great demand for stream processing, many stream data storage systems have been implemented and widely deployed, such as Kafka, Pulsar and DistributedLog. These systems differ in many aspects, including design objectives, target application scenarios, access semantics, user APIs, and implementation technologies. Each system uses a dedicated tool to evaluate its performance, and different systems measure different performance metrics using different loads. For infrastructure architects, it is important to compare the performance of different systems under diverse loads using the same benchmark. Moreover, for system designers and developers, it is critical to study how different implementation technologies affect performance. However, there is no benchmark tool yet that can evaluate the performance of different systems, and due to their wide diversity, designing such a tool is challenging. In this paper, we present SSBench, a benchmark tool designed for stream data storage systems. SSBench abstracts the data and operations in different systems as "data streams" and "reads/writes" to data streams. By translating stream read/write operations into the specific operations of each system using its own APIs, SSBench can evaluate different systems using the same loads. In addition to measuring simple read/write performance, SSBench also provides several performance measurements specific to stream data, including end-to-end read latency, performance under imbalanced loads and performance of transactional loads. This paper also presents a performance evaluation of four typical systems, Kafka, Pulsar, DistributedLog and ZStream, using SSBench, and discusses the causes of their performance differences from the perspective of their implementation techniques.
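The abstraction described above can be sketched as a minimal backend-agnostic interface: one adapter per system translates stream appends/reads into native API calls, and the harness measures metrics such as end-to-end latency against any adapter. Class and method names here are hypothetical illustrations, not SSBench's actual API:

```python
import time
from abc import ABC, abstractmethod

class StreamStore(ABC):
    """Backend-agnostic view of a stream data storage system.

    A real benchmark would implement this for Kafka, Pulsar, etc.,
    translating append/read into each system's native client calls.
    """
    @abstractmethod
    def append(self, stream: str, record: bytes) -> None: ...

    @abstractmethod
    def read(self, stream: str, offset: int) -> bytes: ...

class InMemoryStore(StreamStore):
    """Trivial in-memory backend used to exercise the harness itself."""
    def __init__(self):
        self._streams = {}

    def append(self, stream, record):
        self._streams.setdefault(stream, []).append(record)

    def read(self, stream, offset):
        return self._streams[stream][offset]

def end_to_end_latency(store: StreamStore, stream: str, record: bytes) -> float:
    """Seconds between appending a record to a fresh stream and reading it back."""
    start = time.monotonic()
    store.append(stream, record)
    store.read(stream, 0)
    return time.monotonic() - start
```

Because every backend is driven through the same two methods, identical load generators can be replayed against each system, which is the core idea the paper attributes to SSBench.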

Siqi Kang, Guangzhong Yao, Sijie Guo, Jin Xiong

Chapter 5. Predictive Analytics Using Social Big Data and Machine Learning

The ever-increasing quality and quantity of data generated from day-to-day business operations, in conjunction with continuously imported related social data, have made traditional statistical approaches inadequate to tackle such data floods. This has driven researchers to design and develop advanced and sophisticated analytics that can be incorporated to gain valuable insights benefiting the business domain. This chapter sheds light on core aspects that lay the foundations for social big data analytics. In particular, the significance of predictive analytics in the context of SBD is discussed, fortified with a framework for SBD predictive analytics. Then, various predictive analytical algorithms are introduced, along with their usage in several important applications and top-tier tools and APIs. A case study on applying predictive analytics to social data is provided, supported with experiments to substantiate the significance and utility of predictive analytics.

Bilal Abu-Salih, Pornpit Wongthongtham, Dengya Zhu, Kit Yan Chan, Amit Rudra

Chapter 3. Credibility Analysis in Social Big Data

The concept of social trust has attracted the attention of information processors/data scientists and information consumers/business firms. One of the main reasons for acquiring the value of SBD is to provide frameworks and methodologies by which the credibility of online social services users can be evaluated. These approaches should be scalable to accommodate large-scale social data. Hence, there is a need for a thorough understanding of social trust to improve and expand the analysis process and the inference of credibility in social big data. Given the exposed environment settings and the few limitations of online social services, the medium allows legitimate and genuine users, as well as spammers and other low-trustworthiness users, to publish and spread their content. This chapter presents an overview of the notion of credibility in the context of SBD. It also lists an array of approaches to measure and evaluate the trustworthiness of users and their content. Finally, a case study is presented that incorporates semantic analysis and machine learning modules to measure and predict users' trustworthiness in numerous domains over different time periods. The evaluation of the conducted experiment validates the applicability of the incorporated machine learning techniques to predict highly trustworthy domain-based users.

Bilal Abu-Salih, Pornpit Wongthongtham, Dengya Zhu, Kit Yan Chan, Amit Rudra

Chapter 4. Semantic Data Discovery from Social Big Data

Due to the large volume of data and information generated by a multitude of social data sources, it is a huge challenge to manage and extract useful knowledge, especially given the different forms of data, streaming data, and the uncertainty and ambiguity of data. Hence, there are still challenges in this area of BD analytics research in capturing, storing, processing, visualising, querying, and manipulating datasets to derive meaningful information specific to an application's domain. This chapter attempts to address this problem by studying semantic analytics and domain knowledge modelling, and the extent to which these technologies can be utilised toward a better understanding of social textual content. In particular, the chapter gives an overview of semantic analysis and domain ontology, followed by shedding light on domain knowledge modelling, inference, semantic storage, and publicly available semantic tools and APIs. The theoretical notion of Knowledge Graphs is also reported and their interlinking with SBD is discussed. The utility of semantic analytics is demonstrated and evaluated through a case study on social data in the context of the politics domain.

Bilal Abu-Salih, Pornpit Wongthongtham, Dengya Zhu, Kit Yan Chan, Amit Rudra

Chapter 1. Social Big Data: An Overview and Applications

The emergence of online social media services has made a qualitative leap and brought profound changes to various aspects of human, cultural, intellectual, and social life. These significant big data tributaries have further transformed business processes by establishing convergent and transparent dialogues between businesses and their customers. Therefore, analysing the flow of social data content is necessary in order to enhance business practices, augment brand awareness, develop insights on target markets, detect and identify positive and negative customer sentiments, etc., thereby achieving the hoped-for added value. This chapter presents an overview of the Social Big Data term and its definition. It also lays the foundation for several applications and analytics that are broadly discussed in this book.

Bilal Abu-Salih, Pornpit Wongthongtham, Dengya Zhu, Kit Yan Chan, Amit Rudra

Chapter 2. Introduction to Big Data Technology

Big data is no longer "all just hype" but is widely applied in nearly all aspects of our businesses, governments, and organizations with the technology stack of AI. Its influence extends far beyond a simple technical innovation and involves all areas of the world. This chapter first gives a historical review of big data, followed by a discussion of its characteristics, i.e. from the 3Vs up to the 10Vs of big data. The chapter then introduces technology stacks for an organization to build a big data application, from infrastructure/platform/ecosystem to constructional units and components. Finally, we provide some big data online resources for reference.

Bilal Abu-Salih, Pornpit Wongthongtham, Dengya Zhu, Kit Yan Chan, Amit Rudra

30. Back to Basics in the Dairy Industry: Building Innovation Capabilities to Allow Future Innovation Success

This chapter reviews the innovation endeavors of the second biggest dairy company in Mexico, from a resource-based point of view, as it catches up with market dynamics in the mature business of dairy products in an emerging economy. First, we address the strategic decisions of the top management team, headed by a new CEO with a transformational profile, to lay the foundations of a long-term competitive strategy that responds to the pounding of well-positioned and aggressive new competitors. We then take a close look at the efforts to develop innovation capabilities and the reconfiguration of the capabilities structure for the implementation of the innovation strategy that guarantees a bright, innovative future for this iconic company in Mexico.

Erick G. Torres, Andres Ramirez-Portilla

Cannabis Laboratory Management: Staffing, Training and Quality

Cannabis laboratory proprietors risk overexposure to failures in staffing, training, quality management, and leadership culture. As the technical startup burden is eased by a growing body of standardized testing methodologies, laboratories must establish programs for continuous improvement to remain competitive. The fastest path to sustainable, high-reliability cannabis testing operations incorporates the regulatory history, technical strengths, and modernized quality practices of established, relevant testing sectors. This chapter discusses critical components of cannabis laboratory management, including hiring competent personnel, ensuring and documenting adequate training, and promoting a culture of quality, along with its relationship to ISO/IEC 17025 accreditation.

Robert D. Brodnick, William A. English, Maya Leonetti, Tyler B. Anthony

Alternative Data, Big Data, and Applications to Finance

Financial technology has often been touted as revolutionary for financial services. The promise of financial technology can be ascribed to a handful of key ideas: cloud computing, smart contracts on the blockchain, machine learning/AI, and finally—big and alternative data. This chapter focuses on the last concept, big and alternative data, and unpacks the meaning behind this term as well as its applications. We explore applications to various domains such as quantitative trading, macroeconomic measurement, credit scoring, corporate social responsibility, and more.

Ben Charoenwong, Alan Kwan

Application of Big Data with Fintech in Financial Services

Computing technologies play a vital role in the transformation of contemporary financial services. The emergence of financial technology (fintech) in recent years has transformed different facets of the global financial industries, changing the way financial personnel live, think, and work. Fintech has likewise developed numerous technologies to advance the industry. These developments focus on advancing financial segments through the use of big data, cloud computing, the Internet of Things, artificial intelligence, and blockchain, for better transactions and security. Big data has seen a substantial shift in approaches to gathering, representing, and appraising information. How to use data to develop new business models has attracted considerable attention, and recent technological advances have allowed us to produce data much faster than ever. Big data analytics is the modeling of big data to discover useful information and patterns, communicating them to support decision making and suggest conclusions. Big data can be processed, accumulated, and analyzed in the financial industries through the rapidly evolving field of big data analytics, thus enabling the understanding, rationalization, and use of big data for different purposes. Therefore, this chapter discusses the concept of big data for ensuring better expansion and research in fintech. The chapter likewise examines how large volumes of financial data can greatly help clarify and elucidate fintech patterns. The chapter also conveys the various benefits that financial industries can gain by using fintech with big data technology. The use of fintech with big data in financial services is expected to have a great impact on the quality of financial transactions that can be delivered to financial sectors across socioeconomic and geographic boundaries.

Joseph Bamidele Awotunde, Emmanuel Abidemi Adeniyi, Roseline Oluwaseun Ogundokun, Femi Emmanuel Ayo

Chapter 9. Business Intelligence Culture

This chapter mostly draws on materials previously published in the journals Issues in Informing Science and Information Technology (Skyrius et al., Issue Inform Sci Informat Technol, 13: 171–186, 2016) and Information Research (Skyrius, Nemitko, & Taločka, Informat Res, 23: 4, 2018). The wide set of human factors influencing business intelligence (BI) adoption tends to define the features of what may be called BI culture, a concept covering the human and managerial issues that emerge in BI implementation. The goal of this chapter is to point out the key features of BI culture and examine their relation to the perceived level of this culture. The elements of BI culture are: sharing of information and insights; a record of lessons learned; intelligence information integration; the intelligence community; and a supportive IT infrastructure. The chapter presents the results of empirical research directed at evaluating the perceived level of BI culture and its relation to different activities and their components.

Rimvydas Skyrius

Chapter 4. Business Intelligence Dimensions

This chapter discusses business intelligence (BI) dimensions—the degrees of freedom along the most important BI implementation features. BI dimensions are largely set by the business dimensions of a given business entity. The existing variety of business dimensions and BI options creates a multidimensionality of choices that encompasses both the information space (information sources, accessibility, efforts, costs) and the BI functional space (methods, models, procedures, software products). The most important BI dimensions discussed in the chapter are: internal and external focus; centralized and decentralized deployment; question complexity from simple to complex; functional scope from wide to narrow; level of automation of BI functions; data-driven to insight-driven; and real time to right time. For some of the dimensions, the results of earlier empirical research are provided. A compromise approach to selecting a BI system configuration is to use a tiered structure of the informing environment, where the first, or closest, tier contains the simplest and most often used support tools, with the second (or further) tiers providing the less often used, and usually more advanced, support tools.

Rimvydas Skyrius

Chapter 7. Business Intelligence Technologies

As BI technologies are covered in great detail in many sources, this chapter is more of an overview. Its goal is to cover the landscape of the most important BI technologies regarding their placement in the BI value chain, and to expose their most important benefits and limitations. The technologies reviewed in this chapter are: data collection and storage; multidimensional data analysis and OLAP; self-service tools; business analytics, including data mining and Big Data analytics; modeling and simulation; text analytics; presentation and visualization; artificial intelligence (AI) and machine learning (ML); communication and collaboration platforms; and technology deployment issues. The field is developing rapidly, and it is nearly impossible to keep up with the latest innovations in a book that takes time to be published. For example, artificial intelligence technologies are currently experiencing a substantial rise; however, the corresponding paragraph in this chapter is rather brief—there are many sources that can give much more detailed descriptions and explanations. Some of the established BI technologies, however, have firmly found their place in corporate information activities, and the author hopes that their coverage in this book is sufficiently complete.

Rimvydas Skyrius

Chapter 3. Business Intelligence Information Needs: Related Systems and Activities

The chapter positions BI among other informing activities and systems, stressing the complex nature of the information needs served by BI. The features of complex information needs are presented together with the features of other dimensions, like common–special and repetitive–non-repetitive information needs. BI is described as a cyclical process with feedback, a structure featured by cognitive processes in general. In an organization, other types of information systems share their information with BI—management information systems, decision support systems, competitive intelligence systems, business analytics—and BI relations with all of the above system types are discussed. The chapter concludes with BI relations to other domains where intelligence activities have deep traditions—political and military intelligence, and scientific research.

Rimvydas Skyrius

Chapter 8. Business Intelligence Maturity and Agility

This chapter is dedicated to the dynamics of development of the BI function in an organization. The first part discusses the development path of BI systems from the start of their life cycle, and addresses the notion of BI maturity using published research and the author's own findings. The second part of the chapter is given to the notion of BI agility, deeming it one of the most important features in a turbulent and ever-changing business environment. For BI that is considered mature, all important discoveries regarding its adoption have been made, while agility has to be prepared for important discoveries that can emerge from any unexpected point. In this aspect, maturity seems to apply to more structured and well-understood phenomena, while agility regards activities and environments that are experiencing constant change. BI is a mix of both, but the insight-producing part of BI, attributed to advanced informing, needs agility because it is largely unstructured and undergoing constant changes.

Rimvydas Skyrius

Chapter 2. Business Intelligence Definition and Problem Space

In the current business environment, business intelligence activities have a mission to support complex information needs and provide valuable insights to business managers and decision makers. Having largely evolved from decision support environments, BI is expected to cover environment monitoring, awareness and other more sophisticated information needs in addition to decision support function. BI can be seen as a specific information system, a cyclical informing process, a technology platform or an information value chain. BI operates in a complex and messy information environment, and its benefits are hard to evaluate. A term “advanced informing” is used throughout the book to separate BI activities from those of regular informing, carried out by conventional information systems. The product of BI is better insights into business operations and environment.

Rimvydas Skyrius

Chapter 6. Management of Experience and Lessons Learned

Accumulated expertise has a significant role in BI as tested facts and conclusions about situations that actually have happened, have a degree of similarity to the problem at hand, and whose outcomes are known. Data in transaction databases do not provide adequate coverage of important situations and their outcomes, so a need for recorded expertise or lessons learned (LL) is evident. Such collections have their roots in knowledge management, communities of practice and other forms of activity where collections of lessons learned have been a key element. The records can have different forms—cases, rules, lesson reports, event descriptions, best practices, tips and so on. Theory and management issues are discussed together with several known cases of LL management. The last part of the chapter contains a set of results of empirical research to determine the most important issues in managing LL in project management.

Rimvydas Skyrius

Chapter 11. Encompassing BI: Education and Research Issues

The current chapter rounds up the ideas presented in the book, stressing the encompassing nature of BI—coverage of informing activities along all important business dimensions, supported by an appropriate set of BI dimensions. A short review of the materials in the book's chapters clarifies its main axis—a better understanding of human issues in BI leads to the development of its encompassing nature. The chapter also considers issues that were not given attention in the previous chapters—BI education and competency development, including the importance of soft competencies, and research directions arising from the interplay between book chapters.

Rimvydas Skyrius

Chapter 5. Information Integration

Business intelligence is often expected to provide a composite view of a field of interest, where relevant business dimensions are joined together to produce adequate coverage and context. Integration of both data and information helps boost the actionable qualities of the results of BI activities. Data integration issues are more of a technical nature, having their roots in the database domain. While the technical issues in data integration are deterministic and relatively easy to solve, organizational problems like the existence of data silos introduce a lack of transparency and efficiency, and problems in collaboration and data quality. Information integration encompasses all forms of information—structured and unstructured, internal and external—and centers around an axis: a topic of importance. Because of its cognitive nature, information integration uses sense-making procedures that filter the information flow, extract snippets of sense and evaluate the context. The non-linear and often vague nature of information integration leads to the development of informal approaches—data spaces, data lakes, data mashups—that leave the final information integration steps to the user.

Rimvydas Skyrius

Chapter 10. Soft Business Intelligence Factors

This chapter discusses the soft BI factors that, in the author's opinion, are important for BI but have been under-researched and lack generalization: attention, sense, and trust in the context of the BI function and its product. These soft factors are considered important because of their key roles in human information behavior. In the case of BI, the production of important insights requires careful handling of all three. There are important under-researched aspects like metrics, management levers and the interrelations between the three factors. The unifying common feature of all three factors is their perishability—they are subject to waste, and IT support should be directed at preventing this waste. The author has not performed any empirical research on these factors, but the materials presented in the chapter may be seen as a response to the need to define possible future research directions.

Rimvydas Skyrius

Chapter 1. Business Intelligence: Human Issues

The stormy and kaleidoscopic development of the field of business intelligence (BI), mostly attributed to advances in information technology (IT), has created significant confusion in the area of business informing and an imbalance between IT and human issues, to the favor of the former. Early in the era of decision support systems, Feldman and March (1981) noted a controversy between information engineering, represented by information systems (IS), and information behavior, represented by intelligence: "some strange human behavior may contain a coding of intelligence that is not adequately reflected in engineering models". It is quite ironic that in the field of business intelligence the dominating emphasis has been placed on information technology as being intelligent, while the need to be aware arises from the intelligence activities of people, and this aspect has received far less emphasis. The current book is a modest attempt by the author to reduce this imbalance.

Rimvydas Skyrius

Spatiotemporal Data Cleaning and Knowledge Fusion

Knowledge fusion aims to establish the relationship between heterogeneous ontologies or heterogeneous instances. Data cleaning is one of the underlying key technologies supporting knowledge fusion. In this chapter, we give a brief overview of some important technologies of knowledge fusion and data cleaning. We first briefly introduce the motivation and background of knowledge fusion and data cleaning. Then, we discuss some of the recent methods for knowledge fusion and spatiotemporal data cleaning. Finally, we outline some future directions of knowledge fusion and data cleaning.

Huchen Zhou, Mohan Li, Zhaoquan Gu, Zhihong Tian

Rational Bureaucracy 2.0: Public Administration in the Era of Artificial Intelligence

The article reflects research on the dynamics of the process of integrating digital and artificial intelligence technologies into public administration. The authors compared the evolutionary stages of analytical methods and technologies used in corporate-sector management systems and in public administration. The analysis revealed similarities determined by the key characteristics of modern social processes, leading to the conclusion that the integration of artificial intelligence technologies into public administration is a key condition for increasing the efficiency and ensuring the sustainability of government systems, as well as the development trend of the public sector in the coming years. An important result of the work is the conclusion that the specifics of public administration, and its basic differences from governance in the private sector, lead to deviations from the path of digital evolution observed in the private sector. These deviations are objective and cannot be interpreted solely as lag. Public administration as a process of exercising power is based on certain principles and unique resources, and proceeds from specific goals. The development of digital technologies, each new generation of which makes the world more transparent, violates the autonomy of state administration in relation to society and exacerbates this opposition. The specific data operated on by government agencies cannot enter the data economy, at least at the present stage: the protection of personal data, tax secrets and other information collected by the state for the implementation of its functions remains the agreed interest of society and the state. The authors propose a model that allows one to obtain all the benefits of introducing digital technologies into public administration while maintaining state-administrative autonomy.
The article presents the key principles for constructing such an adaptive model of «rational bureaucracy 2.0», which integrates digital and artificial intelligence technologies into the public administration system while maintaining the immanent autonomy of that system. As the main parameters for developing a strategy for the transition and adaptation of the public administration system, the authors justify a classification of functions and tasks by their potential for automation based on the principles of Weberian rational bureaucracy; these functions are proposed to be forwarded to artificial intelligence systems, while the staff of government structures focus on solving complex problems on the principles of network organization, cooperation and proactivity.

Natalia V. Blinova, Artem V. Kirka, Dmitriy A. Filimonov

From Artificial Intelligence to Dissipative Sociotechnical Rationality: Cyberphysical and Sociocultural Matrices of the Digital Age

Artificial intelligence and converging technologies are currently setting global post-industrial (economic, cultural, social, and manufacturing) trends: the Internet of Things, pervasive computing, physical-digital communication, smart medicine and smart education are step by step replacing the traditional forms of communication, logistics, management, data logging, and automation.

Sergey V. Leshchev

Open Access

13. Edge Computing und Industrie 4.0

Anwendungsbereiche in der Schweizer Fertigungsindustrie

Through the industrial digital transformation, in particular the networking of manufacturing facilities, an increasingly large volume of data is being generated in the Swiss manufacturing industry. Much of this data remains locally (often) unused or is sent over long transport routes to central data centres for analysis. Against this background, the question arises of how data can be used in such a way that long transport routes are eliminated while, at the same time, knowledge can be generated by processing this data. This contribution provides first answers on the basis of empirical findings obtained through interviews with providers, consulting firms and manufacturing companies in the field of edge computing. The present study delivers insights in the areas of technical understanding, business models and application scenarios, as well as practical implementations in the form of pilots and rollouts as proofs of concept.

Dominik Appius, Roger Andreas Probst, Kim Oliver Tokarski

Open Access

1. Digital Business in der Praxis

Modell, Analyse und Handlungsfelder

This contribution deals with the digital transformation of organisations. Based on the contributions of this edited volume and the cited results of the authors' own empirical studies, it shows which change processes can be observed, at what depth and in which field. Using a maturity-oriented analysis model for digital business, the transformation processes in the individual case studies can be classified. The contributions of this volume reflect the assumption that digital transformation, in its various manifestations, is fundamentally intensifying. This is particularly evident for automation processes in industry and also for the accompanying process in the area of human resources, which strategically supports the change in new ways.

Kim Oliver Tokarski, Ingrid Kissling-Näf, Jochen Schellinger

Kapitel 5. Risikomanagement in der Supply Chain

Individual customer requirements regarding quality, price, flexibility and availability, the increasing concentration on core competencies, and growing global competition are prompting many companies to cooperate more closely within their value chains. The resulting lean networks are characterised by low inventories, optimised lead times, highly utilised capacities and the high dependencies that follow from them. These dependencies, but also the market environment, political unrest and natural disasters, give rise to risks whose control is essential for the success of the procuring company and of the entire value-creation network.

Rainer Lasch

Automatic Text Extraction from Digital Brochures: Achieving Competitiveness for Mauritius Supermarkets

In recent years, a growing number of supermarket stores have come into operation in Mauritius. Being a small country, Mauritius has a limited customer base, and developing marketing strategies to stay in the market has become a priority for supermarket owners. A common marketing strategy is for decision makers to frequently offer sales on items in stock. These sales are advertised using brochures that are distributed to customers as soft copies (PDF digital brochures online) or hard copies within the supermarket premises. To ensure that sales prices remain competitive, decision makers must consult their competitors' brochures. Given that each competitor's brochure must be checked manually, the process can be costly, time-consuming and labor-intensive. To address this problem, we present the components of a system suitable for automatically collecting and aggregating information on items on sale in each supermarket by extracting useful information from digital brochures published online. The proposed system has been implemented on a pilot scale and tested in the local context. The results obtained indicate that our proposal can be effectively used to help local supermarkets easily keep track of market trends in order to remain competitive and attract customers.

Yasser Chuttur, Yusuf Fauzel, Sandy Ramasawmy
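The extraction step this abstract describes could, in its simplest form, look like the sketch below: scanning lines of brochure text for item/price pairs. The line format, currency marker ("Rs") and sample strings are illustrative assumptions, not the authors' actual parser.

```python
import re

# Hypothetical brochure line format, e.g. "Basmati Rice 5kg  Rs 245.00".
PRICE_RE = re.compile(r"^(?P<item>.+?)\s+Rs\s*(?P<price>\d+(?:\.\d{2})?)\s*$")

def extract_offers(lines):
    """Return (item, price) pairs found in lines of brochure text."""
    offers = []
    for line in lines:
        m = PRICE_RE.match(line.strip())
        if m:  # lines without a recognisable price are skipped
            offers.append((m.group("item"), float(m.group("price"))))
    return offers

sample = ["Basmati Rice 5kg  Rs 245.00", "Weekly specials!", "Sunflower Oil 1L Rs 89.50"]
print(extract_offers(sample))  # → [('Basmati Rice 5kg', 245.0), ('Sunflower Oil 1L', 89.5)]
```

Aggregating such pairs per supermarket would then allow the price comparison the authors aim at.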

Chapter 1. Lead to Cash: Front Office Process Tower

This chapter starts with an overview of the front office process towers and the typical activities within each tower. We will discuss various concepts related to front office processes and tool considerations to support those processes. By the end of this chapter, you will have learned the fundamentals of an integrated front office powered by various solutions and modules from the Salesforce ecosystem. The following topics will be covered in this chapter:

Rashed A. Chowdhury

Chapter 9. Data Management

In the previous chapter, we reviewed a few concepts and frameworks, including customer segmentation, executive scorecard, and framework for business development owners. We also covered Net Promoter Score (NPS), dashboard design, how to architect top line revenue growth in Salesforce, and in-depth CRM design questionnaires. In this chapter, we will review and study the data aspect of Salesforce that includes data export, mass delete, native integration, duplicate management, and the state and country picklist enablement process. We will briefly discuss D&B Hoovers and Informatica Cloud from a Salesforce perspective.

Rashed A. Chowdhury

An Architecture for the Development of Distributed Analytics Based on Polystore Events

To balance the requirements for data consistency and availability, organisations increasingly migrate towards hybrid data persistence architectures (called polystores throughout this paper) comprising both relational and NoSQL databases. The EC-funded H2020 TYPHON project offers facilities for designing and deploying such polystores, otherwise a complex, technically challenging and error-prone task. In addition, it is nowadays increasingly important for organisations to be able to extract business intelligence by monitoring data stored in polystores. In this paper, we propose a novel approach that facilitates the extraction of analytics in a distributed manner by monitoring polystore queries as these arrive for execution. Beyond the analytics architecture, we present a pre-execution authorisation mechanism. We also report on preliminary scalability evaluation experiments which demonstrate the linear scalability of the proposed architecture.

Athanasios Zolotas, Konstantinos Barmpis, Fady Medhat, Patrick Neubauer, Dimitris Kolovos, Richard F. Paige

Chapter 7. The Analysis of Potentiality of the Zhongguancun NEEQ Companies to ‘Select’

2019 was a significant year for NEEQ. The reform of NEEQ with respect to the relevant policies and regulations has been continuously implemented since the China Securities Regulatory Commission announced the deepening reform of NEEQ on 25 October 2019. The deepening reform of NEEQ has entered a new stage, and listed NEEQ companies are facing significant opportunities. In this report, the situation of the Zhongguancun companies that meet the criteria to be listed in the 'Select' layer is analysed according to financial ratios in NEEQ.

Zhongguancun Listed Companies Association

Legal Innovation Mechanisms: From the Designer to the Consumer

The expression “legal innovation” has emerged recently to qualify certain legal work or certain technologies used in relation to law or legal issues. Why now, when law is a living material that is continuously evolving and lawyers have constantly created legal rules, solutions and tools? Are we facing a linguistic denial that needs to be overcome to help lawyers become actors of the current legal revolution? Indeed, lawyers must do their job more quickly and effectively, but in a more complex context: internationalization, globalization, digitalization, etc. The solution for lawyers is to conceive and develop new systems, processes and tools to perform some of their tasks. How can lawyers innovate? How can legal teams be organized and motivated to innovate and industrialize some of their production? What mechanisms should be followed? What are the accelerants and obstacles to legal innovation? A new legal innovation methodology is proposed to conduct research and development programs from the designer to the consumer, associated with a new tool, the LTRL (Legal Technology Readiness Level), as a help to have a strategic view of the program steps.

Véronique Chapuis-Thuault

Chapter 11. Future of Work in Information Technology and the Analytics Industry: Understanding the Demand

Analytics using the Internet of Things (IoT) and data science is slowly becoming an allied industry of the IT sector in India. IBM has predicted that demand for data scientists will soar by 28 per cent by 2021. More than 50 per cent of this demand is in finance and insurance, IT and professional services. It is interesting to examine the trade-off between the creation of decent work and automated work in the sector. Trade-offs also exist between the profitability of IT firms and the costs of deploying automation in the sector. In particular, as automation is likely to reduce the demand for labour, research on the future of work in the industry can answer important questions on the economic and social contribution of technical progress to the Indian labour market. IoT, artificial intelligence, automation and cloud computing can change the labour market dynamics in this industry, and there is a need to understand this to estimate the future of work in this sector. The emergence of the new knowledge economy has called for changes in the labour regulations of the industry, as the nature of work in such industries is distinct from that in traditional industries. The chapter, using both primary and secondary data, aims to study the nature of work in the IT-Analytics industry and the labour regulations required to improve and reform working conditions in the industry. The study also examines the current skill demand in the industry to understand possible skill gaps in the labour market.

Nausheen Nizami

MedTech Companies on Their Growth Journey-Leadership Responses to Growth Challenges in the Light of the 3-P-Model

On the basis of their broad industry experience in different companies, Marie Theres Schmidt and Dieter Fellner developed a case study on challenges faced by fast-growing MedTech companies. The continuous striving for breakthrough medical innovations requires an agile mindset and the readiness to embrace change. Global demographic changes will lead to a growing demand for innovative diagnostic and treatment options in the coming years, inducing MedTech companies to explore the space while growing fast. At the same time, the medical device market will face increasing regulation to contain healthcare costs and safeguard patients' safety, requiring companies to respond to changing conditions. Regular assessment of the level of agility and resilience is key for fast-growing organizations to successfully manage the growth journey. Leaders of these organizations should establish reflection cycles to monitor success. As an outcome of Marie's and Dieter's experiences, the Three-Pillar Model (3-P-Model) (Wollmann, Kühn, Kempf, Three pillars of organization and leadership in disruptive times – Navigating your company successfully through the 21st century business world. Cham: Springer Nature, 2020) can be applied to facilitate the reflection cycles and highlight the organization's challenges connected to the growth journey. This is illustrated in the interview, which evaluates the three pillars based on the authors' individual leadership experience in fast-growing organizations. The interview reveals the companies' persistent focus on improving patients' lives as a sustainable purpose connecting employees around the globe on their joint mission. It further highlights that the sustainable purpose becomes an individual and organizational compass: it provides employees with a tool, or rather an ethical standard, to respond to unforeseen changes.
Thereby, the organization preserves its agility to react quickly to uncertainties and remains competitive through fast adaptation to change. A strong sustainable purpose that is deeply embedded in the company's DNA is key to ensuring personal commitment and joint success on the growth journey, and it frames cross-functional connectivity in fast-developing organizations. Consequently, fast-growing MedTech companies should thoroughly invest in keeping their sustainable purpose present in all areas and accessible to all employees. From a leader's perspective: the better a fast-growing company can maintain its DNA, the more successfully and sustainably it will grow.

Marie Theres Schmidt, Dieter Fellner

Applying the Principles of the 3-P-Model to Build an Agile High-Performance Team Within Finance

The authors describe and evaluate an exemplary Munich Re (MR) initiative to build up a new agile unit (like a start-up) within classic Finance, using the Three-Pillar Model (3-P-Model) effectively. The project focuses on developing (individually tailored) smart, digital finance products. The development story uses, among others, communication documents of Digital Finance (DF) and their development over time. The reason why the new—virtual—unit is a magnet for global talents and why the unit has developed into a role model within MR is described, and this description is embedded and completed by two interviews (with the Head of Group Controlling and with an Application Developer of Digital Finance).

Benjamin Rausch, John Gray, Thomas Thirolf, Peter Wollmann

Chapter 9. Situational Awareness for Industrial Operations

The smooth operation of industrial or business enterprises rests on constantly monitoring, evaluating and projecting their current state into the near future. Such situational awareness problems are not well supported by today’s software solutions, which often lack higher level analytic capabilities. To address these issues, we propose a modular and re-usable system architecture for monitoring systems in terms of their state evolution. As a main novelty, states are represented explicitly and are amenable to external analysis. Moreover, different state trajectories can be derived and analysed simultaneously, for dealing with incomplete or noisy input data. In the paper, we describe the system architecture and our implementation of a core component, the state inference engine, through a shallow embedding in Scala. The implementation of our modelling language as an embedded domain-specific language grants the modeller expressive power and flexibility, yet allows us to abstract a significant part of the complexity of the model’s execution into the common inference engine core.

Peter Baumgartner, Patrik Haslum
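The idea of deriving and analysing several state trajectories at once, as this abstract describes for noisy or incomplete input, can be sketched very roughly as branching on ambiguous observations. The machine model, state names and events below are invented for illustration; the paper's actual engine is an embedded DSL in Scala.

```python
# Ambiguous observations map one predecessor state to several successors.
TRANSITIONS = {
    ("idle", "start"): ["running"],
    ("running", "pause"): ["paused"],
    ("running", "sensor_glitch"): ["running", "faulted"],  # ambiguous reading
    ("paused", "start"): ["running"],
}

def step(trajectories, event):
    """Extend each candidate trajectory with every state the event allows."""
    out = []
    for traj in trajectories:
        for nxt in TRANSITIONS.get((traj[-1], event), []):
            out.append(traj + [nxt])
    return out

trajs = [["idle"]]
for ev in ["start", "sensor_glitch"]:
    trajs = step(trajs, ev)
print(trajs)  # two hypotheses survive the ambiguous sensor reading
```

Keeping all surviving trajectories explicit is what makes the states "amenable to external analysis" in the sense the authors describe.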

Kapitel 8. Überwachung von Fertigungseinrichtungen

Monitoring manufacturing lines is one of the most essential functions of a production control system. This area includes functions such as plant visualisation, the determination of plant key figures, and reporting. These processes are explained in this chapter, which also broadly covers their configuration and application.

Markus Kropik

Visualization and Interpretation of Gephi and Tableau: A Comparative Study

In graphs, data is organized so that information becomes clearer, which helps in drawing conclusions and making decisions. Graphical visualization of data gives better insights for decision making, forecasting, and prediction. Graph clustering tools assist in completing such tasks faster and also support feature-based graphical visualization of data. Identifying the best tool for visualization is a time-consuming task. In this study, graph clustering tools are reviewed, and experiments are performed on a few of them for a comparative study. The graph clustering tools Gephi and Tableau are considered for the experiments; the comparison is based on the visualization and clustering effect in both tools.

Anuja Bokhare, P. S. Metkewar

Chapter 17. A Zero Trust Policy Model

Policies are at the heart of Zero Trust—after all, its primary architectural components are Policy Decision Points and Policy Enforcement Points. The term policy is heavily overloaded in the English language, of course, with meanings at many different levels. In our Zero Trust world, policies are the structures created by organizations to define which identities are permitted to get access to which resources, under which circumstances. Recall that within a Zero Trust environment, access can only be obtained through the evaluation and assignment of a policy to an identity, and that access may be enforced at the network or application levels.

Jason Garbis, Jerry W. Chapman
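The policy structure this chapter describes, identities permitted to access resources under stated circumstances, can be sketched as a default-deny evaluation loop. The field names, sample policies and context flag below are illustrative assumptions, not a product schema from the book.

```python
# Each policy grants one identity access to one resource under a condition.
POLICIES = [
    {"identity": "alice", "resource": "payroll-db", "require_mfa": True},
    {"identity": "bob",   "resource": "wiki",       "require_mfa": False},
]

def access_allowed(identity, resource, context):
    """Default-deny: access requires a matching policy whose conditions hold."""
    for p in POLICIES:
        if p["identity"] == identity and p["resource"] == resource:
            if p["require_mfa"] and not context.get("mfa_passed"):
                return False  # policy matched, but circumstances fail
            return True
    return False  # no policy ever means no access

print(access_allowed("alice", "payroll-db", {"mfa_passed": True}))   # True
print(access_allowed("alice", "payroll-db", {"mfa_passed": False}))  # False
print(access_allowed("eve", "payroll-db", {"mfa_passed": True}))     # False
```

In a real deployment this evaluation would sit in a Policy Decision Point and be enforced at the network or application level by Policy Enforcement Points.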

Chapter 6. Knowledge Managers and Dashboards in Splunk

The previous chapter covered various knowledge objects, software components, commands, and features in Splunk. It dealt with input-output technicalities and provided step-by-step guides. This chapter discusses codes and segments and introduces the manual elements responsible for performing a few of these functions.

Deep Mehta

Chapter 2. Splunk Search Processing Language

In Chapter 1, you learned about Splunk's architecture, history, inception, and salient features. You saw a roadmap for the Splunk Enterprise Certified Admin exam and were introduced to Splunk in a nutshell. You installed Splunk on macOS or Windows and went through the process of adding data to it. In this chapter, you take a deep dive into the Splunk Search Processing Language and the methods to analyze data using Splunk.

Deep Mehta

3. Lean Digitalisierungs-Framework

A holistic maturity analysis as well as planning and steering are essential for a successful digital transformation. For this purpose, all essential dimensions and aspects of digitalisation must be considered. For each dimension, solutions are to be selected and assembled into a tailor-made digitalisation framework. In this chapter you will find the essential dimensions and aspects and an overview of solution building blocks for all relevant aspects.

Inge Hanschke

5. Lösungsbausteine der Digitalisierung

Digitalisation is neither a trend nor a fad; it changes companies and society from within and has a major influence on business models. Digital capabilities are becoming a competitive factor. To remain competitive in the future, classic thinking patterns must be abandoned, and technical innovations must be turned into business innovations in an interdisciplinary and cross-company manner. Creativity methods and methods for business model development and operationalisation make the difference here. Clear strategic guidelines, explicit customer experience management, and appropriate data management, enterprise architecture management and demand management provide a set of instruments. The success of the digital transformation can be read off in high customer satisfaction as well as higher revenues and earnings. The path there, however, is long and stony, and implementation is a complex task. The best-practice digitalisation solution building blocks in this chapter support this difficult journey.

Inge Hanschke

2. Digitalisierung im Überblick

Customer experience decides success in digitalisation. The customer and their needs therefore stand at the centre of digitalisation. What does this mean for companies, and how can you arm yourself to master the digital transformation successfully? Which examples exist in industry, and what are the success factors? These are the questions this chapter answers. It also builds the bridge to the detailed treatment in the subsequent chapters.

Inge Hanschke

Open Access

Kapitel 18. Einfach mal anders gucken?!

Neue Perspektiven auf alte Probleme: Sicherheitskultur in Industrie 4.0

This contribution describes a holistic approach to digital transformation in small and medium-sized enterprises that incorporates the cultural perspective and meets the requirements of Industrie 4.0. The presented procedure, the SiTra4.0 roadmap, is the main result of the research project "Nachhaltige Sicherheitskultur als Transformationsansatz für Industrie 4.0 in KMU" (SiTra4.0). First, the individual steps of the roadmap and their use in practice are described. The contribution closes with an outlook on ongoing research work and on the working group for safety and security in Industrie 4.0.

Anna Borg, Achim Buschmeyer, Claas Digmayer, Cornelia Hahn, Eva-Maria Jakobs, Johanna Kluge, Jonathan Reinartz, Jan Westerbarkey, Martina Ziefle

Chapter 2. The Role of Moral Values in the Development of E-Business

The notion of ethics comes from the Greek noun “ethos”, which means “character, habit, custom”. According to Crăciun (2004), business ethics is a current that originates in North American society; in Romania, this area became of interest with the transition to a democratic, capitalist society.

Ioana Bucur-Teodorescu

Enhancing Enterprise Business Processes Through AI Based Approach for Entity Extraction – An Overview of an Application

While industries are growing strong in their digital transformation, advanced analytics are making them stronger through data-driven decisions. At the same time, traditional automation is maturing into cognitive automation. In the era of Industry 4.0, the handshake of business process automation, advanced analytics and cognitive services has laid a strong platform for 'cognitive bots'. Enterprises can leverage these powerful technologies and approaches, anticipating the ultimate goals of the business through more adaptive, self-learning, and contextual applications. This paper explains one such cognitive bot for a finance department, where invoices are of utmost importance. The bot is intended for amount detection and verification; additionally, it can extract entities such as organization name, location and date, which contribute greatly to analytics. The application rewards the business by reducing turnaround time and human errors. The specially customized trained neural model has achieved state-of-the-art accuracy on the current set of learning data. The proposed framework uses Optical Character Recognition and PDFMiner for text extraction from scanned invoices, a quality classifier that rejects handwritten invoices from further processing, and spaCy's Named-Entity Recognition to predict the amount, date, organization name and location from the extracted unstructured text.

Ankit Dwivedi, Praveen Vijayan, Rishi Gupta, Preeti Ramdasi
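The "amount detection and verification" step this abstract mentions can be illustrated in miniature: pull monetary amounts out of already-extracted invoice text and check that line items add up to the stated total. The invoice wording, amount format and tolerance below are assumptions for illustration, not the authors' pipeline.

```python
import re

# Matches amounts like "1,200.00" or "300.50" in extracted invoice text.
AMOUNT_RE = re.compile(r"(\d+(?:,\d{3})*\.\d{2})")

def amounts(text):
    """Return all monetary amounts found in the text, as floats."""
    return [float(a.replace(",", "")) for a in AMOUNT_RE.findall(text)]

def verify(invoice_text):
    """Assume the last amount is the total; check line items sum to it."""
    *items, total = amounts(invoice_text)
    return abs(sum(items) - total) < 0.01

invoice = "Item A 1,200.00\nItem B 300.50\nTOTAL 1,500.50"
print(amounts(invoice), verify(invoice))  # → [1200.0, 300.5, 1500.5] True
```

In the authors' system this detection is done by a trained NER model rather than a fixed pattern, but the verification logic is of this flavour.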

Chapter 14. Advice to Cybersecurity Investors

Over $45 billion of private equity and public IPO (initial public offering) investment has been made in cybersecurity companies from 2003 to 2020, yet the mega-breaches have continued. This chapter covers where all this money has been going and what categories of defenses have been invested in thus far. We then analyze what areas of cybersecurity are ripe for further investment. For example, areas such as Internet of Things (IoT) security and privacy have received less investment than, say, network security, and probably warrant more going forward.

Neil Daswani, Moudy Elbayadi

AP12 Low-Energy Technology at ALRO Smelter

Electricity cost is one of the main determinants of the competitive structure of an aluminium smelter. Over the past few years, ALRO Group, the biggest industrial power consumer in Romania, has completed ambitious projects to reduce specific energy consumption. To reach the next level, in 2018, ALRO mandated Rio Tinto Aluminium Pechiney (RTAP) to supply AP12 Low Energy technology. To guarantee smooth technology transfer and validation and achieve step-change performance, the AP Technology™ standard "cell development cycle" approach was used. ALRO and RTAP worked together as a team to execute specific activities such as a measurement campaign, modelling, risk analysis, readiness assessment, on-site and remote support, data analytics, and Go/No Go decisions. This article presents this project, which achieved a significant reduction in specific energy consumption, along with some of the supporting activities and tools.

Marian Cilianu, Bertrand Allano, Ion Mihaescu, Gheorghe Dobra, Claude Ritter, Yves Caratini, André Augé

Data-Driven Knowledge Discovery in Retail: Evidences from the Vending Machine’s Industry

The purpose of this study is to investigate new forms of marketing data-driven knowledge discovery in the vending machine (VM) industry. Data for understanding shopping activities were gathered and analyzed by a system based on an RGBD camera. The RGBD camera, already tested in retail environments, is installed in a top-view configuration on a VM to gather data that are processed and embedded within a knowledge discovery project. We adopted the knowledge discovery via data analytics (KDDA) framework and applied it to a real-world VM scenario. Real-world tests, based on more than 17,000 shoppers measured in 4 different locations, were conducted. Using this method, it was possible to verify the ability of the approach to generate new forms of marketing data-driven knowledge for the VM industry. The main novelties are: i) the application of a KDDA project to the VM industry; ii) the use of RGBD data sources for the first time in a KDDA process; iii) a contribution to practice by supporting retailers in carrying out knowledge discovery processes more effectively; iv) a real-world testing process based on 4 locations and more than 17,000 shoppers.

Luca Marinelli, Marina Paolanti, Lorenzo Nardi, Patrizia Gabellini, Emanuele Frontoni, Gian Luca Gregori

Kapitel 3. Dimension 1: Customer Value-based Decision Making

Customer orientation builds on customer value and no longer merely on customer satisfaction. In practice, many organisations have still not established a customer value model, or only a very simple one in the form of an ABC analysis by revenue. To improve customer orientation, a customer value model that is as sophisticated as possible should be implemented and the insights gained from it used for decision making. Decisions should not only be taken; the decision processes themselves should also be questioned systematically.

Dr. Jörg Staudacher

Chapter 6 Big Data and FAIR Data for Data Science

The article reviews such modern phenomena in the field of data storage and processing as Big Data and FAIR data. For Big Data, an overview of the technologies used to work with them is given. For FAIR data, a definition is given, and the current state of their development is described, including the Internet of FAIR Data & Services (IFDS).

Alexei Gvishiani, Michael Dobrovolsky, Alena Rybkina

An Amalgamation of Big Data Analytics with Tweet Feeds for Stock Market Trend Anticipating Systems: A Review

In the digital era, streaming data and its storage take wider forms: data can be structured, semi-structured or unstructured. As data grows, storage capacity also has to grow, and processing data from such huge stores may be time-consuming. Such data is hard to handle and process via conventional software and database procedures, which has led research toward big data analytics. Big data analytics aims at handling massive data stores with fast processing techniques and at helping companies optimize business, advance operations, and make more intelligent and faster decisions. It is thus an important field that derives insights from data, and prediction systems are among its best-known applications: they take historical data, analyze it, and forecast future situations based on hidden patterns identified in the data considered. The analytics can be categorized into Business Analytics (BA) and Predictive Analytics (PA). Business analytics refers to the skills, technologies, applications, and processes, together with statistical techniques, that organizations adopt to use their available data to drive business planning. Predictive analytics is the forecasting approach to foresee upcoming events and trends; it identifies hidden patterns and determines what is likely to happen from the available historical information using statistical and mathematical models. To enhance the forecasting process, opinion mining can also be included: nowadays, people's sentiment is also considered to improve the accuracy of anticipation. The relevant influencing factors should be identified and clearly analyzed to construct an accurate model that supplies the most relevant suggestions. Several researchers have proposed various prediction algorithms and methods to improve model accuracy and user satisfaction.
In this chapter, the authors study various anticipating models and discuss their preference criteria. As part of that, we study important preference factors in stock trend prediction and categorize them by influencing factors. The chapter reports prospective directions in prediction models and compiles an easy reference guide to help researchers.

Deepika Nalabala, M. Nirupama Bhat, P. Victer Paul
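One classic ingredient of the trend-anticipation models surveyed above is comparing a short and a long moving average of historical prices. The sketch below is a deliberately minimal illustration of that idea; the window sizes and sample series are arbitrary assumptions, not a trading model from the chapter.

```python
def sma(prices, window):
    """Simple moving average over the most recent `window` prices."""
    return sum(prices[-window:]) / window

def trend_signal(prices, short=3, long=5):
    """'up' when the recent average rises above the longer-run average."""
    return "up" if sma(prices, short) > sma(prices, long) else "down"

history = [100, 101, 99, 103, 106, 110]
print(trend_signal(history))  # recent prices pull the short average up
```

Sentiment scores from opinion mining, as the chapter discusses, would typically enter such a model as an additional feature alongside the price history.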

7. Intuitives Rechnen – Beurteilen in Graustufen

To assess facts or ideas, we can apply two-valued methods and draw the conclusion "true" or "false". Using many-valued logics leads to a more differentiated search for truth. Fuzzy logic uses, in addition to 1 (true) and 0 (false), all values between 0 and 1. It thus extends classical logic with its extreme values "true" or "false", "good" or "evil", "black" or "white", and dissolves the dichotomy. All-or-nothing (either-or) is replaced by both-and: everything is true, to a certain degree. Lotfi Zadeh's fuzzy logic allows working with linguistic variables or terms and is illustrated with the example of defining teenagers. A fuzzy customer portfolio serves as an application example, with whose help customers can be served individually and without fixed class boundaries. The application of fuzzy methods in the digital society is demonstrated with the example of fuzzy voting, which is intended to avoid narrow votes and the sharp separation between supporters and opponents that comes with them. The outlook suggests complementing conventional computer science procedures with intuition-based methods and thereby narrowing the gap between machine intelligence and human behaviour.

Andreas Meier, Fabrice Tschudi

2. Databases – Storing and Retrieving Information

The word "Speicher" (store) derives from the Latin "spicarium", meaning granary. For centuries, granaries were secured so that mice and other animals could not damage the stored provisions. Alongside material stores, there are today stores for energy and stores for data (databases). Without database technology, no information system can be operated locally or globally. Query languages such as SQL (Structured Query Language) make it possible to search, combine, and compare information, to summarize it in reports, and to explain it in infographics. Today's database systems are capable of managing extensive and distributed data sets (big data) while simultaneously guaranteeing multi-user operation. As an application example of the power of database technology, the management of the digital value chain for eCommerce and eBusiness is discussed. In addition, a development model for eCitizen shows how the communication of web platforms for the public can be designed. The architecture of a web shop using SQL and NoSQL databases (NoSQL stands for Not only SQL), as well as a critical appraisal, round off the chapter.
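The kind of searching, combining, and report-style summarizing that SQL enables can be shown with Python's built-in sqlite3 module. The customer/order schema is a hypothetical example, not one from the chapter:

```python
import sqlite3

# In-memory database with a hypothetical web-shop schema (illustrative).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         amount REAL);
    INSERT INTO customers VALUES (1, 'Meier'), (2, 'Tschudi');
    INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 50.0);
""")

# Combine (JOIN), aggregate (SUM), and sort: a report-style query.
rows = con.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY total DESC
""").fetchall()
print(rows)  # [('Meier', 200.0), ('Tschudi', 50.0)]
```

The same declarative query runs unchanged whether the table holds three rows or millions, which is part of what makes SQL suitable for the multi-user, big-data settings the chapter describes.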

Andreas Meier, Fabrice Tschudi

An Architectural Approach to Managing the Digital Transformation of a Medical Organization

The modern medical management system is shaped by modern medical concepts (value-based medicine, predictive medicine) on the one hand, and by the technologies that enable digital transformation (IoT, big data, blockchain, etc.) on the other. Digital transformation involves fundamental changes to the organization's activities through digital technology. At the same time, business models, the medical organization's development strategies, business process systems, IT architecture, services architecture, data architecture, etc., change. The object of the research is medical organizations; the subject is the digital transformation process of a medical organization. The research hypothesis is that applying an architectural approach to managing the digital transformation of a medical organization makes it possible to form a top-level reference architectural model as well as the top level of the organization's digital transformation roadmap. The main result of the research is the formation of this top-level architectural model and of the top level of the transition plan for the digital transformation of the medical organization.

Igor Ilin, Oksana Iliashenko, Victoriia Iliashenko

Application of Big Data Analytics for Improving Learning Process in Technical Vocational Education and Training

The study identifies applications of big data analytics for improving the learning process in Technical Vocational Education and Training (TVET) in Nigeria. Two research questions were answered. A descriptive survey design was employed, and the target population consisted of TVET experts. A structured questionnaire titled "Big Data Analytics in Technical Vocational Education and Training" (BDATVET) was the data collection instrument. BDATVET was subjected to face and content validation by three experts, two in TVET and one in educational technology. Cronbach's alpha was used to determine the reliability coefficient of the questionnaire, which was found to be 0.81. The data collected from the respondents were analyzed using means and standard deviations. The findings on the problems of big data include, among others, security and the availability and stability of internet network service. The findings on its prospects include, among others, allowing a teacher to measure, monitor, and respond in real time to a student's understanding of the material. Based on the findings, it was recommended, among other things, that stakeholders in TVET uphold the fundamental security of their data and control who may access information about their competencies.

Aliyu Mustapha, Abdullahi Kutiriko Abubakar, Haruna Dokoro Ahmed, Abdulkadir Mohammed

Chapter 10. Digital Transformation of Family Small-to-Medium-Sized Enterprises

Digital technology is disrupting entire industries worldwide. In a digital age, it is important to sustain trans-generational entrepreneurship by building shared values within families, sustaining the value of continuous entrepreneurship, and demonstrating values through sustainability. In this article, we introduce two cases to illustrate how the digital transformation process is implemented by two SMEs with different strategic positions on innovation. One is a traditional screw manufacturer that innovates within its original industry; the other is a small business that applies communication technology to enter a new market. The article offers significant value for understanding how family entrepreneurship is sustained and created during the digital transformation process.

Shu-Ting Chan

Knowledge Graphs Meet Crowdsourcing: A Brief Survey

In recent years, crowdsourcing, a new way of hiring laborers to complete tasks, has received widespread attention in both academia and industry and has been applied in many IT domains such as machine learning, computer vision, information retrieval, and software engineering. The emergence of crowdsourcing undoubtedly benefits Knowledge Graph (KG) technology. As an important and fast-developing branch of artificial intelligence, KG technology usually involves both machine intelligence and human intelligence; especially in the creation of knowledge graphs, human participation is indispensable, which provides a good scenario for applying crowdsourcing. This paper first briefly reviews basic concepts of knowledge-intensive crowdsourcing and knowledge graphs. It then discusses three key issues in knowledge-intensive crowdsourcing from the perspectives of task type, worker selection, and crowdsourcing processes. Finally, it focuses on the construction of knowledge graphs, introducing innovative applications and methods that utilize crowdsourcing.

Meilin Cao, Jing Zhang, Sunyue Xu, Zijian Ying

Survey of Global Efforts to Fight Covid-19: Standardization, Territorial Intelligence, AI and Countries’ Experiences

The development of transportation and communication means, the opening up of the world through the industrial, economic, and social revolutions, and the emergence of advanced urbanization have accelerated globalization, created worldwide supply-chain dependencies, and made the world's ecosystems more open. At present, the world is experiencing an unparalleled health crisis due to the SARS-CoV-2 or covid-19 pandemic, which has given rise to socio-economic crises across the world. In the absence of a vaccine, countries are forced to revolutionize their response and preparedness policies for health emergencies and to adapt to the new global dynamic. Based on feedback from countries, proposed artificial intelligence solutions, the capitalization of standards-based knowledge in the face of covid-19 impacts, and the concept of territorial intelligence, our paper contributes to this global effort by proposing sustainable, smart, and generic solutions against the current pandemic.

Boudanga Zineb, Mezzour Ghita, Benhadou Siham

Semantic Web and Business Intelligence in Big-Data and Cloud Computing Era

Business Intelligence (BI) involves strategies and technologies for the analysis of business information. In the big data era, business data can be collected using data mining from online resources, reports, operations, social media, and many other sources. BI thus makes use of heterogeneous data to analyze the past, present, and future of enterprises, and this is challenging: mining the data, storing it in an appropriate format, and managing, analysing, and drawing inferences from heterogeneous information is difficult. These problems can be alleviated to a great extent using the Semantic Web (SW). The SW provides technologies to create rich, machine-understandable data that can be automatically processed, analysed, and consumed by intelligent applications; BI can therefore benefit from SW technologies. In this paper, we review BI methods that apply the SW for intelligent data mining, processing, and analysis of BI procedures. In particular, we summarize and compare different properties of SW and BI methods and how they handle big data. We then discuss how cloud computing technologies can be integrated with the SW and BI, and conclude with open research challenges.

Adedoyin A. Hussain, Fadi Al-Turjman, Melike Sah

SVM: An Approach to Detect Illicit Transaction in the Bitcoin Network

Blockchain technology, launched in 2008 by Satoshi Nakamoto, is an innovation exposed to a range of attacks such as money laundering, DDoS attacks, and other illicit activity. In this paper, we compare a Support Vector Machine algorithm with Random Forest and Logistic Regression for detecting illicit transactions in the Bitcoin network. The proposed approaches use the Elliptic dataset, which contains 203,769 nodes and 234,355 edges and labels the data with three classes: illicit, licit, or unknown. In this study, we treat the unknown class as licit. Accuracy reaches 98.851% using Random Forest; however, recall does not exceed 45%, and precision reaches 65.90% with Random Forest.
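The gap the abstract reports between 98.851% accuracy and 45% recall is typical of imbalanced problems like illicit-transaction detection. A minimal sketch of the three metrics on toy data (the labels below are hypothetical, not the Elliptic dataset):

```python
def precision_recall_accuracy(y_true, y_pred, positive=1):
    """Compute accuracy, precision, and recall for binary labels.

    Illustrates why, on an imbalanced problem, accuracy can be
    high while recall on the rare (illicit) class stays low.
    """
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    acc = sum(1 for t, p in pairs if t == p) / len(pairs)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return acc, prec, rec

# Toy example: 1 = illicit (rare class). A classifier that misses most
# illicit transactions can still score high accuracy.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 95 + [1, 0, 0, 0, 0]   # catches only one illicit case
print(precision_recall_accuracy(y_true, y_pred))  # (0.96, 1.0, 0.2)
```

This is why the paper reports precision and recall alongside accuracy rather than accuracy alone.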

Abdelaziz Elbaghdadi, Soufiane Mezroui, Ahmed El Oualkadi

Integrated Tool for Assisted Predictive Analytics

Organizations use predictive analytics in CRM (customer relationship management) applications for marketing campaigns, sales, and customer service; in manufacturing, to predict the location and rate of machine failures; and in financial services, to forecast financial market trends and to predict the impact of new policies, laws, and regulations on businesses and markets. Predictive analytics is a business process that consists of collecting data, developing an accurate predictive model, and making the analytics available to business users through a data visualization application. The reliability of a business process can be increased by modeling the process and formally verifying its correctness. Formal verification of business process models aims at checking process correctness and business compliance. Data warehouses are typically used to build mathematical models that capture important trends. Predictive models are the foundation of predictive analytics and involve advanced machine learning techniques to dig into data and allow analysts to make predictions. We propose to extend the capability of the Oracle database with automatic verification of business processes by adapting and embedding our Alternating-time Temporal Logic (ATL) model checking tool. The ATL model checker will be used to guide business users in the process of data preparation (build, test, and scoring data).

Florin Stoica, Laura Florentina Stoica

Intelligent System to Support Decision Making Using Optimization Business Models for Wind Farm Design

In the digital age, all successful business processes depend on well-motivated and effective decision-making. More precisely, such effective decisions should be based on properly formulated mathematical models. To achieve this, a framework for an intelligent decision-support system is proposed. The framework relies on the effective integration of multi-attribute group decision-making (MAGDM) models and mathematical optimization models (single- or multi-objective). While the MAGDM models determine the most preferred alternative by aggregating the different points of view of the group of experts, the optimization models determine the effectiveness of the selected alternative. The framework is applied to the selection of wind turbine types for designing a wind farm, and the efficiency of the designed farm is evaluated using an optimization model. It is shown that in some cases the selected preferred turbine type leads to lower wind farm performance once the particular farm area and wind conditions are taken into account.

Daniela Borissova, Zornitsa Dimitrova, Vasil Dimitrov

TabbyLD: A Tool for Semantic Interpretation of Spreadsheets Data

Spreadsheets are one of the most convenient ways to structure and represent statistical and other data. Consequently, the automatic processing and semantic interpretation of spreadsheet data has become an active area of scientific research, especially in the context of integrating such data into the Semantic Web. In this paper, we propose TabbyLD, a tool for the semantic interpretation of data extracted from spreadsheets. The main features of our software are: (1) original metrics for defining semantic similarity between cell values and entities of a global knowledge graph: string similarity, NER label similarity, heading similarity, semantic similarity, and context similarity; (2) a unified canonicalized form for representing arbitrary spreadsheets; (3) integration of TabbyLD with the TabbyDOC project's tools within the overall pipeline. We present the TabbyLD architecture, its main functions, a method for annotating spreadsheets including the original similarity metrics, an illustrative example, and a preliminary experimental evaluation on the T2Dv2 Gold Standard dataset. Experiments have shown the applicability of TabbyLD for the semantic interpretation of spreadsheet data; we also identify some remaining issues in this process.
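The first of the similarity metrics listed, string similarity between a cell value and a candidate entity label, can be sketched with the standard library's difflib. This is a stand-in only; TabbyLD's actual metric, and how it is aggregated with the NER, heading, semantic, and context similarities, may differ:

```python
from difflib import SequenceMatcher

def string_similarity(cell_value: str, entity_label: str) -> float:
    """A simple string-similarity score in [0, 1] between a spreadsheet
    cell value and a candidate knowledge-graph entity label.
    Uses difflib's ratio as an illustrative stand-in."""
    a = cell_value.strip().lower()
    b = entity_label.strip().lower()
    return SequenceMatcher(None, a, b).ratio()

# Ranking candidate entities for a cell (hypothetical labels):
cell = "N.Y. City"
candidates = ["New York City", "York", "Niamey"]
best = max(candidates, key=lambda c: string_similarity(cell, c))
print(best)  # 'New York City'
```

In a full annotation pipeline, such per-metric scores would be combined into one ranking over candidate entities before a cell is linked to the knowledge graph.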

Nikita O. Dorodnykh, Aleksandr Yu. Yurin

QA System: Business Intelligence in Healthcare

Nowadays, Question Answering (QA) systems serve as automated assistants for resolving queries in various messaging applications. Accessing a database for specific information can be stressful and time-consuming for users without the needed technical skills, and designing a QA system or chatbot that generates results by accessing the database dynamically poses linguistic and design challenges. In this work, we present a medical domain-specific QA system that provides real-time responses to any employee's standard business queries in a healthcare company, removing unwanted dependencies on the analytics team. The QA system uses Natural Language Processing (NLP) and Deep Learning (DL) techniques for Named Entity Recognition, text pre-processing, and related tasks. The high precision achieved during a pilot evaluation demonstrates the effectiveness of the proposed system for medical-domain question answering.

Apar Garg, Tapas Badal, Debajyoti Mukhopadhyay

Open Access

Snap4City Platform to Speed Up Policies

In the development of smart cities, there is great emphasis on setting up so-called Smart City Control Rooms (SCCRs). This paper presents Snap4City, a big data smart city platform that supports city decision makers through SCCR dashboards and tools reporting the status of several of a city's aspects in real time. The solution has been adopted in European cities such as Antwerp, Florence, Lonato del Garda, Pisa, and Santiago, and it can cover extended geographical areas around the cities themselves: Belgium, Finland, Tuscany, Sardinia, etc. In this paper, a major use case is analyzed, describing the workflow followed, the methodologies adopted, and the SCCR as the starting point for reproducing the same results in other smart cities, industries, research centers, etc. A Living Lab working modality is promoted and organized to enhance collaboration among municipalities and public administrations, stakeholders, research centers, and the citizens themselves. The Snap4City platform respects the European General Data Protection Regulation (GDPR) and processes, every day, a multitude of periodic and real-time data coming from different providers and data sources. It can therefore semantically aggregate the data, in compliance with the Km4City multi-ontology, and manage data (i) having different access policies and (ii) coming from traditional sources such as open data portals, web services, APIs, and IoT/IoE networks. The aggregated data are the starting point for services offered not only to citizens but also to public administrations and public-security service managers, enabling them to view a set of city dashboards composed ad hoc for their needs: for example, to modify and monitor public transportation strategies, to offer the public services actually needed by citizens and tourists, and to monitor air quality and traffic status in order to decide whether or not to impose traffic restrictions. All the data and the new knowledge produced by the data analytics of the Snap4City platform can also be accessed, subject to the permissions on each kind of data, through a complex system of APIs.

Nicola Mitolo, Paolo Nesi, Gianni Pantaleo, Michela Paolucci

Cloud-Based Clinical Decision Support System

Cloud computing is spreading to industries in which sensitive data are maintained. The healthcare industry faces challenges of data integrity, confidentiality, and the huge operational cost of data. Hence, there is a need to adopt a cloud-based clinical decision support system (CCDSS) to maintain Electronic Medical Records (EMRs), so that patients and healthcare professionals can access their legitimate records easily and cost-effectively, irrespective of geographical location and time. The main objective of this research is to build a clinical decision support system (DSS) through the deployment of a private cloud-based system architecture. The work explores the pros and cons of, and the challenges faced in, implementing a CCDSS in a healthcare organization to optimize resources in terms of time, money, and effort while securing patient data. A quantitative survey was conducted to gather the opinions of healthcare stakeholders. Based on its findings, a rule-based DSS was developed using a data flow modelling tool that facilitated both cloud deployment of applications and in-house private cloud deployment, ensuring data security, data protection, and regulatory compliance in health organizations. To evaluate the proposed system, user acceptance testing was carried out to determine how well the proposed CCDSS meets the requirements.

Solomon Olalekan Oyenuga, Lalit Garg, Amit Kumar Bhardwaj, Divya Prakash Shrivastava

7. On Corporate Innovation

The dominant consensus among all generations, ethnicities, geographies, and professions is that more innovation is desirable, not less. McKinsey reports that executives do not want more science, research, operations, engineering, or clever business models, but more innovation, as it is critical for their business growth (McKinsey, Growth and innovation. Strategy and corporate finance, 2020). Could it be that the innovation we are after is a placeholder for unexpected, positive, surprising new improvements to how we operate today? This chapter demystifies innovation and anchors its core in ubiquitous business concepts accessible to everyone. It then shows how leading companies in their fields apply these concepts in their daily product reality.

Victor Paraschiv

Chapter 11. The Global South and Industry 4.0: Historical Development and Future Trajectories

This chapter examines the global transformation of industry towards an innovative, sophisticated, and autonomous technology-driven environment. Emerging markets within the Global South should commit to calibrating their economic, political, and social structures towards investing in a revolutionized industry capable of transforming socio-economic growth. The chapter reviews historical industrial revolutions and presents research on skills development for Industry 4.0 through digital transformation in emerging markets. Its research contribution lies in the technologies, innovations, and tailored policies that emerging markets are encouraged to adopt in order to participate and thrive in the incipient Industry 4.0.

Wynand Lambrechts, Saurabh Sinha, Tshilidzi Marwala

Consumer Attitudes Toward the Use of New Technologies and Artificial Intelligence

In recent years, smartphones and other smart devices have become part of our lives. For most consumer activities there are apps that contribute to a better quality of life and are driven by artificial intelligence. The use of these smart devices and programs has many advantages, such as increased efficiency in the activities consumers perform for work or pleasure. There are also many disadvantages, such as the access to and storage of personal data, or the effort required to learn how to operate these smart devices. Through the storage and processing of personal information, it becomes possible to influence consumer behavior. This contribution presents the results of a study on consumer attitudes toward the different roles artificial intelligence can assume in society. It empirically examines whether consumers prefer functional robots or robots with social roles. The results show that there is no single consumer attitude; rather, it depends strongly on the situation in which consumers interact with artificial intelligence. For supporting various activities, for instance, functional robots are preferred, whereas for communication purposes robots with social roles are preferred.

Corina Pelau, Irina Ene, Ruxandra Badescu

Business Model Innovation and the Change of Value Creation in Consulting Firms

The digital transformation across all industries accelerates growth in the consulting industry. This is a blessing and a curse at the same time. While clients seek help to implement new systems, to utilize emerging technologies for automation, analysis, and intelligence, and to obtain ideas and knowledge for building new business models for the digital economy, consulting firms neglect the impact of the digital transformation on their own industry. Despite increasing revenues and profits, the consulting industry faces significant challenges, as technology outpaces human capability and capacity in core domains of consultants' work, and new, as yet unrecognized, competitors enter the market for professional advice. Neither the business model and value-creation paradigm nor the internal processes of consulting firms appear to be adapted and transformed to build resilient and sustainable firms in a digital economy. The paradigm and mindset of prioritizing billable work above everything else hinder innovation and the required transformation towards a more digital and scalable business.

Patrick Weber

The Sustainability Megatrend – Challenges and Solution Approaches Through Digital Management Strategies

To remain viable in the long term, companies must constantly adapt to continuously changing conditions. While this once applied almost exclusively to the legal frameworks of the countries in which they operated, ecological and societal developments have accelerated so sharply since the late twentieth century, often perceived in the form of global "crises", that companies need their own innovative responses to secure both their own future and that of the world. The share of opportunities in global megatrends such as digitalization and sustainability is greater the earlier a company moves from a defensive or merely adaptive role to an actively shaping one. Two instruments for this are examined more closely in this article: the use of mostly young management strategies and cultures, such as open innovation or science-based targets, and innovative digital solutions, such as blockchain or digital twins. The focus lies above all on their interdependence: the potentially sustainability-enhancing effects of digital solutions, such as improved resource efficiency or supply-chain transparency, come to nothing or are reversed if they are not embedded in an open management culture that thinks in multiple dimensions (in the sense of a triple bottom line). Conversely, innovative management approaches are only made possible by a drastically improved digital information infrastructure that can reveal, and make controllable, the complexity of the ecological and social effects of corporate action.

Marvin Schulze-Quester

Chapter 4. Institutional Options

This chapter continues the conversation and analysis that began in Chapter 3, but pivots to a discussion and analysis of the different kinds of institutions of higher education that exist. Generally speaking, there are public universities, private universities, 4-year colleges, as well as an array of 2-year and other community colleges. What this chapter does is examine the workload expectations, career options, and realities of being employed at any of these institutions. In addition to the analysis of these more traditional career options, there is also a conversation and presentation of what an alternative academic career might look like. Not every transition from industry to higher education or academia results in a role at a college or university. Practitioners would be well served by understanding what types of alternative academic careers exist, how these roles are different from those at colleges or universities, and what practitioners should prepare in order to succeed in these alternative roles. Rounding out this chapter is an examination of how some trends, including collaborative competition and a focus on international work, are also having an impact on the higher education sector.

Sean Stein Smith

Chapter 3. Innovations in Production Technologies in India

Once a net importer of food in the 1960s, India has emerged not only as self-sufficient but as a net exporter. All this has happened as a result of a series of innovations in production technologies, ranging from seeds (high-yielding, genetically modified, and climate-resilient) that raised productivity, protected crops from pests, and increased mineral, vitamin, and protein content, to farming practices addressing how to apply water (irrigation), fertilisers, pesticides, and machinery that save costs and promote sustainable agriculture. Innovations in drip irrigation with fertigation, soil health cards, neem coating of urea, and custom hiring ("Uberisation") of farm machinery have yielded encouraging results and need further scaling up. In fact, innovations make an impact beyond production technologies: in institutions that ensure effective implementation of policies, and in trade, marketing, and storage of agri-produce, which bring higher value to farmers. In this paper, we spell out the major innovations in production technologies in Indian agriculture that have had a significant impact on overall productivity and production, and also touch on innovations currently unfolding in inputs and production processes, such as precision agriculture using smart technologies (artificial intelligence, drones, the Internet of Things (IoT), remote sensing, etc.) and protected agriculture (poly-houses), hydroponics, aeroponics, and aquaponics. This chapter therefore covers innovations all along the agri-value chains, from farm to fork or, more aptly in a demand-driven system, from "plate to plough".

Ashok Gulati, Ritika Juneja

Chapter 2. Industrial Analytics

The Industrial Internet of Things gives an enterprise a way to use the historical data generated in the organization and to employ analytics methods to find important answers that bring economic benefits. Industrial analytics refers to analyzing large volumes of high-variety data at high velocity, and it can be descriptive, diagnostic, predictive, or prescriptive. Machine learning plays an important role in industrial analytics, as it provides the predictive and prescriptive elements. Machine learning algorithms can be supervised or unsupervised. Supervised machine learning refers to algorithms that teach the computer patterns and trends in data using labeled historical data; supervised algorithms are primarily classification or regression algorithms. Unsupervised learning algorithms learn from data that has not been labeled or categorized: instead of responding to feedback, they identify commonalities in the data and respond based on the presence or absence of such commonalities in each new piece of data. Unsupervised algorithms can perform association or clustering of data. This chapter presents, in an easy-to-understand manner, different machine learning algorithms and how they operate on data. In addition, the analytic capabilities an IIoT enterprise utilizes, referred to as analytic conduits, are explained. These conduits are used to analyze data in an IIoT ecosystem; a conduit is a pipeline with specific technology selections that can store data and execute analytic procedures.
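The supervised/unsupervised distinction the abstract describes can be sketched on toy sensor data. Both the nearest-centroid classifier and the tiny 1-D k-means below are minimal illustrative stand-ins, not the chapter's algorithms, and the "ok"/"fault" readings are hypothetical:

```python
from statistics import mean

# --- Supervised: learn from labeled history (classification sketch) ---
# Hypothetical vibration readings labeled 'ok' / 'fault'.
history = [(0.9, "ok"), (1.1, "ok"), (1.0, "ok"),
           (4.8, "fault"), (5.2, "fault"), (5.0, "fault")]

def nearest_centroid(x, labeled):
    """Classify x by the closest class mean -- a minimal supervised model."""
    centroids = {label: mean(v for v, lbl in labeled if lbl == label)
                 for label in {lbl for _, lbl in labeled}}
    return min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

print(nearest_centroid(4.5, history))  # 'fault'

# --- Unsupervised: find commonalities without labels (clustering sketch) ---
def two_means(values, iters=10):
    """A tiny 1-D k-means (k=2): group readings by similarity alone."""
    c1, c2 = min(values), max(values)
    for _ in range(iters):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1, c2 = mean(g1), mean(g2)
    return sorted(g1), sorted(g2)

print(two_means([0.9, 1.1, 1.0, 4.8, 5.2, 5.0]))
```

The supervised model needs labeled history and predicts a label for new readings; the unsupervised one receives only the raw values and still recovers the two regimes by their commonality.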

Aldo Dagnino

Chapter 5. Process Verification

Process verification as an element of control relates to two primary concepts: efficiency and effectiveness. Efficiency is the degree to which the planned activities are performed and the planned results achieved (ISO 9000:2015, p. 31); the measure of process efficiency is therefore the achievement of defined goals. Hence, if one is to verify processes viably, every relevant goal must be adequately measurable, realistic, and correlated with the objectives of the entire organization. Effectiveness, in turn, reflects the relationship between the results accomplished and the resources consumed to that end (ISO 9000:2015, p. 31). Effectiveness is thus a measure associated with the cost of a given result, where cost should be understood as the bulk of all resources and expenditures necessary and utilized to secure that result. Three key conditions must be met for a process to be measurable: (1) a defined process measure facilitating the evaluation of efficiency or effectiveness down to the level expected and usable by decision-makers; (2) identified points of reference, i.e. evaluation criteria, against which current measurement results will be compared; and (3) obtained and released data characterizing the measured process iteration. For such measurements to be viably usable for process verification, they must in turn be conducted cyclically, so as to allow the observation of trends and patterns in efficiency and effectiveness over time. This is necessary to identify the need for process alteration, correctly diagnose its scope, develop the premise for the planned changes, and evaluate their effects, all in a timely manner. The present chapter presents the approaches, methods, and tools used to this end in production companies and public administration institutions.

Dr. Anna Kosieradzka, Katarzyna Rostek

Chapter 15. The Role of Sparkasse Darmstadt in a SMART REGION

How a Networked Ecosystem Is Changing Banking

This article gives an overview of the significance of SMART BANKING in a connected city such as Darmstadt, Germany's Digital City, and its surrounding region. It presents both the preconditions, such as the deployment of key technologies, and the effects, such as the use of digital wallets or identity services on the web. Using Sparkasse Darmstadt as an example, it illustrates what everyday personal financial management looks like for digitally minded customers, both today and in the near future. Smart banking concerns not only corporate customers in the age of Industrie 4.0 but also retail customers, among other things through PSD2, the new payment services directive for mobile and online banking in force since 14 September 2019. The article therefore presents the potential of the so-called "Internet of Things" and "Payment of Things" for the banking industry, since payment plays a considerable role in an intelligently connected region. As the internet continuously evolves, so does banking: it is cashless and contactless; it is digital; it is more than mere card payment; it makes use of everyday objects such as the smartphone, formerly the mobile phone, or the smartwatch, formerly the wristwatch; and it is increasingly based on machine learning, big data, blockchain, and artificial intelligence. SMART BANKING offers financial service providers the opportunity for greater customer proximity, making this work relevant both for the financial advisor of the future, who will be digital, and for young banking customers, among them the so-called Young Affluents, who are familiar with the new technologies and deliberately seek the advantages of mobility and flexibility. The theoretical assumption of what the role of financial institutions in a connected ecosystem can look like is confirmed by existing products and services such as eSafe, Fotoüberweisung, yes®, and others, and is vividly illustrated by means of fictional personas resembling the target groups of Sparkasse Darmstadt.

Saskia Templin

Industry 4.0: Why Machine Learning Matters?

Machine Learning is at the forefront of the Industry 4.0 revolution. The amount and complexity of data generated by sensors put the ingestion and interpretation of those data beyond human capability. It is impossible for engineers to optimise an extremely complex production line and to understand which unit or machine is responsible for quality or productivity degradation. It is extremely difficult for engineers to process monitoring or inspection data, as this requires a protocol, a trained and certified engineer, and experience. It is likewise extremely difficult to guarantee the quality of every single product, particularly at high production rates. Artificial intelligence can help address these challenges. Machine learning can be used for predictive or condition-based maintenance: even without knowing the design of the machine (e.g., gearbox stages, bearing design), a machine learning algorithm is capable of monitoring how sensor features deviate from a healthy state. Machine learning can also be used to monitor production quality by automating the quality control process, for example by monitoring additive manufacturing to detect defects during printing and enabling mitigation through real-time machining and reprinting of the defective area, or by ensuring the quality of very complex and sensitive production processes such as MEMS, electronic wafers, solar cells, or OLED screens. Brunel Innovation Centre (BIC) is developing algorithms that combine statistical and signal/image processing for feature extraction with deep learning for automated defect recognition, for quality control and predictive maintenance. Brunel Innovation Centre is also working on integrating these technologies into the digital twin.
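The condition-based maintenance idea above, monitoring how sensor features deviate from a healthy state, can be sketched as follows. This is a generic illustration, not the BIC algorithms: the baseline is a simple mean/standard-deviation model, and the readings and alert threshold are invented.

```python
# Sketch of condition-based monitoring: learn a "healthy state" baseline from
# sensor features, then flag readings that deviate strongly from it.
import statistics

def healthy_baseline(samples):
    """Mean and standard deviation of a feature recorded in a healthy state."""
    return statistics.mean(samples), statistics.stdev(samples)

def deviation_score(value, mean, std):
    """How many standard deviations the current reading is from the baseline."""
    return abs(value - mean) / std

# Vibration RMS readings (arbitrary units) from a machine known to be healthy
healthy_rms = [0.51, 0.49, 0.50, 0.52, 0.48, 0.50, 0.51, 0.49]
mean, std = healthy_baseline(healthy_rms)

# New readings: flag anything more than 3 standard deviations from baseline
for reading in [0.50, 0.53, 0.71]:
    z = deviation_score(reading, mean, std)
    status = "ALERT" if z > 3 else "ok"
    print(f"rms={reading:.2f} z={z:.1f} {status}")
```

The point of the approach is that no machine design knowledge is needed; only representative healthy-state data and a deviation threshold.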

T. H. Gan, J. Kanfoud, H. Nedunuri, A. Amini, G. Feng

Chapter 4. Role of Advanced Computing in the Drug Discovery Process

The classical experimental approach to drug discovery is a tedious task for biological scientists, as it is time-consuming and expensive. With the advent of advanced computing techniques such as artificial intelligence and high-performance computing, the problems of the traditional drug discovery approach can be circumvented. In particular, computational approaches help in analyzing and locating active binding sites and guide the selection of potential drug molecules that can effectively and specifically bind to these sites. Once lead molecules are identified, associated compounds can be chemically synthesized and tested. The iterative, time-consuming process of identifying potential drug molecules can thus be significantly shortened by advanced computing techniques, which in turn helps control the spread of pandemic diseases. This chapter presents some of the popular high-performance techniques, automated statistical methods, and neural network algorithms for data mining in the drug discovery process, along with their scope and application.

Ajitha Mohan, Suparna Banerjee, Kanagaraj Sekar

Chapter 5. From Fuel-Poor to Radiant: Morocco’s Energy Geopolitics and Renewable Energy Strategy

Morocco is a leader in the Middle East and North African region in renewable energy. It seeks to source 52% of its electricity capacity from renewable energy by 2030. A traditionally fuel-poor country, Morocco aims to improve its energy security and to reframe itself as an energy-wealthy country awash with renewable energy resources. It has reorganized its energy organizations to institutionalize and rapidly scale up renewable energy development. The state grapples with multiple, sometimes competing forces in its energy policy. Nearly half of its portfolio will still come from fossil fuels, impeding energy independence and constraining budgets. Its closest neighbor, Algeria, is rich in hydrocarbons, but the countries have poor relations because of the conflict over Western Sahara. This has left Morocco in search of alternatives for natural gas imports and impeded political and economic cooperation, as well as grid integration, in the Maghreb. Moreover, the Moroccan government must balance its need for securing international financing and meeting international goals with domestic pressure to achieve an inclusive energy transition that benefits marginalized stakeholders. This chapter argues that all of these complex factors must be taken into consideration to understand Morocco’s renewable energy policy, overall energy policy, and geopolitical calculus.

Sharlissa Moore

5. Development and Operation of Integrated BIA Solutions

The following chapter addresses the development and operation of company-specific BIA solutions, taking a primarily organizational perspective. It begins with preliminary remarks on the current context of BIA and the strategy-oriented positioning of BIA approaches. It then presents a concept for building and operating an overarching, comprehensive BIA approach that distinguishes between a macro level and a micro level.

Henning Baars, Hans-Georg Kemper

6. Practical Applications

To illustrate the topics presented in the preceding five chapters and their interplay, ten case studies are presented below. They are not to be interpreted as "best-practice solutions" but are intended to illustrate the spectrum of current system implementations, from small data-mart-based reporting solutions, through focused solutions for areas such as CRM or production, to extensive big data installations and complex advanced analytics environments. The case studies are based on real production systems which, for didactic reasons, have been anonymized, modified, and in some cases combined.

Henning Baars, Hans-Georg Kemper

4. Information Provision

The following chapter presents concepts designed to prepare the functions of the information generation layer, together with the results and models they generate, and to make them accessible to different target groups. This covers questions of information visualization and presentation, information distribution, and the channeling of information access (Fig. 4.1).

Henning Baars, Hans-Georg Kemper

1. Business Intelligence & Analytics – Terminology and Framework

The first chapter focuses on delimiting the term pair Business Intelligence & Analytics and on developing a BIA framework that provides the fundamental organizing structure for this book.

Henning Baars, Hans-Georg Kemper

2. Data Provision and Modeling

The preparation and storage of consistent data geared to the needs of decision-makers is the basic prerequisite for deploying powerful BIA application systems. The following chapter addresses this topic area, first explaining historically evolved storage forms of decision-support data. Detailed descriptions follow of established, relationally oriented data warehouse concepts and of so-called operational data stores. The chapter then turns to the fundamental design questions of relational data modeling and presents the current Data Vault approach. Its final section is devoted to big data approaches and the role of big-data-oriented data stores in a BIA context.

Henning Baars, Hans-Georg Kemper

Chapter 10. A Global Society with Data Capital

This chapter reviews the current socioeconomic status to examine what a "global society" suited to data capital looks like. The examination takes three perspectives: the expected political returns on data capital, the possible changes in most countries, and data sovereigns' potential achievements. The returns on data capital may include fewer financial crises, partitioning with more specific descriptions to move beyond a naive "identity politics", and reduced inequities in international assistance. The changes countries might see in relation to data capital concern public debt reduction and governmental capital accumulation. There is, moreover, reasonable cause to believe that data sovereigns will soon own the world if they can better utilize both data resources and data force.

Chunlei Tang

Risk Management for Business Process Reengineering: The Case of a Public Sector Organization

Business Process Reengineering (BPR) is the radical transformation of an organization's structure, culture, and business processes, aiming at the improvement of factors such as cost, quality, service, and speed, and questioning the core of an organization's way of doing everyday business. Enterprise Resource Planning (ERP) implementation is also a process that causes turbulence even in a well-structured and effectively organized corporation. Business history is full of examples of failure in BPR projects and ERP implementation. Therefore, the combination of two high-risk processes applied simultaneously to an organization—especially one characterized by a lack of human resources, an absence of documented workflows, obsolete ICT infrastructure, and a negative corporate culture—makes the adoption of a risk management approach from the initial stages of the project indispensable. The purpose of this paper is to present such a case in a public sector organization so as to highlight the importance of critical success and failure factors for ERP-led BPR projects. The results show that organizational culture, top management commitment, use of information technology, organizational structure, and the ability to incorporate change management are fundamental factors that can contribute to a successful BPR/ERP implementation. On the other hand, resistance to change was found to be the most dominant critical failure factor.

Anastasios Tsogkas, Giannis T. Tsoulfas, Panos Τ. Chountalas

An Empirical Investigation of Analytics Capabilities in the Supply Chain

This study conceptually develops the construct of Supply Chain Analytics Capability. Preliminary analysis of survey data collected from more than 100 firms in India supports several hypotheses relating supply chain analytics architecture modularity and decentralized governance, in a moderating manner, with Supply Chain Analytics Capability. The capability creation path model suggested in this study establishes the antecedents of Supply Chain Analytics Capability and is helpful to firms seeking to develop such a capability.

Thiagarajan Ramakrishnan, Abhishek Kathuria, Jiban Khuntia

The Last Mile Problem and Its Innovative Challengers

While the growth of the e-commerce market and the on-demand economy has changed consumer behavior, this change has also directly impacted the logistics sector. Both small and large enterprises are facing difficulties in meeting the demands and expectations that retailers place on them. This is especially prevalent in last mile delivery services, as scaling costs and delivery output are not properly aligned. Naturally, start-ups with disruptive and innovative ideas have been founded around the globe to help bridge the gap between customer expectations, retailers' demands, and logistics providers' capabilities. The goal of this chapter is to give an introduction to the last mile problem and to introduce small and large players and their ideas on how to solve it.

Aike Festini, David Roth

Rate Comparison and Marketplaces—The Platform Era and Global Freight

Platforms, which connect buyers and sellers at scale, have been a mainstay of the consumer Internet ecosystem for decades, led by companies like Amazon, eBay, Uber, and Airbnb. In recent years, they have increasingly permeated the business environment, beginning with Alibaba for procurement and increasingly with companies like SAP Ariba. The prospect of increased efficiency and access for buyers and improved sales and support for providers makes platforms alluring for the logistics space, particularly given fluctuating capacity, the multiple actors involved, varying support capabilities, and the prospect for reduced costs by automating procurement and management. This essay assesses the platform potential in the logistics space, the drivers of change that make it more relevant than ever, and the four major platform business models being adapted. Finally, it concludes with a case study based on the Freightos Group’s experience in the logistics platform space.

Eytan Buchman

Heterogenous Applications of Deep Learning Techniques in Diverse Domains: A Review

Deep learning (DL) techniques have recently emerged as the most significant techniques for processing big multimedia data. DL networks autonomously extract advanced and inherent features from big data sets using systematic learning methods. Solving real-world problems with DL techniques demands large parallel computing infrastructure to achieve high efficiency. Recent developments have demonstrated that deep learning can outperform humans in some tasks, such as classifying and tracking multimedia data, and deep learning networks can have as many as 150 hidden layers. The performance of deep learning networks generally improves as the size of the input data grows. This paper reviews the literature on applications of deep learning across diverse domains. The authors also carry out a comparative study of various DL methods and highlight their results.

Desai Karanam Sreekantha, R. V. Kulkarni

Chapter 21. TakafulTech for Business Excellence and Customer Satisfaction

Insurance vis-à-vis takaful represents a tool of prime importance in modern economies. It enables the insured to reduce and better manage their risk exposures. The basic feature of an insurance contract is that the insured buys a future promise of payment contingent upon the occurrence of specified events. This means the insured pays the consideration from the very beginning of the contract, but before the insurer is called upon to perform its part, the insurer's security profile may have changed over time. Therefore, the long-term reliability of an insurance/takaful company must be beyond doubt. This has led regulatory authorities to enact regulations aimed at securing the long-term reliability of insurers. The concern for consumer protection has expanded the scope of the insurance supervisory body, and therefore greater consideration may be given to insurance consumer protection measures. The supervisory body needs to frame rules, regulations, and guidelines to ensure customer protection. Strong commitment, integrity, and honesty are essential qualities for all positions within the insurance industry. Further, to keep up with the times, ongoing training and retraining of key personnel is a necessity. The government regulatory body needs to ensure that insurers and Takaful Operators (TOs) adhere to the basic rules and ethics of business.

Kazi Mohammad Mortuza Ali

Service Analytics: Putting the “Smart” in Smart Services

Artificial intelligence in general and the techniques of machine learning in particular provide many possibilities for data analysis. When applied to services, they allow them to become smart by intelligently analyzing data of typical service transactions, e.g., encounters between customers and providers. We call this service analytics. In this chapter, we define the terminology associated with service analytics, artificial intelligence, and machine learning. We describe the concept of service analytics and illustrate it with typical examples from industry and research.

Niklas Kühl, Hansjörg Fromm, Jakob Schöffer, Gerhard Satzger

Developing Real-Time Smart Industrial Analytics for Industry 4.0 Applications

Industry 4.0 refers to the Fourth Industrial Revolution, the recent trend of automation and data exchange in manufacturing technologies. Traditionally, a Manufacturing Execution System (MES) collects data that are used only for periodic reports giving insight into past events; it does not incorporate real-time data for up-to-date reporting. Production targets are mostly predefined before actual production starts, yet various production anomalies occur in the real world and affect these predefined targets. Moreover, a key challenge faced by industry is to integrate multiple autonomous processes, machines, and businesses. A broad objective of our work is to build an integrated view that can make data available in a unified model to support different stakeholders of a factory (e.g., factory planners and managers) in decision-making. In this chapter, we focus on designing an approach for building Industry 4.0 smart services and addressing real-time data analytics that can integrate multiple sources of information and analyze them on the fly. Moreover, we share our experience of applying an IoT and data analytics approach to a traditional manufacturing domain, thus enabling smart services for Industry 4.0. We also present our key findings and the challenges faced while deploying our solution in real industrial settings. The selected use cases demonstrate the use of our approach for building smart Industry 4.0 applications.
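The "integrate multiple sources and analyze them on the fly" idea can be sketched minimally. This is a hypothetical illustration, not the authors' system: the source names ("mes", "iot"), record fields, and values are invented, and the "unified model" is reduced to a rolling per-machine aggregate updated as interleaved events arrive.

```python
# Minimal sketch of on-the-fly integration: events from multiple hypothetical
# sources (an MES feed and an IoT sensor feed) are merged into one stream,
# and a rolling aggregate is maintained per machine in real time.
from collections import defaultdict, deque

WINDOW = 3  # rolling window size per machine

windows = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(record):
    """Fold a record from any source into the unified per-machine view
    and return the current rolling mean for that machine."""
    machine = record["machine"]
    windows[machine].append(record["value"])
    return sum(windows[machine]) / len(windows[machine])

# Interleaved events from two sources arriving over time
events = [
    {"source": "mes", "machine": "M1", "value": 98.0},  # e.g., yield %
    {"source": "iot", "machine": "M1", "value": 97.5},
    {"source": "mes", "machine": "M2", "value": 95.0},
    {"source": "iot", "machine": "M1", "value": 91.0},  # dip: possible anomaly
]
for e in events:
    rolling = ingest(e)
    n = len(windows[e["machine"]])
    print(f"{e['machine']}: rolling mean over last {n} readings = {rolling:.2f}")
```

In a production setting the same fold-and-aggregate step would sit behind a stream processing engine rather than a Python loop, but the integration logic is the same: normalize heterogeneous records into one model, then update stakeholder-facing aggregates incrementally.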

Pankesh Patel, Muhammad Intizar Ali

Chapter 10. Development of the Digital Economy: Problems and Countermeasures

According to the experience of foreign countries and the practice of China, the following potential problems and risks require attention during the development of the digital economy.

Ma Huateng, Meng Zhaoli, Yan Deli, Wang Hualei


There is no specific framework for intangible property (IP) in Canadian tax law other than the general provisions applicable to various types of property. Canada's transfer pricing regime abides by the arm's length principle, and in applying that principle Canada generally follows the OECD Guidelines.

Bruno Amancio de Camargo, Émilie Granger

Chapter 7. Empirical Analysis of Swisscom and Its Confrontation with the Theory

This chapter starts with an introduction to Swisscom: its organization, size, structure, economic performance, R&D activity, and its relationships with academic partners. Then, based on the empirical material gathered during the analysis, the practice of collaboration with academic partners in four consecutive phases (identification, selection, formation, and navigation) is discussed. The next subchapter identifies and describes the key tensions, disturbances, and barriers in collaborative problem-solving and takes account of the remedies and countermeasures identified and implemented by the actors involved in the collaborative processes. Subsequently, the chapter outlines the organizing processes in activity networks as developed and pursued by Swisscom through three perspectives: perspective taking, perspective shaping, and perspective making. Finally, the issues related to the intercultural encounter are discussed, in particular boundary objects, boundary interactions, and the role of boundary brokers in building bridges between different communities.

Rafal Dudkowski

Chapter 3. Big Data, Healthcare and IoT

“Big data refers to the tools, processes and procedures allowing an organization to create, manipulate, and manage very large data sets and storage facilities”—according to

Parikshit N. Mahalle, Sheetal S. Sonawane

Continuous Experimentation with Product-Led Business Models: A Comparative Case Study

Context. Continuous experimentation is used by many companies to improve their products with user data. This study examines the efficacy of continuous experimentation under two different types of business models (product-led and sales-led). Method. Two case companies with a product-led business model and three companies with a sales-led business model were compared against each other; 14 interviewees were selected from the cases. Results. Having a product-led business model enabled four drivers of continuous experimentation: 1) integration of development with sales & marketing, 2) improved prioritization, 3) decreased feature bloat, and 4) product measurability. Conclusions. The takeaway message is that a company must be structured in the right way to obtain benefits from experimentation.

Rasmus Ros

Chapter 10. Intelligent Query Processing

This chapter provides an overview of intelligent query processing, a collection of features designed to improve the performance of your queries. Intelligent query processing was introduced with SQL Server 2017 with three original features under the umbrella of adaptive query processing. The term intelligent query processing, however, was not used until the SQL Server 2019 release, which included five more features. As this is an introductory chapter, I am not covering these features exhaustively; you can refer to the SQL Server documentation for more details.

Benjamin Nevarez

Chapter 8. Information Technology as an Enabler of Digitalization

The information technology function plays a key role in a company's digital transformation. This organization is called upon not only to advise and support all business units during the transition; it must also provide the existing technical IT environment, operate it cost-efficiently and securely, and comprehensively evolve itself. This challenge requires a "two-speed IT" that implements a holistic IT strategy aligned with the digitalization roadmap. Existing application and infrastructure environments must be operated securely while simultaneously being renovated and consolidated in an agile manner. The resulting savings are then used to build the "new 2.0 world" with hybrid cloud architectures, software-defined environments, container-based microservices, S/4HANA, AI-based automation, and app stores that deliver the required services flexibly and with high performance. Security is of great importance in this transformation, not only in business IT but also in plant and vehicle IT. The chapter explains approaches to these topics, including case examples.

Uwe Winkelhake

4. Satellite Functions in Human Resource Management

Satellite functions help implement core functions more effectively and efficiently. They therefore encompass aspects and trends that influence personnel and their availability, performance, and motivation, and that are gaining relevance and momentum because of their effects. Beyond the requirements of contemporary leadership, these include continuous organizational and cultural development, controlling geared to core functions and personnel resources, networking and collaboration with internal and external partners, the deliberate management of diversity, and the active shaping and use of advances in digitalization. Classical personnel diagnostics, for example, makes it possible to assess applicants' suitability, but new software solutions emerging from digitalization open up exciting new possibilities for designing personnel selection even more efficiently and precisely.

Carina Braun, Leena Pundt

Chapter 2. Smartness in the Built Environment: Smart Buildings and Smart Cities

The aim of the present chapter is to provide a literature review of the concept of Smartness in the field of the built environment. The proposed literature review focuses on the urban scale, which today represents the predominant experimentation ground for applying ICTs to service management, in order to extract recurring Facility Management (FM) models and processes transferable to the building scale. Moreover, the chapter introduces a critical analysis of case studies of Internet of Things (IoT) application to service management at both urban and building scales (smart cities, smart buildings, and EU-funded projects) in order to identify best practices of FM information management.

Nazly Atta

Achieving Positive Hospitality Experiences through Technology: Findings from Singapore and Malaysia

Customers’ experience is one of the most impactful factors in the tourism industry. Only by offering customers an excellent experience is it possible to build and ensure long-term customer loyalty. In today’s world, technology plays a key role in providing customers with an excellent customer experience. This study has the objective of analyzing how a positive customer experience can be achieved, and which technologies are necessary to ensure this. Results were collected through a literature review, and qualitative interviews with managers of selected hotels, as well as of attractions in Malaysia and Singapore. The analysis of these hotels and attractions is based on a set of criteria to determine the extent of the adoption of the new standards that contribute to positive online customer experiences. As a conclusion, different perspectives are compared, and positive and negative aspects of the use of modern technologies in the tourism industry are specified and discussed.

Melina Weitzer, Valentin Weislämle

Chapter 4. Practice: AI Tools and Their Possible Applications

This chapter explains and substantiates the possibilities of artificial intelligence in sales in more detail. The countless AI tools and their possible applications are grouped into 20 umbrella categories, each described in detail. You will learn how AI systems can improve the respective sales activities and processes and what concrete application possibilities exist for sales. For each category, example tools are named and a concrete practical example is described. The chapter is intended to enable you to identify concrete opportunities for deploying AI tools in your company. It also aims to give you a better idea of the potential artificial intelligence offers for sales, so that you can derive concrete action items for your sales organization.

Livia Rainsberger

Chapter 2. Benefits: What AI Can Do for Sales

Artificial intelligence is the perfect answer to many challenges of modern sales. Its numerous advantages can be summarized on several levels: it not only generates new business opportunities and enables a better understanding of customer needs but also takes over manual tasks and increases the productivity and effectiveness of sales and planning activities. With its diverse analytical capabilities, it generates useful insights from data, makes them available to sales regardless of time and place, and even offers concrete recommendations for reaching targets. It accesses diverse data in different systems and transforms them into a valuable source of information for sales. In doing so, it solves one of the greatest challenges of sales in digital times: the high and hard-to-manage volume of data.

Livia Rainsberger

Chapter 6. What to Do: Recommendations for Sales Organizations

Anyone who wants to implement artificial intelligence in their own sales organization must first establish the right perspective in connection with a strategic approach. Various AI specifics must be observed, regardless of whether the concern is developing the AI strategy, creating the technological and organizational prerequisites for AI implementation, or the first AI project. Introducing AI into sales organizations is not a one-off project but an AI journey, if one wants to profit from the manifold possibilities of AI technology. This journey should begin well thought out and strategically planned.

Livia Rainsberger

Conceptualizing a Capability-Based View of Artificial Intelligence Adoption in a BPM Context

Advances in Artificial Intelligence (AI) technologies are creating new opportunities for organizations to improve their performance; however, as with other technologies, many of them have difficulties leveraging AI technologies and realizing performance gains. Research on the business value of information technology (IT) suggests that the adoption of AI should improve organizational performance, though indirectly, through improved business processes and other mediators, but research so far has not extensively empirically investigated the way AI creates business value. The paper proposes a capability-based view of AI adoption based on the conception that, with the adoption of AI, an organization develops AI-enabled capabilities – abilities to mobilize AI resources to effectively exploit, create, extend, or modify its resource base. This leads to higher organizational performance through cognitive process automation, innovation, and organizational learning. The first step in this research is to clarify the AI adoption construct. The goal of the paper is thus to provide a conceptual definition, and deeper insights into the components of the AI adoption construct at the organizational level.

Aleš Zebec, Mojca Indihar Štemberger

Chapter 10. Managing Culture for Management Innovation

Management innovation refers to the implementation of management ideas, practices, tools, or structures that are new to the adopting organization. Organizational performance and other innovation successes are found to be closely related to management innovation. Organizations all over the world are hence constantly looking to invent or adopt management innovations. In their role as change agents, HRD professionals are increasingly involved in management innovation. Yet the success rate for implementing management innovation is low, and our knowledge of management innovation in Vietnam is limited. In this chapter, we provide an introduction to management innovation and its determinants and development in Vietnam. We describe emerging management innovations in Vietnam and discuss several challenges for their implementation, in which organizational culture appears to be a critical factor. Further implications for research and practice are also presented.

Loi Anh Nguyen, Anh Lan Thi Hoang

Smart Tourism: Towards the Concept of a Data-Based Travel Experience

The main purpose of the chapter is to identify the potential of big data analysis (BDA) as a tool to enhance the tourism experience by offering products/services that are more personalised to meet each visitor’s unique needs and preferences, as well as to counteract the negative effects of overtourism. The literature review was adapted to define and estimate the significance of data-based experience management. Some examples of big data (BD) application in travel and tourism were identified (e.g. a predictive tourism experience, Internet-based tourist involvement and co-creation, personalisation of the value proposal, effectiveness improvement and promotion enhancement).

Magdalena Kachniewska

18. Erratum to: Blockchain in der maritimen Logistik

The original version of the book was inadvertently published without the author information at the end of Chapter 12. The chapter has now been corrected so that the attributions of the four authors are included.

Robert Stahlbock, Leonard Heilig, Philip Cammin, Stefan Voß

IOT: The Theoretical Fundamentals and Practical Applications

The world is growing at immense speed, and the Internet of Things is now a global need: it has established itself in every sphere of our lifestyle, from smart homes to smart agriculture, from the automobile sector to educational aids, and from automated workplaces to closely monitored industries. By some estimates, the number of IoT devices will grow to around 34 billion in the coming years. IoT has entered our lives by creating automated things that work for humans even without human intervention. It interconnects a wide range of physical devices used in day-to-day life with sensors, actuators, and programming logic, accompanied by network connections that bring this hardware to life and change people's lives. The collected data is analysed to predict trends and suggest corrective measures in different domains, such as government bodies in sectors like healthcare, agriculture, and transportation. IoT has evolved in phases, from traditional embedded systems to M2M to the Internet of Things. It is built from different types of heterogeneous devices connected to the Internet. These devices are hugely diverse owing to the variety of chipsets based on different architectures and configurations. In addition, they are built on different programming platforms, which leads to many standardization issues. To establish connectivity among these devices, one must choose from a pool of dissimilar network and communication protocols in different layers. To overcome this, we need a common standardized model for interoperability between pre-existing devices with different networks and IoT cloud services. The current situation demands the development of a standardized, common model that interconnects different networks and cloud services with heterogeneous hardware and devices, implementing standardization in each layer and combining them into a standardized framework.
In this paper, we aim to compare existing IoT cloud platforms and the challenges of connecting them with different devices. We intend to identify their deficiencies and propose an efficient, well-equipped, and integrated common IoT model that embeds the strengths of existing IoT models and implements some additional required functionalities. The conclusion and the accompanying discussion are meant to outline a complete IoT model that overcomes standardization, heterogeneity, and fault-tolerance issues and hence promotes interoperability of existing devices.

Santosh Kumar Pani, Siddharth Swarup Rautaray, Manjusha Pandey

Chapter 11. Conclusions

Francisco Eduardo Beneke Avila

Data Ingestion and Analysis Framework for Geoscience Data

Big earth data analytics is an emerging field: the environmental sciences stand to profit from its various techniques for handling the enormous amount of earth observation data acquired and produced through observation, and they also benefit from the large storage and computing capacities it provides. However, big earth data analytics requires specifically designed instruments to address its specificities in terms of the relevance of geospatial data, the complexity of processing, and the wide heterogeneity of data models and formats [1]. A data ingestion and analysis framework for geoscience data is the study and implementation of extracting data into the system and processing it for change detection, and of increasing interoperability with the help of analytical frameworks that aim to facilitate a systematic understanding of the data. In this paper, we address the challenges and opportunities in climate data through the Climate Data Toolbox for MATLAB [2] and show how it can help resolve various climate-change-related analytical difficulties.

Niti Shah, Smita Agrawal, Parita Oza

Open Access

Approach to Evaluating the Effect of an Inter-organizational Information System on Performance: The Case of a Destination Management Organization

This research proposes an approach to evaluate the contribution of an inter-organizational information system (IOIS) to processes and organizational performance. Using a process-based framework, the approach was developed from a review of the IS evaluation literature and then refined through an in-depth embedded case study of an IOIS used by a destination management organization (DMO). The need for this research comes from the significant investments in capital and human resources, and the numerous challenges, that IOISs represent for DMOs. DMOs' IOISs are characterized by interdependence among multiple stakeholders with sometimes contradictory interests. The approach developed here is of interest to researchers and practitioners in that it allows for a contextualization of IOIS evaluation and in that it considers the depth and breadth of performance measures.

Marie Claire Louillet, François Bédard, Bertrand Dongmo Temgoua

Metadata Management on Data Processing in Data Lakes

Data Lake (DL) is known as a Big Data analysis solution. A data lake stores not only data but also the processes that were carried out on these data. It is commonly agreed that data preparation/transformation takes most of the data analyst’s time. To improve the efficiency of data processing in a DL, we propose a framework which includes a metadata model and algebraic transformation operations. The metadata model ensures the findability, accessibility, interoperability and reusability of data processes as well as data lineage of processes. Moreover, each process is described through a set of coarse-grained data transforming operations which can be applied to different types of datasets. We illustrate and validate our proposal with a real medical use case implementation.
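The metadata model and operations are not detailed in the abstract; as a rough illustration of what process metadata with lineage tracking can look like (all names and fields below are hypothetical, not the authors' model), the sketch records each coarse-grained transformation and walks the records backwards to recover a dataset's upstream sources:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProcessMetadata:
    """Metadata for one coarse-grained transformation applied in the lake."""
    operation: str              # e.g. "filter", "join", "aggregate"
    inputs: list                # dataset identifiers read by the process
    output: str                 # dataset identifier produced
    parameters: dict = field(default_factory=dict)
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def lineage(catalog, dataset):
    """Walk recorded processes backwards to find every upstream dataset."""
    upstream, frontier = set(), [dataset]
    while frontier:
        current = frontier.pop()
        for proc in catalog:
            if proc.output == current:
                for src in proc.inputs:
                    if src not in upstream:
                        upstream.add(src)
                        frontier.append(src)
    return upstream

catalog = [
    ProcessMetadata("filter", ["raw_admissions"], "adult_admissions",
                    {"predicate": "age >= 18"}),
    ProcessMetadata("join", ["adult_admissions", "lab_results"], "cohort"),
]
print(sorted(lineage(catalog, "cohort")))
# → ['adult_admissions', 'lab_results', 'raw_admissions']
```

Because every process describes its inputs and output explicitly, the same records serve both findability (searching the catalog by operation or parameters) and lineage queries.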

Imen Megdiche, Franck Ravat, Yan Zhao

A Pipeline for Measuring Brand Loyalty Through Social Media Mining

Enhancing customer relationships through social media is an area of high relevance for companies. To this aim, Social Business Intelligence (SBI) plays a crucial role by supporting companies in combining corporate data with user-generated content, usually available as textual clips on social media. Unfortunately, SBI research is often constrained by the lack of publicly available, real-world data for experimental activities. In this paper, we describe our experience in extracting social data and processing them through an enrichment pipeline for brand analysis. As a first step, we collect texts from social media and annotate them based on predefined metrics for brand analysis, using features such as sentiment and geolocation. Annotations rely on various learning and natural language processing approaches, including deep learning and geographical ontologies. The structured data obtained from the annotation process are then stored in a distributed data warehouse for further analysis. Preliminary results, obtained from the analysis of three well-known ICT brands using data gathered from Twitter, news portals, and Amazon product reviews, show that different evaluation metrics can lead to different outcomes, indicating that no single metric is dominant for all brand-analysis use cases.
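As an illustration of the enrichment idea — not the authors' actual pipeline, which uses deep learning and geographical ontologies — the following sketch chains simple annotators over raw clips; the toy sentiment lexicon and all field names are invented for the example:

```python
# Minimal enrichment pipeline sketch: each annotator adds one field to a clip.
POSITIVE = {"great", "love", "fast"}   # toy lexicon, illustrative only
NEGATIVE = {"slow", "broken", "hate"}

def annotate_sentiment(record):
    """Crude lexicon-based polarity; real pipelines use trained models."""
    words = record["text"].lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    record["sentiment"] = "pos" if score > 0 else "neg" if score < 0 else "neu"
    return record

def annotate_source(record):
    """Tag the originating channel, defaulting when it is unknown."""
    record["channel"] = record.get("channel", "twitter")
    return record

def enrich(records, annotators):
    """Run every annotator over every clip, yielding warehouse-ready rows."""
    for r in records:
        for step in annotators:
            r = step(r)
        yield r

rows = list(enrich([{"text": "Love the fast delivery"}],
                   [annotate_sentiment, annotate_source]))
print(rows[0]["sentiment"])  # → pos
```

The resulting flat rows (text plus sentiment, channel, and, in the paper's setting, geolocation) are what would be loaded into the distributed warehouse for multidimensional analysis.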

Hazem Samoaa, Barbara Catania

Designing a Sourcing Ecosystem for Strategic Innovation Through “Big Data” Applications

Published research on innovation from Information Technology and Business Process Outsourcing (ITO/BPO) is rare [1]. Strategic innovation involves high uncertainties better addressed through agile methods and a collaborative approach [1–3]. Key success factors in delivering ITO/BPO innovation are high-quality relationships, trust and collaborative cultures [1–4], and establishing an effective governance configuration. The authors report on longitudinal case studies of a global mining company (“GMC”) and a group of its suppliers aimed at understanding how GMC is developing “big data” applications to generate game-changing innovation. This paper describes how GMC has developed a “big data” platform to support internal staff, customers, consultants and third party suppliers to create applications that can transform global mining and smelting industries to deliver a price premium for GMC’s products. GMC has encountered a shortage of suitably experienced data scientists in its key operating locations resulting in a significant skills gap in its big data program. GMC’s sourcing strategy aims to build an open and collaborative ecosystem that draws upon secondary markets to help fill the skills gap. To create an environment in which open innovation [5] can flourish, GMC established an Analytics Speed Team (AST) as an internal consulting and program management group to drive faster progress with big data applications. A contribution of this research is to identify the role of AST in establishing an effective governance configuration for open innovation. A practical contribution is made by analysing the value of secondary markets for ITO services in a sourcing ecosystem optimised for delivering innovation.

Kevan Penter, Brian Perrin, John Wreford, Graham Pervan

Survey Analysis System for Participatory Budgeting Studies: Saint Petersburg Case

Participatory budgeting, as a part of e-Government, has become a powerful approach that helps citizens solve issues in their local area with support from government. Implementing participatory budgeting in new areas, as well as developing it in existing ones, requires analysis of successful cases integrated with surveys of local citizens. The paper addresses the development of an approach that can be used to conduct surveys, analyze results, and obtain statistics in a semi-automatic way. Starting with an overview of leading analytical tools, the paper identifies requirements and presents a data-analysis system prototype. The prototype is focused on providing an information space for analyzing data. Data is gathered from various public and government information systems, including surveys on participatory budgeting conducted by the governments. The system introduces data-processing mechanisms that are decisive for deeper data understanding. The paper also describes the prototype's microservice architecture, including the reasoning for each chosen technology. The prototype was evaluated on a survey conducted in a Saint Petersburg municipality.

Nikolay Teslya, Denis Bakalyar, Denis Nechaev, Andrei Chugunov, Georgiy Moskvitin, Nikolay Shilov

The Intellectual Structure of Business Analytics: 2002–2019

This paper identifies the intellectual structure of business analytics using document co-citation analysis and author co-citation analysis. A total of 333 research documents and 17,917 references from the Web of Science were collected. From the analysis, 15 key documents and nine clusters were extracted to clarify the sub-areas. Furthermore, burst detection and timeline analysis were conducted to gain a better understanding of the overall changing trends in business analytics. The main implication of the results lies in their ability to present past and present standards to practitioners and to suggest directions for further research in business analytics to researchers.
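Document co-citation analysis counts how often two references appear together in the reference lists of citing papers; highly co-cited pairs anchor the clusters. A minimal sketch of the counting step (with invented document ids, not the paper's dataset) could look like:

```python
from itertools import combinations
from collections import Counter

def cocitation_counts(reference_lists):
    """Count how often each pair of references is cited together.

    `reference_lists` holds one list of cited-document ids per citing paper;
    ids are sorted within a pair so (a, b) and (b, a) collapse together.
    """
    counts = Counter()
    for refs in reference_lists:
        for a, b in combinations(sorted(set(refs)), 2):
            counts[(a, b)] += 1
    return counts

papers = [
    ["davenport2007", "chen2012", "sharma2014"],
    ["chen2012", "davenport2007"],
    ["chen2012", "holsapple2014"],
]
counts = cocitation_counts(papers)
print(counts[("chen2012", "davenport2007")])  # → 2 (co-cited in two papers)
```

The resulting pair counts form a weighted co-citation network on which clustering and burst detection can then be run.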

Hyaejung Lim, Chang-Kyo Suh

Controlling in Media Companies

Controlling is understood as a system of management support that constitutes an essential part of corporate management, especially in competitive and risk-laden business fields. The term media controlling covers the corresponding internal management services that take the specifics of media products and business models into account and that are meant to support the management of media companies through adaptations at the conceptual and instrumental level. Owing to increasing competition, pressure to innovate, and diversification in the media industry, controlling has gained strongly in importance here as well. At the same time, the external framework conditions, coupled with the specifics of media production, continue to pose a challenge for controlling. Starting from a presentation of the status quo of media controlling, which focuses in particular on broadcasting companies, the contribution outlines the premises and framework conditions required for effective media controlling. It then attempts to position media controlling as a concept of communication controlling and thereby to bring a new perspective into the discussion. This serves both the business-administration and the communication-science view of media companies: the former treats media companies as economic units subject to economic laws, while the latter also integrates their impact beyond the economic into the analysis.

Monika Kovarova-Simecek, Tatjana Aubram

Social Media

"Social media" – understood as applications based on digital, networked, and in principle generally accessible technologies that are used socially to create, modify, or exchange content of various kinds – have today become indispensable in almost every sphere of life. Through their characteristic network and viral effects, they have spread spatially and temporally with unprecedented dynamism. Social media have not only enriched the repertoire of media offerings but have, more generally, expanded the options for social interaction, changed structures of value creation, and enabled new business models. The operators of these platforms have thus become significant economic and political actors far beyond the media industry.

Castulus Kolo

Abstracts of Industry Contributions

Peter Haber, Thomas J. Lampoltshammer, Manfred Mayr, Kathrin Plankensteiner

SmartHealth: IoT-Enabled Context-Aware 5G Ambient Cloud Platform

In previous chapters, we have seen that the world is changing rapidly and that the concept of ambient healthy living is gaining pace. This chapter presents how patients receiving smart e-health ambient services at home can be shifted onto cloud and IoT infrastructure for better collaboration with doctors. The medical field is overwhelmed with heterogeneous big data arriving at high velocity and in many formats. This creates the opportunity to integrate big data analytics, achieving ambience in the cloud for all sectors, including the healthcare community: finding trends and patterns by mapping electronic health records (EHRs) into a universal data model to provide improved, individualized, and timely care that saves lives at lower cost through shared resources in the cloud. The promise of SmartHealth becoming reality rests on evolving data science technologies, such as unsupervised and supervised learning, to model unstructured data in compliance with HL7 and HIPAA standards and to find hidden patterns, forming graph analytics or sequential patterns for parallel processing. Our contribution is therefore to converge these platforms into a single unified entity in the form of a SmartHealth 5G context-aware cloud platform over Cloud Intellect (Ci), which would be recognized as a by-product of the standardized learning healthcare system described by Professor Friedman of the Institute of Medicine (IOM).

Farzana Shafqat, Muhammad Naeem A. Khan, Sarah Shafqat

Chapter 8. Strengths and Weaknesses of the Enterprise’s Information System

The enterprise is at the heart of the economic activities of modern societies, where it has a special status as both a consumer and a producer of goods and services within the commercial sphere. In addition, this special status allows it to interact with other state and private spheres. The societal exchanges of the firm can be represented in several ways that mark the difference with an individual exchange: a decision-making centre acting with others in a responsible and enlightened way; a mesh network in which each participant, in turn, produces and consumes the goods produced by others; and a production centre that manufactures a good or offers a service.

Walter Amedzro St-Hilaire

Observation and Control of Smart Grid Using IoT and Cloud Technology

Traditional electrical grids, which deal with generation, transmission, and distribution, rely on one-way communication and are centered on the distributor. They also lack routing and interconnection of different types of generation, which reduces the reliability of supply. To overcome this, we propose a comprehensive smart grid architecture and its observation and control using the Internet of Things (IoT) and cloud technology. We integrate different sources of energy (thermal and renewable) in one system and introduce automatic routing of electric power according to peak and low demand. The system also incorporates smart metering using IoT and cloud technology. It can gather, send, and receive real-time data, providing two-way communication of power-consumption data between the customer's end and the service provider. All data can be stored, accessed, and analyzed on a secure, customized API platform. We built and thoroughly tested a demonstration model with a Wi-Fi-based smart meter, which helped us monitor and control electric power and enhance the reliability of the system.

Salil Tiwari, Vatsal Agrawal, Sanjiv Kumar Jain, Piyush Kumar Shrivastava

Provchastic: Understanding and Predicting Game Events Using Provenance

Game analytics has become a very popular and strategic tool for business intelligence in the game industry. One of its many aspects is predictive analytics, which generates predictive models using statistics derived from game sessions to predict future events. The generation of predictive models is of great interest in the context of game analytics, with numerous applications for games such as predicting player behavior, the sequence of future events, and win probabilities. Recently, a novel approach emerged for capturing and storing data from game sessions using provenance, which encodes cause-and-effect relationships together with the telemetry data. In this work, we propose a stochastic approach for game analytics based on that game provenance information. The approach unifies the provenance data gathered from different game sessions into a probabilistic graph that determines the sequence of possible events using the commonly known stochastic model of Markov chains, and it also allows for understanding the reasons for reaching a specific state. We integrated our solution with an existing open-source provenance visualization tool and provide a case study using real data for validation. We observed that it is possible to create probabilistic models from provenance graphs for short and long predictions and to understand how a specific state is reached.
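The Markov-chain idea can be illustrated independently of the authors' provenance tooling: from logged event sequences, first-order transition probabilities are estimated by counting consecutive event pairs. All event names below are invented for the example:

```python
from collections import defaultdict

def transition_probabilities(sessions):
    """Estimate first-order Markov transition probabilities from event logs.

    `sessions` is a list of event sequences, one per game session; each
    consecutive pair (src, dst) counts as one observed transition.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for events in sessions:
        for src, dst in zip(events, events[1:]):
            counts[src][dst] += 1
    return {s: {d: n / sum(out.values()) for d, n in out.items()}
            for s, out in counts.items()}

sessions = [
    ["spawn", "collect", "fight", "win"],
    ["spawn", "fight", "lose"],
    ["spawn", "collect", "fight", "lose"],
]
probs = transition_probabilities(sessions)
print(probs["spawn"])  # → {'collect': 0.666..., 'fight': 0.333...}
```

Chaining these probabilities along a path gives the likelihood of an event sequence, while the provenance edges behind each transition (in the authors' approach) explain why a state was reached.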

Troy C. Kohwalter, Leonardo G. P. Murta, Esteban W. G. Clua