
Open Access

Chapter 3. “Vision 2050” to the Rescue of a “Limited Earth”

Next let us consider the second paradigm—“The Limited Earth.” The problems caused by the fact that the Earth is limited are far-reaching. These include not only energy, resources, global warming, air pollution, water pollution, soil pollution, food, and water, but also—if we think broadly—such problems as the wide-scale spread of infectious diseases of people and livestock. The reason is that the probability of virus mutation and transmission increases along with the probability that wild animals come into contact with livestock, livestock with other livestock, humans with livestock, and so on. And in turn, the probability of contact on the limited surface of the Earth increases in proportion to the square of the population density.

Hiroshi Komiyama
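The scaling claim in this abstract can be illustrated with a minimal back-of-the-envelope sketch (all numbers hypothetical): if every pair of individuals in a region has the same small chance of meeting, the number of possible pairs grows with the square of the population, so the expected contact rate per unit area grows with the square of the population density.

```python
def contacts_per_area(density, pair_rate=1e-6):
    """Expected pairwise contacts per unit area, as a minimal encoding of
    the scaling law stated above: with N ~ density individuals per unit
    area there are ~N**2 / 2 possible pairs, so the contact rate per unit
    area is proportional to density squared. `pair_rate` is an invented
    per-pair meeting rate used purely for illustration."""
    return pair_rate * density ** 2 / 2

low = contacts_per_area(100)   # e.g. 100 people per square kilometre
high = contacts_per_area(200)  # doubling the density...
print(high / low)              # ...quadruples the expected contact rate: 4.0
```

The ratio, not the absolute numbers, is the point: doubling density quadruples contacts, which is the mechanism the chapter invokes for disease spread on a limited surface.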

Chapter 6. Association Models

The association models, appropriate for the analysis of ordinal contingency tables, are presented for two-way and multi-way contingency tables. Their features, properties, and the associated graphs are discussed. The models of uniform association (U), row effect (R), column effect (C), multiplicative row–column effect (RC), and the more general RC(M) model are illustrated with examples in terms of fit, presentation, and interpretation. They are all worked out in R, through functions provided for their fit and the construction of their scores’ plots.

Maria Kateri
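A defining property of the uniform association (U) model mentioned above is that, with equally spaced scores, every local odds ratio of the table equals exp(φ). The sketch below (not from the chapter, and using invented row/column parameters) builds expected frequencies of the form log m_ij = λ_i + μ_j + φ·i·j and checks that property numerically:

```python
import math

def uniform_association_table(I, J, phi, row=None, col=None):
    """Expected cell frequencies under the uniform association (U) model:
    log m_ij = row_i + col_j + phi * i * j, with integer scores i, j.
    The row/column parameters below are arbitrary illustrative values."""
    row = row if row is not None else [0.1 * i for i in range(I)]
    col = col if col is not None else [0.2 * j for j in range(J)]
    return [[math.exp(row[i] + col[j] + phi * i * j) for j in range(J)]
            for i in range(I)]

def local_odds_ratios(m):
    """Local odds ratios theta_ij = (m_ij * m_{i+1,j+1}) / (m_{i,j+1} * m_{i+1,j})."""
    return [[m[i][j] * m[i + 1][j + 1] / (m[i][j + 1] * m[i + 1][j])
             for j in range(len(m[0]) - 1)] for i in range(len(m) - 1)]

m = uniform_association_table(3, 4, phi=0.5)
# Under the U model every local odds ratio equals exp(phi) = exp(0.5):
print(local_odds_ratios(m))
```

The row and column parameters cancel in each local odds ratio, leaving exp(φ·(ij + (i+1)(j+1) − i(j+1) − (i+1)j)) = exp(φ), which is why φ alone captures the association.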

Chapter 3. Analysis of Multi-way Tables

Issues discussed in Chap. 2 for two-way tables are extended to multi-way contingency tables. Emphasis is given to clarifying the concepts of partial and marginal association. Further on, stratified 2 × 2 tables are analyzed by the Mantel–Haenszel and the Breslow–Day–Tarone tests. Types of independence for three-way tables are introduced. Graphs are presented for multi-way contingency tables while fourfold plots are used to visualize stratified 2 × 2 tables. All examples are implemented in R.

Maria Kateri

Chapter 2. Analysis of Two-way Tables

Basic concepts of two-way contingency table analysis are introduced. Descriptive and inferential results on estimation and testing of basic hypotheses are discussed and illustrated in R. In particular, the comparison of two independent proportions, the test of independence for 2 × 2 and I × J contingency tables, the linear trend test, and Fisher’s exact test are presented. Special emphasis is given to the odds ratio for 2 × 2 tables, while the generalized odds ratios for I × J tables are treated in detail. Finally, graphical displays of categorical data (barplot, fourfold plot, sieve diagram, and mosaic plot) are derived using R for examples of this chapter and discussed.

Maria Kateri
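The two quantities the abstract emphasizes for 2 × 2 tables, the odds ratio and the test of independence, can be computed directly from the four cell counts. The sketch below (illustrative counts, not an example from the chapter) uses the standard shortcut formula for the Pearson chi-squared statistic:

```python
def odds_ratio(table):
    """Sample odds ratio of a 2x2 table [[a, b], [c, d]]: (a*d) / (b*c)."""
    (a, b), (c, d) = table
    return (a * d) / (b * c)

def chi_square_2x2(table):
    """Pearson chi-squared statistic for independence in a 2x2 table,
    via the shortcut n * (ad - bc)**2 / ((a+b)(c+d)(a+c)(b+d))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

table = [[10, 20], [30, 40]]            # hypothetical counts
print(round(odds_ratio(table), 4))      # 0.6667
print(round(chi_square_2x2(table), 4))  # 0.7937
```

An odds ratio below 1 indicates a negative association between the row and column classifications; the small chi-squared value here (below the 3.84 critical value at the 5% level for 1 degree of freedom) would not reject independence.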

Open Access

Chapter 2. The Long Legends: Transcription, Translation, and Commentary

The long legends on the Carta marina will be addressed sheet by sheet, and within each sheet, from left to right and top to bottom. The legends are numbered with a two-number system, the first indicating the sheet, and the second the number of the legend on that sheet. The commentary aims to identify the sources of the legends whenever possible, and remarks will also be offered on images associated with the legends when necessary.

Chet Van Duzer

Chapter 3. Western Dissensus, Non-Western Consensus: A Q Study Into the Meanings of Peace

This chapter reports the results from a Q study amongst professional peace practitioners. It introduces five different visions of peace, which are compared along the dimensions identified in Chap. 2. The main argument developed in the chapter is that rather than a (Western) liberal peace consensus, a non-Western consensus can be observed. According to most of the Lebanese and Mindanaoan interviewees, peace is a personal endeavour. The Dutch, on the other hand, are divided over all five visions. The chapter also trims down the seven dimensions found in Chaps. 2, 3, and 4 to four: ontology (whether peace is seen as a process or a goal), domain (whether it is a personal or a political objective), its embedding in individuals or institutions, and the scope of a vision. These four dimensions make up the peace cube that is used in the rest of the book.

Gijsbert M. van Iterson Scholten

6. Iran and the International Community: A Counter-Hegemonic Norm Breaker?

The Islamic Republic of Iran has a long history of being perceived as a counter-hegemonic norm breaker. In this chapter, Wunderlich explores the world order conceptions underlying Iran’s behavior in and toward the international system. Departing from the country’s historical and geographical context as well as its institutional setting, Wunderlich draws attention to the ideological foundations of Iran’s foreign and security policy. The chapter concludes that Iran’s behavior toward the normative order is ambivalent: Notwithstanding norm transgressions and discursive as well as practical contestation in some policy fields, the display of an antagonistic attitude and the propagation of a normative alternative aim toward a reform of the Western liberal-shaped normative order rather than its complete overthrow.

Carmen Wunderlich

Chapter 4. How to Make Community-Oriented Policing Customer Oriented: A Service Design Concept for Policing in Social Media

Social media strategy—like any strategy—for safety and security organizations should be based on up-to-date situational awareness, i.e., knowledge of the various security needs of citizens and communities. It is therefore important to evaluate the underlying generative mechanisms behind security problems, such as low levels of trust in, or a lack of intercultural skills within, the police. This chapter discusses service design concepts for policing in social media. It aims at offering police forces and their partners a better understanding of local needs and suggests the service design canvas as a tool for improving policing services. The service design canvas could assist in constructing a social media strategy that builds safety and security not only for but with communities and citizens.

Olavi Kujanpää, Kari Pylväs, Pirjo Jukarainen, Jarmo Houtsonen, Jari Taponen

5. Managing Climate Risk in a Major Coffee-Growing Region of Indonesia

Indonesia is currently one of the top four coffee-exporting countries in the world. Climate change is projected to cause significant impacts on coffee. Without proper adaptation measures, it will significantly lower production. Changes in rainfall and increases in temperature will affect phenological development, which would eventually influence the yield and quality of the crop, including the potential risks of pest and disease attacks. An assessment in Toba, a major coffee-growing region of Indonesia, indicated that by the middle of this century (the 2050s), under climate scenarios RCP4.5 and RCP8.5, suitable areas for coffee production would decrease significantly. The average yield is projected to decrease by between 25% and 75% of the current yield. However, the highlands that are currently not suitable for coffee (>1500 m above mean sea level) are projected to become suitable, with a higher yield than at present. A significant increase in rainfall during the rainy season and a prolonged dry season will also affect coffee phenological development, shifting the peak of the coffee flowering and harvesting seasons in Toba. The severity of attacks by the coffee berry borer Hypothenemus hampei (Ferrari) will also increase in the future. Current crop management farming practices should be adjusted and improved to adapt to such change.

Rizaldi Boer, Syamsu Dwi Jadmiko, Purnama Hidayat, Ade Wachjar, Muhammad Ardiansyah, Dewi Sulistyowati, Anter Parulian Situmorang

4. Strategies for Scaling Up the Adoption of Organic Farming Towards Building Climate Change Resilient Communities

Adjustments and adaptive responses to diminishing resources (land, water, and energy) in agriculture due to population increase and climate change in recent decades are varied. Proactive adaptive coping mechanisms must be instituted to avoid massive starvation. Organic and agroecological innovations are the logical options. But organic farming is not a one-size-fits-all solution. While organic farming is considered one of the solutions to farming in crisis, there are many barriers to its adoption. Among these constraints are (1) the nature of organic farming, which is difficult, laborious, and knowledge- and skills-intensive, the required environment (air, soil, and water), and the certification requirement, and (2) support systems from government and consumers that are not in place. Scaling up the adoption of organic farming has a number of prerequisites, specifically:
1. innovation from farmers—the farmers as innovators alongside scientists/technologists from academia and science and technology (S/T) institutions;
2. reengineering agri-food systems into agroecotourism as a way of attracting farm visitors and tourist-enthusiasts, drawing human interest and investment flows to rural areas, generating rural employment, and slowing down or stopping out-migration to urban areas and overseas work (OFW);
3. innovative governance-led promotion, expediting the shift from capital- and resource-intensive (land, water, energy, inputs) to restorative, regenerative, and vibrant agriculture and food systems, with this system shift financed by an innovative ecological carbon emission–soil erosion–water consumption tax funding the transition and conversion to agroecology-based organic agriculture;
4. an innovative paradigm shift from food security to health security—from financesurance to healthsurance, from financial banking to health banking, from measuring yield per acre to health per acre as the world transitions agriculture and food systems from agrochemical-intensive monoculture to organic polyculture cropping systems;
5. an innovative shift from a supply chain to a value chain approach in agriculture and food systems, where implementing these innovations requires 4Ps and 2Ms (preproduction, production, processing, and postproduction linkages, plus marketing and management);
6. a demand-led (consumer) instead of supply-led (farmer) approach to promotion;
7. and, finally, a consumption-led greening of agroecosystems by minimizing food waste, consuming only what we can, and reducing the thermodynamic loss in food by consuming less and less meat.

Teodoro C. Mendoza, Roselyn Furoc-Paelmo, Hazel Anne Makahiya, Bernadette C. Mendoza

12. Climate-Smart Agriculture: Assessment and Adaptation Strategies in Changing Climate

Climate change is the most critical threat to food security amid increasing crop demand. This increasing demand for food has previously been met through the use of synthetic fertilizers and the effective application of weed- and pest-controlling chemicals. However, these methods of increasing crop productivity rely on finite resources and are often unsustainable. They are now proven to pose a great threat to the environment and to be causing a negative change in the planet’s natural climate. Fortunately, scientists have recognized the threat, and the world has started to lay the foundations for the sustainable intensification of agriculture and to heighten the resilience of crops to climate change. The solutions discovered so far are numerous, with many of them not yet tested. Climate change assessment is the first priority in this regard. Much recent research has demonstrated the multi-scale and multidimensional nature of climate change in assessing its potential effects on agriculture and the options for adaptation. These options differ across regions of the world, with clear differences between the strategies of rich and poor countries. The pressure for adaptation is greatest in poor countries, where adaptive capacity is least abundant. Adaptation to climate change can be autonomous (market-driven) or planned. Both of these adaptation strategies are driven by certain measures. Some adaptation strategies are easily achieved with existing technologies, some need the development of new technologies, while others just need policy and institutional/market reforms. Numerous researchers have tried to assess the potential impact of climate change and provide tools for doing so, largely based on modelling techniques.
Indeed, models are useful tools for assessing this potential impact and evaluating the options for adaptation, yet they do not match the level of real solutions that could be brought about by efficient adaptive human agency. The notion of agriculture as performance is useful in counterbalancing modelling approaches towards mitigating the negative impacts of climate change. Adaptation and mitigation strategies are, and should be, social phenomena that need social attendance in the form of improved and sustainable agricultural practices, and they could help agriculture contribute less to the changing climate. This chapter focuses on numerous strategies that could be adopted to assess and cope with the negative impacts of a changing climate on agriculture.

Muhammad Arif, Talha Jan, Hassan Munir, Fahd Rasul, Muhammad Riaz, Shah Fahad, Muhammad Adnan, Ishaq Ahmad Mian, Amanullah

Chapter 1. Finite Zero Butterworth and Chebyshev Filters

An accurate expression is derived for the minimum order of a finite-zero Butterworth filter (Dutta Roy in IEEE Trans AU-19(1):58–63, 1971) [1], which has a steeper cut-off slope than the corresponding Chebyshev filter (Agarwal and Sedra in IEEE Trans AU-20(2):138–141, 1972) [2]. The inaccuracy occurring in the latter is illustrated by a numerical example.

Suhash Chandra Dutta Roy
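For context on the minimum-order question the abstract addresses: the classical all-pole Butterworth low-pass design has a well-known closed-form minimum order, sketched below. This is the standard textbook formula, not the finite-zero expression derived in the chapter (whose point is that finite-zero designs achieve a steeper cut-off for a given order).

```python
import math

def butterworth_order(Ap, As, wp, ws):
    """Minimum order of a classical all-pole Butterworth low-pass filter
    meeting passband attenuation Ap (dB) at edge wp and stopband
    attenuation As (dB) at edge ws (ws > wp):
        n >= log10[(10**(As/10) - 1) / (10**(Ap/10) - 1)] / (2*log10(ws/wp))
    rounded up to the next integer."""
    num = (10 ** (0.1 * As) - 1) / (10 ** (0.1 * Ap) - 1)
    return math.ceil(math.log10(num) / (2 * math.log10(ws / wp)))

# Example: 1 dB passband ripple, 40 dB stopband attenuation, ws/wp = 2
print(butterworth_order(1.0, 40.0, 1.0, 2.0))  # 8
```

With these specifications a Chebyshev design would need only order 5, which is the kind of gap the finite-zero Butterworth approach narrows.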

Chapter 9. Enhancement of Feedstock Composition and Fuel Properties for Biogas Production

Biogas production has emerged as a promising technology for the conversion of renewable energy sources such as agricultural, animal, industrial, and municipal wastes into a beneficial form of energy. It is an attractive yet challenging task because of the slow degradation of these wastes, which requires a long retention time in the anaerobic digestion (AD) process. Additionally, toxic intermediates formed from some of these feedstocks may cause a decline in the biogas production process. Biogas technology can be integrated with various strategies to mitigate environmental pollution, and the high availability and low cost of these feedstocks promote new strategies for the minimization of waste. Considerable research efforts are being undertaken to upgrade the composition of the feedstock, the efficiency in terms of fuel properties, and the flexibility of biogas production, in order to enhance the economic viability of biogas plants. Along with methane, biogas contains various compounds such as CO2, H2S, water vapor, nitrogen, hydrogen, and oxygen, which tend to pull down its calorific value compared with natural gas. Absorption, adsorption, cryogenic methods, and membrane-based gas permeation are several technologies employed to improve the fuel properties of biogas.

S. Chozhavendhan, G. Gnanavel, G. Karthiga Devi, R. Subbaiya, R. Praveen Kumar, B. Bharathiraja

Chapter 16. Signal Processing Applications of Charge-Coupled Devices

The charge-coupled device, first disclosed in the Bell System Technical Journal in 1970, is an analog, sampled-data delay line and has made a significant impact in the field of signal processing. In this chapter, a brief introduction to the device is followed by the considerations which make it so important and convenient in signal processing applications. A number of specific applications are outlined and their advantages and limitations are briefly discussed. In particular, the effect of the most important limiting factor, namely charge-transfer inefficiency, is discussed in some detail and several methods of compensation are outlined. The chapter concludes with an indication of the work being done at IIT Delhi on the various aspects relating to theory, fabrication, and signal processing applications of the device.

Suhash Chandra Dutta Roy

Chapter 15. Some Aspects of Digital and CTD Signal Processing

The work done by me and my students in the fields of digital and charge transfer device signal processing is briefly described. Some general comments on the future scope of work in these two related fields are included.

Suhash Chandra Dutta Roy

Chapter 9. Improvement of Flame Kernel Growth by Microwave-Assisted Plasma Ignition

Due to the depletion of petroleum resources and environmental concerns, the automobile industry has been developing new engine technologies within a cost range acceptable to consumers. Among many new technologies, the application of a non-thermal plasma ignition system is considered a promising path to achieving high-efficiency clean gasoline vehicles. In this study, we developed a microwave-assisted plasma ignition system using a 3 kW, 2.45 GHz magnetron with customized electric components and ignitor. This system was tested in a constant volume combustion vessel to investigate the effects of microwave ejection on ignition kernel growth. High-speed shadowgraph imaging and hydroxyl (OH) radical imaging were carried out under various air-fuel ratio, ambient pressure, and ignition strategy conditions. In-cylinder pressure measurements were also performed to compare the combustion phase between the conventional spark and the microwave-assisted plasma ignition system. The experimental results showed that microwave ejection onto the thermal plasma created by conventional discharge significantly improved initial flame development. The microwave-assisted plasma ignition system exhibited an advanced combustion phase with an extended lean limit where conventional spark ignition failed to achieve flame propagation. OH imaging of the propagating flame showed much higher intensity in the microwave-assisted plasma ignition case. Analysis of the light emission spectrum showed a 7,000 K higher electron temperature in the plasma created with microwave ejection. This implies that chemical reactions which could not progress with conventional spark ignition were enabled by the additional non-thermal plasma induced by the electromagnetic wave. On the other hand, the enhancement in flame development decreased under high-pressure conditions due to the lower reduced electric field.

Joonsik Hwang, Wooyeong Kim, Choongsik Bae

Chapter 10. Impact of Bioenergy on Environmental Sustainability

Energy and the environment are vital elements of our daily life and a way forward for viable development. Fossil fuels are widely used as primary energy sources, which threatens their depletion along with the formation of various harmful greenhouse gases. This necessitates the efficient utilization of energy and access to alternative energy resources such as bioenergy. The availability of biomass, the competition between its various uses, and sustainability issues have always been major concerns for bioenergy deployment. In spite of its wide applications, there has been little study of the environmental effects of bioenergy. This raises challenges that call for multidisciplinary research related to environmental sustainability. The production of bioenergy offers significant prospects for delivering a series of environmental, social, and economic benefits in addition to energy and climate goals. Bioenergy plays a vital role in opening up better chances for agricultural markets and in endorsing sustainable growth in rural communities. Proper planning and management might yield multiple benefits from bioenergy synergies with the production of food, water, ecosystems, and health. This chapter surveys the pertinent literature on the environmental sustainability implications of bioenergy production. In this context, the chapter also deals with bioconversion technologies and their impacts on the environment and applications, greenhouse gases and biodiversity, etc.

Kankan Kishore Pathak, Sangeeta Das

Modeling and Experimental Data on the Dynamics of Predation of Rice Plants and Weeds by Golden Apple Snail (Pomacea Canaliculata)

The golden apple snail (GAS) (Pomacea canaliculata), popularly known in the Philippines as “golden kuhol”, is considered a serious invader of paddy ecosystems. Rice farmers consider it a notorious invasive species and a serious pest in several rice farms. In this study, we model the predation of rice plants and weeds by GAS. We formulate three ordinary differential equations to model the simplified dynamics of apple snails, rice plants, and weeds in the presence of harvesting of snails. We then investigate the mathematical features of the model and analyze the stability of its equilibria. Actual death and harvesting rates of GAS, gathered from field experiments conducted in enclosed rice paddies, are used as parameters in the numerical simulations to demonstrate the potential effect of snail harvesting.

Joel Addawe, Zenaida Baoanan, Rizavel Addawe

A Tuberculosis Epidemic Model with Latent and Treatment Period Time Delays

In this paper, a Susceptible-Exposed-Infectious-Treated (SEIT) epidemic model with two discrete time delays for the disease transmission of tuberculosis (TB) is proposed and analyzed. The first time delay τ1 represents the time of progression of an individual from latent TB infection to active TB disease, and the other delay τ2 corresponds to the treatment period. We begin our mathematical analysis of the model by establishing the existence, uniqueness, nonnegativity, and boundedness of the solutions. We derive the basic reproductive number R0 for the model. Using LaSalle’s Invariance Principle, we determine the stability of the equilibrium points when the treatment success rate is equal to zero. We prove that if R0 < 1, then the disease-free equilibrium is globally asymptotically stable. If R0 > 1, then the disease-free equilibrium is unstable and a unique endemic equilibrium exists which is globally asymptotically stable. Numerical simulations are presented to illustrate the theoretical results.

Jay Michael R. Macalalag, Elvira P. De Lara-Tuprio, Timothy Robin Y. Teng
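The threshold behaviour around R0 that the abstract describes can be illustrated with a much simpler sketch than the authors' model: the simulation below replaces the discrete delays τ1 and τ2 with exponential stage transitions, uses invented rates, and integrates the delay-free SEIT-style compartments with a forward-Euler step. It is an illustration of the R0 < 1 versus R0 > 1 dichotomy only, not the paper's delay model.

```python
def simulate_seit(R0, days=400, dt=0.1):
    """Forward-Euler integration of a simplified, delay-free SEIT-style
    model with hypothetical rates: sigma is progression from exposed (E)
    to infectious (I), gamma is the treatment/removal rate, and the
    transmission rate beta is chosen so that beta/gamma = R0.
    Returns the final infectious fraction and the peak infectious fraction."""
    sigma, gamma = 0.2, 0.1
    beta = R0 * gamma
    S, E, I, T = 0.99, 0.0, 0.01, 0.0   # fractions of a closed population
    peak = I
    for _ in range(int(days / dt)):
        new_inf = beta * S * I          # new infections per unit time
        S -= dt * new_inf
        E += dt * (new_inf - sigma * E)
        I += dt * (sigma * E - gamma * I)
        T += dt * gamma * I
        peak = max(peak, I)
    return I, peak

I_lo, peak_lo = simulate_seit(R0=0.8)   # below threshold: infection fades
I_hi, peak_hi = simulate_seit(R0=4.0)   # above threshold: an epidemic peak
print(I_lo < 1e-3, peak_hi > peak_lo)
```

The qualitative outcome mirrors the paper's stability results: below the threshold the disease-free state attracts the dynamics, above it the infection takes off.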

Chapter 5. Timeless Leadership – What Holds the (Organizational) World Together at Its Core

When the currently discussed leadership approaches are broken down to their deeper, most intuitive meaning, the result is a set of almost timelessly relevant tasks that face people in any position of responsibility. The composition and dose in which they are appropriate depend on the situation, task, context, and counterpart. Five such tasks are examined more closely in the following: 1) developing inner clarity, 2) setting boundaries, 3) cultivating relationships, 4) shaping ambiguity, and 5) conveying psychological safety.

Prof. Dr. Claudia Gerhardt

Chapter 4. Leading in the VUCA World – But How?

The complexity of the outside world also leads to a diversification of ideas about leadership. Three possible options, approaching the topic from different sides, are presented in the following: the search for new metaphors, the differentiation of leadership styles, and the distribution of leadership dimensions. Subsequently, the question of why it is so difficult to put this knowledge into practice is explored.

Prof. Dr. Claudia Gerhardt

Chapter 3. The Employee of the Future – Professional and Personal Competencies

Attitude before expertise – that is the recruiting motto in the VUCA world! The HR maxim “Hire for attitude, train for skills” increasingly presupposes the willingness of employees and managers to change with their jobs at any time and to develop continuously. The inner attitude must be right: there is always more to learn. In other words: lifelong learning as the decisive criterion for success!

Brigitte Ehmann

Chapter 2. Special Diodes and Linear Wave Shaping

A number of diodes are available that have a single P–N junction but different modes of operation, different methods of construction, different characteristics, and special areas of application. Such diodes are called special diodes. Some of these special diodes are the Zener diode, varactor diode, Schottky diode, tunnel diode, photodiode, light-emitting diode, and so on.

Prof. Dr. G.S. Tomar, Dr. Ashish Bagwari

Chapter 5. Wind Power Generation in Jordan: Current Situation and Future Plans

Jordan has significant wind energy resources that could be potentially exploited for power generation where the annual average wind speed exceeds 7 m/s (at 10 m height) in some areas of the country. The regions with the greatest potential are located in the North and South of the country. A wind atlas has been available for Jordan since 1989. According to the Ministry of Energy and Mineral Resources, this wind atlas is in the process of being updated with results taken from recent measurements. The Jordanian government is hoping to generate about 1000 MW from wind projects by 2020. As of 2018, six operational wind farms with a total capacity of 369.5 MW have been built and connected to the grid in Jordan. Moreover, there are three wind farms under construction with a total capacity of 148 MW. The average cost of installed wind farms is around 2.2 million USD/MW and the annual average produced electrical energy is about 2.3 GWh/MW where the average capacity factor is about 0.27.

Ali Hamzeh, Mahmoud Awad
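The figures quoted above are mutually consistent, which a one-line calculation confirms: annual energy per installed MW is the capacity factor times the 8,760 hours in a year.

```python
def annual_energy_gwh_per_mw(capacity_factor, hours_per_year=8760):
    """Annual energy yield per installed MW of capacity, in GWh:
    capacity_factor * hours in a year / 1000 (MWh -> GWh)."""
    return capacity_factor * hours_per_year / 1000.0

# A capacity factor of 0.27 corresponds to roughly 2.37 GWh per MW per
# year, in line with the ~2.3 GWh/MW figure quoted in the abstract.
print(round(annual_energy_gwh_per_mw(0.27), 2))  # 2.37
```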

An Agile Approach to Validate a Formal Representation of the GDPR

Modelling the articles of the GDPR in a knowledge base of logic formulæ enables semi-automatic reasoning over the Regulation. To be legally substantiated, this requires that the formulæ validly express the legal meaning of the Regulation’s articles. But legal experts are usually not familiar with logic, and this calls for an interdisciplinary validation methodology that bridges the communication gap between formal modelers and legal evaluators. We devise such a validation methodology and exemplify it over a knowledge base of articles of the GDPR translated into Reified I/O (RIO) logic and encoded in LegalRuleML. A pivotal element of the methodology is a human-readable intermediate representation of the logic formulæ that preserves the formulæ’s meaning while rendering it readable to non-experts. After applying it to a use case, we show that it is possible to retrieve feedback from legal experts about the formal representation of Art. 5.1a and Art. 7.1. What emerges is an agile process for building logic knowledge bases of legal texts and supporting public trust in them, which we intend to use for a logic model of the GDPR, called the DAPRECO knowledge base.

Cesare Bartolini, Gabriele Lenzini, Cristiana Santos

6. Connections: The Power of Learning Together to Improve Healthcare in the United Kingdom

Patient, service user, health, and disability advocacy movements contribute to a stronger voice for patients in healthcare in the UK. The National Health Service, introduced in 1948, funded through general taxation, and free at the point of delivery, is less characterized by paternalism now than at its inception. Successive health reforms, policy directives, and legislation support increasing patient autonomy and choice. With the introduction of the internet, changes in technology, and the growth of social media, patient expectation and behaviour are shifting further towards active involvement in decision-making. Translating policy rhetoric and research evidence into meaningful patient and public involvement practice presents challenges to healthcare professionals and researchers. Co-designing opportunities to learn collaboratively offers ways to strengthen practice through the exchange of experiential knowledge and to generate emergent insight. This enhances relational skills that underpin quality improvement, research, and transformation. Patient and service users eloquently articulate the benefits of learning together and model attributes that are essential to improvement efforts.

Rachel Matthews, Stuart Green, Rowan Myron, Catherine French, Susan Barber, Dionne Matthew, Sandra Jayacodi, Jenny Trite, Adrian Brown, Justin Baker, Howard Bluston, Ron Grant, Jean Straus, Richard M Ballerand, Maurice Hoffman, Fran Husson, Laura Fischer, Cherelle Augustine

5. Patient and Family Engagement in the United States: A Social Movement from Patient to Advocate to Partner

Today’s focus on patient and family engagement in the United States results from nearly seven decades of evolving changes in the culture and practice of health care, creating opportunities for patients, families, caregivers, and community members to partner in decisions about health care and its delivery. The shift from patients as passive recipients of care to advocates to partners reflects multiple areas of influence: changing the traditional patient-provider dynamic, recognizing the power of patients as healthcare consumers, ePatients advocating for collective change, and partnerships in the patient- and family-centered care and patient safety movements. Advancing the practice and science of engagement requires system-level incentives and support, intentional efforts to engage patients and families from diverse backgrounds, and stronger evidence to improve uptake of engagement practices.

Maureen Maurer, Pam Dardess, Tara Bristol Rouse

Leveraging Social Media to Track Urban Park Quality for Improved Citizen Health

In this chapter, we showcase the use of qualitative data available on two “geobrowsers” (i.e., Google Maps and Foursquare) and of a data-mining technique to quantify the sentiment of online reviews about parks. The underlying interest for this study comes from the growing literature suggesting that living near parks or other open spaces contributes to higher levels of physical activity and to lower levels of stress and fewer mental health problems. Mecklenburg County (North Carolina), which encompasses the City of Charlotte, is used as a case study. In a comparison among 97 cities in the USA, The Trust for Public Land ranks Charlotte’s park system at the very bottom and reports their spending per resident on their park system among the lowest 20% of these cities. Considering their lower spending, the city government may be particularly interested to leverage publicly available data from social media to complement the assessments they already perform about their park system, such as satisfaction surveys or quality assessments. Nevertheless, Charlotte’s low ranking – although unfortunate – indicates an opportunity for the city to improve its park system, which in turn could engage residents in more physical activity and, in doing so, create positive community health outcomes.

Coline C. Dony, Emily Fekete
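The sentiment-quantification step described above can be sketched with a toy lexicon-based scorer. The word lists below are invented for illustration and this is far simpler than the data-mining technique the chapter applies to Google Maps and Foursquare reviews; it only shows the basic idea of turning review text into a numeric quality signal.

```python
# Hypothetical sentiment lexicons for park reviews (illustrative only).
POSITIVE = {"clean", "beautiful", "safe", "great", "shady", "friendly"}
NEGATIVE = {"dirty", "unsafe", "crowded", "broken", "noisy", "neglected"}

def review_sentiment(text):
    """Score a review in [-1, 1]: (positive hits - negative hits) divided
    by the number of lexicon words matched; 0.0 if nothing matches."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    matched = pos + neg
    return 0.0 if matched == 0 else (pos - neg) / matched

print(review_sentiment("Great shady park, very clean!"))    # 1.0
print(review_sentiment("Broken benches and dirty paths."))  # -1.0
```

Averaging such scores over all reviews of a park would give the kind of per-park quality indicator a city could track over time alongside its own satisfaction surveys.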

Chapter 3. Introduction to Deep Learning

In this chapter, you will learn about deep learning networks. You will also learn how deep neural networks work and how you can implement a deep learning neural network using Keras and PyTorch.

Sridhar Alla, Suman Kalyan Adari
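The core computation such chapters build on, a fully connected layer applying weights, biases, and an activation, can be sketched without any framework. The code below is a conceptual illustration in plain Python with invented fixed weights, not Keras or PyTorch code from the book:

```python
import math

def dense(inputs, weights, biases, activation=None):
    """One fully connected layer: out_j = act(sum_i inputs[i]*W[i][j] + b[j]).
    `weights` is a list of rows, one row per input feature."""
    out = [sum(x * w for x, w in zip(inputs, col)) + b
           for col, b in zip(zip(*weights), biases)]
    return [activation(v) for v in out] if activation else out

relu = lambda v: max(0.0, v)
sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))

# Hypothetical fixed weights: 3 inputs -> 2 hidden units -> 1 output.
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b1 = [0.0, 0.1]
W2 = [[1.0], [-1.0]]
b2 = [0.0]

hidden = dense([1.0, 2.0, 3.0], W1, b1, relu)   # forward pass, layer 1
output = dense(hidden, W2, b2, sigmoid)         # forward pass, layer 2
print(hidden, output)
```

A deep network is just this pattern stacked many times, with the weights learned by gradient descent rather than written by hand; Keras and PyTorch provide the layers, autodiff, and optimizers that make that training practical.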

Chapter 8. Models for HIV/AIDS

Acquired immunodeficiency syndrome (AIDS) was first identified as a new disease in the homosexual community in San Francisco in 1981. The human immunodeficiency virus (HIV) was identified as the causative agent for AIDS in 1983. The disease has several very unusual aspects. After the initial infection, there are symptoms, including headaches and fever, for 2 or 3 weeks. Transmissibility is high for about 2 months, and then there is a very long latent period during which transmissibility is low. At the end of this latent period, which may last 10 years, transmissibility rises, signaling the development of full-blown AIDS. In the absence of treatment, AIDS is invariably fatal. Now, HIV can be treated with a combination of highly active antiretroviral therapy (HAART) drugs, which both reduce the symptoms and prolong the period of low infectivity. While there is still no cure for AIDS, treatment has made it no longer a necessarily fatal disease. To describe the variation of infectivity for HIV, one possibility would be to use a staged progression model, with multiple infective stages having different infectivity. Another possibility would be to use an age-of-infection model.

Fred Brauer, Carlos Castillo-Chavez, Zhilan Feng
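A staged progression model like the one the abstract mentions can be sketched with two infective stages: a short, highly infectious acute stage I1 and a long, weakly infectious latent stage I2. The simple Euler integration below uses illustrative parameter values, not fitted HIV data.

```python
# Two-stage progression model, Euler-integrated (time unit: months).
beta1, beta2 = 0.5, 0.05    # transmission rates in stages 1 and 2
k1, k2 = 1 / 2.0, 1 / 120.0  # stage exit rates: ~2-month acute, ~10-year latent
N = 1000.0
S, I1, I2 = 990.0, 10.0, 0.0
dt = 0.1
for _ in range(int(12 / dt)):  # simulate one year
    infections = (beta1 * I1 + beta2 * I2) * S / N
    dS = -infections
    dI1 = infections - k1 * I1        # new infections enter the acute stage
    dI2 = k1 * I1 - k2 * I2           # acute cases progress to the latent stage
    S, I1, I2 = S + dt * dS, I1 + dt * dI1, I2 + dt * dI2
```

An age-of-infection model would instead make infectivity a continuous function of time since infection rather than piecewise-constant over stages.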

Chapter 9. Models for Influenza

Influenza causes more morbidity and more mortality than all other respiratory diseases together. There are annual seasonal epidemics that cause about 500,000 deaths worldwide each year. During the twentieth century there were three influenza pandemics. The World Health Organization estimates that there were 40,000,000–50,000,000 deaths worldwide in the 1918 pandemic, 2,000,000 deaths worldwide in the 1957 pandemic, and 1,000,000 deaths worldwide in the 1968 pandemic. There has been concern since 2005 that the H5N1 strain of avian influenza could develop into a strain that can be transmitted readily from human to human and develop into another pandemic, together with a widely held belief that even if this does not occur there is likely to be an influenza pandemic in the near future. More recently, the H1N1 strain of influenza did develop into a pandemic in 2009, but fortunately its case mortality rate was low and this pandemic turned out to be much less serious than had been feared. There were 18,500 confirmed deaths, but the actual number of deaths caused by the H1N1 influenza may have been as many as 200,000. This history has aroused considerable interest in modeling both the spread of influenza and comparison of the results of possible management strategies.

Fred Brauer, Carlos Castillo-Chavez, Zhilan Feng

Chapter 2. Simple Compartmental Models for Disease Transmission

Communicable diseases that are endemic (always present in a population) cause many deaths. For example, in 2011 tuberculosis caused an estimated 1,400,000 deaths and HIV/AIDS caused an estimated 1,200,000 deaths worldwide. According to the World Health Organization there were 627,000 deaths caused by malaria, but other estimates put the number of malaria deaths at 1,200,000. Measles, which is easily treated in the developed world, caused 160,000 deaths in 2011, but in 1980 there were 2,600,000 measles deaths. The striking reduction in measles deaths is due to the availability of a measles vaccine. Other diseases such as typhus, cholera, schistosomiasis, and sleeping sickness are endemic in many parts of the world. The effects of high disease mortality on mean life span, and of disease debilitation and mortality on the economy, in afflicted countries are considerable. Most of these disease deaths are in less developed countries, especially in Africa, where endemic diseases are a huge barrier to development.

Fred Brauer, Carlos Castillo-Chavez, Zhilan Feng

Chapter 11. Models for Malaria

Malaria is one of the most important diseases transmitted by vectors. The vectors for many vector-transmitted diseases are mosquitoes or other insects which tend to be more common in warmer climates. One influence of climate change in coming years may be to extend the regions where mosquitoes can thrive and thus to cause the spread of vector-transmitted diseases geographically.

Fred Brauer, Carlos Castillo-Chavez, Zhilan Feng

Chapter 6. Models for Diseases Transmitted by Vectors

Many diseases are transmitted from human to human indirectly, through a vector. Vectors are living organisms that can transmit infectious diseases between humans. Many vectors are bloodsucking insects that ingest disease-producing microorganisms during blood meals from an infected (human) host and then inject them into a new host during a subsequent blood meal. The best known vectors are mosquitoes, for diseases including malaria, dengue fever, chikungunya, Zika virus, Rift Valley fever, yellow fever, Japanese encephalitis, lymphatic filariasis, and West Nile fever, but ticks (for Lyme disease and tularemia), bugs (for Chagas' disease), flies (for onchocerciasis), sandflies (for leishmaniasis), fleas (for plague, transmitted by fleas from rats to humans), and some freshwater snails (for schistosomiasis) are also vectors for some diseases.

Fred Brauer, Carlos Castillo-Chavez, Zhilan Feng
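Vector transmission of this kind is classically modeled with a Ross-Macdonald-style host-vector model, in which infection passes only human to mosquito to human. The sketch below Euler-integrates such a model; all parameter values are illustrative, not calibrated to any disease.

```python
# Ross-Macdonald-style host-vector model (time unit: days).
a = 0.3          # bites per mosquito per day
b, c = 0.3, 0.5  # infection probabilities per bite (vector->human, human->vector)
gamma = 1 / 20.0  # human recovery rate
mu = 1 / 14.0     # mosquito death rate (infected mosquitoes do not recover)
Nh, Nv = 1000.0, 5000.0   # human and vector population sizes
Ih, Iv = 10.0, 0.0        # infected humans and mosquitoes
dt = 0.1
for _ in range(int(200 / dt)):
    # Humans are infected by infectious mosquito bites; mosquitoes by biting
    # infectious humans. Bites are distributed over the Nh humans.
    dIh = a * b * Iv * (Nh - Ih) / Nh - gamma * Ih
    dIv = a * c * Ih * (Nv - Iv) / Nh - mu * Iv
    Ih, Iv = Ih + dt * dIh, Iv + dt * dIv
```

With these values the basic reproduction number exceeds one, so the infection settles toward a positive endemic level in both hosts and vectors.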

Chapter 1. Introduction: A Prelude to Mathematical Epidemiology

Recorded history continuously documents the invasion of populations by infectious agents, some causing many deaths before disappearing, others reappearing in invasions some years later in populations that have acquired some degree of immunity, due to prior exposure to related infectious pathogens. The “Spanish” flu epidemic of 1918–1919 exemplifies the devastating impact of relatively rare pandemics; this one was responsible for about 50,000,000 deaths worldwide, while on the mild side of the spectrum we experience annual influenza seasonal epidemics that cause roughly 35,000 deaths in the USA each year.

Fred Brauer, Carlos Castillo-Chavez, Zhilan Feng

8. Case Studies

To make the many different possibilities, aspects, and nuances of PMO implementations more tangible, Chapter 8 concludes the book with a collection of short case studies and reflections, all describing anonymized real PMOs and their development.

Gerhard Ortner, Betina Stur

3. Classification Options for PMOs

Classification criteria such as size, duration of existence, organizational placement, and functions can help in assessing an existing PMO or in designing a new one. The type of projects typically carried out in the company can provide additional orientation. Combining such criteria is particularly interesting. Chapter 3 presents several of these criteria.

Gerhard Ortner, Betina Stur

6. Finding the Remit, Introducing and Anchoring the PMO

The question arises of how best to introduce a PMO in practice. The literature offers quite similar approaches; the most common recommendation is to run the introduction itself as a project. In doing so, each company must find answers to essential questions about goals, the time frame, possible locations, interfaces, the (first) tasks, and the necessary supporting measures for its specific scenario. Chapter 6 provides some input on these points.

Gerhard Ortner, Betina Stur

Chapter 5. Layout Widgets

Layout is always a challenging task when building user interfaces. When it comes to mobile apps, layout is much more complicated considering the large number of different screen resolutions across devices. This chapter covers recipes related to layout in Flutter.

Fu Cheng

Designing Smart Contract for Electronic Document Taxation

In recent years, we have seen massive blockchain adoption in cryptocurrencies such as Bitcoin. Following the success of blockchain in the cryptocurrency industry, many people have started to explore the possibility of implementing blockchain technology in different fields. In this paper, we propose Smart Stamp Duty, a system which can revolutionize the way stamp duty (document tax) is managed and paid. The proposed Smart Stamp Duty offers significant improvements in convenience when paying stamp duty by removing the need for a physical revenue stamp. At the same time, the blockchain technology also provides auditability of the recorded data. Smart Stamp Duty enables the expansion of the existing electronic stamp duty application to the retail level. It also enables taxpayers to pay the stamp duty on their electronic documents, and to convert their electronic documents into physical documents while still maintaining the ability to refer to the electronic-based stamp duty payments.

Dimaz Ankaa Wijaya, Joseph K. Liu, Ron Steinfeld, Dongxi Liu, Fengkie Junis, Dony Ariadi Suwarsono

Integer Reconstruction Public-Key Encryption

In [AJPS18], Aggarwal, Joux, Prakash & Santha described an elegant public-key encryption scheme ($$\mathsf{AJPS}$$-1) mimicking NTRU over the integers. This algorithm relies on the properties of Mersenne primes instead of polynomial rings. A later ePrint [BCGN17] by Beunardeau et al. revised $$\mathsf{AJPS}$$-1's initial security estimates. While lower than initially thought, the best known attack on $$\mathsf{AJPS}$$-1 still seems to leave the defender with an exponential advantage over the attacker [dBDJdW17]. However, this lower exponential advantage implies enlarging $$\mathsf{AJPS}$$-1's parameters. This, plus the fact that $$\mathsf{AJPS}$$-1 encodes only a single plaintext bit per ciphertext, made $$\mathsf{AJPS}$$-1 impractical. In a recent update, Aggarwal et al. overcame this limitation by extending $$\mathsf{AJPS}$$-1's bandwidth. This variant ($$\mathsf{AJPS}\hbox{-}\mathsf{ECC}$$) modifies the definition of the public key and relies on error-correcting codes. This paper presents a different high-bandwidth construction. In opposition to $$\mathsf{AJPS}\hbox{-}\mathsf{ECC}$$, we do not modify the public key, avoid using error-correcting codes, and use backtracking to decrypt. The new algorithm is orthogonal to $$\mathsf{AJPS}\hbox{-}\mathsf{ECC}$$, as both mechanisms may be used concurrently in the same ciphertext and cumulate their bandwidth improvements. Alternatively, we can increase $$\mathsf{AJPS}\hbox{-}\mathsf{ECC}$$'s information rate by a factor of 26 for the parameters recommended in [AJPS18]. The obtained bandwidth improvement, and the fact that encryption and decryption are reasonably efficient, make our scheme an interesting post-quantum candidate.

Houda Ferradi, David Naccache

Multi-owner Secure Encrypted Search Using Searching Adversarial Networks

Searchable symmetric encryption (SSE) for the multi-owner model draws much attention as it enables data users to perform searches over encrypted cloud data outsourced by data owners. However, simultaneously achieving secure and precise queries, efficient search, and flexible dynamic system maintenance in SSE remains a challenge. To address this, this paper proposes secure and efficient multi-keyword ranked search over encrypted cloud data for the multi-owner model based on searching adversarial networks. We exploit searching adversarial networks to achieve optimal pseudo-keyword padding and obtain the optimal game equilibrium for query precision and privacy protection strength. A maximum-likelihood search balanced tree is generated by probabilistic learning, which achieves efficient search and brings the computational complexity close to $$\mathcal {O}(\log N)$$. In addition, we enable flexible dynamic system maintenance with a balanced index forest that makes full use of distributed computing. Compared with previous works, our solution maintains query precision above 95% while ensuring adequate privacy protection, and introduces low overhead in computation, communication, and storage.

Kai Chen, Zhongrui Lin, Jian Wan, Lei Xu, Chungen Xu

Chapter 10. Determinants of International Remittance: Evidence from Kerala, India

This chapter investigates the determinants of international remittance flows into the Kerala economy. It explores the influence of socio-demographic and migration-specific characteristics of the migrant and the migrant-sending (remittance-receiving) households on the amount of remittance transfers. Using data from the 2011 Kerala Migration Survey, we employed the Heckman selection correction model and the Tobit model for the analysis. The results suggest that the migrant's individual characteristics have a strong influence on remittance-sending behaviour, while the characteristics of the remittance-receiving household have a weaker influence. The migration-specific characteristics of duration of migration, destination country, and presence of dependents abroad significantly affect the amount of remittance sent.

Anu Abraham

Chapter 9. A Comprehensive Review on Oxygenated Fuel Additive Options for Unregulated Emission Reduction from Diesel Engines

Compared to petrol engines, diesel engines offer higher fuel economy along with higher power output, as well as better thermal efficiency and torque characteristics. On the negative side, diesel engines are a major source of both regulated and unregulated emissions, deteriorating air quality and posing a serious health hazard. There is therefore an urgent need to protect society from this peril. From the authors' point of view, unregulated emissions should be tackled with even greater zeal than regulated ones. There are three possibilities: (i) to abandon IC engines in favour of electric vehicles, (ii) to discard present-day petro-diesel and adopt hydrogen as a fuel, or (iii) to use alternative energy sources such as biodiesel and oxygenated additives to diesel. The third is relatively easy, quick, and viable, since no major changes need to be incorporated into the millions of existing engines. This chapter reviews the option of using oxygenated fuel additives such as biodiesel, acetone–butanol–ethanol (ABE) solution, and water emulsion to reduce unregulated emissions. The review makes clear that more systematic research is essential before definite conclusions can be drawn on unregulated emissions such as polycyclic aromatic hydrocarbons (PAHs), persistent organic pollutants (POPs), and carbonyls. When biodiesel and/or ABE solution is used in diesel blends, emissions such as particulate matter (PM), CO, PAHs, and POPs do decrease; however, in most cases the NOx emission increases. Further, this review brings out a combination of contributing factors such as higher oxygen content, more complete combustion, and the cooling effect. Unregulated pollutant emissions can be reduced considerably if a diesel blend containing a proper amount of biodiesel, ABE solution, and a small amount of water (0.5%) is employed appropriately. Such green fuels exhibit excellent performance in both brake thermal efficiency (BTE) and the NOx–PM trade-off, and achieve significant emission reductions for PAHs and POPs. This chapter proposes a green diesel fuel blend not only for scientific study but also for future practical application.

Vijayashree, V. Ganesan

Chapter 12. Combustion and Emission Characteristics, and Emission Control of CNG Fueled Vehicles

Natural gas (NG) is considered one of the most attractive alternative fuels for vehicles due to its clean-burning characteristics. Rapid growth of urbanization and industrialization has multiplied transport fuel demand worldwide. The vehicle population, particularly in metropolitan cities, has grown exponentially. Adverse effects of vehicular emissions on the environment and human health have forced regulatory bodies to impose increasingly stringent emission legislation. This has necessitated the use of cleaner alternative fuels such as NG in the transport sector. NG-fuelled vehicles (NGVs) have several advantages such as low photochemical reactivity of the exhaust, zero evaporative emissions, minimized cold-start and low-temperature emissions, and suitability for lean-burn operation. Compressed NG (CNG) vehicles have been used since the 1960s and have a proven safety record. The most important advantage of NGVs is that NG is readily available at low cost worldwide, including in many developing countries, and the technologies for its transportation, storage, and distribution are well matured. This chapter reviews the state of the art in NG-fueled vehicles, focusing on engine combustion and emission characteristics of NGVs. Worldwide prospects and challenges of NG as a transport fuel are also included. Technical aspects such as CNG properties and their effect on engine performance and emissions are discussed. Hydrogen-enriched CNG (HCNG) significantly improves the mixture flammability limit, thus contributing to lean-burn operation of NG-fueled spark ignition (SI) engines. Hence, HCNG has been shown to be advantageous over traditional SI engines in terms of fuel economy and pollutant emissions. This chapter provides an overview of recent progress on HCNG-fueled engines as well. An overview of the emission control strategies for NGVs is presented towards the end. Finally, the main challenges and future R&D required for NGVs are identified.

Nirendra N. Mustafi, Avinash Kumar Agarwal

Chapter 14. CO2 Sequestration in Shale with Enhanced Gas Recovery

Shale is an important geological medium for carbon capture, utilization, and storage. On one hand it can be regarded as an impermeable caprock preventing CO2 migration from the reservoir, and on the other hand it can also be treated as a storage reservoir for both natural gas and CO2. CO2–shale reactions within the caprock can interfere with the integrity of the rock and compromise the long-term safety and stability of carbon storage; however, this interaction can also improve the conductivity of the rock and thereby enhance shale gas recovery from organic-rich shale. This chapter presents a review of the current state of knowledge regarding CO2 and shale interactions and their potential impacts on shale properties and groundwater quality in the context of CO2-enhanced shale gas recovery. The characterization of shale and CO2, which is critical to understanding the various interactions between CO2 and shale, is first summarized. The major interaction mechanisms between CO2 and shale, including CO2–shale–water geochemical reactions, CO2 adsorption-induced clay swelling, and organic matter extraction with supercritical CO2, and their impacts on rock porosity, permeability, mechanical properties, gas adsorption capacity, and groundwater quality are surveyed. Finally, the open questions in this field are emphasized and new research needs are highlighted.

Danqing Liu, Sen Yang, Yilian Li, Ramesh Agarwal

Chapter 6. Overview, Advancements and Challenges in Gasoline Direct Injection Engine Technology

Gasoline direct injection (GDI) engines have become the popular powertrain for commercial cars in the market. The technology is known for its high power output, thermal efficiency, and fuel economy. Accurate metering of fuel injection with better fuel utilization makes it possible to run the engine on lean mixtures, and operation at a relatively higher compression ratio gives it greater potential than PFI engines. Due to its capability of operating in a dual combustion mode by varying fuel injection timing, it can be regarded as a cornerstone of future engine technology. Under mode switching, a homogeneous mixture is used for higher power output at medium and high load-rpm conditions, and a stratified mixture for greater fuel economy at low load-rpm conditions. The technology can thus be considered to combine the benefits of the diesel engine (higher thermal efficiency) and the gasoline engine (higher specific power output). However, with growing concern over limited fuel reserves and deteriorating environmental conditions, strict norms for tail-pipe emissions have been regulated, and considering the higher particulate matter and particle number emissions as a major drawback of the GDI engine, upgrades and design improvements are needed to meet the required emission norms. The initial section of the chapter gives a brief overview of the GDI combustion system and its operating modes. Subsequently, improvements and research in various aspects, such as fuel injection parameters and strategies, dual fuel utilization, mixture formation, lean-burn control, and the application of turbocharging and residual gas fraction, are elaborately discussed with a view to optimizing engine performance. The following section explains the major challenges of this technology and how they are being overcome. Work by various researchers is reviewed, focusing on the effect of operating parameters on particulate emissions, injector deposits, and knocking in GDI engines. Finally, the chapter presents concluding ways to enhance performance and a way forward for making the technology more efficient and reliable by overcoming the limitations of GDI engine technologies.

Ankur Kalwar, Avinash Kumar Agarwal

Chapter 7. Study on Alternate Fuels and Their Effect on Particulate Emissions from GDI Engines

With strict environmental legislation and the aim of reducing related health hazards, there is immense focus on reducing particulates from gasoline direct injection engines. With increasing use of biofuels in the market, their blends with hydrocarbon fuels are also being considered as cleaner alternatives to gasoline. This chapter discusses the addition of oxygenates to gasoline and their capacity to reduce sooting tendency compared to gasoline. Challenges related to optimizing combustion by appropriately choosing engine parameters, such as start of ignition and duration of injection, are addressed. Optimizing combustion can reduce particulate emissions, sometimes while also increasing efficiency. Oxygenated fuels have the additional advantage of higher oxidation of the soot formed inside the cylinder, which further reduces particulate emissions. Towards the end of the chapter, disadvantages of using oxygenated fuel blends or alternative fuels are discussed.

Sreelekha Etikyala, Vamshi Krishna Gunda

Chapter 5. Prospects of Gasoline Compression Ignition (GCI) Engine Technology in Transport Sector

Compression ignition (CI) engines are mainly fuelled by diesel-like high-cetane fuels, and they have higher overall efficiency due to their higher compression ratio compared to their spark ignition (SI) engine counterparts. However, modern diesel engines are more expensive and complicated, and emit high levels of nitrogen oxides (NOx) and particulate matter (PM). Simultaneous control of soot and NOx emissions in diesel engines is quite challenging and expensive. The thermal efficiency of SI engines, on the other hand, is limited by the tendency towards abnormal combustion at higher compression ratios; therefore, high-octane fuel is essential for developing more efficient, higher-compression-ratio SI engines in the near future. In the foreseeable future, refineries will process heavier crude oil to produce relatively inferior petroleum products to power IC engines. Also, fuel demand will shift more towards diesel and jet fuels, which would leave oil marketing companies with surplus amounts of low-octane gasoline and little apparent use for it in operating engines. This low-octane gasoline will be cheaper and available in excess quantities in the foreseeable future, as demand for gasoline drops further with the increasing fuel economy of modern gasoline-fuelled vehicles. To address these issues, gasoline compression ignition (GCI) engine technology is being developed: a futuristic engine technology that simultaneously takes advantage of the higher volatility and higher auto-ignition temperature of gasoline and the higher compression ratio (CR) of a diesel engine, controlling soot and NOx emissions without compromising diesel-engine-like efficiency. GCI engines can operate efficiently on low-octane gasoline (RON of ~70) with better controls at part-load conditions. 
However, cold starting, high CO and HC emissions, combustion stability at part load, and high combustion noise at medium-to-full-load operation are some of the challenges associated with GCI engine technology. The introductory sections of this chapter highlight the future energy and transport scenario, trends in future fuel demand, the availability of low-octane fuels, and developments in advanced engine combustion technologies such as HCCI, PCCI, RCCI, and GDI. GCI engine development, its combustion characteristics, and its controls are discussed in detail. Particular emphasis is given to the effect of various control strategies on GCI combustion, performance, and emissions, fuel quality requirements, and the adoption of GCI technology in modern CI engines. In addition, this chapter reviews initial experimental studies assessing the potential benefits of GCI technology.

Vishnu Singh Solanki, Nirendra Nath Mustafi, Avinash Kumar Agarwal

Chapter 11. Design and Development of Small Engines for UAV Applications

Unmanned Aerial Vehicles (UAVs) have been extensively used for a wide range of applications since World War II. UAVs are used for several defence purposes such as surveillance, communication, terrain mapping, reconnaissance, and attack. In this chapter, we discuss the reciprocating internal combustion engine as a propulsion system for UAVs and the challenges in developing such an engine for aviation. The reciprocating piston engine is one of the most effective powerplants for UAVs. The purpose of these propulsion systems is to provide durable, reliable, and extended flight. Currently, no such engines for UAV applications are manufactured in India, and the defence sector relies on imported engines only, which severely restricts their use in various other defence applications. This chapter addresses the technical issues present in these systems, thus contributing to their development. Aspects related to the structural and thermal analysis of engine components, which are essential for designing such engines, are also discussed. This chapter gives a broad idea about the future of UAV propulsion systems and the associated challenges.

Utkarsha Sonawane, Nirendra Nath Mustafi

Chapter 8. Ozone Added Spark Assisted Compression Ignition

The mixed-mode engine combustion strategy, where some combination of spark-assisted compression ignition (SACI) and pure advanced compression ignition (ACI) is used at part-load operation with exclusive spark-ignited (SI) combustion used for high power-density conditions, has the potential to increase efficiency and decrease pollutant emissions. However, controlling combustion and switching between different modes of mixed-mode operation is inherently challenging. This chapter proposes to use ozone (O3), a powerful oxidizing chemical agent, to maintain stable and knock-free combustion across the load-speed map. The impact of 0–50 ppm intake-seeded O3 on performance and emission characteristics was explored in a single-cylinder, optically accessible research engine operated under lean SACI conditions with two different in-cylinder conditions: (1) partially stratified (double injection, early and late) and (2) homogeneous (single early injection). O3 addition promotes end-gas auto-ignition by enhancing the gasoline reactivity, which enabled stable auto-ignition with less initial charge heating. Hence, O3 addition could stabilize engine combustion relative to similar conditions without O3. The addition of ozone was found to reduce specific fuel consumption by up to 9%, with an overall improvement in combustion stability compared to similar conditions without O3. The effect of adding O3 was most substantial at the lowest loads. Specific NOx emissions also dropped by up to 30% because a higher fraction of the fuel burned was due to auto-ignition of the end gas. Measurement of in-cylinder O3 concentrations using a UV light absorption technique showed that rapid decomposition of O3 into molecular (O2) and atomic oxygen (O) coincided with the onset of low-temperature heat release (LTHR). The newly formed O from O3 decomposition initiated the fuel hydrogen abstraction reactions responsible for the early onset of LTHR. 
At the beginning of high-temperature heat release (HTHR), end-gas temperatures ranged from 840 to 900 K, about 200 K cooler than those found in previous studies where intake charge heating or extensive retained residuals were used to preheat the charge. An included analysis indicates that, to achieve optimal auto-ignition in our engine, the spark deflagration needed to add 10–40 J of additional thermal energy to the end gas. We have leveraged these results to broaden our understanding of O3 addition at different load-speed conditions, which we believe can facilitate multiple modes (SI, ACI, SACI, etc.) of combustion.

Sayan Biswas, Isaac Ekoto

Chapter 2. Demographic Dividend in the Middle East Countries: An Empirical Assessment

For boosting a country's economic growth, it is essential to know what the demographic dividend is and what role it plays in increasing national income. During the demographic transition from high fertility and high mortality to low fertility and low mortality, the age structure of the population changes. As a consequence, a stage comes when the growth rate of the working-age population exceeds the growth rate of the total population. In economic terms, this means that production exceeds consumption, and the surplus can be used for economic growth. It has been empirically found that the relationship between the GDP growth rate and the demographic dividend is positive, and its impact on the economic growth of a country can be miraculous. However, according to Lee and Mason (Finance and Development, 43(3), 2006), the demographic dividend is a one-time opportunity, available only for a short duration of 30 to 50 years, and it is not automatic. The latter implies that, to reap the maximum benefit of this period, the respective governments should strive to create employment for new entrants to the working-age groups. This requires planning in advance to offer jobs to the maximum number of youths. For this planning, it is necessary to know when the window of economic opportunity is expected to open and how long the favourable period will last. The present chapter, using the latest available United Nations population projections (made in 2017 and released in 2018), gives the estimated years of opening and closure of the demographic window, and thus the duration for reaping economic benefits, for the countries of the Middle East.

Prem Saxena
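The window criterion described above, open while the working-age population grows faster than the total population, can be checked mechanically from projection data. The figures below are hypothetical, purely to illustrate the comparison; a real assessment would use the UN projection series the chapter relies on.

```python
# Hypothetical projection data (millions) at 10-year intervals.
years = [2020, 2030, 2040, 2050]
total = [100.0, 110.0, 118.0, 123.0]     # total population
working = [60.0, 70.0, 76.0, 78.0]       # working-age (15-64) population

def growth(series, i):
    # Proportional growth over the interval starting at index i.
    return (series[i + 1] - series[i]) / series[i]

# Window is "open" in an interval when working-age growth exceeds total growth.
window_open = [growth(working, i) > growth(total, i) for i in range(len(years) - 1)]
```

With these invented numbers the window is open in the first two decades and closes in the third, mirroring the opening/closure years the chapter estimates from real projections.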

Chapter 3. Material Compatibility Aspects and Development of Methanol-Fueled Engines

Given environmental concerns and pressures in global financial markets, alternative fuels have been developed to substitute for conventional fuel. Methanol is one such substitute that can be used in an internal combustion (IC) engine; it is attractive in both environmental and economic terms and is renewable. A sustainable strategy is proposed for using methanol in the internal combustion engine over various concentration ranges. In this chapter, various research data are studied and cited to understand the material compatibility aspects and engineering challenges for engine parts made of metals, elastomers, and plastics. The effects of fuel chemistry and quality on the engine are discussed. The effects of methanol on the various components of internal combustion engines are studied, including the corrosion and wear of engine components, and suitable materials are suggested for the engine parts that come into contact with methanol. The implementation of methanol in spark ignition and compression ignition engines is studied, and the engineering pathway of implementation and design is explained in detail. The design of different engine components, such as the engine head, fuel injection system, fuel pump, and aftertreatment for methanol-fuelled engines, is studied and recommendations are made. Vehicle adaptation for methanol fuel is also studied.

Vikram Kumar, Avinash Kumar Agarwal

Fatigue characterization of conventional and high rutting resistance asphalt mixtures using the cyclic indirect tensile test

In this paper, the fatigue behaviour of several asphalt mixtures widely used in Vietnam, namely dense graded asphalt concrete (AC) produced with 60/70 pen bitumen and high rutting resistance asphalt mixtures (HRRA), is investigated using the cyclic indirect tensile (IDT) testing device at Ho Chi Minh City University of Technology and Education (HCMUTE). Several stress levels were applied to determine the characteristic fatigue line (CFL) of each mixture. The results show that the widely used power law for the CFL is applicable to all investigated mixtures. Although the stiffness of the HRRA is twice that of the conventional AC, its resistance to fatigue is much higher. The effect of strain magnitude on the stiffness of the mixtures, i.e. the nonlinearity effect, was also observed. The impact of binder type on the nonlinearity effect was found to be insignificant compared to that of the volumetric design of the mixture.
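The characteristic fatigue line mentioned above is commonly written as a power law, N = k·s^(-n), relating stress level to cycles to failure. A minimal sketch of fitting it by linear regression in log-log space (the data below are illustrative, not the paper's measurements):

```python
import math

def fit_power_law(stress, cycles):
    """Fit the characteristic fatigue line N = k * s**(-n) by least squares
    in log-log space; returns (k, n)."""
    xs = [math.log(s) for s in stress]
    ys = [math.log(nf) for nf in cycles]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope

# Illustrative IDT fatigue data: stress level (MPa) vs cycles to failure.
stress = [0.30, 0.40, 0.50]
cycles = [1.2e5, 3.0e4, 1.1e4]
k, n = fit_power_law(stress, cycles)
```

A straight line in log-log coordinates with slope -n is exactly the "power law of CFL" the abstract refers to.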

H. T. Tai Nguyen, Anh Thang Le, Vu Tu Tran, Duy Liem Nguyen

Experimental Investigation of a Self-powered Magnetorheological Damper for Seismic Mitigation

The present work investigates the effectiveness of a Magnetorheological (MR) damper coupled with a smart self-powered system, in which an electromagnetic induction (EMI) device is used for controlling seismic vibration. The proposed smart damping system with an EMI device is capable of converting vibration energy into electrical energy. The EMI device attached to the MR damper can thus serve as an effective alternative power source for the damper, making it a self-powering system. The primary aim of the experimental study is to assess the performance of the proposed smart damping system under time history loading (El Centro earthquake). For the experiments, the MR damper with EMI was designed and fabricated. To reduce sedimentation, nano Fe3O4 was used in the preparation of the MR fluid. The performance of the proposed smart damping system is compared with passive, semi-active, and active control systems in terms of force and displacement to evaluate its effectiveness in reducing seismic vibration. The experimental results show that the self-powered smart damping system produces a larger damping force and a greater reduction in displacement; the maximum damping force obtained is 0.67 kN. Compared with the semi-active control system, the active system increased the force by 12.9% and reduced the displacement by 13.4%. The results reveal that the proposed EMI device can act as the sole power source for the damping system.

C Daniel, G. Hemalatha, L. Sarala, D. Tensing, S. Sundar Manoharan, Xian-Xu Bai

Modelling the hydromechanical behaviour of a granular expansive clayey soil upon hydration using discrete element method

Bentonite-based pellet materials are considered a sealing material for the isolation of galleries in the French radioactive waste disposal concept, owing to operational convenience. The influence of the granular nature of the material is studied through Discrete Element Method (DEM) simulations. Each pellet is modelled individually and represented by a sphere of the same mass and density as the real pellets. Swelling pressure tests of pellet mixtures, carried out at laboratory scale, are simulated using a model describing the hydromechanical behaviour of a single pellet upon suction decrease. The mixture behaviour is satisfactorily reproduced upon hydration from 89 MPa (initial state) to 7 MPa of suction. The model is then used to study the behaviour of large granular assemblies of bentonite pellets on the same hydration path. The results highlight that the mixture assembling process, the pellet strength and stiffness, and the mixture density all influence the development of swelling pressure upon hydration. The numerical results obtained through DEM simulations will be of interest for future Finite Element Method simulations of the full hydration path using double-structure models in which the pellets correspond to the microstructural level.

Benjamin Darde, Jean-Noël Roux, Patrick Dangla, Jean-Michel Pereira, Anh Minh Tang, Jean Talandier, Minh Ngoc Vu

Air permeability of cover concrete quality of precast box culverts affected by casting direction

Casting direction is an external factor that can influence the properties of concrete structures, including air permeability. In the current study, the effect of this factor on the air permeability of precast box culverts was investigated. The Torrent air permeability test was used to measure the coefficient of air permeability kT for two box culvert specimens produced with ordinary Portland cement, a water-to-binder ratio of 0.485, and different casting directions (vertical and horizontal), at the age of 3 months. The results reveal differences in cover concrete quality between the surfaces of the horizontally cast specimen, while shortcomings introduced during the casting process were detected in the vertically cast specimen.

May Huu Nguyen, Kenichiro Nakarai, Saeko Kajita

BIM-based innovative bridge maintenance system using augmented reality technology

A smooth transportation system plays a pivotal role in the development of a country, since it enables rapid connections among industrial zones and territories. Bridges, which commonly have a service life of around 50 years, especially need careful maintenance to keep them in good service condition. Nowadays most bridges have their own bridge maintenance system (BMS), but bridge engineers are still challenged by the difficulty of storing damage records and repair history, as well as assessing the current behaviour of the structural system. This paper proposes an innovative BMS that uses a schematic BIM-based information management system in combination with automated inspection tasks performed with an augmented reality (AR) device. Following a preventive maintenance strategy, a data schema for the BIM model was developed. An integrated digital model is created to store, manipulate, and share the inspection data and maintenance history, while on-site inspection tasks are performed in a timely manner. The key contribution lies in the AR device's versatile capability for real-time data manipulation. Beyond image capture, a chain of computer-vision algorithms is embedded in the AR device to enhance the precision and performance of the inspection task. The resulting technical damage report is fed back to the management system, and the assessment model is discussed. A pilot application to an existing cable-stayed bridge is introduced; the system has been in use for a year and shows good potential for bridge maintenance.

NgocSon Dang, ChangSu Shim

Investigation of the use of reclaimed asphalt pavement as aggregates in roller compacted concrete for road base pavement in Vietnam

When reinforcing existing asphalt pavement while the pavement elevation must be maintained, the old asphalt surface layers are generally milled off before the new ones are applied. The use of the reclaimed asphalt pavement (RAP) recovered from road deconstruction is very important: it answers a real need arising from the rising cost of asphalt and the shortage of natural aggregates, and it also supports sustainable development. Meanwhile, pavement rutting is the most serious pavement issue in Vietnam, and efficient solutions are needed to reduce it. Roller compacted concrete (RCC), with its well-known advantages (simplicity, economy, and high stiffness modulus), is a very promising technique for this purpose. To pursue these goals, a science and technology project has been funded by the Ministry of Transport of Vietnam to investigate the use of RAP as aggregate in RCC for road base pavement. In this project, RAP from two different sources is selected and characterized. The RAP is then used in RCC mix designs with three different RAP contents (0, 40, and 80% by mass of aggregates) and two different cements (PCB30 and PC40). Specimens are fabricated in the laboratory to determine the traditional mechanical properties (compressive strength, tensile splitting strength, and elastic modulus) as a function of curing period. The results for one RAP show that the studied mixes perform promisingly and can be used for road base pavement. Finally, one mix has been chosen for the construction of a full-scale experimental pavement.

Thi Huong Giang Nguyen, Tien Dung Nguyen, Trung Hieu Tran, Van Dong Dao, Xuan Cay Bui, Mai Lan Nguyen

A Daily Work Report Based Approach for Schedule Risk Analysis

The Critical Path Method (CPM) has been the dominant scheduling method for the past several decades. Most State Departments of Transportation (DOTs) in the U.S. currently apply CPM to estimate project duration for major and complex projects. The method, however, has limitations, one of which is its poor ability to analyze schedule risks. While previous studies have identified various factors that cause uncertainties in schedules, the duration estimated by CPM is deterministic. To quantify schedule risks, the Program Evaluation and Review Technique, which uses three-point estimates of activity duration, and Monte Carlo Simulation (MCS) based methods, which assign uncertainties to work activities through probability distributions of activity durations, have seen only limited use in the industry. After running the simulation thousands of times, the probability distribution of the total project duration can be developed to support risk analysis, such as determining a schedule contingency for the project. In current practice, the use of MCS remains limited for various reasons; one of the most challenging is the difficulty of estimating the probability distributions of activity durations objectively. This study aims to leverage the historical digital daily work report (DWR) data available in DOT databases to determine the probability distributions of the production rates of work activities, and then to estimate the distributions of activity durations once the quantities of the activities are provided. Since DWRs record the daily accomplished quantity of each work item during the construction phase, the actual production rates of the work items can be calculated, yielding a more accurate and realistic duration estimate for a future project. DWR data collected from a DOT were used in a case study that demonstrates the value of this new approach for schedule risk analysis.
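The MCS-based approach described above can be sketched in a few lines. The sketch assumes activities run in series (no network logic) and draws production rates from normal distributions of the kind that would be fitted to DWR data; all parameters are hypothetical:

```python
import random

def simulate_duration(activities, n_sims=10_000, seed=42):
    """Monte Carlo estimate of total duration for activities run in series.

    Each activity is (quantity, mean_rate, sd_rate); an activity's duration
    is quantity / rate, with the production rate drawn from a normal
    distribution (as would be fitted to historical DWR data).
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        total = 0.0
        for qty, mu, sd in activities:
            rate = max(rng.gauss(mu, sd), 1e-6)  # guard against non-positive draws
            total += qty / rate
        totals.append(total)
    totals.sort()
    # P50 and P80 durations, e.g. for setting a schedule contingency.
    return totals[int(0.5 * n_sims)], totals[int(0.8 * n_sims)]

# Hypothetical activities: (quantity, mean production rate, rate std. dev.)
acts = [(1200.0, 85.0, 12.0),   # e.g. embankment, m3/day
        (600.0, 40.0, 6.0)]     # e.g. paving, m2/day
p50, p80 = simulate_duration(acts)
```

The gap between the P80 and P50 durations is one simple way to size a schedule contingency.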

Chau Le, H. David Jeong

Modelling Stress Distribution Around Boreholes

The stress distribution around boreholes plays an important role in wellbore problems. This study aims to model the stress distribution around the boreholes of offshore petroleum wells. A stress model around boreholes is developed that accounts for the in-situ stresses, rock properties, and the wellbore pressure and configuration. The new approach uses the transformation formula for the full stress tensor, including its orientations and magnitudes. A program for the stress analysis of wellbores (SAoWB), written in Matlab, computes all components of the stress tensor at the wellbore wall and around the borehole. Case studies are carried out with SAoWB based on the new approach, applied to boreholes of wells in the Cuu Long basin, offshore Vietnam. The results obtained from SAoWB agree well with the failure observations from high-resolution image logs of the studied wellbores as well as with coring data. They form the basis for studying implications that benefit offshore petroleum activities such as wellbore stability, optimal drilling trajectories for well planning, drilling mud selection, sand production prediction and control, and hydraulic fracturing. Modelling the stress distribution around boreholes with SAoWB not only enhances knowledge of the in-situ stress tensor but also informs well problems, especially for offshore petroleum wells.
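At the core of such a stress analysis is the transformation of the in-situ stress tensor into borehole coordinates, S' = R·S·Rᵀ. A minimal sketch follows; the rotation convention (azimuth about the vertical axis, then inclination) and the example stress values are assumptions for illustration, not the convention or data used in SAoWB:

```python
import math

def rotate_stress(S, alpha, beta):
    """Transform a 3x3 in-situ stress tensor S (given in its principal axes)
    into borehole coordinates: S' = R S R^T, with borehole azimuth alpha and
    inclination beta in radians (rotation convention is an assumption)."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    R = [[ca * cb, sa * cb, -sb],
         [-sa,     ca,      0.0],
         [ca * sb, sa * sb,  cb]]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]

    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return matmul(matmul(R, S), Rt)

# Illustrative principal stresses (MPa): sigma_h, sigma_H, sigma_v.
S = [[20.0, 0.0, 0.0],
     [0.0, 25.0, 0.0],
     [0.0, 0.0, 30.0]]
S_borehole = rotate_stress(S, math.radians(45), math.radians(30))
```

The rotated tensor would then feed the near-wellbore stress solution evaluated at the borehole wall.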

Khanh Do Quang, Thanh Nguyen Thi Tam, Phuc Kieu, Nhan Vo Huynh, Quang Hoang Trong

Application of Six Sigma on METRO Rail Construction Project

Six Sigma is a quality improvement technique widely used in the manufacturing industry, but its application in construction has yet to be explored exhaustively. This study explores the applicability of Six Sigma to METRO rail construction. The paper defines and analyses the construction performance of pier construction in the METRO Rail Project of Ahmedabad using the Six Sigma technique. A stretch of 4 km of the North-South Corridor, from APMC to Shreyas, is selected for the study. Data such as the work breakdown structure, list of activities, activity durations, activity dependencies, and construction sequence are collected and analysed using the software packages Primavera and Minitab. Process capability analysis is used to determine how well the actual process meets the specification limits set in the planned schedule. The data are represented and analysed using charts and graphs such as the I-chart, moving range chart, capability histogram, and capability plot. A comparison between the actual work done on site and the planned schedule is used to calculate the Sigma value, which is found to be close to the ultimate Sigma level. This work provides valuable insights for the implementation of the Six Sigma technique in the construction industry: Six Sigma can evaluate the quality of the current construction activity and quantify improvement goals so as to control succeeding critical activities of the project. A large improvement in construction quality can be observed with the application of Six Sigma, and the technique can be linked with Lean principles.
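Process capability analysis of the kind described above compares observed performance against specification limits. A minimal sketch of the standard Cp/Cpk calculation, using hypothetical pier cycle times rather than the project's data:

```python
import statistics

def process_capability(data, lsl, usl):
    """Cp and Cpk of a process against lower/upper specification limits."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)          # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)          # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # capability allowing for off-centre mean
    return cp, cpk

# Hypothetical pier-construction cycle times (days) vs planned limits.
cycle_times = [11.8, 12.5, 13.1, 12.2, 12.9, 12.4]
cp, cpk = process_capability(cycle_times, lsl=10.0, usl=15.0)
# A common shorthand: short-term sigma level is roughly 3 * Cpk.
```

Tools such as Minitab report these same indices alongside the capability histogram and plot mentioned in the abstract.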

Viraj Parekh, Kushal Solanki, Nishit Prajapati

Traffic Impact Assessment of Infrastructure Development Projects for Sustainable Urban Growth

Infrastructure development projects in urban areas often create impacts on urban life. As project size increases, undesirable effects such as traffic congestion, traffic accidents, and environmental problems may become unacceptable to the community. Vietnam is witnessing rapid development of infrastructure, especially high-rise buildings in the big cities. These developments generate high traffic demand and may cause serious traffic congestion on the surrounding transport network; assessment of the traffic impacts (TIA) imposed by such projects is therefore essential for controlling spontaneous development and achieving sustainable urban growth. For TIA to be put into effect, there should be a statutory document that institutionalizes the requirement to conduct TIA during project appraisal, and a technical framework that guides the structure, content, procedure, and methods of TIA execution. In Vietnam, however, such essential documents are not yet available. This paper reviews TIA implementation in several countries, addresses issues in TIA execution practice, identifies the key contents of a TIA report, and proposes solutions to put TIA into practice for development projects, in support of sustainable urban development in Vietnam.

Trinh Dinh Toan, Dao Van Dong

A Review on Protection Methods Against Debris Accumulation for Bridge in Mountain Areas

In the mountainous regions of Vietnam, debris flows are very common in the flood season and pose a danger to bridges. Debris affects the scour morphology at bridge piers, thereby increasing the potential for bridge failure. During the flood of October 2017, the Thia Bridge collapsed in Yen Bai, Vietnam; after the survey, large trees lodged against the piers were identified as the main cause of the collapse. Countermeasures against debris accumulation can be classified into two main categories, structural and non-structural. This paper presents the details of these methods and their applicability in Vietnam.

Huy Quang Mai, Phong Dang Nguyen

A Study on Combination of Two Friction Dampers to Control Stayed-Cable Vibration Under Considering its Bending Stiffness

The stay cable is one of the vital components of cable-stayed bridges. Stay cables are often very long, with a small diameter and low mass, and can be regarded as flexible, nearly horizontal structures with very low natural frequencies. Under cyclic loading in specific conditions, a stay cable can store energy and vibrate with large amplitude. This paper studies the combination of two friction dampers for reducing cable vibration and evaluates the capacity of the friction damper parameters to mitigate stay-cable vibration. The results show the relationship between the damping factor of the cable and various parameters, such as the equivalent viscous constant, friction force, spring constant, and the points at which the dampers are attached to the cable. For long-span cable-stayed bridges, the cables have relatively large diameters and are normally covered by grouting mortar or epoxy, which considerably increases their bending stiffness. It is therefore necessary to take the bending stiffness into account in the vibration analysis. From these results, designers can assess and choose the attachment points as well as the friction damper parameters that are optimal for a specific stay cable.

Duy-Thao Nguyen, Duy-Hung Vo

BIM Adoption in Construction Projects Funded with State-managed Capital in Vietnam: Legal Issues and Proposed Solutions

It is claimed that Building Information Modelling (BIM) has been adopted in the construction industry in Vietnam since the early 2000s. In practice, however, this proves not to be the case, particularly for construction projects funded with state-managed capital. Since such funding accounts for the largest market share of funded construction projects in Vietnam, further investigation is warranted. A preliminary survey reveals that the owners and other stakeholders of these projects are cautious about BIM adoption, primarily to avoid the legal issues that may arise in such projects. Against this background, this paper aims to explore the potential legal issues of BIM adoption and to propose solutions for the stakeholders. The results will help avoid the issues outlined and thereby promote BIM adoption in Vietnam. The literature shows that these legal issues apply not only to Vietnam but also to other countries. To identify the legal issues arising from BIM adoption in construction projects globally, a thorough literature review is carried out, followed by a survey of stakeholders of construction projects funded with state-managed capital in Vietnam. This makes it possible to verify whether the literature findings apply to Vietnam and to discover new issues in the Vietnamese context. According to the survey results, the biggest legal issues for BIM adoption in such projects are: (i) no expense framework for BIM implementation is available, so there is no source of funding for paying consultants/contractors for BIM; (ii) the rights and obligations of stakeholders in BIM projects are not clearly defined in the relevant regulations, so there is a high risk of claims and disputes; and (iii) no digital submission system is available, so BIM models are not accepted and hardcopy printouts are used instead, leading to significant work in reviewing and verifying as-built documents.
Solutions for these issues are generated and validated with focus groups, leading to suggested alterations of regulations as well as revisions of the contract forms/articles applied in construction projects funded with state-managed capital in Vietnam.

Thuy-Ninh Dao, The-Quan Nguyen, Po-Han Chen

Predicting onset and orientation of localisation bands using a cohesive-frictional model

In this study, a localisation band, idealised as a zero-thickness surface and represented by a cohesive-frictional model, is directly incorporated into the structure of a new constitutive modelling approach and is activated once a stress-based condition is met. The embedded cohesive-frictional model provides a natural way to detect both the onset and the orientation of localisation bands. This differs from existing continuum approaches based on the loss of positiveness of the determinant of the acoustic tensor. The formulation also creates a strong link between the quantities describing the localisation band in the model and in experiments, facilitating the calibration of model parameters.
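A stress-based activation condition of the kind described can be illustrated with a classical Mohr-Coulomb criterion in 2D (compression taken positive). This is a generic sketch, not the paper's specific cohesive-frictional formulation; all values are illustrative:

```python
import math

def traction(sigma, theta):
    """Normal stress and shear magnitude on a plane whose normal makes angle
    theta with the x-axis (2D stress state (sxx, syy, sxy), compression +)."""
    sxx, syy, sxy = sigma
    c, s = math.cos(theta), math.sin(theta)
    sn = sxx * c * c + syy * s * s + 2 * sxy * c * s
    tau = (syy - sxx) * c * s + sxy * (c * c - s * s)
    return sn, abs(tau)

def onset_orientation(sigma, cohesion, phi):
    """Scan candidate plane orientations (in degrees) and return the one
    maximising the Mohr-Coulomb activation function f = |tau| - c - sn*tan(phi),
    together with a flag indicating whether a band is activated (f >= 0)."""
    tan_phi = math.tan(phi)

    def f(deg):
        sn, tau = traction(sigma, math.radians(deg))
        return tau - cohesion - tan_phi * sn

    best = max(range(180), key=f)
    return best, f(best) >= 0.0

# Uniaxial compression of 10 (stress units), c = 1, phi = 30 degrees:
# theory predicts the band normal at 45 + phi/2 = 60 degrees from the x-axis.
angle, active = onset_orientation((10.0, 0.0, 0.0), 1.0, math.radians(30))
```

The scan recovers the classical 45° + φ/2 orientation, which is the sense in which an embedded stress-based criterion yields both onset and orientation directly.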

Linh A. Le, Giang D. Nguyen, Ha H. Bui

2. The Uniqueness of Strategic Management

Some time ago I was standing in the office of a top manager of a global market leader when he said to me: "Harburger, we can't just do strategy all the time, we also have to earn money." Two weeks earlier, we had formulated the "corporate DNA" with the entire executive board and checked it for internal consistency. As one can easily imagine, this was quite challenging. Thanks to the structure of the "Strategy Diamond", however, remarkable company-specific characterisations emerged.

Wolfgang Harburger

7. Strategic Principles of Economic Value Transfer

Peter F. Drucker, Aloys Gälweiler, Fredmund Malik, and Michael Porter consistently made economic value transfer the starting point of their strategic models, deriving fundamental strategic principles from it: decisive principles for viable business strategies.

Wolfgang Harburger

6. Job Board, Job Postings and Online Application: The Heart of Your Career Website

So now he has arrived at the destination of your dreams: your visitor, who originally only came to learn about the flanged check caps with double groove wiring (model no. XZB34568) on your product web page but stumbled, via the prominently placed career button in the main navigation, onto your exemplarily designed career website, is not only enthusiastic about what you offer as an employer, but also so taken with the intuitively usable job board that he wants to apply right away. The job postings, too, are worded in a way that leaves him no choice. Since he can apply with a single click, without struggling through endless deserts of forms, the application soon lands in your inbox. That is how the scenario could look. The visitor may also have come with the firm intention of applying. Often, however, the path to the application is rocky and hard. Again and again I come across career websites where clicking the link "job postings" takes me to a page telling me that if I now click on "job postings", I will land on a page with job postings, which will then lead me to the job postings.

Henner Knabenreich

9. Generic Value Strategies and the Necessary Conclusions

Why do car manufacturers as different as KIA and Porsche both find buyers? This is exactly where the generic competitive strategies sketched earlier provide the explanation. Michael Porter's generic market positionings, cost leadership and differentiation, combined with the "scope of areas", give the answer. Cost leaders and differentiators alike appear in a given regional market either as niche players (Dollar Store; Porsche) or with broad market access (KIA; Mercedes; Walmart) (Carpenter and Sanders 2009, p. 174). Less successful companies, by contrast, are usually "stuck in the middle", because they lack a customer offering superior to the competition (Porter 1980, p. 41 ff.).

Wolfgang Harburger

Process-Based Quality Management in Care: Adding a Quality Perspective to Pathway Modelling

Care pathways (CPs) are used as a tool to organize complex care processes and to foster quality management in general. However, their quality management potential has not yet been sufficiently exploited, since the development, documentation, and controlling of quality indicators (QIs) for quality management purposes are not fully integrated into the process standards defined by CPs. To support the integration of a quality perspective into CPs, this paper addresses the questions of which quality concepts can be integrated into the process documentation, and how, in order to support managers, health service providers, and patients. To this end, we extended the widely accepted modelling language "Business Process Model and Notation" (BPMN) with a quality perspective. The conceptualization is grounded in a systematic literature review on (quality) indicator modelling. Together with previous work on the conceptualization of QIs in health care, it provided the basis for a comprehensive domain requirements analysis. Following a design-oriented research approach, the requirements were evaluated and used to design a BPMN extension, implementing the quality indicator enhancements as a BPMN meta-model extension. All design decisions were evaluated in a feedback workshop with a domain expert experienced in the quality management and certification of cancer centres at national and international level. The approach is demonstrated with an example from stroke care. The proposed language extension provides a tool for the governance of care processes based on QIs and for the implementation of a more real-time, pathway-based quality management in health care.

Peggy Richter, Hannes Schlieter

Security Risk Management in Cooperative Intelligent Transportation Systems: A Systematic Literature Review

The automotive industry is maximizing cooperative interactions between vehicular sensors and infrastructure components to make intelligent decisions in its applications (e.g., traffic management, navigation, or autonomous driving services). This cooperative behaviour also extends to security. More connected and cooperative components of vehicular intelligent transportation systems (ITS) result in an increased potential for malicious attacks that can negatively impact security and safety. Security risks in one architecture layer affect other layers of ITS; thus, cooperation is essential for the secure operation of these systems. This paper presents results from a comprehensive literature review on the state of the art of security risk management in vehicular ITS, evaluating its assets, threats/risks, and countermeasures. We examine these security elements along the dimensions of the perception, network, and application architecture layers of ITS. The study reveals gaps in ITS security risk management research within these architecture layers and provides suggestions for future research.

Abasi-Amefon O. Affia, Raimundas Matulevičius, Alexander Nolte

An Approach for the Automated Generation of Engaging Dashboards

Organizations use Key Performance Indicators (KPIs) to monitor whether they attain their goals. To support organizations in tracking the performance of their business, software vendors offer them dashboards. To develop dashboards that engage organizations and enable informed decisions, software vendors rely on dashboard design principles. However, the dashboard design principles available in the literature are expressed as natural language texts; software vendors and organizations therefore either do not use them or spend significant effort internalizing and applying them in every dashboard development process. We show that engaging dashboards for organizations can be generated automatically by means of automatically visualized KPIs. In this context, we present our novel approach for the automated generation of engaging dashboards. The approach employs a decision model for visualizing KPIs that is developed from the dashboard design principles in the literature. We implemented our approach and evaluated its quality in a case study.
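A decision model for visualizing KPIs can be pictured as a small set of rules mapping KPI properties to chart types. The rules below are hypothetical examples in the spirit of common dashboard guidance, not the paper's actual decision model:

```python
def choose_visualization(kpi):
    """Map simple KPI properties to a chart type.

    The rules are hypothetical illustrations in the spirit of common
    dashboard-design guidance, not the decision model of the paper.
    """
    if kpi.get("target") is not None and not kpi.get("time_series"):
        return "bullet chart"      # single value measured against a target
    if kpi.get("time_series"):
        return "line chart"        # development over time
    if kpi.get("categories", 0) > 1:
        return "bar chart"         # comparison across categories
    return "single-value tile"     # plain number, nothing to compare

kpi = {"name": "on-time delivery rate", "target": 0.95, "time_series": False}
print(choose_visualization(kpi))  # → bullet chart
```

Encoding such principles as executable rules is what makes the dashboard generation automatable.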

Ünal Aksu, Adela del-Río-Ortega, Manuel Resinas, Hajo A. Reijers

Tiered Model-Based Safety Assessment

Processes and techniques for assessing the safety of a complex system are well addressed by safety standards. These standards usually recommend decomposing the assessment process into different stages of analysis, so-called tiered safety assessment. Each analysis stage should be performed by applying recommended assessment techniques. To provide confidence in the correctness of the whole analysis, verification techniques, usually traceability checking, are applied between consecutive stages. Even if traceability provides some confidence in the correctness of the decomposition, the following problems remain: How should the system behaviours be modelled at each stage of the safety assessment? How can these stages be used efficiently during the design process? What is the formal relationship between these modelling stages? To tackle these problems, we propose a way to specify, formalize, and implement the relations between assessment stages. The proposal and its pros and cons are illustrated on a Remotely Piloted Aircraft System (RPAS) use case.

Kevin Delmas, Christel Seguin, Pierre Bieber

Pattern-Based Formal Approach to Analyse Security and Safety of Control Systems

The increased openness and interconnectedness of safety-critical control systems call for techniques enabling an integrated analysis of safety and security requirements. Safety and security requirements often have intricate interdependencies that should be uncovered and analysed in a structured and rigorous way. In this paper, we propose an approach that facilitates a systematic derivation and formalisation of safety and security requirements. We propose specification and refinement patterns in Event-B that allow us to specify and verify system behaviour and properties in the presence of both accidental faults and security attacks, and to analyse the interdependencies between safety and security requirements.

Inna Vistbakka, Elena Troubitsyna

Formal Verification of Network Interlocking Control by Distributed Signal Boxes

Interlocking control prevents certain operations from occurring unless they are preceded by specific events. It is used in traffic network control systems (e.g. railway interlocking), piping and tunnelling control systems, and other applications such as communication network control. Interlocking systems have to comply with certain safety properties, which makes formal modeling the most important concern in their design. This paper introduces an interlocking control algorithm based on what we call Distributed Signal Boxes (DSBs). Distributed control eliminates the intrinsic complexity of centralized interlocking solutions, which are mainly developed in the field of railway traffic control. Our algorithm uses network control units that do not store state information. Control units are combined according to a limited number of patterns that in all cases yield safe network topologies. Safety is verified by model checking a network that includes all possible interconnections between neighbouring nodes. The obtained results can be generalized by compositional reasoning to networks of arbitrary complexity formed according to the verified interconnection cases.
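Verification by exhaustive enumeration of interconnection cases can be illustrated on a toy scale. The sketch below is not the DSB model of the paper; it merely shows the style of check: enumerate every request pattern and assert that the grant rule never sets two conflicting routes at once:

```python
from itertools import combinations

# Toy exhaustive safety check (not the paper's DSB model): routes A and B
# share a junction and must never be set simultaneously; C is independent.
CONFLICTS = {frozenset({"A", "B"})}
ROUTES = ["A", "B", "C"]

def grant(requested):
    """Grant requests in order, skipping any route that conflicts with one
    already granted (the interlocking rule)."""
    granted = []
    for r in requested:
        if all(frozenset({r, g}) not in CONFLICTS for g in granted):
            granted.append(r)
    return granted

def verify_safety():
    """Check by exhaustive enumeration: for every possible request pattern,
    the granted set must contain no conflicting pair."""
    for k in range(len(ROUTES) + 1):
        for req in combinations(ROUTES, k):
            g = grant(list(req))
            assert all(frozenset(p) not in CONFLICTS
                       for p in combinations(g, 2))
    return True

print(verify_safety())  # → True
```

Real interlocking verification uses a model checker over temporal properties, but the principle of covering all interconnection cases and then generalizing compositionally is the same.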

Stylianos Basagiannis, Panagiotis Katsaros

Chapter 6. Organisational Perspective

In the organisational debate, the question of how coping with complexity can be handled and institutionalized in practice is the subject of lively discussion. This is likely related, among other things, to the finding, confirmed by several studies, that the half-life of companies has decreased significantly over the last few decades. Among the most frequently cited references are the companies that, according to Fortune, belonged to the 500 largest US corporations in the 20th century.

Karim Fathi

3. Theoretical Concepts for the Assessment of Managed Care

The following section is particularly directed at readers interested in theory. Using the two main approaches of the new institutional economics (Schauenberg et al. 2005; Coase 1988, 1993; Demsetz 1967, 1968, 1988; North 1991; Jensen and Meckling 1976; Hart and Moore 1990; Furubotn and Pejovich 1974; Alchian and Demsetz 1972; Sloan 1988), certain fundamental ideas are introduced about the design of institutions as well as incentive and monitoring systems. Transaction cost theory (Williamson 1975, 1985, 1986, 1993; Windsperger 1996) offers a method for analysing vertical integration along the value chain and, thus, for evaluating integrated care systems, among other things. Principal-agent theory (Pratt and Zeckhauser 1985; Pauly 1968; Mooney 1993; Akerlof 1970; Arrow 1963) concentrates on delegation relationships and their inherent information asymmetries. Hardly any other field exhibits such pronounced information asymmetries between those who delegate tasks (e.g. the patient) and those who carry them out (e.g. a physician). Principal-agent theory investigates how contracts can best be designed in such a configuration.

Volker Eric Amelung

12. Cost Management

In healthcare, a variety of measures and instruments can be applied to manage costs. The following chapter discusses the usefulness and effectiveness of gatekeeping, formularies, and utilization reviews, three cost management tools that are especially relevant to managed care. In gatekeeping, most treatment episodes begin with a visit to an individually selected physician, the gatekeeper, who ensures a coordinated and cross-sectoral treatment process. Formularies explicitly define which services are paid for, while utilization reviews serve as a key monitoring instrument.

Volker Eric Amelung

9. Contract Design

One of the most important preconditions for high-quality and economically effective treatment results is the selection of suitable service providers with whom an MCO concludes supply contracts (selective contracting). The conclusion of selective contracts is so significant for managed care and healthcare management that it is considered crucial to the definition of managed care not only by the authors of this book but also by the wider literature.

Volker Eric Amelung

11. Quality Management

In healthcare, a variety of measures and instruments can be applied to manage quality. The following chapter discusses the usefulness and effectiveness of guidelines and clinical paths, disease management and chronic care, case management, patient coaching, and quality management systems as specific instruments for quality management.

Volker Eric Amelung

Cortical and Subcortical Contributions to Predicting Intelligence Using 3D ConvNets

We present a novel framework using 3D convolutional neural networks to predict residualized fluid intelligence scores in the MICCAI 2019 Adolescent Brain Cognitive Development Neurocognitive Prediction Challenge datasets. Using gray matter segmentations from T1-weighted MRI volumes as inputs, our framework identified several cortical and subcortical brain regions for which the prediction errors were lower than random guessing on the validation set (mean squared error = 71.5252), and our final predictions (mean squared error = 70.5787 on the validation set and 92.7407 on the test set) were the median scores predicted from these regions.

Yukai Zou, Ikbeom Jang, Timothy G. Reese, Jinxia Yao, Wenbin Zhu, Joseph V. Rispoli

A Combined Deep Learning-Gradient Boosting Machine Framework for Fluid Intelligence Prediction

The ABCD Neurocognitive Prediction Challenge is a community-driven competition asking competitors to develop algorithms that predict fluid intelligence scores from T1-weighted MRIs. In this work, we propose a framework combining deep learning with a gradient boosting machine to solve this task. We train a convolutional neural network to compress the high-dimensional MRI data and learn meaningful image features by predicting the 123 continuous-valued derived measures provided with each MRI. These extracted features are then used to train a gradient boosting machine that predicts the residualized fluid intelligence score. Our approach achieved mean squared error (MSE) scores of 18.4374, 68.7868, and 96.1806 on the training, validation, and test sets, respectively.

Yeeleng S. Vang, Yingxin Cao, Xiaohui Xie
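The two-stage pipeline described above, network-extracted features feeding a gradient boosting machine, can be sketched with scikit-learn. The feature matrix here is random stand-in data (in the paper it comes from a CNN trained on the 123 derived measures), so only the pipeline shape is illustrative:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Stand-in "CNN features": random vectors with a synthetic target that
# depends on the first feature, so the GBM has something learnable.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                       # 200 subjects, 16-dim features
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)  # synthetic "intelligence" target

# Stage 2: gradient boosting machine regressing the target from the features.
gbm = GradientBoostingRegressor(n_estimators=100, max_depth=3, random_state=0)
gbm.fit(X[:150], y[:150])                            # train on 150 subjects
mse = float(np.mean((gbm.predict(X[150:]) - y[150:]) ** 2))
print(round(mse, 4))                                 # held-out MSE, well below var(y)≈4
```

The design point is the split of labor: the network handles dimensionality reduction, while the tree ensemble handles the final tabular regression.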

Chapter 8. When Bots Take Over: Chatbots in Recruiting

By using chatbots, employers can only benefit in their search for the right candidates. The following explains how the candidate experience, recruiting, and the employer brand can all profit from a sensible use of chatbots. It also takes a look ahead at the developments that will continue to change the chatbot market in the coming years.

Luc Dudler

Chapter 13. Only for Large Corporations, or Does Recruiting Analytics Also Work for SMEs?

Bad hiring decisions can be costly, so they should be avoided. Recruiting analytics can help detect the causes of undesirable developments early and trigger changes proactively. Yet medium-sized companies in particular still struggle with its implementation. Those who can identify the relevant candidate touchpoints, exploit the data sources they already have, and define the right key performance indicators can quickly see how effective individual measures are, gain transparency across the entire recruiting process, and use their budget far more efficiently.

Marcel Rütten

Automatic Paraspinal Muscle Segmentation in Patients with Lumbar Pathology Using Deep Convolutional Neural Network

Recent evidence suggests an association between low back pain (LBP) and changes in lumbar paraspinal muscle morphology and composition (i.e., fatty infiltration). Quantitative measurements of muscle cross-sectional areas (CSAs) from MRI scans are commonly used to examine the relationship between paraspinal muscle characteristics and different lumbar conditions. Current investigations rely primarily on manual segmentation, which is time-consuming, laborious, and can be inconsistent. However, no automatic MRI segmentation algorithms exist for pathological data, likely due to the complex paraspinal muscle anatomy and the high variability in muscle composition among patients. We employed deep convolutional neural networks using U-Net+CRF-RNN with multi-data training to automatically segment paraspinal muscles from T2-weighted MRI axial slices at the L4-L5 and L5-S1 spinal levels, achieving an average Dice score of 93.9% and a mean boundary distance of 1 mm. We also demonstrate how the segmentation results can be used to reveal tissue characteristics of the muscles in relation to age and sex.

Wenyao Xia, Maryse Fortin, Joshua Ahn, Hassan Rivaz, Michele C. Battié, Terry M. Peters, Yiming Xiao
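The Dice score used above to report segmentation accuracy is a standard overlap measure, Dice = 2|A∩B| / (|A|+|B|); a definitional sketch on toy binary masks (not the authors' evaluation code):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((8, 8)); pred[2:6, 2:6] = 1   # 16-pixel predicted muscle mask
gt   = np.zeros((8, 8)); gt[3:7, 3:7] = 1     # 16-pixel ground-truth mask
print(dice(pred, gt))                          # overlap is 3x3=9 → 18/32 = 0.5625
```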

Harmonic Balance Techniques in Cardiovascular Fluid Mechanics

In cardiovascular fluid mechanics, the typical flow regime is unsteady and periodic in nature, as dictated by cardiac dynamics. Most studies featuring computational simulations have approached the problem through the traditional mathematical formulation in the time domain, an approach that incurs a huge computational cost. This work explores the application of the harmonic balance method as an alternative numerical modeling tool to resolve the dynamic nature of blood flow. The method takes advantage of the pulsatile regime to transform the original problem into a family of equations in frequency space, while the combination of the corresponding solutions yields the periodic solution of the original problem. We conclude from this study that only a few harmonics are required to resolve the presented fluid flow problem accurately and that the method is worthy of further investigation in this field.

Taha Sabri Koltukluoğlu, Gregor Cvijetić, Ralf Hiptmair
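The central observation above, that a few harmonics suffice for a pulsatile signal, can be illustrated with a plain Fourier truncation (an illustration of the frequency-domain idea on a synthetic waveform, not the harmonic balance solver itself):

```python
import numpy as np

# Synthetic pulsatile "flow" waveform over one cardiac cycle: a mean flow
# plus three decaying harmonics.
N = 256
t = np.linspace(0.0, 1.0, N, endpoint=False)
flow = 1.0 + 0.8*np.sin(2*np.pi*t) + 0.3*np.sin(4*np.pi*t) + 0.1*np.sin(6*np.pi*t)

def reconstruct(signal, n_harmonics):
    """Keep the mean and the first n_harmonics Fourier modes, drop the rest."""
    c = np.fft.rfft(signal)
    c[n_harmonics + 1:] = 0.0
    return np.fft.irfft(c, n=len(signal))

err3 = np.max(np.abs(reconstruct(flow, 3) - flow))  # all content retained
err2 = np.max(np.abs(reconstruct(flow, 2) - flow))  # third harmonic dropped
print(err3 < 1e-10, err2 > 0.05)
```

In harmonic balance, each retained mode yields one (coupled) frequency-space equation, which is why a rapidly decaying harmonic content translates directly into computational savings.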

TopAwaRe: Topology-Aware Registration

Deformable registration, or nonlinear alignment of images, is a fundamental preprocessing tool in medical imaging. State-of-the-art algorithms restrict transformations to diffeomorphisms to regularize an otherwise ill-posed problem. In particular, such models assume that a one-to-one matching exists between any pair of images. In a range of real-life applications, however, one image may contain objects that another does not. In such cases, the one-to-one assumption is routinely accepted as unavoidable, leading to inaccurate preprocessing and, thus, inaccuracies in the subsequent analysis. We present a novel, piecewise-diffeomorphic deformation framework which models topological changes as explicitly encoded discontinuities in the deformation fields. We thus preserve the regularization properties of diffeomorphic models while locally avoiding their erroneous one-to-one assumption. The entire model is GPU-implemented and validated on intersubject 3D registration of T1-weighted brain MRI. Qualitative and quantitative results show our ability to improve performance in pathological cases containing topological inconsistencies.

Rune Kok Nielsen, Sune Darkner, Aasa Feragen

A Cooperative Autoencoder for Population-Based Regularization of CNN Image Registration

Spatial transformations are enablers in a variety of medical image analysis applications that entail aligning images to a common coordinate system. Population analysis of such transformations is expected to capture the underlying image and shape variations, and hence these transformations are required to produce anatomically feasible correspondences. This is usually enforced through a generic smoothness-based metric or regularization of the deformation field. Alternatively, population-based regularization has been shown to produce anatomically accurate correspondences in cases where anatomically unaware (i.e., data-independent) regularization fails. Recently, deep networks have been used to generate spatial transformations in an unsupervised manner, and, once trained, these networks are computationally faster than and as accurate as conventional, optimization-based registration methods. However, the deformation fields produced by these networks require smoothness penalties, just as conventional registration methods do, and ignore population-level statistics of the transformations. Here, we propose a novel neural network architecture that simultaneously learns and uses the population-level statistics of the spatial transformations to regularize the neural networks for unsupervised image registration. This regularization takes the form of a bottleneck autoencoder, which learns and adapts to the population of transformations required to align input images by encoding the transformations onto a low-dimensional manifold. The proposed architecture produces deformation fields that describe the population-level features and associated correspondences in an anatomically relevant manner and are statistically compact relative to state-of-the-art approaches while maintaining computational efficiency. We demonstrate the efficacy of the proposed architecture on synthetic data sets, as well as 2D and 3D medical data.

Riddhish Bhalodia, Shireen Y. Elhabian, Ladislav Kavan, Ross T. Whitaker

Curriculum Semi-supervised Segmentation

This study investigates a curriculum-style strategy for semi-supervised CNN segmentation, which devises a regression network to learn image-level information such as the size of the target region. These regressions are used to effectively regularize the segmentation network, constraining the softmax predictions of the unlabeled images to match the inferred label distributions. Our framework is based on inequality constraints, which tolerate uncertainties in the inferred knowledge, e.g., regressed region size. It can be used for a large variety of region attributes. We evaluated our approach for left ventricle segmentation in magnetic resonance images (MRI), and compared it to standard proposal-based semi-supervision strategies. Our method achieves competitive results, leveraging unlabeled data in a more efficient manner and approaching full-supervision performance.

Hoel Kervadec, Jose Dolz, Éric Granger, Ismail Ben Ayed
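The inequality-constraint idea above can be sketched as a penalty that is active only when the predicted region size leaves a tolerated interval (an assumed quadratic barrier form for illustration, not the paper's exact loss):

```python
import numpy as np

def size_penalty(softmax_fg, lo, hi):
    """Penalize the soft foreground size only outside the interval [lo, hi],
    which in the paper would be inferred by the regression network for an
    unlabeled image; inside the interval the constraint is inactive."""
    size = float(softmax_fg.sum())      # soft size = sum of foreground probabilities
    if size < lo:
        return (size - lo) ** 2
    if size > hi:
        return (size - hi) ** 2
    return 0.0

probs = np.full((10, 10), 0.5)          # toy softmax foreground map, soft size = 50
print(size_penalty(probs, 40, 60))      # → 0.0, size within tolerance
print(size_penalty(probs, 60, 80))      # → 100.0, size is 10 below the lower bound
```

Tolerating an interval rather than a point target is what makes the constraint robust to uncertainty in the regressed region size.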

Unsupervised Quality Control of Image Segmentation Based on Bayesian Learning

Assessing the quality of segmentations in an image database is necessary because many downstream clinical applications are based on segmentation results. For large databases, this quality assessment becomes tedious for a human expert, and therefore some automation of the task is required. In this paper, we introduce a novel unsupervised approach to assist the quality control of image segmentations by measuring their adequacy with segmentations produced by a generic probabilistic model. To this end, we introduce a new segmentation model combining intensity with a spatial prior defined through a combination of spatially smooth kernels. Tractability is obtained by solving a type-II maximum likelihood problem that directly estimates the hyperparameters. Assessing the quality of a segmentation with respect to the probabilistic model makes it possible to detect the most challenging cases within a dataset. The approach was evaluated on the BRATS 2017 and ACDC datasets, showing its relevance for quality control assessment.

Benoît Audelan, Hervé Delingette

Integrating 3D Geometry of Organ for Improving Medical Image Segmentation

Prior knowledge of organ shape and location plays an important role in medical image segmentation. However, traditional 2D/3D segmentation methods usually operate as pixel-wise/voxel-wise classifiers whose training objectives cannot incorporate 3D shape knowledge explicitly. In this paper, we propose an efficient deep shape-aware network to learn the 3D geometry of the organ. More specifically, the network uses a 3D mesh representation in a graph-based CNN, which can handle shape inference and accuracy propagation effectively. After integrating the shape-aware module into the backbone FCNs and jointly training the full model in a multi-task framework, the discriminative capability of the intermediate feature representations is increased by both geometry and segmentation regularization on disentangling subtly correlated tasks. Experimental results show that the proposed network not only outputs accurate segmentations but also simultaneously generates smooth 3D meshes that can be used for further 3D shape analysis.

Jiawen Yao, Jinzheng Cai, Dong Yang, Daguang Xu, Junzhou Huang

Generalized Non-rigid Point Set Registration with Hybrid Mixture Models Considering Anisotropic Positional Uncertainties

Image-to-patient or pre-operative to intra-operative registration is an essential problem in computer-assisted surgery (CAS). Non-rigid or deformable registration remains challenging due to partial overlap between point sets caused by the limited camera view, missing data caused by tumor resection, and intra-operative surface reconstruction error. In this paper, we propose and validate a normal-vector-assisted non-rigid registration framework for accurately registering soft tissues in CAS. The framework involves two stages: rigid and non-rigid registration. In the rigid registration stage, which performs the initial alignment, the normal vectors extracted from the point sets are used while the positional uncertainty is assumed to be anisotropic. With the normal vectors incorporated, the algorithm can better recover the point correspondences and is more robust to intra-operative partial data, which is often the case in a typical laparoscopic surgery. In the non-rigid registration stage, an anisotropic coherent point drift (CPD) method is formulated, generalizing the isotropic error assumption to anisotropic cases. Extensive experiments on human liver data demonstrate several advantages of our algorithm over existing state-of-the-art methods. First, the rigid transformation matrix is recovered more accurately. Second, the proposed registration framework is much more robust to partial scans. Finally, the anisotropic CPD significantly outperforms the original CPD in terms of robustness to noise.

Zhe Min, Li Liu, Max Q.-H. Meng

Transferring from ex-vivo to in-vivo: Instrument Localization in 3D Cardiac Ultrasound Using Pyramid-UNet with Hybrid Loss

Automated instrument localization during cardiac interventions is essential to accurately and efficiently interpret a 3D ultrasound (US) image. In this paper, we propose a method to automatically localize the cardiac intervention instrument (RF-ablation catheter or guidewire) in a 3D US volume. We propose a Pyramid-UNet, which exploits multi-scale information for better segmentation performance. Furthermore, a hybrid loss function is introduced, consisting of a contextual loss and a class-balanced focal loss, to enhance the performance of the network on cardiac US images. We have collected a challenging ex-vivo dataset to validate our method, on which it achieves a Dice score of 69.6%, 18.8% higher than state-of-the-art methods. Moreover, with the model pre-trained on the ex-vivo dataset, our method can be easily adapted to the in-vivo dataset with only a few training iterations, achieving a Dice score of 65.8% for a different instrument. Using the segmentation, instruments can be localized with an average error of less than 3 voxels in both datasets. To the best of our knowledge, this is the first work to validate an image-based method on in-vivo cardiac datasets.

Hongxu Yang, Caifeng Shan, Tao Tan, Alexander F. Kolen, Peter H. N. de With

Smart recommendation system to simplify projecting for an HMI/SCADA platform

Modelling and connecting machines and hardware devices of manufacturing plants in HMI/SCADA software platforms is time-consuming and requires expertise. A smart recommendation system could help support and simplify the tasks of the projecting process. In this paper, supervised learning methods are proposed to address this problem. Data characteristics, modelling challenges, and two potential modelling approaches, one-hot encoding and probabilistic topic modelling, are discussed. The methodology for solving this problem is still in progress; first results are expected by the date of the conference.

Sebastian Malin, Kathrin Plankensteiner, Robert Merz, Reinhard Mayr, Sebastian Schöndorfer, Mike Thomas
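The one-hot encoding approach mentioned above can be sketched with a hypothetical device vocabulary (illustrative device names, not the real plant data): each device type becomes one binary indicator that a supervised learner can consume.

```python
def one_hot(items, vocab):
    """Encode a set of device types as a binary indicator vector over vocab."""
    index = {v: i for i, v in enumerate(vocab)}
    vec = [0] * len(vocab)
    for it in items:
        vec[index[it]] = 1
    return vec

# Hypothetical vocabulary of device types found in a projecting configuration.
vocab = ["plc", "drive", "sensor", "hmi_panel"]
print(one_hot(["plc", "sensor"], vocab))  # → [1, 0, 1, 0]
```

Topic modelling, the second approach discussed, would instead treat each project configuration as a "document" of device tokens and learn soft co-occurrence groups rather than hard indicators.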

Chapter 3. Urbanization as a Turning Point from Scale to Population

In the 13th FYP period, China has entered a new stage of population urbanization. New-type urbanization is still the greatest potential for China’s development. Based on this, new ideas are needed to deepen the reform of the household registration system in the 13th FYP period. The first idea is a change from population control to service and management of population. The second is a change from the dual household registration system in urban and rural areas to the comprehensive implementation of the residence permit system. The third is a change in the population management from the public security department to the population service department. By 2020, it will be a historic breakthrough to replace the dual household registration system in urban and rural areas with the residence permit system.

Fulin Chi

Chapter 2. Industrial Change as a Turning Point from Industry Orientation to Service Orientation

The core of China’s economic transformation and upgrading during the 13th FYP period is industrial structural change. On the one hand, industrialization has entered its late stage, in which the general trend is marked by the accelerated development of the service industry. On the other hand, contradictions caused by an irrational industrial structure have become increasingly manifest. For a service-oriented industrial structure to form by 2020, the key is to open markets by breaking up monopolies while working to resolve the contradictions between policy and system in the opening of the service industry.

Fulin Chi

2. Doomed to Fail

“We have a great ERP system, but it delivers results that do not match reality. We have to do a lot manually and constantly check the results.” Or: “As revenue grows, our administrative effort is much higher than expected.” One hears such statements in companies again and again, and we are confronted with the topic of master data almost daily. We keep asking ourselves why companies acquire an ERP system yet never get it off the ground and pay it little attention. Many of our customers lose sight of the actual goal they were pursuing with an ERP system.

Tobias Hertfelder, Philipp Futterknecht

Enhancing OCT Signal by Fusion of GANs: Improving Statistical Power of Glaucoma Clinical Trials

Accurately monitoring the efficacy of disease-modifying drugs in glaucoma therapy is of critical importance. Although high-resolution spectral-domain optical coherence tomography (SDOCT) is now in widespread clinical use, past landmark glaucoma clinical trials used time-domain optical coherence tomography (TDOCT), which leads to poor statistical power due to its low signal-to-noise characteristics. Here, we propose a probabilistic ensemble model for improving the statistical power of imaging-based clinical trials. TDOCT images are converted to synthesized SDOCT images and segmented via Bayesian fusion of an ensemble of generative adversarial networks (GANs). The proposed model integrates super resolution (SR) and multi-atlas segmentation (MAS) in a principled way. Experiments on the UK Glaucoma Treatment Study (UKGTS) show that the model successfully combines the strengths of both techniques (the improved image quality of SR and the effective label propagation of MAS) and produces a significantly better separation between treatment arms than conventional segmentation of TDOCT.

Georgios Lazaridis, Marco Lorenzi, Sebastien Ourselin, David Garway-Heath

Evaluation of Retinal Image Quality Assessment Networks in Different Color-Spaces

Retinal image quality assessment (RIQA) is essential for controlling the quality of retinal imaging and guaranteeing the reliability of diagnoses by ophthalmologists or automated analysis systems. Existing RIQA methods focus on the RGB color-space and are developed based on small datasets with binary quality labels (i.e., ‘Accept’ and ‘Reject’). In this paper, we first re-annotate an Eye-Quality (EyeQ) dataset with 28,792 retinal images from the EyePACS dataset, based on a three-level quality grading system (i.e., ‘Good’, ‘Usable’ and ‘Reject’) for evaluating RIQA methods. Our RIQA dataset is characterized by its large-scale size, multi-level grading, and multi-modality. Then, we analyze the influences on RIQA of different color-spaces, and propose a simple yet efficient deep network, named Multiple Color-space Fusion Network (MCF-Net), which integrates the different color-space representations at both a feature-level and prediction-level to predict image quality grades. Experiments on our EyeQ dataset show that our MCF-Net obtains a state-of-the-art performance, outperforming the other deep learning methods. Furthermore, we also evaluate diabetic retinopathy (DR) detection methods on images of different quality, and demonstrate that the performances of automated diagnostic systems are highly dependent on image quality.

Huazhu Fu, Boyang Wang, Jianbing Shen, Shanshan Cui, Yanwu Xu, Jiang Liu, Ling Shao
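The multiple color-space representations that MCF-Net fuses can be illustrated with Python's standard colorsys module on a single pixel (illustrative values; the network operates on whole images and fuses at feature and prediction level):

```python
import colorsys

# One retinal-image pixel in RGB, expressed in two alternative color spaces.
r, g, b = 0.8, 0.4, 0.2
h, s, v = colorsys.rgb_to_hsv(r, g, b)   # HSV view: hue/saturation/value
y, i, q = colorsys.rgb_to_yiq(r, g, b)   # YIQ view: luma plus two chroma axes
print(round(v, 2), round(y, 3))          # → 0.8 0.498
```

The motivating intuition is that degradations such as blur or uneven illumination are more separable in some of these representations than in raw RGB.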

Local and Global Consistency Regularized Mean Teacher for Semi-supervised Nuclei Classification

Nucleus classification is a fundamental task in pathology diagnosis for cancers, e.g., Ki-67 index estimation. Supervised deep learning methods have achieved promising classification accuracy. However, the success of these methods heavily relies on massive manually annotated data. Manual annotation for nucleus classification is usually time-consuming and laborious. In this paper, we propose a novel semi-supervised deep learning method that can learn from a small portion of labeled data and large-scale unlabeled data for nucleus classification. Our method is inspired by the recent state-of-the-art self-ensembling (SE) methods. These methods learn from unlabeled data by enforcing consistency of predictions under different perturbations, while ignoring the local and global consistency hidden in the data structure. In our work, a label propagation (LP) step is integrated into the SE method, and a graph is constructed using the LP predictions that encode the local and global data structure. Finally, a Siamese loss is used to learn the local and global consistency from the graph. Our implementation is based on the state-of-the-art SE method Mean Teacher. Extensive experiments on two nucleus datasets demonstrate that our method outperforms the state-of-the-art SE methods and achieves $$F_1$$ scores close to the supervised methods using only 5%–25% labeled data.

Hai Su, Xiaoshuang Shi, Jinzheng Cai, Lin Yang
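The Mean Teacher scheme that the method builds on maintains a teacher model as an exponential moving average (EMA) of the student's weights; a toy sketch of that core update rule (scalar "weights" for illustration, not the paper's network):

```python
def ema_update(teacher, student, alpha=0.99):
    """Standard Mean Teacher EMA step: teacher ← alpha*teacher + (1-alpha)*student."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]

teacher = [0.0, 1.0]
student = [1.0, 0.0]
teacher = ema_update(teacher, student, alpha=0.9)
print([round(w, 3) for w in teacher])  # → [0.1, 0.9]: teacher drifts slowly toward the student
```

The slowly moving teacher provides stable targets for the consistency loss on unlabeled data, which is the mechanism the paper's label-propagation graph extends.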

Chapter 5. A Formalising Economy

Using different quantitative variables together with various interview accounts, this chapter shows that Paraguay’s economy is experiencing a formalisation process and that this formalisation surge cannot be explained by economic growth alone. It argues that the implementation of the deductible personal income tax in 2012 led to a “change in culture” among market participants. This “change in culture” relates to a formalisation of market transactions, illustrated by a surge in officially documented purchases and rising value-added and corporate income tax contributions. It thus shows how the Paraguayan tax system nudges final consumers to assist the authorities in the formalisation process.

Jonas Richter

Chapter 27. Performance Implications of Coverage Methodology

This chapter describes the methodology components of Functional Verification: what you should cover, when you should cover it, the performance implications, and how to write properties that combine the power of assertions with that of Functional Coverage.

Ashok B. Mehta

Arterial Spin Labeling Images Synthesis via Locally-Constrained WGAN-GP Ensemble

Arterial spin labeling (ASL) images have recently gained popularity in the diagnosis of dementia diseases, yet they are still not commonly found in well-established image datasets for investigating such diseases. Hence, synthesizing ASL images from available data is worth investigating. In this study, a novel locally-constrained WGAN-GP model ensemble is proposed to realize ASL image synthesis from structural MRI for the first time. Technically, this new WGAN-GP model ensemble is unique in its constrained optimization task, in which diverse local constraints are incorporated. In this way, more details of the synthesized ASL images can be obtained after incorporating local constraints in the new ensemble. The effectiveness of the new WGAN-GP model ensemble for synthesizing ASL images has been substantiated both qualitatively and quantitatively through rigorous experiments. Comprehensive analyses reveal that this new WGAN-GP model ensemble is superior to several state-of-the-art GAN-based models in synthesizing ASL images from structural MRI.

Wei Huang, Mingyuan Luo, Xi Liu, Peng Zhang, Huijun Ding, Dong Ni

Multiple Sclerosis Lesion Segmentation with Tiramisu and 2.5D Stacked Slices

In this paper, we present a fully convolutional densely connected network (Tiramisu) for multiple sclerosis (MS) lesion segmentation. Different from existing methods, we use stacked slices from all three anatomical planes to achieve a 2.5D method. Individual slices from a given orientation provide global context along the plane and the stack of adjacent slices adds local context. By taking stacked data from three orientations, the network has access to more samples for training and can make more accurate segmentation by combining information of different forms. The conducted experiments demonstrated the competitive performance of our method. For an ablation study, we simulated lesions on healthy controls to generate images with ground truth lesion masks. This experiment confirmed that the use of 2.5D patches, stacked data and the Tiramisu model improve the MS lesion segmentation performance. In addition, we evaluated our approach on the Longitudinal MS Lesion Segmentation Challenge. The overall score of 93.1 places the $$L_2$$ -loss variant of our method in the first position on the leaderboard, while the focal-loss variant has obtained the best Dice coefficient and lesion-wise true positive rate with 69.3% and 60.2%, respectively.

Huahong Zhang, Alessandra M. Valcarcel, Rohit Bakshi, Renxin Chu, Francesca Bagnato, Russell T. Shinohara, Kilian Hett, Ipek Oguz
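The 2.5D input construction described above can be sketched as channel-stacking a slice with its neighbors along a chosen anatomical axis (a stack width of three is an assumption for illustration; the same routine applied to all three axes yields the three-orientation training data):

```python
import numpy as np

def stack_slices(volume, axis, idx, radius=1):
    """Return slice idx along `axis` together with its `radius` neighbors,
    stacked as channels: shape (2*radius+1, H, W)."""
    vol = np.moveaxis(volume, axis, 0)           # bring the chosen plane first
    return np.stack([vol[i] for i in range(idx - radius, idx + radius + 1)])

vol = np.arange(4 * 5 * 6).reshape(4, 5, 6)      # toy MRI volume
sample = stack_slices(vol, axis=0, idx=1)
print(sample.shape)                              # → (3, 5, 6)
```

Each stacked sample gives the 2D network local through-plane context while keeping the memory cost of a fully 3D model off the table.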

Regression-Based Line Detection Network for Delineation of Largely Deformed Brain Midline

Brain midline shift is often caused by various clinical conditions such as high intracranial pressure, which can be deadly. To facilitate clinical evaluation, automated methods have been proposed to classify whether midline shift is severe or not, e.g., larger than 5 mm away from the ideal midline. There are only limited methods using landmark or symmetry, attempting to provide more intuitive results such as midline delineation. However, landmark- or symmetry-based methods could be easily affected by anatomical variability and large brain deformations. In this study, we formulated the midline delineation as a skeleton extraction task and proposed a novel regression-based line detection network (RLDN) for the robust midline delineation especially in largely deformed brains. Basically, the proposed method includes three parts: (1) multi-scale line detection, (2) weighted line integration, and (3) regression-based refinement. The first two parts were used to capture high-level semantic and low-level detailed information to extract deformed midline, while the last part was utilized to regress more accurate midline positions. We validated the RLDN on 100 training and 28 testing subjects with a mean midline shift of 7 mm and the maximum shift of 16 mm (induced by hemorrhage). Experimental results show that our proposed method achieves state-of-the-art accuracy with a mean line difference of $$1.17\pm 0.72$$ mm and F1-score of 0.78 from manual delineations. Our proposed robust midline delineation method is also beneficial for other cases such as midline deformation from tumor, traumatic brain injury, and abscess.

Hao Wei, Xiangyu Tang, Minqing Zhang, Qingfeng Li, Xiaodan Xing, Xiang Sean Zhou, Zhong Xue, Wenzhen Zhu, Zailiang Chen, Feng Shi

Deep Angular Embedding and Feature Correlation Attention for Breast MRI Cancer Analysis

Accurate and automatic analysis of breast MRI plays a vital role in early diagnosis and successful treatment planning for breast cancer. Due to the heterogeneous nature of tumors, their precise diagnosis remains a challenging task. In this paper, we propose to identify breast tumors in MRI by a Cosine Margin Sigmoid Loss (CMSL) with deep learning (DL) and to localize possible cancer lesions by a COrrelation Attention Map (COAM) based on the learned features. The CMSL embeds tumor features onto a hypersphere and imposes a decision margin through cosine constraints. In this way, the DL model can learn more separable inter-class features and more compact intra-class features in the angular space. Furthermore, we utilize the correlations among feature vectors to generate attention maps that can accurately localize cancer candidates with only image-level labels. We build the largest breast cancer dataset to date, involving 10,290 DCE-MRI scan volumes, for developing and evaluating the proposed methods. The model driven by CMSL achieved a classification accuracy of 0.855 and an AUC of 0.902 on the testing set, with sensitivity and specificity of 0.857 and 0.852, respectively, outperforming competitive methods overall. In addition, the proposed COAM accomplished more accurate localization of the cancer center compared with other state-of-the-art weakly supervised localization methods.

Luyang Luo, Hao Chen, Xi Wang, Qi Dou, Huangjing Lin, Juan Zhou, Gongjie Li, Pheng-Ann Heng
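A minimal sketch of the cosine-margin idea behind the CMSL described above, for the binary case in plain Python. The scale `s` and margin `m` values are illustrative assumptions, not the paper's settings, and the real loss operates on learned deep features rather than hand-made vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors (plain Python lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cosine_margin_sigmoid_loss(feature, class_weight, label, s=10.0, m=0.35):
    """Binary cosine-margin loss sketch: the margin m is subtracted from the
    cosine logit of the positive class before the sigmoid, forcing features
    to be at least m 'more aligned' with their class direction than the
    decision boundary requires."""
    cos = cosine(feature, class_weight)
    logit = s * (cos - m) if label == 1 else s * cos
    p = 1.0 / (1.0 + math.exp(-logit))  # sigmoid
    # binary cross-entropy on the margin-adjusted probability
    eps = 1e-12
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))
```

A positive sample perfectly aligned with its class direction still pays a small loss because of the margin, which is what pushes intra-class features to become more compact in angular space.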

Feature Transformers: Privacy Preserving Lifelong Learners for Medical Imaging

Deep learning algorithms have achieved tremendous success in many medical imaging problems, leading to multiple commercial healthcare applications. To sustain the performance of these algorithms post-deployment, it is necessary to overcome catastrophic forgetting and continually evolve with data. While catastrophic forgetting could be managed using historical data, a fundamental challenge in healthcare is data privacy, where regulations restrict data sharing. In this paper, we present a single, unified mathematical framework, feature transformers, for handling the myriad variants of lifelong learning to overcome catastrophic forgetting without compromising data privacy. We report state-of-the-art results for lifelong learning on the iCIFAR100 dataset and also demonstrate lifelong learning on medical imaging applications: X-ray pneumothorax classification and ultrasound cardiac view classification.

Hariharan Ravishankar, Rahul Venkataramani, Saihareesh Anamandra, Prasad Sudhakar, Pavan Annangi

Constructing Multi-scale Connectome Atlas by Learning Graph Laplacian of Common Network

Recent developments in neuroimaging and network science allow us to visualize and characterize the whole-brain connectivity map in vivo. Analogous to a volumetric image atlas, a common brain connectivity map (called a connectome atlas) across individuals can offer a new window onto the neurobiological underpinnings of cognition and behavior related to brain development or neuro-disorders. However, a major obstacle to applying classic atlas construction methods in the setting of brain networks is that the region-to-region connectivity, often encoded in a graph, does not comply with the Euclidean space widely assumed for regular data structures such as grids. To address this challenge, we first turn the brain network (encoded in the graph) into a set of graph signals in Euclidean space via the diffusion mapping technique. Furthermore, we cast the construction of the connectome atlas into a learning-based graph inference model that simultaneously (1) aligns all individual graph signals to a common space spanned by the graph spectrum bases of the latent common network, and (2) learns the graph Laplacian of the common network in consensus with all aligned graph signals. We evaluated our novel connectome atlas method against counterpart non-learning-based methods in analyzing brain networks for both neurodevelopmental and neurodegenerative diseases, where our proposed learning-based method shows more reasonable results in terms of accuracy and replicability.

Minjeong Kim, Xiaofeng Zhu, Ziwen Peng, Peipeng Liang, Daniel Kaufer, Paul J. Larienti, Guorong Wu

Revealing Functional Connectivity by Learning Graph Laplacian

Functional connectivity (FC) has been widely used to understand how the human brain works and to discover the neurobiological underpinnings of brain disorders in many neuroscience and clinical studies. Linear FC measures such as Pearson's correlation have been widely used in functional neuroimaging studies based on the observed direct or indirect electrophysiological correlates within BOLD (blood oxygen-level dependent) signals. However, much remains to be done in developing methods to separate non-linear neural from non-neural sources of signal variation. In this paper, we address this fundamental issue with a novel data-driven approach to reveal intrinsic and reproducible FCs via graph signal processing and graph learning techniques. Specifically, we regard BOLD signals from the whole brain as graph signals that reside on the functional network. Then, we jointly smooth the BOLD signals in the context of the brain network and optimize the network connectivity by learning the graph Laplacian that represents the network spectrum for adaptive BOLD signal smoothing. We have evaluated our novel functional network construction method on simulated brain network data and resting-state functional magnetic resonance imaging data in a study of frontotemporal dementia (FTD). Compared with conventional correlation-based methods, our proposed learning-based method shows improvements in accuracy and greater statistical power.

Minjeong Kim, Amr Moussa, Peipeng Liang, Daniel Kaufer, Paul J. Larienti, Guorong Wu
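The graph-signal smoothing step that both Laplacian-learning abstracts above build on can be sketched in plain Python. This is only the fixed-Laplacian subproblem (the papers additionally learn L itself); the weight matrix, signal, and regularization strength below are toy assumptions:

```python
def laplacian_smooth(x, W, alpha=1.0, iters=200):
    """Solve (I + alpha*L) y = x by Jacobi iteration, where L = D - W is the
    graph Laplacian. This is the classic graph-signal smoothing step: y
    minimizes ||y - x||^2 + alpha * y^T L y, trading data fidelity against
    smoothness over the graph."""
    n = len(x)
    deg = [sum(W[i]) for i in range(n)]
    y = list(x)
    for _ in range(iters):
        y = [(x[i] + alpha * sum(W[i][j] * y[j] for j in range(n)))
             / (1.0 + alpha * deg[i])
             for i in range(n)]
    return y

def smoothness(y, W):
    """Laplacian quadratic form y^T L y = 1/2 * sum_ij W_ij (y_i - y_j)^2."""
    n = len(y)
    return 0.5 * sum(W[i][j] * (y[i] - y[j]) ** 2
                     for i in range(n) for j in range(n))
```

On a three-node path graph, a signal [1, 0, 1] is pulled toward its neighbors' values, lowering the Laplacian quadratic form; learning L then amounts to choosing the W under which the observed signals are smoothest.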

Efficient Ultrasound Image Analysis Models with Sonographer Gaze Assisted Distillation

Recent automated medical image analysis methods have attained state-of-the-art performance but rely on memory- and compute-intensive deep learning models. Reducing model size without significant loss in performance metrics is crucial for time- and memory-efficient automated image-based decision-making. Traditional deep learning-based image analysis uses expert knowledge only in the form of manual annotations. Recently, there has been interest in introducing other forms of expert knowledge into deep learning architecture design. This is the approach considered in this paper, where we propose to combine ultrasound video with the point of gaze tracked for expert sonographers as they scan, in order to train memory-efficient ultrasound image analysis models. Specifically, we develop teacher-student knowledge transfer models for the exemplar task of frame classification for the fetal abdomen, head, and femur. The best-performing memory-efficient models attain performance within 5% of conventional models that are $$1000{\times }$$ larger in size.

Arijit Patra, Yifan Cai, Pierre Chatelain, Harshita Sharma, Lior Drukker, Aris T. Papageorghiou, J. Alison Noble
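The teacher-student transfer mentioned above is commonly realized as knowledge distillation. A minimal, generic sketch follows (the paper additionally exploits gaze maps, which are omitted here; the temperature `T` and mixing weight `alpha` are illustrative assumptions):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax (numerically stabilized)."""
    m = max(z / T for z in logits)
    exps = [math.exp(z / T - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.7):
    """Blend of (a) KL divergence from the teacher's temperature-softened
    distribution, which carries 'dark knowledge' about class similarities,
    and (b) ordinary cross-entropy on the hard label."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    ce = -math.log(softmax(student_logits)[label])
    # T^2 rescaling keeps the soft-target gradient magnitude comparable
    return alpha * (T * T) * kl + (1 - alpha) * ce
```

A small student trained with this loss mimics the large teacher's output distribution rather than only the one-hot annotation, which is what allows aggressive model-size reduction.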

A Novel Loss Function Incorporating Imaging Acquisition Physics for PET Attenuation Map Generation Using Deep Learning

In PET/CT imaging, CT is used for PET attenuation correction (AC). Mismatch between CT and PET due to patient body motion results in AC artifacts. In addition, artifacts caused by metal, beam hardening, and count starvation in CT itself also introduce inaccurate AC for PET. Maximum likelihood reconstruction of activity and attenuation (MLAA) was proposed to solve those issues by simultaneously reconstructing tracer activity (λ-MLAA) and the attenuation map (μ-MLAA) from the PET raw data only. However, μ-MLAA suffers from high noise and λ-MLAA from large bias compared to reconstruction using the CT-based attenuation map (μ-CT). Recently, a convolutional neural network (CNN) was applied to predict the CT attenuation map (μ-CNN) from λ-MLAA and μ-MLAA, using an image-domain loss (IM-loss) function between μ-CNN and the ground-truth μ-CT. However, IM-loss does not directly measure the AC errors according to PET attenuation physics, where the line-integral projection of the attenuation map (μ) along the path of the two annihilation photons, rather than μ itself, is used for AC. Therefore, a network trained with IM-loss may yield suboptimal performance in μ generation. Here, we propose a novel line-integral projection loss (LIP-loss) function that incorporates PET attenuation physics into μ generation. Eighty training and twenty testing datasets of whole-body 18F-FDG PET and paired ground-truth μ-CT were used. Quantitative evaluations showed that the model trained with the additional LIP-loss significantly outperformed the model trained solely on the IM-loss function.

Luyao Shi, John A. Onofrey, Enette Mae Revilla, Takuya Toyonaga, David Menard, Joseph Ankrah, Richard E. Carson, Chi Liu, Yihuan Lu
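The idea of penalizing errors in line integrals rather than in the image domain can be illustrated with a toy 2-D version. This sketch projects along only the two axis-aligned directions; real PET attenuation correction integrates along every line of response (a full forward projection), and the paper combines this term with the image-domain loss:

```python
def projections(mu):
    """Line integrals of a 2-D attenuation map along the two axis-aligned
    directions (row sums and column sums) -- a crude stand-in for forward
    projection along lines of response."""
    rows = [sum(r) for r in mu]
    cols = [sum(r[j] for r in mu) for j in range(len(mu[0]))]
    return rows + cols

def lip_loss(mu_pred, mu_ct):
    """Squared-error loss between the line-integral projections of the
    predicted and ground-truth attenuation maps: errors are measured in the
    quantity that actually drives attenuation correction."""
    p, q = projections(mu_pred), projections(mu_ct)
    return sum((a - b) ** 2 for a, b in zip(p, q))
```

Because attenuation enters PET reconstruction through exp(-∫μ dl), a voxel-wise error that changes a line integral is exactly the kind of error this loss targets.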

Complete Fetal Head Compounding from Multi-view 3D Ultrasound

Ultrasound (US) images suffer from artefacts which limit their diagnostic value, notably acoustic shadows. Shadows depend on probe orientation, with each view giving a distinct, partial view of the anatomy. In this work, we fuse the partially imaged fetal head anatomy, acquired from numerous views, into a single coherent compounding of the full anatomy. Firstly, a stream of freehand 3D US images is acquired, capturing as many different views as possible. The imaged anatomy at each time point is then independently aligned to a canonical pose using an iterative spatial transformer network (iSTN), making our approach robust to fast fetal and probe motion. Secondly, images are fused by averaging only the best (most salient) features from all images, producing a more detailed compounding. Finally, the compounding is iteratively refined using a groupwise registration approach. We evaluate our compounding approach quantitatively and qualitatively, comparing it with average compounding and individual US frames. We also evaluate our alignment accuracy using two physically attached probes that capture separate views simultaneously, providing ground truth. Lastly, we demonstrate the potential clinical impact of our method for assessing cranial, facial and external ear abnormalities, with automated atlas-based masking and 3D volume rendering.

Robert Wright, Nicolas Toussaint, Alberto Gomez, Veronika Zimmer, Bishesh Khanal, Jacqueline Matthew, Emily Skelton, Bernhard Kainz, Daniel Rueckert, Joseph V. Hajnal, Julia A. Schnabel

Improving Whole-Brain Neural Decoding of fMRI with Domain Adaptation

In neural decoding, there has been growing interest in machine learning on functional magnetic resonance imaging (fMRI). However, the size discrepancy between the whole-brain feature space and the training set poses serious challenges. Simply increasing the number of training examples is infeasible and costly. In this paper, we propose a domain adaptation framework for whole-brain fMRI (DawfMRI) to improve whole-brain neural decoding on target data by leveraging source data. DawfMRI consists of two steps: (1) source and target feature adaptation, and (2) source and target classifier adaptation. We evaluate its four possible variations using a collection of fMRI datasets from OpenfMRI. The results demonstrate that an appropriate choice of source domain can help improve neural decoding accuracy for challenging classification tasks. The best-case improvement is $$10.47\%$$ (from $$77.26\%$$ to $$87.73\%$$ ). Moreover, visualising and interpreting voxel weights revealed that the adaptation can provide additional insights into neural decoding.

Shuo Zhou, Christopher R. Cox, Haiping Lu

Biomedical Image Segmentation by Retina-Like Sequential Attention Mechanism Using only a Few Training Images

In this paper we propose a novel deep learning-based algorithm for biomedical image segmentation which uses a sequential attention mechanism able to shift the focus of attention across the image in a selective way, allowing subareas which are more difficult to classify to be processed at increased resolution. The spatial distribution of class information in each subarea is learned using a retina-like representation where resolution decreases with distance from the center of attention. The final segmentation is achieved by averaging class predictions over overlapping subareas, utilizing the power of ensemble learning to increase segmentation accuracy. Experimental results on a semantic segmentation task for which only a few training images are available show that a CNN using the proposed method outperforms both a patch-based classification CNN and a fully convolutional method.

Shohei Hayashi, Bisser Raytchev, Toru Tamaki, Kazufumi Kaneda

Multi-scale Attentional Network for Multi-focal Segmentation of Active Bleed After Pelvic Fractures

Trauma is the leading cause of death and disability worldwide in those younger than 45 years, and pelvic fractures are a major source of morbidity and mortality. Automated segmentation of multiple foci of arterial bleeding from abdominopelvic trauma CT could provide rapid, objective measurements of the total extent of active bleeding, potentially augmenting outcome prediction at the point of care while improving patient triage, allocation of appropriate resources, and time to definitive intervention. Despite the importance of active bleeding in the quick tempo of trauma care, the task remains quite challenging due to the variable contrast, intensity, location, size, shape, and multiplicity of bleeding foci. Existing work presents a heuristic rule-based segmentation technique which requires multiple stages and cannot be efficiently optimized end-to-end. To this end, we present the Multi-Scale Attentional Network (MSAN), the first reliable end-to-end network for automated segmentation of active hemorrhage from contrast-enhanced trauma CT scans. MSAN consists of the following components: (1) an encoder which fully integrates the global contextual information from holistic 2D slices; (2) a multi-scale strategy applied in both the training and inference stages to handle the challenges induced by variation of target sizes; (3) an attentional module to further refine the deep features, leading to better segmentation quality; and (4) a multi-view mechanism to leverage the 3D information. MSAN reports a significant improvement of more than $$7\%$$ compared to prior art in terms of DSC.

Yuyin Zhou, David Dreizin, Yingwei Li, Zhishuai Zhang, Yan Wang, Alan Yuille

Joint Localization of Optic Disc and Fovea in Ultra-widefield Fundus Images

Automated localization of the optic disc and fovea is important for computer-aided retinal disease screening and diagnosis. Compared to previous works, this paper makes two novel contributions. First, we study the localization problem in the new context of ultra-widefield (UWF) fundus images, which has not been considered before. Second, we propose a spatially constrained Faster R-CNN for the task. Extensive experiments on a set of 2,182 UWF fundus images acquired from a local eye center justify the viability of the proposed model. For more than 99% of the test images, the improved Faster R-CNN localizes the fovea within one optic disc diameter of the ground truth, while detecting the optic disc with a high IoU of 0.82. The new model works reasonably well even in challenging cases where the fovea is occluded due to severe retinopathy or surgical treatment.

Zhuoya Yang, Xirong Li, Xixi He, Dayong Ding, Yanting Wang, Fangfang Dai, Xuemin Jin

Confounder-Aware Visualization of ConvNets

With recent advances in deep learning, neuroimaging studies increasingly rely on convolutional networks (ConvNets) to predict diagnosis from MR images. To gain a better understanding of how a disease impacts the brain, such studies visualize saliency maps of the ConvNet highlighting the voxels within the brain that contribute most to the prediction. However, these saliency maps are generally confounded, i.e., some salient regions are more predictive of confounding variables (such as age) than of the diagnosis. To avoid such misinterpretation, we propose an approach that aims to visualize confounder-free saliency maps that highlight only voxels predictive of the diagnosis. The approach incorporates univariate statistical tests to identify confounding effects within the intermediate features learned by the ConvNet. The influence of the subset of confounded features is then removed by a novel partial back-propagation procedure. We use this two-step approach to visualize confounder-free saliency maps extracted from synthetic and two real datasets. These experiments reveal the potential of our visualization for producing unbiased model interpretations.

Qingyu Zhao, Ehsan Adeli, Adolf Pfefferbaum, Edith V. Sullivan, Kilian M. Pohl

Adaptive Functional Connectivity Network Using Parallel Hierarchical BiLSTM for MCI Diagnosis

Most existing dynamic functional connectivity (dFC) analytical methods compute the correlation between pairs of time courses with a sliding window. However, there is no clear indication of the window characteristics (length and shape) that best suit all analyses, and sliding windows cannot compute the dynamic correlation between brain regions at each time point. Besides, most current studies that utilize dFC for MCI identification rely mainly on the local clustering coefficient for extracting dynamic features and on the support vector machine (SVM) as a classifier. In this paper, we propose a novel adaptive dFC inference method and a deep learning classifier for MCI identification. Specifically, a group-constrained structure detection algorithm is first designed to identify the refined topology of the effective connectivity network, in which individual information is preserved via different connectivity values. Second, based on the identified topology structure, the adaptive dFC network is constructed by using the Kalman filter algorithm to estimate the brain region connectivity strength for each time point. Finally, the adaptive dFC network is validated in MCI identification using a new Parallel Hierarchical Bidirectional Long Short-Term Memory (PH-BiLSTM) network, which extracts as much brain-status change information as possible from both past and future information. The results show that the proposed method achieves relatively high classification accuracy.

Yiqiao Jiang, Huifang Huang, Jingyu Liu, Chong-Yaw Wee, Yang Li
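The Kalman-filter step for estimating a time-varying connectivity strength can be sketched for a single connection. This is a scalar toy version (one edge, random-walk state, hand-picked noise variances `q` and `r`), whereas the paper estimates a whole group-constrained network:

```python
def kalman_connectivity(x, y, q=0.01, r=1.0):
    """Track a time-varying connection strength b_t in the observation model
    y_t = b_t * x_t + noise, with a random-walk state b_t = b_{t-1} + w_t.
    Returns one connectivity estimate per time point, so no sliding window
    (or window length/shape choice) is needed."""
    b, p = 0.0, 1.0  # state estimate and its variance
    estimates = []
    for xt, yt in zip(x, y):
        p += q                             # predict: random-walk state
        k = p * xt / (xt * xt * p + r)     # Kalman gain
        b += k * (yt - b * xt)             # update with the innovation
        p *= (1.0 - k * xt)
        estimates.append(b)
    return estimates
```

With noiseless observations generated by a fixed true strength, the per-time-point estimate converges to that strength, illustrating how the filter replaces windowed correlation.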

COMETA: An Air Traffic Controller’s Mental Workload Model for Calculating and Predicting Demand and Capacity Balancing

In ATM (Air Traffic Management), traffic and environment are not important by themselves. The most important factor is the cognitive work performed by the air traffic controller (ATCo). Because maintaining detailed mental pictures can exceed the ATCo's limited attentional resources (causing mental overload), she/he can use internal strategies called abstractions to mitigate the cognitive complexity of the control task. This paper gathers the modelling, automation, and preliminary calibration of the cognitive complexity concept. The primary purpose of this model is to support ATM planning roles in detecting imbalances and making decisions regarding the best DCB (Demand and Capacity Balancing) measures to resolve hotspots. The four parameters selected as providing meaningful operational information to mitigate cognitive complexity are Standard Flow Interactions, Flights out of Standard Flows, Potential Crossings, and Flights in Evolution. The model has been integrated into a DCB prototype within SESAR (Single European Sky ATM Research) 2020 Wave 1 during real-time simulations.

Patricia López de Frutos, Rubén Rodríguez Rodríguez, Danlin Zheng Zhang, Shutao Zheng, José Juan Cañas, Enrique Muñoz-de-Escalona

Do Cultural Differences Play a Role in the Relationship Between Time Pressure, Workload and Student Well-Being?

Student workload is an issue with implications for undergraduate student learning, achievement and well-being. Time pressure, although not the only factor that influences students' workload or their perception of it, is pivotal to it. This may vary from one country to another and may be affected by cultural differences. The current study investigated the impact of nationality and time pressure on well-being outcomes as well as perceptions of academic stress and academic work efficiency. The study was cross-cultural and cross-sectional in nature and comprised 360 university undergraduates from three distinct cultural backgrounds: White British, Ethnic Minorities (in the United Kingdom) and Nigerian. The findings suggest that time pressure directly or indirectly (i.e. in tandem with nationality) predicted negative outcomes, work efficiency and academic stress. This implies that nationality/ethnicity also plays a role in the process.

Omolaso Omosehin, Andrew P. Smith

Hybrid Models of Performance Using Mental Workload and Usability Features via Supervised Machine Learning

Mental Workload (MWL) represents a key concept in human performance. It is a complex construct that can be viewed from multiple perspectives and is affected by various factors quantified by different collections of methods. In this direction, several approaches exist that aggregate these factors into a single workload index that best acts as a proxy for human performance. Such an index can be used to detect cases of mental overload and underload in human interaction with a system. Unfortunately, limited work has been done to automatically classify such conditions using data mining techniques. The aim of this paper is to explore and evaluate several data mining techniques for classifying mental overload and underload by combining factors from three subjective measurement instruments: the System Usability Scale (SUS), the NASA Task Load Index (NASA-TLX) and the Workload Profile (WP). The analysis focused on nine supervised machine learning classification algorithms aimed at inducing models of performance from data. These models underwent rigorous phases of evaluation: classification accuracy (CA), receiver operating characteristics (ROC) and predictive power via cost/benefit analysis. The findings suggest that Bayesian and tree-based models are the most suitable for classifying mental overload/underload, even with unbalanced data.

Bujar Raufi
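To illustrate the kind of Bayesian classifier the study found effective, here is a minimal Gaussian naive Bayes in plain Python. The two-feature "workload score" data below are hypothetical stand-ins for SUS/NASA-TLX/WP-style inputs, not the paper's dataset:

```python
import math

def fit_gnb(X, y):
    """Fit a Gaussian naive Bayes model: per class, store the prior and the
    per-feature mean and variance (with a tiny floor to avoid zero variance)."""
    model = {}
    for cls in set(y):
        rows = [x for x, lab in zip(X, y) if lab == cls]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        var = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-6
               for col, m in zip(zip(*rows), means)]
        model[cls] = (len(rows) / len(y), means, var)
    return model

def predict_gnb(model, x):
    """Pick the class maximizing log prior + sum of log Gaussian likelihoods
    (the 'naive' step: features are treated as conditionally independent)."""
    def score(cls):
        prior, means, var = model[cls]
        return math.log(prior) + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, var))
    return max(model, key=score)
```

In practice one would use a data-mining suite, but the sketch shows why such models cope with unbalanced data: the class prior enters the score explicitly and can be re-weighted.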

Deep Learning via Fused Bidirectional Attention Stacked Long Short-Term Memory for Obsessive-Compulsive Disorder Diagnosis and Risk Screening

Compulsive urges to perform stereotyped behaviors are typical symptoms of obsessive-compulsive disorder (OCD). OCD has certain hereditary tendencies, and direct relatives of OCD patients (i.e., siblings (Sib)) share 50% of their genes with the patients and thus have a higher probability of suffering from the same disease. Resting-state functional magnetic resonance imaging (R-fMRI) has enabled great progress in diagnosing OCD and identifying its high-risk population. Accordingly, we design a new deep learning framework for OCD diagnosis via R-fMRI data. Specifically, fused bidirectional attention stacked long short-term memory (FBAS-LSTM) is exploited. First, we obtain two independent time series from the original R-fMRI by frame separation, which reduces the length of the R-fMRI sequence and alleviates the training difficulty. Second, we apply two independent BAS-LSTM networks to the hidden spatial information to obtain preliminary classification results. Lastly, the final diagnosis is obtained by voting over the two diagnostic results. We validate our method on our in-house dataset including 62 OCD patients, 53 siblings (Sib) and 65 healthy controls (HC). Our method achieves an average accuracy of 71.66% for differentiating OCD vs. Sib vs. HC, outperforming related algorithms.

Chiyu Feng, Lili Jin, Chuangyong Xu, Peng Yang, Tianfu Wang, Baiying Lei, Ziwen Peng

Speed Estimation of an Induction Motor with Sensorless Vector Control at Low Speed Range Using Neural Networks

This paper deals with the study, design and implementation of a speed estimation technique using an Artificial Neural Network (ANN) for a three-phase squirrel-cage induction motor. The main objective of this technique is to estimate the speed of the motor from its own operating conditions at low speed, with the purpose of applying vector control without the traditional speed sensor that supplies the control loop with the motor's rotational speed. Replacing the traditional sensor with the ANN yields a sensorless control loop. The ANN was trained on data collected over different ranges of motor speed, both unloaded and loaded. The system is simulated on the Simulink platform from the Matlab software package, and the control strategy is implemented on a Raspberry Pi 3B board.

Martin Gallo Nieves, Jorge Luis Diaz Rodriguez, Jaime Antonio Gonzalez Castellanos

Systemic Model of the Rice Chain. Plateau of Ibagué Case

The rice supply chain in Colombia has low competitiveness, mainly at the farmers' echelon. The problems relate to production costs, yield and paddy rice prices, which impact farmers' profitability. In this work, a dynamic simulation model is proposed that combines elements from several authors: it follows the line of [1], incorporates the inventories of each echelon of the supply chain proposed by [2, 3], and adopts the pricing formulation described by [4], in order to identify the actors and relevant variables of the chain, analyze the behavior of the rice chain in different situations and propose improvements. From scenario analysis, it was concluded that cultivated area and yield are the variables with the greatest impact on the farmer's profitability.

Harold Alexander Cuellar-Molina, José Fidel Torres-Delgado, Nelson Javier Tovar-Perilla

Assessment of Metaheuristic Techniques Applied to the Optimal Reactive Power Dispatch

The optimal reactive power dispatch (ORPD) problem consists of finding the optimal settings of several reactive power resources in order to minimize system power losses. The ORPD is a complex combinatorial optimization problem involving discrete and continuous variables as well as a nonlinear objective function and nonlinear constraints. From the point of view of computational complexity, the ORPD problem is NP-complete. Several techniques have been reported in the specialized literature to approach this problem, among which modern metaheuristics stand out. This paper compares such techniques with a Mean-Variance Mapping Optimization (MVMO) algorithm implemented by the authors with two different constraint-handling approaches. Several tests on the IEEE 30-bus test system show the effectiveness of the proposed approach, which outperforms previously reported methods.

Daniel Camilo Londoño, Walter Mauircio Villa-Acevedo, Jesús María López-Lezama

A Methodology for Driving Behavior Recognition in Simulated Scenarios Using Biosignals

Recognizing aggressive driving patterns could help improve driving safety and potentially reduce traffic fatalities on the roads. Driving behavior is strongly shaped by emotions and can be divided into two main categories: calm (non-aggressive) and aggressive. In this paper, we present a methodology to recognize driving behavior using driving performance features and biosignals. We used biosensors to measure the heart rate and galvanic skin response of fifteen volunteers while driving in a simulated scenario. They were asked to drive in two different situations to elicit calm and aggressive driving behaviors. The purpose of this study was to determine whether driving behavior can be assessed from biosignals and acceleration/braking events. Two-tailed Student's t-tests suggest that it is possible to differentiate between aggressive and calm driving behavior from biosignals and also from the vehicle's longitudinal data.

Juan Antonio Dominguez-Jimenez, Kiara Coralia Campo-Landines, Sonia Helena Contreras-Ortiz

Sine-Cosine Algorithm for OPF Analysis in Distribution Systems to Size Distributed Generators

This paper addresses the optimal power flow (OPF) problem in alternating current (AC) radial distribution networks using a new metaheuristic optimization technique known as the sine-cosine algorithm (SCA). This combinatorial optimization approach solves the nonlinear, non-convex OPF problem via a master-slave strategy. In the master stage, the soft-computing SCA defines the power dispatch at each distributed generator (the sizing problem). In the slave stage, a conventional radial power flow formulated via incidence matrices is used to evaluate the total power losses (objective function evaluation). Two widely used distribution test feeders with 33 and 69 nodes are employed to validate the proposed master-slave approach. Simulation results are compared with methods from the literature such as the genetic algorithm, particle swarm optimization, and the krill herd algorithm. All simulations are performed in the MATLAB programming environment, and the results show the effectiveness of the proposed approach in contrast to previously reported methods.

María Lourdes Manrique, Oscar Danilo Montoya, Víctor Manuel Garrido, Luis Fernando Grisales-Noreña, Walter Gil-González
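The SCA update rule used in the master stage above can be sketched in a few lines. The slave-stage power flow is replaced here by a toy sphere objective; population size, iteration count, and the amplitude schedule `a` are illustrative assumptions:

```python
import math
import random

def sca_minimize(f, dim=2, bounds=(-10.0, 10.0), agents=20, iters=300,
                 a=2.0, seed=1):
    """Minimal sine-cosine algorithm (SCA): each candidate moves toward (or
    past) the best solution found so far along sine/cosine oscillations whose
    amplitude r1 decays linearly from a to 0, shifting the search from
    exploration to exploitation."""
    rnd = random.Random(seed)
    lo, hi = bounds
    pop = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(agents)]
    best = min(pop, key=f)[:]
    for t in range(iters):
        r1 = a - t * (a / iters)  # decaying step amplitude
        for x in pop:
            for j in range(dim):
                r2 = rnd.uniform(0.0, 2.0 * math.pi)
                r3 = rnd.uniform(0.0, 2.0)
                r4 = rnd.random()
                step = r1 * (math.sin(r2) if r4 < 0.5 else math.cos(r2))
                x[j] += step * abs(r3 * best[j] - x[j])
                x[j] = min(max(x[j], lo), hi)  # clamp to bounds
            if f(x) < f(best):
                best = x[:]
    return best

# toy objective standing in for the slave-stage power-loss evaluation
sphere = lambda v: sum(c * c for c in v)
```

In the paper's master-slave setting, `f` would run the incidence-matrix radial power flow and return total losses for a candidate generator dispatch.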

Chapter 1. What Is Essential Is Invisible: Value Co-creation and Value Capture in Sport Management

Nothing lasts longer than a good theory. Logics, in the sense of ways of thinking, influence our decisions and the way we imagine relationships. This essay offers the reader a new logic that differs from the way of thinking traditionally used in sport management. Traditionally, sport events are understood as products or goods that create value. The new logic views sport events as platforms on which value is created through the interaction of many people. In a nutshell: this work analyzes the world of sport differently, and in our opinion better, because it helps sport management analyze the value-creating activities of a network on a shared platform.

Herbert Woratschek, Guido Schafmeister, Guido Ellert

6. Phase Models of Partnerships

In an ecosystem, the essential success factors are the compatibility of strategies and a consistent set of objectives. Beyond that, all participants deliberately strive for the cultural convergence of their values. The higher the number of parties involved in an ecosystem, the more difficult this task becomes. Solving it requires certain management competencies such as strategic foresight and planning skills, as well as consistent processes and procedures for implementing strategic measures and goals. In this chapter, we discuss step by step the individual core competencies necessary for planning and implementing strategic partnerships.

Noah Farhadi

7. Ecosystem Governance

Business ecosystems arise deliberately through the formation of strategic partnerships between primary producers and complementors. They are not static structures. On the contrary, cooperations can be highly dynamic, as we discussed in the first chapter. This complexity stems from the multitude of interdependencies and the mutual influence of all participants in an ecosystem. In this chapter, the structure of a governance model is presented using an example. In general, partnership enterprises can choose between two forms of governance: control through equity stakes, or control through contractual agreements between financially and legally independent actors. Acquiring equity stakes is preferred when the risks of opportunistic behavior are very high. Contractual agreements are useful for clarifying mutual rights and obligations, the partners' contributions, the channels of exchange, and the procedure for resolving potential conflicts. Our model in this book merely describes how ecosystem partnerships can be governed independently of equity stakes by means of a governance model, generating modular, sequential, or reciprocal synergies.

Noah Farhadi

13. Martingales and Numeraires

The most important and profound concept that the reader may have gained from the material presented in this book so far is that of risk neutrality, which can be summarized as follows: today's price of a (tradable) financial instrument is equal to the discounted expectation of its future price if this expectation is calculated with respect to the risk-neutral probability measure.

Hans-Peter Deutsch, Mark W. Beinker
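The risk-neutrality principle summarized above can be made concrete with a one-step binomial model. The numbers in the usage below (spot 100, up/down factors 1.2/0.9, rate 5%) are illustrative assumptions, not examples from the book:

```python
import math

def binomial_price(s0, u, d, r, T, payoff):
    """One-step binomial illustration of risk-neutral pricing: today's price
    is the discounted expectation of the future payoff, taken under the
    risk-neutral probability q rather than any real-world probability."""
    q = (math.exp(r * T) - d) / (u - d)  # risk-neutral up-probability
    v_up, v_down = payoff(s0 * u), payoff(s0 * d)
    return math.exp(-r * T) * (q * v_up + (1 - q) * v_down)
```

A quick consistency check is that pricing the underlying itself (payoff s ↦ s) returns exactly s0, because q is constructed so that the discounted expected future price of any tradable equals its price today.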

32. Time Series Modeling

Time series analysis aims to develop a model, which describes the time series in all its measurable features.

Hans-Peter Deutsch, Mark W. Beinker

21. Fundamentals

In general, the term risk signifies the possibility that some future event might have negative consequences. Since the future is uncertain, the term risk is tightly connected with the probability or likelihood that an uncertain future event actually becomes real.

Hans-Peter Deutsch, Mark W. Beinker

1. Introduction

The explosive development of derivative financial instruments continues to provide new possibilities and increasing flexibility to manage finance and risk in a way specifically tailored to the needs of individual investors or firms.

Hans-Peter Deutsch, Mark W. Beinker

Chapter 4. Theoretical Framework: On the Behavioral Intention of Online Participation

The description of the activity typologies (cf. Chapter 2.3) has already pointed to the different motives and reasons that lead party members to become engaged within their party.

Annika Döweling

Chapter 12. Influence of Surface Texturing on Friction and Wear

The chapter highlights investigations of the friction-reduction capability of hemispherical dimples of a pre-determined size (3 mm diameter), chosen for the easy availability of the tool (ball-nose end mill) in industrial applications. The dimples were created on EN 31 discs using CNC milling machines. A pin-on-disc tribometer was used to investigate the tribological behavior of surfaces of various texture densities (7.5, 15, and 22.5%) against EN 8 steel under various loads (120, 140, and 160 N) and very harsh lubricating conditions: dry, partial lubrication (54 mL of lubricant supplied dropwise at a flow rate of 1 mL/s), and starved lubrication (10 mL of lubricant spread over the disc before the experiment). A significant 12% reduction of the coefficient of friction (COF) was observed with 15% texture density under 120 N, while the COF increased by 40–60% at a texture density of 22.5% and high loads. In the dry condition there was no significant change in COF, but the specific wear rate decreased by 64.69% at 22.5% texture density. In the present set of experiments, carried out at the lighter load (120 N), both the 7.5 and 15% texture densities exhibited better results than the 22.5% texture density (under partial lubrication). Surface characterization by optical microscopy revealed that the friction reduction of the dimpled surface was primarily due to the dimples' capability to retain lubricant by acting as oil reservoirs, whereas high texture densities intensified the friction.

Shubrajit Bhaumik, Chiradeep Ghosh, Basudev Bhattacharya, Viorel Paleu, Rajeev Kumar Naik, Prayag Gopinath, A. Adithya, Ankur Dhanwant

Chapter 13. Magneto Rheological Fluid Based Smart Automobile Brake and Clutch Systems

The chapter deals with a smart fluid, the magnetorheological (MR) fluid, which is gaining the interest of researchers as its range of applications is vast. The chapter starts with an introduction to MR fluid and its constituents, followed by a detailed study of each constituent. After discussing the present need for MR fluid technology, we discuss the operational modes of MR fluid. MR devices function basically in three operational modes: flow mode, shear mode, and squeeze mode. Every mode possesses its own characteristics in high-performance application systems. Further, mathematical modelling of various rheological parameters and of the MR fluid is carried out; thereafter, the detailed synthesis process and characterization of MR fluid are discussed. Finally, an overview of MR fluid applications is given, and progress in MR brake and clutch systems is discussed in detail.

Rakesh Jinaga, Shreedhar Kolekar, T. Jagadeesha

Chapter 10. Lubrication Effectiveness and Sustainability of Solid/Liquid Additives in Automotive Tribology

In the automotive industry, losses resulting from friction and wear processes are huge; every year almost thirty percent of the economy is consumed by tribological losses. Recent advancements in technology now permit the tribologist to design suitable lubrication techniques that were unachievable in the past. Interest in vapor film deposition, or in adding a thin layer of lubricant with improved physical and chemical properties, has recently grown enormously; however, the suitability of such techniques is still at a developing stage. To control the contact mechanism of sliding/rolling elements in the automotive industry, this work reports the importance of solid/liquid particles in lubrication. The friction and wear behavior of nanoparticle-based thin-film coatings and liquid lubrication techniques is studied against the traditional lubrication concepts of vapor deposition and fluid-film lubrication. Both techniques are necessary for designing lubricating films at the nanometer scale to control the surface properties of materials at nano/micro scales. Further, nanoparticles of self-lubricious materials are also used to prepare laboratory grease, which is compared with traditional industrial grease. The obtained results are discussed in terms of the intrinsic mechanism of sliding/rolling, theories of friction and wear, and the parameters involved in the tribological tests. Potential applications of the prepared vapor-deposition films/nanolubricants include loaded gears, bearings, piston cylinders, etc. The work can also be suitable for other industries where failures occur repeatedly due to frictional losses.

R. K. Upadhyay

Chapter 5. Mead and Blumer: Social Theory and Symbolic Interactionism

The discussion of the Lifeworld as an alternative in the previous chapter arises in a European context, but late in the nineteenth century an alternative discussion was likewise brought into being at American universities, especially the University of Chicago.

Woodrow W. Clark II, Michael Fast

Looking Inside the Black Box: Core Semantics Towards Accountability of Artificial Intelligence

Recent advances in artificial intelligence raise a number of concerns. Among the challenges to be addressed by researchers, accountability of artificial intelligence solutions is one of the most critical. This paper focuses on artificial intelligence applications using natural language, to investigate whether the core semantics defined for a large-scale natural language processing system could assist in addressing accountability issues. Core semantics aims to obtain a full interpretation of the content of natural language texts, representing both implicit and explicit knowledge, using only 'subj-action-(obj)' structures and causal, temporal, spatial and personal-world links. The first part of the paper offers a summary of the difficulties to be addressed and of the reasons why representing the meaning of a natural language text is relevant for artificial intelligence accountability. In the second part, a proof of concept for the application of such a knowledge representation to support accountability is illustrated, together with a detailed example of the analysis obtained with a prototype system named CoreSystem. While only preliminary, these results give some new insights and indicate that the provided knowledge representation can be used to support accountability, looking inside the box.
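The 'subj-action-(obj)' structures with causal, temporal, spatial and personal-world links could be represented, for example, as follows. This is a hypothetical sketch loosely inspired by the abstract; the class, field, and method names are illustrative inventions, not part of CoreSystem's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical 'subj-action-(obj)' knowledge-representation sketch with
# typed links; names are illustrative, not CoreSystem's actual API.
LINK_KINDS = {"causal", "temporal", "spatial", "personal"}

@dataclass
class Eventuality:
    subj: str
    action: str
    obj: Optional[str] = None                     # the object is optional
    links: List[Tuple[str, "Eventuality"]] = field(default_factory=list)

    def link(self, kind: str, other: "Eventuality") -> None:
        # Only the four link kinds named in the abstract are allowed.
        if kind not in LINK_KINDS:
            raise ValueError(f"unknown link kind: {kind}")
        self.links.append((kind, other))

# "The alarm rang, so John woke up." - the causal link makes the
# implicit connection between the two events explicit.
rang = Eventuality("alarm", "ring")
woke = Eventuality("John", "wake_up")
rang.link("causal", woke)
print([(kind, e.subj, e.action) for kind, e in rang.links])
```

A graph of such structures could then be traversed to answer accountability questions such as "which event caused this outcome?".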

Roberto Garigliano, Luisa Mich

A Systematic Approach to Programming and Verifying Attribute-Based Communication Systems

A methodology is presented for the systematic development of systems of many components that interact by relying on predicates over attributes that they themselves mutually expose. The starting point is a novel process calculus, AbC (for Attribute-based Communication), introduced for modelling collective-adaptive systems. It is shown how to refine the model by introducing a translator from AbC into UML-like state machines that can be analyzed by UMC. In order to execute the specification, another translator is introduced that maps AbC terms into ABEL, a domain-specific framework that offers faithful AbC-style programming constructs built on top of Erlang. It is also shown how the proposed methodology can be used to assess relevant properties of systems and to automatically obtain an executable program for a non-trivial case study.

Rocco De Nicola, Tan Duong, Omar Inverso, Franco Mazzanti

How Formal Methods Can Contribute to 5G Networks

Communication networks have been one of the main drivers of formal methods since the 1970s. The dominant role of software in the new 5G mobile communication networks will once again foster a relevant application area for formal models and techniques such as model checking, model-based testing and runtime verification. This chapter introduces some of these novel application areas, specifically for Software Defined Networks (SDN) and Network Function Virtualization (NFV). Our proposals focus on automated methods to create formal models that satisfy a given set of requirements for SDN and NFV.

María-del-Mar Gallardo, Francisco Luque-Schempp, Pedro Merino-Gómez, Laura Panizo

From Dynamic Programming to Programming Science

Some Recollections in Honour of Stefania Gnesi

Stefania Gnesi graduated summa cum laude in Scienze dell’Informazione at the University of Pisa in June 1978.

Ugo Montanari

Computational Process and Code-Form Definition in Design

In the design process, drawing has always preceded the construction phase. The act of drawing, based on basic geometric elements such as lines, curves, surfaces and solids, allows one to organize one's ideas, manage resources and predict results.

Giorgio Buratti

Organic Reference in Design. The Shape Between Invention and Imitation

Historical treatises pursued the search for a rule in geometric laws that envisage the relationship between numbers and forms, fixing the articulation and the measure of architecture. Despite an inevitable inertia, architectural research has always expressed in formal and structural canons the concepts formulated by geometry, which, like any science, evolves in an attempt to explain increasingly complex facts as man's ability to observe the natural world progresses. New geometries coincide with new space-structural conceptions that refer to inspirational models based on the observation of nature: on one hand nature poses questions to be explained, on the other it offers solutions to design problems.

Michela Rossi

Chapter 4. Methods

In order to extend the theoretical understanding of the process of overcoming challenges in the course of founding a venture, and specifically of dealing with failure, the aim is to develop a comprehensive systematization in the form of a process model. This chapter presents and explains the methodology applied for this purpose. Section 4.1, Methodological Approach, begins by justifying the use of a qualitative methodology.

Alexander Goebel

Chapter 2. Theoretical Background

The aim of Chapter 2, Theoretical Background, is the structured and systematic description of the basic structure of the process model for dealing with entrepreneurial challenges. To this end, the first section, 2.1 Venture Creation as a Process, delimits and situates the concept of founding. Furthermore, existing process models and relevant levels of analysis are presented, from which the necessity of, and initial framework factors for, the process model for dealing with entrepreneurial challenges are derived.

Alexander Goebel

Chapter 6. Discussion

In the following part of the work, the results of the study are summarized and discussed along the lines of the research questions, and implications for both research and practice are presented.

Alexander Goebel

Chapter 5. Results

This chapter presents the results of the present study. The order of presentation follows the structure of the research questions from Chapter 3, Research Object: Process Model. In addition, it takes up the structuring introduced in Section 2.3.2, Dealing with Challenges, which, as shown in Figure 6, can be differentiated into a view of actions and reactions on the one hand and of resources on the other.

Alexander Goebel

5. Organic Air Pollutants: Measurement, Properties & Control

In the last few decades, air pollution has emerged as a major threat to human health, and different gaseous and particulate air pollutants have been found to influence the global climate directly or indirectly by altering radiative forcing or cloud microphysical properties. Organic air pollutants contribute a substantial portion of the total pollutant load; in polluted locations, their contribution can reach up to 90% of the total particulate-phase air pollutants, or aerosols. Organic particulate pollutants, or aerosols, can be either primary or secondary in nature and generally change their characteristics significantly upon reacting with atmospheric oxidants such as ozone or hydroxyl radicals. The sources and characteristics of organic air pollutants generally display a wide range of spatiotemporal variability. Several techniques are available for the detection and measurement of organic air pollutants, but the time resolution and the types of organic pollutants detected vary significantly between techniques. Since the characteristics of organic pollutants can change relatively quickly via atmospheric processing, it is desirable to measure these pollutants in real time. Recent advances in mass spectrometric techniques have enabled scientists to understand the evolution of these organic pollutants in real time, leading to more accurate source apportionment and identification of the factors that influence the formation of secondary organic aerosols. In this chapter, readers will get a comprehensive overview of the characteristics of organic air pollutants, their sources, their evolution in the atmosphere, and possible ways to control their abundance.

Abhishek Chakraborty

3. In-situ Measurements of Aerosols from the High-Altitude Location in the Central Himalayas

Aerosols, both natural and anthropogenic, affect the Earth's climate directly through the absorption and scattering of solar radiation, and indirectly by modifying cloud microphysics. Due to the short lifetime of these aerosols, their distribution is non-uniform, and large uncertainties exist in their estimates at global and regional scales. The characteristics of atmospheric aerosols vary largely from one region to another due to spatial and temporal variations in the emission sources, transport, atmospheric transformation, and removal of aerosol particles. Aerosol measurements over the Himalayan region are of crucial importance, as they provide a far-field picture well away from potential sources. Ground-based measurements of aerosols are utilized along with satellite data to explain various aerosol characteristics over the Himalayan region. The roles of different processes such as boundary layer dynamics, meteorology, and regional and long-range transport are assessed. In addition, aerosol variation over the foothills of the Himalayas in the Indo-Gangetic Plain region has also been studied, and the role of boundary layer dynamics and the updraft/downdraft of aerosols is elaborated. The high-altitude Himalayan location is characterized by low aerosol loading, especially in winter, while significant aerosol abundance is observed in spring. In contrast, significant aerosol abundance is observed at the foothills location throughout the year. A strong confinement of aerosols in the foothill region is evident, which leads to a significant enhancement in the surface concentration of aerosols. Interestingly, in the spring season, significant aerosol abundance is also seen over the Himalayan region. Investigation of the mixing-layer depth and the vertical distribution of aerosols over this region in spring reveals the transport and buildup of aerosols from the foothills to the Himalayan region. The role of absorbing aerosols in the radiation budget over the central Himalayan region is also discussed.

Hema Joshi, Manish Naja, Tarun Gupta

20. Measurement, Analysis, and Remediation of Bisphenol-A from Environmental Matrices

Bisphenol-A (BPA) is one of the important emerging contaminants; it has been widely used as a raw material for the preparation of epoxy and polycarbonate. It is mainly present in daily-use products such as the lining of water containers, canned food and beverages, infant bottles, and medical devices due to its heat resistance and elasticity. BPA is an alkyl phenol; its primary route into environmental matrices is by leaching out from final consumer-product containers or during the manufacturing process. The factors responsible for leaching of BPA include temperature, the presence of acids, and storage time. It is an endocrine disruptor and has an adverse impact on humans as well as aquatic organisms. Therefore, many countries have banned its usage, especially in infant bottles and other food containers. Conventional wastewater treatment technologies have been inefficient at degrading these types of persistent compounds. Advanced treatment techniques are sustainable approaches for the removal of persistent compounds from water. Further, the measurement and analysis of BPA and its conjugates requires sophisticated analytical instrumentation. This chapter provides a detailed review of the available methods for the measurement and analysis of bisphenol-A. The chapter also reviews the various treatment technologies for the removal of BPA from environmental matrices.

Sukanya Krishnan, Ansaf V. Karim, Swatantra Pratap Singh, Amritanshu Shriwastav

Methodology for Environmental Learning Based on Material Flow Diagram of Green Multidimensional Bookkeeping System

The purpose of this study was to develop a methodology for environmental learning that enables qualitative understanding of environmental burdens. The tool in the methodology used the Material Flow Diagram of the Green Multidimensional Bookkeeping System (Green MDBS). Green MDBS is an environmental information system that enables bottom-up aggregation of environmental burden data in an individual process. In Green MDBS, all materials are regarded as "potential environmental burdens," meaning that input or output of any material can potentially affect the environment. To accurately reduce environmental burdens, it is important that the types of materials and services are understood prior to quantitative measurements and calculations. Thus, the first step of the Green MDBS is to identify all types of materials and services related to a relevant process. It is suggested that this step could be used as a simulation tool for environmental learning when applied to previous studies. In this paper, a methodology for environmental learning using the Material Flow Diagram is presented, along with some results of implementing it in university classrooms. Challenges faced in applying the methodology to active learning, concerning environmental burdens in daily and economic activities, are further discussed.

Keiko Zaima

How to Describe a Large Business on a Business Board Game: An Illustration of Construction Company

Following a request to gamify a construction company's business model, the authors' group (BASE) discussed whether it was possible to represent such a large industry in a business board game. We defined two principles, "cutoff branch" and "gradualism." By applying them, we succeeded in developing three different game sets called the BASE Construction Game (BCG). We tested all the games at Sirindhorn International Institute of Technology (SIIT), Thammasat University, from February to April 2018. To evaluate their teaching effectiveness, we conducted four questionnaire surveys. The results showed that BCG satisfied all learning goals and can be further improved in the future.

Ryoju Hamada, Kriengsak Panuwatwanich, Tomomi Kaneko, Masahiro Hiji, Kantamas Burunchai, Guntapol Choompolanomakhun, Chattavut Sri-on

Learning Efficacy Among Executives and Students of an Organizational Growth Game

Business games are used for organizational performance interventions as well as for educational purposes. To what extent can games designed for intervention be used for educational purposes (and vice versa)? The authors study the learning efficacy of a game originally designed to support the implementation of the growth strategy of a client organization, a Dutch SME operating on the global market. Data was collected systematically through surveys before and after the game: 1 session with 25 executives from the client company and 2 sessions with 39 students of entrepreneurship. The findings indicate that although the learning efficacy, game quality and enjoyment in both groups are good or average, the differences are significant. The conclusion is that although business games in general are an effective intervention and active learning tool, the influence of contextual factors on learning may be more pronounced among students than among the executives for whom the game was originally designed.

Jessika Weber-Sabil, Harald Warmelink, Alessandro Martinisi, Thomas Buijtenweg, Kevin Hutchinson, Igor Stefan Mayer

Problems in Experiment with Biological Signals in Software Engineering: The Case of the EEG

The electroencephalograph (EEG) signal is one of the most widely used signals in the field of computer science for analyzing the electrical brain waves of software developers and students. In this paper we present initial results of an empirical study on the application of EEG in the measurement of software development activities. We discuss existing methods and the problems of running such experiments in the future. In particular, we focus on the different kinds of limitations implied by modern EEG devices, as well as issues related to the evaluation of the collected data set.

Herman Tarasau, Ananga Thapaliya, Oydinoy Zufarova

Method of Improving the Cyber Resilience for Industry 4.0 Digital Platforms

Cyber resilience is the most important feature of any cyber system, especially during the transition to the sixth technological stage and the related Industry 4.0 technologies: Artificial Intelligence (AI), cloud and fog computing, 5G+, IoT/IIoT, Big Data and ETL, Q-computing, blockchain, VR/AR, etc. We should even consider cyber resilience as primary, because the mentioned systems cannot exist without it. Indeed, without a sustainable formation made of the interconnected components of the critical information infrastructure, it does not make sense to discuss the existence of Industry 4.0 cyber systems. Whereas the cyber security of these systems is mainly focused on assessing the probability of incidents and preventing possible security threats, cyber resilience is mainly aimed at preserving the targeted behavior and performance of cyber systems under the conditions of known (about 45%) as well as unknown (the remaining 55%) cyber-attacks.

Sergei Petrenko, Khismatullina Elvira

A Smart Health-Oriented Traditional Chinese Medicine Pharmacy Intelligent Service Platform

With the national emphasis on traditional Chinese medicine treatments and the development of the modern Internet, people are increasingly showing a strong interest in traditional Chinese medicine, leading to the transformation of traditional Chinese medicine enterprises. The optimisation and innovation of the traditional Chinese medicine pharmacy service has become a hot topic. Therefore, this study combines the advantages of traditional Chinese medicine with Internet technology to build a smart health-oriented traditional Chinese medicine pharmacy intelligent service platform. It integrates hospitals, pharmacies, drug decoction centres, distribution centres and other resources, and forms a traditional Chinese medicine decoction, distribution and traceability system. The system realises the informatisation, automation and standardisation of traditional Chinese medicine pharmacy services. In this study, the platform is implemented using the Internet of Things and the Internet in Nanjing Pharmaceutical Co., Ltd. to provide patients with standard modern drug decoction and distribution services, and to monitor and manage the decoction, distribution and traceability processes.

Lei Hua, Yuntao Ma, Xiangyu Meng, Bin Xu, Jin Qi

Cycle-Consistent Training for Reducing Negative Jacobian Determinant in Deep Registration Networks

Image registration is a fundamental step in medical image analysis. Ideally, the transformation that registers one image to another should be a diffeomorphism that is both invertible and smooth. Traditional methods like geodesic shooting study the problem via differential geometry, with theoretical guarantees that the resulting transformation will be smooth and invertible. Most previous research using unsupervised deep neural networks for registration addresses the smoothness issue directly, either by using a local smoothness constraint (typically a spatial variation loss) or by designing network architectures that enhance spatial smoothness. In this paper, we examine this problem from a different angle by investigating possible training mechanisms/tasks that will help the network avoid predicting transformations with negative Jacobians and produce smoother deformations. The proposed cycle-consistent idea reduces the number of folding locations in predicted deformations without making changes to the hyperparameters or the architecture of the existing backbone registration network. Code for the paper is available at https://github.com/dykuang/Medical-image-registration .
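The folding criterion mentioned above (a negative Jacobian determinant of the predicted transform) can be checked numerically on a displacement field. The finite-difference scheme and the synthetic fields below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Count "folding" locations in a 2-D deformation: points where the
# Jacobian determinant of phi(x) = x + u(x) is negative. Illustrative
# finite-difference sketch, not the paper's implementation.
def count_foldings(u):
    """u: displacement field of shape (H, W, 2)."""
    # np.gradient returns derivatives along axis 0 (y) then axis 1 (x).
    dux_dy, dux_dx = np.gradient(u[..., 0])
    duy_dy, duy_dx = np.gradient(u[..., 1])
    # det of the 2x2 Jacobian of x + u(x).
    jac_det = (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx
    return int((jac_det < 0).sum())

H = W = 32
yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
# A small, smooth synthetic displacement field: should not fold anywhere.
smooth = np.stack([0.1 * np.sin(xx / 5.0), 0.1 * np.cos(yy / 5.0)], axis=-1)
print(count_foldings(smooth))
```

A registration loss can penalize exactly this count (or the negative part of the determinant) to discourage non-invertible predictions.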

Dongyang Kuang

Tunable CT Lung Nodule Synthesis Conditioned on Background Image and Semantic Features

Synthetic CT images with artificially generated lung nodules have been shown to be useful as an augmentation method for certain tasks such as lung segmentation and nodule classification. Most conventional methods are designed as "inpainting" tasks, removing a region from the background image and synthesizing the foreground nodule. To ensure natural blending with the background, existing methods have proposed dedicated loss functions and separate shape/appearance generation. However, spatial discontinuity is still unavoidable in certain cases. Meanwhile, there is often little control over semantic features regarding the nodule characteristics, which may limit the capability of fine-grained augmentation in balancing the original data. In this work, we address these two challenges by developing a 3D multi-conditional generative adversarial network (GAN) that is conditioned on both the background image and semantic features for lung nodule synthesis on CT images. Instead of removing part of the input image, we use a fusion block to blend object and background, ensuring a more realistic appearance. Multiple discriminator scenarios are considered, and three outputs of image, segmentation, and feature are used to guide the synthesis process towards semantic feature control. We trained our method on a public dataset and showed promising results as a solution for tunable lung nodule synthesis.

Ziyue Xu, Xiaosong Wang, Hoo-Chang Shin, Holger Roth, Dong Yang, Fausto Milletari, Ling Zhang, Daguang Xu

Investigating the Effect of Embodied Visualization in Remote Collaborative Augmented Reality

This paper investigates the influence of embodied visualization on the effectiveness of remote collaboration in a worker-instructor scenario in augmented reality (AR). For this purpose, we conducted a user study where we used avatars in a remote collaboration system in AR to allow natural human communication. In a worker-instructor scenario, spatially separated pairs of subjects have to solve a common task, while their respective counterpart is either visualized as an avatar or without bodily representation. As a baseline, a Face-to-face (F2F) interaction is carried out to define an ideal interaction. In the subsequent analysis of the results, the embodied visualization indicates significant differences in copresence and social presence, but no significant differences in the performance and workload. Verbal feedback of our subjects hints that augmentations, like the visualization of the viewing direction, are more important in our scenario than the visualization of the interaction partner.

Kristoffer Waldow, Arnulph Fuhrmann, Stefan M. Grünvogel

Exploring the Use of Immersive Virtual Reality to Assess the Impact of Outdoor Views on the Perceived Size and Spaciousness of Architectural Interiors

It has been widely reported that rooms with larger windows tend to feel more spacious, and previous studies have found a significant impact of the particular external view that a window affords on people’s preferences for its size and shape. However, little is yet well-understood about how what is seen through the window affects either the subjective sense of spaciousness in a room or the apparent metric size of the interior space. We report the results of a two-part experiment with 14 participants that uses HMD-based immersive virtual reality technology to assess the impact of multiple characteristics of outdoor views on both subjective ratings of spaciousness within a room and on action-based judgments of the room size. Across four different outdoor view conditions, spanning day/night and vista distance variations, as well as three different control conditions including the use of frosted glass, substituting a 2D painting for the window, and removing the window altogether, we found no significant differences in participants’ spaciousness ratings. Comparing room size judgments in a subset of the aforementioned conditions, we found a slightly greater underestimation of egocentric distance to the opposing wall when it contained a window onto a distant vista than when the wall was blank, with intermediate results in the case that a painting, rather than a window, was present. We discuss possible explanations for these findings and outline planned follow-up studies.

Megan Zhao, Ariadne Sinnis-Bourozikas, Victoria Interrante

Open Access

ForeSight - Platform Approach for Enabling AI-based Services for Smart Living

In the future, smart home and smart living applications will enrich daily life. These applications are aware of their context, use artificial intelligence (AI), and are therefore able to recognize common use cases reliably and adapt them individually to the current user. This paper describes a concept for such an AI-based platform. The presented platform approach considers different stakeholders, e.g. the housing industry, service providers and tenants.

Jochen Bauer, Hilko Hoffmann, Thomas Feld, Mathias Runge, Oliver Hinz, Andreas Mayr, Kristina Förster, Franz Teske, Franziska Schäfer, Christoph Konrad, Jörg Franke

Open Access

Ubiquitous Healthcare Systems and Medical Rules in COPD Domain

Chronic Obstructive Pulmonary Disease (COPD) is a severe lung illness that causes a progressive deterioration in the function and structure of the respiratory system. Recently, COPD became the fifth cause of mortality and the seventh cause of morbidity in Canada. The advancement of context-aware technology creates a new and important opportunity to transform the standard shape of healthcare services into a more dynamic and interactive form. This research project designs and validates a rule-based ontology-reasoning framework that provides a context-aware system for COPD patients. The originality of the proposed approach consists in its methodology of proving the efficiency of this model on simulated examples of real-life scenarios based on collaborative data analysis, recognized by specialized medical experts.

Hicham Ajami, Hamid Mcheick, Karam Mustapha

An Automated CNN-based 3D Anatomical Landmark Detection Method to Facilitate Surface-Based 3D Facial Shape Analysis

Maternal alcohol consumption during pregnancy can lead to a wide range of physical and neurodevelopmental problems, collectively known as fetal alcohol spectrum disorders (FASD). In many cases, diagnosis is heavily reliant on the recognition of a set of characteristic facial features, which can be subtle and difficult to objectively identify. To provide an automated and objective way to quantify these features, this paper proposes to take advantage of high-resolution 3D facial scans collected from a high-risk population. We present a method to automatically localize anatomical landmarks on each face and align them to a standard space. Subsequent surface-based morphology analysis or anatomical measurement demands that such a method be both accurate and robust. The CNN-based model uses a novel differentiable spatial-to-numerical transform (DSNT) layer that transforms spatial activations to numerical values directly, which enables end-to-end training. Experiments reveal that the inserted layer helps to boost performance and achieves sub-pixel accuracy.
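The core DSNT idea (mapping a spatial activation map to numerical coordinates differentiably) can be sketched in plain NumPy as an expectation over a softmaxed heatmap. This is an illustrative reimplementation with assumed grid conventions, not the authors' exact layer; in a real network the same arithmetic would be written with autograd ops so gradients flow end-to-end.

```python
import numpy as np

# DSNT-style sketch: convert a raw (H, W) heatmap into sub-pixel (x, y)
# coordinates in [-1, 1] by taking the expectation over a softmaxed map.
# Illustrative reimplementation, not the authors' exact layer.
def dsnt(heatmap):
    H, W = heatmap.shape
    p = np.exp(heatmap - heatmap.max())
    p /= p.sum()                                  # softmax over all pixels
    # Normalized pixel-center coordinate grids (an assumed convention).
    xs = (2.0 * np.arange(W) + 1.0) / W - 1.0
    ys = (2.0 * np.arange(H) + 1.0) / H - 1.0
    x = (p.sum(axis=0) * xs).sum()                # E[x] over column marginal
    y = (p.sum(axis=1) * ys).sum()                # E[y] over row marginal
    return x, y

hm = np.zeros((8, 8))
hm[2, 5] = 10.0                                   # sharp peak at row 2, col 5
print(dsnt(hm))
```

Because every operation is smooth in the heatmap values, a coordinate regression loss backpropagates through the whole map rather than through a non-differentiable argmax.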

Ruobing Huang, Michael Suttie, J. Alison Noble

Blockchain in Supply Chain Management: Australian Manufacturer Case Study

The recent explosion of interest around Blockchain, and the capabilities of this technology to track all types of transactions more transparently and securely, motivates us to explore the possibilities Blockchain offers across the supply chain. This paper examines whether Blockchain is a good fit for use in an Australian manufacturer's supply chain. To address this, the research uses the Technology Acceptance Model (TAM) as a framework from the literature. Blockchain allows us to have permissioned or permissionless distributed ledgers where stakeholders can interact with each other. The paper details how Blockchain works and the mechanism of hash algorithms, which allows for greater security of information. It also focuses on supply chain management and looks at the intricacies of a manufacturer's supply chain. We present a review of the processes in place at an electrical manufacturer and the problems faced in its supply chain, and propose a model using public and private Blockchains to overcome these issues. The proposed solution has the potential to bring greater transparency and validity across the supply chain and to improve communication between the stakeholders involved. We also point out some potential issues that should be considered when adopting Blockchain.
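The hash-chaining mechanism the paper alludes to can be sketched minimally: each block stores the hash of its predecessor, so altering any earlier record invalidates every later hash. The function and field names below are illustrative, not taken from the paper.

```python
import hashlib
import json

def block_hash(index, prev_hash, data):
    """Hash a block's contents with SHA-256.

    Any change to the data or to the predecessor's hash changes this
    hash, which is what chains blocks together tamper-evidently.
    """
    payload = json.dumps(
        {"index": index, "prev": prev_hash, "data": data},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

# A two-block toy chain: block 1 embeds block 0's hash.
h0 = block_hash(0, "0" * 64, "genesis shipment record")
h1 = block_hash(1, h0, "pallet received at warehouse")
```

Verifying the chain means recomputing each hash from the stored contents; a single altered record breaks every subsequent link, which is the transparency and security property the supply-chain use case relies on.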

Elias Abou Maroun, Jay Daniel, Didar Zowghi, Amir Talaei-Khoei

Preference Feedback for Driving in an Unfamiliar Traffic Regulation

Driving under an unfamiliar traffic regulation is associated with difficulties in adjusting to the new conditions and rules. Providing feedback in a proper way can help drivers overcome such difficulties. This paper explores the most preferred feedback modality, feedback presentation time, and frequency of feedback presentation when turning left at a roundabout under an unfamiliar traffic regulation, namely a keep-left regulation. Driving in a roundabout involves navigation, speed, and signal indication. Thirty-five participants who were not familiar with the Australian traffic regulation (i.e., keep-left with a right-hand-drive vehicle) answered an online survey. We found that visual feedback is the most preferred modality in all driving tasks related to driving at a roundabout, and that concurrent feedback is the most preferred presentation time. There is no particular preferred frequency for presenting the feedback. Our findings can help in designing feedback systems to assist drivers in such driving conditions.

Hasan J. Alyamani, Annika Hinze, Stephen Smith, Manolya Kavakli

Assessment of the Pollination Ecosystem Service Provided by Urban Ecosystems in Bulgaria

The honeybee is the most important insect pollinator, and the service it provides is among the most important regulating ecosystem services. The main problem in assessing the ecosystem services of urban habitats in Bulgaria is the lack of inventory data; only aggregated data at the municipality level on the number of beehives are available. The main objective of this work is to present an approach for assessing the pollination service provided by urban ecosystems in Bulgaria and the results of its mapping at the national scale. The approach relies on two spatially explicit indicators based on parameters such as the density of bee families and minimal flying coverage. Following the suggested matrix approach for spatially explicit ecosystem service assessments, pollination supply capacities were assessed for all municipalities in Bulgaria, using statistical data on beehive holdings and colonies per municipality for the years 2010 and 2016. The results of the assessment were used to generate maps of the pollination supply capacity of urban ecosystems in Bulgaria. These provide appropriate information about the spatial distribution of this service throughout the country, which can be used for regional planning. The high sensitivity of the sector, as well as its importance for the environment and the economy, requires a careful and long-term state policy.
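The two indicators can be sketched as simple per-municipality calculations. The 3 km foraging radius and the example municipality figures below are illustrative assumptions for demonstration, not values from the study.

```python
import math

def bee_family_density(colonies, area_km2):
    """Bee families (colonies) per square kilometre of a municipality."""
    return colonies / area_km2

def minimal_flying_coverage(colonies, area_km2, radius_km=3.0):
    """Share of the municipal area within foraging range of the colonies.

    Assumes each colony covers a circular foraging area of the given
    radius (3 km is an assumed value) and caps the share at 1.0, since
    overlapping circles cannot cover more than the whole area.
    """
    covered = colonies * math.pi * radius_km ** 2
    return min(1.0, covered / area_km2)

# Hypothetical municipality: 120 colonies on 400 km2
density = bee_family_density(120, 400)
coverage = minimal_flying_coverage(120, 400)
```

In a matrix-style assessment, such indicator values would then be binned into capacity classes per municipality and mapped, which is how national-scale supply maps like those described above are typically produced.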

Mariyana Lyubenova, Stoyan Nedkov, Miglena Zhiyanski, Georgi Popchev, Petar Petrov

Erosion Control Service of Forest Ecosystems: A Case Study from Northeastern Turkey

Erosion is one of the most significant environmental problems in Turkey and many other regions of the world. Appropriate erosion control services can help reduce soil loss and maintain ecosystem services (ES), and forests play a crucial role in this process, as they are very effective at erosion control when properly managed. This chapter depicts a case of erosion control service in a forest ecosystem in northeastern Turkey by assessing statistical relationships of soil properties with forest inventory data, drawing on field observations, direct measurements, and calculated data on growing stock, basal area, and soil erodibility (K-factor) from 108 forest plots. We found several significant correlations between those factors; in particular, tree density, basal area, stand age, layered forest structure, stand height, undergrowth, and species composition, along with some ecological parameters, proved to be useful indicators for a quick assessment of the erosion control ES of forests. Erosion rates could be reduced by increasing the number of trees per unit area through smart forest management. Optimum species composition can apparently be achieved through the presence of broadleaved trees as an ES indicator: because mixed forests generally had lower silt content in their soil, they seem to be less prone to erosion processes. This case study helped identify site-specific key indicators for assessing erosion control ES as well as potential mitigation strategies for forest ecosystems in northeastern Turkey. It also showed that a single proxy indicator might not sufficiently represent such complex processes; a bundle of indicators may yield more accurate estimates. For a more general assessment, sound ES indicators still need to be developed at the regional or national level so that decision-makers and practitioners can make wise decisions and proper land allocations.

Can Vatandaşlar, Mehmet Yavuz, Michael Leuchner

Chapter 3. Locating BRICS Development Strategies in Global Development Policy Narratives

This discussion focuses on whether the Brazil-Russia-India-China-South Africa (BRICS) alternative framings of International Development Aid (IDA) differ from the dominant Northern liberal and neoliberal development narratives and policy settings. Are the BRICS, as emerging powers/donors in the new ‘Beyond Aid’ debate, in the process of constructing a new development paradigm? Beyond Aid is linked to South–South cooperation that ostensibly charts a new development path for both BRICS and the Forum on China-Africa Cooperation (FOCAC). The analysis reveals that BRICS official development narratives and policies remain embedded in global systemic realities of North–South domination, linked to multilateral institutional dynamics. BRICS development poses risks of entrenching and recreating patterns of economic co-dependency and exploitation in the South, particularly in Africa.

Lisa Thompson

Dairy Cow Tiny Face Recognition Based on Convolutional Neural Networks

In practical applications of cow face recognition, accuracy is often lower than expected because of the camera’s low resolution and position. In this paper, we aim to develop and pilot a method for improving recognition accuracy and recovering identity information by generating cow faces close to the real identity. Specifically, our network architecture consists of two parts: a super-resolution network for recovering a high-resolution cow face from a low-resolution one, and a face recognition network. The super-resolution network is cascaded with the recognition network, and an alternating training strategy was introduced to ensure the stability of the training process. We collected a cow face dataset containing 85,200 dairy cow face images from 1,000 subjects. Experimental evaluations demonstrate the superiority of the proposed method, which achieved 94.92% recognition accuracy on small-size (12 × 14) cow faces.

Zehao Yang, Hao Xiong, Xiaolang Chen, Hanxing Liu, Yingjie Kuang, Yuefang Gao

Chapter 7. Effective Engagement of Digital Natives in the Ever-Transforming Digital World

The whole business world is undergoing continuous transformation due to the innovations taking place in business processes and models as part of digital transformation. Earlier worries about ‘digital disruption’ no longer trouble anyone, and everyone is in a race to leverage the changes and opportunities of digital technologies for the benefit of their organization and people. The percentage of ‘digital natives’ among various stakeholder groups, such as consumers and employees, is increasing phenomenally and poses a growing challenge to business organizations in building loyalty and commitment among the people concerned. The intentions, behavior, and involvement of digital natives in digital platforms and their use are characteristically different from those of others, such as ‘digital immigrants’ or mere ‘digital literates.’ This chapter attempts to throw some light on how to manage digital natives as both internal and external stakeholders of the business. The authors elaborate on various ways and means, including the extent to which the advantages of social media can be applied, for engaging digital natives both as consumers and as employees. The skills and abilities employees need to make a successful digital transition are also described.

Anju Varghese Philip, Zakkariya K. A.

Chapter 15. Digital Technology to Enhance Project Leadership Practice: The Case of Civil Construction

Digital transformation is fundamentally influencing all aspects of business and society. It can enable individuals and organizations to transcend from one way of working to another. In construction, emphasis is shifting from project management to project leadership, as professionals need assisting tools not only to aid in managing tasks and activities but also in leading people. As such, the adoption of digital technologies may hold the key to taking project leadership practice into the future. To date, there have been few attempts to explore the potential of digital technology as an aid for project leadership development and practice. This research therefore investigates this potential in the context of the Australian civil construction industry. We review the existing body of knowledge on both industry-specific leadership demands and relevant digital technology capabilities to identify areas of improvement and to guide interviews with construction project managers. So far, the literature has focused on endorsing particular leadership behaviors and/or styles while ignoring the difficulties professionals face in practicing these behaviors. We find that project managers of civil contractors within Sydney understand the significance of leadership but are often overwhelmed by its complexity.

John Ekechukwu, Thorsten Lammers

Chapter 14. Risk Management in the Digital Era: The Case of Nigerian Banks

This study investigates how the banking sector is exploiting technology to reduce missed opportunities and realized risks, and whether technology has been exploited to integrate different systems by collecting and analyzing massive volumes of data from an unlimited number of sources across multiple locations. Our research question was answered through a content analysis of the integrated reports of selected Nigerian banks, focusing on the manner in which risks have been captured. We content-analyzed this section of the integrated reports to determine whether the Nigerian banks concerned have deployed technology, and whether advanced technology could have been deployed to reduce missed opportunities and realized risks.

Tankiso Moloi, Oluwamayowa Olalekan Iredele

Chapter 6. Contribution of Atmospheric Reactive Nitrogen to Haze Pollution in China

Reactive nitrogen (Nr) plays a significant role in atmospheric chemistry and is closely related to environmental and climate change. For example, ammonia, amines, and nitrogen oxides are involved in aerosol formation and have significant environmental implications, including regional haze pollution, acid deposition, and eutrophication. In addition, nitrate and ammonium are major components of atmospheric particulate matter, contributing approximately one-third of PM2.5. Although the concentration of amines in the atmosphere is probably two or three orders of magnitude lower than that of ammonia, amines can significantly assist the growth of both neutral and ionic clusters. The goal of this chapter is to show the role of Nr in haze pollution and the importance of Nr mitigation measures. The chapter first introduces the mechanisms of haze formation related to atmospheric Nr and then discusses the contribution of Nr to PM2.5 pollution across China. Finally, the effects of Nr mitigation on PM2.5 pollution are evaluated, and possible future measures are suggested.

Yuepeng Pan, Yang Zeng, Shili Tian, Qianqian Zhang, Xiaying Zhu

Chapter 2. Anthropogenic Emissions of SO2, NOx, and NH3 in China

Since 2010 China has contributed approximately 30% of SO2, 24% of NOx, and 20% of NH3 global anthropogenic emissions, which has caused severe air pollution and led to adverse impacts on human health and ecosystems. Reliable emission estimation for SO2, NOx, and NH3 from anthropogenic sources is essential for both understanding the sources of air pollution and designing effective air pollution control measures. In this chapter, long-term anthropogenic emissions of SO2, NOx, and NH3 in China, their driving forces, and underlying uncertainties are analyzed systematically. Emissions of SO2 and NOx have significantly decreased as a consequence of stringent clean air policies implemented in China in recent years. National emissions of SO2 and NOx decreased by 62% and 17% during 2010–2017, respectively. Emission control measures are the main drivers of these reductions, among which pollution controls on power plants and industries are the most effective mitigation measures. The total NH3 emissions in China increased from 5.9 to 11.1 Tg from 1980 to 1996, driven by increasing demand for meat and enhanced crop yields, and then decreased to 9.7 Tg in 2012. The two major contributors were livestock manure and synthetic fertilizer application, which contributed 80–90% of total NH3 emissions. Emission estimates from various investigations are compared from bottom-up and top-down perspectives. Finally, we suggest future directions for accurate emission estimates and improved design of air pollution control policies in China.

Qiang Zhang, Yu Song, Meng Li, Bo Zheng

Chapter 2. Corporate Tax Management and Chinese Enterprises

This chapter reviews past work on corporate tax management. Within the broad framework of corporate tax management, the extant theoretical literature addresses three issues that this book examines. Past empirical studies of corporate tax management in modern corporations are dealt with next, with a focus on three specific features of China’s market: government ownership, corruption, and marketization. These reviews help identify the research gaps that need to be filled and the research required to elucidate the China context. The literature review and the knowledge gaps identified pave the way for the analytical chapters that follow.

Chen Zhang, Rajah Rasiah, Kee Cheok Cheong

Chapter 1. Introduction

China’s experience with economic transition from central planning to a more market-oriented economy is unique; Vietnam, starting almost a decade later, is the only other country whose experience closely resembles China’s. Because of the gradualist approach adopted—Deng Xiaoping’s famous characterization of “feeling the stones to cross the river”—parts of the economy remained unreformed at any given time.

Chen Zhang, Rajah Rasiah, Kee Cheok Cheong

Chapter 4. Economic Reforms and Market Outcomes over Time

Taxation is a significant cost borne by firms, affecting their decision-making on the magnitude and structure of output, the disposal of net profit, and the direction of capital investment, among many other things. Thus, reducing the corporate tax burden has become a powerful motivational force in corporate conduct. Indeed, corporate tax management has emerged as an important financial strategy desired by shareholders to improve firm value, and corporations hire expert accountants for this purpose.

Chen Zhang, Rajah Rasiah, Kee Cheok Cheong

14. Agility in Practice – Project-Based Implementation of Organizational Change

Agility is not a new philosophy that was only developed in the context of digitalization. That agility is newly associated with advancing digitalization, a high pace of innovation, or a supposedly rapidly changing market seems rather to be an idea of trainers and consultants. Fundamentally, agility has nothing to do with these influences. Yes, in the future we will increasingly replace manual processes with IT solutions. New technologies such as robotics, AI, and blockchain will give us entirely new possibilities to develop products and solutions that benefit people. In fact, however, it is not these technical developments that demand a rethink in the executive suites. Rather, as early as 2001 a group of software developers recognized that collaboration within a project, and especially interaction with the customer, had to be redefined. Openness, fairness, and partnership take center stage. In recent years, it has also become apparent that people are setting different priorities in their working lives. They want to participate, shape, take responsibility, fill open spaces with ideas, and reconcile work and leisure. They want to work in a way that is called agile. This presupposes a leadership culture in which people work toward common goals as equals. Does a company have to be set up agilely to master the digital transformation successfully? No! But it helps and makes many things easier.

Wolfram M. Walter

A Toolkit for the Holistic Analysis of Work Ability – Part I of a Participatory Approach

Today, health promotion is mostly realized in separate occupational health management (OHM) efforts, without being embedded in the company’s overall strategy and/or having a monetarily oriented character. The so-called Work-Life-Health (WLH) strategy is therefore intended to show how companies can use a 2x4-point plan to realize efforts to maintain work ability and promote health alongside balanced values management, so that all areas of an ethically as well as monetarily oriented corporate governance can coexist in balance and interact with one another. To this end, this first part presents four steps that initially capture the status quo of the views on work, life, and health existing in the company. An evaluation based on key figures follows, with recommendations for action derived from it, which at the same time create the transition to the second part of the participatory approach (see Chapter 11).

Frauke Remmers, Martin Ulber

23. A Practice Report from a Default Metering Point Operator on the Introduction of Smart Metering Systems

The introduction of smart metering systems confronts metering point operators in the energy industry with a multitude of challenges. The Act on the Digitalization of the Energy Transition provides a regulatory framework. Implementation must be shaped company-specifically in compliance with regulatory, technical, procedural, and economic requirements. This contribution deals with the challenges, the approach chosen at Stromnetz Hamburg as the default metering point operator for electricity, and the insights gained in the process.

Manfred Stübe, José González

49. Asset Management – Managing Supply Networks Digitally

Digital asset management is an indispensable competence of modern network operators. In recent years, Pfalzwerke Netz AG has developed and implemented a high-quality system.

Marc Mundschau, Ingolf Quint

46. Robotic Process Automation in the Energy Industry

After the industrial revolution, the digital revolution (Industry 4.0) is now in full swing. While the industrial revolution focused machine support primarily on production processes, the development of the PC increasingly extended the scope of application to commercial processes (e.g., with the introduction of enterprise resource planning systems). What often remained, and still remains, are interfaces to other systems and manual activities for which developing and adapting the existing software applications often has not been, and is not, worthwhile. This is exactly where Robotic Process Automation (RPA) comes in. According to a Forrester report (cf. Matzge 2019), RPA is among the top ten most important IT trends for chief information officers. Why that is, what exactly RPA is, and what concrete fields of application exist, particularly for energy supply companies, is set out in the following contribution.

Marcus Krüger, Ingmar Helmers

10. The Role of IT for Utilities 4.0

The growing dynamism of the market, the technological possibilities of digitalization, and rising customer expectations are leading to an enormous upheaval for Utilities 4.0. Regardless of what the individual business model may look like in the future, IT is becoming a determining factor of business activity. IT is business and business is IT. It is not only an internal service provider or driver of innovation; it is rather the central linchpin for optimizing existing business activity and building future business. Between the usability of the Internet of Things and the security requirements of operational technology, corporate IT must synchronize the most diverse requirements. New concepts in all areas of energy supply – such as virtual power plants, smart home and smart energy, grids, and sector coupling (power-to-X) – require linking data from the most diverse sources. Here, IT will be the backbone for providing the company with flexible and innovative answers and for enabling profitable business models in the future as well.

Olaf Terhorst, Marcus Warnke

A Toolkit for Realizing Future-Oriented Work Ability – Part II of a Participatory Approach

The requirements, models, and toolkits for work ability presented so far have shown that only an integrative view of all components is promising. In this context, Chapter 9 has already outlined the first steps on the way to a work-life-health balance, flanked by the development of a set of measures (Chapter 10) and the general requirements for a toolkit for analyzing work ability (Chapter 8). The second part of the presentation of a work-life-health-oriented corporate strategy begun in Chapter 9 now addresses how the previously located values of the three areas of work, life, and health can be concretely translated into measures and quantifiable values and applied. To this end, the further four steps present approaches to a planning, steering, and evaluation process. On the one hand, this process shows concrete measures that the company can offer its employees in the various fields. On the other hand, it contains considerations regarding a holistic controlling approach in the form of a work-ability-focused balanced scorecard.

Frauke Remmers, Bianca Zorn

Chapter 7. Exercises and Solutions

The following exercises serve to review and discuss the contents of the individual chapters. In the courses we have taught, it has proven effective to have course participants work on the exercises in small groups at the end of each course day and present them at the beginning of the following course day. In this way, continuous review is ensured even during a block course, and creative interaction is encouraged.

Gunther Friedl, Burkhard Pedell

Z

A country’s balance of payments is a record of all economic transactions between residents and non-residents in one year. Residents and non-residents here refer to both individuals and companies.

Wolfram Klitzsch

Chapter 6. Integrated Controlling with SAP Software

The previous chapters have shown how a system of operational internal profitability analysis can be set up within SAP ERP and used to support decisions. However, the subject matter of the business field of controlling goes further. This chapter addresses the subject of controlling and shows to what extent SAP, through its product ERP and other software applications, meets the requirements of modern controlling.

Gunther Friedl, Burkhard Pedell

C

Cash flow refers to the flow of cash and documents a company’s incoming and outgoing payments. A cash flow statement (liquidity or cash report) shows whether the company is solvent at all times.

Wolfram Klitzsch

Chapter 5. Implementing Profitability and Market Segment Accounting in SAP

To monitor the company’s performance and to prepare product range decisions, you now carry out a multi-level contribution margin analysis. The aim is to prepare and evaluate the fixed and variable components of incurred costs according to the respective information needs. For simplicity, this case study assumes that all overhead costs are fixed, i.e., there are no variable overhead costs.
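A multi-level contribution margin scheme can be illustrated with a short sketch; all figures below are hypothetical, and, in line with the case study's simplification, all overhead costs are treated as fixed.

```python
# Hypothetical figures for a two-level contribution margin scheme
revenue = 100_000.0
variable_costs = 40_000.0          # variable product costs
product_fixed_costs = 25_000.0     # fixed costs attributable to the product
company_fixed_costs = 20_000.0     # remaining fixed overhead (all overhead is fixed)

# Level I: revenue minus variable costs
cm1 = revenue - variable_costs
# Level II: after deducting product-attributable fixed costs
cm2 = cm1 - product_fixed_costs
# Operating result: after deducting company-level fixed costs
operating_result = cm2 - company_fixed_costs
```

Each level shows which costs a product or segment still covers, which is the information a product range decision needs: a product with a positive contribution margin I may be worth keeping even if its operating result is negative.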

Gunther Friedl, Burkhard Pedell

Chapter 4. Implementing Product Costing in SAP

To determine the full cost per ornamental fountain, you carry out a cost object unit accounting so that you can make well-founded statements about your operational cost structure. In the end, determining the full cost of the fountains is, first, about valuing your quantitative production process at the product level. Second, you must additionally account for the administrative activities this requires in your cost accounting.

Gunther Friedl, Burkhard Pedell

Chapter 3. Implementing Cost Center Accounting in SAP

As your first concrete activity in the SAP system, you now tackle the implementation of cost center accounting for the new subsidiary Deutsche Zierbrunnen GmbH. The data of the current month, which you have already evaluated manually (see the case study in Chapter 2), serve as a reference for the successful execution of the individual implementation steps.

Gunther Friedl, Burkhard Pedell

Chapter 1. Fundamentals of SAP ERP

This chapter explains the fundamentals of SAP ERP. It shows how cost and revenue accounting is mapped in SAP ERP and describes the basic structure of the Controlling (CO) module. Initial hints for working with the system round off the chapter.

Gunther Friedl, Burkhard Pedell

Chapter 2. Case Study: Deutsche Zierbrunnen GmbH

This chapter describes the case study that forms the basis for the subsequent chapters. The structures presented here are later implemented in SAP ERP. If the book is used as course material, it is advisable to first familiarize the course participants very thoroughly with the case study, so that understanding the business background no longer poses any difficulties during its implementation.

Gunther Friedl, Burkhard Pedell

B

A bad bank is founded purely as a wind-down institution and takes over the largely worthless securities and loan claims of the (normal) bank, so that the normal bank can continue operating with sound securities and normal business.

Wolfram Klitzsch

The Shape of Things to Come

In the summer of 1999 two economists, David and Jürgen, were strolling along the lakeshore promenade of Lake Mendota in Madison, Wisconsin. It was a beautiful, serene summer evening. That afternoon they had presented a paper at the Econometric Society Summer Meetings on the influence of industry knowledge conditions, firm size, and ownership structures on innovation and investment. The topic had attracted some prominent scholars in the field, who discussed the paper vividly and provided valuable comments. Walking at a leisurely pace, David and Jürgen reflected on the session and on how to use the feedback received to improve their paper for submission (after many revisions, this ESSM paper was published as Audretsch and Weigand (2005)).

Martin Prause, Jürgen Weigand

Festschrift to David B. Audretsch

The authors, two scholars from the Basque region of Spain, use their chapter to highlight David’s career, his contribution to the founding of the entrepreneurial research field, and his impact upon their own careers. Particularly noting David’s influence in convincing them to work more closely with local economic development actors, the authors highlight the importance that David has placed on linking theory with practice. This theme flows throughout the chapter.

J. L. González-Pernía, Iñaki Peña-Legazkue
Image credits