
Chapter 5. Intelligent Interaction in Accessible Applications

Advances in artificial intelligence over the past decade, combined with increasingly affordable computing power, have made new approaches to accessibility possible. In this chapter we describe three ongoing projects in the Department of Computer Science at North Carolina State University. CAVIAR, a Computer-vision Assisted Vibrotactile Interface for Accessible Reaching, is a wearable system that aids people with vision impairment (PWVI) in locating, identifying, and acquiring objects within reach; a mobile phone worn on the chest processes video input and guides the user’s hand to objects via a wristband with vibrating actuators. TIKISI (Touch It, Key It, Speak It), running on a tablet, gives PWVI the ability to explore maps and other forms of graphical information. AccessGrade combines crowd-sourcing with machine learning techniques to predict the accessibility of Web pages.

Sina Bahram, Arpan Chakraborty, Srinath Ravindran, Robert St. Amant
A Novel Scheme for Enhancing Quality of Pictures

Cameras have hardware limitations, such as a limited depth of focus, and pictures are often taken under adverse situational factors: motion of the camera, motion of the object, or a moving background. In such cases the captured image may not be fully in focus everywhere, and some objects in it may be blurred or unclear. This can be overcome with an image fusion strategy, in which the resulting fused image has good quality and preserves the essential information and features. Medical imaging faces analogous limitations: highly expensive machines each capture only a certain kind of structure in an image, some resolving soft tissue while others resolve bone, and in such scenarios quality enhancement frameworks speed up diagnosis and support research. Application areas include object detection, photography, medical imaging, remote sensing, surveillance, and more. In this paper the authors present a novel framework for producing high-quality pictures based on the honey badger optimization scheme and a deep convolutional neural network. The proposed framework was tested on a multi-focus color image dataset, and comparative analysis with recently developed picture quality enhancement strategies demonstrates the superiority of the proposed technique. Quality was assessed with mutual information (MI), the universal quality index (UQI), and the structural similarity index measure (SSIM).
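
A minimal sketch of how two of the reported metrics could be computed with standard Python libraries; the file names are hypothetical placeholders, and this is not the authors' implementation.

```python
# Sketch: computing SSIM and mutual information (MI) between a fused
# image and a reference image, two of the quality metrics the paper uses.
# "reference.png" and "fused.png" are hypothetical file names.
import numpy as np
from skimage import io, color
from skimage.metrics import structural_similarity as ssim
from sklearn.metrics import mutual_info_score

ref = color.rgb2gray(io.imread("reference.png"))   # reference image, floats in [0, 1]
fused = color.rgb2gray(io.imread("fused.png"))     # fused result

# SSIM on the grayscale images
ssim_value = ssim(ref, fused, data_range=1.0)

# MI from the joint distribution of quantized intensities
bins = 64
ref_q = np.digitize(ref.ravel(), np.linspace(0, 1, bins))
fused_q = np.digitize(fused.ravel(), np.linspace(0, 1, bins))
mi_value = mutual_info_score(ref_q, fused_q)

print(f"SSIM = {ssim_value:.4f}, MI = {mi_value:.4f} nats")
```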

Vineeta Singh, Vandana Dixit Kaushik
Revisiting the Recent Advancements in the Design and Performance of Solar Greenhouse Dryers

The agriculture sector is the backbone of India's economy and shows a strong correlation between economic growth and agricultural development. There is a need for new and effective technologies that can improve the efficiency and profitability of farming systems. One of the most common such technologies is the greenhouse. A greenhouse, also referred to as a glasshouse or a hothouse, is a structure made of glass and designed to grow plants that require particular environmental conditions. The main objective of this type of system is to provide a good environment for plants to grow well. Although centuries old, this technology is still widely used in India, where the semi-arid climate gives greenhouse cultivation immense potential. This study reviews various designs of greenhouse structures that can be utilized by farmers.

Anil Singh Yadav, Abhay Agrawal, Amit Jain, Rajiv Saxena, Manoj Kumar, Abhishek Sharma, Sonali Singh
Laser-Induced Spark Ignition of Methane-Air Mixtures in Constant Volume Combustion Chamber

The laser-induced spark ignition (LISI) of methane (CH4)-air mixtures was studied experimentally in a constant volume chamber. The experiments were carried out with a nanosecond pulsed Nd:YAG laser at a wavelength of 1064 nm. A piezoelectric pressure transducer coupled to a DAQ system measured the pressure–time history. The chamber has four diametrically opposed optical windows: two for laser beam entry and exit and two for optical diagnostics. Experiments were performed at initial chamber pressures of 2.5, 5.0, and 7.5 bar and a chamber temperature of 298 K. The minimum pulse energy and the breakdown threshold intensity required for breakdown of both pure air and methane decrease as the initial chamber pressure increases from 1.0 to 8.0 bar. The pressure–time (p–t) history was recorded for different equivalence ratios (0.6 to 1.4) and different initial chamber pressures. The peak pressure was observed at ϕ = 1.0 for all initial chamber pressures; as the chamber pressure increases from 2.5 to 7.5 bar, the peak pressure rises and the time to attain it shortens. The minimum pulse energy needed to burn the methane-air mixture was observed at ϕ = 1.0, and it decreases as the chamber pressure increases from 2.5 to 7.5 bar. The analyzed data will be useful for studying the behavior of laser ignition of methane-air mixtures and for developing commercial LISI engines in the future.
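
For readers unfamiliar with the equivalence ratio ϕ used above, a short worked computation for methane-air from standard combustion stoichiometry (general background, not data from the paper):

```python
# Equivalence ratio phi for a methane-air mixture:
#   phi = (F/A)_actual / (F/A)_stoichiometric.
# CH4 + 2(O2 + 3.76 N2) -> CO2 + 2 H2O, so 9.52 mol air per mol CH4.
M_AIR, M_CH4 = 28.97, 16.04          # molar masses, g/mol
afr_stoich = 9.52 * M_AIR / M_CH4    # stoichiometric air-fuel mass ratio, ~17.2

def phi(fuel_mass, air_mass):
    """Equivalence ratio: >1 is rich, <1 is lean, 1.0 is stoichiometric."""
    return (fuel_mass / air_mass) * afr_stoich

print(phi(1.0, afr_stoich))          # exactly stoichiometric -> 1.0
print(phi(1.0, afr_stoich / 0.6))    # leaner mixture -> 0.6
```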

Prashant Patane, Vishal Kolapte, Milankumar Nandgaonkar, Subhash Lahane
Design of Mobile Monitoring System for Natural Resources Audit Considering Risk Control

To better safeguard the health of natural resources and the environment and to avoid the risk of environmental pollution, this paper presents a design method for a mobile monitoring system for natural resources auditing that takes risk control into account. It optimizes the system's hardware structure, its software functions and operating process, and the methods for identifying and controlling natural resource risks, and it constructs management indices for mobile natural resources audit monitoring. Experiments show that the resulting system is highly practical in real applications and fully meets the research requirements.

Huang Meng, Xuejing Du
Network Information Security Risk Assessment Method Based on Machine Learning Algorithm

Current methods for assessing computer network information security risk suffer from low assessment accuracy, which seriously limits their usefulness. To address this problem and improve both the assessment of network information security risk and the overall level of network information security, this paper designs an assessment method based on a machine learning algorithm. The method describes the risk calculation form, extracts performance characteristics of network information, identifies network risk factors, draws conclusions through logical reasoning, adopts network risk control and defense measures, uses a machine learning algorithm to build a security system model, and optimizes the security risk assessment mode. Experimental results show that the method reaches a peak accuracy of 95.612%, indicating that combining the machine learning algorithm makes the assessment method more practical.

Ruirong Jiang, Liyong Wan
Chapter 5. Peace, Pandemics, and Conflict

This chapter explores the nexus between peace, pandemics, and conflict. It begins by discussing the role that disease has had in shaping human history. The risk of a natural pandemic becoming an extinction-level threat to humanity is then assessed. Pandemics’ effects on making conflict more likely are identified as an under-researched area. Peacebuilding opportunities alongside pandemic preparedness and response are discussed. The chapter then concludes with a discussion of critical questions for future research on the relationships between peace, pandemics, and conflict.

Noah B. Taylor
Digital Management System of Library Books Based on Web Platform

To address the slow response and limited functionality of existing library book management systems, this paper designs a digital library management system based on a web platform. The hardware is built around an AT91SAM9263 chip, and the digital management system is established on the web platform. In the software, after partitioning the grid space for digital book management, a virtual-space management mode for books is designed using virtualization technology, and user interest indicators are introduced to recommend book resources in a personalized way. System tests show that under high concurrent load the response time of the designed system stays below 300 ms and the book recommendation error rate stays below 5%, a clear performance improvement.

Xing Zhang
Design of Mobile Monitoring System for Tower Crane in Assembly Construction Based on Internet of Things Technology

The mobile monitoring systems used for tower cranes in assembly construction suffer from a high data loss rate, so a mobile monitoring system for assembly tower cranes based on Internet of Things (IoT) technology is designed. In the hardware, a 32-bit data bus is adopted and a standard high-definition multimedia interface is integrated. In the software, the principle of space geometry is used to construct an anti-collision model for the tower group, terminal parameters of the tower crane safety monitoring system are transmitted, the remote communication protocol for assembly building construction is optimized with IoT technology, and the functions of the mobile monitoring system are set up. Experimental results show that the average data loss rates of the designed system and two comparison systems are 27.871%, 37.807%, and 37.452%, respectively, demonstrating that incorporating IoT technology lowers the previously high loss rate.
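
In its simplest planar form, the "space geometry" anti-collision idea reduces to checking whether the circles swept by two jibs overlap; a toy sketch of that check (geometry only, not the paper's model, which would also account for heights and slewing angles):

```python
import math

# Toy sketch of a planar tower-group anti-collision check: two cranes can
# interfere if the circles swept by their jibs overlap. Illustrative only.
def jibs_may_collide(base_a, jib_a, base_b, jib_b) -> bool:
    distance = math.dist(base_a, base_b)      # distance between tower bases
    return distance < jib_a + jib_b           # swept circles overlap

print(jibs_may_collide((0, 0), 60.0, (100, 0), 55.0))  # True: 100 < 115
print(jibs_may_collide((0, 0), 60.0, (130, 0), 55.0))  # False: 130 > 115
```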

Dongwei Zhang, Shun Li, Hongxu Zhao
Research on Intelligent Prediction of Power Transformation Operation Cost Based on Multi-dimensional Mixed Information

Operation cost is an important element in the operation of power enterprises, but intelligent prediction of power transformation operation cost currently suffers from low accuracy. Therefore, an intelligent prediction method for power transformation operation cost based on multi-dimensional mixed information is designed. The method evaluates the fixed cost of the power grid, determines budget amounts for different budget periods, extracts the life cycle of substation equipment, establishes cost estimation relationships, uses multi-dimensional mixed information to build a cost control model, refines project categories, and optimizes the intelligent prediction mode according to the nature of each cost element. Test results show that the average prediction accuracies of the proposed method and two other intelligent prediction methods for substation operation cost are 79.357%, 71.066%, and 69.313%, respectively, indicating that using multi-dimensional mixed information makes the designed method markedly more effective.

Ying Wang, Xuemei Zhu, Ye Ke, Jing Yu, Yonghong Li
Design of Numerical Control Machining Simulation Teaching System Based on Mobile Terminal

Manufacturing industries worldwide make wide use of CNC technology to improve manufacturing capacity and adapt to a dynamic, competitive market. The research, development, promotion, and application of CNC products require a large number of highly qualified CNC professionals, so CNC teaching and training occupy a very important position. To improve the success rate of system requests, a numerical control machining simulation teaching system based on mobile terminals is designed. A PC combined with a motion control card, together with the internal oscillator of a PIC16F877 microcontroller, forms a complete oscillation circuit. 3D graphics technology is used to simulate and optimize the CNC machining process and to build a tool database whose parameters are passed to the simulation program; the cutting process is simplified to a one-dimensional Boolean operation along the line of sight, and the functions of the simulation teaching system are designed for the mobile terminal. Experimental results show that the designed system achieves a high request success rate, indicating good usability.
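
The "one-dimensional Boolean operation along the line of sight" is essentially a depth-buffer (dexel-style) material-removal test; a minimal sketch of that idea, with illustrative grid size and tool parameters (not the authors' code):

```python
import numpy as np

# Sketch of depth-buffer material removal: along each line of sight the
# workpiece is one depth value, and cutting is a 1-D Boolean subtraction --
# keep whichever surface is lower. Grid and tool values are illustrative.
nx, ny = 64, 64
stock = np.full((nx, ny), 10.0)      # workpiece top-surface height (mm)

def apply_tool(stock, cx, cy, radius, tip_z):
    """Subtract a flat-end tool at (cx, cy): material below tip_z survives."""
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    footprint = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    stock[footprint] = np.minimum(stock[footprint], tip_z)
    return stock

# Simulated pass: the tool sweeps across the stock at depth z = 7 mm
for cx in range(8, 56, 2):
    stock = apply_tool(stock, cx, 32, radius=4, tip_z=7.0)

print(stock.min(), stock.max())      # 7.0 inside the cut, 10.0 elsewhere
```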

Liang Song, Juan Song
Real Time Broadcasting Method of Sports Events Using Wireless Network Communication Technology

As running time increases, real-time broadcasting of sports events suffers reduced image quality caused by unbalanced network load. A real-time broadcasting method for sports events is therefore designed using wireless network communication technology. Audio and video decoding and encoding are split into two independent threads working concurrently, which allows the frame rate to reach the HD standard and stabilizes the encoding and decoding process. A GAN model is used to enhance frame-rate conversion. In inter-frame mode, integer transformation, quantization, reordering, and entropy coding are performed on the residual block to complete macroblock coding, and the result is stored or transmitted through the NAL layer. Wireless network communication technology distributes the channels across the interference space and balances the load on the relay network. On the viewer side, each received streaming-media data block is unpacked by parsing the RTP packets, the video data is decoded, and the video is played. Test results show that the method improves PSNR, reduces distortion of the video sequence, and keeps the output picture stable.
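
As background on the RTP packets mentioned above, here is a minimal parser for the fixed 12-byte RTP header defined in RFC 3550; it is illustrative only, not the paper's receiver code:

```python
import struct

# Sketch: parse the fixed 12-byte RTP header (RFC 3550) of a received
# streaming-media packet. Illustrative only, not the paper's receiver.
def parse_rtp_header(packet: bytes) -> dict:
    if len(packet) < 12:
        raise ValueError("too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # should be 2
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,     # e.g. a dynamic type for H.264
        "sequence": seq,               # used to detect loss/reordering
        "timestamp": ts,               # drives playout timing
        "ssrc": ssrc,
    }

hdr = parse_rtp_header(bytes([0x80, 0x60, 0x00, 0x01]) + b"\x00" * 8)
print(hdr["version"], hdr["payload_type"], hdr["sequence"])
```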

Xueqiu Tang, Yang Yang
Recommendation Method of Ideological and Political Mobile Teaching Resources Based on Deep Reinforcement Learning

To improve the quality of ideological and political education and achieve effective management of the mass of mobile teaching resources, this paper puts forward a recommendation method for ideological and political mobile teaching resources based on deep reinforcement learning. Building on deep reinforcement learning theory, a recommendation model for these teaching resources is constructed, and from it an effective recommendation method is derived.

Yonghua Wang
Design of Online Auxiliary Teaching System for Accounting Major Based on Mobile Terminal

Online teaching is now a common form of instruction, but existing online auxiliary teaching systems for accounting majors occupy a large amount of memory. To solve this problem, an online auxiliary teaching system for accounting majors based on mobile terminals is designed. In the hardware, the power supply is designed as independent blocks, and the external memory interface of the C6722B is configured. In the software, a database of students' classroom behavior is built, behavior attribute data is migrated, and the online teaching platform serves as the carrier for the teaching objectives of the accounting major, while the mobile terminal optimizes the system's data transmission function. Experimental results: the memory footprints of the system designed here and of two other online auxiliary teaching systems for accounting majors are 357.42 MB, 484.96 MB, and 486.99 MB, respectively, making the designed system the better choice.

Yanbin Tang
Intelligent Push Method of Human Resources Big Data Based on Wireless Social Network

Human resources big data covers a wide distribution range, large volumes, and many data types. Aiming at the low integration of raw human resources data, an intelligent push method for human resources big data based on wireless social networks is proposed. The human resources data is integrated, mined, and preprocessed in combination with the wireless social network to build an OLAP data warehouse; then a human resources recommendation algorithm combining the wireless social network with a latent semantic model is proposed, which mines job seekers' behavior and potential job characteristics to realize intelligent push and matching of human resources big data. Test results show that the proposed method achieves a significantly better recall rate than the traditional single latent semantic model and the deep forest algorithm, and effectively improves the integration degree and push efficiency of raw human resources data.
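
A minimal sketch of the latent-semantic idea behind such a recommender: factorize a job-seeker-by-job interaction matrix and rank unseen jobs by the reconstructed scores. The data is randomly generated and the factorization choice (truncated SVD) is an illustrative stand-in, not the authors' algorithm:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

# Sketch of a latent semantic model for job recommendation: factorize a
# (job seeker x job posting) interaction matrix and rank unseen postings
# by reconstructed affinity. Hypothetical data, not the authors' method.
rng = np.random.default_rng(0)
interactions = (rng.random((50, 30)) > 0.8).astype(float)  # clicks/applies

svd = TruncatedSVD(n_components=8, random_state=0)
user_factors = svd.fit_transform(interactions)     # latent seeker profiles
item_factors = svd.components_                     # latent job features
scores = user_factors @ item_factors               # reconstructed affinity

seeker = 0
unseen = np.where(interactions[seeker] == 0)[0]
top5 = unseen[np.argsort(scores[seeker, unseen])[::-1][:5]]
print("push these job ids to seeker 0:", top5)
```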

Xiaoyi Wen, Jin Li
Chapter 9. Deploying Azure SQL DB Hyperscale Using PowerShell

Over the previous five chapters, we demonstrated how to design a Hyperscale environment that satisfied a set of requirements and then deploy it using the Azure Portal. Although using the portal is a great way to learn about the various components of a Hyperscale database and how they fit together, it is strongly recommended to move to using infrastructure as code (IaC) to deploy your environments. In this chapter, we’ll look at using PowerShell and the Azure PowerShell modules to deploy the resources that make up the environment. We will deploy the same environment we have used in the previous five chapters, but instead, we’ll use only the Azure PowerShell module to create and configure the resources.

Zoran Barać, Daniel Scott-Raynsford
Low-Enthalpy Geothermal Applications

This chapter discusses two low-enthalpy geothermal applications in Perth, Western Australia. The first application pertains to using tepid groundwater for the municipal heating of Olympic-size outdoor swimming pools. The second application examines the viability of ground source heat pumps (GSHP) against air source heat pumps (ASHP). In the first application, the objective is to develop an accurate sizing methodology to improve the capital effectiveness for geothermal swimming pools. The predicted pool-water temperature and heating demands are compared against on-site measurements at a Leisure Centre. This model can replicate 71 and 73% of the measured heating capacity data within ±25 kW for the 30-m pool and ±35 kW for the 50-m pool, respectively. In the second application, we assess the feasibility of implementing a GSHP vis-à-vis an ASHP for domestic applications. For the second application, the GSHP has a constant coefficient of performance (COP) of 3.8 ± 6.7%, while that of ASHP ranges from 2.2 to 2.7 ± 6.5%. For cooling, the GSHP has a constant COP of 3.1 ± 13%, while that of ASHP varied between 1.4 and 2.4 ± 11.5%. When a GSHP is considered with a planned installation of a borehole for irrigation, the payback period ranges from near-immediate to four years.
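
To make the reported COP figures concrete, a small worked comparison of the electrical input needed for the same heating load; the COPs are the chapter's, while the 10 MWh annual load is an illustrative assumption:

```python
# Worked example: electricity needed to deliver the same heat load with
# the chapter's reported heating COPs (GSHP 3.8 vs. ASHP 2.2-2.7).
# COP = heat delivered / electrical input, so input = load / COP.
# The 10 MWh annual load is an illustrative assumption, not from the text.
annual_heat_load_kwh = 10_000

for name, cop in [("GSHP", 3.8), ("ASHP (low)", 2.2), ("ASHP (high)", 2.7)]:
    electricity = annual_heat_load_kwh / cop
    print(f"{name}: {electricity:,.0f} kWh of electricity")
```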

Tine Aprianti, Kandadai Srinivasan, Hui Tong Chua
Chapter 18. Migrating to Hyperscale

In the previous chapter, we looked at the types of workloads that would benefit from or require the use of the Azure SQL Database Hyperscale tier. Now that we’re familiar with when Hyperscale will be a good fit for you, we need to review the processes required to migrate an existing database into Hyperscale. This chapter will walk you through the different migration scenarios and identify the migration options available for each. Any database migration should be approached with care, so planning and testing are always recommended. We will run through some approaches that can be used to evaluate Hyperscale with our workload before performing a production migration.

Zoran Barać, Daniel Scott-Raynsford
Chapter 2. Azure SQL Hyperscale Architecture Concepts and Foundations

Over the years, many new features and improvements have been continuously developed as part of and alongside the traditional SQL database architecture. As cloud computing platforms began to take prominence in the technology sphere, a new form of cloud database architecture has progressively emerged. This new form of architecture incorporates new methodologies involving multitiered architecture. A key characteristic is decoupling compute nodes from the storage layer and the database log service. It aims to incorporate many of the pre-existing and newly evolving features into the new architectural paradigm. This allows for the best combination of performance, scalability, and optimal cost. In this chapter, we will examine in more depth the architectural differences with Hyperscale that enable it to provide improvements in performance, scale, and storage when compared with other tiers of Azure SQL Database. The multitier architecture lies at the heart of this architectural transformation, so we will spend the majority of our time on it in this chapter.

Zoran Barać, Daniel Scott-Raynsford
Chapter 3. Planning an Azure SQL DB Hyperscale Environment

Now that we’ve completed a short tour of the SQL on Azure landscape and taken a high-level look at how the Hyperscale architecture differs, it’s time to look at planning a production Azure SQL DB Hyperscale environment.

Zoran Barać, Daniel Scott-Raynsford
Chapter 5. Administering a Hyperscale Database in a Virtual Network in the Azure Portal

In Chapter 4, we deployed a logical server with a Hyperscale database and connected it to a virtual network using a private endpoint. The logical server’s public endpoint was disabled. In this chapter, we will list some of the common ways DBAs can connect to the Hyperscale database in a virtual network. We will also demonstrate a simple and secure approach to enabling management when another, more complex networking infrastructure connects your management machines to the virtual network.

Zoran Barać, Daniel Scott-Raynsford
The Role of Geothermal Heat Pump Systems in the Water–Energy Nexus

Unplanned rapid urbanization is considered one of the major drivers of change in cities across the world. It leads to an inadequate transformation of urban environments, affecting strategic energy and water management infrastructure and resulting in escalating energy demand and greater pressure on stormwater facilities. An estimated one third of total energy demand in the European Union (EU) is associated with air-conditioning in buildings, while conventional drainage systems have become unsustainable under the current scenario of climate change. In this context, the EU is encouraging the incorporation of Nature-Based Solutions (NBS) to promote resilient infrastructure and reduce instability. Sustainable Drainage Systems (SuDS) have been selected as key Stormwater Control Measures (SCM), contributing to a paradigm shift in urban water management. As the need for multifunctional spaces grows due to the scarcity of urban land, SuDS are increasingly becoming a potential asset to house renewable energy structures, helping to develop the water–energy nexus. This chapter therefore deals with the opportunities arising in this new research line combining surface geothermal energy systems and SuDS. Both laboratory and field experiences are analyzed, compiling the lessons learned, identifying the present knowledge gaps, and proposing future prospects for development, thereby paving the way for the effective combination of both technologies.

Carlos Rey Mahia, Felipe Pedro Álvarez Rabanal, Stephen J. Coupe, Luis Ángel Sañudo Fontaneda
Chapter 9. Connectivity

Connectivity, as a new element of the digital marketing mix, is discussed as one of its most influential elements. Addressability and findability features are defined and discussed in relation to connectivity through email marketing, domain name branding, and search engine marketing applications in the literature. Managerial issues are also addressed for the successful implementation of such connectivity features to generate better marketing value for all market agents in the digital world.

S. Umit Kucuk
Chapter 9. Drivers of Shareholder Value Creation in M&A: Event Study of the European Banking Sector in the Post-financial Crisis Era

The paper investigates the factors driving shareholder value creation following extraordinary financial transactions in the European banking sector after the 2007–2008 crisis. The study analyzes a sample of transactions between commercial banks announced between 2010 and 2020 to verify whether these acquisitions created value for the acquirers. Positive and statistically significant abnormal returns are found for the acquirers at the time of announcement. When identifying aspects of the target that influence the returns, the study tests whether the market, in a period of crisis, recognizes a premium if the target is “good.” The paper finds that the market rewards target companies with low NPL ratios, high levels of capitalization relative to the credit granted, and balanced exposure to interest rates.
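
The abnormal returns referred to above follow the standard event-study construction; under the common market-model specification (a textbook formulation shown for context, not necessarily the authors' exact estimator):

```latex
% Market-model abnormal return and cumulative abnormal return:
AR_{i,t} = R_{i,t} - \left(\hat{\alpha}_i + \hat{\beta}_i R_{m,t}\right),
\qquad
CAR_i(t_1, t_2) = \sum_{t = t_1}^{t_2} AR_{i,t}
```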

Gimede Gigante, Mario Baldacchini, Andrea Cerri
Chapter 10. Deploying Azure SQL DB Hyperscale Using Bash and Azure CLI

The previous chapter introduced the concept of infrastructure as code (IaC) as well as briefly describing the difference between implicit and declarative IaC models. The chapter also stepped through the process of deploying the example SQL Hyperscale environment using Azure PowerShell, as well as providing a closer look at some of the commands themselves. In this chapter, we are going to replicate the same process we used in the previous chapter, deploying an identical SQL Hyperscale environment that was defined in Chapter 4, except we’ll be using shell script (Bash specifically) and the Azure CLI instead of Azure PowerShell. We will break down the important Azure CLI commands required to deploy the environment as well as provide a complete Bash deployment script you can use for reference.

Zoran Barać, Daniel Scott-Raynsford
Chapter 12. Testing Hyperscale Database Performance Against Other Azure SQL Deployment Options

In the previous chapter, we talked about how to deploy the SQL Hyperscale environment using Azure Bicep. We explained a few of the key Azure Bicep resources required to deploy the SQL Hyperscale environment and supporting resources. In this chapter, we are going to do an overall performance comparison between the traditional Azure architecture and Hyperscale architecture. For this purpose, we are going to deploy three different Azure SQL database service tiers.

Zoran Barać, Daniel Scott-Raynsford
Chapter 5. Inclusive Language and Language Change in Companies

Language sensitivity is central today for every company that maintains, or wants to build, relationships with internal and external stakeholder groups. This succeeds through good dialogue that enables understanding, and through managing the multiplicity of voices (polyphony) within the organization. Language competence also means that communicators have a basic understanding of linguistic mechanisms, which is why this chapter explains the method of framing in language. The concept of inclusive communication presented here comprises the representation of diversity, freedom from discrimination, diversity-sensitive wording, gender equity, and comprehensibility or accessibility. It is internationally applicable because it does not focus on a single language but relies on comprehensive principles. These set the direction for accompanying an organization-specific language change in a company in a participatory way. This change succeeds through the four phases of the Corporate Language Change model, which have been applied successfully in consulting practice. The concluding look at automation to support diversity-sensitive language use shows that technology cannot replace a feel for language, but it can contribute to communicative inclusion.

Annika Schach
Chapter 39. Quality Management as a Driver for Securing Long-Term Competitive Advantage

In industry and academia, the term competitive advantage denotes a company's lead in the market over its competitors in ongoing competition. Competitive advantages cannot simply be copied by competitors and serve to secure a company's position in the long term by creating customer benefits, for example greater value through higher-quality products, better processes, innovations, better service, or, as a result, lower prices. A strategic competitive advantage means that a company can sell its customers products and services that have unique attributes which those customers value.

Marc Helmold
Chapter 1. Introduction

The introductory chapter is divided into several sections. After a brief clarification of the basic terms used, the user interface of Solid Edge 2023 is explained: each individual menu item, the available buttons, and the mouse-button assignments are presented one after another, together with their respective functions. As in every chapter, a short set of simple review questions concludes the chapter; these serve as a self-check on the material covered.

Michael Schabacker
Chapter 11. Extended Reality in Quality Management

Extended Reality (XR) enables a gradated, fully or partially virtual image of reality that can either closely resemble reality or deviate from it entirely. Thanks to new hardware, greater computing power, and high-bandwidth networks, the boundaries between reality and the virtual world are becoming increasingly fluid, opening up breathtaking new experiences that until recently were conceivable only in science fiction films. Extended Reality is also a key technology for the so-called metaverse, the third major generation of the Internet, also referred to as Web 3.0.

Jürgen Fritz
A Comparative Study and Analysis of Time Series Forecasting Techniques for Indian Summer Monsoon Rainfall (ISMR)

The importance of monsoon rains cannot be overlooked, as the monsoon affects activities all year round, from agriculture to industry. In water resource management and agriculture, accurate rainfall estimation and forecasting are extremely useful for making crucial decisions. This study applies several deep learning approaches, Multi-Layer Perceptron, Convolutional Neural Network, Long Short-Term Memory networks, and Wide Deep Neural Networks, to forecast the Indian summer monsoon rainfall (ISMR) (June–September) on seasonal and monthly time scales. For modeling, the ISMR time series is divided into two parts: (1) training data (1871–1960) and (2) testing data (1961–2016). Statistical analyses reveal ISMR's dynamic nature, which statistical and mathematical models cannot predict accurately. The study therefore provides a comparative analysis demonstrating the effectiveness of the various algorithms for forecasting ISMR, and weighs the results against established existing models.
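
A minimal Keras sketch of one of the listed approaches, an LSTM over sliding windows of the yearly rainfall series, using the paper's 1871–1960/1961–2016 split; the window length, layer sizes, and placeholder series are illustrative assumptions:

```python
import numpy as np
from tensorflow import keras

# Sketch: LSTM forecaster for a yearly ISMR series, split as in the paper
# (train 1871-1960, test 1961-2016). 'rainfall' is a random placeholder
# for the real series; window length and layer sizes are assumptions.
years = np.arange(1871, 2017)
rainfall = np.random.default_rng(0).normal(850.0, 80.0, years.size)  # mm

def windows(series, k=10):
    X = np.stack([series[i:i + k] for i in range(series.size - k)])
    return X[..., None], series[k:]        # (samples, k, 1 feature), targets

split = np.searchsorted(years, 1961)
X_tr, y_tr = windows(rainfall[:split])
X_te, y_te = windows(rainfall[split - 10:])   # keep a 10-year context

model = keras.Sequential([
    keras.layers.Input(shape=(10, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_tr, y_tr, epochs=50, verbose=0)
print("test MSE:", model.evaluate(X_te, y_te, verbose=0))
```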

Vikas Bajpai, Tanvin Kalra, Anukriti Bansal
Design of Electronic Communication Power Monitoring System Based on GPRS Technology

If the electronic communication power supply fails, the entire electronic communication system is paralyzed, causing abnormal operation and increased maintenance costs later on. Because existing monitoring systems for electronic communication power supplies suffer from slow transmission rates, a GPRS-based electronic communication power supply monitoring system is designed. In the hardware, the base station's communication resources are used for networking, and the AC input is sent to the rectifier module after power distribution. In the software, the type of monitored object is identified, electrical and non-electrical signals are converted into standard electrical signals, UDP is selected as the transmission protocol, a communication protocol is formulated on GPRS technology, and the software functions of the monitoring system are optimized. Experimental results: the average transmission rates of the system designed here and of two other electronic communication power supply monitoring systems are 63.712 kbps, 54.586 kbps, and 54.057 kbps, respectively, showing that integrating GPRS technology yields superior system performance.

Ying Liu, Fangyan Yang
Cloud Service-Based Online Self-learning Platform for College English Multimedia Courses

Because traditional online self-learning platforms for college English multimedia courses respond slowly and leave students dissatisfied, a cloud service-based online self-learning platform for college English multimedia courses is designed. In the hardware, the maximum frequency of the input signal is simulated, and complete power-on reset (POR) and power-down reset (PDR) circuits are designed. In the software, investment in multimedia network teaching is increased, the management structure of college English multimedia courses is improved, a network-based self-learning model is built with the Internet as the main carrier, and the platform's software functions are optimized using cloud services. Experimental results: the average response times of the platform presented here and of two other self-learning platforms are 8.464 s, 13.276 s, and 13.697 s, respectively, showing that full use of cloud service technology improves the platform's performance and student satisfaction.

Guiling Yang
Research on Export Trade Information Sharing Method Based on Social Network Data

Export trade refers to the trading activity of selling domestic products or processed products to overseas markets. Because export trade information is voluminous and storage resources are limited, resource utilization is low and information sharing is poor. This paper therefore proposes a method for sharing export trade information based on social network data. Through the distributed ledger technology of a blockchain platform, trade financing data can be accessed in real time and an information service mode established. Bayesian estimation is used for data fusion, social network data communication links are established to transmit information resources, and a federated learning algorithm maps the original data into a corresponding data sharing model to realize the sharing of export trade information. Test results show that the method improves the detection rate and shortens the running time, maximizing the utilization efficiency of shared information and achieving a better information sharing effect.

Guiling Yang
Separation Algorithm of Fixed Wing UAV Positioning Signal Based on AI

An unmanned aerial vehicle (UAV) is an aircraft remotely controlled by radio and is widely used in reconnaissance. During UAV operation, however, the positioning signal is easily disturbed by noise, leading to low separation accuracy and poor positioning for fixed-wing UAVs. A fixed-wing UAV positioning signal separation algorithm based on artificial intelligence is therefore proposed. A denoising algorithm for the fixed-wing UAV positioning signal is constructed from the collected feature information of the UAV, completing the denoising step, and the signal is then separated according to the positioning signal algorithm to reduce separation error. Experimental results show that the proposed algorithm can effectively separate the UAV positioning signal from noise and achieves high accuracy and good positioning even under severe multipath interference.

Zhihui Zou, Zihe Wei
A Prediction Model with Multi-Pattern Missing Data Imputation for Medical Dataset

Medical data is analyzed over and over again for disease diagnosis and treatment planning. Medical datasets usually contain missing data, which is often treated as error, and these missing values can lead to incorrect diagnostic results. Since collecting medical data is costly, time-consuming, and subject to various constraints, recovering missing data is an alternative to re-collecting it. This paper proposes a prediction model for missing data imputation in medical data, with experiments on several datasets that validate the model and establish the importance of imputation. A new method called the enhanced random forest regression predictor is proposed for missing data imputation and validated on three datasets, Wisconsin, Dermatology, and Breast Cancer, all downloaded from the UCI repository. Missing data is generated manually in the original data at rates from 1% to 15%. The proposed model predicts the missing values using the enhanced random forest regression predictor and is evaluated with various classifiers, where classification distinguishes normal from abnormal diagnoses and accuracy is reported. The predictor is compared with two imputation methods, KNN and MICE forest, and performs better than both; the evaluation compares classification accuracy on the original and the imputed datasets. Missing data is a serious problem in medical data that can bias downstream disease analysis, and the proposed enhanced prediction model offers a better way of imputing missing values and analyzing disease through classification.
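
A sketch of the general idea, random-forest regression used to impute masked entries, via scikit-learn's IterativeImputer; this is a stand-in for, not a reproduction of, the paper's "enhanced" predictor, and the data is synthetic:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

# Sketch: random-forest-based imputation of missing entries. The masking
# rate mirrors the paper's 1%-15% experiments; the features are synthetic
# placeholders for real medical attributes.
rng = np.random.default_rng(0)
X_true = rng.normal(size=(200, 8))
X = X_true.copy()
mask = rng.random(X.shape) < 0.10                # knock out 10% of entries
X[mask] = np.nan

imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=5, random_state=0,
)
X_imputed = imputer.fit_transform(X)
rmse = np.sqrt(np.mean((X_imputed[mask] - X_true[mask]) ** 2))
print(f"imputation RMSE on masked cells: {rmse:.3f}")
```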

K. Jegadeeswari, R. Ragunath, R. Rathipriya
Research on Equipment Management System of Smart Hospital Based on Data Visualization

Medical equipment is an important part of a smart hospital's assets and an important guarantee that clinical departments can deliver normal medical care. Strengthening medical equipment management, maximizing its benefit, and preventing loss and idleness have been widely valued by hospitals, yet hospitals face high equipment investment, difficult management, and low operation and maintenance efficiency. To improve the management of network equipment, security equipment, guidance equipment, and other electromechanical equipment in smart hospitals, an application method for a smart hospital equipment management system based on data visualization is proposed. For intelligent equipment management, visualization technology is used to monitor the operating status of each kind of equipment, and the management strategies for the equipment are optimized to improve operating efficiency. A supervision system for equipment operation is built to achieve efficient management of smart hospital equipment. Experiments confirm that the system offers complete management functions and strong operational stability, and that it is highly practical and reliable in actual use.

Yuanling Ma, Xiao Ma, Chengnan Pan, Runlin Li, Zhi Fang
Remote Tutoring System of Ideological and Political Course Based on Mobile Client

Existing distance tutoring systems cannot quickly find the resources a user is interested in among massive learning resources. This paper therefore presents the design of a distance tutoring system for ideological and political courses based on a mobile client. In the hardware, an FPGA+ARM framework is adopted, and the XGMII interface is connected to the reconciliation sublayer of the link layer to support continuous transmission of large data streams. In the software, the system's interactive structure is designed to link online learning with after-class study, and a mobile-client module structure is designed to meet students' needs for on-demand courseware, course information, and teacher information. A hybrid recommendation algorithm provides personalized course recommendation, retrieving courses from the course recommendation database and returning them to users. System tests show that the design reduces the maximum response time for user queries and processing requests, making it more practical.

Xiaopan Chen, Jianjun Tang
Modified K-Neighbor Outperforms Logistic Regression and Random Forest in Identifying Host Malware Across Limited Data Sets

Using probabilistic risk assessment and decision-making methodology, this study analyzes and manages deliberate attacks on Supervisory Control and Data Acquisition (SCADA) systems. An attacker can launch attacks anywhere in the world from a single location, and viruses and other dangerous executables tend to linger in a system and then spread copies to other systems on the network. One of the greatest challenges for security experts is detecting cyber-attacks and beginning recovery immediately, before malware that triggers at a set time spreads across the entire system and does significant harm. A compromised SCADA system can affect the functioning of functional blocks and measured parameters, change the operating conditions of installations, and cause abnormal starts, stops, and modifications of installed units as instructed by the attackers. In this study, after preprocessing the raw dataset, each sample is represented as a separate byte file; the byte files are used to train and test prediction models with statistical procedures, which can then detect malware in critical infrastructure systems. The focus is on finding malicious executables by both behavior and signature. Each model's conclusions are drawn from limited malware samples, yet these samples produce convincing results for previously unseen malware. The experiments reveal that, when few training samples are available for a given harmful file, modified K-neighbor outperforms Logistic Regression and Random Forest.
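
A minimal sketch of the byte-file representation plus a K-neighbor classifier; plain KNN stands in for the paper's modified variant, and the byte strings and labels are synthetic:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Sketch: represent each sample's byte file as a normalized 256-bin byte
# histogram and classify with KNN. Plain KNeighborsClassifier stands in
# for the paper's modified K-neighbor; the data here is synthetic.
def byte_histogram(raw: bytes) -> np.ndarray:
    counts = np.bincount(np.frombuffer(raw, dtype=np.uint8), minlength=256)
    return counts / max(len(raw), 1)             # normalize by file size

rng = np.random.default_rng(0)
benign = [rng.integers(0, 128, 4096, dtype=np.uint8).tobytes() for _ in range(20)]
malware = [rng.integers(64, 256, 4096, dtype=np.uint8).tobytes() for _ in range(20)]

X = np.array([byte_histogram(b) for b in benign + malware])
y = np.array([0] * 20 + [1] * 20)                # 0 = benign, 1 = malware

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
probe = rng.integers(64, 256, 4096, dtype=np.uint8).tobytes()
print(clf.predict([byte_histogram(probe)]))      # expect [1]
```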

Manish Kumar Rai, K. Haripriya, Priyanka Sharma
Research on Autonomous Learning Management Software Based on Mobile Terminal

Transmitting data from mobile terminals to the server edge involves a large amount of computation. Psychologically assisted autonomous learning management software provides users with learning resources and activities to stimulate and sustain learning motivation. Based on the software requirements, the overall development architecture is built on mobile terminal devices: the client responds to user operations and sends data requests to the web server, while the server uses module entities to reflect the system's database concepts, divides tasks into modules according to decision results, and reduces data transmission delay. The client comprises three interfaces, a system login module, an online learning module, and a support service module, each corresponding to different activities. Test results show that the mobile-terminal-based, psychologically assisted autonomous learning management software reduces CPU and memory usage and improves performance.

Han Yin, Qinglong Liao
Automated Energy Modeling Framework for Microcontroller-Based Edge Computing Nodes

When IoT-enabled applications use edge nodes rather than cloud servers, they aim to apply diligent energy-efficient mechanisms on the edge devices. Accordingly, frameworks that monitor and model microcontrollers, including Espressif-Processor-based (ESP) edge nodes, have drawn mainstream attention among researchers in the edge intelligence domain. Traditional approaches to measuring the energy consumption of edge nodes are either not online or overly complex. This article develops an Automated Energy Modeling Framework (AEM) for microcontroller-based edge nodes of IoT-enabled applications. The proposed approach baselines energy consumption values; models the energy consumption of components using a random forest (RF) algorithm; and automatically reports the energy consumption of edge nodes in real time, i.e., while IoT-enabled applications execute on them. Experiments validated the automated energy modeling approach on two applications using Espressif's ESP devices. The mechanism should benefit energy-conscious developers of IoT-enabled applications who aim to minimize the energy consumption of embedded edge nodes such as ESPs.

Emanuel Oscar Lange, Jiby Mariya Jose, Shajulin Benedict, Michael Gerndt
Designing a Secure E-Voting System Using Blockchain with Efficient Smart Contract and Consensus Mechanism

These days, many people are not satisfied with the outcomes of voting systems, because current voting systems are centralized and fully controlled by the election commission: the central body can be compromised or hacked and the final result tampered with. In this direction, a decentralized voting methodology based on blockchain and the Internet of Things (IoT) is devised and presented in this paper, with efficient smart contracts and a consensus mechanism applied to enhance security. Blockchain is a transparent, secure, and immutable technique, using concepts such as encryption, decryption, hash functions, consensus, and Merkle trees, which makes it an appropriate platform for storing and sharing data securely and anonymously. IoT contributes biometric sensors with which people can cast their votes in digital as well as physical mode, and a confirmation message is sent to the voter to authenticate the cast vote. In this way, the properties of both blockchain and IoT make the voting system more secure and trustworthy, adding value to the election process in democratic countries. The proposed method ensures security and also reduces computational time compared with existing approaches.
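
Of the blockchain primitives listed, the Merkle tree is easy to show concretely; a minimal sketch of computing a Merkle root over cast ballots (illustrative only, not the paper's smart-contract code):

```python
import hashlib

# Sketch: Merkle root over a list of ballots, the structure that lets a
# voter verify inclusion without revealing other votes. Illustrative
# only -- not the paper's smart-contract code.
def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if odd
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

ballots = [b"voter1:candidateA", b"voter2:candidateB", b"voter3:candidateA"]
print(merkle_root(ballots).hex())
```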

Durgesh Kumar, Rajendra Kumar Dwivedi
An Intelligent Behavior-Based System to Recognize and Detect the Malware Variants Based on Their Characteristics Using Machine Learning Techniques

An Intrusion Detection System's (IDS) primary goal is to safeguard users and their equipment against malware; an IDS offers more security than established techniques such as firewalls. Malware compromises the integrity, confidentiality, and availability of data by launching cyberattacks from a computer-based system. Computer crime has advanced considerably, and IDSs have grown tremendously to keep pace, with researchers working to increase the chances of detecting an attack while keeping systems and networks operational. In this research, we provide a novel approach to identifying and detecting malware programs based on their attributes and behavior, using machine learning techniques. The paper also suggests several techniques for analyzing malware behavior, including filtering useful system operations, defining the type of action, generating behavior profiles, and assessing risk score and frequency. The proposed method achieved its highest accuracy, 94%, with the Random Forest machine learning classifier when compared with other classifiers.

Vasudeva Pai, Abhishek S. Rao, Devidas, B. Prapthi
Prediction Method of Crack Depth of Concrete Building Components Based on Ultrasonic Signal

Cracks in concrete components seriously affect the safety of building structures: as service life increases, cracks reduce structural safety, and as they deepen, they shorten the service life of the structure, so predicting cracks in concrete components is essential. In crack prediction for concrete building components, predicted results deviate from actual values because of deviations in the measured strain of the concrete. A method for predicting the crack depth of concrete building components based on ultrasonic signals is therefore proposed. During ultrasonic transmission through concrete, stress concentration causes micro-cracks to multiply and extend; based on this phenomenon, a finite element simulation of the component is carried out to obtain a damage model. Crack characteristics are extracted from the ultrasonic signals, and the variation of acoustic frequency over time also reflects the stress state of the medium. The extracted crack signal features are fed into a CNN model for prediction and recognition. Test results show that the method improves the accuracy of the predictions and has high engineering application value.
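
A minimal sketch of the final step, a 1-D CNN regressing crack depth from an ultrasonic waveform; the synthetic signals and layer sizes are illustrative assumptions, since the paper trains on features from real specimens:

```python
import numpy as np
from tensorflow import keras

# Sketch: 1-D CNN that regresses crack depth from a raw ultrasonic
# waveform. Synthetic signals and layer sizes are illustrative only.
rng = np.random.default_rng(0)
n, length = 500, 1024
depths = rng.uniform(0.0, 50.0, n)                     # crack depth, mm
signals = rng.normal(0, 1, (n, length, 1)) * (1 + depths[:, None, None] / 50)

model = keras.Sequential([
    keras.layers.Input(shape=(length, 1)),
    keras.layers.Conv1D(16, 15, activation="relu"),
    keras.layers.MaxPooling1D(4),
    keras.layers.Conv1D(32, 7, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(1),                             # predicted depth (mm)
])
model.compile(optimizer="adam", loss="mae")
model.fit(signals, depths, epochs=5, validation_split=0.2, verbose=0)
```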

Kangyan Zeng, Yan Zheng, Jiayuan Xie, Caixia Zuo
Design of Energy Consumption Monitoring System for Group Building Construction Based on Mobile Node

Because the nodes in a data transmission network are generally fixed, missing or interfered nodes reduce the reliability of data transmission and degrade monitoring performance. Therefore, an energy consumption monitoring system for group building construction based on mobile nodes is designed. First, a system framework comprising an operation layer, a decision-making layer, and a management layer is designed. Second, the hardware structure is optimized according to the energy consumption data to be monitored for large building groups. Finally, wireless transmission via mobile nodes provides accurate collection and reliable transmission of construction energy consumption data, completing the system's monitoring function. Experiments prove that the system is highly practical in real applications.

Yan Zheng, E. Yang, Shuangping Cao, Kangyan Zeng
Construction of Mobile Education Platform for Entrepreneurial Courses of Economic Management Specialty Based on Cloud Computing

To solve the unbalanced scheduling of course resources on mobile education platforms as the number of users grows, a mobile education platform for entrepreneurial courses in the economic management specialty is constructed based on cloud computing. In the hardware, the XC6SLX16 FPGA chip is selected as the platform, and decoupling networks are designed for the different power inputs to eliminate noise on the power pins. In the software, scattered teaching materials are integrated according to the requirements of economic management courses to form a rich teaching resource base with unified management of users, roles, and organizations. To improve concurrency, the entrepreneurial course resource database is scheduled on cloud computing. Each functional module of the platform is designed so that entering keywords retrieves detailed related resources and supports mutual discussion. Test results show that the platform performs well, improves network throughput, and meets the design requirements.

Huishu Yuan, Xiang Zou
Pixel Attention Based Deep Neural Network for Chest CT Image Super Resolution

High-resolution chest CT scan images help diagnose lung-related diseases accurately. In general, the more advanced the hardware in a CT scanner, the higher the resolution of the generated images, but this is a costly approach. The limitation can be overcome by post-processing the images the CT machine generates: even when an image is upscaled, its quality should be retained. The process of reconstructing high-resolution images from low-resolution ones is known as image super-resolution (SR). Recent advances in hardware and in SR deep neural networks enable efficient reconstruction of high-resolution images, with the objective quality metric peak signal-to-noise ratio (PSNR) used to evaluate an SR model. This paper proposes MediSR, a pixel attention based deep neural network for chest CT medical image super-resolution. The model is trained on two chest CT datasets, and the experimental results show improvements of 1.78% and 18.23% for the 2× and 4× scale factors over the existing literature.
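
The pixel-attention idea can be sketched in a few lines: a 1×1 convolution with a sigmoid yields a per-pixel gate that rescales the feature map. This generic PA block is only assumed to resemble MediSR's; the actual architecture is defined in the paper:

```python
from tensorflow import keras

# Sketch of a generic pixel-attention (PA) block: a 1x1 convolution with
# a sigmoid produces a per-pixel, per-channel gate that rescales the
# incoming feature map. Assumed to resemble MediSR's attention block.
def pixel_attention(x, channels):
    gate = keras.layers.Conv2D(channels, 1, activation="sigmoid")(x)
    return keras.layers.Multiply()([x, gate])   # element-wise rescaling

inp = keras.Input(shape=(64, 64, 32))
out = pixel_attention(inp, 32)
model = keras.Model(inp, out)
model.summary()
```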

P. Rajeshwari, K. Shyamala
Building a Multi-class Prediction App for Malicious URLs

A malicious host URL points to a page housing a malicious snippet that can misuse a user's computing resources, steal confidential data, or carry out other attacks. Such URLs are distributed across the web under various categories such as spam, malware, and phishing. Although numerous detection methods have been developed in recent years, cyberattacks continue to occur. This study implements a three-tier system for detecting and protecting against harmful URLs. The first tier evaluates the performance of discriminative features in model creation; these features are derived from URL details and “Whois” webpage information, improving detection performance with low latency and low computational complexity. The influence of feature variation on parametric (neural network) and non-parametric classifiers is assessed to narrow down the most prominent features for the best multi-categorization model. The study reveals that non-parametric ensemble models such as LightGBM, XGBoost, and Random Forest performed well, with detection accuracy above 95%, enabling a real-time detection system that differentiates multiple attack types (such as malware, phishing, and spam). The second tier validates a URL against a global database to check whether it has already been reported as suspicious by various detection engines; if not, the user can update the global database with the new, unreported URL's details. Finally, the two modules are integrated into a web application built with Streamlit that provides full protection against malicious URLs.
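
A sketch of the first tier: lexical URL features fed to an ensemble classifier. Random Forest is used here for brevity, the feature set and training URLs are illustrative, and the "Whois" features are omitted:

```python
import re
from urllib.parse import urlparse
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch of tier one: lexical URL features -> multi-class ensemble model.
# Features, URLs, and labels are illustrative; the paper also uses
# "Whois" metadata and compares LightGBM/XGBoost/Random Forest.
def url_features(url: str) -> list[float]:
    parsed = urlparse(url)
    host = parsed.netloc
    return [
        len(url),                                   # overall length
        host.count("."),                            # subdomain depth
        url.count("-") + url.count("@"),            # suspicious punctuation
        float(bool(re.fullmatch(r"[\d.]+", host))), # raw-IP host
        float(parsed.scheme == "https"),
    ]

urls = ["https://example.com/home", "http://198.51.100.7/login-update",
        "http://free-prizes.example.net/win", "https://docs.example.org/a"]
labels = ["benign", "phishing", "spam", "benign"]   # hypothetical classes

X = np.array([url_features(u) for u in urls])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.predict([url_features("http://203.0.113.9/verify-account")]))
```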

Vijayaraj Sundaram, Shinu Abhi, Rashmi Agarwal
Performance Assessment of Machine Learning Techniques for Corn Yield Prediction

The agriculture industry has evolved tremendously over the past few years, facing numerous obstacles including climate change, pollution, and scarcity of land and resources. To overcome these hurdles and increase crop productivity, agricultural practice needs smarter technologies. Early crop yield prediction is a significant task in precision farming; the yield of any crop depends on many factors, including crop genotype, climatic conditions, soil properties, and fertilizers used. In this work, we propose a machine learning framework to predict corn yield in 46 districts of Uttar Pradesh, India's most populous state, over a period of 37 years. We combine weather, climate, soil, and corn yield data to help farmers predict the annual corn production of their district. We implement Linear Regression (LR), Decision Tree (DT) regression, Random Forest (RF) regression, and an ensemble bagging Extreme Gradient Boosting (XGBoost) model. Evaluating and comparing all models, we observe that the bagging XGBoost regression model outperforms the others with an accuracy of 93.8% and RMSE = 9.1.
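
A minimal sketch of the best-performing setup, an XGBoost regressor wrapped in scikit-learn bagging; the random feature columns are placeholders for the real weather/climate/soil inputs, and the hyperparameters are assumptions:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor

# Sketch: bagging ensemble of XGBoost regressors, mirroring the paper's
# best model. Random features stand in for the real weather/climate/soil
# predictors per district-year; hyperparameters are assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 12))                           # placeholder predictors
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(0, 0.5, 1500)  # placeholder yield

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = BaggingRegressor(
    estimator=XGBRegressor(n_estimators=200, max_depth=4),
    n_estimators=10, random_state=0,
).fit(X_tr, y_tr)

rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"RMSE = {rmse:.2f}")
```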

Purnima Awasthi, Sumita Mishra, Nishu Gupta
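
A minimal sketch of the winning model family named above, bagged XGBoost regressors evaluated with RMSE, on synthetic stand-in data (the real study combines weather, climate, and soil features for 46 districts).

import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(42)
# Illustrative stand-ins for weather/climate/soil features and yield.
X = rng.normal(size=(500, 6))
y = 40 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = BaggingRegressor(
    # On scikit-learn < 1.2 this parameter is named base_estimator.
    estimator=XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1),
    n_estimators=10,          # number of bagged XGBoost models
    random_state=0,
).fit(X_tr, y_tr)

rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"RMSE = {rmse:.2f}")
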
Intelligent Push Method of News and Information for Network Users Based on Big Data

To address the poor management of massive volumes of news and information, this paper proposes an intelligent, big-data-based method for pushing news to network users. Big data technology is used to identify users' news interest preferences, a classification-based recommendation algorithm for user news is built, and the steps of intelligent news push are simplified. The experimental results show that the proposed method is highly practical for pushing news to network users, fully meets the research requirements, and has certain application value.

Ting Chen, Zihui Jin
A Novel Weighted Visibility Graph Approach for Alcoholism Detection Through the Analysis of EEG Signals

Detection of neurological disorders such as Alzheimer's disease and epilepsy through electroencephalogram (EEG) signal analysis has become increasingly popular in recent years. Alcoholism is a severe brain disorder that not only affects the nervous system but also leads to behavioural issues. This work presents a weighted visibility graph (WVG) approach for the detection of alcoholism, which consists of three phases. The first phase maps the EEG signals to a WVG. The second phase extracts important network features, viz., modularity, average weighted degree, weighted clustering coefficient, and average degree; it further identifies the most significant channels and combines their discriminative features to form feature vectors. These feature vectors are then used to train different machine learning classifiers in the third phase, achieving 98.91% classification accuracy. The visibility graph (VG) is not only robust to noise but also inherits many dynamical properties of the EEG time series. Moreover, preserving weights on the links of the VG aids in detecting sudden changes in the EEG signal. Experimental analysis of the alcoholic EEG signals indicates that the average accuracy of the proposed approach is higher than or comparable to other reported studies.

Parnika N. Paranjape, Meera M. Dhabu, Parag S. Deshpande
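
The first phase above, mapping an EEG channel to a weighted visibility graph, can be sketched directly from the natural visibility criterion. The edge-weight definition below (absolute slope between the two samples) is one common choice and an assumption; the paper's exact weighting may differ.

import numpy as np
import networkx as nx

def weighted_visibility_graph(signal):
    """Map a 1-D time series to a weighted visibility graph (WVG).
    Samples a and b are connected if every intermediate sample lies
    below the straight line joining them (natural visibility)."""
    n = len(signal)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                signal[c] < signal[a] + (signal[b] - signal[a]) * (c - a) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                # Edge weight choices vary across papers; here we use the
                # absolute slope between the two samples (an assumption).
                weight = abs((signal[b] - signal[a]) / (b - a))
                g.add_edge(a, b, weight=weight)
    return g

# Toy EEG-like signal; real use would apply this per channel and epoch.
rng = np.random.default_rng(1)
g = weighted_visibility_graph(rng.normal(size=128))
print(g.number_of_nodes(), g.number_of_edges())
print(np.mean([d for _, d in g.degree()]))  # average degree feature
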
Blockchain-Aided Keyword Search over Encrypted Data in Cloud

Attribute-based keyword search (ABKS) has attracted significant attention for data privacy and fine-grained access control of outsourced cloud data. However, most existing ABKS schemes are designed for a semi-honest and curious cloud storage system, in which search fairness between the two parties becomes questionable. Hence, it is vital to build a protocol that provides mutual trust between the cloud and its users. This paper proposes a blockchain-aided keyword search over encrypted data, which achieves search fairness between the cloud and its users using the Ethereum blockchain and smart contracts. Additionally, the system accomplishes fine-grained access control, limiting access to the data to only those who have been given permission. Besides, the scheme allows multi-keyword search by the users. The security analysis shows that our scheme is indistinguishable against chosen-plaintext attacks and other malicious attacks. The performance analysis shows that the scheme is efficient.

Uma Sankararao Varri
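
The paper's pairing-based ABKS construction and its Ethereum smart contracts cannot be condensed into a few lines. Purely to illustrate the underlying idea of searching encrypted data without revealing keywords, here is a toy symmetric searchable index in which the server stores only HMAC tags of keywords; it provides none of the fairness, attribute-based access control, or blockchain guarantees of the actual scheme.

import hmac, hashlib

KEY = b"shared-secret-key"  # illustrative; real schemes derive per-user keys

def trapdoor(keyword: str) -> str:
    """Deterministic keyword tag; the server never sees the plaintext word."""
    return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

# Data owner builds an encrypted inverted index: tag -> document ids.
documents = {1: ["blockchain", "cloud"], 2: ["cloud", "privacy"]}
index = {}
for doc_id, words in documents.items():
    for w in words:
        index.setdefault(trapdoor(w), []).append(doc_id)

# A user searches by sending only the trapdoor of the query keyword.
print(index.get(trapdoor("cloud"), []))   # [1, 2]
print(index.get(trapdoor("voting"), []))  # []
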
3D Visualization Method of Folk Museum Collection Information Based on Virtual Reality

Aiming at the lack of real-time simulation rendering of folk museum collection information, a three-dimensional visualization method based on virtual reality is proposed. The exhibition space of the museum is divided according to the classification of the collection information. The museum is structured, the rendering scene for collection information is constructed using a parametric scene description language, and the data are managed uniformly by means of spatial projection. Based on the attribute values of the rendered scene, a three-dimensional visualization model of the folk museum collection information is established using virtual reality. A role agent communicates with the outside world rather than allowing direct access to the role, which strengthens data encapsulation and code reusability and thereby reduces response delay. The test results show that the method shortens the information response time and optimizes the real-time rendering process.

He Wang, Zi Yang
Design of Remote Video Surveillance System Based on Cloud Computing

An intelligent mobile-network remote video monitoring system is an essential technology in today's society. To ensure that such a system operates effectively, a design method based on cloud computing is proposed, and the system's hardware structure and software functions are optimized. Experiments show that the resulting cloud-based remote video surveillance system is highly practical in real applications and fully meets the research requirements.

Wei Zou, Zhitao Yu
Application of Abelian Mechanism on UFS-ACM for Risk Analysis

This paper focuses on the application of an Abelian-modulo mechanism to the Unix access control mechanism (ACM) to resolve the unordered, un-set-up, and uncertain state of the Unix File System (UFS). The purpose of the Abelian mechanism is to implement the UFS ACM at the right time and in the right way by applying the attributes of a Unix file system (read, write, execute). This Abelian (RWX) access control mechanism communicates and transforms every customer request to use devices, components, data, and applications over edge computing. It resolves protection issues by determining the client's response as well as the quality of service (QoS) to be invested in classification, normalization, and frequent-pattern mechanisms, deciding on the major components of read, write, and execute for accessing data and information anywhere on the globe through a UNIX server and web portal. Prevention is inversely proportional to the set of risks. Meta-attributes (attributes about attributes) support prediction of current and future security and risk patterns. Finally, this work covers a wide range of standardization, normalization, optimization, and fuzzy laws for risk assessment.

Padma Lochan Pradhan
Analyzing Fine-Tune Pre-trained Models for Detecting Cucumber Plant Growth

Deep learning (DL) models have been used extensively for applications such as image recognition, virtual chatbots, healthcare, and object detection. DL models are trained with huge amounts of data for better predictive ability, but collecting large datasets is difficult. Transfer learning with far fewer samples may therefore yield a better recognition rate. Various pre-trained models exist for transferring knowledge from one domain to another; as they are trained on a specific source domain, they need to be fine-tuned to the target domain to improve the detection rate. Therefore, this paper proposes six models based on VGG16, VGG19, Xception, InceptionV3, DenseNet201, and MobileNetV2, respectively. In the agriculture domain, monitoring the growth of plants is crucial: it may help in identifying issues early, such as nutrient deficiencies, diseases, weed infections, and damage by pests or insects. The proposed models are evaluated on a cucumber plant stage dataset. Findings show that the proposed model using VGG16 (P_VGG16) attains a maximum testing accuracy of 97.98%, improving the accuracy rate by 2% compared with plain VGG16. This work also compares the proposed models with their respective original state-of-the-art pre-trained models.

Pragya Hari, Maheshwari Prasad Singh
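
A minimal sketch of the fine-tuning pattern behind models such as P_VGG16: load ImageNet-pretrained VGG16 without its top, freeze the convolutional base, and train a new classification head for the growth stages. The head layers and the class count are assumptions, not the paper's exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # assumed number of cucumber growth stages

base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # freeze the ImageNet features; train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown
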
Chapter 9. Presentation and Reflection of the Findings

The 30 interviewees are described in Section 9.1 on the basis of the demographic data collected in the expert interviews. The results are then presented in a structured and clear manner in Sections 9.2 (Digitalization), 9.3 (Flexibilization), and 9.4 (Age(ing) Management). Depending on the findings, the data are presented using a variety of means, such as paraphrases, verbatim quotations, and diagrams (Kuckartz, 2016).

Daniela Dohmen
Information Technologies and Cultural Tourism—The Case of the Virtual Museums

The introduction of strategies that include information technology in the development of public cultural policies may strengthen processes of cultural democratization and boost tourism. The use and impact of information technology, both in facilitating cultural production and as an instrument for broadening cultural audiences, is a key piece in the amplification of cultural tourism. This article argues that introducing such strategies into public cultural policies enhances cultural tourism. Specifically, it seeks to demonstrate the use and impact of information technologies in promoting cultural production and in increasing cultural audiences, as well as their role as a key player in cultural democratization and as a contribution to more sustainable tourism.

Vitor Santos
Improving on the Markov-Switching Regression Model by the Use of an Adaptive Moving Average

Regime detection is vital for the effective operation of trading and investment strategies. However, the most popular means of doing this, the two-state Markov-switching regression model (MSR), is not an optimal solution, as two volatility states do not fully capture the complexity of the market. Past attempts to extend this model to a multi-state MSR have proved unstable, potentially expensive in terms of trading costs, and able only to divide the market into states with varying levels of volatility, which is not the only aspect of market dynamics relevant to trading. We demonstrate that it is possible and valuable to instead segment the market into more than two states not on the basis of volatility alone, but on a combined basis of volatility and trend, by combining the two-state MSR with an adaptive moving average. A realistic trading framework is used to demonstrate that using two selected states from the four thus generated leads to better trading performance than traditional benchmarks, including the two-state MSR. In addition, the proposed model could serve as a label generator for machine learning tasks used in predicting financial regimes ex ante.

Piotr Pomorski, Denise Gorse
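
A minimal sketch of the combination described above, assuming statsmodels for the two-state MSR and Kaufman's adaptive moving average as the trend filter; the regime labeling, thresholds, and synthetic data are illustrative, not the authors' trading framework.

import numpy as np
import pandas as pd
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

def kama(price, n=10, fast=2, slow=30):
    """Kaufman's Adaptive Moving Average, one common adaptive MA choice."""
    price = np.asarray(price, dtype=float)
    out = price.copy()
    for t in range(n, len(price)):
        change = abs(price[t] - price[t - n])
        volatility = np.sum(np.abs(np.diff(price[t - n:t + 1]))) or 1e-12
        er = change / volatility                                  # efficiency ratio
        sc = (er * (2 / (fast + 1) - 2 / (slow + 1)) + 2 / (slow + 1)) ** 2
        out[t] = out[t - 1] + sc * (price[t] - out[t - 1])
    return out

rng = np.random.default_rng(7)
returns = pd.Series(np.r_[rng.normal(0, 0.5, 300), rng.normal(0, 2.0, 300)])
price = 100 + returns.cumsum().to_numpy()

# Two-state MSR on returns separates low- and high-volatility periods.
res = MarkovRegression(returns, k_regimes=2, switching_variance=True).fit()
# Column 1 is taken as the high-volatility regime here; in practice, check
# the fitted regime variances to label the states.
high_vol = res.smoothed_marginal_probabilities[1] > 0.5

trend_up = price > kama(price)                # trend state from the adaptive MA
four_state = 2 * high_vol.to_numpy().astype(int) + trend_up.astype(int)  # 0..3
print(pd.Series(four_state).value_counts())
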
Chapter 2. The Progressivity and Transformative Role of Culture
Findings from Self-Governance in Yugoslavia Towards Life-Centred Development

Until the 1960s, the emphasis in Yugoslavia was mainly on economic policy and instruments in a narrower sense. Traditionally unprofitable activities then began to be captured in a fuller picture of development. Culture slowly became an integral part of social being, as politics and policy became social constructs (through self-governance socialism) rather than exclusively constructs of political elites. Cultural development represented one of the most critical dimensions of social development, integrated in the sense of total development. Historical analysis of cultural policy within sustainable development, and vice versa, provides a better understanding of the place and importance of culture, for example in Yugoslavia and Serbia. Critical discourse and content analysis, drawing on endogenous knowledge and on culturally and environmentally driven factors, helps capture the contributions of culture, to a greater or lesser extent, to the paradigm of sustainability in the history of Serbia and its Yugoslav heritage. Historical conclusions are fundamental grounding for future assistance in resolving environmental, social, cultural, and economic issues and challenges resulting from economic policy trends and pressures. Economic history is not frozen; it is still being written, and it depends heavily on the present: What is the relationship between culture, development, and sustainability seen through decolonized lenses? How and why is it still relevant to apply commonly inherited aspects of knowledge about Yugoslav self-governance in the contemporary context? What is meant by life-centered development, and what is to be done for the future? Why is it essential to start from the decolonization of knowledge and epistemic erasures towards imagining future integrative cultural and environmental policies?

Milica Kočović De Santo
Chapter 2. Ludus Thronis: De novem orbis miraculis—The Wonders of the Ancient World in George R. R. Martin’s A Song of Ice and Fire

In George R. R. Martin's series of novels, A Song of Ice and Fire, and in its adaptation for TV as Game of Thrones, we can find multiple historical, artistic, geographical and literary references. They take us back to our own sense as a civilization. In this context, it is possible to understand how the appropriation of elements of Antiquity can give depth to a discourse, whether written or visual. Such an appropriation gives plausibility and a familiar scenario to new worlds and stories. In this way, we make them our own more easily. In Martin's world, we can see the creation of a list of wonders as such by a scholar and traveler named Lomas Longstrider. In his book, Wonders Made by Man, he collects nine wonders of the world, emulating the Seven Wonders of the Ancient World as a new Antipater of Sidon or Philo of Byzantium. Thus, we discover the Great Pyramid of Meereen, the Titan of Braavos, the Lighthouse of Oldtown or the gardens and walls of Qarth. And we see other scenarios that draw directly from the classical sources and their cultural transmission throughout history.

Ainhoa De Miguel Irureta, Juan Ramón Carbó García
Chapter 9. Visualization of the Urban Thermal Environment Using Thermography

In addition to the physical cityscape, a unique thermal environment is present in every city that is not directly visible but exerts considerable influence on the daily lives of urban residents. The spatial and temporal distributions of surface temperatures across the urban environment are related to the spatial form and constituent materials of the city and can be visualized using thermal infrared cameras to observe and investigate the resulting urban thermal environment.

Akira Hoyano, Hiroki Takahashi
Caves in Plitvice Lakes

In the karst terrains to which the Plitvice Lakes National Park belongs, there are two sides: the face and the reverse. Tectonics, carbonate dissolution, and gravitational processes have created a variety of aboveground and subterranean forms. The underground is full of smaller and larger empty spaces, the majority of which are inaccessible to humans. Caves are natural entrances to the underground that allow us to explore it. For the inhabitants of karst areas, the movement of water from the surface to the subsurface, underground water reservoirs and groundwater levels are important factors in living conditions. Also, caves have been interesting to people since prehistoric times, mostly as places of safe stay. Interest in caves continues to this day, as they represent fascinating places of perpetual darkness, absence of flora and hard-to-see fauna of unusual beauty. Viewed from a biological perspective, subterranean environments form a whole range of habitats that are inhabited by organisms that are associated with them in varying degrees of adaptation. Research, knowledge and monitoring of the underground are carried out in karst areas worldwide, and special attention should be paid to them in protected areas.

Kazimir Miculinić, Tvrtko Dražina, Nikola Markić, Neven Bočić
Chapter 12. Land, Knowledge, Strategies

This first chapter in Part III begins an attempt to draw conclusions about farmers' social networks in terms of household power to influence and access livelihood assets through social relations. Social network power is linked to household land development beliefs and behaviors, comprising knowledge, involvement, and perception of influence on land development. Social networks are then linked to livelihood strategies. The relationship between organizational power (here, the perceived ability to influence land use planning and development) and livelihood strategies was in one sense clear: regardless of strong or weak social network power, the Yamuna Khadir community perceived no or minimal influence (n = 98; 88%), and they were planning informal livelihood strategies or, rather, not planning them at all. What did emerge was a distinction between households reporting detailed versus vague livelihood strategies, and between those planning to stay in Delhi and those planning to return to their homeland. Households with strong social networks were significantly more likely to report detailed land use knowledge, general knowledge, and perceived ability to influence land use planning and development. Of the 14 (12%) households who believed they had an influence on land use development, more than half (n = 8) gave detailed livelihood strategies. And households with detailed livelihood strategies were statistically three times as likely to believe they had an influence on land use planning and development.

Jessica Ann Diehl
Chapter 10. Rent or Own? Landlords as a Social Network Collective

This seventh chapter in Part II summarizes findings from the interviews with Delhi farmers, triangulated with other supporting evidence. It describes social networks and access to resources organized by relational collectives: the different types of people farmers might interact with as part of their livelihoods. In this chapter, landlord relations are described and explored. Landlords represented the social group with the greatest bridging potential for tenant farmers. Despite variability in the socioeconomic status of landowners (some were more advantaged than others), owning land gave an individual greater leverage with the government and developers. In this study, landlords provided access to livelihood assets in the form of human, financial, physical, and social capitals. Power relations with landlords are contextualized against larger social-political contexts, and an attempt is made to quantify them. This chapter includes a conversation with a wealthy landowner, four years after the original fieldwork, and explores the continued contestation of the land on the Yamuna River Floodplain between the city government and landowners and occupants. There is also a summary of interviews with ten farmers who identified themselves as landowners, illustrating the range of resilience to precarity within this group. Themes of tenure, title, and identification are explored to call attention to the ambiguity and trade-offs in land security. In terms of characterizing the landlord social network, households could: invest human and social capital through interactions with the landlord; withdraw human, physical, social, and financial capital when the landlord made improvements, helped with loans or in other ways, gave advice or taught, did not charge rent, offered small jobs, or was involved in land use discussion or a court case; and exchange human, physical, social, and financial capital through a positive relationship, loans, paying or compensating for the land, and interacting with the landlord frequently. One-third of households interviewed (n = 43; 36%) were evaluated as having strong landlord network power: they were able to influence and access various livelihood assets through their landlords. Conversely, two-thirds (n = 75; 64%) had weak landlord network power.

Jessica Ann Diehl
Future of E-commerce by Implementing Blockchain Payments System

Blockchain technology is presently expanding into different sectors, including information technology, banking and finance, currency, healthcare, property records, voting, and communications technology, and it plays an especially important part in e-commerce. Our proposed idea addresses four problems: faster processing of funds, ensuring user privacy and user control, providing a smooth flow between the various modules, and an optimized database module to manage and easily serve data to the front end. It also keeps records of what is shared with whom, when, and why, without involving any third party. When paired with data encryption keys, blockchain offers transparent, tamper-proof, and secure platforms that enable creative solutions. In this paper we show how e-commerce can be enabled using blockchain, addressing the drawbacks a customer currently faces while ordering online. This helps small and medium enterprises operate with the confidence that no data or private details will be retrieved or leaked.

G. Nagarajan, Naman Jain, R. Naman Rathore
Hand Gesture Recognition for Human-Computer Interaction Using Computer Vision

We use gestures to communicate with friends, family, and colleagues every day. Gestures have always been a natural and intuitive form of interaction and communication, from waving our hands across the hallway to signal someone to keep the elevator doors open to greeting friends from afar. Gestures are the universally understood extension of our body language and comprise a core part of everyday interaction. When we apply this idea to vision-based interaction with computers, we get hand gesture recognition for human-computer interaction using computer vision. We perform a predefined set of gestures in front of a camera and assign individual actions to them; the computer can then recognize these gestures and carry out the appropriate actions. We use this method to develop a way to interact with computers with little to no contact. The advent of computer vision and machine learning data libraries makes this dream of contactless interactive technology possible. In light of the pandemic, we wish to create an AI virtual computer mouse: we recognize predefined gestures with a camera and assign to them the actions a mouse commonly performs. Since the limitations of a physical mouse no longer bind us, we can redefine the conventions of operating a computer with a new set of convenient and easy-to-grasp rules. We are starting by developing a virtual gesture-based volume controller. We can perform these feats thanks to the marvelous work of the academics and researchers who came before us. The main goal is to use the advantages of computer vision and machine learning to combat the pandemic's challenges and push forward the future of interactive technology. Thus, we get human-computer interaction using hand recognition technology.

Kavin Chandar Arthanari Eswaran, Akshat Prakash Srivastava, M. Gayathri
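
A minimal sketch of the gesture-based volume controller described above, assuming the MediaPipe Hands solution and a webcam: the pinch distance between thumb tip and index fingertip (landmarks 4 and 8) is mapped to a volume level. The mapping constants are assumptions, and the print call stands in for an OS-specific volume API.

import math
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        thumb, index = lm[4], lm[8]   # thumb tip and index fingertip
        dist = math.hypot(index.x - thumb.x, index.y - thumb.y)
        # Map the pinch distance (~0.02-0.30 in normalized coords) to 0-100%.
        volume = max(0, min(100, int((dist - 0.02) / 0.28 * 100)))
        print(f"volume ~ {volume}%")  # replace with an OS volume API call
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
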
Sensors Based Advanced Bluetooth Pulse Oximeter System

An Arduino-based, Bluetooth-equipped pulse oximeter is a measurement device that uses near-infrared spectroscopy to measure blood oxygen saturation, and is designed with the HC-05 Bluetooth module. It can be used through a smart mobile application or hardware. The oximeter uses an I2C 16×2 LCD module driven by an I2C-to-parallel converter chip, which turns the I2C data into the parallel data the LCD display requires. The portable terminal uses a digital algorithm to determine the oxygen saturation value and the pulse rate, and exposes them through the smart mobile app interface. The designed oximeter can help doctors keep a periodic check on a patient's pulse and SpO2 level from anywhere in the hospital via their mobile phones, which is especially helpful for keeping doctors and nurses distant from patients during a pandemic. The paper presents a novel model of an Arduino-based Bluetooth pulse oximeter using sensors and a Bluetooth module, with its applications in various sectors.

Jaspinder Kaur, Ajay Kumar Sharma, Divya Punia
Saral Anuyojan: An Interactive Querying Interface for EHR

Maintaining a lifelong medical record is impossible without proper standards. For an individual, different records from different sources must be brought together meaningfully for them to be of use. To achieve this, we need a set of pre-defined standards for information capture, storage, retrieval, exchange, and analytics. Electronic health records (EHRs) have been found to enhance the quality and safety of care while improving the management of health information and clinical data. Yet while electronic health records have so much potential, they are difficult to use: interacting with the EHR database requires queries written in AQL, and writing AQL queries is a complex and tedious task. An interface is needed that can speed up the querying process, thereby enhancing efficiency. Considering the importance of electronic health records and the difficulty of using them, we design Saral Anuyojan, a system consisting of a user interface, a query translator, and an interface manager. The user interface takes input from the user, and the query translator converts it into AQL queries for further processing. The AQL query is then sent to the backend (EHRbase), which stores data in a standard format, and the output is returned as a visual interpretation on the user interface. The requirements of clinicians and patients are limited (view and update), so they can be implemented without much complexity. Since the EHR has a complex structure that is difficult for a non-technical user, our approach resolves this problem with an easy user interface, bypassing the long and complex process of learning AQL. The proposed system Saral Anuyojan helps improve the management and efficiency of the healthcare sector.

Kanika Soni, Shelly Sachdeva, Arpit Goyal, Aryan Gupta, Divyanshu Bose, Subhash Bhalla
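
Saral Anuyojan's query translator is not public; the sketch below only illustrates the final step it automates, posting a generated AQL query to an EHRbase backend over the openEHR REST API. The base URL is a placeholder, and the archetype paths follow the public blood_pressure.v2 archetype as an illustrative example.

import requests

EHRBASE = "http://localhost:8080/ehrbase/rest/openehr/v1"  # placeholder base URL

# An AQL query the interface might generate from a clinician's form input.
aql = """
SELECT c/uid/value AS composition,
       o/data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude AS systolic
FROM EHR e
  CONTAINS COMPOSITION c
  CONTAINS OBSERVATION o [openEHR-EHR-OBSERVATION.blood_pressure.v2]
LIMIT 10
"""

resp = requests.post(f"{EHRBASE}/query/aql", json={"q": aql}, timeout=10)
resp.raise_for_status()
for row in resp.json().get("rows", []):
    print(row)
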
Chapter 3. Multilaterals Leading the Innovation Path

Over the last century, significant improvements have been seen in the financing of infrastructure projects the world over. Public services that were traditionally provided by government have gradually come to be provided by the private sector as well, with the government taking on the role of enabler rather than provider of services to users. Along the way, developing countries have pushed structural and institutional reforms to meet the loan disbursement conditions of multilateral agencies and to attract private sector investment into infrastructure. NGOs exerted sufficient pressure on governments and multilateral development banks (MDBs) to adopt sustainable policies and approaches alongside the development goals. The four main funding sources for MDBs are (1) paid-in or subscribed capital, (2) callable capital, (3) retained earnings and accumulated reserves, and (4) preferred creditor status (PCS). The instruments offered by MDBs include loans, grants, credit lines, technical assistance (TA), guarantees, and equity. A significant share of MDB loans is concentrated in China, India, Indonesia, the Philippines, and Pakistan. The range of instruments that MDBs offer is quite wide, and their sophistication and customization have increased substantially in response to the needs of the developing member countries (DMCs). The landscape of innovative financing is gradually shifting from simple resource mobilization to results-based and outcome-based financing mechanisms. The time taken by MDBs to process loans remains a matter of concern.

Raghu Dharmapuri Tirumala, Piyush Tiwari
Chapter 4. Exponential Growth of Sustainable Debt: Green Bonds Surge

A significant amount of investment is required to meet the targets under the SDGs, and the internal financial resources of a country are not adequate to meet these requirements. Green bonds are one method of raising finance for full or partial capital expenditure on green projects. Over roughly the last 8–10 years, many institutions have started issuing green bonds, and issuance has grown exponentially: by 2015 it had increased more than fourfold from its 2013 level. In 2021, the USA, China, Germany, France, and the UK were the leading countries issuing green bonds, and several emerging economies such as China, India, Poland, and Hungary also issued green bonds during 2020–21. MDBs were the frontrunners in devising innovative ways of generating financial resources for addressing climate, environment, and sustainability challenges. Third-party or independent certifications are sought after because they give better credence to issuers' intentions and assure investors that their funds have been rightly deployed. Many countries, regions, and institutions have developed their own taxonomies defining green bonds and the process to be adopted for them. Exchanges help investors invest in green bonds and other climate solutions and can act as platforms for developing indices that accelerate the market. The current application of green bond proceeds is restricted to a few sectors, such as renewable energy, buildings, and transport. A green taxonomy will help raise investors' inclination towards, and understanding of, how green bonds work; it will also address the issue of greenwashing and increase investors' trust in green bonds. The taxonomy should also be flexible enough to accommodate the preferences of different investors.

Raghu Dharmapuri Tirumala, Piyush Tiwari
Chapter 11. Liquids, Solids, and Intermolecular Forces

In much of this chapter, we focus largely, but not exclusively, on water and the forces that permit liquids to be in the liquid state instead of other states. Understanding this chemistry is useful as the world struggles to meet its need for water in agriculture, industry, and household uses in the face of an ever-increasing population. We then apply our understanding of forces to a survey of solids, especially crystals and metals, looking at their structure and function.

Michael Mosher, Paul Kelter
Chapter 14. Chemical Equilibrium

Chemical equilibrium is the point in all chemical reactions at which there is no net change in the concentration of reactants or products. Chemical equilibrium is a dynamic process, during which both the forward and reverse reaction continue, though at rates that maintain equilibrium. We will learn to determine the concentration of reactants and products at equilibrium and, a most practical idea, we will learn to control the position of chemical equilibrium for a reaction by changing its pressure, temperature and/or concentration. A catalyst does not affect the equilibrium position. Rather, it changes the reaction mechanism, substantially increasing the speed of the reaction.

Michael Mosher, Paul Kelter
Developing Supply Chain Risk Management Strategies by Using Counterfactual Explanation

Supply Chain Risk Management (SCRM) is necessary for economic development and the well-being of society. Therefore, many researchers and practitioners focus on developing new methods to identify, assess, mitigate, and monitor supply chain risks. This paper develops the Risk Management by Counterfactual Explanation (RMCE) framework to manage risks in Supply Chain Networks (SCNs). The RMCE framework focuses on monitoring the SCN and, in case any risks eventuate, explains them to the user and recommends mitigation strategies to avoid them proactively. RMCE uses optimisation models to design the SCN and Counterfactual Explanation (CE) to generate mitigation recommendations. The developed approach is applied to an actual case study of a global SCN to test and validate the proposed framework. The final results show that the RMCE framework can correctly predict risks and give understandable explanations and solutions to mitigate the impact of the monitored risks on the case study.

Amir Hossein Ordibazar, Omar Hussain, Ripon K. Chakrabortty, Morteza Saberi, Elnaz Irannezhad
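
The RMCE implementation is not included with the abstract; as a rough illustration of the counterfactual explanation step, this sketch applies the open-source dice-ml library to a toy risk classifier. The column names, the tiny dataset, and the use of dice-ml in place of the authors' own CE generator are all assumptions.

import pandas as pd
import dice_ml
from sklearn.ensemble import RandomForestClassifier

# Toy supply chain records: lead time (days), supplier reliability, risk label.
df = pd.DataFrame({
    "lead_time": [5, 30, 12, 45, 8, 40, 10, 35],
    "reliability": [0.9, 0.4, 0.8, 0.3, 0.95, 0.5, 0.85, 0.35],
    "risk": [0, 1, 0, 1, 0, 1, 0, 1],
})
model = RandomForestClassifier(random_state=0).fit(
    df[["lead_time", "reliability"]], df["risk"]
)

data = dice_ml.Data(dataframe=df,
                    continuous_features=["lead_time", "reliability"],
                    outcome_name="risk")
m = dice_ml.Model(model=model, backend="sklearn")
explainer = dice_ml.Dice(data, m, method="random")

# "What minimal changes would move this risky order to the low-risk class?"
query = df[["lead_time", "reliability"]].iloc[[1]]
cfs = explainer.generate_counterfactuals(query, total_CFs=2, desired_class="opposite")
cfs.visualize_as_dataframe()
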
EXOGEM: Extending OpenAPI Generator for Monitoring of RESTful APIs

The creation of adaptive and reconfigurable Service Oriented Architectures (SOA) must take into account the unpredictability of the Internet and of potentially buggy software, and thus requires monitoring subsystems that detect degradations and failures as soon as possible. In this paper we propose EXOGEM, a novel and lightweight monitoring framework for REpresentational State Transfer (REST) Application Programming Interfaces (APIs). EXOGEM is an extension to the mainstream code generator OpenAPI Generator, and it allows a monitoring subsystem to be created for generated APIs with limited changes to the usual API development workflow. We showcase the approach on a smart grid testbed, where EXOGEM monitors the interaction of a heat pump with a system that optimizes its operation. Our measurements estimate EXOGEM's overhead to be comparable to that of HTTPS when the server is not flooded with requests. Moreover, in one experiment EXOGEM was used to identify high load and to activate computational elasticity. Together, this suggests that EXOGEM can be a useful monitoring framework for real-life systems and services.

Daniel Friis Holtebo, Jannik Lucas Sommer, Magnus Mølgaard Lund, Alessandro Tibo, Junior Dongo, Michele Albano
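
EXOGEM hooks into code emitted by OpenAPI Generator, which is not reproduced here; the sketch below only shows the general shape of such a monitoring subsystem, timing every request of a hypothetical Flask endpoint and flagging slow responses. The endpoint, threshold, and logging target are assumptions.

import time
from flask import Flask, g, request

app = Flask(__name__)
LATENCY_THRESHOLD = 0.5  # seconds; illustrative degradation threshold

@app.before_request
def start_timer():
    g.start = time.perf_counter()

@app.after_request
def record_latency(response):
    elapsed = time.perf_counter() - g.start
    # A real monitor would export this to a metrics backend instead of logging.
    if elapsed > LATENCY_THRESHOLD:
        app.logger.warning("slow call: %s %s took %.3fs",
                           request.method, request.path, elapsed)
    return response

@app.get("/heatpump/status")  # hypothetical endpoint like the smart grid testbed's
def status():
    return {"mode": "heating", "power_kw": 1.2}

if __name__ == "__main__":
    app.run(port=5000)
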
Chapter 2. The Westernmost Tethyan Margins in the Rif Belt (Morocco), A Review

The Rif belt is the westernmost segment of the Maghrebides and the southern branch of the Gibraltar Arc connecting North Africa to Iberia. The Rif belt formed coevally with the Betic Cordilleras (northern branch of the Arc) during the Cenozoic due to the Africa-Eurasia convergence associated with the subduction of the westernmost Tethys lithosphere of the Ligurian-Maghrebian basin. In this work, we describe the remnants of the margins of the latter basin as exposed in the Rif belt. The External Zones of the belt expose remnants of the Jurassic southern Ocean-Continent Transition (OCT) of the Maghrebian Tethys and a Triassic volcanic-rich segment of the NW African passive margin. These consist, respectively, of serpentinite and gabbro slivers included in the accretionary prism derived from the inversion of the African passive margin. The northern margin of the Maghrebian Ocean is classically represented by the Dorsale Calcaire and Predorsalian Triassic-Paleogene units at the external border of the Internal Zones (Alboran Domain). The latter mainly consists of two complexes of basement nappes, from top to bottom, the Ghomarides (Malaguides in Spain) and the Sebtides (Alpujarrides in Spain). The Dorsale sedimentary units are transitional between the Ghomarides-Malaguides coeval sequences and the Maghrebian Flyschs deposits. They likely detached from the Sebtides-Alpujarrides thinned crust domain. Marbles of probable Triassic age overlie the granulite (kinzigite) envelope of the Beni Bousera peridotites included in the Lower Sebtides units. Thus, the mantle of the Sebtides-Alpujarrides domain would have been exhumed close to the surface as early as the Triassic during the incipient formation of a Jurassic magma-poor margin bordering the Maghrebian Tethys to the north.

André Michard, Ahmed Chalouan, Aboubaker Farah, Omar Saddiqi
Chapter 7. Ordovician–Upper Silurian–Triassic Petroleum System Assessment in the Chotts Area

The Upper Silurian Fegaguira Formation is thought to be an active source rock in the Chotts Basin of southern Tunisia and to have probably supplied the Ordovician and Triassic clastic reservoirs in the area. However, debate continues regarding both the source rock distribution and thermal maturity and the reservoirs' viability and extent. Within this scope, this study focussed on the evaluation and characterization of the Ordovician and Triassic reservoirs through the integration of well logging data from twenty wells drilled in the southern Chotts Basin, aiming to better delineate prolific levels. Furthermore, 1D BasinMod modelling was carried out to reconstruct the Fegaguira source rock's burial and thermal histories and to estimate its hydrocarbon generation and expulsion potential. The El Atchane and Hamra Ordovician reservoirs generally bear low to fair petrophysical characteristics and mostly fall within nearly tight reservoirs. Paleozoic orogenic phases, especially the Hercynian phase, had a major impact on their lateral distribution. The Triassic TAGI (Trias argilo-gréseux inférieur) reservoir, with good porosity, changes to volcanic material to the west, which is believed to have been emplaced through faulting during the Tethyan rifting. Strikingly, this volcanic material also bears good porosities, which could have been enhanced through fracturing and diagenetic processes. The 1D basin modelling shows that the Fegaguira source rock is mature in the studied wells and began hydrocarbon generation during the Early Cretaceous; hydrocarbon expulsion at a SATEX of 10% took place from the Paleogene at the earliest. This initial evaluation of the Ordovician and Triassic (TAGI) reservoirs, combined with the Fegaguira burial and thermal history modelling, points to a functioning petroleum system in the Chotts area, with more targets to be discovered further west. This study anticipates possible additional plays in an area that needs to be further and thoroughly explored for petroleum accumulations.

S. Kraouia, A. Ben Salem, M. Saidi, K. El Asmi, A. Mabrouk El Asmi
Fast Curing Biobased Epoxy Hardener for RTM Applications

Efficient lightweight solutions are becoming increasingly important in the automotive industry. As the trend towards sustainable electric drives continues, the CO2 impact of car components is becoming more important to the overall aim of CO2-neutral mobility by 2050. So far, almost exclusively carbon fibre reinforced plastics have been used in automotive lightweight construction. Natural fibres offer an ecological alternative for non- or semi-structural car body parts. They exhibit lower stiffness and strength than carbon fibres, but their mechanical properties are sufficient for many car body applications. Due to their naturally grown structure, natural fibres dampen sound and vibrations better, their lower tendency to splinter can help reduce the risk of injury in the event of an accident, and they do not cause skin irritation during processing. The overall aim of the project is the development of a sustainable biosourced natural fibre reinforced epoxide for a car door. The research approach is a fast-curing bio-sourced epoxy system for RTM (Resin Transfer Moulding) applications, with a glass transition temperature above 100 °C for the natural fibre reinforced composite. Due to its chemical structure, the bio-based epoxy resin shows tough elastic behaviour, which could offer advantages in crash tests. The kinetics of the bio-sourced resin are considered in relation to the chemistry used and to the whole production process, and data from dynamic thermomechanical analysis of the material are presented.

Stefan Friebel, Ole Hansen, Jens Lüttke
Chapter 7. Water Pollution Detection System for Illegal Toxic Waste Dumps

Nowadays, the number of contaminated rivers in Malaysia is increasing due to illegal toxic waste dumping; water pollution cases have risen in rivers in states such as Johor and Selangor. This paper aims to detect pollution in real time so the authorities can act quickly to prevent widespread pollution and contamination. This work's significance stems from its ability to wirelessly monitor real-time data, detect pollution sources early, and detect criminal activity. The system detects illegal toxic waste dumping via a wireless sensor network (WSN) in each polluted river. It consists of an Arduino UNO as the microcontroller, a 9 V lithium-ion rechargeable battery as the power supply, a pH meter sensor, a DS18B20 temperature sensor, a turbidity sensor, an SX1278 LoRa module, a GPS Neo-6M, and a SIM800C GSM module. The WSN system tracks freshwater quality measurements and is deployed at distributed locations; each node can communicate with a range of water quality sensors. The GPS signals give accurate and concise information used to estimate the exact location of the contaminated water. The data collected from each sensor go to the sub-base station, which acts as the network coordinator, and the dedicated people are alerted to the activities via the GSM network. Classification results distinguishing clear freshwater from polluted water across ten different situations show that this project has great potential for real-time detection of illegal toxic waste dumping in the target area.

Zuhanis Mansor, Nurul Nur Sabrina Abdul Latiff
Prediction of Teff Yield Using a Machine Learning Approach

Teff is one of the main ingredients in everyday food for most Ethiopians, and its production depends mainly on natural climate conditions, unpredictable changes in the climate, and other growth factors. Teff production is extremely variable across seasons, creating complex scenarios for yield prediction. Traditional methods of prediction are incomplete and require field data collection, which is costly, resulting in poor prediction accuracy. Remotely sensed satellite image data has proven to be a reliable and real-time source of data for crop yield prediction; however, these data are enormous in size and difficult to interpret. Recently, machine learning methods have been used to process satellite data, providing more accurate crop predictions. However, these approaches target croplands covering vast areas or regions, requiring huge amounts of cropland mask data, which are not available in most developing countries, and may not provide accurate household-level yield prediction. In this article, we propose a machine learning based teff yield prediction system for smaller cropland areas using publicly available multispectral satellite images, which capture spectral reflectance information related to crop growth status collected from different satellites (Landsat-8, Sentinel-2). For this, we prepared our own satellite image dataset for training. A convolutional neural network was developed and trained for a regression task. A training loss of 3.3783 and a validation loss of 1.6212 were obtained; in other words, the model prediction accuracy was 98.38%. This shows that our model's performance is very promising.

Adugna Necho Mulatu, Eneyachew Tamir
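
A minimal sketch of a CNN regression setup of the kind described above: convolutional layers over multispectral patches ending in a single linear output trained with mean squared error. The patch size, band count, and layer sizes are assumptions, not the paper's exact network.

import tensorflow as tf
from tensorflow.keras import layers

# Input: small multispectral patches (64x64 pixels, 6 bands assumed).
model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 6)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),            # linear output: predicted yield (regression)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
# model.fit(train_patches, train_yields, validation_split=0.2, epochs=50)
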
Numerical Simulation and Optimization of a Locally Built Midibus Structure in Quasi-static and Rollover Condition

Rollover crashworthiness concerns the ability of a vehicle's structural system and components to absorb energy while fully protecting occupants in dynamic (rollover) crash scenarios. This study first analyzes a locally built midibus structure in rollover crashes using numerical simulation (LS-DYNA) as stated by United Nations Regulation 66 (UNECE R66). It also considers quasi-static simulation to determine the energy absorption and load-deformation behavior of the midibus frame sections. Two design optimization alternatives are then presented, reinforcement design and numerical optimization (the Successive Response Surface Method in LS-OPT), to improve the strength and weight of the midibus structure. In the rollover simulation, the maximum deformation of the baseline structure occurred at pillar A and three bays; the baseline midibus structure thus failed the standard requirement and showed unacceptable strength in both the quasi-static and rollover simulations. Relative to the baseline model, the weight of the reinforced model was reduced by 5.2%, and the optimized model (using the Successive Response Surface Method) reduced the weight of the reinforced model by a further 5.6%. Lastly, the energy absorption and specific energy absorption of the baseline and the two alternative models were evaluated and compared.

Hailemichael Solomon Addisu, Ermias Gebrekidan Koricho, Adino Amare Kassie

Chapter 6. Major Green Technology Innovation and Implementation Mechanism

Communities are the main places for city life and living. From international experience, improvements in the living quality of residents will generally lead to an increase in carbon emissions per capita. The core of community renewal and renovation, however, is to improve residents’ living quality. Therefore, Chinese community/life carbon emissions may rise greatly in the future unless active interventions implementing green technologies and green lifestyles are adopted. If not, community renewal is likely to become a huge drag on achieving China’s “Double Carbon” goal.

China Council for International Cooperation on Environment and Development (CCICED) Secretariat

Chapter 8. Global Green Supply Chain

The term “Global Value Chains” refers to the processes by which value is added across different stages from production to consumption and carried out by actors located in different parts of the world (CCICED in Global green value chains—greening China’s ‘soft commodity’ value chains [EB/OL], 2020 [1]). In the global value chain, the production process is divided and distributed into different countries, with different companies undertaking their own specific tasks. Global value chains have significant advantages in many aspects, but their impacts on the environment cannot be ignored. Because they require huge volumes of raw commodities, sourced from diverse origins, global value chains can have significant negative impacts on biodiversity, climate change, ecological functions and the rights and livelihoods of communities in regions where commodities are produced.

China Council for International Cooperation on Environment and Development (CCICED) Secretariat
An Ensemble Model Based on Learning Vector Quantization Algorithms for Early Detection of Cassava Diseases Using Spectral Data

In Sub-Saharan Africa, cassava is the second most significant food crop after maize. Cassava brown streak disease (CBSD) and cassava mosaic virus disease (CMD) combined account for nearly 90% of productivity losses. Automating the detection and classification of crop diseases could help professionals diagnose diseases more accurately and allow farmers in remote locations to monitor their crops without the help of specialists. Machine learning algorithms have been used for the early detection and classification of crop diseases. Previous research has used plant image data captured with smartphones; however, disease symptoms must already be observable for this image-based strategy to work, and once symptoms appear on the aerial part of the plant, the root, which is the edible part, is already destroyed. In this study, we used spectral data in a three-class classification task for diagnosing cassava diseases. We propose an ensemble model based on Generalized Learning Vector Quantization (GLVQ), Generalized Matrix LVQ (GMLVQ), and Local Generalized Matrix LVQ (LGMLVQ). Experimental results revealed that the LGMLVQ model had the best overall performance on precision, recall, and F1-score, followed by our proposed ensemble model, then the GMLVQ model, and finally the GLVQ model. In terms of accuracy, LGMLVQ had overfitting issues even though it achieved the highest accuracy of 100%, followed by our proposed ensemble model at 82%, the GMLVQ model at 74%, and finally the GLVQ model at 56%.

Emmanuel Ahishakiye, Waweru Mwangi, Petronilla Murithi, Ruth Wario, Fredrick Kanobe, Taremwa Danison
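
One plausible reading of the ensemble above is majority voting over the three LVQ variants. The sketch below assumes the third-party sklearn-lvq package (GlvqModel, GmlvqModel, LgmlvqModel) and synthetic stand-in spectra; both the library choice and the hard-voting rule are assumptions, not the paper's confirmed setup.

from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn_lvq import GlvqModel, GmlvqModel, LgmlvqModel  # pip install sklearn-lvq

# Synthetic stand-in for spectral data: 3 classes (healthy, CBSD, CMD).
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("glvq", GlvqModel()),
                ("gmlvq", GmlvqModel()),
                ("lgmlvq", LgmlvqModel())],
    voting="hard",  # majority vote across the three LVQ variants
).fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
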
An Adaptive and Dynamic Heterogeneous Ensemble Model for Credit Scoring

Determining a person's financial credibility for a loan is a challenging task, as many variables are taken into consideration. Recently, there has been a surge in the application of machine learning approaches to the design of robust and effective credit scoring models, typically under the assumption that the variables will remain stable for a long time. In real life, however, customer behavior changes over time, and the variables used to quantify financial credibility, such as past performance on debt obligations, profiling, main household, income, and demographics, tend to drift and evolve. This paper treats credit scoring as an ephemeral scenario in which variables drift over time and proposes the application of data stream learning techniques, since they are tailored for incremental learning. This makes the scoring model able to detect and adapt to changes in customer behavior. We propose the Adaptive and Dynamic Heterogeneous Ensemble (ADHE) approach, which is capable of learning incrementally, adapts to drifting variables, and consists of models derived from different learning algorithms to exploit diversity. The prediction performance of ADHE is evaluated on publicly available datasets, and we compared the accuracy and computational cost of ADHE with existing state-of-the-art models. Our proposed approach performs significantly well compared with state-of-the-art benchmark models on prediction accuracy, according to a non-parametric test.

Tinofirei Museba
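
ADHE's code is not public; to make the data-stream framing concrete, here is a minimal incremental learner in scikit-learn that updates on each example with partial_fit and resets when a sliding-window error rate spikes. This is a crude stand-in for the drift detection and heterogeneous ensembling ADHE performs.

import numpy as np
from collections import deque
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss", random_state=0)  # "log" on older sklearn
window = deque(maxlen=200)     # sliding window of recent prediction errors
classes = np.array([0, 1])     # good / bad credit risk
fitted = False

for step in range(1500):
    x = rng.normal(size=(1, 5))
    drifted = step >= 700                        # simulated behaviour change
    y = np.array([int(x[0, 0] + (2.0 * x[0, 1] if drifted else 0.0) > 0)])

    if fitted:                                   # test-then-train evaluation
        window.append(int(model.predict(x)[0] != y[0]))
    model.partial_fit(x, y, classes=classes)
    fitted = True

    # Crude adaptation: reset the learner when the windowed error rate spikes.
    if len(window) == window.maxlen and np.mean(window) > 0.3:
        print(f"drift suspected at step {step}; resetting model")
        model = SGDClassifier(loss="log_loss", random_state=0)
        window.clear()
        fitted = False
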
Artificial Orca Algorithm for Solving University Course Timetabling Issue

The university course timetabling problem (UCTP) is one of the most traditional challenges and has been studied for a long time by many researchers. It belongs to the class of NP-hard problems, which are hard to solve with classical algorithms because of their complexity. Swarm intelligence has become a popular way to tackle NP-hard problems as well as real-life issues. This paper proposes a new solver for the university course timetabling problem based on the Artificial Orca Algorithm (AOA). To evaluate the proposal, a series of experiments is carried out on Ghardaia University timetabling data, and the performance of the proposed approach is compared with other algorithms developed for the same problem. The results show a clear superiority of our proposal over the others in terms of execution time and result quality.

Abdelhamid Rahali, KamelEddine Heraguemi, Samir Akhrouf, Mouhamed Benouis, Brahim Bouderah
DeBic: A Differential Evolution Biclustering Algorithm for Microarray Data Analysis

Biclustering is one of the interesting topics in bioinformatics and a crucial approach for extracting meaningful information from data and performing high-dimensional analysis of gene expression data. However, since the search space is colossal and the problem is proven NP-hard, an approach is required that identifies valuable biclusters with a good quality measure in a reasonable amount of time. Metaheuristics and evolutionary computation algorithms have shown considerable success in this area. This paper offers DeBic, a novel differential evolution based biclustering algorithm for extracting biclusters. Experiments on the popular Yeast Cell-Cycle dataset show that unique and interesting biclusters of larger sizes are discovered.

Younes Charfaoui, Amina Houari, Fatma Boufera
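
DeBic's encoding and fitness are only sketched in the abstract; as a generic illustration of differential evolution applied to biclustering, the toy below evolves a continuous membership vector over rows and columns, thresholds it at 0.5, and scores the selected submatrix with Cheng and Church's mean squared residue. Every encoding and parameter choice here is an assumption, not the authors' algorithm.

import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)
data = rng.normal(size=(30, 20))
data[5:15, 4:10] = 2.0 + rng.normal(scale=0.1, size=(10, 6))  # planted bicluster

def msr(rows, cols):
    """Cheng-Church mean squared residue of the selected submatrix."""
    sub = data[np.ix_(rows, cols)]
    res = sub - sub.mean(1, keepdims=True) - sub.mean(0, keepdims=True) + sub.mean()
    return float((res ** 2).mean())

def fitness(v):
    rows = np.where(v[:30] > 0.5)[0]
    cols = np.where(v[30:] > 0.5)[0]
    if len(rows) < 2 or len(cols) < 2:
        return 1e6                       # penalize degenerate biclusters
    # Low residue is good; a small size bonus rewards larger biclusters.
    return msr(rows, cols) - 0.001 * len(rows) * len(cols)

result = differential_evolution(fitness, bounds=[(0, 1)] * 50, seed=3, maxiter=200)
best = result.x
print("rows:", np.where(best[:30] > 0.5)[0])
print("cols:", np.where(best[30:] > 0.5)[0])
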
An IoT Based Helopeltis Sp Pest Control System

Agriculture is the main activity of residents in the southern part of Tanzania. Cashew nuts, the most important cash crop, introduced by the Portuguese, have never reached optimal yields due to pests, especially Helopeltis sp. The Internet of Things (IoT) based pest control system aims to implement a system able to capture, identify, and store the Helopeltis pest, using Google Colab and Proteus as simulation tools. Pest recognition was carried out on the Google Colab Pro platform using the TensorFlow library, the Python 3.7 programming language, and the Faster R-CNN InceptionV2 model. An accuracy of 97.87% was obtained, while the mechanical part, wiping pests into a container, wiping them from the top of the container lid back to the environment, and notifying the farmer by SMS (Short Message Service), was simulated using Proteus. Cashew nut pests can now be controlled in a green way, discouraging the use of pesticides, which also destroy pollinators and degrade the quality of the soil, the crop, and the environment. It was found that image training requires adequate resources, including a high-performance graphics card, memory, and processing power, all of which Google Colab provides. It was also noted that the more images are used for training, the more accurate the detection will be.

Kannole E. Veronica, Rushingabigwi Gerard, Diwani Abubakar
Chapter 2. Laughter and the Formation of a Concept of Humour

This chapter discusses the historically misleading conflation of humour with laughter as a means of creating a genealogy of humour theory. It maps the understandings of laughter in Europe prior to a concept of humour. It then outlines the uncertain and erratic formation of a concept of humour from popular humoral theory, becoming first a rough synonym for a jest and then a general unstable covering term for a range of specific discursive phenomena. The notion of a sense of humour also depended on shifts in the meaning of sense. The expression dates only from the nineteenth century. The chapter concludes by discussing humour as a loan word from French.

Conal Condren
Swin UNETR for Tumor and Lymph Node Segmentation Using 3D PET/CT Imaging: A Transfer Learning Approach

Delineation of Gross Tumor Volume (GTV) is essential for the treatment of cancer with radiotherapy. GTV contouring is a time-consuming specialized manual task performed by radiation oncologists. Deep Learning (DL) algorithms have shown potential in creating automatic segmentations, reducing delineation time and inter-observer variation. The aim of this work was to create automatic segmentations of primary tumors (GTVp) and pathological lymph nodes (GTVn) in oropharyngeal cancer patients using DL. The organizers of the HECKTOR 2022 challenge provided 3D Computed Tomography (CT) and Positron Emission Tomography (PET) scans with ground-truth GTV segmentations acquired from nine different centers. Bounding box cropping was applied to obtain an anatomically based region of interest. We used the Swin UNETR model in combination with transfer learning. The Swin UNETR encoder weights were initialized by pre-trained weights of a self-supervised Swin UNETR model. An average Dice score of 0.656 was achieved on a test set of 359 patients from the HECKTOR 2022 challenge. Code is available at: https://github.com/HC94/swin_unetr_hecktor_2022. AIcrowd group name: RT_UMCG.

Hung Chu, Luis Ricardo De la O Arévalo, Wei Tang, Baoqiang Ma, Yan Li, Alessia De Biase, Stefan Both, Johannes Albertus Langendijk, Peter van Ooijen, Nanna Maria Sijtsema, Lisanne V. van Dijk
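
A minimal sketch of the model setup described above, assuming MONAI's SwinUNETR implementation: two input channels (PET and CT), three output classes, and transfer learning by loading only the Swin encoder weights from a self-supervised checkpoint. The checkpoint path is a placeholder and the key-filtering pattern is a common convention, not necessarily the authors' exact code.

import torch
from monai.networks.nets import SwinUNETR

# Two input channels (PET + CT), three output classes (background, GTVp, GTVn).
# Note: img_size is deprecated/ignored in newer MONAI releases.
model = SwinUNETR(img_size=(96, 96, 96), in_channels=2, out_channels=3,
                  feature_size=48)

# Transfer learning: initialise only the Swin encoder ("swinViT") from
# self-supervised pre-trained weights; the checkpoint path is a placeholder.
ckpt = torch.load("ssl_pretrained_swin.pt", map_location="cpu")
state = ckpt.get("state_dict", ckpt)
encoder = {k: v for k, v in state.items() if k.startswith("swinViT")}
missing, unexpected = model.load_state_dict(encoder, strict=False)
print(f"loaded {len(encoder)} encoder tensors; {len(missing)} params left random")

with torch.no_grad():
    out = model(torch.randn(1, 2, 96, 96, 96))
print(out.shape)  # torch.Size([1, 3, 96, 96, 96])
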
Online Adaptive Multivariate Time Series Forecasting

Multivariate Time Series (MTS) involve multiple interdependent time series variables. An MTS has two dimensions: spatial, along the different variables composing the MTS, and temporal. Both the complex and the time-evolving nature of MTS data make forecasting one of the most challenging tasks in time series analysis. Typical methods for MTS forecasting are designed to operate statically in time or space, without taking into account the evolution of spatio-temporal dependencies among data observations, which may be subject to significant changes. Moreover, it is generally accepted that none of these methods is universally valid for every application. Therefore, we propose an online adaptation of MTS forecasting by devising a fully automated framework for both adaptive selection of input spatio-temporal variables and adequate forecasting model selection. The adaptation is performed in an informed manner following concept-drift detection in both spatio-temporal dependencies and model performance over time. In addition, a well-designed meta-learning scheme is used to automate the selection of appropriate dependence measures and the forecasting model. An extensive empirical study on several real-world datasets shows that our method achieves excellent or on-par results in comparison to the state-of-the-art (SoA) approaches as well as several baselines.

Amal Saadallah, Hanna Mykula, Katharina Morik
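To make the informed-adaptation idea concrete, the following toy loop monitors forecasting error on a stream and re-selects the active model when a crude performance-drift test fires. This is a conceptual sketch only; the paper's framework additionally adapts the input spatio-temporal variables and uses meta-learning for model selection:

```python
import numpy as np
from collections import deque

class NaiveLast:                          # hypothetical stand-in forecasters
    def predict(self, window): return window[-1]

class MovingAverage:
    def predict(self, window): return window.mean(axis=0)

def drift_detected(errors, recent=30, ratio=2.0):
    # Performance drift: recent error clearly above the long-run error.
    e = np.array(errors)
    return len(e) > 2 * recent and e[-recent:].mean() > ratio * e[:-recent].mean()

models = [NaiveLast(), MovingAverage()]
errors = [deque(maxlen=500) for _ in models]
active = 0
history = deque(maxlen=50)

for x_t in np.random.randn(1000, 3):      # stream of 3-variate observations
    if len(history) == history.maxlen:
        window = np.array(history)
        for i, m in enumerate(models):
            errors[i].append(np.mean((m.predict(window) - x_t) ** 2))
        if drift_detected(errors[active]):
            active = int(np.argmin([np.mean(e) for e in errors]))  # re-select
    history.append(x_t)
```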
Stacking Feature Maps of Multi-scaled Medical Images in U-Net for 3D Head and Neck Tumor Segmentation

Machine learning, especially deep learning, has achieved state-of-the-art performance on various computer vision tasks. Computer vision tasks in the medical domain remain challenging, however, since medical data are heterogeneous, multi-level, and multi-scale. The Head and Neck Tumor Segmentation Challenge (HECKTOR) provides a platform for applying machine learning techniques to the medical image domain. HECKTOR 2022 provides positron emission tomography/computed tomography (PET/CT) images, which include the metabolic and anatomical information needed for accurate tumor segmentation. In this paper, we propose a stacked multi-scaled medical image segmentation framework to automatically segment head and neck tumors using PET/CT images. The main idea of our network is to generate various low-resolution feature maps of PET/CT images to produce a better contour of head and neck tumors. We used multi-scaled PET/CT images as inputs and stacked intermediate feature maps by resolution for a better inference result. In addition, we evaluated our model on the HECKTOR challenge test dataset. Overall, we achieved mean Dice scores of 0.69786 and 0.66730 on GTVp and GTVn, respectively. Our team's name is HPCAS.

Yaying Shi, Xiaodong Zhang, Yonghong Yan
Head and Neck Primary Tumor and Lymph Node Auto-segmentation for PET/CT Scans

Segmentation of head and neck (H&N) cancer primary tumors and lymph nodes on medical imaging is a routine part of radiation treatment planning and may lead to improved response assessment and quantitative imaging analysis. Manual segmentation is a difficult and time-intensive task requiring specialist knowledge. In computer vision, deep learning-based architectures have achieved state-of-the-art (SOTA) performance on many downstream tasks, including medical image segmentation. Deep learning-based auto-segmentation tools may improve the efficiency and robustness of H&N cancer segmentation. To encourage high-performing lesion segmentation methods that utilize the bi-modal information of PET and CT images, the HEad and neCK TumOR (HECKTOR) challenge is offered annually. In this paper, we preprocess PET/CT images and train and evaluate several deep learning frameworks, including 3D U-Net, MNet, Swin Transformer, and nnU-Net (both 2D and 3D), to automatically segment primary tumors (GTVp) and cancerous lymph nodes (GTVn) in CT and PET images. Our investigations led us to three promising models for submission. Via 5-fold cross-validation with ensembling and testing on a blinded hold-out set, we achieved averages of 0.77 and 0.70 on the aggregated Dice Similarity Coefficient (DSC) metric for primary tumors and nodes, respectively, in Task 1 of the HECKTOR 2022 challenge. Herein, we describe in detail the methodology and results for our top three performing models that were submitted to the challenge. Our investigations demonstrate the versatility and robustness of such deep learning models for automatic tumor segmentation to improve H&N cancer treatment. Our full implementation based on the PyTorch framework and the trained models are available at https://github.com/xmuyzz/HECKTOR2022 (Team name: AIMERS).

Arnav Jain, Julia Huang, Yashwanth Ravipati, Gregory Cain, Aidan Boyd, Zezhong Ye, Benjamin H. Kann
A Coarse-to-Fine Ensembling Framework for Head and Neck Tumor and Lymph Segmentation in CT and PET Images

Head and neck (H&N) cancer is one of the most prevalent cancers [1]. In its treatment and prognosis analysis, tumors and metastatic lymph nodes play an important role, but their manual segmentation is time-consuming and laborious. In this paper, we propose a coarse-to-fine ensembling framework to segment H&N tumors and metastatic lymph nodes automatically from Positron Emission Tomography (PET) and Computed Tomography (CT) images. The framework consists of three steps. The first step locates the head region in CT images. The second step is a coarse segmentation that locates the tumor and lymph node region of interest (ROI) within the head region. The last step is a fine segmentation that produces the final precise predictions of tumors and metastatic lymph nodes, for which we propose an ensembling refinement model. The framework achieves an aggregated Dice Similarity Coefficient (DSC) of 0.77782 in Task 1 of the HECKTOR 2022 challenge [2, 3] as team SJTU426.

Xiao Sun, Chengyang An, Lisheng Wang
PathOracle: A Deep Learning Based Trip Planner for Daily Commuters

In this paper, we propose a novel data-driven approach for a trip planner that finds the most popular multi-modal trip using public transport from historical trips, given a source, a destination, and user-defined constraints such as time, minimum switches, or preferred modes of transport. To solve the most popular trip problem and its variants, we propose a multi-stage deep learning architecture, PathOracle, that consists of two major components: KSNet, which generates key stops, and MPTNet, which generates popular path trips from a source to a destination passing through the key stops. We also introduce a unique representation of stops, Stop2Vec, that considers both neighborhood and trip popularity between stops to facilitate accurate path planning. We present an extensive experimental study with a large real-world public-transport commuting dataset (Myki) from the city of Melbourne, and demonstrate the effectiveness of our proposed approaches.

Md. Tareq Mahmood, Mohammed Eunus Ali, Muhammad Aamir Cheema, Syed Md. Mukit Rashid, Timos Sellis
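Stop2Vec's neighborhood component is reminiscent of skip-gram word embeddings applied to trips as "sentences" of stop IDs. A minimal sketch with gensim on toy data (the actual Stop2Vec additionally incorporates trip popularity between stops):

```python
from gensim.models import Word2Vec

# Hypothetical historical trips: each trip is a sequence of stop IDs.
trips = [
    ["stop_12", "stop_7", "stop_33", "stop_41"],
    ["stop_7", "stop_33", "stop_2"],
    ["stop_41", "stop_33", "stop_7", "stop_12"],
]

# Skip-gram (sg=1): each stop learns to predict its neighbours along a trip,
# so stops that co-occur on journeys end up close in embedding space.
model = Word2Vec(sentences=trips, vector_size=64, window=3,
                 min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("stop_33", topn=2))
```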
Explainable Anomaly Detection System for Categorical Sensor Data in Internet of Things

Internet of Things (IoT) applications deploy massive numbers of sensors to monitor systems and their environment. Anomaly detection on streaming sensor data is an important task for IoT maintenance and operation. However, there are two major challenges for anomaly detection in real IoT applications: (1) many sensors report categorical values rather than numerical readings; (2) the end users may not understand the detection results and require additional knowledge and explanations to make decisions and take action. Unfortunately, most existing solutions cannot satisfy such requirements. To bridge the gap, we design and develop an eXplainable Anomaly Detection System (XADS) for categorical sensor data. XADS trains models from historical normal data and conducts online monitoring. XADS detects anomalies in an explainable way: the system not only reports anomalies' time periods, types, and detailed information, but also provides explanations on why they are abnormal and what the normal data look like. Such information significantly helps users' decision making. Moreover, XADS requires limited parameter setting in advance, yields high detection accuracy, and comes with a user-friendly interface, making it an efficient and effective tool for monitoring a wide variety of IoT applications.

Peng Yuan, Lu-An Tang, Haifeng Chen, Moto Sato, Kevin Woodward
Attention, Filling in the Gaps for Generalization in Routing Problems

Machine Learning (ML) methods have become a useful tool for tackling vehicle routing problems, either in combination with popular heuristics or as standalone models. However, current methods suffer from poor generalization when tackling problems of different sizes or different distributions. As a result, ML in vehicle routing has witnessed an expansion phase, with new methodologies being created for particular problem instances that become infeasible at larger problem sizes. This paper aims at encouraging the consolidation of the field through understanding and improving current existing models, namely the attention model by Kool et al. We identify two discrepancy categories for VRP generalization. The first is based on differences that are inherent to the problems themselves; the second relates to architectural weaknesses that limit the model's ability to generalize. Our contribution is threefold: we first target model discrepancies by adapting the Kool et al. method and its loss function for Sparse Dynamic Attention based on the alpha-entmax activation. We then target inherent differences through a mixed-instance training method that has been shown to outperform single-instance training in certain scenarios. Finally, we introduce a framework for inference-level data augmentation that improves performance by leveraging the model's lack of invariance to rotation and dilation changes.

Ahmad Bdeir, Jonas K. Falkner, Lars Schmidt-Thieme
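The sparse dynamic attention mentioned above replaces softmax with the alpha-entmax transformation, which can assign exactly zero probability to irrelevant nodes. A minimal sketch using the open-source entmax package (alpha = 1.5); the wiring into the full Kool et al. model is omitted:

```python
import torch
from entmax import entmax15   # pip install entmax

def sparse_attention(q, k, v):
    # Scaled dot-product attention with entmax15 instead of softmax, so
    # low-scoring nodes receive exactly zero attention weight.
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    weights = entmax15(scores, dim=-1)
    return weights @ v, weights

q, k, v = (torch.randn(2, 10, 32) for _ in range(3))   # (batch, nodes, dim)
out, w = sparse_attention(q, k, v)
print("fraction of exactly-zero weights:", (w == 0).float().mean().item())
```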
ImbalancedLearningRegression - A Python Package to Tackle the Imbalanced Regression Problem

This package helps Python users address imbalanced regression problems. Popular Python packages exist for imbalanced classification. However, there is still little Python support for imbalanced regression. Imbalanced regression is a well-known problem that occurs across domains, where a continuous target variable is poorly represented on ranges that are important to the end-user. Here, a re-sampling strategy is applied to modify the distribution of the target variable, biasing it towards the end-user interests so that downstream learning algorithms can be trained on the most relevant cases. The package provides an easy-to-use and extensible implementation of eight state-of-the-art re-sampling methods for regression, including four under-sampling and four over-sampling techniques. Code related to this paper is available at: https://github.com/paobranco/ImbalancedLearningRegression .

Wenglei Wu, Nicholas Kunz, Paula Branco
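For intuition, re-sampling for imbalanced regression modifies the training distribution of a continuous target. The sketch below shows naive random over-sampling of an upper-tail "rare" region; it illustrates the concept only and is not the package's API (the package implements eight relevance-based methods, see the linked repository):

```python
import numpy as np
import pandas as pd

def oversample_rare(df, target, quantile=0.9, factor=3, seed=0):
    # Treat samples whose target falls in the upper tail as rare and
    # replicate them so downstream learners see them more often.
    threshold = df[target].quantile(quantile)
    rare = df[df[target] > threshold]
    extra = rare.sample(n=(factor - 1) * len(rare), replace=True, random_state=seed)
    return pd.concat([df, extra], ignore_index=True)

df = pd.DataFrame({"x": np.random.randn(1000),
                   "y": np.random.exponential(1.0, 1000)})   # skewed target
print(len(df), "->", len(oversample_rare(df, "y")))
```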
Distributional Correlation–Aware Knowledge Distillation for Stock Trading Volume Prediction

Traditional knowledge distillation in classification problems transfers the knowledge via class correlations in the soft label produced by teacher models, which are not available in regression problems like stock trading volume prediction. To remedy this, we present a novel distillation framework for training a light-weight student model to perform trading volume prediction given historical transaction data. Specifically, we turn the regression model into a probabilistic forecasting model by training models to predict a Gaussian distribution to which the trading volume belongs. The student model can thus learn from the teacher at a more informative distributional level, by matching its predicted distributions to those of the teacher. Two correlational distillation objectives are further introduced to encourage the student to produce pair-wise relationships consistent with the teacher model. We evaluate the framework on a real-world stock volume dataset with two different time window settings. Experiments demonstrate that our framework is superior to strong baseline models, compressing the model size by 5× while maintaining 99.6% prediction accuracy. The extensive analysis further reveals that our framework is more effective than vanilla distillation methods under low-resource scenarios. Our code and data are available at https://github.com/lancopku/DCKD .

Lei Li, Zhiyuan Zhang, Ruihan Bao, Keiko Harimoto, Xu Sun
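The distributional matching described above has a convenient closed form when both models predict Gaussians: the student is trained with a Gaussian negative log-likelihood plus a KL term to the teacher's predicted distribution. A minimal sketch with hypothetical head outputs (not the authors' exact objective, which also adds the correlational terms):

```python
import torch

def gaussian_nll(mu, log_sigma, y):
    # -log N(y | mu, sigma^2), up to an additive constant.
    return (log_sigma + 0.5 * ((y - mu) / log_sigma.exp()) ** 2).mean()

def gaussian_kl(mu_s, log_sig_s, mu_t, log_sig_t):
    # Closed-form KL( N_student || N_teacher ), averaged over the batch.
    var_s, var_t = (2 * log_sig_s).exp(), (2 * log_sig_t).exp()
    return (log_sig_t - log_sig_s
            + (var_s + (mu_s - mu_t) ** 2) / (2 * var_t) - 0.5).mean()

mu_s = torch.randn(32, 1, requires_grad=True)          # student head outputs
log_s = torch.zeros(32, 1, requires_grad=True)
mu_t, log_t = torch.randn(32, 1), torch.zeros(32, 1)   # frozen teacher outputs
y = torch.randn(32, 1)                                 # observed volumes

loss = gaussian_nll(mu_s, log_s, y) + 0.5 * gaussian_kl(mu_s, log_s, mu_t, log_t)
loss.backward()
```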
Grasping Partially Occluded Objects Using Autoencoder-Based Point Cloud Inpainting

Flexible industrial production systems will play a central role in the future of manufacturing due to higher product individualization and customization. A key component in such systems is the robotic grasping of known or unknown objects in random positions. Real-world applications often come with challenges that might not be considered in grasping solutions tested in simulation or lab settings. Partial occlusion of the target object is the most prominent: examples include supporting structures in the camera's field of view, sensor imprecision, or parts occluding each other due to the production process. In all these cases, the resulting lack of information leads to shortcomings in calculating grasping points. In this paper, we present an algorithm to reconstruct the missing information. Our inpainting solution facilitates the real-world utilization of robust object matching approaches for grasping point calculation. We demonstrate the benefit of our solution by enabling an existing grasping system embedded in a real-world industrial application to handle occlusions in the input. With our solution, we drastically decrease the number of objects discarded by the process.

Alexander Koebler, Ralf Gross, Florian Buettner, Ingo Thon
Few-Shot Forecasting of Time-Series with Heterogeneous Channels

Learning complex time series forecasting models usually requires a large amount of data, as each model is trained from scratch for each task/data set. Leveraging learning experience with similar datasets is a well-established technique for classification problems called few-shot classification. However, existing approaches cannot be applied to time-series forecasting because i) multivariate time-series datasets have different channels, and ii) forecasting is principally different from classification. In this paper, we formalize the problem of few-shot forecasting of time-series with heterogeneous channels for the first time. Extending recent work on heterogeneous attributes in vector data, we develop a model composed of permutation-invariant deep set-blocks which incorporate a temporal embedding. We assemble the first meta-dataset of 40 multivariate time-series datasets and show through experiments that our model provides a good generalization, outperforming baselines carried over from simpler scenarios that either fail to learn across tasks or miss temporal information.

Lukas Brinkmeyer, Rafael Rego Drumond, Johannes Burchert, Lars Schmidt-Thieme
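The permutation-invariant deep-set block is the key to handling datasets whose channel counts differ. A minimal sketch of the encode-pool-decode pattern with toy dimensions (the paper's model additionally injects a temporal embedding per channel):

```python
import torch
import torch.nn as nn

class SetBlock(nn.Module):
    """Channel-permutation-invariant block: phi per channel, mean-pool, rho."""
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.ReLU())

    def forward(self, x):            # x: (batch, channels, features)
        h = self.phi(x)              # encode each channel independently
        pooled = h.mean(dim=1)       # pooling removes channel-order dependence
        return self.rho(pooled)

block = SetBlock(d_in=24, d_hidden=64)
series_a = torch.randn(8, 5, 24)     # dataset A: 5 channels
series_b = torch.randn(8, 11, 24)    # dataset B: 11 channels
print(block(series_a).shape, block(series_b).shape)   # both -> (8, 64)
```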
Chapter 8. Fruits of Securitization

Securitization is done for a purpose; it is a tool rather than a goal. That has been the case in Erdoğan's Turkey. This chapter briefly discusses how Erdoğan has used securitization as a tool to seize power in Turkey and secure his authoritarian grip on it. Erdoğan and the AKP employed securitization to change the regime, which they successfully did, from a parliamentary system to a presidential one. Finally, they are employing securitization to carry their domestic issues to the transnational level, where they influence diaspora communities and call them to action.

Ihsan Yilmaz, Erdoan Shipoli, Mustafa Demir
Chapter 9. Technical Means of Studying the Liner “Titanic”

The British transatlantic passenger steamer (liner) “Titanic” was launched on May 31, 1911 at the Harland & Wolff shipyard in Belfast, by order of the White Star Line shipping company. The “Titanic” was divided into 16 watertight compartments, had a double bottom, two four-cylinder triple-expansion steam engines and a steam turbine, reached speeds of up to 23 knots, was electrified and radio-equipped, and had 4 elevators, making it the most advanced liner of its time.

Mikhail Klyuev, Anatoly Schreider, Igor Rakitin
Chapter 3. Basic Equipment for Underwater Archaeological Research

Basic equipment for underwater archaeological research includes: a base ship; diving (scuba) equipment; submersible vehicles; devices for scattering and removing soil; equipment for marking polygons; and equipment for the conservation of artifacts.

Mikhail Klyuev, Anatoly Schreider, Igor Rakitin

Open Access

7. Real-Time Simulation for Steering the Tunnel Construction Process

Currently, in mechanized tunneling, the steering of tunnel boring machines (TBMs) in practice is mainly decided based on engineering expert knowledge and recorded monitoring data. In this chapter, a new concept of exploiting the advantages of simulation models to support the steering phase is presented, which allows optimizing the construction process. With the aim of supporting steering decisions during tunnel construction by means of real-time simulations, predictive simulation models are established in the initial planning phase of a tunnel project. The models can then be continuously updated with monitoring data during construction. The chapter focuses on explaining models for real-time predictions of logistics processes and tunneling-induced settlements, as well as the risk of building damage, in more detail. Additionally, practice-oriented application examples are presented to illustrate the applicability of the proposed concept.

Ba Trung Cao, Lukas Heußner, Annika Jodehl, Markus Obel, Yara Salloum, Steffen Freitag, Markus König, Peter Mark, Günther Meschke, Markus Thewes

Open Access

6. Digital Design in Mechanized Tunneling

Digital design methods are constantly improving the planning procedure in tunnel construction. This development includes the implementation of rule-based systems, concepts for cross-document and cross-model data integration, and new evaluation concepts that exploit the possibilities of digital design. For planning in tunnel construction and alignment selection, integrated planning environments are created, which support decision-making through interactive use. By integrating room-ware products, such as touch tables and virtual reality devices, collaborative approaches are also considered, in which decision-makers can be directly involved in the planning process. In current tunneling practice, Finite Element (FE) simulations form an integral element of the planning and design phase of mechanized tunneling projects. The generation of adequate computational models is often time-consuming and requires data from many different sources. Incorporating Building Information Modeling (BIM) concepts offers opportunities to simplify this process by using geometrical BIM sub-models as a basis for structural analyses. In the following chapter, some modern possibilities of digital planning and evaluation of alignments in tunnel construction are explained in more detail. Furthermore, the conception and implementation of an interactive BIM- and GIS-integrated planning system, the “BIM-to-FEM” technology that automatically extracts relevant information needed for FE simulations from BIM sub-models, the establishment of surrogate models for real-time predictions, and the evaluation and comparison of planning variants are presented.

Abdullah Alsahly, Hoang-Giang Bui, Lukas Heußner, Annika Jodehl, Rodolfo Javier Williams Moises, Markus Obel, Marcel Stepien, Andre Vonthron, Yaman Zendaki, Steffen Freitag, Markus König, Elham Mahmoudi, Peter Mark, Günther Meschke, Markus Thewes
Federated Ensemble Algorithm Based on Deep Neural Network

Federated learning is one of the most popular research topics in multi-source privacy data protection. Its architecture can train a common model that satisfies the needs of many parties without the data leaving its local source. However, there are circumstances in which local model parameters are difficult to integrate and cannot be used securely. We therefore present a federated ensemble algorithm based on deep learning, applying both deep learning and ensemble learning within the federated setting. Ensemble strategies that integrate local model parameters improve the accuracy of the model while taking the security of multi-source data into account. Experiments show that, compared with conventional multi-source data processing techniques, the algorithm improves training accuracy on the MNIST, digits, letter, and wine datasets by 1%, 8%, 1%, and 1%, respectively, while also improving the security of data and models that come from more than one source.

Dan Wang, Ting Wang
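As background for the parameter-integration step, the canonical federated baseline is a weighted average of client model parameters on the server. A minimal FedAvg-style sketch with stand-in linear models (the paper's ensemble strategies go beyond plain averaging):

```python
import torch
import torch.nn as nn

def fed_average(models, weights=None):
    # Server-side aggregation: weighted average of the clients' parameters.
    n = len(models)
    weights = weights or [1.0 / n] * n
    avg = {k: torch.zeros_like(v) for k, v in models[0].state_dict().items()}
    for m, w in zip(models, weights):
        for k, v in m.state_dict().items():
            avg[k] += w * v
    return avg

clients = [nn.Linear(10, 2) for _ in range(3)]    # hypothetical local models
global_model = nn.Linear(10, 2)
global_model.load_state_dict(fed_average(clients))
```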
Object Detection Based Automated Optical Inspection of Printed Circuit Board Assembly Using Deep Learning

Advancements in the electronics industry have made electronic components smaller and increased the number of components on a Printed Circuit Board (PCB). Industries specializing in manufacturing Printed Circuit Board Assemblies (PCBA) also implement manual visual inspection in the In-Process Quality Control (IPQC) verification process to ensure product quality. Such technological advancement has increased operators' workload and the time taken to perform inspection. This study aims to reduce the time consumption and cognitive load of operators, while ensuring consistency of visual inspection during component verification, by utilizing deep learning models to perform object-detection-based automated optical inspection of images containing electronic components. Three deep learning algorithms were used in the study: Faster R-CNN, YOLO v3, and SSD FPN. Both Faster R-CNN and SSD FPN utilized a ResNet-50 backbone, whereas YOLO v3 was built with a Darknet-53 backbone. Various input image dimensions and image resizing options were explored to determine the best model for object detection. At the end of the study, SSD FPN with input images resized to 640 × 640, keeping the image aspect ratio and using padding, is concluded to be the best localization and classification model for detecting the various types of components present in digital images.

Ong Yee Chiun, Nur Intan Raihana Ruhaiyem
Performance Evaluation of Deep Learning Algorithms for Young and Mature Oil Palm Tree Detection

Oil palm trees, one of the most essential economic crops in Malaysia, have an economic lifespan of 20–30 years. Estimating oil palm tree age automatically through computer vision would benefit plantation management. In this work, an object detection technique is proposed using high-resolution satellite imagery, tested with four different deep learning architectures: SSD, Faster R-CNN, CenterNet, and EfficientDet. The models are trained using the TensorFlow Object Detection API and assessed with performance metrics and visual inspection. It is possible to produce an automated oil palm tree detection model that estimates the age range, either young or mature, based on crown size. Faster R-CNN is identified as the best model, with a total loss of 0.0047, mAP of 0.391, and mAR of 0.492, all with IoU thresholds from 0.5 to 0.95 with a step size of 0.05. Parameter tuning was done on the best model, and further improvement is possible with increasing batch size.

Soh Hong Say, Nur Intan Raihana Ruhaiyem, Yusri Yusup
Short-Time Fourier Transform with Optimum Window Type and Length: An Application for Sag, Swell and Transient

Power quality signals are non-stationary, and their behaviour can have negative consequences for sensitive equipment. Modern cross-term time-frequency distributions (TFDs) can characterize power quality accurately but suffer from measurement delay, since power quality signals, in this case sag, swell, and transient, need to be analyzed in real time. It is shown that the one-window-shift (OWS) property of the linear time-frequency representation (TFR) resulting from the short-time Fourier transform (STFT) satisfies accuracy, complexity, and memory requirements. By optimally selecting a window length of 512, the TFR provides good time and frequency localization, while spectral leakage is reduced by the Hanning window. The proposed technique characterizes the power quality signals with an average accuracy of 95%, while complexity and memory usage remain low. Finally, the paper concludes with a recommended pre-setting of optimum window type and length for real-time power quality measurement.

Muhammad Sufyan Safwan Mohamad Basir, Nur-Adibah Raihan Affendy, Mohamad Azizan Mohamad Said, Rizalafande Che Ismail, Khairul Huda Yusof
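The recommended pre-setting (Hanning window, length 512) maps directly onto a standard STFT call. A minimal sketch with SciPy on a synthetic voltage sag; the sampling rate is an assumption and the sag/swell classification thresholds are not shown:

```python
import numpy as np
from scipy.signal import stft

fs = 3200                                  # assumed sampling rate
t = np.arange(0, 1, 1 / fs)
v = np.sin(2 * np.pi * 50 * t)             # 50 Hz fundamental
v[int(0.4 * fs):int(0.6 * fs)] *= 0.5      # synthetic 50% sag, 0.4 s to 0.6 s

# Hanning window of length 512: reduces spectral leakage while keeping
# enough time resolution to localize the sag.
f, frames, Z = stft(v, fs=fs, window="hann", nperseg=512)

fund = np.argmin(np.abs(f - 50))           # frequency bin of the fundamental
envelope = np.abs(Z[fund])
print("minimum per-unit magnitude:", envelope.min() / envelope.max())
```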
The Nature Outside Cities: Trade-Offs and Synergies of Cultural Ecosystem Services from Natura 2000 Sites

The high level of anthropization in urban areas has induced a shift of resource demand, where the supply has moved outside the cities' boundaries. Thus, protected areas located in the cities' proximity have faced pressure to satisfy cities' needs, leading to conflicts and loss of critical ecosystem services. Our study aims to assess the cultural ecosystem services (CES) and recreational activities provided by five Natura 2000 sites located in near-urban environments. We used photographs uploaded on social media and multiple correspondence analysis to investigate the synergies and trade-offs between different CES and recreational activities. The analyzed photos showed synergies between aesthetic values and related activities, such as photographing landscapes and watching wildlife. However, we found trade-offs between aesthetic values and recreational activities, which are the result of different types of management. Protected areas offer multiple opportunities for conducting scientific and educational investigations to conserve and protect key species and habitats. As a result, we found synergies between knowledge values and educational and conservation activities. We conclude that Natura 2000 sites located in the proximity of urban spaces are valuable places for nature experience outside cities. Therefore, the importance of such locations has to be considered when planning urban green infrastructure.

Denisa Lavinia Badiu, Constantina-Alina Hossu, Cristian Ioja, Mihai-Răzvan Niţă
Urban Forests in Megacities from the Perspective of Ecosystem Services Using the Timiryazevsky Forest Park, Moscow, as a Case Study

This study analyses different links between the urban forest as an element of green infrastructure and green space production in megacities, using an urban forest park in Moscow as a case study. It illustrates the different functions and meanings of urban forest parks and assesses their social, ecological and economic values using the concept of ecosystem services. Such evaluation and assessment of green areas highlight the importance of city greenery in coping with high anthropogenic loads and reveal the need to not only maintain but also significantly improve their environmental condition. There is great urgency to develop approaches to ensure sustainable development in Moscow based on the concepts of urban green infrastructure and ecosystem services. Although ecosystem services provided by forests have already been assessed for Russia at the national level, the identification and evaluation of ecosystem services at the city level, especially those provided by urban forests, is a relatively new research field in Russia. Such assessments demonstrate the values that are 'invisible' to developers and that must be compensated for during housing development. Therefore, recognition of ecological and economic arguments representing the market value of urban forests might help avoid their replacement by more profitable land uses.

Mikhail Antonenko, Diana Dushkova, Tatyana Krasovskaya
3. Capture

We are now at the start of the next stage of our journey towards organizational resilience. This stage will be challenging, because we will pick up a lot of baggage along the way. You may recall: in storytelling, Capture means finding the right stories for the occasion at hand. This chapter is meant to help you build up your own stock of stories. We also discuss which kind of story fits which situation. And at the end of the chapter, we share our experience of where good stories can be found and how we store our stories until they are needed. This leg of the journey is again divided into the familiar stopovers of risk management, crisis management, and error management. Are you ready? Then buckle up and off we go ...

Ilka Heinze, Thomas Henschel, Jens Hirt
Class-Incremental Learning via Knowledge Amalgamation

Catastrophic forgetting has been a significant problem hindering the deployment of deep learning algorithms in the continual learning setting. Numerous methods have been proposed to address the catastrophic forgetting problem where an agent loses its generalization power of old tasks while learning new tasks. We put forward an alternative strategy to handle the catastrophic forgetting with knowledge amalgamation (CFA), which learns a student network from multiple heterogeneous teacher models specializing in previous tasks and can be applied to current offline methods. The knowledge amalgamation process is carried out in a single-head manner with only a selected number of memorized samples and no annotations. The teachers and students do not need to share the same network structure, allowing heterogeneous tasks to be adapted to a compact or sparse data representation. We compare our method with competitive baselines from different strategies, demonstrating our approach’s advantages. Source-code: github.com/Ivsucram/CFA .

Marcus de Carvalho, Mahardhika Pratama, Jie Zhang, Yajuan Sun
A Pre-screening Approach for Faster Bayesian Network Structure Learning

Learning the structure of Bayesian networks from data is an NP-hard problem that involves optimization over a super-exponentially sized space. Still, in many real-life datasets a number of the arcs contained in the final structure correspond to strongly related pairs of variables and can be identified efficiently with information-theoretic metrics. In this work, we propose a meta-algorithm to accelerate any existing Bayesian network structure learning method. It adds an arc pre-screening step that narrows the structure learning task down to a subset of the original variables, thus reducing the overall problem size. We conduct extensive experiments on both public benchmarks and private industrial datasets, showing that this approach enables a significant decrease in computational time and graph complexity for little to no decrease in performance score.

Thibaud Rahier, Sylvain Marié, Florence Forbes
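The pre-screening step can be as simple as computing pairwise mutual information over the discrete variables and pre-attaching arcs for clearly dependent pairs. A toy sketch with an illustrative threshold (the paper's meta-algorithm is agnostic to the downstream structure learner):

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
a = rng.integers(0, 3, 5000)
b = (a + rng.integers(0, 2, 5000)) % 3     # strongly dependent on a
c = rng.integers(0, 3, 5000)               # independent noise
data = {"A": a, "B": b, "C": c}

# Flag high-MI pairs as near-certain arcs, shrinking the search space
# for the subsequent structure-learning step.
for x, y in combinations(data, 2):
    mi = mutual_info_score(data[x], data[y])
    print(f"MI({x},{y}) = {mi:.3f}", "-> pre-attach" if mi > 0.2 else "")
```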
Reinforcement Learning for Multi-Agent Stochastic Resource Collection

Stochastic Resource Collection (SRC) describes tasks where an agent tries to collect a maximal amount of dynamic resources while navigating through a road network. An instance of SRC is the traveling officer problem (TOP), where a parking officer tries to maximize the number of fined parking violations. In contrast to vehicular routing problems, in SRC tasks, resources might appear and disappear by an unknown stochastic process, and thus, the task is inherently more dynamic. In most applications of SRC, such as TOP, covering realistic scenarios requires more than one agent. However, directly applying multi-agent approaches to SRC yields challenges considering temporal abstractions and inter-agent coordination. In this paper, we propose a novel multi-agent reinforcement learning method for the task of Multi-Agent Stochastic Resource Collection (MASRC). To this end, we formalize MASRC as a Semi-Markov Game which allows the use of temporal abstraction and asynchronous actions by various agents. In addition, we propose a novel architecture trained with independent learning, which integrates the information about collaborating agents and allows us to take advantage of temporal abstractions. Our agents are evaluated on the multiple traveling officer problem, an instance of MASRC where multiple officers try to maximize the number of fined parking violations. Our simulation environment is based on real-world sensor data. Results demonstrate that our proposed agent can beat various state-of-the-art approaches.

Niklas Strauss, David Winkel, Max Berrendorf, Matthias Schubert
Detecting Anomalies with Autoencoders on Data Streams

Autoencoders have achieved impressive results in anomaly detection tasks by identifying anomalous data as instances that do not match their learned representation of normality. To this end, autoencoders are typically trained on large amounts of previously collected data before being deployed. However, in an online learning scenario, where a predictor has to operate on an evolving data stream and therefore continuously adapt to new instances, this approach is inadequate. Despite their success in offline anomaly detection, there has been little research leveraging autoencoders as anomaly detectors in such a setting. Therefore, in this work, we propose an approach for online anomaly detection with autoencoders and demonstrate its competitiveness against established online anomaly detection algorithms on multiple real-world datasets. We further address the issue of autoencoders gradually adapting to anomalies and thereby reducing their sensitivity to such data by introducing a simple modification to the models’ training approach. Our experimental results indicate that our solution achieves a larger gap between the losses on anomalous and normal instances than a conventional training procedure.

Lucas Cazzonelli, Cedric Kulbach
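One simple way to realize the described training modification is to damp or skip parameter updates on instances whose reconstruction loss is far above the running average, so the online autoencoder keeps tracking normality without absorbing anomalies. A minimal sketch on a synthetic stream; the threshold is illustrative, not the authors' exact rule:

```python
import torch
import torch.nn as nn

ae = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 20))
opt = torch.optim.SGD(ae.parameters(), lr=1e-2)
running = None                              # running mean of the loss

for x in torch.randn(1000, 20):             # one stream instance at a time
    loss = ((ae(x) - x) ** 2).mean()
    score = loss.item()                     # anomaly score = reconstruction error
    running = score if running is None else 0.99 * running + 0.01 * score

    # Skip updates on likely anomalies so the model does not slowly
    # adapt to them and lose sensitivity.
    if score < 3 * running:
        opt.zero_grad()
        loss.backward()
        opt.step()
```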
Factorized Structured Regression for Large-Scale Varying Coefficient Models

Recommender Systems (RS) pervade many aspects of our everyday digital life. Proposed to work at scale, state-of-the-art RS allow the modeling of thousands of interactions and facilitate highly individualized recommendations. Conceptually, many RS can be viewed as instances of statistical regression models that incorporate complex feature effects and potentially non-Gaussian outcomes. Such structured regression models, including time-aware varying coefficients models, are, however, limited in their applicability to categorical effects and inclusion of a large number of interactions. Here, we propose Factorized Structured Regression (FaStR) for scalable varying coefficient models. FaStR overcomes limitations of general regression models for large-scale data by combining structured additive regression and factorization approaches in a neural network-based model implementation. This fusion provides a scalable framework for the estimation of statistical models in previously infeasible data settings. Empirical results confirm that the estimation of varying coefficients of our approach is on par with state-of-the-art regression techniques, while scaling notably better and also being competitive with other time-aware RS in terms of prediction performance. We illustrate FaStR’s performance and interpretability on a large-scale behavioral study with smartphone user data.

David Rügamer, Andreas Bender, Simon Wiegrebe, Daniel Racek, Bernd Bischl, Christian L. Müller, Clemens Stachl
Improved Regret Bounds for Online Kernel Selection Under Bandit Feedback

In this paper, we improve the regret bound for online kernel selection under bandit feedback. The previous algorithm enjoys an $O((\Vert f\Vert^2_{\mathcal{H}_i}+1)K^{\frac{1}{3}}T^{\frac{2}{3}})$ expected bound for Lipschitz loss functions. We prove two types of regret bounds improving on this bound. For smooth loss functions, we propose an algorithm with an $O(U^{\frac{2}{3}}K^{-\frac{1}{3}}(\sum^K_{i=1}L_T(f^*_i))^{\frac{2}{3}})$ expected bound, where $L_T(f^*_i)$ is the cumulative loss of the optimal hypothesis in $\mathbb{H}_{i}=\{f\in\mathcal{H}_i:\Vert f\Vert_{\mathcal{H}_i}\le U\}$. This data-dependent bound keeps the previous worst-case bound and is smaller if most of the candidate kernels match well with the data. For Lipschitz loss functions, we propose an algorithm with an $O(U\sqrt{KT}\ln^{\frac{2}{3}}{T})$ expected bound, asymptotically improving the previous bound. We apply the two algorithms to online kernel selection with a time constraint and prove new regret bounds matching or improving the previous $O(\sqrt{T\ln{K}}+\Vert f\Vert^2_{\mathcal{H}_i}\max\{\sqrt{T},\frac{T}{\sqrt{\mathcal{R}}}\})$ expected bound, where $\mathcal{R}$ is the time budget. Finally, we empirically verify our algorithms on online regression and classification tasks.

Junfan Li, Shizhong Liao
Team-Imitate-Synchronize for Solving Dec-POMDPs

Multi-agent collaboration under partial observability is a difficult task. Multi-agent reinforcement learning (MARL) algorithms that do not leverage a model of the environment struggle with tasks that require sequences of collaborative actions, while Dec-POMDP algorithms that use such models to compute near-optimal policies scale poorly. In this paper, we suggest the Team-Imitate-Synchronize (TIS) approach, a heuristic, model-based method for solving such problems. Our approach begins by solving the joint team problem, assuming that observations are shared. Then, for each agent we solve a single-agent problem designed to imitate its behavior within the team plan. Finally, we adjust the single-agent policies for better synchronization. Our experiments demonstrate that our method provides solutions comparable to those of Dec-POMDP solvers on small problems while scaling to much larger problems, and provides collaborative plans that MARL algorithms are unable to identify.

Eliran Abdoo, Ronen I. Brafman, Guy Shani, Nitsan Soffair
Deep Active Learning for Detection of Mercury’s Bow Shock and Magnetopause Crossings

Accurate and timely detection of bow shock and magnetopause crossings is essential for understanding the dynamics of a planet's magnetosphere. For Mercury, due to the variable nature of its magnetosphere, this remains a challenging task. Existing approaches based on geometric equations only provide average boundary shapes and can be hard to generalize to environments with variable conditions. On the other hand, data-driven methods require large amounts of annotated data to account for variations, which can scale up costs quickly. We propose to solve this problem with machine learning. To this end, we introduce a suitable dataset, prepared by processing raw measurements from NASA's MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) mission, and design a five-class supervised learning problem. We perform an architectural search to find a suitable model and report that our best model, a Convolutional Recurrent Neural Network (CRNN), achieves a macro F1 score of 0.82 with accuracies of approximately 80% and 88% on bow shock and magnetopause crossings, respectively. Further, we introduce an active learning approach that includes only the most informative orbits from the MESSENGER dataset, as measured by Shannon entropy. We observe that with this technique the model obtains near-maximal information gain by training on just two Mercury years' worth of data, about 10% of the entire dataset. This has the potential to significantly reduce the need for manual labeling. This work sets the ground for future machine learning endeavors in this direction and may be highly relevant to future missions such as BepiColombo, which is expected to enter orbit around Mercury in December 2025.

Sahib Julka, Nikolas Kirschstein, Michael Granitzer, Alexander Lavrukhin, Ute Amerstorfer
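The entropy-based selection criterion ranks unlabeled orbits by the average uncertainty of the model's class posteriors and queries the most uncertain ones first. A toy sketch in which random posteriors stand in for real CRNN outputs:

```python
import numpy as np

def mean_shannon_entropy(probs, eps=1e-12):
    # Average per-sample entropy of the 5-class posteriors of one orbit.
    return float(-(probs * np.log(probs + eps)).sum(axis=1).mean())

rng = np.random.default_rng(1)
orbit_probs = [rng.dirichlet(np.ones(5), size=200) for _ in range(50)]

scores = [mean_shannon_entropy(p) for p in orbit_probs]
query = np.argsort(scores)[::-1][:5]        # most informative orbits first
print("orbits to label next:", query)
```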
5. The Coordination Problem in Regional Integration

In Africa, a vicious circle, or coordination failure, has emerged between the slow creation of well-integrated regional markets and low economic diversification (as well as limited sophistication and specialization). Ubiquitous trade barriers lead to a paradoxical tariff pattern in which African neighbors are treated worse than distant trading partners. Given widespread irregularities and high trading costs, "trade facilitation" has become an important technical approach to easing trade, supported by donor organizations. This chapter examines the systemic potential and the limits of trade facilitation programs. As a general alternative to institution-heavy, imperfect integration along the well-trodden linear path, parts of the economic literature propose "light" integration. The chapter examines to what extent such lightweight integration can avoid the pitfalls of the classical approach. Dynamic effects are cited as the last line of defense for the classical model of economic unions, but they remain contested, as trade research has been unable to demonstrate them empirically in South-South economic communities. The chapter concludes with the question of what kind of new economic policy is required to realize such dynamic effects effectively. The answer is given in Part II.

Helmut Asche
CGPM: Poverty Mapping Framework Based on Multi-Modal Geographic Knowledge Integration and Macroscopic Social Network Mining

Having high-precision, high-resolution poverty maps is a prerequisite for monitoring the United Nations Sustainable Development Goals (SDGs) and for designing development strategies with effective poverty reduction policies. Recent deep-learning studies have demonstrated the effectiveness of geographically fine-grained data composed of satellite images, geolocated article texts, and OpenStreetMap data in poverty mapping. Unfortunately, no presented method considers the multimodality of the data composition or the underlying macroscopic social network among the investigated clusters in socio-geographic space. To alleviate these problems, we propose CGPM, a novel end-to-end socioeconomic indicator mapping framework featuring cross-modality knowledge integration of multi-modal features and the generation of a macroscopic social network. Furthermore, considering the deficiency of labeled clusters for model training, we propose a weakly supervised specialized framework, CGPM-WS, to overcome this challenge. Extensive experiments on public multimodal socio-geographic data demonstrate that CGPM and CGPM-WS significantly outperform the baselines in semi-supervised and weakly supervised poverty mapping tasks, respectively.

Zhao Geng, Gao Ziqing, Tsai Chihsu, Lu Jiamin
SAViR-T: Spatially Attentive Visual Reasoning with Transformers

We present a novel computational model, SAViR-T, for the family of visual reasoning problems embodied in Raven's Progressive Matrices (RPM). Our model considers explicit spatial semantics of visual elements within each image in the puzzle, encoded as spatio-visual tokens, and learns the intra-image as well as inter-image token dependencies that are highly relevant to the visual reasoning task. Token-wise relationships, modeled through the transformer-based SAViR-T architecture, extract group-driven (row or column) representations by leveraging group-rule coherence, and use these as the inductive bias to extract the underlying rule representations from the top two rows (or columns) per token in the RPM. We use these relation representations to locate the correct choice image that completes the last row or column of the RPM. Extensive experiments across synthetic RPM benchmarks, including RAVEN, I-RAVEN, RAVEN-FAIR, and PGM, and the natural image-based “V-PROM” demonstrate that SAViR-T sets a new state of the art for visual reasoning, exceeding prior models' performance by a considerable margin.

Pritish Sahu, Kalliopi Basioti, Vladimir Pavlovic
SkipCas: Information Diffusion Prediction Model Based on Skip-Gram

The development of social network platforms such as Twitter and Weibo has accelerated the generation and transmission of information. Predicting the growth size of an information cascade is widely useful in fields such as rumor spread prevention, viral marketing, and recommendation systems. However, most existing methods either cannot fully capture the structural representation of the cascade graph or cannot effectively utilize the dynamic changes of information diffusion, which often leads to poor prediction results. Therefore, in this paper, we propose a novel deep learning model called SkipCas to predict the growth size of an information cascade. First, we use the diffusion path and time effect at each diffusion time in the cascade graph to obtain the dynamic process of information diffusion. Second, we feed the sequences obtained by biased random-walk sampling into the skip-gram model to obtain the structural representation of the cascade graph. Finally, we combine the dynamic diffusion process and the structural representation to predict the growth size of the information cascade. Extensive experiments on two real datasets show that our model SkipCas significantly improves prediction accuracy compared with state-of-the-art models.

Dedong Ren, Yong Liu
Service Is Good, Very Good or Excellent? Towards Aspect Based Sentiment Intensity Analysis

Aspect-based sentiment analysis (ABSA) is a fast-growing research area in natural language processing (NLP) that provides more fine-grained information, considering the aspect as the fundamental item. ABSA primarily measures sentiment towards a given aspect but does not quantify the intensity of that sentiment. For example, the intensity of positive sentiment expressed towards service in “service is good” is comparatively weaker than in “service is excellent”. Aspect sentiment intensity will thus assist stakeholders in mining user preferences more precisely. Our current work introduces a novel task called aspect-based sentiment intensity analysis (ABSIA) that facilitates research in this direction. An annotated review corpus for ABSIA is introduced by labelling the benchmark SemEval ABSA restaurant dataset with seven (7) classes in a semi-supervised way. To demonstrate the effective usage of the corpus, we cast ABSIA as a natural language generation task, where a natural sentence is generated to represent the output in order to utilize pre-trained language models effectively. Further, we propose an effective technique for joint learning, where ABSA is used as a secondary task to assist the primary task, ABSIA. An improvement of 2 points is observed over the single-task intensity model. To explain the actual decision process of the proposed framework, a model explainability technique is employed that extracts the important opinion terms responsible for generation. (Source code and the dataset are available at https://www.iitp.ac.in/~ai-nlp-ml/resources.html#ABSIA and https://github.com/20118/ABSIA .)

Mamta, Asif Ekbal
Visconde: Multi-document QA with GPT-3 and Neural Reranking

This paper proposes a question-answering system that can answer questions whose supporting evidence is spread over multiple (potentially long) documents. The system, called Visconde, uses a three-step pipeline to perform the task: decompose, retrieve, and aggregate. The first step decomposes the question into simpler questions using a few-shot large language model (LLM). Then, a state-of-the-art search engine is used to retrieve candidate passages from a large collection for each decomposed question. In the final step, we use the LLM in a few-shot setting to aggregate the contents of the passages into the final answer. The system is evaluated on three datasets: IIRC, Qasper, and StrategyQA. Results suggest that current retrievers are the main bottleneck and that readers are already performing at the human level as long as relevant passages are provided. The system is also shown to be more effective when the model is induced to give explanations before answering a question. Code is available at https://github.com/neuralmind-ai/visconde .

Jayr Pereira, Robson Fidalgo, Roberto Lotufo, Rodrigo Nogueira

Open Access

Fragmented Visual Attention in Web Browsing: Weibull Analysis of Item Visit Times

Users often browse the web in an exploratory way, inspecting what they find interesting without a specific goal. However, the temporal dynamics of visual attention during such sessions, emerging when users gaze from one item to another, are not well understood. In this paper, we examine how people distribute visual attention among content items when browsing news. Distribution of visual attention is studied in a controlled experiment, wherein eye-tracking data and web logs are collected for 18 participants exploring newsfeeds in a single- and multi-column layout. Behavior is modeled using Weibull analysis of item (article) visit times, which describes these visits via quantities like durations and frequencies of switching focused item. Bayesian inference is used to quantify uncertainty. The results suggest that visual attention in browsing is fragmented, and affected by the number, properties and composition of the items visible on the viewport. We connect these findings to previous work explaining information-seeking behavior through cost-benefit judgments.

Aini Putkonen, Aurélien Nioche, Markku Laine, Crista Kuuramo, Antti Oulasvirta
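Weibull analysis of visit times reduces, in its simplest frequentist form, to fitting a two-parameter Weibull distribution to dwell durations; a shape parameter below 1 indicates bursty, fragmented visits. A minimal sketch with SciPy on toy durations (the paper itself uses Bayesian inference to quantify uncertainty, which is not shown here):

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical per-item visit durations in seconds.
visits = np.array([0.4, 1.2, 0.8, 3.5, 0.3, 2.1, 0.9, 5.2, 1.7, 0.6])

# Two-parameter Weibull (location fixed at 0) via maximum likelihood.
k, loc, scale = weibull_min.fit(visits, floc=0)
print(f"shape k = {k:.2f} ({'fragmented' if k < 1 else 'sustained'}), "
      f"scale = {scale:.2f}")
```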
Inferring Tie Strength in Temporal Networks

Inferring tie strengths in social networks is an essential task in social network analysis. Common approaches classify the ties as weak and strong ties based on the strong triadic closure (STC). The STC states that if for three nodes, A, B, and C, there are strong ties between A and B, as well as A and C, there has to be a (weak or strong) tie between B and C. So far, most works discuss the STC in static networks. However, modern large-scale social networks are usually highly dynamic, providing user contacts and communications as streams of edge updates. Temporal networks capture these dynamics. To apply the STC to temporal networks, we first generalize the STC and introduce a weighted version such that empirical a priori knowledge given in the form of edge weights is respected by the STC. The weighted STC is hard to compute, and our main contribution is an efficient 2-approximative streaming algorithm for the weighted STC in temporal networks. As a technical contribution, we introduce a fully dynamic 2-approximation for the minimum weight vertex cover problem, which is a crucial component of our streaming algorithm. Our evaluation shows that the weighted STC leads to solutions that capture the a priori knowledge given by the edge weights better than the non-weighted STC. Moreover, we show that our streaming algorithm efficiently approximates the weighted STC in large-scale social networks.

Lutz Oettershagen, Athanasios L. Konstantinidis, Giuseppe F. Italiano
SECLEDS: Sequence Clustering in Evolving Data Streams via Multiple Medoids and Medoid Voting

Sequence clustering in a streaming environment is challenging because it is computationally expensive, and the sequences may evolve over time. K-medoids or Partitioning Around Medoids (PAM) is commonly used to cluster sequences since it supports alignment-based distances, and the k-centers being actual data items helps with cluster interpretability. However, offline k-medoids has no support for concept drift, while also being prohibitively expensive for clustering data streams. We therefore propose SECLEDS, a streaming variant of the k-medoids algorithm with constant memory footprint. SECLEDS has two unique properties: i) it uses multiple medoids per cluster, producing stable high-quality clusters, and ii) it handles concept drift using an intuitive Medoid Voting scheme for approximating cluster distances. Unlike existing adaptive algorithms that create new clusters for new concepts, SECLEDS follows a fundamentally different approach, where the clusters themselves evolve with an evolving stream. Using real and synthetic datasets, we empirically demonstrate that SECLEDS produces high-quality clusters regardless of drift, stream size, data dimensionality, and number of clusters. We compare against three popular stream and batch clustering algorithms. The state-of-the-art BanditPAM is used as an offline benchmark. SECLEDS achieves comparable F1 score to BanditPAM while reducing the number of required distance computations by 83.7%. Importantly, SECLEDS outperforms all baselines by 138.7% when the stream contains drift. We also cluster real network traffic, and provide evidence that SECLEDS can support network bandwidths of up to 1.08 Gbps while using the (expensive) dynamic time warping distance.

Azqa Nadeem, Sicco Verwer
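The two distinctive ingredients, multiple medoids per cluster and medoid voting, can be caricatured in a few lines: a point is assigned to the cluster of its nearest medoid, winning medoids accumulate votes, and poorly voted medoids are periodically replaced with recent points so clusters follow drift. This sketch is a deliberate simplification of the idea, not SECLEDS's actual update rule (which, among other things, supports alignment-based distances):

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 3, 4                                    # k clusters, m medoids each
medoids = rng.standard_normal((k, m, 2))       # seeded from early stream items
votes = np.zeros((k, m))

for x in rng.standard_normal((500, 2)) + np.array([2.0, 0.0]):  # drifted stream
    dists = np.linalg.norm(medoids - x, axis=2)   # (k, m) point-to-medoid
    c = int(dists.min(axis=1).argmin())           # nearest cluster
    votes[c, dists[c].argmin()] += 1              # vote for the winning medoid
    if votes[c].sum() >= 50:
        # Replace this cluster's least-voted medoid with a recent point.
        medoids[c, votes[c].argmin()] = x
        votes[c] = 0
```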
ARES: Locally Adaptive Reconstruction-Based Anomaly Scoring

How can we detect anomalies: that is, samples that significantly differ from a given set of high-dimensional data, such as images or sensor data? This is a practical problem with numerous applications and is also relevant to the goal of making learning algorithms more robust to unexpected inputs. Autoencoders are a popular approach, partly due to their simplicity and their ability to perform dimension reduction. However, the anomaly scoring function is not adaptive to the natural variation in reconstruction error across the range of normal samples, which hinders their ability to detect real anomalies. In this paper, we empirically demonstrate the importance of local adaptivity for anomaly scoring in experiments with real data. We then propose our novel Adaptive Reconstruction Error-based Scoring approach, which adapts its scoring based on the local behaviour of reconstruction error over the latent space. We show that this improves anomaly detection performance over relevant baselines in a wide variety of benchmark datasets.

Adam Goodge, Bryan Hooi, See Kiong Ng, Wee Siong Ng
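Local adaptivity can be illustrated by normalizing each test sample's reconstruction error against the errors observed in its latent neighbourhood, so regions that are intrinsically hard to reconstruct are not over-flagged. The sketch below uses a z-score over k-nearest-neighbour errors; it conveys the idea rather than ARES's exact scoring function:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
z_train = rng.standard_normal((1000, 16))     # latent codes of normal samples
err_train = rng.gamma(2.0, 0.1, 1000)         # their reconstruction errors

index = NearestNeighbors(n_neighbors=20).fit(z_train)

def adaptive_score(z_test, err_test):
    # Compare each test error to the typical error of its latent neighbourhood.
    _, idx = index.kneighbors(z_test)
    local = err_train[idx]                    # (n_test, 20) neighbour errors
    return (err_test - local.mean(axis=1)) / (local.std(axis=1) + 1e-8)

print(adaptive_score(rng.standard_normal((5, 16)), rng.gamma(2.0, 0.1, 5)))
```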
A Study of Term-Topic Embeddings for Ranking

Contextualized representations from transformer models have significantly improved the performance of neural ranking models. Late interactions popularized by ColBERT and recently compressed with clustering in ColBERTv2 deliver state-of-the-art quality on many benchmarks. ColBERTv2 uses centroids along with occurrence-specific delta vectors to approximate contextualized embeddings without reducing ranking effectiveness. Analysis of this work suggests that these centroids are “term-topic embeddings”. We examine whether term-topic embeddings can be created in a differentiable end-to-end way, finding that this is a viable strategy for removing the separate clustering step. We investigate the importance of local context for contextualizing these term-topic embeddings, analogous to refining centroids with delta vectors. We find this end-to-end approach is sufficient for matching the effectiveness of the original contextualized embeddings.

Lila Boualili, Andrew Yates
Multi-source Inductive Knowledge Graph Transfer

Large-scale information systems, such as knowledge graphs (KGs) and enterprise system networks, often exhibit dynamic and complex activities. Recent research has shown that formalizing these information systems as graphs can effectively characterize the entities (nodes) and their relationships (edges). Transferring knowledge from existing well-curated source graphs can help construct the target graph of newly deployed systems faster and better, which will no doubt benefit downstream tasks such as link prediction and anomaly detection for new systems. However, current graph transfer methods are either based on a single source, and thus do not sufficiently consider multiple available sources, or do not learn selectively from these sources. In this paper, we propose MSGT-GNN, a graph knowledge transfer model for efficient graph link prediction from multiple source graphs. MSGT-GNN consists of two components: the Intra-Graph Encoder, which embeds latent graph features of system entities into vectors, and the Graph Transferor, which utilizes a graph attention mechanism to learn and optimize the embeddings of corresponding entities from multiple source graphs, at both node level and graph level. Experimental results on multiple real-world datasets from various domains show that MSGT-GNN outperforms other baseline approaches in link prediction and demonstrate the merit of attentive graph knowledge transfer and the effectiveness of MSGT-GNN.

Junheng Hao, Lu-An Tang, Yizhou Sun, Zhengzhang Chen, Haifeng Chen, Junghwan Rhee, Zhichuan Li, Wei Wang
Keyword Embeddings for Query Suggestion

Nowadays, search engine users commonly rely on query suggestions to improve their initial inputs. Current systems are very good at recommending lexical adaptations or spelling corrections to users’ queries. However, they often struggle to suggest semantically related keywords given a user’s query. The construction of a detailed query is crucial in some tasks, such as legal retrieval or academic search. In these scenarios, keyword suggestion methods are critical to guide the user during the query formulation. This paper proposes two novel models for the keyword suggestion task trained on scientific literature. Our techniques adapt the architecture of Word2Vec and FastText to generate keyword embeddings by leveraging documents’ keyword co-occurrence. Along with these models, we also present a specially tailored negative sampling approach that exploits how keywords appear in academic publications. We devise a ranking-based evaluation methodology following both known-item and ad-hoc search scenarios. Finally, we evaluate our proposals against the state-of-the-art word and sentence embedding models showing considerable improvements over the baselines for the tasks.
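
The co-occurrence idea can be sketched by treating each document's keyword list as one training "sentence", with multi-word keywords kept as single tokens, and training a skip-gram model on those lists. This is a hedged sketch under assumed data and parameters, not the authors' exact setup, which also includes the specially tailored negative sampler described above.

```python
from gensim.models import Word2Vec

# Toy stand-in corpus: each entry is the author-assigned keyword list of one paper.
keyword_lists = [
    ["information retrieval", "query suggestion", "word embeddings"],
    ["query suggestion", "academic search", "negative sampling"],
    ["word embeddings", "fasttext", "information retrieval"],
]

# A wide window lets every keyword of a document co-occur with every other,
# which matches the unordered nature of keyword lists.
model = Word2Vec(keyword_lists, vector_size=64, window=20, min_count=1, sg=1, epochs=50)

print(model.wv.most_similar("query suggestion", topn=2))
```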

Jorge Gabín, M. Eduardo Ares, Javier Parapar
Domotics for the Independence and Participation in Daily Life of People with Severe Disabilities

Domotics improves the quality of life of people with disabilities. Maintaining independence and participation in daily life is the main goal of inclusive projects, starting with rehabilitative interventions. Complex technological systems tailored to the conditions and lifestyle of individuals with disabilities can be implemented with open-source technology. This case report discusses the design features and results of domotic projects aimed at promoting independence and participation in daily life for people with severe disabilities. Crucial factors for the success of the projects were: customization of the system according to the needs of end users and its adaptation as the disease progressed; the relevance of social connections to the individual's use of the technology; and the potential interference of unpredictable events that could compromise the user's technological experience. This field of technological application is full of opportunities but still needs proof of effectiveness. Individualized projects require considerable technical attention and human care, even in the later stages of implementation. The numerous critical issues that could undermine the success of systems of this type must be addressed by appropriately considering individual cases, the use of resources, and the technological intensity of the projects.

Edda Capodaglio, Alessandro Panighi, Monica Panigazzi
Comfort for the Health of Premature Patients

To receive the best care, a newborn hospitalized in a health facility must also be able to count on the overall comfort of the rooms. As is now standard practice in the workplace, the assessment of the causes of discomfort must consider several aspects simultaneously. The control of noise, brightness, microclimatic parameters, and air quality all contribute, alongside health treatments, to the well-being of the subject. The environment helps strengthen the body's response to treatment and improves the endurance of pain. This is even more evident in premature subjects or subjects affected by pathologies in the first months of life. The health care worker must therefore attend to environmental aspects, or be assisted by the technical services of the health facility, to achieve the best patient comfort. The quality of the environment also affects the psychophysical state of health workers, who are called on to guarantee high standards of care and for whom safe working conditions must be guaranteed. The managers of the UOC of Neonatology and Neonatal Intensive Care, S. Giuseppe Moscati Hospital, Avellino, have decided to improve the care and work environment to avoid negative effects on the health of newborns and to improve the psychophysical condition of workers, preventing them from suffering injuries or making mistakes during assistance.

M. del Gaudio, A. Lama, C. Vedetta, S. Moschella
Solar Roof Panel Extraction from UAV Photogrammetric Point Cloud

Many buildings use solar panels as an additional source of electricity, since solar energy is renewable and the maintenance cost of solar panels is low. This research uses a statistical approach to analyzing point clouds generated by UAV-based photogrammetric processing. An algorithm has been developed to extract solar panels on building rooftops. Data acquisition is done using an Unmanned Aerial Vehicle (UAV) platform mounted with an optical sensor. The acquired RGB images are used to generate a photogrammetric point cloud dataset. The Geomatics Engineering building of the Indian Institute of Technology Roorkee, India, on whose roof solar panels were already installed, is taken as the study area. Normal vectors, with components along the x-, y-, and z-axes, are computed for each point in the building point cloud. Based on the contribution of the z-component of the normal vectors, the points are classified into roof, facade, and solar panel points. The results are evaluated by comparing the classified points with manually classified solar panel points. This comparison suggests that the developed algorithm extracts solar roof panels effectively and efficiently. This research can be used to calculate the effective area of solar panels.
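
The classification step amounts to thresholding the z-component of each point's estimated unit normal. A minimal sketch with Open3D follows; the file name and the 0.3/0.9 thresholds are illustrative assumptions, since the paper derives its own boundaries between roof, facade, and tilted panel points.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("building.ply")  # hypothetical photogrammetric cloud
pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))

nz = np.abs(np.asarray(pcd.normals)[:, 2])  # |z-component| of each unit normal

facade = nz < 0.3            # near-vertical surfaces: normals almost horizontal
roof = nz > 0.9              # near-horizontal surfaces: normals almost vertical
panel = ~facade & ~roof      # tilted surfaces, e.g. inclined solar panels
print(facade.sum(), roof.sum(), panel.sum())
```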

S. K. P. Kushwaha, Harshit, Kamal Jain
Influence of European UAS Regulations on Image Acquisition for 3D Building Modeling

The dynamic development of 3D building reconstruction using digital images obtained with unmanned aerial systems (UAS) has been observed in recent years. The popularity of UAS is due to their wide technological availability at a low price compared to geodetic measurement equipment, laser scanners, or manned flight missions. In practice, the use of UAS for 3D building reconstruction and modelling accelerates the production process (image acquisition, processing, computation) while maintaining a high quality of the final product. With the increasing number of new flying objects in airspace, and because of differences in UAS regulation between EU countries, it became necessary to adapt the rules for the operation of unmanned aircraft to standardize regulations, make operations easier, and assure aviation safety. For this reason, on 31 December 2020, the new European Union (EU) Commission Implementing Regulation 2019/947 on the rules and procedures for the operation of unmanned aircraft entered into force across the continent. The new regulations replaced each EU nation's existing laws and apply to all UAS pilots. They adopt a risk-based approach and, unlike previous regulations, do not distinguish between leisure and commercial activities. To assess the operational risk and determine the category of a flight mission, the weight and specifications of the UAS, the operation, and the UAS pilot's qualifications are taken into account; on this basis, new categories of operations have been established. In this study, a review of 3D reconstruction using UAS was performed and the new EU UAS regulations were studied in the context of image acquisition of buildings at different levels of detail (LoD). For this purpose, practical 3D reconstructions of buildings were analyzed. Furthermore, taking Poland as an example, the new unified EU rules were compared with the previous ones.

Grzegorz Gabara
Automatic Ship Detection Using CFAR Algorithm for Quad-Pol UAV-SAR Imagery

Remote sensing data, whether airborne or satellite, are very useful inputs to Geographical Information System (GIS) technology. For monitoring maritime activity, SAR sensors have an advantage over optical sensors: they penetrate clouds and work in any weather, day and night, whereas optical sensors need a source to illuminate the surface and hence work only in the daytime. Many studies have applied UAV SAR sensors to different applications such as oil spill and ship detection. Moreover, polarimetric techniques help characterize features in much more detail, using phase information to infer the orientation and shape of objects from their scattering behavior. The main focus of this paper is automatic ship detection using the adaptive threshold algorithm popularly known as the Constant False Alarm Rate (CFAR) detector on polarimetric UAV SAR data. The coherency matrix $T_3$ is computed from the quad-pol covariance SAR data $C_3$, and the CFAR algorithm is applied to each element of the coherency matrix to detect ships. The sea surface follows surface scattering, which is highly helpful in distinguishing ships from the sea background. Moreover, due to the homogeneous background of the imagery, the CFAR algorithm works more precisely, as it can compute an adaptive threshold for each pixel from the background area by assuming it to be Gaussian in nature. The Global Self-consistent, Hierarchical, High-resolution Geography Database (GSHHG) vector coastline layer and a Digital Elevation Model (DEM) are used to mask out the land area and enhance the area of interest. In this study, the $T_{22}$ element of the coherency matrix shows the best results in detecting ships and determining their shape. Finally, the efficiency of the algorithm is measured using the Receiver Operating Characteristics (ROC) curve.
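
As an illustration of the adaptive threshold, a textbook cell-averaging CFAR estimates the clutter level for each pixel from a training ring around a guard window. The sketch below is that simplification, with window sizes and the exponential-clutter threshold factor as assumptions, not the exact detector applied to the $T_{22}$ channel.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(intensity, guard=2, train=8, pfa=1e-4):
    """Cell-averaging CFAR on a 2D intensity image (e.g. one coherency channel)."""
    img = np.asarray(intensity, dtype=float)
    k_out = 2 * (guard + train) + 1
    k_in = 2 * guard + 1
    # Local sums via mean filters: sum = mean * window area.
    sum_out = uniform_filter(img, k_out, mode="reflect") * k_out**2
    sum_in = uniform_filter(img, k_in, mode="reflect") * k_in**2
    n_train = k_out**2 - k_in**2
    clutter = (sum_out - sum_in) / n_train        # per-pixel background estimate
    # Threshold factor for the desired false-alarm rate (exponential clutter model).
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    return img > alpha * clutter                   # boolean ship mask
```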

Harshal Mittal, Ashish Joshi
Effects of Flight Plan Parameters on the Quality and Usability of Low-Cost UAS Photogrammetry Data Products for Tree Crown Delineation

The continued understanding of the influence of flight planning characteristics on data quality is crucial to minimizing costs and maximizing the output potential of Uncrewed Aerial Systems (UAS) for forestry applications. This study was conducted to ascertain the effects of various combinations of flying height and percentage overlap on the quality of photogrammetry data products generated from images acquired by a low-cost UAS (Mavic 2 Pro), with emphasis on tree crown delineation in a Mangium plantation forest in the Philippines. The quality of the products is evaluated based on their completeness and the accuracy of tree crown delineations. Results suggest that percentage completeness increases as the flying height and percentage overlap increase. More than 90% completeness was achieved at 90% overlap regardless of the flying height. Tree crown delineations using multiresolution segmentation of Digital Surface Models (DSMs) generated from images acquired at a flying height of 120 m with overlaps of 80% and 90% achieved the highest overall accuracy of 43.35%. This study showed that a minimum of 80% overlap should be targeted when acquiring images to ensure higher completeness of the data products, and that flying at 120 m above ground with at least 80% overlap can provide more accurate tree crown delineations.

Jojene R. Santillan, Jun Love E. Gesta, Marcia Coleen N. Marcial
High-Speed Wi-Fi Systems for Long Range FANETS: Real Problems, Experiments, and Lessons Learnt

With the combined use of geospatial and UAV technology, much clearer and more precise surface features of the area under consideration can be extracted. Usually, a good-quality camera with limited memory is used for this. Although battery technology has advanced, the time required to extract data, analyze it, and schedule another flight is still a challenge. With advances in Wi-Fi chips, which are light, reliable, and reasonably cheap, one can set up a system for a swarm of UAVs to collect geospatial imagery and simultaneously send that data to the ground for real-time analysis. This not only saves data-gathering time but also opens new opportunities for research. As fully integrated systems are costly, this technology can also serve smaller missions and tinkered UAV projects. This paper discusses at length the experiments performed on high-speed data transfer rates, the problems faced during the design of such systems, and the lessons learned for further research. Flying Ad-Hoc NETworkS (FANETS) are being widely studied, but much of the discussion is limited to radio connectivity that depends on heavy equipment carried by large UAVs. The scope of this study is limited to more affordable, medium-to-small UAVs that are widely used in geospatial technology because they are agile and small. The paper also gives a brief account of the probabilistic aspects of regular practice that lead to a successful connection, and briefly discusses how hopping can help when there is a large number of nodes, along with further scopes of research.

Utkarsh Ahuja
Assessment of Human Activity Classification Algorithms for IoT Devices

Human activity classification is assuming great relevance in many fields, including the well-being of the elderly. Many methodologies to improve the prediction of human activities, such as falls or unexpected behaviors, have been proposed over the years, exploiting different technologies, but the complexity of the algorithms requires processors with high computational capabilities. In this paper, different deep learning techniques are compared to evaluate the best compromise between recognition performance and computational effort, with the aim of defining a solution that can be executed by an IoT device with a limited computational load. The comparison is developed on a dataset containing different types of human walking activities obtained from an automotive radar. The procedure requires pre-processing of the raw data and then feature extraction from range-Doppler maps. To obtain reliable results, different deep learning architectures and different optimizers are compared, showing that an accuracy of more than 97% is achieved with an appropriate selection of the network parameters.

Gianluca Ciattaglia, Linda Senigagliesi, Ennio Gambi
A Preliminary Prototype of Smart Healthcare Modular System for Cardiovascular Diseases Remote Monitoring

According to the World Health Organization (WHO), about 17.9 million people per year die from cardiovascular diseases (CVDs), representing more than 32% of all deaths recorded worldwide. Moreover, in the European Society of Cardiology (ESC) member countries, CVDs remain the most common cause of death. The worldwide pandemic has further underlined the need for remote health monitoring systems that allow clinicians to supervise patients remotely, analysing and recording a selected set of parameters with high accuracy and promptly identifying significant events. In this scenario, this work proposes a reconfigurable preliminary prototype of a Smart Healthcare Modular System (SHeMS), which combines modularity and real-time signal analysis, leveraging sensor fusion and Internet of Things technologies. The architecture of the system and the selected main components are presented, together with the rationale for the development of the whole system. The current stage of the project has focused particularly on accurate measurement of the electrocardiographic (ECG) signal, considered the most critical vital parameter for remote monitoring. The ongoing activity is also devoted to implementing other significant modules, such as SpO2, blood pressure, temperature, and vocal messages, to enhance monitoring capabilities and increase the detection accuracy of critical events.

Valentina Di Pinto, Federico Tramarin, Luigi Rovati
Exploiting Blood Volume Pulse and Skin Conductance for Driver Drowsiness Detection

Attention loss caused by driver drowsiness is a major risk factor for car accidents. A large number of studies have been conducted to reduce the risk of car crashes, especially by evaluating driver behavior associated with the drowsiness state. However, a minimally invasive and comfortable system to quickly recognize the physiological state and alert the driver is still missing. This study describes an approach based on Machine Learning (ML) to detect driver drowsiness through an Internet of Things (IoT) enabled wrist-worn device, by analyzing Blood Volume Pulse (BVP) and Skin Conductance (SC) signals. Different ML algorithms are tested on signals collected from 9 subjects to classify the drowsiness status, considering different data segmentation options. Results show that using different window lengths for data segmentation does not influence ML performance.

Angelica Poli, Andrea Amidei, Simone Benatti, Grazia Iadarola, Federico Tramarin, Luigi Rovati, Paolo Pavan, Susanna Spinsante
Chapter 7. Prediction of Lateral Surface Settlement Caused by Shield Tunneling of Adjacent Buildings

The soil displacement and structural deformation caused by shield tunneling have been a common concern in urban subway construction in China; the Peck formula remains the most widely used tool for predicting the lateral deformation of the soil mass.
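
For reference, the Peck formula models the transverse settlement trough as a Gaussian curve; a standard statement of it (common notation from the settlement literature, not quoted from this chapter) is

$$
S(x) = S_{\max}\exp\left(-\frac{x^{2}}{2i^{2}}\right), \qquad S_{\max} = \frac{V_{s}}{\sqrt{2\pi}\,i},
$$

where $x$ is the horizontal distance from the tunnel centerline, $i$ is the distance from the centerline to the inflection point of the trough, $V_{s}$ is the ground-loss volume per unit tunnel length, and $S_{\max}$ is the maximum settlement above the tunnel axis.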

Zhi Ding, Xinjiang Wei, Yong Wu
Chapter 6. Study on the Influence and Control Standard of Double Line Shield Tunneling on Adjacent Buildings

In recent years, China's subway construction has continued to develop rapidly.

Zhi Ding, Xinjiang Wei, Yong Wu
Chapter 2. Automatic Extraction of Surface Water Bodies from High-Resolution Multispectral Remote Sensing Imagery Using GIS and Deep Learning Techniques in Dubai

Acquiring vector feature layers such as surface water bodies from high-resolution remote sensing (HRRS) imagery has gained growing scientific interest worldwide. Several strategies, technologies, techniques, and methods have been designed and developed to delineate surface water bodies from remote sensing imagery of varying spectral, spatial, and temporal characteristics. This research puts forward an intuitive method for extracting surface water bodies from multispectral high-resolution drone and satellite imagery using an integrated deep learning method for GIS modeling in the Dubai Emirate. First, training data were extracted. Then, an advanced object detection model based on deep learning is introduced. Next, the implementation of this model in several areas across the Dubai Emirate is comprehensively evaluated, including detection, recognition, classification, counting, and quality assessment. Finally, recommendations and limitations are summarized. Evaluation tests comparing the produced outputs to reference data suggest that the multispectral high-resolution images yield higher accuracy for surface water extraction than traditional approaches such as machine learning (supervised classification) and photo interpretation. Overall, the average accuracy reached 98% in urban areas and 99% in rural areas. This novel method offers great opportunities for the Department of Geographic Information Systems Centre (DGISC) at Dubai Municipality, under multiple land-use scenarios, to reduce the extremely heavy and expensive human labor spent editing the records through field surveys, photo interpretation, or other manual techniques.

Lala El Hoummaidi, Abdelkader Larabi
Chapter 7. Implementation of Urban Organic Waste Collection and Treatment System in a Brazilian Municipality: An Analysis Based on a Socio-technical Transition Theory

The circular economy concept has been gaining ground and changing assumptions regarding the linear view of waste. In Brazil, Law N. 12.305/2010, which established the Solid Waste National Policy (SWNP), stipulates that Federal States and Municipalities must draw up waste management plans so that the recyclable and compostable fraction of urban waste is valued. However, the implementation of an organic waste collection and treatment system that recovers the value of organic waste in urban centers, meets health and safety requirements, and at the same time optimizes public investments has been one of the crucial aspects of municipal waste management plans. The success of such a system also depends on the effective participation of citizens and on municipal policy changes. On the basis of socio-technical transition theory, we highlight the importance of several innovations brought by incumbents and niche actors to the transition toward a more sustainable urban management system in Florianopolis, a city in Southern Brazil. The municipal holder of waste management public services, COMCAP, incrementally reoriented the regime and adjusted its action to consider social and educational aspects over time. In 1986, it pioneered the selective collection of dry recyclables in Brazil and composting initiatives in poor communities. Partnerships with other actors led to outstanding initiatives, such as a community composting project that was recognized as a practice in agroecology on the occasion of the International Green Week and the Global Forum for Food and Agriculture. In 2021, COMCAP was the first company to implement an organic waste door-to-door collection system. Understanding how the implementation of this system occurred from a multilevel perspective can help other municipalities start a transition toward a more sustainable model based on circular economy concepts. The impacts of these changes on the achievement of some Sustainable Development Goals (SDGs) set by the United Nations are also addressed in this chapter.

Mônica Maria Mendes Luna, Matheus Moraes Zambon
A Multirate Accelerated Schwarz Waveform Relaxation Method

Schwarz waveform relaxation (SWR) [1, 2, 6] is an iterative algorithm for solving time-dependent partial differential equations (PDEs) in parallel. The domain of the PDE is partitioned into overlapping or non-overlapping subdomains, and the PDE is then solved iteratively on each subdomain.
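
As a minimal illustration, consider the 1D heat equation on $(0,1)$ split into overlapping subdomains $\Omega_1=(0,\beta)$ and $\Omega_2=(\alpha,1)$ with $0<\alpha<\beta<1$; with Dirichlet transmission conditions, one classical SWR iteration reads

$$
\partial_t u_1^{k+1} = \partial_{xx} u_1^{k+1} \ \text{in } \Omega_1\times(0,T), \qquad u_1^{k+1}(\beta,t) = u_2^{k}(\beta,t),
$$
$$
\partial_t u_2^{k+1} = \partial_{xx} u_2^{k+1} \ \text{in } \Omega_2\times(0,T), \qquad u_2^{k+1}(\alpha,t) = u_1^{k}(\alpha,t),
$$

so each subdomain is solved over the whole time window using the previous iterate's trace on its artificial boundary. The multirate acceleration studied in the paper (different time steps per subdomain) is not shown here.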

Ronald D Haynes, Khaled Mohammad
11. International Alliance for Science Diplomacy: Social Competence as a Predictor of a Sound Negotiation Process – American and European Self-Perception

In this chapter, Mauro Galluccio and Mattia Sanna turn their joint attention to a multidisciplinary research project conceived and led by Mauro Galluccio on both sides of the Atlantic, in the United States and in the European Union (EU). The main objective is to better understand how highly qualified negotiators and diplomats think, feel, and behave in complex negotiation processes under conditions of uncertainty and ambiguity, and whether social competence could be a predictor of a sound negotiation process. Our results suggest that there is variability in negotiation outcomes that can be attributed to the individual negotiator, and it appears that part of this variability can be linked specifically to the negotiators' social competence rather than to other variables. Research in this area has the potential to improve both negotiation research and the evidence-based training of negotiators.

Mauro Galluccio, Mattia Sanna
CNN Hardware Accelerator Architecture Design for Energy-Efficient AI

Reducing the energy consumption of deep neural network hardware accelerators is critical to democratizing deep learning technology. This chapter introduces AI accelerator design considerations for alleviating the accelerator's energy consumption issue, including the metrics for evaluating AI accelerators. These design considerations mainly target accelerating convolutional neural network (CNN) architectures, the most dominant DNN architecture today. Most of the energy-efficient AI acceleration methods covered in this chapter fall into approximation and optimization techniques. The goal is to reduce the number of multiplications or the memory footprint by modifying the multiplication and accumulation (MAC) operation or the dataflow, making the AI accelerator more energy-efficient and lightweight.
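
A common starting point for such metrics is the multiplication count of a single convolutional layer; for a $K \times K$ kernel the standard estimate (general background, not specific to this chapter) is

$$
\#\mathrm{MACs} = H_{\mathrm{out}} \cdot W_{\mathrm{out}} \cdot K^{2} \cdot C_{\mathrm{in}} \cdot C_{\mathrm{out}},
$$

so approximation techniques shrink the effective number of multiplications, while dataflow optimizations reduce how often the associated operands must be fetched from memory.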

Jaekwang Cha, Shiho Kim
Hardware Accelerators in Embedded Systems

An empirical rule of embedded system design is that adding hardware increases the power requirement. With hardware accelerators, however, this traditional rule rarely holds: adding hardware can improve performance. Analyzing algorithms for programmable logic and implementing appropriate accelerators allows designers to increase design performance while reducing power consumption in embedded computing systems. Neuromorphic chips and neural processing units, also known as intelligent processing units, are special network application package processors that use a data-driven parallel computing architecture to process extensive multimedia data, especially video and images. We look at two trends, General-Purpose Graphics Processing Units and Neural Processing Units alongside central and graphics processing units, and also review three commercialized hardware accelerators in embedded systems.

Jinhyuk Kim, Shiho Kim
TweetStream2Story: Narrative Extraction from Tweets in Real Time

The rise of social media has brought a great transformation to the way news is discovered and shared. Unlike traditional news sources, social media allows anyone to cover a story, so an event is sometimes already being discussed by people before a journalist turns it into a news article. Twitter is a particularly appealing social network for discussing events, since its posts are very compact and therefore rely on colloquial language and abbreviations. However, its large volume of tweets also makes it impossible for a user to keep up with an event. In this work, we present TweetStream2Story, a web app for extracting narratives, in real time, from tweets posted about a topic of choice. This framework can provide new information to journalists or be of interest to any user who wishes to stay up to date on a certain topic or ongoing event. As a contribution to the research community, we provide a live version of the demo, as well as its source code.

Mafalda Castro, Alípio Jorge, Ricardo Campos
LifeCLEF 2023 Teaser: Species Identification and Prediction Challenges

Building accurate knowledge of the identity, the geographic distribution and the evolution of species is essential for the sustainable development of humanity, as well as for biodiversity conservation. However, the difficulty of identifying plants, animals and fungi is hindering the aggregation of new data and knowledge. Identifying and naming living organisms is almost impossible for the general public and is often difficult, even for professionals and naturalists. Bridging this gap is a key step towards enabling effective biodiversity monitoring systems. The LifeCLEF campaign, presented in this paper, has been promoting and evaluating advances in this domain since 2011. The 2023 edition proposes five data-oriented challenges related to the identification and prediction of biodiversity: (i) PlantCLEF: very large-scale plant identification from images, (ii) BirdCLEF: bird species recognition in audio soundscapes, (iii) GeoLifeCLEF: remote sensing based prediction of species, (iv) SnakeCLEF: snake recognition in medically important scenarios, and (v) FungiCLEF: fungi recognition beyond 0–1 cost.

Alexis Joly, Hervé Goëau, Stefan Kahl, Lukáš Picek, Christophe Botella, Diego Marcos, Milan Šulc, Marek Hrúz, Titouan Lorieul, Sara Si Moussi, Maximilien Servajean, Benjamin Kellenberger, Elijah Cole, Andrew Durso, Hervé Glotin, Robert Planqué, Willem-Pier Vellinga, Holger Klinck, Tom Denton, Ivan Eggel, Pierre Bonnet, Henning Müller
iDPP@CLEF 2023: The Intelligent Disease Progression Prediction Challenge

Amyotrophic Lateral Sclerosis (ALS) and Multiple Sclerosis (MS) are chronic diseases characterized by progressive or alternating impairment of neurological functions (motor, sensory, visual, cognitive). Patients have to manage alternating periods in hospital with care at home, experiencing constant uncertainty regarding the timing of the acute phases of the disease and facing a considerable psychological and economic burden that also involves their caregivers. Clinicians, on the other hand, need tools able to support them in all phases of patient treatment, suggest personalized therapeutic decisions, and indicate urgently needed interventions. The goal of iDPP@CLEF is to design and develop an evaluation infrastructure for AI algorithms able to: 1. better describe disease mechanisms; 2. stratify patients according to their phenotype as assessed over the whole course of the disease; 3. predict disease progression in a probabilistic, time-dependent fashion. iDPP@CLEF ran as a pilot lab in CLEF 2022, offering tasks on the prediction of ALS progression and a position paper task on the explainability of AI algorithms for prediction; 5 groups submitted a total of 120 runs and 2 groups submitted position papers. iDPP@CLEF will continue in CLEF 2023, focusing on the prediction of MS progression and exploring whether pollution and environmental data can improve the prediction of ALS progression.

Helena Aidos, Roberto Bergamaschi, Paola Cavalla, Adriano Chiò, Arianna Dagliati, Barbara Di Camillo, Mamede Alves de Carvalho, Nicola Ferro, Piero Fariselli, Jose Manuel García Dominguez, Sara C. Madeira, Eleonora Tavazzi
An Analytical Assessment and Retrofit Using Nanomaterials of Rural Houses in Heat Wave-Prone Region in India

The Indian state of Andhra Pradesh experiences intense heat waves in the summer months. It is important to assess the indoor comfort hours of rural houses, which are built with locally available materials because of economic constraints. This study aims to gauge the embodied energy and heat conductance of houses in the heat wave-prone hot and humid climate of Vijayawada, Andhra Pradesh, and to suggest retrofits that improve the indoor thermal environment. A field study of four houses of a typical village, with different walling materials and the same roofing material, is carried out, and their embodied energy and thermal performance are compared with those of a conventional modern house from the same location. An HTC-AMV06 thermometer is used for field measurements of indoor dry bulb temperature and humidity, and a globe thermometer is used for outdoor temperature data on a summer day in April. Thermal energy models are simulated in EnergyPlus and correlated with recorded data to validate the models. The validated models are used for computing indoor comfort hours. Embodied energy analysis shows that a house made with a reed wall and mud plaster with a reed roof has the lowest embodied energy (473.5 MJ/m2), only 9.47% of that of the conventional house, which has very high embodied energy (5002.2 MJ/m2). Comfort hours for all the houses lie in the narrow range of 47.18–51.4% irrespective of the variation in embodied energy. Aerogel, when used as an insulation material, reduces indoor temperature by 11.09 °C in cement block houses and 6.17 °C in random rubble houses.

J. Vijayalaxmi, Dhananjay Hete
Applications of Smart Building Materials in Sustainable Architecture

With advances in material research, there is a growing interest in the knowledge of smart materials and their application in improving energy efficiency and the indoor environmental quality of a building. Smart materials can sense and react to their environment, and thus, they behave like living systems. Smart materials and technology produce useful effects in response to an external condition. They can be combined to provide changing and dynamic solutions for problems encountered while designing for energy efficiency. This paper is an introduction to the characteristics of smart materials and their application in the construction industry. Due to their small scale, smart materials enable us to design dynamic thermal environments. Smart materials are applied for façade systems, lighting systems, and energy systems. By focusing on the phenomena rather than the material artifact, the use of smart materials has the potential to dramatically increase the sustainability of buildings. We can save energy by operating discretely and locally only when necessary.

J. Vijayalaxmi
Optimization of the Integrated Daylighting and Natural Ventilation in a Commercial Building

Integrating a building with a more efficient natural ventilation and daylighting system reduces the dependency on artificial lighting and HVAC systems that account for more than 50% of the total building energy. As commercial buildings are one of the main typologies of buildings that are largely dependent on active systems, maximizing the natural ventilation and daylighting potential can make the building more resilient. For this study, the atrium space, which forms a central connectivity point in a commercial space, is selected and optimized for maximum natural ventilation and daylighting while maintaining occupant comfort. A field study of an existing commercial building, similar to the proposed case, is conducted and data is collected for validation. A quantitative analysis is done to study the impact of various natural ventilation and daylighting strategies on indoor thermal and visual comfort through simulations. It is found that among the 11 design variables selected, the window-to-wall ratio and the type of glazing have the most impact on the daylighting and thermal comfort of the space. The opening schedule, vent area, and the size of the opening have the maximum impact on natural ventilation.

Harshita Sahu, J. Vijayalaxmi
Study of Indoor Thermal Performance Due to Varying Ceiling Heights in a Hot-Humid Climate

This study explores the impact of changing ceiling height on the indoor thermal performance of a building for various combinations of room orientation and opening size. The methodology is building simulation, validated with field study data for some ceiling heights. The study uses the predictive model established in Chap. 7. The thermal performance of rooms along 8 different orientations, for 11 opening conditions and 10 different ceiling heights, is assessed using the predictive model. The results are validated by examining the indoor thermal performance of real-scale rooms with varying ceiling heights. For the first time, indoor thermal performance as a function of ceiling height, opening size, and orientation is examined in naturally ventilated rooms in a hot-humid climate. It is found that for every 30 cm rise in ceiling height, there is a change of up to 0.1 °C. The indoor temperature at the working level increases by 0.5 °C when the ceiling height is increased from 3.0 to 6.0 m. The percentage difference in indoor air temperature reduces exponentially as ceiling height increases. For any ceiling height, the indoor temperature is the same twice a day. Conditioned rooms with large ceiling heights consume more energy to keep cool. The findings can guide the placement of air-conditioner vents at various levels for optimized cooling; in this way, the study is useful in the design of air-conditioner vents and the location of goods in warehouses and silos for minimizing energy use.

J. Vijayalaxmi
Empirical and Dynamic Simulation-Based Assessment of Indoor Thermal Performance in Naturally Ventilated Buildings

This study investigates a parametric model to assess the thermal performance of naturally ventilated residential buildings for various parameters. A predictive model is generated using DesignBuilder and Rhino for 14 opening sizes in rooms along eight orientations in the hot-humid climate of Chennai city. The indoor temperatures simulated by the model and those collected from field measurements are compared and found to correlate well. The model is validated with two commonly used influencing factors, namely the ceiling fan and the flyscreen, and again correlates well with field measurements. The South-West room showed the best thermal performance. For the same opening size in different room orientations, there is a temperature variation of 4 °C. The indoor average temperature is higher in rooms oriented along the cardinal directions than in those along the semi-cardinal directions. There is not much variation in indoor temperature for openings below 35%. Precautions must be taken to ensure that the outdoor temperature at the site correlates with the EPW data. The model can be used to test the implications of the affecting factors to arrive at an optimized design at an early design stage, resulting in enhanced thermal comfort. The results are useful in assessing the implications of changing one or more parameters on indoor thermal performance in rooms with varying orientation and opening size. Since the model is based on the most prevalent and preferred building configurations, the study will assist architects and designers in optimizing the design of naturally ventilated buildings.

J. Vijayalaxmi
Methods of Assessing Thermal Performance of Buildings

Buildings are responsible for approximately 30% of CO2 emissions and about 40% of the world's energy usage, much of which goes to providing thermal comfort. In these times of escalating environmental concerns, built structures are among the major energy consumers ultimately responsible for environmental degradation, and addressing a building's energy demand helps mitigate this issue. Heat transfer from the outside to the inside occurs through the building envelope, and the quantities involved are determined using a few fundamental ideas. To better understand a building's energy requirements, its thermal performance can be evaluated by a number of methods. The goal of this research is to investigate the suitability of various methodologies for evaluating the thermal performance of buildings. To understand their applications and adaptability, the three methods of evaluating thermal performance (numerical, simulation, and physical data) are investigated.
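
The fundamental ideas referred to here reduce, in the steady state, to envelope conduction; a one-line example in standard building-physics notation (not taken from this paper) is

$$
Q = U A\,(T_{\mathrm{out}} - T_{\mathrm{in}}), \qquad U = \frac{1}{R_{si} + \sum_{j} d_j/\lambda_j + R_{se}},
$$

where $d_j$ and $\lambda_j$ are the thickness and thermal conductivity of each envelope layer, and $R_{si}$, $R_{se}$ are the internal and external surface resistances.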

J. Vijayalaxmi
Steady-State Assessment of Vertical Greenery Systems on the Thermal Resistance of the Wall and Its Correlation with Thermal Insulation

Vegetation coupled with buildings has proved efficient in mitigating excessive cooling and heating loads, achieving thermal comfort, microclimatic cooling, and control of insolation through the building envelope. This is possible through the shading effect, insulation, cooling by evapotranspiration, and wind-barrier effect of the foliage layer. This study assesses the thermal resistance added to the façade by vertical greenery systems in a steady state, adopting a theoretical approach. A total of nine construction types with varying insulation and vegetation strategies are considered, and the influence of the structure's thermal insulation on the resistive capacity gained through vertical greenery systems is evaluated. It is found that the effect of foliage in increasing the resistive capacity is greater for less insulated envelopes, with a green façade showing a 12.76% increase and a living wall system a 93.6% increase; as the insulation of the construction type increases, the greening measures have less impact on the resistive capacity. The theoretical approach is adopted because of the complex metabolic processes in plants: it considers no effect of the plant layer other than the resistance generated by the foliage of the vertical greenery system. Further, this study explores conventional and vertical greenery system façades and their influence on material efficiency. The results will be useful to architects in designing energy-efficient and sustainable buildings.
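
In this steady-state framing, the foliage layer simply adds one term to the wall's series resistance, from which the reported percentage gains follow; a hedged sketch of the bookkeeping in generic notation (not the paper's exact symbols) is

$$
R_{\mathrm{green}} = R_{si} + \sum_j \frac{d_j}{\lambda_j} + R_{\mathrm{veg}} + R_{se}, \qquad \Delta R\,[\%] = \frac{R_{\mathrm{green}} - R_{\mathrm{bare}}}{R_{\mathrm{bare}}}\times 100,
$$

which explains why the same added $R_{\mathrm{veg}}$ yields a large percentage gain on a poorly insulated wall (small $R_{\mathrm{bare}}$) and a small one on a well-insulated wall.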

J. Vijayalaxmi, Kiranjee Gandham

Chapter 4. Grounding Dynamics of Labour Control and Labour Agency in GPNs Through an ‘Extended Single Embedded Case Study Design’

This chapter introduces the research design and methodology of this study. It starts by setting out the key philosophical assumptions underpinning this study, characterised by a constructivist or reflexive research approach. Drawing on Burawoy’s extended case study method and Yin’s single embedded case study model, the chapter then develops an ‘extended single embedded case study’ design for studying the interrelations between place-specific dynamics of labour control and labour agency, and broader governance dynamics in the garment GPN. The chapter further illustrates how, for this study, the Bangalore export-garment cluster was constructed as a single case with three local garment unions representing embedded sub-units of analysis. Thereafter, the data collection process through participant observations and in-depth interviews is described. In this context, the chapter discusses challenges and strategies for interviewing managers and state actors as well as workers and unions in light of the power relations, which structure interactions between the researcher and the research subjects. The chapter concludes by outlining the data preparation, analysis and interpretation process.

Tatiana López
Chapter 4. Innovation Process Workflow Approach to Promote Innovation in the Food Industry

The economic, social, environmental, and safety issues have created a global crisis, and no industry is immune to its effects. The consequences of this crisis bring not only uncertainty but also the opportunity to improve. In this regard, the food industry is now facing the significant challenge of improving its innovation process during the development of new products. Products should be economically competitive, easily adaptable to shifts in consumer behavior, and competitive under new market circumstances. To overcome this challenge, it is necessary to develop the capability to accelerate the new product development (NPD) process, to understand the innovation process, and to improve the capacity to solve problems inventively. In response, this research presents an approach for a systematic problem-solving process that fits within the innovation process, with the aim of promoting innovation in food technology. This approach has its roots in the Theory of Inventive Problem Solving (TRIZ) and the General Theory of Powerful Thinking (OTSM). The approach guides the user through the stages of the innovation process; describes the main phases and their sequence; and manages the knowledge, activities, tools, and methodologies that support the problem-solving process. A case study on the improvement of spray drying technology illustrates the usefulness of this approach and highlights its capabilities.

Jesus-Manuel Barragan-Ferrer, Jonas Damasius, Stéphane Negny, Diana Barragan-Ferrer
Pre-service Teacher Training in an Immersive Environment

In close co-operation with training experts in virtual reality and with teachers of the faculty that prepares future teachers, a training module called the "Virtual Classroom" was created. Users, drawn from the ranks of future teachers, put on a head-mounted display and find themselves in an immersive classroom environment. The classroom contains the common school equipment normally found there, together with avatars representing pupils or parents. The avatars are controlled by the teachers, or by students assisting them, and respond in specific situations according to framework-prepared scenarios. The avatars communicate with the future teacher and also carry out several events with a visual representation (e.g. reporting, interrupting, talking over one another). The future teacher always has a specific assignment, such as what to tell the pupils or parents, and is at the same time forced to improvise in the various communication situations initiated verbally or visually by the avatars. The paper presents the gradual development of the virtual classroom, as the model was improved during testing, and reports the results of pilot studies. It also describes a three-phase training model that proved successful when working with the virtual classroom during the pilot studies.

Václav Duffek, Jan Fiala, Petr Hořejší, Pavel Mentlík, Tomáš Průcha, Lucie Rohlíková, Miroslav Zíka
Impact of 5S Method in Apparel Industry

This paper explores the 5S management concept, its advantages, and the framework required to implement the system in the apparel industry. The 5S method is a Japanese system of workplace management improvements. The paper analyses, as a case study, the effect of the 5S method in the apparel and textile industry and how its implementation directly saves time and costs. The 5S process helps eliminate waste, reduce defects, and improve the productivity of the business; moreover, reducing clutter and junk at the workplace is hygienic and ergonomic. This research identifies how the 5S method is implemented in the apparel industry and how it has helped the industry improve its waste-reduction targets, profitability, and productivity. The apparel industry uses 5S to dispose of wastage and to organize the factory production layout for smooth working, with wastage cleared and space regularly maintained. This helps minimize handling time and move parts quickly from one process to another, so the final garment can be stitched and packed faster. This improves the business's profitability and enhances its competitiveness in local and international markets.

Muzoon Alasbali, Abdulaziz T. Almaktoom
An Interactive Augmented Reality Tool to Enhance the Quality Inspection Process in Manufacturing—Pilot Study

Quality inspection processes are an essential part of most industrial systems. These are repetitive and precise operations that are often very complex and require multiple steps to be performed correctly by different inspectors or operators. Augmented Reality (AR), one of the most promising and enabling technologies for assisting employees and engineers in the manufacturing workplace, has the potential to help operators to better focus on tasks while having virtual data at their disposal. Therefore, it is important to verify whether, with the support of this technology, it is possible to help workers perform these activities faster, more efficiently, and with less mental effort than with traditional paper-based documents. This paper describes a pilot interactive AR tool designed to support quality controllers in their inspection of welded products in the manufacturing environment. This tool is designed to guide the employee through the inspection process, provide them with all relevant information and help them find any discrepancies or deviations. The presented AR tool is created with the game engine Unity 3D and SDK Vuforia to assure compatibility with commonly used devices, such as tablets or smartphones. It does not use markers, but uses the object tracking method instead. A pilot study was conducted with a group of five probands to test the usability and functionality of the solution using SUS (System Usability Scale) standard questionnaires. The average SUS value rated by the probands was 78, confirming both a high level of usability and user satisfaction.

Kristýna Havlíková, Petr Hořejší, Pavel Kopeček
How the Covid-19 Pandemic Affects Housing Design to Adapt With Households’ New Needs in Egypt?

As of late, "STAY AT HOME" has been the main slogan; household needs are constantly changing for many reasons, such as changes in the human life cycle, the shift to smart cities, and the adoption of new technologies to reduce risks. However, in the move to smart solutions, many social dimensions are being forgotten instead of being integrated into the design of many services, including housing. Accordingly, this study explores housing flexibility through a review of the relevant literature and asks how housing design should change to accommodate the new needs that emerged during quarantine and the spread of Coronavirus Disease 2019 (COVID-19), in order to formulate new design codes for stakeholders and real-estate developers to consider in the future. It examines the impact of quarantine on personal household priorities and house design, and how households innovated in their interior design to suit their new needs, through a wide online social survey. The research uses the online survey to evaluate the importance of the new arrangement of household requirements using quantitative analysis tools and techniques. The findings present housing guidelines for applying design flexibility so that housing can cope with any external or internal changes that may occur in the coming period and affect household needs in smart cities and elsewhere.

Rania Nasreldin, Asmaa Ibrahim
An Application of Machine Learning in the Early Diagnosis of Meningitis

Meningitis is an infectious disease that can lead to neurocognitive impairments due to an inflammatory process in the meninges caused by various agents, mainly viruses and bacteria. Early diagnosis, especially of bacterial meningitis, reduces the risk of complications and mortality. This work applies machine learning to identify the most relevant features for the early diagnosis of bacterial meningitis. The model explores the prediction of clinical outcomes through the Logistic Regression, K-Nearest Neighbour, and Random Forest algorithms. Early identification of the patient's clinical evolution, cure or death, is essential to offer more effective and agile therapy. Random Forest performs best with 90.6% accuracy, followed by Logistic Regression with 90.3% and KNN with 90.1%. The most relevant characteristics for predicting death are a low education level and red blood cells in the CSF, suggesting intracranial haemorrhage. The best-performing algorithm predicts the clinical condition the patient will present at the end of hospitalisation and helps health professionals identify, early on, the most relevant characteristics capable of predicting an improvement or worsening of the patient's general clinical condition.
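
A hedged sketch of the comparison protocol follows, using synthetic stand-in data; the actual study uses clinical and CSF features with its own validation design.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for clinical features (education level, CSF red blood
# cells, ...) and a binary outcome (e.g. cure vs. death).
X, y = make_classification(n_samples=500, n_features=12, random_state=0)

models = {
    "Logistic Regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```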

Pedro Gabriel Calíope Dantas Pinheiro, Luana Ibiapina C. C. Pinheiro, Raimir Holanda Filho, Maria Lúcia D. Pereira, Plácido Rogerio Pinheiro, Pedro José Leal Santiago, Rafael Comin-Nunes
Chapter 1. Introduction

This book is the continuation of Solid Edge 2023 für Einsteiger – kurz und bündig. The introductory chapter is divided into several sections. The basic terms used and the Solid Edge user interface are reviewed. As a starting point, selected functionalities for solid modelling, assembly, and drawing creation from the beginner's book are revisited in the form of an exam. The following chapters present advanced functions that ease and improve the design process. To use all the functions shown, an installation of Microsoft Excel is required; all Excel versions from 2007 onwards are supported. Each of the following chapters closes with a short set of simple review questions, which serve as the reader's self-check on the content covered in the chapter.

Michael Schabacker
Polar Decoder-Based Full Adders: Implementation and Comparative Analysis Using 180 nm and 90 nm Technologies in Cadence

In digital applications, addition is the most frequently used mathematical operation. Because they affect floating-point and arithmetic logic units, as well as cache/memory address computations, the stability of full adder (FA) cells is considered critical. Full adders are critical elements in applications like DSP systems and microprocessors. The design of polar decoder-based full adders matters because the polar decoder architecture is used extensively in the majority of digital systems, including processors. As a result, adder design is important in digital design. This study investigates Cadence implementations of polar decoder-based full adders in 180 nm and 90 nm technologies, with consideration of delay and power consumption.

T. Vijayalakshmi, J. Selvakumar
Improved Logistic Map and DNA-Based Video Encryption

In the recent era, data security is important for multimedia communication such as images and video. Secure communication and confidentiality of data play an important role in many online services like authentication, video conferencing, and online classes. In this paper, a video encryption technique using an improved logistic map and DNA sequencing is proposed. The technique has three main phases: first, a key is generated using SHA-256; second, the video frame is permuted using the improved logistic map and the SHA-256 key; third, diffusion is performed on the permuted video frame using the SHA-256 key and DNA sequencing. The efficiency of the encryption technique is measured through PSNR, entropy, correlation coefficients, and NPCR. The presented technique is resistant to brute-force and statistical attacks.
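
A sketch of the permutation phase alone is shown below: an initial condition is derived from a SHA-256 digest and a logistic map is iterated to produce a key-dependent pixel ordering. The plain logistic map is used here rather than the paper's improved variant, the DNA-based diffusion phase is omitted, and all names are illustrative.

```python
import hashlib
import numpy as np

def keyed_permutation(frame: np.ndarray, key: bytes) -> np.ndarray:
    """Permute a frame's pixels with a logistic-map sequence seeded via SHA-256."""
    digest = hashlib.sha256(key).digest()
    # Map the first 8 digest bytes to an initial condition in (0, 1).
    x = (int.from_bytes(digest[:8], "big") % 10**8) / 10**8 or 0.5
    r = 3.99                            # parameter in the chaotic regime
    flat = frame.reshape(-1)
    seq = np.empty(flat.size)
    for i in range(flat.size):
        x = r * x * (1.0 - x)           # logistic map iteration
        seq[i] = x
    order = np.argsort(seq)             # key-dependent pixel ordering
    return flat[order].reshape(frame.shape)
```

Decryption regenerates the same ordering from the same key and applies the inverse permutation, obtainable as np.argsort(order).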

Sweta Kumari, Mohit Dua
Tuning XGBoost by Planet Optimization Algorithm: An Application for Diabetes Classification

Recent years have seen an increase in instances of diabetes mellitus, a metabolic condition that if left untreated can severely decrease the quality of life, and even cause the death of those affected. Early diagnostics and treatment are vital for improving the outcome of treatment. This work proposes a novel artificial intelligence-based (AI) approach to diabetes classification. Due to the ability to process large amounts of data at a relatively quick rate with admirable performance, the XGBoost approach is used. However, despite many advantages, the large number of control parameters presented by this algorithm makes the process of tuning delicate and complex. To this end, the planet optimization algorithm (POA) is tasked with selecting the optimal XGBoost hyperparameters so as to achieve the best possible classification outcomes. In order to demonstrate the improvements achieved, a comparative analysis is given that presents the proposed approach alongside other contemporary algorithms addressing the same classification task. The attained results clearly demonstrate the superiority of the proposed approach.
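
The coupling between the optimizer and the model works by making cross-validated accuracy the fitness that the metaheuristic maximizes. Below is a hedged sketch of that fitness wrapper; the three-parameter search space is an illustrative assumption, and the POA population update itself is omitted (any population-based optimizer would evaluate candidates through this function).

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_score

# Illustrative search space: (learning_rate, max_depth, subsample).
LOWER = np.array([0.01, 2.0, 0.5])
UPPER = np.array([0.30, 10.0, 1.0])

def fitness(position, X, y):
    """Map a candidate position vector to XGBoost hyperparameters, score by 5-fold CV."""
    pos = np.clip(position, LOWER, UPPER)   # keep candidates inside the bounds
    model = xgb.XGBClassifier(
        learning_rate=float(pos[0]),
        max_depth=int(round(pos[1])),
        subsample=float(pos[2]),
        n_estimators=200,
    )
    return cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
```

The optimizer keeps the best-scoring position across iterations and returns the corresponding hyperparameters for the final model.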

Luka Jovanovic, Marko Djuric, Miodrag Zivkovic, Dijana Jovanovic, Ivana Strumberger, Milos Antonijevic, Nebojsa Budimirovic, Nebojsa Bacanin
A Decision-Making System for Dynamic Scheduling and Routing of Mixed Fleets with Simultaneous Synchronization in Home Health Care

Globally, the growing number of elderly people, the prevalence of chronic disorders, and the spread of COVID-19 have all contributed to significant growth in Home Health Care (HHC) services. One of HHC's main goals is to provide a coordinated set of medical services to individuals in the comfort of their own homes. Based on the current demand for HHC services, this paper develops a novel and effective mathematical model and a suitable decision-making technique for reducing the costs of HHC service delivery systems. The proposed decision-making system captures the real needs of HHC providers, incorporating dynamic, synchronized services and coordinating the routes of a group of caregivers across a mixed fleet of vehicles. The optimization problem is first modelled as a Mixed Integer Linear Program (MILP). Given its computational complexity, a revised version of the Discrete Firefly Algorithm is designed to address the HHC planning problem. To evaluate the scalability of the proposed approach, random test instances are generated. The experiments reveal that the algorithm performs well across different scenarios, including dynamic and synchronized visits, and that the improved nature-inspired solution methodology is effective and efficient, significantly reducing costs and improving time efficiency.
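
For background, the canonical continuous firefly update, which discrete variants such as the one revised here adapt to routing problems, moves firefly $i$ toward a brighter firefly $j$ as

$$
x_i^{t+1} = x_i^{t} + \beta_0\, e^{-\gamma r_{ij}^{2}}\left(x_j^{t} - x_i^{t}\right) + \alpha\,\epsilon_i^{t}, \qquad r_{ij} = \lVert x_i^{t} - x_j^{t}\rVert,
$$

where $\beta_0$ is the attractiveness at zero distance, $\gamma$ the light-absorption coefficient, and $\alpha\,\epsilon_i^{t}$ a random perturbation; discrete versions typically replace the vector move with permutation operators on visit sequences.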

R. V. Sangeetha, A. G. Srinivasan
Chapter 6. Abuse Response of Batteries Subjected to Mechanical Impact

Electrochemical and thermal models to simulate nominal performance and abuse response of lithium-ion cells and batteries have been reported widely in the literature. Studies on mechanical failure of cell components and how such events interact with the electrochemical and thermal response are relatively less common. This chapter outlines a framework developed under the Computer Aided Engineering for Batteries program to couple failure modes resulting from external mechanical loading to the onset and propagation of electrochemical and thermal events that follow. Starting with a scalable approach to implement failure criteria based on thermal, mechanical, and electrochemical thresholds, we highlight the practical importance of these models using case studies at the cell and module level. The chapter also highlights a few gaps in our understanding of the comprehensive response of batteries subjected to mechanical crash events, the stochastic nature of some of these failure events, and our approach to build safety maps that help improve robustness of battery design by capturing the sensitivity of some key design parameters to heat generation rates under different mitigation strategies.

Jinyong Kim, Anudeep Mallarapu, Shriram Santhanagopalan
Chapter 4. Development of Computer Aided Design Tools for Automotive Batteries

To accelerate the development of safe, reliable, high-performance, and long-lasting lithium-ion battery packs, the automotive industry requires computer-aided engineering (CAE) software tools that accurately represent cell and pack multi-physics phenomena occurring across a wide range of scales. In response to this urgent demand, General Motors assembled a CAEBAT Project Team composed of GM researchers and engineers, ANSYS Inc. software developers, and Professor Ralph E. White of the University of South Carolina and his ESim staff. With the guidance of NREL researchers, the team collaborated to develop a flexible modeling framework that supports multi-physics models and provides simulation process automation for robust engineering. Team accomplishments included clear definition of end-user requirements, physical validation of the models, cell aging and degradation models, and a new framework for multi-physics battery cell, module, and pack simulations. Many new capabilities and enhancements have been incorporated into ANSYS commercial software releases under the CAEBAT program.

Taeyoung Han, Shailendra Kaushik
Numerical Analysis of Precast Shear Wall with Opening and Unspliced Vertical Distribution Bars

To investigate the effect of openings on the lateral behaviour of precast concrete shear walls with unspliced vertical distribution bars, a series of nonlinear numerical analyses is carried out. Cohesive elements and nonlinear springs are adopted to simulate the joints around the precast wall panel and the steel bars across the joints, respectively. The numerical model is verified against test results on the lateral behaviour of precast shear walls with unspliced vertical distribution bars. Based on the verified model, parametric studies are carried out on the influence of the opening’s characteristic parameters and the length of the boundary element. The results show that the influence of the area ratio and location of the opening on the lateral behaviour of the precast shear wall is similar to that of cast-in-situ shear walls with openings. As with the cast-in-situ wall, the load-carrying capacity and stiffness of the precast shear wall decrease with increasing opening area ratio, the stiffness of the precast wall decreasing by up to 47.6%. The wall beneath the window opening can slightly compensate for the weakening effect of the unspliced vertical bars on cracking, but the vertical location of the opening has little effect on the lateral behaviour of the precast shear wall with unspliced vertical distribution bars. Finally, the length of the cast-in-situ boundary elements is more influential on the stiffness than on the load-carrying capacity of the shear walls.

Qi Cai, Xiaobin Song, Xuwen Xiao
Damage Statistics and Integrity Assessment of Brick Masonry Structures in Historic Buildings

The deterioration of the brick masonry structures of historical buildings is of growing concern. In this paper, two methods, statistical classification and quantitative mapping, are used to analyze the damage appearance of the southeast and northwest facades of a historical building in downtown Shanghai. The damage distribution characteristics and structural integrity are then assessed. Statistics show that the damage area ratios on the southeast and northwest facades are 8.71% and 8.06%, respectively. Dampness, saltpetering, and peeling are the three main damage features on these two facades. The vertical damage distribution curves of both facades have a similar shape: the total damage reaches its maximum at a level of 2 m above the ground surface and then gradually decreases with elevation. In addition, the total damage length increases again near a height of 9 m.

Haiyang Qin, Yongjing Tang, Jiao He, Zhiwang Gu
Application of an AI-Based Deformation Extraction Function from Road Surface Video to a Road Pavement Condition Assessment System

In Japan, there is concern that civil infrastructure will age rapidly in the near future. This study focuses on asphalt road surfaces, which are typically renovated every 10 years depending on traffic volume and roadbed properties. Existing MCI (Maintenance Control Index) measurement systems come at a high cost to local governments and do not allow engineers to detect cracks and deficiencies efficiently. New road pavement assessment systems, as developed by our research group, are needed to ensure sustainable road maintenance and management. The pavement surface evaluation system uses a video camera and a 3D motion sensor, enabling simple and low-cost inspections. However, 3D motion sensors can only capture acceleration; they can therefore only describe the roughness of the road surface, not detect cracks. In this study, to utilize road surface video recorded while driving, we developed a method for the automatic extraction of deformations using an AI object detection function, which extracts cracks, joints, manholes, and repair marks from the surface video. In earlier work, the accuracy of this function for detecting cracks was less than 40% (Shiga et al. 2020). Here, we apply the method to detect deformations and propose annotation rules for improving crack detection accuracy as well as overall accuracy. To examine the detection accuracy for cracks and other deformities, cracks are divided into different types and deep learning is performed; in addition, we enlarged the crack images. The results of this study show that the AI object detection function for cracks becomes more accurate when the annotation rules are applied and the learning data set is divided by crack type classification.
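A minimal sketch of how per-class detection accuracy of such a function might be scored, with boxes as (x1, y1, x2, y2) tuples; the IoU threshold and helper names are illustrative, not taken from the cited system:

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def recall(ground_truth, detections, thresh=0.5):
    """Fraction of ground-truth boxes matched by at least one detection."""
    hits = sum(any(iou(g, d) >= thresh for d in detections)
               for g in ground_truth)
    return hits / len(ground_truth) if ground_truth else 0.0

# Usage: one recall value per deformation class (crack, joint, manhole, ...).
print(recall([(0, 0, 10, 10)], [(1, 1, 9, 9), (50, 50, 60, 60)]))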

Hisao Emoto, Miori Numata, Atsuki Shiga
Chapter 5. Understanding Peacebuilding

The UN has created a set of organizational structures under the broad heading of ‘peacebuilding’ that build support for rule of law and justice sector reform under the transitional arrangements between ‘peace’ and ‘war.’ Much of what passes for peacebuilding in conflict and post-conflict zones is ineffective because it is not locally owned or driven. Nonetheless, some small-scale human rights and humanitarian projects have negotiated access, provided training on existing international legal obligations, and established mechanisms for monitoring and reporting violations. The UN has also sometimes been able to use its influence to leverage support for these efforts, and a growing number of international legal mechanisms are being created to hold both states and armed non-state actors (ANSAs) to account for violations of IHL and IHRL. Strengthening these mechanisms and getting support to where it is needed, when it is needed, to promote legal accountability demands both political will and technical coordination.

Conor Foley
Essay 2. Advice on Climate Policy for the 2020 Presidential Candidates

The prospects for climate action will be influenced by the credibility of its treatment in political debate. It will be important to be clear about both risks and opportunities, recognize the distribution of cost, and preserve room for needed bipartisan action.

Gary Yohe, Henry Jacoby, Richard Richels, Benjamin Santer
Essay 16. Extreme Events “Presage Worse to Come” in a Warming Climate

One consequence of a warming planet is an increase in extreme climate-related events, each of which can cause billions of dollars in damage. The disasters to date are only a taste of what is to come if society does not cut greenhouse gas emissions.

Gary Yohe, Henry Jacoby, Richard Richels, Benjamin Santer
Essay 4. Who Is Holding Up the War on Global Warming? You May Be Surprised

The U.S. public largely accepts that climate change is real, human-caused, and serious; yet concern about the issue does not rise to a high enough priority to command political action. Greater effort is needed not only to explain the risk of inaction on climate but also to better understand the public response to the information that is available.

Gary Yohe, Henry Jacoby, Richard Richels, Benjamin Santer
Essay 6. Adapt, Abate, or Suffer—Lessons from Hurricane Dorian

Humanity knows from experience that it cannot protect itself from all of the hazards that plague the world. It follows that abatement and adaptation cannot be perfect. There will always be residual damage, and so humanity can only choose how much suffering it will try to avoid.

Gary Yohe, Henry Jacoby, Richard Richels, Benjamin Santer
Concept, Architecture, and Performance Testing of a Smart Home Environment for Visually Impaired Persons

With the development of assistive technologies, it is possible to enhance the degree of independent living at home for visually impaired persons. Assistive technologies today are not just devices; they comprise technology, services, and systems. Modeling a system for applying assistive technologies requires a precisely defined taxonomy of the technologies and devices used and an appropriate system architecture. The aim of such an approach is the delivery of precise, real-time information to the end user, and advanced communication technologies can provide all the relevant information. This chapter proposes a taxonomy of information and communication technologies and devices and a conceptual system architecture for delivering services in a smart home for visually impaired persons. The efficiency of the most widely used virtual assistants for managing sensors, actuators, and devices in the smart home environment has also been tested.

Marko Periša, Ivan Cvitić, Petra Zorić, Ivan Grgurević
Opportunities of Using Machine Learning Methods in Telecommunications and Industry 4.0 – A Survey

Artificial intelligence can be considered a leading component of industrial transformation, but it is also applied in other areas, such as the telecommunications sector. Methodologies based on artificial intelligence, the most significant of which is machine learning, support these areas in predicting maintenance needs and reducing downtime. Machine learning comprises many algorithms and methods. This research presents machine learning methods and the possibilities of their use in telecommunications and Industry 4.0. With the improvement of new technologies, such as higher internet speeds and 5G mobile networks, comes the need for new and improved management and support for the systems that use them. Some types of machine learning can be used to collect data to improve users’ quality of service; others can be used to collect data on network traffic or, in general, in any system that needs to collect data, cluster data points, and analyze data.
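A minimal sketch of the clustering use case mentioned above, assuming scikit-learn; the two synthetic network-traffic features are illustrative:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic traffic samples: (throughput in Mbit/s, latency in ms).
traffic = np.vstack([rng.normal([100, 10], [10, 2], (50, 2)),
                     rng.normal([20, 80], [5, 10], (50, 2))])
# Group samples into two traffic classes, e.g. for QoS analysis.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(traffic)
print(labels[:10])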

Dragan Peraković, Marko Periša, Ivan Cvitić, Petra Zorić, Tibor Mijo Kuljanić, David Aleksić
Polygonization of the Surface Digitized Using Helios2 Time-of-Flight Camera

This chapter discusses the Helios2 3D time-of-flight digitization solution, along with the ArenaSDK software package, and their use in digitizing objects with different surface properties. The Helios2 ToF camera is an advanced device that digitizes physical objects in real time for computer and robotic vision. The aim of this research was to verify the usability and functionality of the camera for object digitization. The first part of the article describes the Helios2 device, its physical and software capabilities, and the ArenaView GUI features that affect the quality of the obtained point cloud. The second part describes the polygonization process and the GOM Inspect software. Preparation for digitization was divided into hardware, software, and scene preparation. Three cardboard boxes of natural color were digitized. The obtained surface was then imported into GOM Inspect and polygonized, with the polygonization parameter values adjusted, yielding a polygonal model of the surface. During polygonization, a decrease in data quality occurred, caused by the camera’s field of view, black matte material, and the multipath effect. Further research is needed for accurate and detailed digitization of objects using time-of-flight technology.
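A minimal sketch of a point-cloud-to-mesh step analogous to the polygonization described above, using the open-source Open3D package in place of the proprietary ArenaSDK/GOM Inspect workflow; file names and parameter values are illustrative:

import open3d as o3d

pcd = o3d.io.read_point_cloud("helios2_box_scan.ply")
pcd = pcd.voxel_down_sample(voxel_size=0.005)          # thin out ToF noise
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
# Poisson reconstruction turns the oriented point cloud into a triangle mesh.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
o3d.io.write_triangle_mesh("box_mesh.ply", mesh)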

Adrián Vodilka, Martin Pollák, Marek Kočiško
k-Transmitter Watchman Routes

We consider the watchman route problem for a k-transmitter watchman: standing at point p in a polygon P, the watchman can see $$q\in P$$ if $$\overline{pq}$$ intersects P’s boundary at most k times, i.e., q is k-visible to p. Traveling along the k-transmitter watchman route, either all points in P or a discrete set of points $$S\subset P$$ must be k-visible to the watchman. We aim to minimize the length of the k-transmitter watchman route. We show that even in simple polygons the shortest k-transmitter watchman route problem for a discrete set of points $$S\subset P$$ is NP-complete and cannot be approximated to within a logarithmic factor (unless P=NP), both with and without a given starting point. Moreover, we present a polylogarithmic approximation for the k-transmitter watchman route problem with a given starting point and $$S\subset P$$, with approximation ratio $$O(\log ^2(|S|\cdot n) \log \log (|S|\cdot n) \log |S|)$$ (with $$|P|=n$$).
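A minimal sketch of the k-visibility test underlying the problem: count how often the segment pq properly crosses the polygon boundary and compare with k. Degenerate cases (the segment passing through a vertex) are ignored in this sketch:

def orient(a, b, c):
    """Sign of the cross product (b-a) x (c-a): -1, 0, or 1."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_cross(p, q, a, b):
    """True if segment pq properly crosses segment ab."""
    return (orient(p, q, a) != orient(p, q, b) and
            orient(a, b, p) != orient(a, b, q))

def is_k_visible(p, q, polygon, k):
    """q is k-visible from p if pq crosses the boundary at most k times."""
    crossings = 0
    for i in range(len(polygon)):
        a, b = polygon[i], polygon[(i + 1) % len(polygon)]
        if segments_cross(p, q, a, b):
            crossings += 1
    return crossings <= k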

Bengt J. Nilsson, Christiane Schmidt
Reflective Guarding a Gallery

This paper studies a variant of the Art Gallery problem in which the “walls” can be replaced by reflecting edges, which allow the guards to see further and thereby cover a larger portion of the gallery. Given a simple polygon P, we first consider one guard as a point viewer and use reflection to add a certain amount of area to the guard’s visibility polygon. We study visibility with specular and diffuse reflections: specular reflection is mirror-like, while in diffuse reflection the angle between the incident and reflected rays may assume any value between 0 and $$\pi$$. Lee and Aggarwal proved that several versions of the general Art Gallery problem are NP-hard. We show that several cases of adding area to the visible region of a given point guard are NP-hard, too. Second, we assume that all edges are reflectors and intend to decrease the minimum number of guards required to cover the whole gallery. (A preliminary version of this second result was accepted at EuroCG 2022 [1], whose proceedings are informal.) Let r be the maximum number of reflections of a guard’s visibility ray. Chao Xu proved that even with r specular reflections, one may need $$\lfloor \frac{n}{3} \rfloor$$ guards to cover the polygon. In this work, we prove that with r diffuse reflections, the minimum number of vertex or boundary guards required to cover a given simple polygon $$\mathcal{P}$$ decreases to $$\lceil \frac{\alpha}{1+ \lfloor \frac{r}{8} \rfloor} \rceil$$, where $$\alpha$$ denotes the minimum number of guards required to cover the polygon without reflection. We also generalize the $$\mathcal{O}(\log n)$$-approximation algorithm for the vertex guarding problem to work in the presence of reflection.
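A minimal sketch of the specular (mirror-like) reflection used here: a 2D direction d reflected off an edge with unit normal n follows $$\mathbf{r} = \mathbf{d} - 2(\mathbf{d}\cdot \mathbf{n})\,\mathbf{n}$$:

def reflect(d, n):
    """Reflect direction d off a mirror edge with unit normal n."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

# A ray heading down-right bounces off a horizontal mirror -> up-right.
print(reflect((1.0, -1.0), (0.0, 1.0)))  # (1.0, 1.0)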

Arash Vaezi, Bodhayan Roy, Mohammad Ghodsi
Chapter 5. Polymeric Materials

Plastics represent one of the most pervasive types of materials in our society. This chapter describes the structure, formation mechanisms, and nomenclature of various classes of polymers. The applications described in this chapter span biomaterials (e.g., biodegradable medical stents, contact lenses, drug delivery), lithography, conductive polymers, polymer additives, and self-healing plastics.

Bradley D. Fahlman
Chapter 2. Solid-State Chemistry

The properties of materials are governed by the interactions among their associated sub-units. This chapter will describe the bonding motifs of both crystalline and amorphous solids. Details of common archetypical crystal structures will be given, as well as introductory X-ray crystallography. The various types of defects in solids are also described, which are critical in understanding electrical conductivity and optical properties of crystalline solids. The structure vs. property relationship for key classes of materials such as ceramics, glasses, semiconductors, insulators, and gemstones will be described for a myriad of applications.

Bradley D. Fahlman
Chapter 7. Materials Characterization

Once a material has been fabricated, how does one assess whether the synthetic technique has been successful? This chapter describes a plethora of sophisticated techniques that may be used to characterize the structure of various classes of materials. Precedents from the literature are used to provide examples of real-world characterization studies to illustrate the utility of the various techniques.

Bradley D. Fahlman
Chapter 4. Semiconductors

Without question, semiconductors represent the most utilized and under-appreciated class of materials in our society. From our cell phones that keep us connected to the world around us, to our vehicles that bring us home from work each day, semiconductor-based computer chips impact virtually every part of our lives. This chapter will describe the types and properties of semiconductors, and applications such as integrated circuits (chips), light-emitting diodes (LEDs), thermoelectrics, and photovoltaics (solar panels). Thin-film deposition techniques such as chemical vapor deposition and atomic layer deposition are also described, as well as advanced patterning techniques such as EUV photolithography.

Bradley D. Fahlman
Chapter 6. Nanomaterials

Nanotechnology is more than science fiction or a passing fad. The synthesis of nanoscale materials is used to fine-tune the properties of any existing solid-state material, or design an entirely new material from the bottom-up to afford a desired set of properties. This chapter begins by discussing the increasingly relevant topic of nanotoxicity for various classes of nanomaterials. Structures, properties, applications and synthetic techniques for a variety of 0-D, 1-D, and 2-D nanomaterials are described throughout this chapter, citing many precedents from the scientific literature.

Bradley D. Fahlman
Greater Impact of Police Strategies through Analysis of the Industry Structure
What Influences the Impact and Performance of the Police?

Industry structure and industry development influence the current and potential performance effects of the police, as well as its strategies. Michael Porter developed the model of an industry analysis (the “Five Forces” model) for the private sector. Its aim is to assess an industry’s profit potential by identifying and describing the influencing factors (driving forces) in five areas, identifying opportunities and risks in the industry, and assessing the nature and strength of the factors influencing the organization. From the police’s perspective, the concern is impact potential. After analyzing the extended model, it can be said that the police have a good chance of achieving a high impact in their traditional industry of security services, in accordance with their statutory mandate; however, they must also be systematically aware of the risks.

Helmut Siller
Coaching as an Instrument for Strain Management and Career Planning

Police officers indisputably belong to an occupational group that is exposed to stress more frequently and more intensely than average. Dramatic or even traumatic events, night shifts and overtime, monotony during long standby phases, and, more recently, increasing violence even in low-threshold operations exemplify the variety and number of triggers that, without psychosocial support, can permanently impair well-being or even the ability to work. Coaching is one of the tools available for processing stressful events. It also lends itself to career planning, leadership support, health, and conflict management. The COVID-19 crisis has given online counseling an enormous developmental boost, and for members of the police, too, online coaching can be an attractive alternative to face-to-face coaching.

Peter Weber
Life as a Cyber-Bio-Physical System

The study of living systems—including those existing in nature, life as it could be, and even virtual life—needs consideration of not just traditional biology, but also computation and physics. These three areas need to be brought together to study living systems as cyber-bio-physical systems, as zoetic systems. Here I review some of the current work on assembling these areas, and how this could lead to a new Zoetic Science. I then discuss some of the significant scientific advances still needed to achieve this goal. I suggest how we might kick-start this new discipline of Zoetic Science through a program of Zoetic Engineering: designing and building living artefacts. The goal is for a new science, a new engineering discipline, and new technologies, of zoetic systems: self-producing far-from-equilibrium systems embodied in smart functional metamaterials with non-trivial meta-dynamics.

Susan Stepney
Simulation-Based Impact Assessment of Electric and Hydrogen Vehicles in Urban Parcel Delivery Operations

Alternative energy sources are increasingly being considered to power vehicles used in freight transport, in a bid to reduce emissions generated by the transportation sector. Opportunities for implementation exist in urban logistics, particularly in the last-mile stage of the delivery chain. In this paper, the potential of ecologically friendly vehicles for parcel delivery is evaluated. To do this, we apply the microscopic agent-based simulation framework MATSim and the integrated logistics behaviour model Jsprit to the parcel market in Berlin, Germany. The study provides quantitative insight into the transport-related, economic, and environmental implications of replacing the traditional diesel-powered vehicles used in urban parcel delivery with electric and hydrogen vehicles. We compare simulation results for key transport-related, economic, and ecological indicators in three scenarios: (i) status quo diesel-driven vehicles, (ii) 100% adoption of electric vehicles, and (iii) 100% adoption of hydrogen vehicles. We show that electric and hydrogen vehicles can reduce the emissions generated in parcel delivery operations but do not reduce the transport-related impacts and transport costs.

Ibraheem Oluwatosin Adeniran, Abdulrahmon Ghazal, Carina Thaller
Literature Review on Current Approaches to Ergonomic Order Allocation in Order Picking

In the logistics sector, there is a serious lack of skilled operational staff. As a result of an ageing population and the increasing number of work-related musculoskeletal disorders (MSDs) and mental illnesses, ergonomic resource planning and control in order picking is becoming increasingly important. Nevertheless, order allocation approaches that distribute orders to employees based on capacity utilisation, without considering workload or subjective stress, remain popular in research and industry. This literature review examines which approaches to order and workforce allocation consider ergonomic aspects, how they differ, and which ergonomic criteria they take into account. To compare the papers, a content-based analysis is conducted using previously defined comparison criteria that focus on the order picking context, the ergonomic criteria considered, their combined assessment, and data-based validation. The findings provide an overview of current approaches to ergonomic order allocation in order picking; the results are presented transparently in a comparison matrix.

Linda Maria Wings, Christian Fahrenholz, Aylin Uludag
Microscopic Agent-Based Parcel Demand Model for the Simulation of CEP-Based Urban Freight Movements to and from Companies

Recently, a substantial increase in parcel volumes has been observable, primarily shipped by courier, express, and parcel service providers (CEPSPs). Especially in urban areas, existing space conflicts are intensified, while emissions steadily rise due to the greater need for parcel transportation. From a regional planning perspective, understanding the net effect of increased parcel volumes is essential for transportation planning and policy, for which freight demand models are commonly used. Existing models primarily focus on parcel deliveries to private customers, although parcel shipments to and from companies contribute considerably to overall parcel volumes. Hence, this study develops an agent-based model that explicitly represents the incoming and outgoing parcel volumes of companies in urban areas delivered by CEPSPs, based on Open Data and self-conducted expert interviews with CEPSPs. First, OpenStreetMap data is used to geographically represent companies, with their sector assignment, within a study area in Karlsruhe, Germany. Second, a concept for modeling the weekly incoming and outgoing parcel volume of each company in the study area is developed using literature-based data. The approach is integrated into the existing agent-based framework logiTopp, considering all relevant CEPSPs of the respective area. The application shows that modeling CEP-based transportation volumes of companies based on Open Data is possible, though restrictions apply to the granularity of the data used. Potential is seen in generating a well-founded empirical database of companies’ incoming and outgoing parcel demand structures to improve the model further.
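A minimal sketch of the demand-generation idea, with sector rates as illustrative stand-ins for the literature-based values and hand-written company records standing in for the OpenStreetMap-derived data:

weekly_rates = {  # parcels per employee per week: (incoming, outgoing)
    "retail": (1.2, 3.5),
    "manufacturing": (0.8, 1.6),
    "office": (0.5, 0.3),
}

companies = [  # (name, sector, employees), e.g. derived from OpenStreetMap tags
    ("Company A", "retail", 12),
    ("Company B", "office", 40),
]

for name, sector, staff in companies:
    inc, out = (staff * r for r in weekly_rates[sector])
    print(f"{name}: {inc:.0f} incoming, {out:.0f} outgoing parcels/week")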

Lukas Barthelmes, Mehmet Emre Görgülü, Jelle Kübler, Martin Kagerbauer, Peter Vortisch
