
2021 | Book

Data and Information in Online Environments

Second EAI International Conference, DIONE 2021, Virtual Event, March 10–12, 2021, Proceedings


About this book

This book constitutes the refereed post-conference proceedings of the Second EAI International Conference on Data and Information in Online Environments, DIONE 2021, which took place in March 2021. Due to the COVID-19 pandemic, the conference was held virtually.

DIONE 2021 presents theoretical proposals and practical solutions for the treatment, processing and study of data and information produced in online environments, together with the latest trends in the analysis of network information, social media metrics, data processing technologies and open science.
The 40 revised full papers were carefully reviewed and selected from 86 submissions. The papers are grouped into thematic sessions on evaluation of science in social networking environments; scholarly publishing and online communication; and education in online environments.

Table of Contents

Frontmatter

Evaluation of Science in Social Networking Environment

Frontmatter
Depicting Recommendations in Academia: How ResearchGate Communicates with Its Users (via Design or upon Request) About Recommender Algorithms

Academic social networking sites (ASNSs) increasingly use recommender systems to deliver relevant content to their users. Meanwhile, the profound opacity of the algorithms that filter information makes it difficult to distinguish the elements that may influence, or be influenced by, the interactions within ASNSs. In this article, we investigate how ResearchGate communicates with its users about the recommender algorithms used in the platform. We employ a walkthrough method in two steps (interface analysis and company inquiry using the General Data Protection Regulation (GDPR)) to investigate ResearchGate’s communication strategies (via the design of the platform or upon request) regarding the use of recommender algorithms. The results show six main entities (Researcher, Institution, Research project, Publication, Job and Questions), along with the large amount of metadata involved in the recommendations. We show evidence of the mechanisms of selection, commodification and profiling, and demonstrate in practice the mutual shaping of the different actors (users, content, platform) in their interactions within the platform. We discuss the company’s communication strategy of shying away from providing details on automated profiling.

Luciana Monteiro-Krebs, Bieke Zaman, Nyi-Nyi Htun, Sônia Elisa Caregnato, David Geerts
A Study on the Process of Migration to Training in Brazil: Analysis Based on Academic Education Data

In recent years, a notable phenomenon at the national level is the movement of individuals to other locations at some point in their lives. Several causes motivate these displacements; among the main ones is training, especially academic training. Given this scenario, this work analyzes how Brazilian academic mobility has occurred, using data extracted from the curricula and institutions registered in the Lattes Platform. The extraction of Brazilian curricula resulted in 308,317 records. After extraction, the data were filtered to obtain the relevant research items, the analysis items were treated by removing irrelevant and incomplete terms, and the data were subsequently enriched with the geographical location of each institution. Each level of education was analyzed for the entire group, from the place of birth to the individual’s current professional activity. With the resulting data set, social network measurements were then carried out, characterizing the networks by the displacements between places throughout the academic formation process. The result is a picture of how the scientific formation process unfolds over a long process of training, making it possible to measure the migratory flow of individuals and to identify trends in the formation processes.

Higor Alexandre Duarte Mascarenhas, Thiago Magela Rodrigues Dias, Patrícia Mascarenhas Dias
Indexes for Evaluating Research Groups: Challenges and Opportunities

Several indexes have been proposed to evaluate scientific research productivity. These indexes support management decisions such as career development and the distribution of research funding. In the last two decades, several works have proposed indexes to measure how productive and relevant a researcher’s work is. Some of these indexes aim to evaluate an individual researcher and/or groups of researchers, applying the same formula in both situations. However, in some cases, this application is not straightforward. In this work, we study the main indexes used to evaluate the productivity of scientific researchers, discuss their aspects and organize them according to their characteristics, pointing out the variables that compose each index. We also analyze works that applied the h-index to a group of researchers, highlighting the different ways adopted for this application. Furthermore, we discuss opportunities to extend these metrics in a fairer way to evaluate quantitative and qualitative aspects of groups of researchers. After a literature review of these indexes, we conclude that the h-index can be used in many ways to evaluate groups of scientific researchers. Hence, there are further challenges and opportunities in proposing indexes to evaluate these groups, such as performing experiments for smaller groups, evaluating social aspects and defining better ways of selecting articles to evaluate the research productivity of groups.

Areli Andreia dos Santos, Moisés Lima Dutra
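The h-index at the center of this survey is simple to compute, and one naive way to extend it to a group, pooling all members’ papers, can be sketched as follows (a minimal illustration, not any specific index proposed in the paper):

```python
def h_index(citations):
    """h-index: the largest h such that the researcher has h papers
    with at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def group_h_index(researchers):
    """One naive group extension: pool every member's papers and
    compute a single h-index over the pooled citation counts."""
    pooled = [c for member in researchers for c in member]
    return h_index(pooled)
```

As the abstract notes, applying the same formula to individuals and groups is not straightforward: pooling rewards group size, which is one reason fairer group-level variants are still an open problem.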
Expert Bibliometrics: An Application Service for Metric Studies of Information

Following the current trend of developing software as a service, we propose a software application to handle and support metric studies of information by applying the main laws of Bibliometrics, such as Lotka’s, Bradford’s, and Zipf’s, integrating content and format in a single analysis process. Our application manages metrics according to the relevance of data, dispersion, the rule of three and the square root, in order to generate new indexes by relying on theories and laws already established in the scientific community. In addition, the developed tool has a pleasant aesthetic and demands low cognitive effort from the user; to achieve this, a standardized interface was combined with fluid navigation within the application. The proposed application is suitable for those who work with academic and/or scientific issues, offering quick results compared to manual work, in which a lot of time is spent either creating a system or analyzing spreadsheets. The result is a beta tool, available online at http://expertsbibliometrics.ufsc.br/, with login and password, respectively: ‘ebbc2020@ufsc.br’ and ‘trocar123’.

Adilson Luiz Pinto, Rogério de Aquino Silva, André Fabiano Dyck, Gustavo Medeiros de Araújo, Moisés Lima Dutra
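Of the bibliometric laws the tool applies, Zipf’s law is the easiest to illustrate: the r-th most frequent word in a corpus occurs with frequency roughly proportional to 1/r, so the product of rank and frequency stays roughly constant. A minimal sketch (not the tool’s implementation):

```python
from collections import Counter

def zipf_rank_frequency(tokens):
    """Rank words by frequency; returns (rank, word, frequency)
    tuples, most frequent first."""
    ranked = Counter(tokens).most_common()
    return [(r, w, f) for r, (w, f) in enumerate(ranked, start=1)]

def zipf_products(rank_freq):
    """Under Zipf's law, rank * frequency is roughly constant."""
    return [r * f for r, _, f in rank_freq]
```

Flat `zipf_products` values over a real corpus suggest a Zipfian distribution; large deviations flag text that departs from the law.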
The Role of Artificial Intelligence in Smart Cities: Systematic Literature Review

The increase in urban population has brought climate, technological and economic changes that may negatively affect the quality of life in cities. In response, the concept of a smart city has emerged, referring to the use of novel ICTs to reduce the adverse effects on cities and their inhabitants. Among other technologies, Artificial Intelligence (AI) is used in that context, evolving rapidly and playing an essential role in supporting intelligent city-wide systems in different domains. It is thus beneficial to identify current research advances and get a better understanding of the role AI plays in this particular context. Consequently, there is a need to systematically study the connection between AI and smart cities, focusing on findings that uncover its role, possible applications, and also the challenges of using the concepts and technologies branded as AI in smart cities. Therefore, the paper presents a systematic literature review and provides insights into the achievements and advances of AI in smart cities pertaining to the mentioned aspects.

Ivana Dominiković, Maja Ćukušić, Mario Jadrić

Web Librarianship

Frontmatter
International Initiatives and Advances in Brazil for Government Web Archiving

This study aimed to illustrate government web archiving initiatives in several countries and establish an overview of the Brazilian scenario with regard to the preservation of content published on government websites. In Brazil, although there is a robust set of laws that require the State to manage, provide access to and preserve its documents and information, there is still no policy for the preservation of web content. The result is the erasure and permanent loss of government information produced exclusively on websites. There are several government initiatives for web archiving around the world, which can serve as examples for the implementation of a Brazilian policy. It is concluded that the long-term maintenance of governmental information available on the web is fundamental for public debate and for monitoring governmental actions. To ensure the preservation of this content, the country must define its policy for the preservation of documents produced in the web environment.

Jonas Ferrigolo Melo, Moisés Rockembach
Metadata Quality of the National Digital Repository of Science, Technology, and Innovation of Peru: A Quantitative Evaluation

The National Digital Repository of Science, Technology, and Innovation (ALICIA) of Peru is in charge of harvesting the scientific production of the universities from their institutional repositories. The aim is to determine and evaluate the availability of resources in the repositories, and whether the quality of the metadata harvested by ALICIA is related to the metadata of the institutional repositories. To this end, a non-experimental, descriptive, comparative study was carried out: the data were recovered from ALICIA through its REST API and from the institutional repositories using broken-link validation and web scraping techniques, then stored in a database queried with NoSQL statements. There is 97.7% and 95% availability of resources in public and private institutional repositories respectively, and, on average, the quality of the metadata describing the resources is between 54% and 57%. Notably, despite all the documented interventions and the results found, there are still quality problems in the metadata registering process, considering that universities are responsible for its publication.

Miguel Valles, Richard Injante, Victor Vallejos, Juan Velasco, Lloy Pinedo
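A metadata quality score of the kind reported (between 54% and 57%) can be obtained by checking which descriptive fields of each harvested record are filled in. A toy sketch, assuming a Dublin-Core-like field list; the study’s actual criteria are not specified here:

```python
# Hypothetical required fields; ALICIA's real checklist may differ.
REQUIRED_FIELDS = ["title", "creator", "date", "subject", "description", "rights"]

def metadata_quality(record, required=REQUIRED_FIELDS):
    """Fraction of required fields that are present and non-empty
    in a harvested metadata record (a plain dict here)."""
    filled = sum(1 for field in required if record.get(field, "").strip())
    return filled / len(required)
```

Averaging this score over all harvested records gives a repository-level quality figure comparable to the percentages the abstract reports.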
Elements for Constructing a Data Quality Policy to Aggregate Digital Cultural Collections: Cases of the Digital Public Library of America and Europeana Foundation

Institutions around the world have sought to aggregate different cultural heritage data sources in order to provide society with comprehensive and useful services. This qualitative exploratory study presents a comparative analysis of two paradigmatic institutional aggregators, namely the Europeana Foundation in the European Union and the Digital Public Library of America in the United States. To that end, strategic aggregation documents were identified and quality policy elements analyzed and compared. As a result, nine quality-oriented data aggregation elements were selected: data providers; application process; metadata model; data exchange agreement; copyright license; call for applications; metadata use; technical criteria for data quality and data validation and publication. The elements identified and described are important in formulating processes that make it possible to aggregate digital cultural heritage collections from different Brazilian institutions and provide support for the solution currently being developed in collaboration with the Brazilian Institute of Museums (Ibram).

Joyce Siqueira, Danielle do Carmo, Dalton Lopes Martins, Daniela Lucas da Silva Lemos, Vinicius Nunes Medeiros, Luis Felipe Rosa de Oliveira

Scholarly Publishing and Online Communication

Frontmatter
A Strategy for the Identification of Articles in Open Access Journals in Scientific Data Repositories

This work aims to identify articles published in open access journals among those registered in the curricula of the Lattes Platform. Currently, the curricular data of the Lattes Platform are the source of several studies that adopt bibliometric metrics to understand scientific evolution in Brazil. However, when registering a publication in a curriculum, only basic information about the journal is provided. Therefore, in order to quantify the publications made in open access journals, a strategy that uses DOAJ data is proposed, validating the publications and thus obtaining a process that allows identifying which publications were made in this format of communication. As a result, it was possible to quantify, in an unprecedented way, the set of publications by Brazilians in open access journals.

Patrícia Mascarenhas Dias, Thiago Magela Rodrigues Dias, Gray Farias Moita
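The core of such a strategy is a lookup of each publication’s journal identifier against the DOAJ index. A minimal sketch, assuming the DOAJ data has already been downloaded as a set of ISSNs; the field names are illustrative, not the paper’s schema:

```python
def tag_open_access(publications, doaj_issns):
    """Mark each publication dict as open access when its journal ISSN
    appears in the DOAJ index, normalising hyphens and case first."""
    norm = {issn.replace("-", "").upper() for issn in doaj_issns}
    return [
        dict(pub, open_access=pub.get("issn", "").replace("-", "").upper() in norm)
        for pub in publications
    ]
```

Normalising the ISSN matters in practice because curricula and DOAJ dumps rarely agree on hyphenation.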
How to Spot Fake Journal: 10 Steps to Identify Predatory Journals

Nowadays, with information overload on the web, it is common to come across questionable information and content. This scenario comprises fake news (for news), fake conferences (for events), and predatory journals (for suspicious academic journals). Predatory journals exploit the model of academic productivism, meeting researchers’ need for speed in publishing (publish or perish), a device created by academic immediacy. This study analyzed the theme through methods and tools for the identification of predatory journals. A model is proposed for identifying these journals, using already established websites, databases, and repertoires. After monitoring some emails between January and April 2020, we identified a series of patterns in the forms of communication of these fake editorial boards. As a result of this research, we suggest ten practical steps to identify predatory journals: 1) ISSN; 2) inclusion in predatory journal lists; 3) web page and domain; 4) editorial information (call for papers, previous issues, indexing, plagiarism identification, editorial board members); 5) standards of published papers; 6) DOI and ORCID identification; 7) indexing status; 8) Article Processing Charge; 9) spelling and typographical errors on the web page and in the papers; 10) if none of these actions has any effect and you still have doubts about the seriousness of the journal, consult a specialist on the subject.

Adilson Luiz Pinto, Thiago Magela Rodrigues Dias, Alexandre Ribas Semeler
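The ten steps lend themselves to a simple rule-based screen: each check that fails raises a red flag, and an accumulation of flags points toward step 10, consulting a specialist. A sketch with hypothetical flag names loosely mirroring steps 1 to 9 (not the paper’s model):

```python
# Illustrative red flags derived from checklist steps 1-9.
RED_FLAGS = [
    "issn_invalid",
    "on_predatory_list",
    "suspicious_domain",
    "missing_editorial_info",
    "substandard_papers",
    "missing_doi_orcid",
    "false_indexing_claims",
    "opaque_apc",
    "spelling_errors",
]

def predatory_risk(journal):
    """Return how many red flags a journal raises and which ones,
    given a dict of boolean check results."""
    hits = [flag for flag in RED_FLAGS if journal.get(flag)]
    return len(hits), hits
```

A single flag is rarely conclusive; the checklist’s value is cumulative evidence across independent signals.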
Dissemination Strategies for Scientific Journals on YouTube and Instagram

Strategies for the dissemination of scientific journals on YouTube and Instagram are proposed. The study is exploratory and descriptive, with a qualitative approach to the data collected. First, searches were made in international databases to find scientific production covering the theme of dissemination of scientific journals through social media. Second, strategies for scientific dissemination in social media were established. As a result, six fundamental guidelines for the use of social media were described, covering both strategic and operational aspects: 1) basic purpose of the use of social media; 2) properties of the content; 3) definition of the target audience; 4) strategic and operational aspects of the use of social media and production of content; 5) ethical and legal aspects involving the use of social media and production of content; and 6) crisis management of complaints or negative criticism about the posted content. The fourth guideline also outlined eight steps for scientific journals, and guidelines on the strategic use of YouTube and Instagram for the dissemination of science by scientific journals were listed. It is concluded that there is an incipient number of investigations providing strategies for the use of social media by scientific journals, and that their use for scientific dissemination requires a well-defined strategy to achieve the objectives proposed by the scientific journal.

Mayara Cabral Cosmo, Priscila Machado Borges Sena, Enrique Muriel-Torrado
Digital Humanities and Open Science: Initial Aspects

The digital humanities have acquired increasing visibility, and their field of action has expanded due to growing digitalization and the large volume of data arising from these processes. Collaborative research aligned with the dimensions of open science impacts the scientific production chain. This study aims to identify which aspects of open science are addressed in publications related to the digital humanities. To achieve this general objective, it pursues specific objectives: it identifies the scientific production about open science and digital humanities indexed in the databases Web of Science (WoS), Scopus, and Scientific Electronic Library Online (SciELO); and it describes how open science is approached in each paper of the corpus and how it is related to the digital humanities. It uses the bibliographic manager Zotero to organize the bibliographic data, the software Atlas.ti for the qualitative analysis, and the data mining tools Sobek and Voyant Tools on the data. Of the 13 papers analyzed, only 3 do not use projects or programs related to digital humanities to frame the discussion. The data mining tools do not show the relation between digital humanities and open science. The study shows the importance of data management and the need for guiding documents. It also points to the relevance of metadata standards, of making data compliant with the FAIR principles, and of training researchers and citizens to promote collaboration among institutions and people with diverse backgrounds in order to value open science.

Fabiane Führ, Edgar Bisset Alvarez
Research Data Sharing: Framework of Factors that Can Influence Researchers

Health emergencies contribute to greater sharing of research data among the scientific community; however, there are other factors that can influence researchers to share or retain their data. The objective is to identify the factors that can influence the perception and attitude of researchers toward sharing data sets. A specific objective was to present a framework with the identified factors. This documentary and exploratory research uses the method of content analysis and applies institutional theory, the theory of planned behavior and the research model for data sharing behaviors. The data revealed that indicators of the Cognitive, Normative, Career, Resources and Social pillars influence the perception and attitude toward sharing data. It is hoped that the results will provide insight into the factors that influence researchers to share their data.

Elizabete Cristina de Souza de Aguiar Monteiro, Ricardo César Gonçalves Sant’Ana
The Dilemma of Fake News Criminalization on Social Media

The present research analyzes the dilemma faced in Brazil regarding the criminalization of fake news disseminated mainly on social networks. The growth of social network use around the world promotes more skillful ways of communicating and spreading true or false information in the information society. The objective of this study is to examine the trend of spreading false information, coined fake news, and Brazil’s reaction in seeking to stop this dissemination through a criminalization process, as a way to confront this phenomenon. This study investigates the effects of fake news around the world, including Brazil, seeking to examine social changes and the role of the Brazilian government in confronting this phenomenon. In Brazil, this battle starts even before a criminal process, as it begins with the criminal investigation carried out by the Brazilian police. However, it is through criminal proceedings that Brazil resolves conflicts arising from criminal conduct defined by law. The dilemma is that no Brazilian law makes fake news a crime. In addition, the relationship between the information society and the spread of fake news on social networks is addressed in the research, bringing a reflection on freedom of speech and the individual responsibility of each person. The challenges of police investigation concerning the effects of fake news are also addressed, taking into consideration related crimes committed on social networks, such as copyright infringement and defamation, which benefit from the reach of information on digital communication networks.

Marcio Ponciano da Silva, Angel Freddy Godoy Viera

Online Data Processing Technologies

Frontmatter
Evaluating the Effect of Corpus Normalisation in Topics Coherence

Probabilistic topic models are extensively used to better understand the content of documents. Because topic models are totally unsupervised, statistical and data driven, they may produce topics that are not always meaningful. This work is based on the hypothesis that, since LDA takes into account the number of occurrences of words, we can affect the quality of topics by semantically normalising the text, so that each concept is represented by the same word. We can find a formal description of the lexemes in a text using a knowledge base and extract the several forms of mentioning a lexeme to normalise a corpus. We use the topic coherence metric, as it represents the semantic interpretability of the terms used to describe a particular topic, to quantify the influence of semantic corpus normalisation on topics. The first tests of the semantic text normalisation framework showed promising results, which shall be investigated in depth in the future.

Luana da Silva Sousa, Vinicius Melquiades de Sousa, Rogerio de Aquino Silva, Gustavo Medeiros de Araújo
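The normalisation hypothesis can be illustrated with a toy lexicon mapping every surface form of a concept to one canonical word, so that LDA’s word counts aggregate per concept rather than per surface form. In the paper’s setting the mapping comes from a knowledge base; the lexicon here is purely illustrative:

```python
# Toy lexicon; a real system would derive this from a knowledge base.
LEXICON = {"car": "automobile", "cars": "automobile", "auto": "automobile"}

def normalise(tokens, lexicon=LEXICON):
    """Replace each token by its canonical form when one is known,
    lower-casing everything so counts merge across capitalisations."""
    return [lexicon.get(t.lower(), t.lower()) for t in tokens]
```

After normalisation, the three surface forms contribute to a single word count, which is exactly the effect the hypothesis predicts should change topic quality.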
Fostering Open Data Using Blockchain Technology

While open science is growing in popularity, and publishing as open access in particular is common and has proven its success in today’s research, the sharing of research data in terms of open data still lags behind. INPTDAT provides a platform to share research data in the field of plasma technology. In this context, the project QPTDat aims to increase the incentives to publish, share, and reuse research data, following the FAIR principles and fostering the idea of open data. QPTDat identified the following main success factors: authors need a secure proof of authorship to guarantee that they are credited for their work; a proof of data integrity, to ensure that reused data has not been modified or fabricated; a convincing system for quality curation, to ensure high quality of published data and metadata; and comprehensive reputation management, to give an additional incentive to share research data. This paper discusses these requirements in detail, presents use cases and concepts for their implementation using blockchain technology and finally draws a conclusion regarding the utilisation of blockchain technology in the context of open data, summarising the findings in the form of a research agenda.

Simon Tschirner, Mathias Röper, Katharina Zeuch, Markus M. Becker, Laura Vilardell Scholten, Volker Skwarek
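Of the four success factors, proof of data integrity is the most direct to sketch: anchor a cryptographic digest of the data set on chain, and anyone reusing the data can recompute the digest and compare. A minimal illustration; the on-chain anchoring itself is out of scope here:

```python
import hashlib

def fingerprint(dataset_bytes):
    """SHA-256 digest of a data set; anchoring this digest on a
    blockchain yields a tamper-evident, timestamped integrity proof."""
    return hashlib.sha256(dataset_bytes).hexdigest()

def verify_integrity(dataset_bytes, anchored_digest):
    """True if the data set still matches the anchored digest."""
    return fingerprint(dataset_bytes) == anchored_digest
```

Any modification of the data, however small, changes the digest, so a mismatch proves the reused data differs from what was originally published.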
A Roadmap for Composing Automatic Literature Reviews: A Text Mining Approach

Due to accelerated growth in the number of scientific papers, writing literature reviews has become an increasingly costly activity. Therefore, the search for computational tools to assist in this process has been gaining ground in recent years. This work presents an overview of the current scenario of development of artificial intelligence tools aimed to assist in the production of systematic literature reviews. The process of creating a literature review is both creative and technical. The technical part of this process is liable to automation. For the purpose of organization, we divide this technical part into four steps: searching, screening, extraction, and synthesis. For each of these steps, we present artificial intelligence techniques that can be useful to its realization. In addition, we also present the obstacles encountered for the application of each technique. Finally, we propose a pipeline for the automatic creation of systematic literature reviews, by combining and placing existing techniques in stages where they possess the greatest potential to be useful.

Eugênio Monteiro da Silva Júnior, Moisés Lima Dutra
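Of the four technical steps, screening is the simplest to automate naively: keep papers whose title or abstract matches at least one inclusion term and no exclusion term. A hedged sketch only; the tools the paper surveys use trained classifiers rather than plain keyword matching:

```python
def screen(papers, include_terms, exclude_terms):
    """Keyword-based screening over lower-cased title + abstract.
    Each paper is a dict with 'title' and optional 'abstract' keys."""
    kept = []
    for paper in papers:
        text = (paper["title"] + " " + paper.get("abstract", "")).lower()
        if any(t in text for t in include_terms) and not any(t in text for t in exclude_terms):
            kept.append(paper)
    return kept
```

In a real pipeline this rule-based pass would only pre-filter candidates before a learned screening model makes the final include/exclude call.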
Interactive Domain-Specific Knowledge Graphs from Text: A Covid-19 Implementation

Information creation runs at a higher rate than information assimilation, creating an information gap for domain specialists that usual information frameworks such as search engines are unable to bridge. Knowledge graphs have been used to summarize large amounts of textual data, thereby facilitating information retrieval, but they require programming and machine learning skills not usually available to domain specialists. To bridge this gap, this work proposes a framework, KG4All (Knowledge Graphs for All), that allows domain specialists to build and interact with a knowledge graph created from their own chosen corpus. To build the knowledge graph, a transition-based system model is used to extract and link medical entities, with tokens represented as embeddings built from the prefix, suffix, shape and lemmatized features of individual words. We used abstracts from the COVID-19 Open Research Dataset Challenge (CORD-19) as the corpus to test the framework. The results include an online prototype and the corresponding source code. Preliminary results show that it is possible to automate the extraction of entity relations from medical text and to build an interactive user knowledge graph without a programming background.

Vinícius Melquíades de Sousa, Vinícius Medina Kern
A MapReduce-Based Method for Achieving Active Technological Surveillance in Big Data Environments

Technological surveillance systems stand out as a structured way to assist organizations in monitoring their internal and external technological environments in order to anticipate changes. However, since the volume of available digital data keeps growing, it becomes increasingly complex to keep this type of system running without proper automation. This paper proposes an automated MapReduce-based method for technological surveillance in Big Data scenarios. A prototype was developed to monitor key technologies in specialized portals of the Furniture and Wood sector in order to illustrate the proposed method. The proposal was evaluated by industry experts, and the preliminary results obtained are very promising.

Daniel San Martin Pascal Filho, Douglas Dyllon Jeronimo de Macedo, Moisés Lima Dutra
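The MapReduce pattern behind such a method can be sketched as keyword counting over monitored documents: mappers emit (keyword, 1) pairs, a shuffle groups pairs by key, and reducers sum the counts. This is a single-process illustration under assumed keys; the paper’s method targets distributed Big Data settings:

```python
from collections import defaultdict
from itertools import chain

def mapper(document, keywords):
    """Emit (keyword, 1) for each monitored keyword found in a document."""
    text = document.lower()
    return [(kw, 1) for kw in keywords if kw in text]

def shuffle(pairs):
    """Group emitted values by key, as the MapReduce framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_counts(groups):
    """Sum the grouped values per keyword."""
    return {key: sum(values) for key, values in groups.items()}

def surveillance_counts(documents, keywords):
    """End-to-end map -> shuffle -> reduce over a document stream."""
    pairs = chain.from_iterable(mapper(d, keywords) for d in documents)
    return reduce_counts(shuffle(pairs))
```

Because map and reduce are independent per key, the same logic parallelises across nodes, which is what makes the pattern suitable for continuous monitoring at scale.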
Feature Importance Investigation for Estimating Covid-19 Infection by Random Forest Algorithm

The present work investigates feature importance for estimating COVID-19 infection using a Machine Learning approach. We analyzed 175 features using the Permutation Importance method to assess their importance and list the twenty most relevant ones for the probability of infection. Among all features, the most important were: i) the period between the date of notification and symptom onset, ii) the rate of confirmed cases in the territory of health units in the last 14 days, iii) the rate of discarded and removed cases in the health territory, iv) age, v) traffic flow variables and vi) symptom features such as fever, cough and sore throat. The model was validated and reached an average accuracy of 78.19%, whereas the sensitivity and specificity reached 83.05% and 75.50% respectively in the infection estimate. Therefore, the proposed investigation represents an alternative to guide authorities in understanding aspects related to the disease.

André Vinícius Gonçalves, Ione Jayce Ceola Schneider, Fernanda Vargas Amaral, Leandro Pereira Garcia, Gustavo Medeiros de Araújo
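Permutation importance itself is model-agnostic: shuffle one feature’s column to break its link with the target and measure the resulting drop in the score. A stdlib-only sketch; the paper pairs the method with a random forest, which is omitted here:

```python
import random

def accuracy(model, X, y):
    """Share of rows the model classifies correctly."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Importance of one feature = drop in accuracy after shuffling
    that feature's column across rows."""
    base = accuracy(model, X, y)
    column = [row[feature_idx] for row in X]
    random.Random(seed).shuffle(column)
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, column)]
    return base - accuracy(model, X_perm, y)
```

A feature the model ignores gets importance 0, while shuffling a decisive feature erases much of the accuracy, which is how the twenty most relevant of the 175 features were ranked.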
Neural Weak Supervision Model for Search of Specialists in Scientific Data Repository

With the growing volume of data produced today, more and more users are using different types of systems, such as professional and academic data storage systems. Given the large amount of stored data, finding candidates with appropriate profiles for a particular activity is notably difficult. In this context, expertise retrieval, a branch of information retrieval, tries to solve this problem: given a query, documents are retrieved and used as indirect units of information about the candidates, and aggregation techniques are applied to these documents to generate a score for each candidate. There are several models and techniques for this problem; some have been tested extensively, but the search for specialists in the academic field with neural models has received less research attention, due to the complexity of these models and their need for large volumes of data with relevance judgments or labels for training. Therefore, this work proposes a technique for expanding and generating weakly supervised data in which the relevance judgments are created with heuristic techniques, making it possible to use models that require large volumes of data. In addition, we propose a deep auto-encoder technique to select negative documents and, finally, a ranking model based on recurrent neural networks that was able to outperform all the compared baselines.

Sergio Jose de Sousa, Thiago Magela Rodrigues Dias, Adilson Luiz Pinto
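The idea of heuristic relevance judgments can be illustrated with the simplest possible labeller: the fraction of query terms occurring in a document, thresholded into a weak relevance label. The paper’s actual heuristics are more elaborate; the function and threshold here are illustrative:

```python
def weak_label(query, document, threshold=0.5):
    """Weak relevance judgment: share of query terms found in the
    document; label 1 (relevant) when the share reaches the threshold."""
    query_terms = set(query.lower().split())
    doc_terms = set(document.lower().split())
    score = len(query_terms & doc_terms) / len(query_terms)
    return (1 if score >= threshold else 0), score
```

Labels produced this way are noisy, but generated at scale they supply the large training volumes that neural ranking models need when human judgments are unavailable.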
Propensity to Use an Aerial Data Collection Device in Agricultural Research

Access to information as a success factor in areas of human activity gains relevance with the intensive adoption of ICT. An important sector such as agriculture cannot be left out of this phenomenon. The results obtained in agricultural research show an increasing dependence on the intensive use of data, with greater volume and variety, originating in different environments and collected through different technologies. The objective of this study is to categorize the possibilities of using Aerial Data Collection Devices described in academic studies on agriculture, in order to understand how these devices are being used in research carried out by Brazilian universities. The research is limited to the results of theses and dissertations between the years 2015 and 2017, considering the graduate programs of USP, UNICAMP, and UNESP. Five categories were defined regarding the propensity for the use of Aerial Data Collection Devices: Soil Diagnosis, Plant Diagnosis, Management of Grazing Areas, Crop Management, and Hydrographic Monitoring. It was identified that more research is needed to reflect on how science can help the application of these devices in this strategic productive sector.

Jacquelin Teresa Camperos-Reyes, Fábio Mosso Moreira, Fernando de Assis Rodrigues, Ricardo César Gonçalves Sant’Ana
A Model for Analysis of Environmental Accidents Based on Fuzzy Logic
Case Study: Exxon Valdez Oil Spill

This research presents a fuzzy-logic-based conceptual model for the analysis of environmental accidents, intended to reveal corporate social responsibility initiatives by the companies responsible for the disasters. We studied one of the biggest man-made environmental disasters in history, which occurred on March 24, 1989 in Prince William Sound, Alaska, when the oil tanker Exxon Valdez spilled 10.8 million gallons of American crude oil. The data were collected from the online database of the newspaper The New York Times for the timespan 03/24/1989–09/01/2017. As the central point of the research, we investigate ethical issues based on the mapping of an ethical vocabulary carried out in the corpus of the analyzed documents. The results show that the proposed model can be replicated, after some adjustments, to verify actions in accordance with the principles of corporate social responsibility for other environmental accidents.

Ana Claudia Golzio, Mirelys Puerta-Díaz
Luminiferous Funeral
Journeying in Delusional Pavilions

In response to the growing climate crisis, Luminiferous Funeral is an interdisciplinary Virtual Reality game-art work with a physical sensory perception installation. This work explores the invisible erosion of climate change and environmental breakdown by offering audiences an opportunity to dialogue with nature, and seeks to focus participants on inner communication with oneself about the essential nature of life and death. The relationship to nature is harnessed by our open-source framework, through which we seek collaborative interactivity from others - encouraging them to journey within their local nature space and document their phenomenological relationship with the environment through sound clips, sketches, video, photographs, and other forms of digital media. Through communication with corresponding environmental and climate scientists, and by combining this user-centric data input with known local climate and weather models, the playable game-art continuously evolves - downloadable game patches periodically transform a player's virtual world. With a Zen-inspired ideology, our cloud-based Artificial Intelligence systems employ Natural Language Processing on texts describing Eastern and Western philosophies of nature, power, fear and love, space and environment - crafting responses into poetic expressions and physical interpretations of this ongoing accumulation of climate content used to create the downloadable game.

Sarah Vollmer, Racelar Ho
Design of Artificial Intelligence Wireless Data Acquisition Platform Based on Embedded Operating System

Advances in microprocessor, sensor, and wireless communication technology have driven the emergence and development of wireless data acquisition systems: ad hoc networks formed by large numbers of sensor nodes communicating wirelessly. Data acquisition is an important means of obtaining external information and an indispensable link in building a measurement and control system. With the advent of the network era, traditional data acquisition methods can no longer meet new production requirements. Based on an embedded operating system, the artificial intelligence wireless data acquisition platform designed here focuses on high compatibility and flexible interfaces. Through this platform, wireless data acquisition and transmission over a USB interface enable centralized monitoring and management.

Amei Zhang
Differential Steering Control System of Lawn Mower Robot

To address the inflexible steering and motion of mowing robots, this paper applies a fuzzy adaptive PID control algorithm to the robot's differential steering control system to improve its speed and accuracy. Based on the domestic green-space environment and the overall parameters of the mowing robot, we present the robot's overall design, design the fuzzy adaptive PID algorithm for the differential steering control system, construct the mathematical model of differential steering, and simulate the fuzzy adaptive algorithm in MATLAB/Simulink. The simulation results show a response time and overshoot of 2.42 s and 16.4% for the fuzzy adaptive PID algorithm, versus 7.18 s and 36.3% for conventional PID, so the dynamic performance of the differential steering control system based on fuzzy adaptive PID is better than that based on ordinary PID control.
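The core idea of the fuzzy adaptive scheme described above can be sketched in a few lines: a fuzzy stage scales the base PID gains according to the current error before the usual PID law is applied. The membership functions, rule base, and gain values below are illustrative assumptions, not the authors' MATLAB/Simulink design.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

class FuzzyAdaptivePID:
    """PID controller whose Kp/Ki are rescaled by a tiny fuzzy rule base."""

    def __init__(self, kp=1.0, ki=0.1, kd=0.05, dt=0.01):
        self.kp0, self.ki0, self.kd0, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def _gain_scale(self, error):
        # Two fuzzy sets over |error|: "small" and "large" (invented shapes).
        small = tri(abs(error), -1.0, 0.0, 1.0)
        large = min(abs(error), 1.0)
        # Rule base: large error -> boost Kp for speed, cut Ki to limit windup.
        kp_scale = (1.0 * small + 1.5 * large) / (small + large)
        ki_scale = (1.0 * small + 0.5 * large) / (small + large)
        return kp_scale, ki_scale

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        kp_s, ki_s = self._gain_scale(error)
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp0 * kp_s * error
                + self.ki0 * ki_s * self.integral
                + self.kd0 * derivative)
```

In the reported comparison, this adaptation is what shortens the response time relative to fixed-gain PID: large errors temporarily strengthen the proportional action while damping the integral term.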

Xu Guo, Jinpen Huang, Lepeng Song, Feng Xiong
Innovation of Enterprise Management Mode in the Era of Big Data

With the rapid development of Internet technology, big data has begun to penetrate every sector of society and plays an increasingly positive role in enterprise management. Since the turn of the century, huge volumes of data and a changeable market environment have created a new situation for economic development that places higher demands on enterprise management. Under the influence of big data, advanced technologies have raised the level of informatization in enterprise management, and using information technology to innovate the enterprise management mode has become a broad consensus across industries. Because the era of big data exposes the shortcomings of traditional enterprise management modes, only by conforming to the trend of the times and innovating the management mode can enterprises make full use of the advantages big data brings and quickly identify useful information in massive data; each industry should strengthen its own reform and innovate continuously, so as to make a greater contribution to society. Starting from this reality, the paper first gives a brief overview of the concept of the big data era, then discusses the problems of existing enterprise management modes, and finally puts forward effective strategies for management-mode innovation in the era of big data, so as to create more favorable conditions for enterprise development.

Yong Zhang, Junyi Shi
Research on Automatic Test Technology of Embedded System Based on Cloud Platform

Automated testing is a form of software testing in which test cases previously executed manually by test engineers are run automatically. Embedded systems are now widely used in daily life, and the scale of embedded software keeps expanding, while the demands on development cycles and product quality have not relaxed at all. With the development of embedded systems, we urgently need a testing system that can test and analyze embedded software online, in real time, across the unit, integration, system, and other phases of development, to ensure software quality and reliability. This paper designs and implements an embedded-system automated test platform based on an infrastructure cloud. Built on an infrastructure cloud environment, the platform makes full use of hardware resources and reduces hardware costs.

Xia Wei
Research on Embedded Humanoid Intelligent Control and Instrument Based on PLC

Intelligent control has emerged from the development of computer technology. An embedded humanoid intelligent control system has excellent self-organization and self-adaptation capabilities and performs well in the control of large-scale complex industrial systems. Instruments and meters have penetrated every field of daily life and have become important tools for acquiring information and for understanding and transforming nature; their level of development is now an important indicator of the level of modern science and technology. A programmable logic controller (PLC) is a digital electronic control system that integrates computer, communication, and automatic control technology and was developed to replace traditional relays. Embedded humanoid intelligent control systems have superior self-organization, self-adaptation, and self-learning abilities, and the greatly enhanced arithmetic and data-processing capabilities of modern PLCs make it possible to implement such complex control algorithms on a PLC.

Meiyan Li
Software Development and Test Environment Automation Based on Android Platform

With the advent of the mobile Internet era, the quality and user experience of Android application software have become key factors in market competition. The construction of the software development and testing environment is an important link in the whole development process, and it is imperative to use machines instead of manual labor for complicated tests that require precision; the concept of automated testing arose in response. Different versions of operating systems, databases, network servers, and application services, combined with different system architectures, produce a wide variety of testing environments to construct. The openness of the Android system makes development easy: any developer, whether a professional company or an individual, can build an application. Combining automated testing with manual testing compensates for the shortcomings of automated testing, which requires a large initial investment and dedicated maintenance personnel.

Xiuping Li
Sports Information Communication Model Based on Network Technology

This paper analyzes the communication subjects, communication objects, and media of a sports information service platform and, taking sports statistics as an example, analyzes the characteristics of information sources and sinks. Using the analytic hierarchy process, an evaluation index system for the sports information dissemination model is constructed from four aspects - network sports information content, audience experience, the organization of network sports information, and the network environment for sports information dissemination - and the weight of each index is determined. The results show that a sports information dissemination model based on network technology should serve the public and guide it correctly, taking the construction of a harmonious sports information network as the purpose of its business model; sports websites should be developed, business partnerships expanded, and the commercial operation mode of sports websites repositioned, breaking with old ideas and enriching the connotation of the operating mode.
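The analytic hierarchy process step mentioned above - deriving index weights from pairwise comparisons of the four aspects - can be sketched as follows. The comparison matrix is invented for illustration and does not reproduce the paper's judgments; the geometric-mean method is a standard approximation of AHP's principal-eigenvector weights.

```python
import math

# The four evaluation aspects named in the abstract.
criteria = ["information content", "audience experience",
            "information organization", "network environment"]

# Reciprocal pairwise comparison matrix: A[i][j] is how much more
# important criterion i is than criterion j (values are illustrative).
A = [
    [1,   3,   2,   4],
    [1/3, 1,   1/2, 2],
    [1/2, 2,   1,   3],
    [1/4, 1/2, 1/3, 1],
]

def ahp_weights(matrix):
    """Approximate AHP weights via the geometric mean of each row."""
    gms = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

weights = ahp_weights(A)  # normalized: sums to 1
```

A consistency-ratio check (comparing the matrix's principal eigenvalue against a random index) would normally follow before the weights are accepted.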

Guohua Shao
Analysis of Computer Graphics and Image Design in Visual Communication Design

In visual communication design, graphics and images are the most basic elements, and computer graphics and image techniques - supported by the relevant hardware, processing technology, and software - can further raise the level of visual communication. Graphics and image design has gradually become an important part of the visual communication design process and carries significant informational characteristics. To study the role of computer graphics and image design in visual communication design, this paper analyzes its main contents, application advantages, and design methods. The results show that computer graphics and image design consolidates the effect of visual communication design and offers significant advantages, such as easy modification, product preview, and distinctive imagery; it is widely applied in visual communication, for example in text design, illustration design, and packaging design. These advantages should therefore be maintained so that the design methods of computer graphics and images can be further optimized and their practical value improved.

Hongjie Chen
Relationship Between External Search Breadth and Process Innovation Performance Under the Background of Big Data

With the advent of the era of big data, this article uses the 2014 World Bank survey data on Indian private companies and, from the perspectives of knowledge search channels and organizational learning theory, analyzes the relationship between external search breadth and enterprise process innovation performance; it also explores the moderating effect of the attention allocation process, namely senior management's tenure and financing constraints. The research finds an inverted U-shaped relationship between external search breadth and process innovation performance, with senior management's tenure playing a positive moderating role. To ensure the correctness and reliability of the selected model, the paper also tests the applicability and endogeneity of the inverted U-shaped model on the sample. The conclusions provide a theoretical reference for companies to allocate attention effectively and improve their ability to benefit from external search.

Yanxia Ni, Jiangman Yu

Education in Online Environments

Frontmatter
Construction of Bilingual Teaching Hierarchy Model Based on Flipping Classroom Concept

The flipped classroom is a brand-new teaching mode in which teachers and students interact and communicate to jointly complete troubleshooting and knowledge construction. Its emphasis on personalized autonomous learning, diversified cooperative exploration, and open communication and interaction points out a new direction for deepening the reform of English teaching. Flipping the classroom is, simply put, a change to the teaching mode in which teachers play more of a guiding role throughout a complete teaching activity. Corpora provide a wide range of learning resources for bilingual teaching, while the flipped classroom provides it with a new teaching mode. The construction and application of this model will further promote the deep integration of information technology with the English curriculum and drive the transformation of college English teaching concepts and the reform of the English curriculum system. College teachers should exploit the positive role of technology as much as possible to improve the quality and efficiency of teaching and promote the reform and development of the teaching mode.

Rongle Yao
Construction of English Multimodal Classroom in Colleges and Universities Under Big Data Environment

As college foreign-language cognition becomes multimodal in style and dimension, China's higher-education curriculum reform also places higher requirements on the traditional practice-based class teaching system. It is urgent and necessary for teachers to establish effective teaching concepts and guide classroom teaching toward effective teaching. Under the background of big data, the traditional English teaching mode no longer meets the application requirements of independent colleges, and massive open online courses and microteaching have become trends in education. The introduction of multimodal discourse analysis theory brings new ideas, methods, and teaching designs to English teaching: college English centers on classroom teaching, and the collaborative application of multimodal discourse changes the rigid lecture mode. Based on an analysis of the integration of big data and multimodality in the current college English curriculum, this paper forms an initial theoretical framework for constructing and evaluating the multimodal classroom environment in China. Teachers should draw on their rich working experience and profound knowledge to create a good learning atmosphere for students.

Limei Ma
Diversification of the Evaluation Mode of English Mixed Gold Course Learning Mode in Colleges and Universities Based on Mixed Learning

With the informatization of higher education, hybrid teaching, as a new teaching mode, is gradually being built and applied by many colleges and universities. The hybrid teaching mode not only integrates online learning with the traditional classroom but also improves students' capacity for self-directed learning. This research proceeds on three levels: it first clarifies the connotation of the hybrid teaching mode, then analyzes strategies for constructing it in colleges and universities, and finally, taking English-major teaching as an example, elaborates its application in practice.

Min Zhang
Ecological Education Mode of Foreign Language Based on Computer Network Technology

This paper approaches the ecological teaching of foreign languages in the context of computer networks. It identifies the current lack of an ecological model in college English classroom teaching under the computer network environment and studies the construction of an ecological classroom teaching model for college English in that environment. The theoretical method has reference value for the construction and optimization of the ecological teaching mode under the computer network environment and for the realization and development of foreign language courses.

Yanping Zhang, Weiping Zhang
FCST Synergy Education Model Based on Mobile Internet Technology in Chinese Higher Vocational Colleges

The FCST collaborative education model is a creative idea for cultivating vocational college students, especially against the background of the COVID-19 pandemic: family, society, school, and teacher work together to provide comprehensive education. Based on mobile Internet technology, this paper proposes an interactive "FCST" mode for training higher vocational students through families, schools, teachers, and society. Research methods: literature retrieval; questionnaire survey; sample interviews; logical induction. Research conclusions: mobile Internet technology is mature and can be widely used in higher vocational education in China; in the ideological education of college students, families, schools, teachers, and society jointly construct the "1+2+3+N" FCST collaborative education model.

Fen Tan
Reform and Innovation of Financial Teaching in the Environment of Financial Crisis

How to apply Internet technology in the classroom teaching of applied undergraduate colleges is a question worth studying. In the current context of the domestic financial crisis, teachers create learning videos that students can watch at home or outside the classroom, studying entirely at home or returning to the classroom for further study. Communication between teachers and students over the Internet reduces the sense of pressure of face-to-face contact, and the financial crisis has exposed the shortcomings of traditional classroom teaching, highlighting students' dominant position and encouraging changes in teaching methods. In this environment, educational reform remains a theme of the times, especially the development and innovation of educational methods combined with current reforms of learning forms and the sharing of information resources. On this basis, a new model suitable for undergraduate education in China is put forward, which provides a reference for research on the "Internet+" model of undergraduate education. The paper focuses on the path of financial teaching reform and innovation under the financial crisis.

Pei Sheng
Research on the Cultural Mission and Path Orientation of School Physical Education Under the Background of Core Values

The humanistic view and socialization of sports are the internal and external factors in the development of school sports culture, and the premise and fundamental driving force of the development of school physical education. To adapt to the development of physical education teaching concepts and meet teaching needs, school physical education should establish core values of the physical education curriculum suited to the requirements of contemporary social development. In deepening educational reform, colleges and universities should recognize the social needs of the new era, consciously undertake their own sports-cultural mission, and further consider the practical significance of the current core values of sports on the basis of a renewed understanding of them. The development of school physical education should be guided by the new concept of health, and the teaching process of college physical education should be people-oriented.

Daiyong Li
Backmatter
Metadata
Title
Data and Information in Online Environments
Editor
Edgar Bisset Álvarez
Copyright Year
2021
Electronic ISBN
978-3-030-77417-2
Print ISBN
978-3-030-77416-5
DOI
https://doi.org/10.1007/978-3-030-77417-2
