
2022 | Book

High-Performance Computing Systems and Technologies in Scientific Research, Automation of Control and Production

11th International Conference, HPCST 2021, Barnaul, Russia, May 21–22, 2021, Revised Selected Papers


About this Book

This book constitutes selected revised and extended papers from the 11th International Conference on High-Performance Computing Systems and Technologies in Scientific Research, Automation of Control and Production, HPCST 2021, Barnaul, Russia, in May 2021.

The 32 full papers presented in this volume were thoroughly reviewed and selected from 98 submissions. The papers are organized in topical sections on Hardware for High-Performance Computing and Signal Processing; Information Technologies and Computer Simulation of Physical Phenomena; Computing Technologies in Discrete Mathematics and Decision Making; Information and Computing Technologies in Automation and Control Science; and Computing Technologies in Information Security Applications.

Table of Contents

Frontmatter

Hardware for High-Performance Computing and Signal Processing

Frontmatter
Modeling of Processor Datapaths with VLIW Architecture at the System Level

The article discusses an approach to designing a datapath for a processor with a VLIW architecture. A feature of this architecture is the ability to implement an arithmetic-logic unit with a complex structure, which leads to an explosive growth in the number of possible design options. The choice of the best option is complicated by conflicting requirements for the functionality and the characteristics of the topological implementation of the processor. The article discusses the application of transaction-level modeling to assess the characteristics of the processor on model tasks, followed by a description of the resulting solution at the RTL level. To describe the structure of the arithmetic-logic unit, a modification of the known description is proposed, using four parameters that characterize the number of operations, the number of operands, and the latency in the datapath. The proposed modification makes it possible to describe asymmetric datapaths as part of an arithmetic-logic unit with a complex structure. A consistent description of the high-level programming model and the register-transfer level reduces design time and enables joint design of hardware and support tools.

Ilya Tarasov, Larisa Kazantseva, Sofia Daeva
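The four-parameter datapath description mentioned above could be sketched as a small data structure; the parameter names and example values below are purely illustrative assumptions, not the authors' notation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DatapathSlot:
    """Hypothetical four-parameter description of one VLIW datapath slot
    (illustrative field names; the paper defines its own four parameters)."""
    n_ops: int   # number of operations the slot can issue
    n_src: int   # number of source operands
    n_dst: int   # number of result operands
    latency: int # pipeline latency in cycles

# An asymmetric arithmetic-logic unit: slots with different operand
# counts and latencies, as the modified description allows.
alu = [DatapathSlot(1, 2, 1, 1),   # simple integer add
       DatapathSlot(1, 3, 1, 4)]  # multiply-accumulate
total_latency = max(s.latency for s in alu)
```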
Calculation of Activation Functions in FPGA-Based Neuroprocessors Using the Cordic Algorithm

The article discusses the use of configurable IP-cores that implement the CORDIC algorithm for calculating activation functions in neuroprocessors based on FPGA and VLSI. Currently, a large number of neuron activation functions based on transcendental operations are known in the field of artificial intelligence algorithms. The elementary step of the CORDIC algorithm uses addition/subtraction and shift operations, which are also used for the elementary step of the sequential multiplication algorithm. This makes it possible to develop a unified computational node that, in various signal-switching modes, performs either a multiplication (or multiply-accumulate) step to calculate the output of a neuron, or a CORDIC algorithm step to calculate the activation function. Combining elementary computational nodes into a pipelined module makes it possible to build a VLSI accelerator for neural network computing, which can use simple activation functions or dedicate some elementary nodes to implementing more complex activation functions at the cost of a reduced number of multiply-accumulate operations. This approach expands the capabilities of specialized accelerators implemented in VLSI and, when the considered architecture is used in an FPGA, involves logic cells in calculating transcendental functions.

Ilya Tarasov, Dmitry Potekhin
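The rotation-mode CORDIC iteration described above can be sketched in floating point; the real hardware target is a fixed-point shift/add node, so this is only the algorithmic skeleton:

```python
import math

def cordic_sin_cos(angle, n_iter=32):
    """Rotation-mode CORDIC: each step uses only add/subtract and a shift
    (here written as multiplication by 2**-i), mirroring the unified
    multiply/CORDIC node discussed in the abstract."""
    # Precompute the inverse of the accumulated rotation gain.
    k = 1.0
    for i in range(n_iter):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, angle
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0
        # Micro-rotation by +/- atan(2**-i).
        x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
        z -= d * math.atan(2.0 ** (-i))
    return y, x  # (sin, cos)

s, c = cordic_sin_cos(0.5)
```

Convergence holds for angles up to roughly ±1.74 rad; a hardware implementation would add quadrant reduction first.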
Implementation of LTC2500-32 High-Resolution ADC and FPGA Interface for Photodiode Circuit Harmonic Distortions Analysis

This article describes the implementation of the data exchange interface between an Intel Cyclone IV FPGA and the LTC2500-32 32-bit ADC with a sampling rate of 1 MSps for the photodiode circuit of a fiber-optic gyroscope. Dedicated hardware features that allow an optimal implementation of the interface are considered. Timing analysis is complicated by the fact that the interface does not map directly onto a standard synchronous scheme such as Source Synchronous or System Synchronous. This article proposes a methodology for describing timing constraints for this special case and provides an example of commands in the SDC (Synopsys Design Constraints) format. The result of solving the timing analysis problem and the result of testing the input stage of the ADC using a Rohde & Schwarz SMB100A generator are presented. The resulting spurious-free dynamic range of the ADC input stage is 90.1 dB. Results of using the LTC2500-32 ADC to detect harmonic distortion in the photodiode circuit are presented; the resulting spurious-free dynamic range of the photodiode circuit is 70 dB.

Vadim S. Oshlakov, Ivan G. Deyneka, Artem S. Aleynik, Ilya A. Sharkov
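Spurious-free dynamic range, the figure of merit quoted above, can be computed from a magnitude spectrum roughly as follows. This is a naive O(n²) DFT sketch for illustration only; real measurements use an FFT, windowing, and averaging:

```python
import cmath, math

def sfdr_db(samples):
    """Spurious-free dynamic range in dB: ratio of the fundamental bin to
    the largest other spectral component (DC excluded). Illustrative
    post-processing only; the paper's 90.1 dB figure is a measured result."""
    n = len(samples)
    # Magnitude spectrum over bins 1 .. n/2 - 1 via a direct DFT.
    spec = [abs(sum(samples[k] * cmath.exp(-2j * math.pi * j * k / n)
                    for k in range(n))) for j in range(1, n // 2)]
    fund = max(spec)
    spurs = [m for m in spec if m != fund]
    return 20 * math.log10(fund / max(spurs))
```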
Hardware and Software Suite for Electrocardiograph Testing

Standard-compliant testing of manufactured high-technology products is one of the most important production steps necessary for quality assurance. This paper considers the development of a hardware and software suite for testing portable electrocardiographs for compliance with the international standard IEC 60601-2-25:2011. The suite described here consists of an Arduino Mega 2560 board, a digital-to-analog converter, conditioning amplifiers, and a computer with the LabVIEW 15 visual programming environment installed. The generator program is written in the Arduino C language. To establish compliance of the electrocardiograph's frequency response with the standard, four types of waveforms were generated at different frequencies selected from the standard-compliant band (21 frequency values in total). In the course of the work, portable electrocardiographs from various manufacturers were tested. The paper demonstrates that the hardware and software suite, built on microprocessor equipment and a virtual instrument, enables cost-effective testing of portable electrocardiographs.

Sergei A. Ostanin, Denis Yu. Kozlov, Maksim A. Drobyshev
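Generating the sinusoidal test tones for such a frequency-response check might look like the sketch below; the frequency list and sampling parameters are illustrative assumptions, not the values prescribed by IEC 60601-2-25:

```python
import math

def sine_samples(freq_hz, amp_mv=1.0, fs=1000, n=1000):
    """One test tone for an ECG frequency-response check (hypothetical
    amplitude, sampling rate, and length)."""
    return [amp_mv * math.sin(2 * math.pi * freq_hz * k / fs)
            for k in range(n)]

# Hypothetical subset of test frequencies, Hz (the standard defines 21).
test_freqs = [0.67, 1, 2, 5, 10, 20, 40]
waveforms = {f: sine_samples(f) for f in test_freqs}
```

Each waveform would then be streamed through the DAC and conditioning amplifiers, and the electrocardiograph's recorded amplitude compared against the generated one.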
Digital Device for the Computer Stabilometry Based on the Microcontroller ATmega328

The article is devoted to the development of a digital measuring and computing complex for stabilometric studies based on the ATmega328 microcontroller on the Arduino Uno R3 board. The block diagram of the developed device is given. The electronic components of the device and the main design, technological, technical, and operational characteristics of the stabilometric platform are analyzed. A physical model and an algorithm for calculating the main stabilometric indicators are presented. Trial measurements and calculations have shown that the designed installation fully meets the technical requirements for stabilometry devices. The device makes it possible to record and calculate the spatial and temporal characteristics of patients' movement.

Ravil Utemesov, Elena Shimko
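The core stabilometric indicator, the centre of pressure over the platform, reduces to a force-weighted average of the sensor positions. The sensor geometry and readings below are made up for illustration and are not taken from the paper:

```python
def center_of_pressure(forces, positions):
    """Centre of pressure from load-cell readings: the force-weighted
    mean of the cell positions (a standard stabilometric calculation)."""
    total = sum(forces)
    x = sum(f * p[0] for f, p in zip(forces, positions)) / total
    y = sum(f * p[1] for f, p in zip(forces, positions)) / total
    return x, y

# Hypothetical 0.4 m x 0.4 m platform with a load cell at each corner.
corners = [(-0.2, -0.2), (0.2, -0.2), (0.2, 0.2), (-0.2, 0.2)]  # metres
cop_x, cop_y = center_of_pressure([200.0, 200.0, 300.0, 300.0], corners)
```

Tracking (cop_x, cop_y) over time yields the sway trajectory from which spatial and temporal indicators are computed.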
Real-Time Correlation Processing of Vibroacoustic Signals on Single Board Raspberry Pi Computers with HiFiBerry Cards

The paper discusses the implementation of a time-frequency correlation algorithm for time delay estimation (TDE) on Raspberry Pi single-board computers. The implemented correlation algorithm is based on the Fourier transform with a frequency sweep. In the paper, we analyze the task of real-time acquisition and processing of acoustic signals on Raspberry Pi computers. We then modify the algorithm for computing the time-frequency correlation function to make it applicable in real time and implement it as a C++ object. To increase performance, we implement GPU acceleration using the GPU_FFT and Vulkan FFT libraries. The first library is firmware-based and utilizes the VideoCore IV on the Raspberry Pi 3B+; the Vulkan FFT library was implemented as an alternative compatible with the VideoCore VI on the Raspberry Pi 4B. To estimate the efficiency of applying the graphics cores, we conducted a set of experiments designed to measure the reduction in processing time after offloading the most computationally expensive operation, the inverse Fourier transform, to the GPU. According to the results, we conclude that GPU acceleration is effective and makes real-time processing of acoustic signals possible even on the Raspberry Pi 3B+. GPU acceleration proved to be most crucial when a large Fourier-transform window and a significant number of frequency bands are used.

Vladimir Faerman, Valeriy Avramchuk, Kiril Voevodin, Mikhail Shvetsov
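The frequency-domain cross-correlation at the heart of TDE can be sketched in pure Python as below (a radix-2 FFT for power-of-two lengths). The paper's version adds the frequency sweep, band splitting, and GPU offload, none of which is shown here:

```python
import cmath, math

def fft(a):
    """Recursive radix-2 Cooley-Tukey FFT (len(a) must be a power of two)."""
    n = len(a)
    if n == 1:
        return a
    even, odd = fft(a[0::2]), fft(a[1::2])
    t = [cmath.exp(-2j * math.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + t[k] for k in range(n // 2)] +
            [even[k] - t[k] for k in range(n // 2)])

def ifft(a):
    """Inverse FFT via the conjugation trick."""
    n = len(a)
    return [x.conjugate() / n for x in fft([v.conjugate() for v in a])]

def time_delay(x, y):
    """Delay of y relative to x, in samples, via circular cross-correlation
    computed in the frequency domain."""
    n = len(x)
    xf = fft([complex(v) for v in x])
    yf = fft([complex(v) for v in y])
    r = ifft([a * b.conjugate() for a, b in zip(yf, xf)])
    lag = max(range(n), key=lambda i: r[i].real)
    return lag if lag < n // 2 else lag - n  # unwrap negative lags

x = [0.0] * 64; x[10] = 1.0
y = [0.0] * 64; y[14] = 1.0
delay = time_delay(x, y)
```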
Processing of a Spectral Electromyogram by the Method of Wavelet Analysis Using the Modified Morlet Function

Spectral electromyography, which supplements classical electromyography (EMG), is one of the techniques for diagnosing a person's physical health. Different spectral analysis techniques are suitable for carrying it out; the Fourier transform and wavelet analysis are the basic ones. The Fourier transform has one serious drawback: measurements can be misleading due to the Gibbs phenomenon. According to the authors, the best solution is the Morlet wavelet function, which nevertheless has its own drawbacks. Firstly, compensation for the Gibbs effect is incomplete. Secondly, the basic form of the Morlet wavelet function prevents adapting its properties to different applications. Thirdly, it requires significant computing resources (millions of multiply-accumulate operations per second). The article is devoted to ways of solving these problems, using myosignal processing as an example.

Dmitry Potekhin, Yuliya Grishanovich
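For reference, the classic (unmodified) Morlet mother wavelet that the paper takes as its starting point; the authors' modified function is not reproduced here:

```python
import math, cmath

def morlet(t, omega0=6.0):
    """Textbook Morlet mother wavelet: a complex exponential under a
    Gaussian envelope (omega0 = 6 is the conventional choice)."""
    return (math.pi ** -0.25) * cmath.exp(1j * omega0 * t) * math.exp(-t * t / 2.0)

# The Gaussian envelope localizes the analysis in time; it is this
# windowing that damps, though does not fully remove, Gibbs ringing.
envelope = [abs(morlet(t / 4.0)) for t in range(-16, 17)]
```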
Two-Stage Method of Speech Denoising by Long Short-Term Memory Neural Network

This work is devoted to the development of a method for cleaning single-channel speech audio signals of additive noise. The main feature of the method is the use of a two-stage neural network. At the first stage, wideband processing of the input noisy signal is carried out, which allows effective estimation of noise with a sophisticated spectral structure. At the second stage, a two-component neural network operates on the resulting matrix representations and reveals the quasi-stationary characteristics of the clean and noise signal components in separate overlapping narrow frequency bands. The use of two components of noisy signals, which estimate complex masks of the clean and noise parts, improves the extraction of useful information about the formant structure of speech and effectively cleans the input signal of noise of various natures. Together, this made it possible to develop a new method of speech enhancement that surpasses the best existing solutions in most quality metrics.

Rauf Nasretdinov, Ilya Ilyashenko, Andrey Lependin

Information Technologies and Computer Simulation of Physical Phenomena

Frontmatter
Method for Intermetallide Spatial 3D-Distribution Recognition in the Cubic Ni@Al “Core-shell” Nanoparticle Based on Computer MD-Simulation of SHS

The paper presents the results of computational experiments (CEs) on computer MD-simulation of self-propagating high-temperature synthesis (SHS) in a cubic Ni@Al “core-shell” nanoparticle. In the center of the cubic nanoparticle, 25 × 25 × 25 nm in size, there is a spherical core with a diameter of 25 nm containing Ni atoms; the shell surrounding the core contains Al atoms. SHS ignition was initiated by heating a flat layer with dimensions of 10 × 25 × 25 nm to 1200 K. Based on the results of CEs carried out using the LAMMPS software package, the microkinetics of SHS was analyzed using calculated one-dimensional distributions of the averaged values of temperature and matter density (temperature and matter-density profiles). The one-dimensional profiles were obtained by “integral” averaging in layers with dimensions of 4 × 25 × 25 nm. In addition, the authors have developed and implemented in software a new method for recognizing the spatial 3D-distributions of synthesized intermetallides in the volume of a nanoparticle using pre-calculated sets of 3D-distributions of the matter density. In contrast to one-dimensional matter-density profiles, the new method makes it possible to effectively recognize the structure of formation and transformation of intermetallic phases at the “core-shell” interface of a nanoparticle in a given time interval. On the basis of the performed CEs, the sufficiently high efficiency of this recognition method and its advantage over similar analysis methods implemented in the well-known OVITO software package have been shown.
In other words, the authors have created a sufficiently effective software toolkit for studying the microkinetics of SHS in nano- and micro-sized binary atomic systems (for example, Ni-Al, Ti-Al, etc.), and, in particular, for studying the process of structure and phase formation at the interface boundaries of such heterogeneous systems in a given time interval.

Vladimir Jordan, Igor Shmakov
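The layer-averaged one-dimensional profiles described above amount to binning atoms into slabs along one axis and averaging within each slab. The sketch below assumes a simplified atom record of (x position, per-atom temperature), which is an illustrative format, not the LAMMPS dump layout:

```python
def layer_profiles(atoms, box_x=25.0, layer_x=4.0):
    """One-dimensional temperature profile by 'integral' averaging over
    slabs along x (slab width as in the 4 x 25 x 25 nm layers above).
    atoms: iterable of (x_nm, temperature_K) records (synthetic format)."""
    n_layers = int(box_x // layer_x) + (box_x % layer_x > 0)
    counts = [0] * n_layers
    temp_sum = [0.0] * n_layers
    for x, temp in atoms:
        i = min(int(x // layer_x), n_layers - 1)
        counts[i] += 1
        temp_sum[i] += temp
    temps = [t / c if c else 0.0 for t, c in zip(temp_sum, counts)]
    return temps, counts

temps, counts = layer_profiles([(1.0, 1200.0), (2.0, 1000.0), (5.0, 300.0)])
```

The matter-density profile is obtained the same way, replacing per-atom temperature with mass and dividing by slab volume.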
Ab Initio Computer Modeling of a Diamond-Like 5–7 Bilayer

An ab initio investigation of the atomic structure, thermostability, electronic characteristics and methods of obtaining a novel layered diamond-like nanostructure is carried out using the Quantum ESPRESSO software package, which supports MPI parallelization of computations. The density functional perturbation theory method is used for the computations. This diamond-like bilayer can be obtained by polymerization of two defective 5–7 graphene layers at a pressure exceeding 12 GPa. The diamond-like 5–7 bilayer has a centered rectangular unit cell with the following parameters: a = 0.8261 nm, b = 0.6483 nm, and Z = 32 atoms. The calculated surface density of the novel bilayer is 1.192 mg/m2, which is 60% higher than the corresponding density of ordinary hexagonal graphene. The bilayer structure contains pentagonal and heptagonal prismatic units, the maximum diameter of which is 0.1983 nm. The diamond-like 5–7 bilayer should be stable up to 300 K, although its corrugation occurs at temperatures above 200 K. The bilayer is a semiconductor with a direct bandgap of 2.89 eV. Experimental identification of the 5–7 bilayer is possible using the calculated Raman spectrum.

Vladimir Greshnyakov, Evgeny Belenkov
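The quoted surface density follows directly from the cell parameters and can be checked by hand:

```python
# Surface density of the 5-7 bilayer from its unit cell: Z carbon atoms
# spread over an a x b centered rectangular cell.
M_C = 12.011 * 1.66054e-27   # mass of one carbon atom, kg
a, b = 0.8261e-9, 0.6483e-9  # cell parameters, m
Z = 32                       # atoms per cell
sigma = Z * M_C / (a * b)    # kg/m^2
sigma_mg = sigma * 1e6       # mg/m^2 -> about 1.192, as quoted above
```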
Mathematical Simulation of Coupled Elastic Deformation and Fluid Dynamics in Heterogeneous Media

Mathematical simulation of deformation processes occurring in fluid-saturated media requires solving multiphysical problems. We consider a multiphysical problem as a system of differential equations with special conjugation conditions for the physical fields on the interfragmentary surfaces. The interfragmentary contact surface between the solid and liquid phases is a 1-connected contact surface. Explicit discretization of the interfragmentary contact surfaces leads to an increase in the number of degrees of freedom. To treat the problem, we propose a hierarchical splitting of physical processes. At the macro-level, the process of elastic deformation is simulated, taking into account the pressure on the inner surface of fluid-saturated pores. At the micro-level, to determine the fluid pressure inside the pores, the Navier-Stokes equations are numerically solved under external mechanical loading. For coupling the physical fields, we use the matching conditions for the normal components of the stress tensor on the interfragmentary surfaces. Mathematical simulation of the coupled processes of elastic deformation and fluid dynamics is a resource-intensive procedure. In addition, a computational scheme has to take into account the specifics of the multiphysical problem. We propose modified computational schemes of multiscale non-conforming finite element methods. To discretize the mathematical model of the elastic deformation process, we apply a heterogeneous multiscale finite element method with polyhedral supports (macroelements). To discretize the Navier-Stokes equations, the non-conforming discontinuous Galerkin method with tetrahedral supports (microelements) is used. This strategy makes it possible to apply a parallel algorithm to solve the elastic deformation and fluid dynamics problems under the assumption of the hydrophobicity of macroelement surfaces. In computational experiments, we deal with idealized models of heterogeneous natural media. The developed computational schemes make it possible to accelerate the solution of such problems by more than a factor of five.

Ella P. Shurina, Natalya B. Itkina, Anastasia Yu. Kutishcheva, Sergey I. Markov
Numerical Modeling of Electric and Magnetic Fields Induced by External Source in Frequency Domain

To study effective electromagnetic properties of heterogeneous media, it is important to know both electric and magnetic field distributions. In this paper, we consider the harmonic electromagnetic field induced by an external current source in a heterogeneous medium. We propose an approach that couples the electric field and the magnetic field via special boundary conditions for the magnetic field strength, which act as a source of the magnetic field. Discretization of the mathematical model is performed by the vector finite element method in a space with partial continuity, H(curl). Electric and magnetic fields are calculated on the same unstructured tetrahedral mesh. We analyze the behavior of the magnetic field obtained by means of our approach at the interfaces separating the media and contrasting conductive or magnetic inclusions.

Nadezhda Shtabel, Daria Dobroliubova

Computing Technologies in Discrete Mathematics and Decision Making

Frontmatter
Improving the Heterogeneous Computing Node Performance of the Desktop Grid When Searching for Orthogonal Diagonal Latin Squares

The main goal of the work was to create a parallel application using a multithreaded execution model that allows the most complete and efficient use of all available computing resources. The main attention was paid to maximizing the performance of the multithreaded computing part of the application and to more efficient use of the available hardware. During development, the effectiveness of various methods of software and algorithmic optimization was evaluated, taking into account the behavior of a highly loaded multithreaded application designed to run on systems with a large number of parallel computing threads. The problem of loading all currently available computing resources was solved, including the dynamic distribution of the involved CPU cores/threads and the computing accelerators installed in the system.

Alexander M. Albertian, Ilya I. Kurochkin, Eduard I. Vatutin
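The objects being searched for, orthogonal diagonal Latin squares, can be verified in a few lines of Python. The order-4 pair below is a known small example for illustration, unrelated to the paper's search space:

```python
def is_diagonal_latin(sq):
    """A diagonal Latin square: every row, every column, and both main
    diagonals are permutations of 0..n-1."""
    n = len(sq)
    full = set(range(n))
    rows = all(set(r) == full for r in sq)
    cols = all({sq[i][j] for i in range(n)} == full for j in range(n))
    diag = {sq[i][i] for i in range(n)} == full
    anti = {sq[i][n - 1 - i] for i in range(n)} == full
    return rows and cols and diag and anti

def are_orthogonal(a, b):
    """Orthogonality: superimposing the squares yields every ordered
    pair of symbols exactly once."""
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

A = [[0, 1, 2, 3], [2, 3, 0, 1], [3, 2, 1, 0], [1, 0, 3, 2]]
B = [[0, 1, 2, 3], [3, 2, 1, 0], [1, 0, 3, 2], [2, 3, 0, 1]]
ok = is_diagonal_latin(A) and is_diagonal_latin(B) and are_orthogonal(A, B)
```

The grid application distributes such checks over millions of candidate squares across the desktop grid's CPU threads and accelerators.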
Visual Metamodeling with Verification Based on Surrogate Modeling for Adaptive Computing

The issue of creating a methodology for designing computing systems for space probes is discussed in this paper. According to the authors, it is the modern paradigm of metamodeling that makes it possible to reduce labor costs when creating software and hardware systems for deep space scientific missions. The paper proposes a semantic approach based on metamodeling. The approach constructs a conceptually new low-level metamodel by combining two well-known notations: IDEF0 graphic symbols and flowcharts. This solution makes it possible to practically eliminate the disadvantages inherent in each of the methodologies separately. A new methodology and rules for its application based on the semantic element have been obtained, and an example of a metamodel is presented. The proposed approach allows designing and simulating the operation of the on-board computing systems of a space probe. We assume that this virtually eliminates the shortcomings of the initial models. In our opinion, this will also reduce labor costs in the design of various on-board computing systems, increasing the quality and unambiguity of the work. The ideology of metamodeling has a peculiarity: the created model can fully comply with the rules yet fail to provide the required parameters. Surrogate modeling can be the solution to this problem, and an iterative process to achieve the required metamodel parameters is proposed.

Alexander A. Lobanov, Aleksey N. Alpatov, Irina P. Torshina
Recognition Algorithms Based on the Selection of 2D Representative Pseudo-objects

The building of recognition algorithms (RA) based on the selection of representative pseudo-objects, providing a solution to the problem of recognizing objects represented in a big-dimensionality feature space (BDFS), is described in this article. The proposed approach is based on forming a set of 2D basic pseudo-objects and determining a suitable set of 2D proximity functions (PF) when designing an extreme RA. The article contains a parametric description of the proposed RA, presented as a sequence of computational procedures. The main ones are procedures for determining: the functions of differences among objects in a two-dimensional subspace of representative features (TSRF); groups of interconnected pseudo-objects (GIPO) in the same subspace; a set of basic pseudo-objects; and the functions of differences between the basic pseudo-objects in a TSRF. Groups of interconnectedness and basic PF, as well as the integral recognizing operator with respect to basic PF, are also determined. The results of a comparative analysis of the proposed and known RA are presented. The main conclusion is that the proposed approach makes it possible to switch from the original BDFS to a space of representative features (RF) of significantly lower dimension.

Olimjon N. Mirzaev, Sobirjon S. Radjabov, Gulmira R. Mirzaeva, Kuvvat T. Usmanov, Nomaz M. Mirzaev
Applied Interval Analysis of Big Data Using Linear Programming Methods

The article describes the problems of mathematical modeling of processes using an experimental database and a knowledge base. This research relates to building multidimensional dependencies. It uses regression analysis and machine learning techniques within the framework of probability theory and mathematical statistics. A large observation table often cannot be processed on a single computer. The analysis of such data requires parallel computations, and in this article it is carried out by the method of interval mathematics, which allows performing such computations. The analysis of dependences that are linear in their parameters is reduced to solving systems of interval linear algebraic equations. Among the approaches to studying such systems known in the literature, one was chosen that takes into account the so-called “single set of solutions”. This method provides a guaranteed estimate of the required dependencies and allows the use of linear programming in some cases. Using this method, interval forecasts of the output variable of the modeled process are calculated. Interval estimates of the parameters of the studied dependence are also obtained. Two methods of sequential and parallel analysis of a large database are proposed, using methods for solving large-scale linear programming problems. The optimality of the algorithms is substantiated using the well-known technique of removing constraints in large-dimension optimization problems. The research was carried out on model processes and on real statistics of road traffic accidents in England.

Nikolay Oskorbin
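Once interval estimates of the parameters are available, the interval forecast of a linear dependence reduces to simple endpoint arithmetic, as sketched below. This is only the forecasting step; the paper obtains the parameter intervals themselves by solving the underlying interval systems with linear programming:

```python
def interval_forecast(param_intervals, x):
    """Interval forecast of y = sum(a_i * x_i) when each parameter a_i is
    only known to lie in [lo_i, hi_i]."""
    lo = sum(min(l * xi, h * xi) for (l, h), xi in zip(param_intervals, x))
    hi = sum(max(l * xi, h * xi) for (l, h), xi in zip(param_intervals, x))
    return lo, hi

# Hypothetical parameter intervals and one observation.
lo, hi = interval_forecast([(1.0, 2.0), (-1.0, 1.0)], [3.0, 2.0])
```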
Parallel Computing in Problems of Classification of Teenagers Based on Analysis of Digital Traces

This paper considers a model for classifying high school students by digital traces obtained from the VKontakte social network. The classification is based on the membership of social network users in communities, the number of which is in the hundreds of thousands, which leads to big data arising in the analysis. The problem of working with big data is solved by parallelizing the computations. The classification model was developed with the aim of recovering information from the digital traces of social network users. On the basis of the trained model, users of the VKontakte social network were identified by place of residence (village or city of the Altai Territory) and age (9th or 11th grade) among teenagers whose digital traces contain incomplete information on the grade and place of study. The best prediction accuracy of the trained model was on the order of 0.9. In the future, it is planned to build an extended classification model by including users of other age groups in the data sample and to develop a managerial decision support system for the university's admissions campaign.

Vera Zhuravleva, Anastasiya Manicheva, Denis Kozlov
Identification of Key Players in a Social Media Based on the Kendall-Wei Ranking

The paper concerns studying the effectiveness of the Kendall-Wei ranking procedure applied to the problem of identifying opinion leaders in social media. To this end, the authors conducted experiments on both model data and real data, using an original technique for constructing social graphs, and compared the sets of key players obtained by popular methods as well as by the procedure under consideration. The comparison results demonstrate that in some instances the Kendall-Wei ranking procedure outperforms other methods of ranking nodes in a social graph, which plays a significant part in solving the problem of detecting major actors in social networks.

Alexander A. Yefremov, Elena E. Luneva
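Kendall-Wei ranking scores nodes by the principal eigenvector of the adjacency ("wins") matrix, which power iteration approximates. The 4-node example below is a toy tournament for illustration; the paper applies the procedure to large social graphs:

```python
def kendall_wei(adj, n_iter=100):
    """Kendall-Wei ranking: iterate s <- A s with max-normalisation, which
    converges to the principal eigenvector for a strongly connected graph."""
    n = len(adj)
    s = [1.0] * n
    for _ in range(n_iter):
        s = [sum(adj[i][j] * s[j] for j in range(n)) for i in range(n)]
        norm = max(s) or 1.0  # guard against an all-zero vector
        s = [v / norm for v in s]
    return s

# Toy tournament: 0 beats 1 and 2; 1 beats 2 and 3; 2 beats 3; 3 beats 0.
adj = [[0, 1, 1, 0],
       [0, 0, 1, 1],
       [0, 0, 0, 1],
       [1, 0, 0, 0]]
s = kendall_wei(adj)
```

Note that players 0 and 1 have the same raw win count, yet the iterated scores separate them by the strength of the opponents beaten.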
Using Time Series and New Information Technologies for Forecasting Sugarcane Production Indicators

Sustainable development of the agricultural sector plays an important role in assuring the national security of any state. The article reveals the possibilities of using R code for statistical processing and visualization of analysis results. The characteristics of the ARIMA model for forecasting agricultural production are described. An ARIMA model selection algorithm is presented for a specific time series describing sugarcane production in one state of the Federative Republic of Brazil. Data processing and visualization of the main stages of building the ARIMA model are coded in the R language. The directions and advantages of the new information technologies used in agriculture within the framework of the fourth industrial revolution are shown. The results of the study show that forecasting with the ARIMA model can be successfully applied to time series describing agricultural production. The conclusions and results of this study can be used to develop sustainable agricultural practices.

Bruno Pissinato, Carlos Eduardo de Freitas Vian, Tatiana Bobrovskaya, Caroline Caetano da Silva, Alex Guimarães Pereira
Software and Methodology for the Design of System Dynamics Models Based on the Situation-Activity Approach

An original approach to the design of system dynamics models as a projection of situation-activity analysis, expanding the possibilities of predictive-analytical research, is proposed for consideration. System dynamics models in this aspect are applied to human reasoning with reference to specific objects of activity. This makes it possible to establish specified criteria for assessing the state and the development trajectory of the system being modeled. The article proves the hypothesis that the methods of knowledge representation in situational, expert, and system dynamics models are similar. At the same time, the conceptual structures of situation-activity analysis incorporate the tools inherent in the system dynamics language. Conceptual structures are the core of the design of system dynamics models in the levels-and-flows notation. The result of situation-activity design is a model represented in a conceptual knowledge-representation language whose basic element is the act of doing. The conceptual structures of activity acts can be used to build production rules for an expert system. It is also possible to extract partial representations from the conceptual structures of activity acts: process plans and regularities. On the aggregate of these plans, a graphical image for building system dynamics models is implemented. Since constructing conceptual structures is very challenging, various ways of dealing with this problem were explored. Eventually, the best solution proved to be a software toolkit, “Designer + Solver + Interpreter”, which allows one to visualize conceptual structures, implement knowledge bases for expert and system dynamics models, and conduct research on the completeness and adequacy of the model.

Aleksey Sorokin, Elena Brazhnikova, Liliya Zheleznyak

Information and Computing Technologies in Automation and Control Science

Frontmatter
Architecture of an Intelligent Network Pyrometer for Building Information-Measuring and Mechatronic Systems

The work is devoted to the creation of an autonomous device based on an embedded computing platform that supports the network organization of information-measuring and mechatronic complexes. On the basis of a three-level information model of a specialized pyrometer, it is shown that endowing the device with intellectual capabilities reduces the messaging traffic within the complex, and the use of modern network technologies makes it possible to build complex information and control systems. The features of intelligent network devices and their interaction protocols are demonstrated by the example of an experimental complex for the study of structural phase transitions in materials. The development of the intelligent network pyrometer is based on the LR1-T digital spectrometer and the NVIDIA Jetson Nano embedded system. With the help of software, remote control of the device was realized, its web interface was built, and the inference of a neural network trained on the results of pyrometric studies of phase transformations in thin films was organized. For streaming processing of spectral data, the GPU's CUDA cores are used.

Alexey Dolmatov, Pavel Gulyaev, Irina Milyukova
Implementation of a Network-Centric Production Storage System in Distributed High-Performance Computing Systems

The main components of a network production data storage system in distributed high-performance computing systems are defined. A hybrid method of organizing a data storage network is considered. A feature of distributed data storage is the semantic distribution of data blocks. The interoperability of local data storage (LDS) and data export utilities consists in transforming the low-level organization of LDS data. When organizing a resource center, the LDS plays the role of a DBMS, and the data export utility serves to organize data in a form suitable for processing by the main storage module. Thus, the interoperability of local data stores and data export utilities implies the formalization of DBMS (LDS) data for subsequent processing. The interaction data of the storage system when accessing computing clusters are reduced to the form of a JSON file, divided into “entities”, “actions” and “links”.

Anna Bashlykova
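A hypothetical shape for the interaction file described above: a JSON document split into "entities", "actions" and "links". The field names inside each record are illustrative assumptions, not the paper's schema:

```python
import json

# Illustrative interaction document exchanged with a computing cluster.
interaction = {
    "entities": [{"id": "dataset-1", "type": "LDS-block"}],
    "actions":  [{"id": "a1", "op": "export", "entity": "dataset-1"}],
    "links":    [{"source": "a1", "target": "dataset-1"}],
}
payload = json.dumps(interaction, indent=2)   # serialized for transfer
restored = json.loads(payload)                # parsed on the cluster side
```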
Internet of Things for Reducing Commercial Losses from Incorrect Activities of Personnel

Examples of using the Internet of Things in ERP systems to reduce commercial losses from incorrect activities of personnel are considered. A classification of business processes for reducing such losses using the Internet of Things is given. Three methods are proposed: excluding the human factor from the implementation of business processes using the Internet of Things, managing the human factor, and a combination of both. It is noted that the risks associated with incorrect actions of personnel are recognized as the main risks in a company's activities. The proposed classification of business processes in terms of reducing commercial losses from incorrect activities of personnel on the basis of the Internet of Things makes it possible to identify the business processes that require control of incorrect personnel actions. Applying the developed methods based on the Internet of Things to each selected category of business processes makes it possible to minimize potential losses.

Elena Andrianova, Gayk Gabrielyan, Irina Isaeva
Method of Constructing the Assigned Trajectory of a Multi-link Manipulator Based on the “Programming by Demonstration” Approach

The article discusses the construction of control for an anthropomorphic (multi-link) manipulator of a robotic complex using the “Programming by Demonstration” approach. The approach relies on a developed methodology for building a knowledge base from the data obtained by the sensors of a copying suit while the operator trains the robotic complex. The structure of the knowledge base and the data processing mechanism for filling it are given, together with an example procedure for averaging the data obtained from the copying suit during such “training” of the manipulator. An approach to constructing the manipulator trajectory from the knowledge base data, built on the ideas of terminal control, is considered. In addition to precise execution of the manipulator movements, the proposed approach ensures that the initial and final conditions imposed on the motion are fulfilled. The method makes it possible to control the manipulators of a robotic complex without building or using a complex and often inaccurate mathematical model. The developed technique was tested on the SAR-401 anthropomorphic robot.
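The averaging step mentioned in the abstract can be pictured as a point-wise mean over several recorded demonstrations. The sketch below assumes demonstrations sampled at the same time steps; the paper’s actual averaging procedure may differ.

```python
def average_trajectories(demos):
    """Point-wise mean of several demonstrated joint trajectories.

    demos: list of demonstrations; each is a list of joint-angle tuples
    sampled at the same time steps (an assumption of this sketch).
    """
    n = len(demos)
    length = len(demos[0])
    assert all(len(d) == length for d in demos), "demos must be aligned"
    averaged = []
    for t in range(length):
        # Average each joint coordinate across all demonstrations.
        point = tuple(
            sum(d[t][j] for d in demos) / n
            for j in range(len(demos[0][t]))
        )
        averaged.append(point)
    return averaged

# Two demonstrations of a 2-joint motion, three time steps each.
demo_a = [(0.0, 0.0), (0.5, 0.2), (1.0, 0.4)]
demo_b = [(0.0, 0.2), (0.5, 0.4), (1.0, 0.6)]
mean_path = average_trajectories([demo_a, demo_b])
```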

Vadim Kramar, Vasiliy Alchakov, Aleksey Kabanov
Developing a Microprocessor-Based Equipment Control Panel for Rifle Sports Complexes

The article addresses the development of microprocessor-based range equipment control panels. Such a panel controls firing range equipment consisting of stationary and movable pop-up targets in target shooting sports centers, as well as auxiliary equipment, alarms and communication with other facilities; the solution is intended for rifle sports complexes. A brief explanation is given of how signals are received and transmitted over radio communication links, and over a power cable running from the control panel at the command observation post to the receiving radio station at a switchgear. Control and information signals are separated in the reception module of the switchgear by seven parallel filters tuned to one lead frequency and six command frequencies. The switchgear recognizes two command sequences: one energizes the input unit relay corresponding to the object’s address; the other sets the duration of command execution. The energized relay supplies power to the controlled object through its closed contacts. The identified advantages of microprocessor-based range equipment control panels include flexible equipment control, acquisition of reliable information on the status of all relevant objects, step-by-step querying and on-screen information display.
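The two-part command scheme can be sketched as a decoder that maps a tone sequence to an object address and a duration code. The frequencies and the four-tone framing below are illustrative assumptions, not values from the paper.

```python
# Hypothetical decoder for the two-part command scheme: the first
# sequence selects a relay by object address, the second sets how long
# the relay stays energized. All frequency values are assumptions.
LEAD_FREQ = 1000  # Hz, the single lead (pilot) frequency
COMMAND_FREQS = {1100: 1, 1200: 2, 1300: 3, 1400: 4, 1500: 5, 1600: 6}

def decode_command(tones):
    """tones: [lead, address_tone, lead, duration_tone] in Hz.

    Returns (object_address, duration_code).
    """
    if tones[0] != LEAD_FREQ or tones[2] != LEAD_FREQ:
        raise ValueError("missing lead frequency")
    address = COMMAND_FREQS[tones[1]]
    duration_code = COMMAND_FREQS[tones[3]]
    return address, duration_code

# Address 3, shortest duration code.
addr, dur = decode_command([1000, 1300, 1000, 1100])
```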

Ishembek Kadyrov, Nurzat Karaeva, Zheenbek Andarbekov, Khusein Kasmanbetov
Methods for Domain Adaptation of Automated Systems for Aspect Annotation of Customer Review Texts

Today, social media and instant messengers are widely used information channels with a powerful impact on public opinion. Rapidly identifying the main topics and the generalized content of a set of texts therefore becomes an important task; in effect, this is the problem of aspect-based annotation of texts interrelated by some of the topics presented in them. A set of texts can contain data on a variety of semantic categories, so it is of interest to obtain annotations for categories extracted from the texts automatically. This task depends essentially on the specific subject area, which makes quick and effective adaptation of existing models to new domains highly relevant. This paper proposes a hybrid method of aspect-oriented analysis and text annotation based on data extracted both from common dictionaries and from domain-oriented unstructured texts. The introduced characteristic functions and numerical metrics make it possible to assess the significance of individual terms within the entire domain. An algorithm for text categorization is proposed, based on selecting semantic clusters in a domain semantic graph, along with a statistically grounded method for highlighting the most significant text fragments to include in the annotation. The results of experiments that make it possible to evaluate the quality of the algorithms are presented.
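The idea of scoring term significance within a domain can be illustrated with a simple frequency-ratio sketch: a term matters more when it is much more frequent in domain texts than in a general corpus. This is only an illustration of the idea; the paper’s own characteristic functions and metrics are not reproduced here.

```python
from collections import Counter

def term_significance(domain_tokens, general_tokens, smoothing=1.0):
    """Score each domain term by the ratio of its relative frequency in
    the domain corpus to its (smoothed) relative frequency in a general
    corpus. A TF-ratio sketch, not the paper's actual metric."""
    dom = Counter(domain_tokens)
    gen = Counter(general_tokens)
    n_dom, n_gen = len(domain_tokens), len(general_tokens)
    return {
        term: (dom[term] / n_dom) / ((gen[term] + smoothing) / n_gen)
        for term in dom
    }

# Toy corpora: "lstm" is domain-specific, "model" and "training" are not.
domain = "lstm training lstm epochs model lstm".split()
general = "model training weather news sports".split()
scores = term_significance(domain, general)
top = max(scores, key=scores.get)
```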

Elena Kryuchkova, Alena Korney
Neuro-Computer Interface Control of Cyber-Physical Systems

The paper proposes an approach to, and solves, the problem of controlling a robot via a non-invasive neural interface built around an original convolutional neural network; the general scheme and working principle are described. The authors present the principles of the original convolutional network and an approach to its design, including a one-dimensional convolutional model based on the principles of the human inner ear, and propose the structure of a software package. Algorithms for analyzing human brain evoked potentials used in the design of brain-computer interfaces are compared. The experiment employed the Fourier transform algorithm and the multidimensional synchronization index (MSI) algorithm in various modifications: analysis of the initial signal, of the accumulated evoked potential, and of the accumulated evoked potential spectrum. Linear correlation with a user-derived reference signal sample was also evaluated, along with several variations of wavelet filtering. In addition, model signals combining white noise with a harmonic oscillation simulating a steady-state visual evoked potential were used. The best results (error rate below 10% at an analysis time of 3 s) were obtained for the MSI of the original signal; the MSI with the Fourier transform; the MSI using the wavelet-filtered result of coherent accumulation as a reference, combined with the linear correlation coefficient; and the MSI using as a reference the evoked potential recovered after the wavelet transform.
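The coherent accumulation step mentioned above can be sketched in a few lines: time-locked epochs are averaged sample by sample, so phase-locked evoked activity adds up while zero-mean noise cancels. This is a sketch of the accumulation step only, not the paper’s full MSI pipeline.

```python
def coherent_average(epochs):
    """Sample-by-sample average of time-locked EEG epochs.

    epochs: list of equal-length sample lists, each aligned to the
    stimulus onset (an assumption of this sketch).
    """
    n = len(epochs)
    length = len(epochs[0])
    return [sum(e[t] for e in epochs) / n for t in range(length)]

# Constant "evoked" signal of +1 per sample, with alternating-sign
# noise (+1 on even epochs, -1 on odd) that cancels in the average.
epochs = [[1 + (-1) ** k for _ in range(4)] for k in range(2)]
avg = coherent_average(epochs)
```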

Yaroslav Turovskiy, Daniyar Volf, Anastasia Iskhakova, Andrey Iskhakov
Software Development for Agricultural Tillage Robot Based on Technologies of Machine Intelligence

The article is devoted to software development for robots designed for spot mechanical tillage. The need to develop software for a digital twin of the agro-robot using artificial intelligence technologies is dictated by farmers’ demand for its practical use. The article describes four high-level nodes of the agricultural robot: the control unit, an NVIDIA Jetson Nano computing module; the executive mechanism, a 6-axis desktop robotic arm; the machine vision unit, built around an Intel RealSense camera; and the chassis unit, crawler tracks with their control drivers. The software is implemented independently of the robot’s manufacture, so the developer faces the task of minimizing the risk of deploying it on the manufactured robot. The developed software fully meets the customer’s requirements. First, the digital robot twin takes into account the environmental conditions and the terrain in which the prototype, and later the serial device, will operate. Second, the use of ROS (Robot Operating System) makes it possible to transfer the digital model to the physical one (prototype and serial robot) with minimal effort and without changing the source code. Third, accounting for the physical environment when programming the digital twin made it possible to build mathematical models of device control that are close to reality, and to debug and test them.

Roman N. Panarin, Lubov A. Khvorova

Computing Technologies in Information Security Applications

Frontmatter
Implementing Open Source Biometric Face Authentication for Multi-factor Authentication Procedures

This study proposes a solution that extends web information systems with single-factor authentication by introducing an additional authentication factor based on biometric face recognition. The design of the proposed solution and its main operation steps are presented and discussed. The solution utilizes the standard multimedia functionality of popular web browsers and supports available or built-in image capturing devices (photo and web cameras). Robust algorithms from the open source computer vision library are used for face image processing and analysis. Experimental testing and validation of the face localization and recognition algorithms are conducted on image sets produced with real-world conditions in mind. Experimental results demonstrate high effectiveness, with a success rate of 80–93% for the solution combining the local binary pattern face localization algorithm with the local binary pattern histogram face recognition algorithm.
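The local binary pattern operator underlying both algorithms named above is compact enough to sketch directly: each of the 8 neighbors of a pixel is compared with the center, and the comparison bits are packed into one byte. The neighbor ordering below is one common convention; implementations vary.

```python
def lbp_code(patch):
    """Basic local binary pattern code of a 3x3 grayscale patch.

    Compares the 8 neighbors with the center pixel, clockwise from the
    top-left corner, and packs the >= comparisons into an 8-bit code.
    """
    center = patch[1][1]
    neighbors = [
        patch[0][0], patch[0][1], patch[0][2],  # top row
        patch[1][2],                            # right
        patch[2][2], patch[2][1], patch[2][0],  # bottom row, reversed
        patch[1][0],                            # left
    ]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:
            code |= 1 << bit
    return code

patch = [
    [6, 5, 2],
    [7, 6, 1],
    [9, 8, 7],
]
code = lbp_code(patch)
```

A histogram of these codes over image cells is what the LBPH recognizer compares between faces.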

Natalya Minakova, Alexander Mansurov
Application of Recurrent Networks to Develop Models for Hard Disk State Classification

This article discusses the possibilities of using machine learning technologies to classify the state of hard disks. Machine learning is applied via recurrent neural networks, specifically the SimpleRNN and LSTM (Long Short-Term Memory) architectures. Classification models are developed on a data set formed from the indicator values of SMART (Self-Monitoring, Analysis and Reporting Technology). Aspects of forming a representative data set of SMART sensor indicators containing the information relevant to the classification model are analyzed. The way changes in SMART sensor indicators are recorded suggests treating them as multidimensional time series. Binary and multiclass classification models are proposed that contain two LSTM layers, as well as a Dropout layer and a Dense layer; the parameters of the implemented models are given. The proposed models are tested on the publicly available data set of the BackBlaze cloud storage company. Graphical dependencies for training and validation losses are provided, and the main classification quality indicators are evaluated, confirming the feasibility of further development of the implemented models.
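Treating SMART records as multidimensional time series implies slicing each disk’s history into fixed-length windows before feeding an RNN. The window length and the labeling rule in this sketch are assumptions, not values from the paper.

```python
def make_windows(series, window, label_fn):
    """Slice one disk's multivariate SMART time series into fixed-length
    windows for an RNN classifier.

    series: list of per-day attribute vectors for one disk.
    window: number of consecutive days per training sample (assumed).
    label_fn: maps the index of a window's last day to a class label.
    """
    samples = []
    for start in range(len(series) - window + 1):
        x = series[start:start + window]      # (window, n_attrs) slice
        y = label_fn(start + window - 1)      # label at the window end
        samples.append((x, y))
    return samples

# 5 days of 2 SMART attributes; the disk "fails" on the last day (4),
# so only the window ending on that day gets the positive label.
series = [[100, 0], [99, 0], [97, 1], [90, 4], [60, 9]]
windows = make_windows(series, window=3, label_fn=lambda day: int(day == 4))
```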

Anton Filatov, Liliya Demidova
Software Implementation of Neural Recurrent Model to Predict Remaining Useful Life of Data Storage Devices

This article explores the problem of predicting the remaining useful life of disk drives. Approaches to solving this problem effectively with recurrent neural networks, in particular SimpleRNN, GRU (Gated Recurrent Unit) and LSTM (Long Short-Term Memory), are considered. The prediction models are developed on the publicly available dataset of the BackBlaze service. The data are presented as multidimensional time series formed from the readings of the SMART (Self-Monitoring, Analysis and Reporting Technology) sensors of the storage devices. Approaches to improving prediction accuracy are considered. The prediction models were implemented in Python 3.8 and trained over 20 epochs. The results of predicting the remaining service life of disk drives from the BackBlaze database are presented, along with plots of the loss function and comparative tables for the neural networks used in the study.
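Before any RNN can be trained for this task, each observation day needs a remaining-useful-life target. A common labeling scheme, sketched below under the assumption of a known failure day and an optional cap on the target, is days-until-failure; the paper’s exact targets are not reproduced here.

```python
def rul_labels(num_days, failure_day, cap=None):
    """Remaining-useful-life target for each observed day of one drive.

    num_days: number of days with SMART observations.
    failure_day: index of the day the drive failed (assumed known).
    cap: optional upper bound on the target, often used so the model
         does not have to distinguish between "far from failure" states.
    """
    labels = [failure_day - day for day in range(num_days)]
    if cap is not None:
        labels = [min(label, cap) for label in labels]
    return labels

# 5 observed days, failure on day 4, targets capped at 3 days.
labels = rul_labels(num_days=5, failure_day=4, cap=3)
```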

Liliya Demidova, Ilya Fursov
Testing Methods for Blockchain Applications

A blockchain application is a form of modern software that runs in its ecosystem and interacts with other application instances. Such applications run decentralized on a large number of nodes and process requests from a large number of users. Thus, they are high-performance applications that are prone to specific errors due to difficult-to-predict network behavior. In addition, they are predisposed to errors inherent in all software systems. Since bugs can potentially lead to losing an immense amount of funds in cryptocurrencies, learning how to test such applications is an important task. In this paper, we explore the internal quality assurance methods of the Bitcoin and Ethereum platforms at their various levels of logical organization. Next, we describe our test bench designed for functional testing of cryptocurrency payment gateways. The solution provides a software abstraction for making API calls to virtualized nodes of various platforms using emulators of real blockchain networks.
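The bench’s software abstraction over virtualized nodes can be pictured as a fake node object that a functional test drives instead of a live network. The class and method names below are illustrative stand-ins, not the paper’s actual API; a real bench would forward such calls to platform emulators over RPC.

```python
class FakeNode:
    """Illustrative stand-in for a virtualized blockchain node.

    Keeps balances in memory so a payment-gateway test can run without
    any real network. Names and behavior here are assumptions.
    """
    def __init__(self):
        self.balances = {}

    def send_payment(self, sender, receiver, amount):
        # Reject overdrafts, mimicking a node refusing an invalid tx.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

def gateway_rejects_overdraft(node):
    """Functional check: an overdraft payment must be refused."""
    node.balances = {"alice": 5}
    try:
        node.send_payment("alice", "bob", 10)
    except ValueError:
        return True
    return False

ok = gateway_rejects_overdraft(FakeNode())
```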

Sergey Staroletov, Roman Galkin
Backmatter
Metadata
Title
High-Performance Computing Systems and Technologies in Scientific Research, Automation of Control and Production
Edited by
Vladimir Jordan
Ilya Tarasov
Vladimir Faerman
Copyright year
2022
Electronic ISBN
978-3-030-94141-3
Print ISBN
978-3-030-94140-6
DOI
https://doi.org/10.1007/978-3-030-94141-3