2024 | Book

High-Performance Computing Systems and Technologies in Scientific Research, Automation of Control and Production

13th International Conference, HPCST 2023, Barnaul, Russia, May 19–20, 2023, Revised Selected Papers

Edited by: Vladimir Jordan, Ilya Tarasov, Ella Shurina, Nikolay Filimonov, Vladimir A. Faerman

Publisher: Springer Nature Switzerland

Book series: Communications in Computer and Information Science

About this Book

This book constitutes the revised selected papers of the 13th International Conference on High-Performance Computing Systems and Technologies in Scientific Research, Automation of Control and Production, HPCST 2023, held in Barnaul, Russia, during May 19–20, 2023.
The 21 full papers included in this book were carefully reviewed and selected from 81 submissions. The papers are organized in topical sections as follows: Hardware for High-Performance Computing and Signal Processing; Information Technologies and Computer Simulation of Physical Phenomena; Computing Technologies in Data Analysis and Decision Making; Information and Computing Technologies in Automation and Control Science; Computing Technologies in Information Security Applications.

Table of Contents

Frontmatter

Hardware for High-Performance Computing and Signal Processing

Frontmatter
Design of a Pipeline Computing Module as Part of a Specialized VLSI
Abstract
The article discusses the process of designing a computing node based on a synchronous pipelined architecture for operation as part of a specialized VLSI. During the design process, individual pipeline stages generate conflicting requirements for the implementation of individual nodes and of the pipeline as a whole, which must be resolved by finding suboptimal pipeline tuning options. The approach discussed in the article involves the use of a high-level synthesizer to distribute calculations between the individual stages of the pipeline, which makes it possible to reduce the signal delay at the expense of increased pipeline latency. Analyzing the interaction of the pipelined calculator with other components of the computing system makes it possible to include the cost of waiting for the result in the model of estimated characteristics. As the simulation results have shown, this significantly corrects the approaches to finding the optimal solution, which also accounts for the cost of system resources as a whole. Preliminary estimates received qualitative confirmation when modifications of the test pipeline were synthesized with FPGA CAD tools.
Ilya Tarasov, Daniil Lyulyava, Nikita Duksin, Ilona Duksina
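As a rough illustration of the trade-off this paper optimizes, the toy model below shows how deepening a pipeline shortens the per-stage critical path while inflating latency and the downstream cost of waiting for results. The cost terms and values are invented for illustration; this is not the paper's estimated-characteristics model.

```python
def pipeline_metrics(logic_delay_ns, n_stages, reg_overhead_ns=0.2, wait_cost_per_cycle=1.0):
    """Toy pipeline trade-off model (illustrative values, not from the paper).
    Splitting logic_delay_ns of combinational logic into n_stages shortens the
    critical path (raising the clock) but adds latency and result-wait cost."""
    stage_delay = logic_delay_ns / n_stages + reg_overhead_ns  # critical path per stage
    latency = n_stages * stage_delay                           # time to first result, ns
    wait_cost = wait_cost_per_cycle * n_stages                 # cost of waiting for a result
    return stage_delay, latency, wait_cost

# Sweeping the depth exposes the suboptimal-tuning region the authors search:
for n in (1, 2, 4, 8):
    print(n, pipeline_metrics(10.0, n))
```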
Speech Enhancement Based on Two-Stage Neural Network with Structured State Space for Sequence Transformation
Abstract
In this paper, a new method for improving speech quality using the Structured State Space for Sequence (S4) transformation is proposed. The method builds on existing two-stage denoising methods that use recurrent neural networks, but replacing long short-term memory (LSTM) layers with S4 layers brought improvements in two ways. First, the number of trained parameters of the neural network was reduced while the quality of speech enhancement was maintained. Second, owing to the convolutional representation of S4 transformations, the network training time per epoch decreased. The proposed two-stage neural network model for denoising was implemented using the PyTorch library. For training and testing, the standard DNS Challenge 2020 dataset was used. The optimal type of loss function for training and the best number of S4 layers were selected. Comparison with existing real-time speech enhancement methods showed that the developed model was among the best performers on all quality metrics.
Andrey Lependin, Valentin Karev, Rauf Nasretdinov, Ilya Ilyashenko
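For readers unfamiliar with S4-style layers, the sketch below shows the convolutional representation the abstract refers to, implemented for a simplified diagonal state-space layer (in the spirit of S4D). It is an illustrative stand-in, not the authors' network or the full S4 parameterization.

```python
import torch
import torch.nn as nn

class DiagonalSSMLayer(nn.Module):
    """Simplified diagonal state-space layer applied as one long causal convolution.
    A sketch in the spirit of S4/S4D, not the paper's exact implementation."""
    def __init__(self, d_model, d_state=64, dt=1e-2):
        super().__init__()
        # Stable diagonal A (negative real part) plus imaginary frequencies.
        self.log_a_re = nn.Parameter(torch.zeros(d_model, d_state))
        self.a_im = nn.Parameter(torch.pi * torch.arange(d_state).float().repeat(d_model, 1))
        self.c = nn.Parameter(torch.randn(d_model, d_state, dtype=torch.cfloat))
        self.dt = dt

    def kernel(self, length):
        # K[k] = Re( C * ((exp(dt*A) - 1) / A) * exp(dt*A)^k ), k = 0..length-1
        a = -torch.exp(self.log_a_re) + 1j * self.a_im            # (d_model, d_state)
        da = self.dt * a
        coef = self.c * (torch.exp(da) - 1.0) / a                 # ZOH-style input scaling
        steps = torch.arange(length, device=da.device)
        powers = torch.exp(da.unsqueeze(-1) * steps)              # (d_model, d_state, length)
        return 2 * (coef.unsqueeze(-1) * powers).sum(dim=1).real  # (d_model, length)

    def forward(self, u):
        # u: (batch, d_model, length); FFT convolution = the "convolutional representation".
        length = u.size(-1)
        kf = torch.fft.rfft(self.kernel(length), n=2 * length)
        uf = torch.fft.rfft(u, n=2 * length)
        return torch.fft.irfft(uf * kf, n=2 * length)[..., :length]

y = DiagonalSSMLayer(d_model=8)(torch.randn(2, 8, 1000))  # -> shape (2, 8, 1000)
```

Because the whole sequence is processed as one convolution, training parallelizes over time steps, which is the source of the per-epoch speedup the abstract reports relative to recurrent LSTM processing.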
Designing a Graphics Accelerator with Heterogeneous Architecture
Abstract
The article discusses the architecture of a graphics accelerator based on a combination of general-purpose processor cores and pipelined accelerators for matrix operations and transcendental functions. The article proposes a general architecture for GPUs of this type and suggests the main options for computing nodes designed to implement the target group of algorithms. In order to reduce technical and organizational risks, the hardware component of the Very Large Scale Integrated circuit (VLSI) is simplified, and the functions of managing calculations are transferred to embedded software, for which control processor cores have been introduced into the VLSI. The VLSI project involves the development of a GPGPU-class computing accelerator in which the ability to work with three-dimensional graphics is an additional feature. This makes it possible to take advantage of an architecture based on a large number of simple computational cores, using such a VLSI in conjunction with a general-purpose processor.
Ilya Tarasov, Dmitry Mirzoyan, Peter Sovietov
Spectrophotometer for Field Studies
Abstract
A spectrophotometer for monitoring the condition of plants and vegetation cover has been developed. The device makes it possible to determine the spectral composition of incident and reflected radiation and the fluorescence spectra of plants, and to calculate vegetation indices for the numerical estimation of surface character or leaf condition. The spectrophotometer is realized as a Bluetooth Low Energy (BLE) peripheral device. The structural diagram and a description of the individual components of the device are presented, along with block diagrams of the programs for working with the sensors and interacting with the user. The instrument is designed as a compact portable device intended for operation in field conditions, with low power consumption and an ergonomic interface. The spectrophotometer will find practical application in research on photosynthesis and plant biology, in environmental monitoring, and in various branches of agriculture.
Aleksandr Kalachev, Vladimir Pashnev, Yuriy Matyuschenko
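As an example of the vegetation indices the abstract mentions, the classic NDVI can be computed directly from reflected-radiation channels of the kind the device measures. This is the generic textbook formula; the device's actual channel set and index list are not specified here.

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index: near-infrared vs. red reflectance.
    Values close to 1 indicate dense, healthy vegetation; near 0, bare surfaces."""
    return (nir - red) / (nir + red)

print(ndvi(nir=0.45, red=0.08))  # ~0.70, typical of healthy leaves
```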
Comparative Study of Practical Implementation of Time Delay Estimation Methods on Single Board Computer
Abstract
The article discusses the practical implementation of various methods for time delay estimation (TDE) on a Raspberry Pi single-board computer. The relevance of the research stems from the importance of TDE methods in object positioning and localization tasks. The demand for real-time operation, together with the requirement to use single-board computers as sensor nodes, places high demands on computational efficiency. The paper compares various time-domain and frequency-domain TDE methods, including those that utilize a limited set of spectral bins, applicable to the localization of acoustic signal sources. The paper considers the advantages, disadvantages, and computational features of each method. In addition, we carried out a comparative analysis and experimentally validated theoretical estimates of the demands on computing resources. In a series of computational experiments carried out with specially developed software, computing time and memory usage were measured. Based on this empirical research on a Raspberry Pi 4B, we recommend specific methods for particular scenarios of localizing an acoustic source in space using Raspberry Pi single-board computers.
Vladimir Faerman, Kirill Voevodin, Valeriy Avramchuk
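One widely used frequency-domain TDE method of the kind such comparisons benchmark is GCC-PHAT. Below is a minimal NumPy sketch of the generic textbook formulation, not the authors' optimized implementation.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Time delay of `sig` relative to `ref` via generalized cross-correlation
    with phase transform (PHAT) weighting; returns the delay in seconds."""
    n = sig.size + ref.size                          # zero-pad to avoid circular wrap
    r = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(r / (np.abs(r) + 1e-12), n=n)  # PHAT: keep only phase information
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Synthetic check: a 25-sample shift (~0.57 ms at 44.1 kHz) is recovered.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
print(gcc_phat(np.roll(x, 25), x, fs=44100))
```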

Information Technologies and Computer Simulation of Physical Phenomena

Frontmatter
Study of the Functional Characteristics of TiNi Coatings by the Computer-Aided Simulation Using Parallel Computing
Abstract
The architecture and functionality of a program system that implements the paradigm of parallel execution of SIMD-tasks is considered, in which a certain class of coatings is simulated with different parameter sets for the particles sprayed onto the surface of technical products. Each SIMD-task uses a copy of the same computing module, which implements the computational core of a previously created software package for simulating the layered structure of a gas-thermal coating and calculating its functional characteristics. The program system makes it possible to run a series of computational experiments in the least amount of time, thanks to the parallel execution of SIMD-tasks that simulate a class of functional coatings under various spraying modes, and to save electronic reports with illustrative research material in archival storage. The paper presents the results of simulating a class of coatings made of titanium nickelide (TiNi), taking into account various sets of "key physical parameters" (KPPs) of the particles, which characterize different coating spraying modes. In addition, based on the simulation results, the variability of the coatings' functional characteristics (porosity, adhesive strength, and surface roughness) was analyzed as the KPP values were varied within certain ranges. As a result of analyzing the simulation results for TiNi coatings and calculating their functional characteristics, optimal modes for the stable spraying of TiNi coatings onto two types of substrates (Steel45 and titanium) were established.
Vladimir Jordan, Vitaly Blednov
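The SIMD-task scheme described above maps naturally onto a process pool: every worker runs a copy of the same computing module on its own KPP set. The sketch below illustrates that pattern with a hypothetical `simulate_coating` stand-in and invented parameter ranges; neither is taken from the paper.

```python
from itertools import product
from multiprocessing import Pool

def simulate_coating(kpp):
    """Hypothetical stand-in for the authors' computing module: takes one set of
    key physical parameters (KPPs) and returns the coating's characteristics."""
    velocity, temperature, diameter = kpp
    # ... run the layered-coating deposition model here ...
    return {"kpp": kpp, "porosity": None, "adhesion": None, "roughness": None}

if __name__ == "__main__":
    # Illustrative KPP grid (not the paper's): particle velocity, temperature, size.
    kpp_sets = list(product([400, 500, 600], [1600, 1800, 2000], [20e-6, 40e-6]))
    with Pool() as pool:                 # one SIMD-task per worker, run in parallel
        reports = pool.map(simulate_coating, kpp_sets)
```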
On Hierarchical Convergence of the Heterogeneous Multiscale Finite Element Method Using Polyhedral Supports
Abstract
Numerical simulation of physical processes in such complex formations as permafrost is impossible without modern multiscale methods. For example, the heterogeneous multiscale finite element method (FE-HMM) can be used to simulate the elastic deformation of solids. However, it is necessary to have tools for controlling computational errors, as well as a priori information on the limitations of the method in solving practical problems. In this article, we investigate the influence of different mesh hierarchy levels on the accuracy and speed of solving the elastic deformation problem using FE-HMM with polyhedral supports at the macrolevel and tetrahedral supports at the microlevel. The obtained estimates allow us to adjust planning strategies for computational experiments, which are largely related to the construction of a set of meshes of different accuracy. We apply h-refinement at the edges of macro-polyhedra in the microlevel mesh, obtaining an increase in the accuracy of the computational solution of up to two orders of magnitude while maintaining the total size of the discretizations. The applicability of the method to the numerical simulation of physical processes in media with elongated inhomogeneities intersecting several macroelements is also shown.
Anastasia Yu. Kutishcheva, Sergey I. Markov, Ella P. Shurina
Parallelization of Finite-Volume Numerical Methods of Computational Fluid Dynamics by Means of Shared Memory Computing Systems
Abstract
The article considers the parallelization of finite-volume numerical methods, specifically the calculation of the flux of conserved quantities through the boundaries of computational cells, on shared-memory systems. Three approaches to this problem were analyzed; for each, a theoretical justification was given, and the implementation, testing, and analysis of the performance and scalability of the proposed methods were performed.
Anastasia Gulicheva

Computing Technologies in Data Analysis and Decision Making

Frontmatter
Development and Comparative Study of Data Compression Methods Used in Technical Monitoring Systems
Abstract
The article presents the results of the development and study of methods for compacting the transmission and storage of data generated by technical monitoring systems. The research was carried out using the information and measurement system of the Altai State Technical University, which monitors the air temperature in the premises of the university campus and the surrounding area, as well as resource consumption (heating, hot and cold water). The developed and modified lossy and lossless data compression methods and their experimental evaluation are considered, including online compression during data transfer to the server. Database structures designed to store the information collected during monitoring are also described. The compression efficiency of the developed algorithms was studied mainly on temperature monitoring data, because the control and monitoring of temperature processes is the most widespread case in technical process control systems, in heat metering systems both on the consumer side (housing and communal services) and on the heat supplier's side, and in meteorological observations. The results demonstrate that in most cases the lossless methods compress the information received from the monitoring system by more than a factor of 10, and the lossy methods by even more, without losing any of the pragmatic value of the original information.
Alexey G. Yakunin
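To make the lossless/lossy distinction concrete for slowly varying temperature series, here is a minimal sketch of two generic techniques in the spirit of (but not taken from) the paper: delta encoding, which is exactly invertible, and a deadband filter, which drops samples within a tolerance.

```python
import numpy as np

def delta_encode(samples):
    """Lossless: first sample plus successive differences. Slowly changing
    temperatures yield many small deltas that pack into fewer bits."""
    samples = np.asarray(samples, dtype=float)
    return samples[0], np.diff(samples)

def delta_decode(first, deltas):
    return np.concatenate(([first], first + np.cumsum(deltas)))

def deadband_compress(samples, tolerance=0.1):
    """Lossy: keep a sample only when it departs from the last kept value by
    more than `tolerance` (0.1 degrees here is an illustrative setting)."""
    kept = [(0, samples[0])]
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - kept[-1][1]) > tolerance:
            kept.append((i, x))
    return kept

t = [21.02, 21.03, 21.03, 21.18, 21.41, 21.40]
first, deltas = delta_encode(t)
assert np.allclose(delta_decode(first, deltas), t)  # round-trips exactly
print(deadband_compress(t))                         # [(0, 21.02), (3, 21.18), (4, 21.41)]
```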
Object Recognition Based on Three-Dimensional Computer Graphics
Abstract
This paper considers the training of convolutional neural networks on a data set represented as three-dimensional computer graphics. The networks are trained to recognize categories of objects from orthogonal projections of a three-dimensional object. During recognition, difficulties arise with mirrored views of an object; these are resolved using the Pearson correlation coefficient. To choose a convolutional neural network, a comparative analysis of the architectures of existing, effective image recognition solutions was carried out. For the trained network to operate correctly, three-dimensional graphics objects were modeled. From each three-dimensional model, a black-and-white image was produced, from which the contours of the white regions were extracted; it is on such black-and-white drawings that the object recognition program is trained. The effectiveness of the neural network has been demonstrated experimentally. Thus, it is possible to recognize real objects with convolutional neural networks trained in virtual space.
Nikolay Lashchik
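The mirroring fix the abstract mentions can be illustrated with a small helper that scores a projection against a template both as-is and horizontally flipped, keeping the better Pearson correlation. This is a generic sketch, not the paper's pipeline.

```python
import numpy as np

def mirror_aware_similarity(projection, template):
    """Pearson correlation between two equally sized grayscale images, evaluated
    for the projection as-is and for its horizontal mirror."""
    def pearson(a, b):
        return float(np.corrcoef(a.ravel().astype(float), b.ravel().astype(float))[0, 1])
    return max(pearson(projection, template), pearson(np.fliplr(projection), template))
```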
Adaptive Methods for the Structural Optimization of Neural Networks and Their Ensemble for Data Analysis
Abstract
The article studies the effectiveness of ensembles of neural network solvers for a regression problem. We consider the problem of ensuring that a prediction model based on a set of artificial neural networks is more effective than a single regressor. One requirement for such a gain is that the solvers in the ensemble be sufficiently diverse, so that the ensemble model escapes the region of the local minimum error of a single solver. For the considered case of constructing neural network ensembles, diversity is ensured by generating different structures for the neural network regressors. To do this, a modification is introduced into the previously developed probabilistic method for designing neural networks: special coefficients adapt the probability of using each activation function depending on its presence in the neural networks already formed in the ensemble. The proposed approach was implemented in a software system and tested on generated datasets and on the real industrial data sets described in the article. The results indicate the relatively high efficiency of constructing collective regressors when forming neural networks with the proposed approach, while maintaining diversity under noisy samples.
Vladimir Bukhtoyarov, Vladimir Nelyub, Dmitry Evsyukov, Sergei Nelyub, Andrey Gantimurov
Self-adaptation Method for Evolutionary Algorithms Based on the Selection Operator
Abstract
Genetic algorithms are a class of effective and popular black-box optimization methods inspired by evolutionary processes in nature. They are useful when nothing is known about the optimization object except its inputs and outputs. Such an algorithm iteratively searches the solution space, guided by a predefined fitness function that allows different solutions to be compared. A researcher who wants to use genetic algorithms must choose the genetic operators and the numerical parameters of the algorithm, which may be a difficult task. Self-adaptation methods, which alter the behavior of the algorithm while it is running, help with choosing optimal settings. Such methods are usually divided into self-tuning, which adjusts numerical parameters, and self-configuring, which chooses genetic operators. In recent decades, various strategies for the self-adaptation of evolutionary algorithms have been actively developed, including metaheuristic algorithms, allowing a researcher to obtain a specialized evolutionary algorithm that solves problems from a certain class better than conventional algorithms. However, even with the metaheuristic approach, genetic operators and numerical parameters still have to be chosen, so the development of self-adaptive algorithms remains one of the most relevant fields in the study of evolutionary algorithms. In this paper, a new approach to the adaptation of evolutionary algorithms based on the selection of genetic operators is proposed. The method is applied to the genetic algorithm, is compared with the popular self-configuring approach SelfCGA, and shows improved efficiency on both real-valued and binary optimization problems.
Pavel Sherstnev
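A generic success-based operator-selection scheme of the kind such self-configuring methods build on can be sketched as follows. The scoring and update rule here are illustrative, not the author's exact method; the three operator names are the classic selection schemes, used as example candidates.

```python
import random

class AdaptiveOperatorSelector:
    """Keeps a usefulness score per genetic operator and samples operators in
    proportion to it, with a floor so no operator dies out entirely."""
    def __init__(self, operators, floor=0.05, decay=0.9):
        self.scores = {op: 1.0 for op in operators}
        self.floor, self.decay = floor, decay

    def pick(self):
        total = sum(self.scores.values())
        weights = [max(s / total, self.floor) for s in self.scores.values()]
        return random.choices(list(self.scores), weights=weights)[0]

    def reward(self, op, fitness_gain):
        # Exponential moving average of the operator's observed fitness gains.
        self.scores[op] = self.decay * self.scores[op] + (1 - self.decay) * max(fitness_gain, 0.0)

selector = AdaptiveOperatorSelector(["tournament", "proportional", "rank"])
op = selector.pick()        # choose a selection operator for this generation
selector.reward(op, 0.8)    # feed back how much the offspring improved
```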
Application of U-Net Architecture Neural Network for Segmentation of Brain Cell Images Stained with Trypan Blue
Abstract
This article discusses semantic segmentation of images of rat brain cells stained with trypan blue. The purpose of this work is to develop software that automatically segments living brain cells. The originality of this article lies in a non-standard approach to modifying the U-Net architecture used to detect brain cells. To solve the problem, a mathematical model was developed in the form of a convolutional neural network based on the U-Net architecture with a convolution depth of 4. The initial sample consisted of 30 images; data augmentation increased the number of examples to 150 samples. During augmentation, images were rotated by 90 degrees clockwise and counterclockwise and flipped along the X-axis and the Y-axis. The effect of adding DropOut layers with a probability of 0.3 and/or BatchNormalization layers on training and prediction was considered. The Adam, SGD, and RMSprop optimizers were used to train the deep convolutional neural network. The selected model achieves 0.8513 on the Dice metric and 98.62% accuracy in detecting the number of living neurons. Possible future improvements of the current models are considered: small changes to the architecture of the convolutional neural network, selection of hyperparameters for the optimizers, enlargement of the initial samples, and a different approach to image preprocessing.
Vadim Tynchenko, Denis Sukhanov, Aleksei Kudryavtsev, Vladimir Nelyub, Aleksei Borodulin, Daniel Ageev
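The Dice metric reported above (0.8513) has a standard definition, sketched below in PyTorch; the 0.5 binarization threshold is an assumed choice, not stated in the abstract.

```python
import torch

def dice_coefficient(pred, target, eps=1e-6):
    """Dice similarity between a probability map and a binary ground-truth mask:
    2|A∩B| / (|A| + |B|), with eps guarding against empty masks."""
    pred = (pred > 0.5).float()          # assumed binarization threshold
    target = target.float()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```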
Language Model Architecture Based on the Syntactic Graph of Analyzed Text
Abstract
Methods and techniques of graph structures for text processing are considered. Processing Russian-language text and extracting semantic structures is an important stage in the development of artificial intelligence systems. Existing models of intelligent assistants are unable to handle a large volume of noisy information and take a long time to process requests. To solve this problem, the article proposes methods for working with graph structures for the analysis and classification of the necessary data. By performing initial processing according to the proposed conceptual structure, a syntactic graph can be used for a more accurate representation of each part of speech in the processed context. The tested model provided data on the accuracy of word identification in Russian-language sentences, and a table compares its accuracy with existing natural language processing models. The results were obtained with 70% of the text volume used for the training set and the analysis conducted on the remaining portion, which holds for each of the compared models.
Roman Semenov

Information and Computing Technologies in Automation and Control Science

Frontmatter
Classic and Modern Methods of Automatic Parking Control of Self-driving Cars
Abstract
The problem of automatic parking control for a self-driving car is considered. The formulation and formalization of the car parking control problem are given, taking into account the restrictions that ensure the safety of the parking maneuver. Classical and modern methods of automatic parking control for self-driving cars are analyzed. Based on the Dubins and Reeds-Shepp motion models, optimal algorithms for car parking control are synthesized. The rapidly-exploring random tree (RRT) algorithm is used to construct a path between two points. A car parking control algorithm is also synthesized using reinforcement learning; the algorithm's convergence is investigated, and the optimal values of the training parameters are determined. The results of computer testing of the synthesized parking algorithms, implemented in Python using the Matplotlib and NumPy libraries, are presented.
Ilya D. Tyulenev, Nikolay B. Filimonov
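For reference, the core RRT idea fits in a few lines for a holonomic point robot. The paper's version additionally respects car kinematics via Dubins / Reeds-Shepp steering, which this sketch omits.

```python
import math, random

def rrt(start, goal, is_free, bounds, step=0.5, goal_tol=0.5, max_iters=5000):
    """Minimal 2D rapidly-exploring random tree: grow toward random samples
    (goal-biased), keep collision-free nodes, stop inside the goal region."""
    (xmin, xmax), (ymin, ymax) = bounds
    nodes, parent = [start], {0: None}
    for _ in range(max_iters):
        target = goal if random.random() < 0.05 else (
            random.uniform(xmin, xmax), random.uniform(ymin, ymax))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], target))
        d = math.dist(nodes[i], target)
        if d == 0:
            continue
        t = min(step / d, 1.0)                        # steer one step toward the sample
        new = (nodes[i][0] + t * (target[0] - nodes[i][0]),
               nodes[i][1] + t * (target[1] - nodes[i][1]))
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:           # backtrack to extract the path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

path = rrt((0, 0), (9, 9), is_free=lambda p: True, bounds=((0, 10), (0, 10)))
```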
Application of Smoothing Technique to Model Predictive Traffic Signal Control
Abstract
This paper introduces a new adaptive traffic signal control algorithm that operates within the model predictive control framework. Its primary objective is to enhance the throughput of the traffic network under heavy load. To achieve this goal, a comprehensive traffic flow model and a specially tailored target function are used. The predictive model is a second-order macroscopic traffic model, enabling accurate prediction of traffic phenomena such as wave formation and nonlinear effects; it can also be refined using historical data, which enhances the precision of the predictions. The paper outlines both the model itself and the numerical scheme used for its computation. The proposed target function takes the characteristics of traffic dynamics into account and aims to provide a uniform distribution of vehicles over the transport network. The optimal control is found as the solution of a continuous optimization problem with a noisy zero-order oracle. A smoothing technique is used to solve this problem: it allows first-order stochastic optimization methods to be used when the gradient of the target function is unknown. The developed traffic light control algorithm has been tested in the traffic simulation environment SUMO on the set of RESCO benchmarks.
Sergey V. Matrosov, Nikolay B. Filimonov
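The smoothing technique mentioned above has a compact generic form: a two-point zero-order gradient estimator that lets first-order methods run on a noisy oracle. The sketch below shows that standard estimator, not the paper's exact configuration; `traffic_cost` in the usage note is a hypothetical name for the simulator-evaluated objective.

```python
import numpy as np

def smoothed_gradient(f, x, tau=1e-2, n_samples=16, rng=None):
    """Two-point zero-order estimate of the gradient of f at x: for random unit
    directions e, average d * (f(x + tau*e) - f(x - tau*e)) / (2*tau) * e.
    Only function values are needed, so a noisy simulator oracle suffices."""
    rng = rng or np.random.default_rng()
    d, g = x.size, np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        e = rng.standard_normal(d)
        e /= np.linalg.norm(e)                       # uniform direction on the sphere
        g += (f(x + tau * e) - f(x - tau * e)) / (2 * tau) * d * e
    return g / n_samples

# Usage with any first-order method, e.g. plain gradient descent on signal timings:
# x -= step * smoothed_gradient(traffic_cost, x)    # traffic_cost is the noisy oracle
```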
Design Features of the Frequency-Controlled Electric Drive for Positioning Mechanisms
Abstract
The paper is devoted to the design peculiarities of the frequency-controlled electric drive of the hydraulic distributor of an ultra-high-pressure hydraulic press. It justifies the need to power the windings of an induction motor (IM) with a short-circuited (squirrel-cage) rotor from the output of a direct frequency converter (DFC) assembled according to a symmetrical scheme using six-pulse or three-pulse thyristor converters. The paper also presents a method for synthesizing the electric drive of positioning mechanisms, including the press hydraulic distributor, and shows the need for a three-loop system of subordinate coordinate control for the frequency-controlled electric drive of the hydraulic distributor. Unlike positioning DC electric drives, the synthesis of the inner torque loop in a frequency-controlled electric drive is significantly different; its features are described in detail in the paper. As a result of the measures taken, the required dynamic characteristics of the electric drive are provided in the torque loop, the required settling time is ensured in the speed loop, and the required positioning accuracy of the working element is achieved in the position loop. The study demonstrates a method in which currents with adjustable frequency, amplitude, and phase are generated in the AC stator windings by supplying sine signals from the output of a microprocessor generator to the inputs of the control system of the DFC power units. The paper also shows an adjustment method in which positive feedback with a critical setting is applied to the input of the torque regulator, giving the electric drive the properties of a torque source. The results of the synthesis of the electric drive control system are confirmed by an extensive experiment on an industrial model of the hydraulic distributor of a 30,000-ton press. Practical recommendations are formulated for the selection of rational power circuit layouts of the direct frequency converter-induction motor system, as well as principles for building an electric drive control system, which can be used in the design of automation processes involving AC electric drives of positioning mechanisms.
Ishembek Kadyrov, Baktybek Turusbekov, Bermet Zhanybekova, Baktybek uulu Azamat
Development of a Mathematical Model to Study the Energy Indicators of Electric Drives Using the DFC-IM System
Abstract
The paper is devoted to the peculiarities of constructing a mathematical model of a frequency-controlled electric drive and, with regard to powerful units, justifies the need to power the windings of an induction motor with a short-circuited rotor from the output of a direct frequency converter (DFC) assembled according to a symmetrical scheme using six-pulse or three-pulse thyristor converters. The justification is given using the example of the electric drives of ditching machines and hydraulic presses. These machines perform operations that are fundamentally different from each other but have the same structure of the electric drive control system, and these differences do not affect the choice of the basic structure for a mathematical model of a frequency-controlled electric drive. The given block diagram, which is common to both machines and takes into account the power of the converted electric energy, makes it possible to justify powering the stator windings of the induction motor from the output of the direct frequency converter when the converter operates in current source mode. The chosen processing units, involved in different technical processes, suggest that the results of the study can be applied to any units where AC electric drives are used according to the DFC-IM system. The developed mathematical model allows studying the energy characteristics of the consumed electric energy during steady-state operation of the electric drive. The equations describing the power part of the DFC-IM system are compiled using the state variable method. The transformer and the induction motor are represented by their equivalent circuits with reduced parameters; the thyristors of the frequency converter are replaced, in the conducting state, by their dynamic resistances RT and, in the closed state, by zero-current sources. The obtained mathematical model of the electric drive in the DFC-IM system allows studying the energy processes occurring during energy conversion and formulating practical recommendations for choosing rational designs of the power circuits.
Ishembek Kadyrov, Nurzat Karaeva, Alymbek uulu Chyngyzbek
Intelligent Data Analysis for Materials Obtained Using Selective Laser Melting Technology
Abstract
In this study, we present a software solution (toolkit) for the intelligent analysis of data obtained using selective laser melting (SLM) technology. We have developed a program that uses Data Science approaches and machine learning (ML) algorithms to analyze and predict the mechanical properties of materials produced by the SLM method. The program was trained on a large dataset of SLM materials and achieved an accuracy of 98.9% in terms of the average particle size, using a combination of crystal plasticity and finite element methods (CPFEM) for the Ti-6Al-4V alloy. It can predict mechanical properties such as yield strength, ductility, and toughness for structures made of the Ti-6Al-4V and AlSi10Mg alloys. The study proposes an approach to the intelligent analysis of the properties and characteristics of various materials obtained using SLM technology, based on a multidimensional digital model of the processes built with the developed software solution. The developed set of technologies for intelligent data analysis, aimed at optimizing the SLM process, demonstrates the potential of machine learning algorithms for improving the understanding and optimization of materials obtained through additive manufacturing. Overall, our research emphasizes the importance of developing intelligent data analysis solutions in materials science and engineering, especially for additive manufacturing technologies such as SLM. By using the developed toolkit, which applies machine learning algorithms, specialists can reduce production and implementation costs by up to a factor of 1.2 by optimizing the design and development of materials for applications ranging from the aerospace industry to biomedical engineering.
Dmitry Evsyukov, Vladimir Bukhtoyarov, Aleksei Borodulin, Vadim Lomazov
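The property-prediction workflow described above follows a familiar supervised-learning shape. The sketch below mimics it with a generic regressor and entirely made-up placeholder features and targets; the paper's dataset, feature set, and model are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Placeholder rows: laser power (W), scan speed (mm/s), layer thickness (um).
# All values and targets are invented for illustration only.
X = np.array([[200, 1000, 30], [250, 1200, 40], [300, 900, 30],
              [220, 1100, 50], [280, 950, 40], [240, 1050, 30]])
y = np.array([950.0, 1010.0, 880.0, 930.0, 905.0, 970.0])  # yield strength, MPa

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(model.predict(X_te))      # predicted yield strengths for held-out parts
```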

Computing Technologies in Information Security Applications

Frontmatter
Fourier Chromagrams for Fingerprinting, Verification and Authentication of Digital Audio Recordings
Abstract
In this paper, a new approach to calculating binary audio fingerprints is proposed, based on the analysis of Fourier chromagrams obtained from the processed music recordings (audio files). The calculated binary audio fingerprints allow bit-by-bit matching and comparison of original and modified music recordings. For performance testing, a dataset of over 50 original recordings of music played on a variety of instruments with different playing techniques was collected. In addition, distorted versions of the original recordings with altered tempo and realistic additive noise were produced and added to the test dataset. Calculated similarity values between audio fingerprints within the same groups of music recordings reveal the robustness of the proposed approach against the distortions mentioned above. The impact of the distortions on the chromagrams and the calculated audio fingerprints is thoroughly analyzed and discussed in the paper. The median similarity between the original and distorted recordings was found to be greater than 85%. The proposed approach proves to be useful in real-life forensic studies and in the verification and authentication of music pieces and recordings.
Andrey Lependin, Pavel Ladygin, Valentin Karev, Alexander Mansurov
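A common way to turn a chromagram into bit-matchable fingerprints, in the spirit of the approach above, is frame-to-frame differencing of chroma energies; the paper's exact bit rule is not reproduced here, and the sample rate and STFT settings below are assumed defaults.

```python
import numpy as np
import librosa

def chroma_fingerprint(path):
    """Binary fingerprint: one bit per (chroma bin, frame transition), set when
    the bin's energy grows between neighbouring frames."""
    y, sr = librosa.load(path, sr=16000, mono=True)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr, n_fft=2048, hop_length=512)
    return np.diff(chroma, axis=1) > 0          # (12, frames-1) boolean matrix

def bit_similarity(fp_a, fp_b):
    """Bit-by-bit match rate over the overlapping portion of two fingerprints."""
    n = min(fp_a.shape[1], fp_b.shape[1])
    return float((fp_a[:, :n] == fp_b[:, :n]).mean())
```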
Methodology of Expert-Agent Cognitive Modeling for Preventing Impact on Critical Information Infrastructure
Abstract
This article presents a comparative analysis of different threat prediction models used in security systems. Our expert-agent cognitive model proved the most effective across a range of threat levels, with the highest performance among the compared models. The Circular Protection and Life Tree models also showed promising results but are less effective at higher threat levels, while the Interval Confidence Interval model showed the worst performance in this comparison. We also identified the optimal input parameter values for our expert-agent cognitive model, which yield a 95% improvement in its prediction accuracy; the model achieves its highest prediction quality at Knowledge Base = 100, Expert Rating = 5, and Threats = 500. The performance of our model starts at around 60% accuracy with 50 threats, reaches a peak of 80% accuracy at 200 threats, and gradually decreases at higher threat levels. Our findings indicate that the proposed model can be recommended for predicting and warning about potential threats in security systems. Further research can help optimize the parameters of our model for even more effective threat prediction and warning.
Pavel Panilov, Tatyana Tsibizova, Georgy Voskresensky
Backmatter
Metadata
Title
High-Performance Computing Systems and Technologies in Scientific Research, Automation of Control and Production
Edited by
Vladimir Jordan
Ilya Tarasov
Ella Shurina
Nikolay Filimonov
Vladimir A. Faerman
Copyright year
2024
Electronic ISBN
978-3-031-51057-1
Print ISBN
978-3-031-51056-4
DOI
https://doi.org/10.1007/978-3-031-51057-1