
Parallel and Distributed Processing Techniques

30th International Conference, PDPTA 2024, Held as Part of the World Congress in Computer Science, Computer Engineering and Applied Computing, CSCE 2024, Las Vegas, NV, USA, July 22–25, 2024, Revised Selected Papers

  • 2025
  • Book

About this book

This book constitutes the revised selected papers of the 30th International Conference on Parallel and Distributed Processing Techniques, PDPTA 2024, held as part of the World Congress in Computer Science, Computer Engineering and Applied Computing 2024 in Las Vegas, NV, USA, during July 22–25, 2024. The 24 papers in this book were carefully reviewed and selected from 143 submissions. They were organized in topical sections as follows: Parallel and Distributed Processing Techniques and Applications + HPC, and Workshop on Mathematical Modeling and Problem Solving.

Table of Contents

Frontmatter

Parallel and Distributed Processing Techniques and Applications + HPC (PDPTA)

Frontmatter
A Methodical Approach to Parallel IO Analysis in Distributed Deep Learning Applications
Abstract
Deep learning applications have become crucially important for the analysis and prediction of massive volumes of data. However, these applications impose substantial input/output (I/O) loads on computing systems. Specifically, when running on distributed memory systems, they manage large amounts of data that must be accessed from parallel file systems during the training stage using the available I/O software stack. These accesses are inherently intensive and highly concurrent, which can saturate systems and adversely impact application performance. Consequently, the challenge lies in efficiently utilizing the I/O system to allow these applications to scale. As the volume of data increases, these accesses can introduce high training latency and significant overhead, particularly when the data exceeds main memory capacity. It is therefore essential to analyze the I/O patterns generated while reading the dataset during the training stage, in order to understand how the application behaves as it scales and how many resources it will need. In this context, the paper presents a methodology for analyzing parallel I/O patterns in deep learning applications. Our methodological approach mainly aims at providing users with complete and accurate information. This involves a thorough understanding of how the application, the dataset, and the system parameters can significantly influence the parallel I/O of their deep learning application. We seek to empower users to make informed decisions through a structured methodology that allows them to identify and modify configurable elements effectively.
Edixon Parraga, Betzabeth Leon, Sandra Mendez, Dolores Rexachs, Remo Suppi, Emilio Luque
Parallel N-Body Performance Comparison: Julia, Rust, and More
Abstract
This paper explores parallelism performance for C, C++, Go, Java, Julia, and Rust on N-body simulations. We begin with a basic O(\(N^2\)) simulation for each language based on the n-body benchmark in the Benchmark Game. The original benchmark is adjusted to include a larger number of particles and run in parallel. We also add parallelism to the force calculations using a kD-tree. This work builds on previous work by including parallelism and adding the Julia programming language to our survey. We find that for straight number-crunching, all of these languages provide similar performance, and all have sufficient support for parallelism that runtimes scale well with thread counts. On the other hand, when a spatial data structure, such as the kD-tree, is introduced, the runtimes vary dramatically between languages. In that situation, Julia’s performance looks more like Python, taking over 100 times as long as Rust/C/C++ to finish. Rust comes out on top with an impressive 50% lead over C and C++.
Mark C. Lewis, Clarissa Garcia, Audrey Tollett, Seven Aguirre, Henry Hafner, John McMahon, Amanda A. Sickafoose
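For reference, the core of such an O(N²) simulation is the pairwise acceleration kernel; a minimal illustrative sketch in Python/NumPy with a thread pool over particle chunks (not the authors' benchmark code, gravitational constant set to 1, and the function names are placeholders):

    # Minimal O(N^2) gravitational acceleration kernel, parallelized over
    # chunks of particles with a thread pool. Illustrative sketch only; the
    # paper's benchmarks are written in C, C++, Go, Java, Julia, and Rust.
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def chunk_accelerations(pos, mass, chunk):
        """Acceleration (G = 1) on the particles listed in `chunk`."""
        acc = np.zeros((len(chunk), 3))
        for k, i in enumerate(chunk):
            d = pos - pos[i]                          # displacements to all particles
            r2 = np.einsum("ij,ij->i", d, d) + 1e-12  # squared distances (softened)
            inv_r3 = r2 ** -1.5
            inv_r3[i] = 0.0                           # no self-interaction
            acc[k] = np.einsum("i,ij->j", mass * inv_r3, d)
        return acc

    def step(pos, vel, mass, dt, n_threads=4):
        """One Euler step with the force loop split across threads."""
        chunks = np.array_split(np.arange(len(pos)), n_threads)
        with ThreadPoolExecutor(n_threads) as pool:
            parts = pool.map(lambda c: chunk_accelerations(pos, mass, c), chunks)
        acc = np.vstack(list(parts))
        vel += dt * acc
        pos += dt * vel
        return pos, vel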
REFT: Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments
Abstract
Federated Learning (FL) is vital in distributed systems for ensuring data privacy, particularly in IoT and edge-based setups. However, existing research mainly focuses on data heterogeneity, leaving gaps in addressing varying device capabilities and communication efficiency. To bridge this gap, we propose the “Resource-Efficient Federated Training Framework for Heterogeneous and Resource-Constrained Environments (REFT)”. REFT leverages Variable Pruning to adapt pruning strategies to client computational capabilities, enhancing resource utilization. Additionally, our approach employs knowledge distillation to reduce bidirectional client-server communication, reducing bandwidth usage. Experimentation on image classification tasks demonstrates the effectiveness of REFT in resource-limited environments. Our method preserves data privacy and performance standards while accommodating diverse client devices, offering a minimal-bandwidth solution for FL-based systems.
Humaid Ahmed Desai, Amr Hilal, Hoda Eldardiry
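The variable-pruning idea can be pictured with plain magnitude pruning at a per-client sparsity level; the sketch below is a generic illustration under that assumption (the mapping from device capability to sparsity, and the names used, are placeholders rather than the REFT formulation):

    # Generic magnitude pruning with a per-client sparsity level.
    # Illustration only; REFT's actual pruning and distillation procedure
    # is defined in the paper.
    import numpy as np

    def prune_by_magnitude(weights, sparsity):
        """Zero out the `sparsity` fraction of smallest-magnitude weights."""
        flat = np.abs(weights).ravel()
        k = int(sparsity * flat.size)
        if k == 0:
            return weights.copy()
        threshold = np.partition(flat, k - 1)[k - 1]
        return weights * (np.abs(weights) > threshold)

    # Weaker clients receive sparser (cheaper) models -- assumed mapping.
    rng = np.random.default_rng(0)
    layer = rng.standard_normal((256, 128))
    for capability in (1.0, 0.5, 0.25):
        pruned = prune_by_magnitude(layer, sparsity=1.0 - capability)
        print(capability, float(np.mean(pruned == 0.0)))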
An Efficient Data Provenance Collection Framework for HPC I/O Workloads
Abstract
Scientific data is essential for research and development in many fields, and its provenance and lineage are crucial for ensuring the validity of research findings. However, traditional data management methods lack transparency and accountability, leaving room for data manipulation and falsification of research findings. By offering a transparent and tamper-resistant mechanism for logging and verifying data integrity, tracking the provenance, and viewing the lineage of scientific data, blockchain technology provides a promising solution to these issues. Metadata, verifiable research data, and configuration changes can be stored transparently and reliably using private blockchain technology. This paper proposes a framework to support secure scientific data provenance with minimum overhead on application performance while requiring minimal user intervention.
Md Kamal Hossain Chowdhury, Purushotham V. Bangalore
Using Minicasts for Efficient Asynchronous Causal Unicast and Byzantine Tolerance
Abstract
We present an implementation of asynchronous causally ordered unicast that requires linear space for message size, which is a significant improvement compared to the best existing algorithms, which require quadratic space in the worst case. This algorithm is a modification of the Raynal-Schiper-Toueg algorithm and broadcasts a small control message, defined here as a minicast, to augment the unicast message and preserve causal ordering. The smaller message size comes at the cost of additional traffic on the network. With the addition of cryptography in the form of digital signatures, this algorithm can be made tolerant to Byzantine failures. For existing versions of causal unicast, Byzantine tolerance has previously only been possible with the addition of bounded latency.
Laine Rumreich, Paolo A. G. Sivilotti
A Comparative Study of Two Matrix Multiplication Algorithms Under Current Hardware Architectures
Abstract
A widely used computationally intensive scientific kernel, the matrix multiplication algorithm is at the heart of many scientific routines. Resurging fields, such as artificial intelligence (AI), strongly benefit from fast and accurate processing of large matrices. Through the years, multiple efforts have been made to derive new algorithms capable of achieving better performance than the naive \(\Theta (n^{3})\) matrix multiplication approach. One of those is Strassen’s \(\Theta (n^{2.81})\) variant. This research compares the benefits and differences of using an optimal version of Strassen’s algorithm versus the naive algorithm. The performance analysis makes use of the two most dominant high-performance computing (HPC) architectures available within the Lonestar6 cluster at the Texas Advanced Computing Center (TACC): the multi-core (CPU) and many-core (GPU) architectures.
Samuel Olatunde, Eduardo Colmenares
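For reference, Strassen's scheme replaces the eight block products of the naive 2×2 block multiplication with seven; a compact NumPy sketch of the classical recursion, for square matrices whose size is a power of two (an illustration of the textbook algorithm, not the optimized implementation benchmarked in the paper):

    # Classical Strassen recursion for n x n matrices, n a power of two.
    # Textbook formulation for illustration; falls back to the naive
    # product below a cutoff, as practical implementations do.
    import numpy as np

    def strassen(A, B, cutoff=64):
        n = A.shape[0]
        if n <= cutoff:
            return A @ B
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        M1 = strassen(A11 + A22, B11 + B22, cutoff)
        M2 = strassen(A21 + A22, B11, cutoff)
        M3 = strassen(A11, B12 - B22, cutoff)
        M4 = strassen(A22, B21 - B11, cutoff)
        M5 = strassen(A11 + A12, B22, cutoff)
        M6 = strassen(A21 - A11, B11 + B12, cutoff)
        M7 = strassen(A12 - A22, B21 + B22, cutoff)
        C = np.empty_like(A)
        C[:h, :h] = M1 + M4 - M5 + M7
        C[:h, h:] = M3 + M5
        C[h:, :h] = M2 + M4
        C[h:, h:] = M1 - M2 + M3 + M6
        return C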
Is Manual Code Optimization Still Required to Mitigate GPU Thread Divergence? Applying a Flattening Technique to Observe Performance
Abstract
We examine the impact of manually eliminating thread divergence in GPU code through the removal of all branches using a flattening technique. The goal is to investigate the necessity of manual mitigation of thread divergence on GPUs, compared with automated, modern compiler optimizations and architectural improvements. We apply our previously presented flattening technique called Algorithm Flattening (AF), which eliminates all branches, producing divergence-free code with increased ILP at the expense of minor to moderate instruction overhead. We observe the effect of this optimization on kernel performance across historical architectures and compilers, up to recent offerings. We theorize that modern GPU improvements will eventually eliminate the need for programmer intervention to address thread divergence in GPU code, although further study is necessary.
Lucas Vespa
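The general idea of removing a data-dependent branch can be illustrated with simple predication: evaluate both sides and blend them with a 0/1 mask. The sketch below is only a generic branch-free rewrite, not the authors' Algorithm Flattening transformation (which is defined in their earlier work and targets GPU kernels):

    # The same element-wise computation with a branch and branch-free.
    # Generic predication example on a 1-D float array; not AF itself.
    import numpy as np

    def branched(x):
        out = np.empty_like(x)
        for i in range(x.size):            # per-element branch (divergent on a GPU)
            if x[i] > 0.0:
                out[i] = np.sqrt(x[i])
            else:
                out[i] = x[i] * x[i]
        return out

    def flattened(x):
        cond = (x > 0.0).astype(x.dtype)   # predicate as a 0/1 mask
        pos = np.sqrt(np.maximum(x, 0.0))  # safe to evaluate everywhere
        neg = x * x
        return cond * pos + (1.0 - cond) * neg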
Towards Automatic, Predictable and High-Performance Parallel Code Generation
Abstract
High-performance architectures have complex features, so that reliable production of parallel software is beyond the reach of many Computer Science graduates. Compilers alone cannot guarantee the highest performance, and multiple APIs with complex performance features are difficult to master. As a first step towards more comprehensive solutions, we are building key elements of a pre-compiler system that will automatically produce predictable, scalable and high-performance code from declarative tensor expressions. In this paper we summarize and analyze a large set of timing experiments of matrix multiplication variants that are mapped to vectorized and multithreaded code. The analysis covers two high-end target architectures and exhausts a whole space of code, compiler, pragma and parallelism parameters. Our analysis shows how the best choice of parameters is produced from a small set of tests that can converge in a matter of seconds and then predict the performance of larger instances to within 25% or much less. Inefficient choices of parameters are also shown to be reliably predicted from small tests, so that our design for a pre-compiler is guaranteed to be a realistic and portable tool. The generality of our Mathematics of Arrays tensor algebra and the very broad applicability of tensor operations (signal processing, scientific computing, AI, etc.) support our claim that these experiments and design can be generalized to a general-purpose parallel programming tool.
Lenore Mullin, Gaétan Hains
Attack Graph Generation on HPC Clusters
Abstract
Attack graphs (AGs) are graphical tools to analyze the security of computer networks. By connecting the exploitation of individual vulnerabilities, AGs expose possible multi-step attacks against target networks, allowing system administrators to take preventive measures to enhance their network’s security. As powerful analytical tools, however, AGs are both time- and memory-consuming to generate. As the number of network assets, interconnections between devices, and vulnerabilities increases, the size and volume of the resulting AGs grow at a much higher rate, leading to the well-known state-space explosion. In this paper, we propose the use of high performance computing (HPC) clusters to implement AG generators. We evaluate the performance through experiments and provide insights into how cluster environments can help resolve the issues of slow speed and high memory demands in AG generation in a balanced way.
Ming Li, John Hale
Analyzing the Influence of File Formats on I/O Patterns in Deep Learning
Abstract
Deep Learning applications have become an important solution for analyzing and making predictions with massive amounts of data in recent years. However, this type of application introduces significant input/output (I/O) loads on computer systems. Moreover, when executed on distributed systems or parallel distributed memory systems, they handle large amounts of data that must be read during training. This persistent and continuous access to files can overwhelm file systems and negatively impact application performance. A file format defines how information is stored, and the choice of a format depends on the use case. Therefore, it is important to analyze how the file format influences the training stage when loading and reading the dataset, as opening and reading many small files can affect application performance. Thus, this paper analyzes the I/O patterns of different file formats used in deep learning applications to characterize their behavior.
Betzabeth Leon, Edixon Parraga, Sandra Mendez, Dolores Rexachs, Remo Suppi, Emilio Luque

Workshop on Mathematical Modeling and Problem Solving (MPS)

Frontmatter
Inference of Cell–Cell Interactions Through Spatial Transcriptomics Data Using Graph Convolutional Neural Networks
Abstract
Understanding cell–cell interactions is crucial for unraveling the complexities of multicellular organisms and holds promising implications for advancements in medical science. These interactions, mediated through specific ligand-receptor pairs, remain partially identified. The rapid evolution of gene expression analysis technologies, especially spatial transcriptomics, now allows for the precise capture of gene expression while maintaining cellular localization. While studies using spatial transcriptomics data to visualize known cell–cell interactions are achieving great success, their application to infer unknown cell–cell interaction pairs has not yet been fully investigated.
In this study, we introduce a novel approach utilizing Graph Convolutional Neural Networks (GCNN) to infer cell–cell interactions from spatial transcriptomics data. Previous efforts have demonstrated the utility of GCNNs for data obtained through the continuous FISH (fluorescence in situ hybridization) method. We propose an alternative strategy to adapt GCNN-based cell–cell interaction prediction methods to data acquired by in situ capture methods. Additionally, we address the challenge of properly generating training data for the model, implementing a solution that significantly enhances the estimation process. Our findings reveal that the method used to transform Spatial Transcriptomics data into a graph significantly impacts the accuracy of interaction predictions, with prediction accuracies ranging from 80% to 90% under certain conditions.
Takahiro Hiura, Shigeto Seno, Hideo Matsuda
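Since the abstract stresses that the way spatial data are turned into a graph drives prediction accuracy, one common (generic) construction is a k-nearest-neighbour graph over cell coordinates; a minimal sketch under that assumption (not necessarily the construction chosen in the paper, and the parameter k = 6 is arbitrary):

    # Generic k-nearest-neighbour graph over cell positions, one possible
    # way to turn spatial transcriptomics data into a GCNN input graph.
    import numpy as np
    from scipy.spatial import cKDTree

    def knn_edge_index(coords, k=6):
        """Return a (2, N*k) array of directed edges i -> j (k nearest cells)."""
        tree = cKDTree(coords)
        # query k+1 neighbours because each point's nearest neighbour is itself
        _, idx = tree.query(coords, k=k + 1)
        src = np.repeat(np.arange(len(coords)), k)
        dst = idx[:, 1:].ravel()
        return np.stack([src, dst])

    coords = np.random.rand(100, 2)        # x, y positions of 100 cells (dummy data)
    edge_index = knn_edge_index(coords, k=6)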
Natural Product-Like Compound Generation with Chemical Language Models
Abstract
Natural products are substances produced by organisms in nature and often possess biological activity and structural diversity. Drug development based on natural products has been common for many years. However, the intricate structures of these compounds present challenges in terms of structure determination and synthesis, particularly compared to the efficiency of high-throughput screening of synthetic compounds. In recent years, deep learning-based methods have been applied to the generation of molecules. In this study, we trained chemical language models on a natural product dataset and generated natural product-like compounds. The results showed that the distribution of the compounds generated was similar to that of natural products. We also evaluated the effectiveness of the generated compounds as drug candidates. Our method can be used to explore the vast chemical space and reduce the time and cost of drug discovery of natural products.
Koh Sakano, Kairi Furui, Masahito Ohue
Improved Early–Modern Japanese Printed Character Recognition Rate with Generated Characters
Abstract
The National Diet Library’s digital collection contains about 400,000 valuable books from the Meiji period to the early Showa period. The books are stored as image data and have not been converted into text, so the use of the information they contain is limited. There are manual and automatic methods for converting Early-modern Japanese printed books into text, but manual methods are prohibitively expensive. OCR is used for automation, but the characteristics of Early-modern Japanese printed books reduce recognition rates. Therefore, it is necessary to develop a character recognition method specific to Early-modern Japanese printed books. Collecting Early-modern Japanese printed characters is also a manual process, and it is difficult to collect many characters evenly. In this paper, we propose a method to improve Early-modern Japanese printed character recognition accuracy using images generated from modern characters. CycleGAN is used to generate images of Early-modern characters from modern characters. The generated images are incorporated into the training data to create a character recognition model. The experiment showed that the recognition rate was improved by using the generated images in the training data.
Norie Koiso, Yuki Takemoto, Yu Ishikawa, Masami Takata
Improved Method for Similar Music Recommendation Using Spotify API
Abstract
In this study, we improved a similar-music recommendation method. A similar-music recommendation method using the Spotify API had previously been proposed as a music retrieval method. The baseline method computes the Euclidean distance between the audio features obtained from the Spotify API. In this method, the normalization of the obtained audio features and the validation of the features used were insufficient. Therefore, in this study, we improved this method by adopting normalization, audio feature selection, and similarity computations based on cosine similarity. It was verified through experiments that normalizing appropriate features with the min-max method and computing similarity using the Euclidean distance is effective.
Miho Chiyonobu, Masami Takata
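The normalization and similarity computations mentioned are standard; a small sketch with placeholder feature values (the actual Spotify audio features and the feature selection used in the study are not reproduced here):

    # Min-max normalization of audio feature vectors, then Euclidean
    # distance and cosine similarity against a query track. Values are
    # placeholders, not real Spotify API output.
    import numpy as np

    def min_max(X):
        lo, hi = X.min(axis=0), X.max(axis=0)
        return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

    def cosine_sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    X = np.array([[0.70, 120.0, 0.80],     # e.g. danceability, tempo, energy
                  [0.55, 128.0, 0.60],
                  [0.20,  80.0, 0.30]])
    Xn = min_max(X)
    query = Xn[0]
    euclidean = np.linalg.norm(Xn - query, axis=1)
    cosine = np.array([cosine_sim(query, row) for row in Xn])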
Reconfigurable Virtual Accelerator (ReVA) for Large-Scale Acceleration Circuits
Abstract
In recent years, hardware acceleration in large-scale computing fields such as Artificial Intelligence and High Performance Computing faces hardware resource shortages. To overcome this problem, we propose the Reconfigurable Virtual Accelerator (ReVA), which allows a large-scale acceleration circuit, built using multiple FPGAs, processors and memory subsystems, to accelerate application programs.
We have designed and implemented a prototype of the Virtual Accelerator (VA) Generator for ReVA to investigate its performance. The VA Generator prototype employs an open-source HLS automated split compilation tool, RapidStream, to automatically generate placed-and-routed virtual accelerators that can be implemented on multiple FPGAs based on HLS dataflow designs. The VA Generator, which places VAs on the appropriate regions of the FPGAs, allows large circuits to run as a single accelerator across several FPGAs. In addition, RapidStream’s parallel compilation technology allows the VA Generator to suppress the increase in compilation time as the circuit size grows. Moreover, with our VA Generator prototype we have built and evaluated a VA of a large-scale circuit that cannot fit in a single FPGA, using multiple FPGAs.
Kazuki Yaguchi, Eriko Maeda, Shunya Kawai, Daichi Teruya, Yasunori Osana, Takefumi Miyoshi, Hironori Nakajo
Building Simulation Environment of Reconfigurable Virtual Accelerator (ReVA)
Abstract
In recent years, research in AI and HPC has explored accelerating computations using FPGAs. High-Level Synthesis (HLS) is beneficial for implementing algorithms from these fields onto FPGAs as circuits. However, the circuits generated by HLS are generally larger than those designed with HDL. Moreover, the operations in these fields tend to increase in number and complexity, and the FPGA resources required are increasing accordingly. Therefore, using FPGAs in practice presents challenges regarding resource restrictions. To address these issues, we are researching the Reconfigurable Virtual Accelerator (ReVA), which allows the sharing of resources across multiple FPGAs and enables the implementation of large-scale circuits. ReVA creates and shares virtual accelerators (VAs) using the resources of multiple FPGAs. Processors and VAs in a ReVA share data using distributed shared memory (DSM). Furthermore, the data on the VA and DSM are dynamically arranged so that access from each of the processors used is as short as possible. In this paper, we propose and implement the ReVA Simulator. The ReVA Simulator can reproduce ReVA operation without the need to prepare an actual device with an FPGA, processor and memory connected. Furthermore, we estimated the execution time when utilizing ReVA and conducted evaluations. The evaluation results show that the ReVA Simulator reduces FFT execution time by \(36\%\) compared with execution in C.
Shunya Kawai, Eriko Maeda, Kazuki Yaguchi, Yasunori Osana, Takefumi Miyoshi, Hironori Nakajo
Vector Register Sharing Mechanism for High Performance Hardware Acceleration
Abstract
In recent years, demand for data-parallel processing has been growing, and this parallelism often appears in AI workloads. One method to accelerate these processes is to use a domain-specific architecture (DSA). A common data transfer method on a DSA is direct memory access (DMA). There are several studies on DMA-based accelerators; however, few studies focus on data transfer methods. In this paper, a vector register-sharing mechanism is proposed as a new data transfer method. Our proposed mechanism is named “SHAVER”. In this mechanism, a portion of the vector registers is directly shared with an accelerator. An open-source RISC-V vector co-processor is used to evaluate the mechanism’s potential. It has been implemented on an FPGA and in a simulator for the evaluations. The results indicate that the proposed mechanism can achieve a speedup of up to 7.13% over DMA transfer.
Tomoaki Tanaka, Michiya Kato, Yasunori Osana, Takefumi Miyoshi, Jubee Tada, Kiyofumi Tanaka, Hironori Nakajo
Efficient Compute Resource Sharing of RISC-V Packed-SIMD Using Simultaneous Multi-threading
Abstract
AI tasks are gaining popularity in the area of IoT and edge devices. To run such tasks on these devices, QNNs are used because of their reduced size and their ability to be computed with simple integer arithmetic. There have been many implementations to support such a network format. However, when considering thread-level parallelism to speed up the program, many implementations adopt a multi-core architecture or clusters, which need to copy all resources for each core. In this paper, we introduce a new RISC-V Out-of-Order Simultaneous Multi-Threading core, “B4SMT”, with the RISC-V Packed-SIMD extension for evaluation. We also show that even a single executor can increase the performance of a 1D median filter by over 100\(\times \), and of a matrix multiplication by over 30\(\times \), when running more than 16 threads efficiently. Furthermore, we suggest that other infrequently used executors may be placed as shared resources efficiently in an SMT core.
Shogo Takata, Hironori Nakajo
Introducing Competitive Mechanism to Differential Evolution for Numerical Optimization
Abstract
This paper introduces a novel competitive mechanism into differential evolution (DE), presenting an effective DE variant named competitive DE (CDE). CDE features a simple yet efficient mutation strategy: DE/winner-to-best/1. Essentially, the proposed DE/winner-to-best/1 strategy can be recognized as an intelligent integration of the existing mutation strategies of DE/rand-to-best/1 and DE/cur-to-best/1. The incorporation of DE/winner-to-best/1 and the competitive mechanism provide new avenues for advancing DE techniques. Moreover, in CDE, the scaling factor F and mutation rate Cr are determined by a random number generator following a normal distribution, as suggested by previous research. To investigate the performance of the proposed CDE, comprehensive numerical experiments are conducted on CEC2017 and engineering simulation optimization tasks, with CMA-ES, JADE, and other state-of-the-art optimizers and DE variants employed as competitor algorithms. The experimental results and statistical analyses highlight the promising potential of CDE as an alternative optimizer for addressing diverse optimization challenges.
Rui Zhong, Yang Cao, Enzhi Zhang, Masaharu Munetomo
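For context, the two classical strategies named in the abstract have standard forms, shown below together with one plausible winner-based combination; the last function is only an assumption for illustration, since the exact DE/winner-to-best/1 formulation is given in the paper:

    # Standard DE mutation strategies referenced above, plus an assumed
    # winner-based variant. x_best is the best individual, x_i the current
    # one, x_r1..x_r3 distinct random individuals, F the scaling factor.
    import numpy as np

    def rand_to_best_1(x_r1, x_r2, x_r3, x_best, F):
        return x_r1 + F * (x_best - x_r1) + F * (x_r2 - x_r3)

    def cur_to_best_1(x_i, x_r1, x_r2, x_best, F):
        return x_i + F * (x_best - x_i) + F * (x_r1 - x_r2)

    def winner_to_best_1(x_i, x_r1, x_r2, x_r3, x_best, F, fitness):
        # Assumed form: the winner of a pairwise competition (lower fitness,
        # assuming minimization) serves as the base vector. See the paper
        # for the actual CDE definition.
        x_w = x_i if fitness(x_i) <= fitness(x_r1) else x_r1
        return x_w + F * (x_best - x_w) + F * (x_r2 - x_r3)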
Hyper-heuristic Differential Evolution with Novel Boundary Repair for Numerical Optimization
Abstract
Inspired by the architecture of the hyper-heuristic (HH) algorithm, we design a mutation operator archive, a crossover operator archive, and a boundary repair operator archive to propose a novel hyper-heuristic differential evolution (HHDE). The mutation operator archive and the crossover operator archive contain multiple representative search operators derived from different versions. A learning-free selection function, which utilizes an unbiased probability approach, is employed to autonomously determine the optimization sequence from these archives. This function serves as the high-level component of the HH framework. Additionally, we focus on the boundary repair operator, an element often overlooked in the design of the evolutionary algorithm (EA). Based on previous research, our designed boundary repair operator archive introduces two novel boundary repair techniques: optimum inheritance and iterative opposite-based mapping. Comprehensive numerical experiments on 10-D and 20-D CEC2022 benchmark functions and six engineering optimization problems are conducted to assess the efficacy of our proposed HHDE. The performance of HHDE was compared against a range of other state-of-the-art competitor optimizers. The experimental results and statistical analysis confirm the competitiveness and efficiency of HHDE. The source code of HHDE can be found at https://github.com/RuiZhong961230/HHDE.
Rui Zhong, Jun Yu, Masaharu Munetomo
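Boundary repair operators map an out-of-bounds candidate back into the search region [lb, ub]; a few common generic operators are sketched below for context (the paper's two novel operators, optimum inheritance and iterative opposite-based mapping, are not reproduced here):

    # Common boundary repair operators for a candidate x in [lb, ub].
    # Generic illustrations only; HHDE's own operators are in the paper.
    import numpy as np

    def clip_repair(x, lb, ub):
        return np.clip(x, lb, ub)

    def reflect_repair(x, lb, ub):
        y = np.where(x < lb, 2 * lb - x, x)   # reflect values below the lower bound
        y = np.where(y > ub, 2 * ub - y, y)   # reflect values above the upper bound
        return np.clip(y, lb, ub)             # guard against over-reflection

    def random_repair(x, lb, ub, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        out = (x < lb) | (x > ub)
        return np.where(out, rng.uniform(lb, ub, size=x.shape), x)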
Jump Like a Frog: Optimization of Renewable Energy Prediction in Smart Grid Based on Ultra Long Term Network
Abstract
Renewable energy generation forecasting plays a crucial role in advanced smart grids and sustainable practices. Although many RNN-related methods have been used to predict power generation time series data, they often struggle to capture very long-term correlations efficiently due to the vanishing gradient issue. To address this challenge, we introduce the Ultra Long Term Network model, which incorporates LSTM, SKIP-LSTM and Dense components. This model effectively captures long-term patterns while mitigating the vanishing gradient problem associated with capturing very long-term patterns. Applied to renewable power prediction, the model yields better performance, as measured by metrics such as MSE and MAE, than previous models such as LSTM, GRU and simple RNN in time series analysis within smart grids. The integration of this model holds promise for enhancing the intelligence of renewable energy grids.
Xingbang Du, Enzhi Zhang
Vision Transformer-Based Meta Loss Landscape Exploration with Actor-Critic Method
Abstract
Detecting and mitigating overfitting in deep neural networks remains a critical challenge in modern machine learning. This paper investigates innovative approaches to address these challenges, particularly focusing on vision transformer-based models. By leveraging meta-learning techniques and reinforcement learning frameworks, we introduce Transformer-based Loss Landscape Exploration (TLLE), which utilizes the validation loss landscape to guide gradient descent optimization. Unlike conventional methods, TLLE employs the Actor-Critic algorithm to learn the mapping from model weights to future values, facilitating efficient sample collection and precise value predictions. Experimental results demonstrate the superior performance of TLLE-enhanced transformer models in image classification and segmentation tasks, showcasing the efficacy of our approach in optimizing deep learning models for image analysis.
Enzhi Zhang, Rui Zhong, Xingbang Du, Mohamed Wahib, Masaharu Munetomo
Fast Computation Method for Stopping Condition of Range Restricted GMRES Method
Abstract
In this paper, we propose a method for fast computation of the stopping condition of the Range Restricted Generalized Minimal Residual (RRGMRES) method. The RRGMRES method is iterative, and as the number of iterations increases, the matrix size increases. The stopping condition for the iterations requires a condition number, which is expressed as the ratio of the largest singular value to the smallest singular value. The proposed method employs the Cholesky LR method and inverse iteration. Experimental comparison with the conventional method shows that the proposed method is 10 times faster.
Miho Chiyonobu, Masami Takata, Kinji Kimura, Yoshimasa Nakamura
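The stopping condition relies on the condition number, the ratio of the largest to the smallest singular value; for small matrices it can be checked directly from an SVD, as in the generic sketch below (the paper's contribution is computing this quantity cheaply with the Cholesky LR method and inverse iteration, which is not shown here):

    # Condition number as the ratio of extreme singular values.
    # Direct SVD reference computation for illustration; the proposed
    # method replaces this with a cheaper Cholesky LR / inverse-iteration
    # estimate inside the RRGMRES iteration.
    import numpy as np

    def condition_number(H):
        s = np.linalg.svd(H, compute_uv=False)   # singular values, descending
        return s[0] / s[-1]

    H = np.random.rand(8, 8)                     # dummy test matrix
    print(condition_number(H), np.linalg.cond(H))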
Implementation of the OQDS Method for Principal Component Analysis
Abstract
In this paper, we propose a method for computing partial singular values and the corresponding singular vectors. PCA (Principal Component Analysis) requires only the larger singular values and the corresponding singular vectors. Generally, these are obtained by combining the bisection method and the inverse iteration method. However, there are some input matrices, such as the glued Kimura matrix, for which the inverse iteration method fails. Therefore, the OQDS (Orthogonal QD with Shift) method is adopted in this paper. The OQDS method can compute the smaller singular values and the corresponding right singular vectors of a bidiagonal matrix with high accuracy, provided that the matrix does not split during the decomposition. In practice, however, splitting usually occurs, and it is not clear on which side of the split the smaller singular values fall. Therefore, to apply the OQDS method to PCA, it is necessary to consider how to deal with splitting. Thus, in this paper, we propose a new implementation of the OQDS method that is not affected by splitting. Experiments have confirmed that the method is fast while maintaining reliability.
Miho Chiyonobu, Masami Takata, Kinji Kimura, Yoshimasa Nakamura
Backmatter
Title
Parallel and Distributed Processing Techniques
Edited by
Hamid R. Arabnia
Masami Takata
Leonidas Deligiannidis
Pablo Rivas
Masahito Ohue
Nobuaki Yasuo
Copyright Year
2025
Electronic ISBN
978-3-031-85638-9
Print ISBN
978-3-031-85637-2
DOI
https://doi.org/10.1007/978-3-031-85638-9

The PDF files of this book do not fully conform to the PDF/UA standards, but they offer limited screen-reader support, described non-textual content (images, graphics), bookmarks for easy navigation, and searchable, selectable text. Users of assistive technologies may have difficulty navigating or interpreting the content of this document. We are aware of the importance of accessibility and welcome enquiries about the accessibility of our products. For questions or accessibility needs, please contact us at accessibilitysupport@springernature.com
