Open Access 08.01.2025 | Original Article

On Performance of Data Models and Machine Learning Routines for Simulations of Casting Processes

Authors: Amir M. Horr, Rodrigo Gómez Vázquez, David Blacher

Published in: BHM Berg- und Hüttenmännische Monatshefte | Issue 1/2025


Abstract

This article explores the application of data models and machine learning routines for the simulation of casting processes, focusing on their performance and challenges. It discusses the integration of data science techniques into engineering processes and highlights the benefits of fast, real-time data models for the optimization and control of material processes. The study underlines the importance of accurate and reliable data models, especially in dynamic processes such as continuous casting, where computational efficiency and product quality are critical. The research includes a vertical casting case study presenting the implementation of a web-based casting enquiry platform that uses real-time models to optimize process parameters and improve product quality. The article concludes by emphasizing the potential of these tools to reduce costs and improve predictive control in manufacturing processes.
Notes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Empirical, physics-based, and numerical solvers have been used extensively to model casting processes, including their associated thermal evolution, heat transfer, melt flow, and solidification. In recent years, with the implementation of data science techniques across several branches of engineering, various fast and real-time data models have also been developed to facilitate the optimisation and control of material processes [1, 2]. Although these fast data models may sacrifice some accuracy for computational ease and efficiency, they often achieve acceptable performance with the help of ML routines and data training. These models are particularly effective for processes with many varying parameters, where the number of scenarios required for numerical and physics-based simulations would be computationally unworkable.
There are many reasons to implement fast and real-time models for material processes like continuous casting, where the operating environments are highly dynamic. These processes are often associated with significant material waste and preparation downtime, and they may require rapid adjustments during processing to maintain product quality. Efficient and fast data models can reduce computational time and resources, leading to lower energy consumption, better final product quality, and cost savings. The integration of these fast prediction tools into online and offline auxiliary platforms can also make it possible to optimize processes quickly [3]. This would help casting houses reduce preparation downtime by developing their initial plans faster and more safely, ultimately reducing overall production costs.
This contribution summarizes the performance of, and challenges associated with, data-reduced model techniques for continuous casting processes, where issues of accuracy, reliability, and applicability have been scrutinized for real-world manufacturing processes. Furthermore, the outcomes of data training and data management using ML routines for improving real-time predictions are also examined. The integration of these trained models into a web-based casting enquiry platform is also explored, and their application to a real-world vertical casting process is presented as a case study.

2 Numerical Simulations for Casting Processes

The implementation of numerical simulations for casting processes, including their associated cooling and solidification sub-processes, has been widely discussed, with special attention paid to their strongly multi-physical and multi-scale attributes. Different simulation strategies have been developed, including coupled fluid-thermal, fluid-thermal-mechanical, and fluid-thermal-microstructure simulations, in which different aspects of the casting processes were examined [4]. In the research work herein, a fluid-thermal CFD simulation framework has been set up to study the thermal evolution and solidification phenomena during continuous casting processes.

2.1 CFD Simulations for Casting Processes

Various numerical simulation techniques based on Eulerian, Lagrangian, Arbitrary Lagrangian/Eulerian (ALE), Mixed Lagrangian/Eulerian (MiLE), and Multi Material ALE (MMALE) formulations have been proposed for the casting processes [4]. Many of the current CFD solvers for industrial casting processes are based on these rigorous mathematical and analytical methods, which utilize some conventional, dynamic, and hybrid mesh strategies.
However, the dynamic nature and transient generation of billets in continuous and semi-continuous casting are among the great difficulties in simulating these processes, since the location and shape of the boundaries/interfaces are dynamic [4]. The numerical grids need either to be generated at the start of the simulation and activated later during the run, or a dynamic mesh scheme needs to be implemented from the start [5].
In this research work, a fixed-domain strategy has been utilized for the fluid-thermal CFD simulations, where the entire mesh for the numerical domain is created at the start of the simulation. Since the results of the CFD simulations need to be postprocessed and filtered for the database-building exercise, all casting scenarios were carried out using identical mesh and geometric boundaries. Variations of initial melt temperature, cooling, and casting speed for the different scenarios were implemented via varying input values.

2.2 CFD Modelling of Solidification

The solidification phenomena during casting processes have their own complex attributes, where phase change, grain formation, and thermal evolution require special numerical attention. Conventional numerical approaches include simplified conduction-based energy transfer mechanisms, explicit conduction-convection schemes, and enthalpy concepts, which can be calculated analytically during the casting process. The most popular analytical phase-transition equation for industrial casting applications is the Scheil-Gulliver equation [6], in which the solid fraction is defined by the melt flow, partition coefficient, and solidus and liquidus temperatures.
In this research work, the directChillFoam open-source solver [7], an extension of the OpenFOAM simulation package [8], has been adopted for our vertical casting application. For the phase transformation herein, a continuum mixture formulation based on a single-region incompressible approach is adopted, in which the conservation of mass, momentum, energy, and species is considered, following the model originally proposed in [9]. Thermal and solutal buoyancy effects are considered by means of additional source terms added to the conservation equations (i.e. a Boussinesq formulation). The solver takes advantage of the PIMPLE algorithm, which allows a choice between transient (i.e. PISO) and steady-state (i.e. SIMPLE) calculations [10]. Porous and slurry flow regimes are treated independently, and the transition from one to the other occurs at the coherence fraction. For the thermal evolution during the casting process, heat transfer coefficients (HTCs) are prescribed along the billet's outer surface.
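The transient/steady choice mentioned above is made through the solver's PIMPLE settings. As a minimal, hypothetical sketch (the exact entries in the directChillFoam case files may differ), an OpenFOAM system/fvSolution fragment selecting PISO-like transient behaviour could look like:

```cpp
// system/fvSolution (fragment) -- illustrative values only
PIMPLE
{
    nOuterCorrectors         1;   // 1 outer loop => PISO-like transient mode
    nCorrectors              2;   // pressure corrections per outer loop
    nNonOrthogonalCorrectors 1;   // extra corrections for mesh non-orthogonality
}
```

Raising nOuterCorrectors tightens the pressure-velocity coupling within each time step, moving the behaviour towards a SIMPLE-like iterative scheme.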

2.3 Continuous Casting: Numerical Prerequisites

For the casting case study herein, the validation case in directChillFoam [7], based on the vertical direct-chill casting experiment by Vreeman et al. [11], was taken as a starting point. A general description of the case setup is publicly available in the official documentation [12]. Only minor changes have been applied to the original case, mainly to allow the parametric study required to train and validate the data model with different combinations of initial melt temperature, cooling flow rate, and casting speed.
A wedge-shaped, axisymmetric pseudo-2D domain meshed with a structured grid is adopted. The element density along the axial length was slightly increased with respect to the original setup, and a linear grading was employed to avoid sudden changes in mesh element size (see Fig. 1). Material properties correspond to the binary alloy Al-6 wt% Cu. The phase change follows temperature-dependent tabular values of the melt fraction (e.g. obtained from a CALPHAD software package), and the solute properties follow the lever rule. The HTCs at the mould-melt interface (primary cooling) were locally averaged depending on the value of the solid fraction. The heat transfer at the billet-water interface (secondary cooling) was introduced as tabular values obtained from the correlation proposed in [13].
Fig. 1
Schematic view of vertical die chill casting along with mesh and CFD solver assumptions
The simulations were run in transient mode to ensure a correct description of the process physics at steady process conditions. The runtime was set to 2000 s, more than double the period required for reaching a quasi-steady state. It should be noted that some conditions show stronger persistent fluctuating behaviour than others, due to the chaotic nature of the coupled phenomena involved. For that reason, time-averaged temperature and melt-fraction results over the last 1000 s were saved for all cell centers using the ParaView software (i.e. the "temporal statistics" filter). These values were then exported in CSV format, along with their spatial information (i.e. x- and z-coordinates), to allow further postprocessing.
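The time-averaging step described above can be sketched in a few lines of Python; the array shapes and values here are illustrative stand-ins for the exported cell-centre data, not the actual case output:

```python
import numpy as np

def temporal_average(snapshots, times, t_start=1000.0):
    """Average cell-centre fields over all snapshots with t >= t_start,
    mirroring the mean computed by ParaView's 'temporal statistics' filter."""
    snapshots = np.asarray(snapshots)   # shape: (n_times, n_cells)
    times = np.asarray(times)
    mask = times >= t_start
    return snapshots[mask].mean(axis=0)

# Toy demo: a constant field plus a decaying start-up transient on 4 cells
times = np.linspace(0.0, 2000.0, 201)
field = 900.0 + np.outer(np.exp(-times / 200.0), np.ones(4))
avg = temporal_average(field, times, t_start=1000.0)
```

Because the transient has decayed by t = 1000 s, the averaged field is essentially the steady value of 900 in each cell.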

3 Data Models for Casting Processes

Data science techniques, data solvers/interpolators, and real-time models are essential components of digitalisation and twinning/shadowing frameworks for material processes. They enable process control and monitoring using accurate real-time predictor/corrector data models to ensure smooth and stable processing. They also contribute to the optimization of process parameters, leading to more cost-efficient and higher-quality product output.

3.1 Data Solvers and Database Building

The increasing use of reduced-order and real-time models, and their crucial role in the modelling and control of material processes within twinning and shadowing frameworks, have inspired much research in recent years, especially for dynamic processes like casting. Although these data models are based on databases built from existing and live manufacturing data, their level of accuracy for engineering applications has been thoroughly scrutinized by researchers [14, 15]. Among the most popular data solvers for material process applications are proper orthogonal decomposition (POD), singular value decomposition (SVD), and proper generalized decomposition (PGD), which are based on mathematical eigen-base schemes [15]. Data solvers based on these techniques can be defined as decomposition solvers (spatial and temporal) in which algebraic approximations of system responses are created to allow fast reconstruction of system characteristics. The initial database for these reduced-order models (ROMs) can originate from experimental work, numerical simulations, or mined data, where variations of pre-defined parameters are considered to fill the so-called snapshot matrix. For POD and SVD, the factorization of the responses can be written in mathematical form as [14, 15]:
$$Ax=F\left(x,t\right),\qquad R(x,t)=U\Sigma V^{T}$$
(1)
where x is the eigenvector of A, R is the full or reduced system response in the database, the decomposed matrices U and V^T are orthogonal matrices related to the spatial and temporal decomposition, and Σ is the singular-value (eigenvalue) matrix.
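The SVD factorization in Eq. (1) can be illustrated with NumPy on a synthetic snapshot matrix; the mode and scenario counts below are arbitrary stand-ins for real CFD data:

```python
import numpy as np

# Snapshot matrix R: each column is one simulated scenario's field,
# each row one spatial location (a minimal stand-in for CFD output).
rng = np.random.default_rng(0)
modes = rng.standard_normal((50, 3))    # 3 dominant spatial modes
coeffs = rng.standard_normal((3, 8))    # 8 casting scenarios
R = modes @ coeffs                      # rank-3 snapshot matrix

U, s, Vt = np.linalg.svd(R, full_matrices=False)

# Truncate to k modes: rank-k approximation R ~ U_k Sigma_k V_k^T
k = 3
R_k = (U[:, :k] * s[:k]) @ Vt[:k, :]
err = np.linalg.norm(R - R_k) / np.linalg.norm(R)
```

Since the synthetic matrix has rank 3, keeping three modes reconstructs it to machine precision; for real casting data, k is chosen by inspecting the decay of the singular values in s.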
Other popular data solvers include two-stage kriging and principal component analysis (PCA), in which the primary dimensionality-reduction task is performed by eliminating less important dimensions while retaining those that carry the essential data characteristics [16]. Conventional regression and symbolic regression methods are also among the most established data-solver techniques, where variations of input data can be ranked by their impact. Symbolic regression (SR) and genetic-algorithm symbolic regression (GASR) are among the more sophisticated regression techniques, performing regression analysis within a multi-dimensional search space [16]. Clustering is another well-known data-solver approach that groups data points by detecting common features; it can be defined as an unsupervised learning scheme that traces patterns in the available data for better insight.

3.2 Real-time Models and Machine Learning

Different data science schemes based on mathematical decomposition, projection, and classification techniques have already been developed for fast predictor/corrector models. In recent years, however, advances in data science techniques and ML routines have drawn attention to a new generation of real-time models with very fast response times. To elaborate on the development of these real-time models for industrial applications, the governing equation for a simple case of transient mechanical vibration can be considered as:
$$\left[K\right]\left[u\right]+\left[M\right]\left[\ddot{u}\right]+\left[C\right]\left[\dot{u}\right]=F(t)$$
(2)
where [K], [M], [C], and [u] are the stiffness, mass, and damping matrices and the displacement vector of the vibrating system. If the tempo-spatial characteristics of the vibrating system are decomposed, the frequency-domain characteristic equation can be written as:
$$u\left(x,t\right)=\sum _{n}\hat{u}_{n}\left(x,\omega _{n}\right)e^{i\omega _{n}t}$$
(3)
where \(\omega_{n}\) and \(\hat{u}_{n}\) are the system natural frequencies and Fourier coefficients. Thus, the final governing equation in the frequency domain can be written as \(\left[\widehat{K}_{f}\right]\left[u\right]=\widehat{F}\), where \(\left[\widehat{K}_{f}\right]\) and \(\widehat{F}\) are the spectrally decomposed damped dynamic stiffness and force matrices. However, since the characteristic equation of the system herein is calculated from the system properties (i.e. geometry, material properties, boundaries), the computational time and effort required to solve the equations are substantial. An alternative, data-driven governing equation for the response of the vibrating system can be considered as [16]:
$$\left[C\right]\left[X\left(x,t\right)\right]=[Y\left(x,t\right)]$$
(4)
where [C] is the estimated system characteristic matrix (obtained using data techniques), [X(x,t)] holds the input values (from the snapshot-matrix cases), and \([Y\left(x,t\right)]\) can be interpreted as the desired response at specific values of the variables. The data techniques used here are based on the same mathematical decomposition and eigenvalue concepts to form the system characteristic equation; however, calculated or measured responses of the system over the range of the variables are used to formulate the system matrices and characteristic equations. Snapshot matrices are used to define potential scenarios for database building through variation of the process parameters. The calculated/measured data from these scenarios can be utilized to form a response database that holds the basic characteristic information about the system. If data eigen-solvers like SVD or POD are used to solve the system responses for the desired parameters, the optimal expansion of the matrix can be written as:
$$\left[X\left(x,t\right)\right]=U\Sigma V^{T}$$
(5)
where U represents the spatial eigenvector decomposition, Σ is the diagonal matrix of singular values, and V^T represents the temporal decomposition of the transient system responses. For casting processes, where the thermal evolution drives solidification and grain structure formation, the temperature prediction can be carried out using:
$$[T_{k}\left(x,y,z\right)]=U_{k}\Sigma _{k}V_{k}^{T}$$
(6)
where \(T_{k}\) are the temperatures at process time t. For these processes, the data solver needs to be accompanied by an appropriate data interpolator for accurate predictions.
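The prediction step behind Eqs. (5) and (6) can be sketched as follows: build a snapshot matrix over sampled process parameters, decompose it with SVD, interpolate the modal coefficients to an unseen parameter value, and reconstruct the field. The 1-D "temperature profiles" and their dependence on casting speed are hypothetical stand-ins chosen so the result can be checked analytically:

```python
import numpy as np

# Hypothetical profiles T(x; v) = 900 - 40*v*x on a fixed grid,
# one column per sampled casting speed v (stand-ins for CFD snapshots).
x = np.linspace(0.0, 1.0, 60)
speeds = np.array([1.0, 1.5, 2.0, 2.5])
X = np.stack([900.0 - 40.0 * v * x for v in speeds], axis=1)  # (n_cells, n_scenarios)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                    # keep the dominant spatial modes
coeff = np.diag(s[:k]) @ Vt[:k, :]       # per-scenario modal coefficients

# Interpolate the modal coefficients to an unseen speed, then
# reconstruct the temperature field from the spatial modes.
v_new = 1.75
c_new = np.array([np.interp(v_new, speeds, coeff[i]) for i in range(k)])
T_new = U[:, :k] @ c_new
T_ref = 900.0 - 40.0 * v_new * x         # analytic reference for this toy case
```

Because the toy profiles depend linearly on v, linear interpolation of the coefficients reproduces the reference exactly; for real casting data, a Kriging or RBF interpolator over the parameter space takes the place of np.interp.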

3.3 ROM for Continuous Casting Processes

The application of real-time models to material processes like casting can be examined to investigate the accuracy and reliability of these techniques for rapid design and real-time optimization. For this research work, the concepts and challenges of creating reliable real-time models for casting processes were addressed, and a rigorous framework was set up to create enough data for the database-building exercise. The development work involves the following key steps and milestones to guarantee the accuracy, reliability, and relevance of the models:
  • For the initial step, the prediction goals/objectives and their implementations are defined for the industrial casting processes.
  • In the second step, the snapshot matrix is created for the process characteristics.
  • In the next step, process scenarios are simulated using open-source solver in [7].
  • In a further step, real-time models are iteratively generated for the casting processes.
  • For the next step, further DOE scenarios are carried out for validation exercises.
  • In the final steps, a comparative study for the performance of these models is conducted using statistical and deterministic schemes.
Although full fluid-thermal-mechanical and coupled microstructure simulations are required to capture the complete multi-physical and multi-scale aspects of casting processes, the real-time models in this research work are developed from fluid-thermal CFD simulations only.

4 Performance Study

Various combinations of data solvers and interpolators can be employed for transient casting simulations, where thermal and mechanical stress/strain evolution and even microstructure formation might be considered. During this study, rigorous comparative assessments were performed to identify the best performing solvers and interpolators and to examine the validity and reliability of the resulting models.

4.1 Solver-Interpolator Best Performers

The combination of an appropriate data solver and a suitable interpolator should be considered for dynamic processes like casting. In the research work herein, different data solvers, including eigen-based (e.g. POD, SVD), regression, clustering, and FFT solvers, along with Kriging, inverse distance (InvD), and radial basis function (RBF) data interpolators, have been employed to build efficient real-time solvers for casting processes. The combination of eigen-based solvers with Kriging interpolators has been used successfully for predictions in dynamic and transient processes (i.e. time-dependent predictions). Regression solvers with both normalization and standardization schemes, on the other hand, have been used for data with complex patterns.
Data interpolators like Kriging can generally provide accurate predictions even at unobserved points of interest, while also estimating the uncertainty associated with these predictions. Consider the grid points in a simulation as N(x, y, z, t, T), where x, y, z are the spatial coordinates and t and T represent the time and the calculated temperature at those points. To predict the temperature at an unobserved point Nk (xk, yk, zk, tk, Tk), Kriging estimates the value as:
$$\widehat{N}\left(x_{k},y_{k},z_{k},t_{k},T_{k}\right)={\sum }_{i=1}^{n}\lambda _{i}N(x_{i},y_{i},z_{i},t_{i},T_{i})$$
(7)
where \(\lambda_{i}\) are the Kriging weights, \(N(x_{i},y_{i},z_{i},t_{i},T_{i})\) are the calculated temperature values at nearby points at that time, and n is the number of nearby calculated points considered.
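The weighted-sum form of Eq. (7) can be illustrated with the simpler inverse-distance (InvD) interpolator named above; true Kriging additionally derives the weights from a fitted variogram, which is omitted here. The sample points and temperatures are illustrative:

```python
import numpy as np

def invd_predict(points, values, query, p=2.0, eps=1e-12):
    """Inverse-distance (InvD) weights lambda_i, a simple stand-in for the
    Kriging weights in Eq. (7): T_hat = sum_i lambda_i * T_i."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < eps):                 # query coincides with a data point
        return float(values[np.argmin(d)])
    w = 1.0 / d**p
    lam = w / w.sum()                   # weights sum to one, as in Kriging
    return float(lam @ values)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
T = np.array([900.0, 880.0, 910.0, 890.0])
T_hat = invd_predict(pts, T, np.array([0.5, 0.5]))
```

At the cell centre, all four data points are equidistant, so the estimate is simply their mean; like Kriging, the scheme reproduces the data exactly at the observation points.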

4.2 Accuracy, Reliability, and Challenges

The industrial applications of real-time data models for dynamic processes like casting have their own benefits and challenges. From the start of the model-building exercise, poor data quality and under-representation of the multi-dimensional search space (i.e. the defined variations of the process parameters) can lead to errors and inaccuracies. The balance of data density and spatial distribution/resolution can also affect model predictions, especially for processes like casting with high data gradients (e.g. steep cooling/heating curves). In addition, the selection of an appropriate data solver and its associated data interpolator can greatly affect the performance of the data model for material processes. Accurate fitting of the process initial conditions (e.g. initial melt temperature), along with precise fitting in zones with steep cooling/heating curves, is among the challenging aspects of model building. Proper calibration and data-training schemes using ML routines are further challenges in building an accurate data model. Finally, the validation of real-time models for normal, near-boundary, and extreme cases, along with their associated computational time, is also a demanding task for model builders.
In the work herein, using efficient sampling techniques, snapshot matrices are defined so as to represent the basic process scenarios in a balanced way. The combinations of process parameter variations representing the spatial size of the search space (i.e. the parameter variation limits) have been defined for the basic scenarios. Furthermore, several DOEs representing normal, near-boundary, and extreme process scenarios were utilized for strict validation of the data models against real-world scenarios. The data postprocessing for real-time models can be performed in 1D, 2D, or 3D, where results from DOEs and data models are compared along lines, within plane contours, or within bodies. For this study, only the 1D and 2D contour results were compared and validated using the DOE scenarios. Figure 2a shows the vertical casting machine with the round cast billet, while Fig. 2b shows the rotated 2D temperature contour for a validation scenario in which a snake-shaped path is defined to record temperatures at the computational points. The challenge here is that, since the data produced by the CFD solver are ordered to optimize the solver's matrix sizes, the data order might not be appropriate for the data model. The CFD solver's data order might generate data strings with very high gradients (jumping from the end of the billet to the top of the melt pool), which is infeasible for the data model. Extra postprocessing is therefore required to prepare and transform the data so that the data models see more moderate gradients and fewer sharp changes/discontinuities.
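One way to linearise a 2-D contour while keeping consecutive samples spatially adjacent is a snake-shaped sweep over the structured grid, as described above. The following sketch builds such an index order for a hypothetical nx-by-nz grid (the grid dimensions are illustrative):

```python
import numpy as np

def snake_order(nx, nz):
    """Index order sweeping a structured (nx x nz) grid column by column,
    reversing every other column so consecutive samples stay adjacent
    (the 'snake' path used to linearise 2-D temperature contours)."""
    idx = np.arange(nx * nz).reshape(nz, nx)   # row-major cell ids
    cols = [idx[:, j] if j % 2 == 0 else idx[::-1, j] for j in range(nx)]
    return np.concatenate(cols)

order = snake_order(3, 4)   # small 3 x 4 demo grid
```

Reading the temperature array in this order avoids the large jumps (end of billet to top of melt pool) that a naive solver-native ordering would produce.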
Fig. 2
a Vertical casting machine and b rotated view of snake path followed to process the successive temperature values

4.3 Application for Web-based Casting Platform

Digital shadowing of the casting process allows various scenarios (sets of process parameters) and their consequent impact on quality, e.g. of the microstructure, to be evaluated. At the same time, the risk of causing a bleed-out or freeze-in can be assessed. While CFD solvers need hours to simulate each scenario, a reduced-order model allows fast screening of various process parameter combinations. Furthermore, during production the ROM can suggest optimized parameters to workers to ensure their safety and high product quality. These human-machine interactions are currently being designed and developed in the research project 'opt1mus' and are shown in Fig. 3. End users can request advice for a specific alloy/format combination and a desired microstructure quality via a web interface, which has been implemented using the "PhysicsWorks" framework by "Solidification" (https://physicsworks.io/). This request triggers the simulation and training of a ROM to suggest suitable parameters before casting and to provide real-time guidance during it. The model is subsequently improved using correlations between the actual process conditions and the measured microstructure, serving improved predictions for the next requests. Figure 3 shows a schematic view of the whole framework.
Fig. 3
Use case "casting advisor system": training and application of the ROM. A web interface is used to send user requests and visualise outcomes based on simulations and model predictions

5 Discussion

The development of data-reduced and real-time models for casting processes, with their multi-physical and multi-scale aspects, is challenging, and attention needs to be paid to proper data handling and training. As described in the previous sections, various data solvers, such as eigen-based solvers, along with efficient data interpolators can be employed for model building. However, the performance of these models needs to be strictly scrutinized. For the 2D temperature contour comparison, using the snake ordering scheme, validations have been carried out with different data solver-interpolator combinations. Figure 4 shows the temperature results calculated by CFD and estimated by the data model along the designated path for DOE scenarios one and three (i.e. for the lower- and upper-bound initial melt temperatures).
Fig. 4
Comparison of calculated temperature time histories for CFD versus data model for a DOE1 and b DOE3
The computational times for these DOE scenarios are about 720-1200 s for the CFD simulations (i.e. wall-clock time using eight parallel cores) and about 1.3 s for the real-time data solver (using a single core) to estimate the time-history responses. Figure 5 shows the normalized error graphs for the estimated temperatures compared to the CFD results for these two DOE scenarios. A first look at the calculated errors shows that the maximum normalized errors (about 6 to 8%) correspond to the points with the highest temperature gradients (points near the bottom of the melt pool). As the snake-like estimation path passes through the solidified billet towards the bottom of the melt pool, the temperature variations are high. Hence, the real-time data model, even with the best solver-interpolator combination (e.g. the SVD solver with the Kriging interpolator), is unable to fully capture the high rate of temperature change near the melt pool bottom. For the estimated values at specific points, Fig. 6 shows the comparison of CFD and data model predictions for the melt pool depth and solidified shell thickness.
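A normalized error of the kind plotted in Fig. 5 can be computed as in the following sketch; the temperature samples and the range-based normalisation are illustrative assumptions, since the article does not state its exact normalisation convention:

```python
import numpy as np

# CFD reference vs. ROM estimate along the path (hypothetical samples),
# with the error scaled by the overall temperature range along the path.
T_cfd = np.array([950.0, 930.0, 900.0, 860.0, 820.0])
T_rom = np.array([949.0, 932.0, 905.0, 855.0, 821.0])

err = np.abs(T_rom - T_cfd) / (T_cfd.max() - T_cfd.min())
max_err_pct = 100.0 * err.max()   # worst-case normalized error in percent
```

The largest deviations would then be reported as a percentage of the path's temperature range, matching the 6-8% worst-case figures quoted for the high-gradient region.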
Fig. 5
Calculated normalized errors for a DOE1 and b DOE3
Fig. 6
a Temperature contour results for CFD (top section) and data model (reflected bottom section); b comparison of specific query results within the melt pool

6 Conclusions

The use of dimensionality-reduction techniques and their associated real-time models for material processes like casting is becoming more popular and valuable for industrial production. These tools can be employed to optimize manufacturing processes, improve product quality, increase predictive-corrective control capability, and reduce cost and time by avoiding long trial-and-error loops. In the first part of this contribution, a brief description of the CFD technique for casting simulation was presented, covering the basic technical aspects of the numerical domain and solver technologies. In the following sections, real-time solver technologies, along with data interpolation and training, were elaborated, and their applications to dynamic material processes like casting were examined. Finally, in the last part of the contribution, a vertical casting case study was carried out to examine the validity and accuracy of data models for thermal predictions.
At first glance, for the thermal predictions in 2D (e.g. temperature contours) and at 1D specific query points, some challenges in data sorting and prediction can be observed. The thermal gradient and rapid temperature variations pose further challenges to the data solvers, and further data training is required to learn the changing data trends. However, with the right combinations of data solvers and interpolators and further ML exercises, it is possible to achieve better predictions even for high-gradient data. With the introduction of more advanced dynamic databases with data-updating features, further opportunities arise to deploy more accurate data models. This would lead to the development of more versatile and efficient dynamic data tools for more general casting applications, which will be the subject of future research.

Acknowledgements

The authors would like to thank the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology and the Austrian Institute of Technology (AIT) for the technical and financial support of this research work. The financial support by the Austrian Research Promotion Agency (FFG) for the opt1mus project (FFG-Nr. 899054) is also greatly appreciated.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third-party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation, you will need to obtain permission directly from the copyright holder. For further details of the licence, see http://creativecommons.org/licenses/by/4.0/deed.de.

References
6. Scheil, E.: Bemerkungen zur Schichtkristallbildung. Z. Metallkd. 34, 70 (1942)
Metadata
Title
On Performance of Data Models and Machine Learning Routines for Simulations of Casting Processes
Authors
Amir M. Horr
Rodrigo Gómez Vázquez
David Blacher
Publication date
08.01.2025
Publisher
Springer Vienna
Published in
BHM Berg- und Hüttenmännische Monatshefte / Issue 1/2025
Print ISSN: 0005-8912
Electronic ISSN: 1613-7531
DOI
https://doi.org/10.1007/s00501-024-01537-6
