19-03-2020 | Quality Assurance | Issue 3/2020 Open Access

Production Engineering 3/2020

Statistical approaches for semi-supervised anomaly detection in machining

Journal:
Production Engineering > Issue 3/2020
Authors:
B. Denkena, M.-A. Dittrich, H. Noske, M. Witt
Important notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Machine tools are a central element within the value chain of industrial production processes. Process monitoring systems are used to increase the availability of machine tools and to reduce scrap as well as subsequent re-work [ 1]. Therefore, these systems must be highly sensitive to process errors and at the same time robust to false alarms. Typical process errors include tool breakages, tool wear and process instabilities. To generate a suitable reaction, critical process states have to be detected reliably and in time [ 2, 3]. Adaptivity with regard to changing machining processes, tools and machines is crucial for the practical use of process monitoring systems [ 4].
Monitoring systems fulfil the following subtasks: data acquisition, signal processing, feature extraction and selection as well as process evaluation [ 5, 6]. Data acquisition forms the basis for generating knowledge about the current process state. Information about the machining process is captured by internal and external sensors. Frequently used external sensors include dynamometers, accelerometers, acoustic emission sensors and current sensors [ 6].
The subsequent evaluation of machining processes can be done by simple monitoring procedures such as fixed boundaries or tolerance bands. Fixed boundaries compare signals or derived signal features with a constant value. Traditional tolerance bands use a lower and an upper boundary for monitoring and have a largely constant signal-to-boundary ratio [ 4].
Machining processes can also be evaluated using supervised learning methods [ 5]. Commonly used methods include artificial neural networks, fuzzy systems and support vector machines [ 6]. In supervised anomaly detection, a labelled dataset containing information of normal and non-normal classes is required. In addition, the labelling process is often performed manually and is, therefore, time-consuming. Within semi-supervised anomaly detection, only data describing the normal state is available [ 7]. Statistical methods also belong to this class and represent the most relevant methods for process monitoring in series production due to their transparency. These approaches create decision boundaries for the evaluation of machining processes. The created boundaries need to be robust to slight process variations and sensitive to process anomalies [ 8].
Various methods have been developed for statistical process monitoring in machining. Lee et al. [ 9] use control charts based on Hotelling’s T 2-statistic and Q-statistic for tool wear monitoring during face milling. To detect worn states of tools, control limits are computed using kernel density estimation (KDE) and a predefined risk level. Kernel principal component analysis is chosen to extract useful features from a multidimensional dataset containing information from several sensors such as motor current, acoustic emission and acceleration data. The proposed approach requires data representing the normal state when the tool is engaged. Yu [ 10] applies Gaussian Mixture Models (GMM) for tool wear monitoring for face milling operations. For model evaluation, the same dataset used by Lee et al. is considered. Features from time and time–frequency domain are extracted and principal component analysis is applied to reduce the dimensionality of the dataset. GMM are employed to model the normal tool state based on the extracted principal components and a tool performance index is derived to evaluate tool degradation. Wang et al. [ 11] proposed a method for tool wear monitoring during milling based on data describing normal process conditions. Control limits are derived using control charts based on T 2- and SPE-statistic. Discrete wavelet transformation is applied to decompose data from force and acceleration sensors into different scales. For dimensionality reduction, multi-scale principal component analysis is used. Grasso et al. [ 12] developed an approach to detect tool breakages for end milling of titanium using spindle and axis drive signals. The process is evaluated based on T 2-statistic and Q-statistic control charts. For dimensionality reduction, a traditional and a moving window principal component analysis is performed.
Most of the developed statistical monitoring systems consider only one type of process or one type of process anomaly. Moreover, many of the approaches presented in the literature focus on flank wear monitoring to achieve a desired surface quality. However, other types of anomalies (e.g. cutting edge breakages) have to be considered due to their impact on the surface quality and the workpiece geometry. Therefore, the objective is to develop a robust monitoring approach for complete machining, which can be used for machining processes with changing cutting conditions and process types. This paper proposes a non-parametric approach for the computation of decision boundaries based on envelopes for the detection of process anomalies. Additionally, the non-parametric approach is compared with a parametric approach based on the monitoring quality.
The parametric approach is presented in Sect. 2 and assumes that sensor signals and the corresponding envelopes are normally distributed. The non-parametric approach presented in Sect. 3 does not require envelopes to be normally distributed. Instead, the distribution of the envelopes is estimated from a random sample using kernel density estimation (KDE). Section 4 describes the experimental setup and the machining processes considered as well as the generated process anomalies. In Sect. 5, performance indicators to evaluate the monitoring quality are described and the results are discussed for the proposed monitoring approaches.

2 Parametric approach

In the following, it is assumed that the measured signal values are recorded equidistantly and are available in the form of discrete signal sequences \( x_{k} \left( i \right) \in {\mathbb{R}} \) for \( k \in \left\{ {1, \ldots , n_{p} } \right\} \) processes and \( i \in \left\{ {1, \ldots , I} \right\} \) time steps.
Decision boundaries can be interpreted as confidence limits that will be adhered to by the signal with a specified probability. During process monitoring, these boundaries are used to detect process anomalies. Parametric methods assume a known distribution of the input data. Therefore, the probability of a certain deviation of the sensor signal from the mean value can be calculated. This enables the evaluation of signal deviations based on the underlying distribution [ 8].
The parametric approach developed by Brinkhaus [ 8] assumes normally distributed sensor signals. If a sensor value x( i) follows a normal distribution, it can be described by the mean value \( \bar{x}(i) \) and the standard deviation s( i). The estimated mean value can be calculated according to Eq. ( 1):
$$ \bar{x}\left( i \right) = \frac{1}{n_{p}} \sum\limits_{k = 1}^{n_{p}} x_{k} \left( i \right). $$
(1)
If this value is determined empirically, it converges to the mean value of the population as the number of processes grows. The estimated standard deviation s( i) is calculated according to Eq. ( 2):
$$ s\left( i \right) = \sqrt {\frac{1}{n_{p} - 1}\sum\limits_{k = 1}^{n_{p}} \left( {\bar{x}\left( i \right) - x_{k} \left( i \right)} \right)^{2}} . $$
(2)
The uncertainties in the calculation of the estimated parameters decrease with the number \( n_{p} \) of correctly running processes. To compensate for these uncertainties, safety margins are taken into account. To prevent sporadic signal fluctuations from leading to an unnecessary number of false alarms, an upper and a lower envelope \( \left[ {h\_up_{k} \left( i \right), h\_lo_{k} \left( i \right)} \right] \) around \( x_{k} \left( i \right) \) are formed in a first step according to Eqs. ( 3) and ( 4):
$$ h\_up_{k} \left( i \right) = Max \left[ {x_{k} \left( {i - \theta } \right), \ldots , x_{k} \left( {i + \theta } \right)} \right], $$
(3)
$$ h\_lo_{k} \left( i \right) = Min \left[ {x_{k} \left( {i - \theta } \right), \ldots , x_{k} \left( {i + \theta } \right)} \right]. $$
(4)
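For illustration, the sliding maximum/minimum of Eqs. (3) and (4) can be sketched as follows (a minimal sketch assuming NumPy arrays; the handling of the sequence borders, where the window is truncated, is an assumption not specified above):

```python
import numpy as np

def envelopes(x, theta):
    """Upper and lower envelope of one signal sequence x_k(i), Eqs. (3)-(4):
    sliding maximum/minimum over a window of +/- theta time steps.
    At the sequence borders the window is truncated (an assumption)."""
    I = len(x)
    h_up = np.array([x[max(0, i - theta):min(I, i + theta + 1)].max() for i in range(I)])
    h_lo = np.array([x[max(0, i - theta):min(I, i + theta + 1)].min() for i in range(I)])
    return h_up, h_lo
```

For example, a single peak of width one time step widens to a plateau of width \( 2\theta + 1 \) in the upper envelope, which is exactly what makes the envelopes robust to small time shifts.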
It is also assumed that the envelopes follow a normal distribution. The parameter \( \theta \) must be set before the teach-in procedure and corresponds to the maximum expected shift of the time-series. After calculating the envelopes, the decision boundaries are calculated according to Eqs. ( 5) and ( 6):
$$ GP\_up\left( i \right) = \overline{h\_up} \left( i \right) + C \cdot s\left[ {h\_up\left( i \right)} \right], $$
(5)
$$ GP\_lo\left( i \right) = \overline{h\_lo} \left( i \right) - C \cdot s\left[ {h\_lo\left( i \right)} \right]. $$
(6)
The distributions of the envelopes \( h\_up_{k} \left( i \right) \) and \( h\_lo_{k} \left( i \right) \) are estimated by the mean values \( \overline{h\_up} \left( i \right) \), \( \overline{h\_lo} \left( i \right) \) and the standard deviations \( s\left[ {h\_up\left( i \right)} \right] \), \( s\left[ {h\_lo\left( i \right)} \right] \) over n p processes. Larger values of the standard deviation lead to larger distances between the decision boundaries. The safety factor C affects the distance between the decision boundaries and the estimated mean values of the envelopes.
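A minimal sketch of Eqs. (5) and (6), assuming the \( n_{p} \) upper and lower envelopes are stacked row-wise in NumPy arrays of shape (n_p, I) (the function name is hypothetical):

```python
import numpy as np

def parametric_boundaries(H_up, H_lo, C=6.0):
    """Decision boundaries GP_up(i), GP_lo(i) per Eqs. (5)-(6).
    H_up, H_lo: arrays of shape (n_p, I) with the envelopes of n_p processes.
    C: safety factor widening the boundaries around the envelope means."""
    GP_up = H_up.mean(axis=0) + C * H_up.std(axis=0, ddof=1)  # Eq. (5)
    GP_lo = H_lo.mean(axis=0) - C * H_lo.std(axis=0, ddof=1)  # Eq. (6)
    return GP_up, GP_lo
```

Note the use of the sample standard deviation (ddof=1), matching the \( n_{p} - 1 \) divisor of Eq. (2).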
The memory of the monitoring system can be controlled by introducing a memory parameter a into Eqs. ( 5) and ( 6). The result of this adjustment is represented by Eqs. ( 7) and ( 8):
$$ \bar{h}_{k + 1} \left( i \right) = \left( {1 - a} \right) \cdot \bar{h}_{k} \left( i \right) + a \cdot h_{k + 1} \left( i \right), $$
(7)
$$ s_{k + 1} \left( i \right) = \sqrt {\left( {1 - a} \right) s_{k} \left( i \right)^{2} + a\left[ {h_{k + 1} \left( i \right) - \bar{h}_{k + 1} \left( i \right)} \right]^{2} } . $$
(8)
The memory parameter a affects the influence of past measured values. If a is increased, the weight of past measured values for the calculation of the mean and the standard deviation decreases and vice versa.
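Eqs. (7) and (8) amount to an exponentially weighted update of the envelope statistics; a small sketch (hypothetical function name, assuming NumPy arrays holding the per-time-step mean and standard deviation):

```python
import numpy as np

def update_envelope_stats(h_mean, h_std, h_new, a=0.1):
    """Recursive update of envelope mean and standard deviation, Eqs. (7)-(8).
    a: memory parameter; a larger a weights the newest process more strongly
    and makes old processes fade out faster."""
    h_mean_new = (1.0 - a) * h_mean + a * h_new               # Eq. (7)
    h_std_new = np.sqrt((1.0 - a) * h_std**2
                        + a * (h_new - h_mean_new)**2)        # Eq. (8)
    return h_mean_new, h_std_new
```

With a = 1 the statistics would track only the latest process; with a = 0 they would never change.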
In practice, systematic changes lead to slight deviations from a normal distribution. Systematic changes include minor clamping deviations, different grinding patterns of the cutting edge, NC-program time delays and material deviations. For this reason, mixed distributions of the sensor data and the corresponding envelopes can be observed [ 8]. A non-parametric approach is presented in the next section which takes into account deviations from a normal distribution.

3 Non-parametric approach

Parametric methods follow the assumption that the underlying distribution of a random variable is known [ 13]. The monitoring accuracy can be improved by estimating the underlying distribution of the envelopes. KDE (also called Parzen-Rosenblatt window) can be used to estimate the probability distribution of a random variable [ 14]. For this reason, KDE is used for estimating the probability density function \( \hat{f}\left( {h,i} \right) \) of the envelopes \( \left[ {h_{1} \left( i \right),h_{2} \left( i \right), \ldots , h_{{n_{p} }} \left( i \right)} \right] \) according to Eqs. ( 9) and ( 10)
$$ \hat{f}\_up\left( {h,i} \right) = \frac{1}{{n_{p} b\left( i \right)}} \mathop \sum \limits_{k = 1}^{{n_{p} }} K\left( {\frac{{h - h\_up_{k} \left( i \right)}}{b\left( i \right)}} \right), $$
(9)
$$ \hat{f}\_lo\left( {h,i} \right) = \frac{1}{{n_{p} b\left( i \right)}} \mathop \sum \limits_{k = 1}^{{n_{p} }} K\left( {\frac{{h - h\_lo_{k} \left( i \right)}}{b\left( i \right)}} \right), $$
(10)
where \( b\left( i \right) \) > 0 represents the bandwidth and \( K\left( h \right) \) the kernel function. A commonly used kernel function is the Gaussian kernel
$$ K\left( h \right) = \frac{1}{{\sqrt {\left( {2\pi } \right)} }} e^{{ - \frac{{h^{2} }}{2}}} . $$
(11)
The bandwidth \( b\left( i \right) \) can be determined using the Silverman rule [ 15]
$$ b\left( i \right) = \left( {\frac{{4s\left[ {h\left( i \right)} \right]^{5} }}{{3n_{p} }}} \right)^{0.2} , $$
(12)
where \( s\left[ {h\left( i \right)} \right] \) is the estimated standard deviation of the envelopes for \( n_{p} \) processes. The Gaussian kernel was chosen because mixed distributions close to a normal distribution are assumed. Other kernel functions assign zero probability density beyond the maximum and minimum values of a random sample, which leads to inaccurate boundaries in practice.
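A sketch of Eqs. (9)–(12) for one time step i (assuming a NumPy sample of envelope values; the evaluation grid and function names are implementation choices):

```python
import numpy as np

def silverman_bandwidth(h):
    """Bandwidth b per Silverman's rule, Eq. (12)."""
    s = h.std(ddof=1)
    return (4.0 * s**5 / (3.0 * len(h)))**0.2

def kde(h_grid, h_sample, b):
    """Gaussian-kernel density estimate f_hat(h), Eqs. (9)-(11),
    evaluated at the points in h_grid."""
    u = (h_grid[:, None] - h_sample[None, :]) / b
    K = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)  # Gaussian kernel, Eq. (11)
    return K.sum(axis=1) / (len(h_sample) * b)
```

The same routine serves for both the upper and the lower envelopes; only the sample \( h\_up_{k}(i) \) or \( h\_lo_{k}(i) \) differs.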
To compute the decision boundaries, a risk factor \( \beta \) is defined, which determines the sensitivity of the monitoring system. The decision boundaries \( GNP\_up\left( i \right) \) and \( GNP\_lo\left( i \right) \) are calculated using (13) and (14) such that the probability of an envelope value h being below \( GNP\_lo\left( i \right) \) or above \( GNP\_up\left( i \right) \) is \( \beta \):
$$ \mathop \int \limits_{GNP\_up\left( i \right)}^{\infty } \hat{f}\_up\left( {h,i} \right)dh = \beta ,\quad \forall \;i, $$
(13)
$$ \mathop \int \limits_{ - \infty }^{GNP\_lo\left( i \right)} \hat{f}\_lo\left( {h,i} \right)dh = \beta ,\quad \forall \;i. $$
(14)
Thereby, \( 1 - \beta \) corresponds to the probability that a given envelope value h lies below \( GNP\_up\left( i \right) \) or, respectively, above \( GNP\_lo\left( i \right) \).
During monitoring, it is examined whether a sensor value x( i) lies between the decision boundaries \( GNP\_up\left( i \right) \) and \( GNP\_lo\left( i \right) \). If this is not the case, an alarm is issued. The value \( \beta \) can be chosen based on the expected process scatter. In order to analyse the effects of \( \beta \) on the monitoring quality for several machining processes, different values of \( \beta \) are chosen in Sect. 5.
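Since the kernel is Gaussian, the integrals in Eqs. (13) and (14) are tails of a mixture of normal distributions, so the boundaries can be found numerically from the mixture CDF. A sketch for the upper boundary at one time step (using scipy.stats.norm; the grid limits and function name are implementation choices):

```python
import numpy as np
from scipy.stats import norm

def gnp_up(h_up_sample, beta, b):
    """Upper decision boundary GNP_up for one time step, Eq. (13):
    the value whose upper-tail probability under the Gaussian-kernel
    mixture equals beta."""
    grid = np.linspace(h_up_sample.min() - 6.0 * b,
                       h_up_sample.max() + 6.0 * b, 4000)
    # CDF of the kernel mixture = mean of the component normal CDFs
    cdf = norm.cdf((grid[:, None] - h_up_sample[None, :]) / b).mean(axis=1)
    # invert the monotonic CDF at 1 - beta by interpolation
    return float(np.interp(1.0 - beta, cdf, grid))
```

The lower boundary \( GNP\_lo\left( i \right) \) of Eq. (14) follows analogously by interpolating the lower-envelope CDF at \( \beta \).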

4 Experimental setup and process analysis

An auxiliary variable is acquired for indirect monitoring in order to evaluate the machining processes. It is assumed that process anomalies affect the course of the monitored auxiliary variable. In this paper, the torque is used for indirect monitoring. The test setup is shown in Fig.  1.
The machining experiments for the acquisition of process data are carried out on a turning center (Gildemeister CTX 420 linear). During machining, the spindle torque M Sp for turning processes and the torque of the turret M Tu for drilling, circular milling and pocket milling are measured with a frequency of 83 Hz. The data is acquired from a Siemens 840D Powerline via an OPC-server. The workpiece consists of the material S335J2 and has a diameter of 60 mm. The reference workpiece is shown in Fig.  2.
After data acquisition, the entire manufacturing process is divided into four sub-processes in a segmentation step: pocket milling, circular milling, drilling and turning. The process data is adjusted so that the number of time steps within a sub-process is equal. A total of 45 correctly running processes are generated for turning and 50 processes each for drilling, pocket milling and circular milling. Chip clamps, which lead to unexpected signal peaks, are observed during drilling.
In addition, various process anomalies are generated to evaluate the monitoring quality. A detailed description about the anomalies can be found in Table  1.
Table 1
Description of the generated process anomalies

Process type       Process anomaly     Process notation
Turning            Worn tool           ZT01-ZT07
                   Metal pin           ZT08
                   Missing material    ZT09-ZT10
Pocket milling     Metal pin           ZT07-ZT10
                   Removed edge        ZT01-ZT02
                   Wrong tool length   ZT03-ZT05
                   Missing material    ZT11-ZT12
Circular milling   Metal pin           ZT03-ZT06
                   Removed edge        ZT01-ZT02
                   Missing material    ZT07-ZT08
Drilling           Metal pin           ZT06-ZT10
                   Removed edge        ZT01-ZT02
                   Missing material    ZT11-ZT12
                   Wrong tool length   ZT03-ZT05
Solid metal pins are inserted into the workpiece to generate shank and edge breakages. Another anomaly type involves an incorrectly calibrated tool length. Additionally, cutting edges are broken out manually. To generate further anomalies, defects are inserted into the workpiece and experiments with a worn tool are carried out. In both cases, 12 anomalies were generated for pocket milling and drilling. A total of 10 anomalies were produced for turning and 8 anomalies for circular milling. Figure  3 visualizes the influence of four generated process anomalies on the spindle/turret torque.
It can be observed that the signal levels shift significantly over time. To reduce these influences, the sensor data is scaled by a normalization value. For this purpose, an observation window \( \left[ {WB,WE} \right] \) is defined which contains sensor values in the idle phase of a process k. The normalization value \( norm_{k} \) is given by Eq. ( 15) as the mean value of all sensor values in the observation window:
$$ norm_{k} = \frac{{\mathop \sum \nolimits_{i = WB}^{WE} x_{k} \left( i \right)}}{WE - WB}. $$
(15)
To obtain the normalized sensor values \( x_{k}^{norm} \left( i \right) \), the normalization value \( norm_{k} \) is subtracted from all sensor values \( x_{k}^{old} \left( i \right) \) for \( \forall i \in \left\{ {WE + 1, \ldots ,I} \right\} \) according to Eq. ( 16)
$$ x_{k}^{norm} \left( i \right) = x_{k}^{old} \left( i \right) - norm_{k} . $$
(16)
The actual distribution of the envelopes determines the monitoring quality. Therefore, it is examined whether the envelopes follow a normal distribution. Statistical hypothesis tests are used to check whether the assumption of a normal distribution (null hypothesis) of a sample can be accepted. A well-known statistical hypothesis test is the Shapiro–Wilk test [ 16]. If the computed test statistic W is smaller than a critical value W crit, the sample deviates significantly from a normal distribution. The SciPy library for the Python programming language was used to calculate the test statistic [ 17]. The largest deviations from a normal distribution are determined for circular milling and turning. Figure  4a shows 48 training processes for turning. The computed test statistic W for the upper envelopes is depicted in Fig.  4b. Most of the values of the test statistic W lie below the critical value W crit = 0.929 at a 1% significance level. Therefore, the null hypothesis can be rejected at a confidence level of 99% for most of the test instances. In summary, significant deviations from a normal distribution can arise.
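The normality check can be reproduced with SciPy's implementation of the Shapiro–Wilk test (a sketch on synthetic data; the sample here merely stands in for the upper-envelope values at one time step):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=1.0, scale=0.2, size=48)  # placeholder envelope values

W, p = stats.shapiro(sample)
# Reject the null hypothesis of normality at the 1% significance level
# when p < 0.01 (equivalently, when W falls below W_crit).
reject_normality = p < 0.01
```

In the article this test is evaluated per time step i, yielding one W value per column of the envelope matrix.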

5 Assessment of the monitoring quality

In this section, the proposed statistical methods for process monitoring are evaluated using predefined performance indicators. The detection rate (DR) corresponds to the ratio between the number of detected faulty processes and the total number of faulty processes. A faulty process contains an anomaly such as a shank or cutting edge breakage, a worn tool, missing material or an incorrectly calibrated tool. The false alarm rate (FR) indicates the ratio between the number of incorrectly classified processes (without an anomaly) and the total number of normally running processes. The monitoring system classifies a process as faulty if at least one data point is declared as an anomaly.
The performance indicators are calculated as follows:
$$ DR = \frac{\text{detected faulty processes}}{\text{number of faulty processes}}, $$
(17)
$$ FR = \frac{\text{misclassified normal processes}}{\text{number of normal processes}}. $$
(18)
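The two indicators can be sketched directly from per-process alarm flags (hypothetical helper names; a process counts as flagged if at least one of its data points raised an alarm):

```python
def detection_rate(alarm_flags_faulty):
    """DR in percent, Eq. (17): share of faulty processes with an alarm."""
    return 100.0 * sum(alarm_flags_faulty) / len(alarm_flags_faulty)

def false_alarm_rate(alarm_flags_normal):
    """FR in percent, Eq. (18): share of normal processes wrongly flagged."""
    return 100.0 * sum(alarm_flags_normal) / len(alarm_flags_normal)
```

For example, three detected anomalies out of four faulty processes give a DR of 75.0%.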
After data scaling, the proposed methods are evaluated according to the described performance indicators. In the first evaluation step, the FR is determined iteratively. Both methods are first trained using a training set of ten normal running processes. For all other normal running processes, the system checks iteratively whether false alarms are issued. After each iteration, the tested process is included in the training set. The sequence of iteratively considered processes is the same for both monitoring approaches. After determining the FR, the DR is measured using the faulty processes. A value of 250 ms is chosen for the expected shift \( \theta \). The memory factor a is set to 0.4 for the first 10 processes and to 0.1 for all further processes. A safety factor of C = 6 is chosen in order to avoid false alarms. These values are used by our industrial partners for robust monitoring. The final decision boundaries for both methods are shown in Fig.  5. The results of the parametric approach are presented in Table  2.
Table 2
Performance indicators using the parametric approach

Process   Turning   Drilling   Circular milling   Pocket milling
DR (%)    90.0      100.0      75.0               91.7
FR (%)    5.7       5.0        2.5                0.0
No false alarms are issued for pocket milling. For the drilling process, a FR of 5.0% is achieved. For turning and circular milling, FRs of 5.7% and 2.5% are measured, respectively. For those turning processes leading to false alarms, a higher torque level is recognized. However, there were no optically visible defects on the surface, so these processes are characterized as running correctly.
It can be seen that all process anomalies are detected for drilling. One anomalous process is not recognized for turning and pocket milling, respectively, leading to a DR of 90.0% and 91.7%. In pocket milling, a manually broken cutting edge (ZT01) is not detected by the monitoring system. Additionally, a worn tool (ZT07) could not be detected during turning. For circular milling, a DR of 75.0% is achieved: a manually broken cutting edge (ZT02) and a cutting edge broken by a metal pin collision (ZT06) are not detected.
For the evaluation of the non-parametric approach, different values \( \beta = 10^{ - 2} \), 10 −3 and 10 −7 are examined and their effect on the created decision boundaries is investigated. Table  3 summarizes the evaluation results for the non-parametric approach. It can be observed that the FR decreases for smaller values of \( \beta \) for all processes.
Table 3
Performance indicators using the non-parametric approach

Process            DR (%), β = 10^-2 / 10^-3 / 10^-7   FR (%), β = 10^-2 / 10^-3 / 10^-7
Turning            100.0 / 100.0 / 100.0               8.6 / 8.6 / 2.9
Drilling           100.0 / 100.0 / 100.0               62.5 / 32.5 / 15.0
Circular milling   100.0 / 100.0 / 75.0                25.0 / 5.0 / 2.5
Pocket milling     100.0 / 91.7 / 75.0                 10.0 / 5.0 / 0.0
In pocket milling, a DR of 100.0% is achieved using \( \beta = 10^{ - 2} \). Smaller values \( \beta = 10^{ - 3} \) and 10 −7 result in a decreased DR (91.7% and 75.0%, respectively). This is because manually broken cutting edges (ZT01/ZT02) and an incorrectly calibrated tool length (ZT03) lead only to slight signal changes.
A DR of 100.0% is measured for circular milling using \( \beta = 10^{ - 2} \) and 10 −3. Using \( \beta = 10^{ - 7} \), a DR of 75.0% is achieved. Due to a high scatter of the envelopes, a higher FR is observed compared to the pocket milling process: larger values of \( \beta \) result in an increasing number of false alarms (25.0%, 5.0% and 2.5% for \( \beta = 10^{ - 2} \), 10 −3 and 10 −7, respectively).
For drilling, a DR of 100.0% is achieved independent of the chosen \( \beta \). The observed chip clamps result in an increased number of false alarms. Smaller values of \( \beta \) lead to a decreased FR (62.5%, 32.5% and 15.0%). To compensate for these disturbances, the monitoring system requires significantly more processes for training or information from different sensors.
Compared to the parametric approach, the non-parametric approach reaches a higher DR (100.0%) for the turning process. However, the FR is slightly higher (8.6%) for \( \beta = 10^{ - 2} \), 10 −3. Using \( \beta = 10^{ - 7} \), the FR is reduced to 2.9%.
Two main reasons explain the lower DR of the parametric approach. First, the standard deviation of the envelopes is very low in time periods without material removal; since a high safety factor C is chosen globally, wide decision boundaries are computed in time periods in which the tool is engaged. Due to the high process scatter, this is observed especially for the circular milling process. Second, the lower DR for the turning process can be explained by deviations from a normal distribution of the envelopes, leading to wider boundaries.

6 Conclusion

Statistical approaches are suitable for process monitoring in machining. In this paper, a new non-parametric approach has been presented for the computation of decision boundaries based on envelopes. The proposed method is compared with a parametric approach regarding the monitoring quality for several machining processes. It is shown that the presented method can be used for complete machining of components.
For evaluation, two performance indicators (detection rate, false alarm rate) were introduced. The parametric approach is characterized by a low false alarm rate over all processes. This is due to the fact that a high safety factor is chosen in practice. The distribution of the envelopes can deviate from a normal distribution, leading to wider distances between the boundaries. Since only two parameters (mean, standard deviation) have to be estimated, the parametric approach is robust using a small number of available processes for training. However, several anomalies in turning, pocket milling and circular milling were not detected.
The non-parametric approach computes decision boundaries based on a given risk factor \( \beta \). Using \( \beta = 10^{ - 2} \), the non-parametric approach is able to achieve higher detection rates for turning, pocket milling and circular milling. A larger number of false alarms were generated for drilling due to chip clamps. These false alarms can be significantly reduced by small values for the chosen risk factor. For circular and pocket milling, the detection rate is reduced for a small risk factor \( \beta = 10^{ - 7} \).
Future work may focus on the transferability to other types of processes and on the use of signal data from sensory machine components to increase the reliability of the monitoring system.

Acknowledgements

Open Access funding provided by Projekt DEAL. Funded by the Lower Saxony Ministry of Science and Culture under Grant Number ZN3489 within the Lower Saxony “Vorab” of the Volkswagen Foundation and supported by the Center for Digital Innovations (ZDIN). We also thank the members of the Production Innovations Network (PIN) for their support.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
