1 Introduction

The issue of detecting characteristic face areas in thermovision concerns the analysis of thermal images of the head in screening systems which make a rapid initial selection of people with fever. A thermal camera indicates that the currently observed individual has an elevated temperature, which may mean that the person is ill. Such systems may be used at airports, ports, train stations, border crossings and schools in order to detect people with fever and limit the spread of diseases [25]. The first attempts to apply the method in practice took place at airports in China and Singapore during the SARS epidemic in the years 2002/2003. A similar solution was also introduced in studies conducted in hospitals in the United States, where thermovision was used to confirm whether a patient had a fever [26]. Thermal imaging is now more widely used in various branches of medicine, both in disease diagnosis and in the treatment process [6, 27].

2 Related works

In the literature, there are studies concerning the use of head thermograms in medical applications, for example in localizing pathologies within the head [19]. In these applications, localization of the face and its characteristic points is always the first step of a complex process, and the effectiveness of the whole system depends largely on its results. Face segmentation is also a crucial element of biometric systems [8, 24].

Therefore, several studies have addressed the issue of face segmentation using algorithms for image analysis and processing. Their authors, however, consistently point out that face segmentation is more difficult in thermograms than in visible-light images [11, 21].

In paper [22], the authors present a method for locating the eyes and nostrils in a thermogram. The algorithm uses the Haar wavelet to detect pixel clusters around the eyes and nostrils, and SVM or GentleBoost classifiers (better results were obtained for the latter) to classify points. In this analysis, the area of the face is initially divided into smaller “sub-windows” of a pre-specified size: 16 × 16 pixels for the detector and 32 × 32 pixels for the classifier. Next, the windows are analyzed using wavelets of appropriately selected sizes. The final stage described by the authors is the classification of the results using the GentleBoost classifier. The nostril localization process is similar; however, the authors draw attention to the difficulties in detecting these points. To determine the position of the nostrils, they apply the Haar wavelet and the GentleBoost classifier again, and the whole process is the same as for eye detection; only a different training set is used for the nostrils. In the presented solution, the areas of interest are marked only in the form of rectangles.

In another paper [35], the authors assume that, in the case of thermograms, the most important information in the image lies where the signal is the strongest, i.e. in places with brightness changes. To detect these points, a special detector based on the intensity of image pixels was used. On the basis of its results, groups of points with the greatest changes of intensity are designated. In the next step, these points are clustered (using the k-means method) and their assignment to particular characteristic regions is determined. On the basis of their coordinates, the authors marked rectangular areas of interest including the eyes and nostrils.

In summary, a significant challenge in the described systems is fully automatic image analysis (performed without operator intervention) that determines the characteristic points of the face together with precise identification of the corresponding areas of interest. Therefore, the primary objectives of this study were to present the major features of the proposed method, evaluate the efficiency of the proposed algorithm (using the method described in [22]) and compare the results with [22]. Another element was to consider the use of this method for fever detection (whether the tested subject has a fever) by measuring temperature in selected areas. The possibility of automatic segmentation of the areas of the face (defined both by the characteristic points and their corresponding areas) makes it possible to consider using this method in screening systems, which could meaningfully improve the process of selecting people with a higher temperature in the head area.

The first version of the proposed method and its preliminary results were presented by the authors in [23]. The authors signalled the problems that may occur during thermogram segmentation and described attempts to eliminate them. In this paper, some of the algorithm blocks have been developed and modified so as to achieve the greatest efficiency and versatility. In order to compare the results with reference solutions, the method for evaluating the effectiveness of facial characteristic point localization [22] was implemented and used to assess the effectiveness on the test set of thermograms. Moreover, the use of the proposed solution as a comprehensive system for detecting people with an elevated temperature in the head area was considered. For this reason, a large part of the Methodology section is similar to [23].

3 Material–collection of thermograms

The research material consisted of a total of 125 thermograms of patients, acquired in the Department of Pediatrics and Child and Adolescent Neurology in Katowice, Poland. The patients’ ages ranged between 15 and about 40 years and their weight from 15 to 98 kg. The number of patients was 98: in the case of 22 people, 2–4 images were taken per person, whereas in the other cases one thermogram was taken per individual. The images were acquired interchangeably with several thermal cameras: AGEMA 590 PAL (sensitivity of 0.1 °C), ThermaCam S65 (sensitivity of 0.08 °C), A310 (sensitivity of 0.05 °C) and T335 (sensitivity of 0.05 °C), each with a detector resolution of 320 × 240 pixels. The principles related to taking thermal images for medical thermography were maintained [29], i.e.:

  • adequate preparation of the test room (temperature of 21 °C, humidity of 45–55 %, no radiation sources or air flow),

  • measurements at a distance of about 1 m from the tested patient,

  • proper preparation of the patient for the test: taking off any headbands, washing off any makeup, etc.

The method of taking images is presented in Fig. 1. The optical axis of the camera was positioned at eye level and the camera was placed at a distance of 1 m from the tested patient.

Fig. 1 Image acquisition method and adopted geometric assumptions of the algorithm. The patient was about 1 m away from the thermal camera lens, in a room temperature of 21 °C with a relative humidity of 40 % to 50 %. The automatic analysis concerned selected areas of the face, marked in red on the right-hand side of the figure, i.e. the areas of the forehead, eyebrows, eyes and nose. The sizes of the face areas depend on the automatically designated head height yG. THV – thermovision camera

Figure 1 illustrates the adopted assumptions concerning the geometry of the face and the localization of the areas subject to processing. The red hatched areas of the image (Fig. 1) are determined automatically based on the designated size of the head yG and on the analysis with a detector hereinafter referred to as TPattern (described in detail later in this paper).

The appropriate values rSK, x0, y0 were determined according to known anthropometric relationships (the golden ratio) in conjunction with the automatically designated head height yG. Then, localization of the individual parts of the face was carried out within the detected head area.

In order to verify the sensitivity of the proposed algorithm to affine transformations (translation and rotation), additional tests were performed, increasing the analyzed group of images to more than 7,000 by introducing artificial rotation and translation of the figure in the image. An artificial color palette, often used in the presentation of thermograms, was applied to the thermal images shown below; the brightness of each pixel of the image depends on the temperature of the corresponding point of the tested object. Intuitively, bright yellow pixels indicate a high temperature and dark blue ones a low temperature.
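As an illustration, the following minimal sketch shows how such an augmented test set could be generated, assuming the thermograms are stored as floating-point temperature maps. The rotation and translation ranges in the comment are placeholders; the paper does not specify them beyond the final set size.

```python
import cv2
import numpy as np

def transform_thermogram(temp_map, angle_deg, dx, dy, ambient=21.0):
    """Rotate and translate a thermogram stored as a float temperature map.

    Pixels uncovered by the transform are filled with a constant ambient
    temperature so they stay below the later head-segmentation threshold.
    """
    h, w = temp_map.shape
    # Rotation about the image center, combined with a translation.
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    m[0, 2] += dx
    m[1, 2] += dy
    return cv2.warpAffine(temp_map.astype(np.float32), m, (w, h),
                          flags=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_CONSTANT,
                          borderValue=float(ambient))

# Expanding the base set: every combination of a source image, a rotation
# angle and a shift yields one synthetic test case, e.g. (placeholder ranges):
# variants = [transform_thermogram(img, a, dx, dy)
#             for a in range(-30, 31, 10)
#             for dx in (-20, 0, 20) for dy in (-10, 0, 10)]
```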

Figure 2 presents the selected areas of the face in which the temperature measurements and the analysis presented later in this paper were performed.

Fig. 2 Selected face areas in which temperature was measured. In figure (a), the characteristic points of the face are marked. Figure (b), on the other hand, shows the areas which are determined fully automatically and are described in the following sections of the paper (each of the areas is separated in the subsequent stages of the algorithm operation). Properties such as the minimum, maximum and mean temperature in the area or the perimeter length are calculated for the selected regions

Figure 2 presents the characteristic points and face areas marked manually; the same elements are localized automatically by the algorithm, as described in the subsequent sections of the paper. Based on the selected areas, properties such as the minimum, maximum and mean temperature and the perimeter length are calculated.

A block diagram with examples of the analyzed thermogram is presented in Fig. 3. The main steps are illustrated in the form of images, and the individual blocks are described below.

Fig. 3 Block diagram of the presented method

4 Methodology

4.1 Initial image processing

Initial image processing is applied to determine the approximate position of the head, its potential center, shape and height. In previously published studies on similar subjects, e.g. [19], initial image standardization and elimination of background interference were carried out based on a binary image for the constant binarization threshold TU = 28.3 °C. When this value was applied directly in the studies presented here, in individual cases there were problems with determining a proper outline of the head, its size and a temperature threshold that avoids the influence of hair. Attempts to apply Otsu’s method used in the literature [19] were also not fully satisfactory. Consequently, the process of temperature selection was automated, taking into account the properties of the designated areas of the image after the thresholding operation.

Automatic selection of the temperature and binarization threshold rests on the assumption that the analyzed area of the patient’s head should be the largest possible area containing no holes and covering no more than 60 % of the entire image surface. In the process of determining the binarization threshold, the algorithm analyzes the specified temperature range with the step ∆t = 0.25 °C. For each i-th binary image LB(i) designated in this way, the coefficient w(i) is determined:

$$ {w}_{(i)}=\frac{P_0-{P}_{M(i)}}{\left|{E}_{(i)}\right|\cdot {I}_{(i)}} $$
(1)

where:

  • P0 – surface of the tested image sized M lines × N columns,

  • PM(i) – surface of the biggest area created after thresholding at the temperature t(i),

  • E(i) – Euler number of the i-th image created after thresholding,

  • I(i) – number of areas in the analyzed image after the thresholding operation.

By calculating the values w(i), the algorithm determines the area of the patient’s head as the area for which w(i) reaches its minimum. The adopted temperature t(i) ranges from the minimum value occurring in the image to TU. It is a fully automatic process; the operator does not set any parameter value (in this case the binarization threshold). The position of the head determined in this way (the lowest value of w(i)) represents the final stage of initial image processing. The ideal situation is the case when only one area without holes is formed after binarization and the value of I(i) is equal to 1. The algorithm attempts to obtain such results (or similar ones), where PM(i) corresponds to an area without holes and the values of E(i) and I(i) reach their minima. The cases in which E(i) is equal to zero are eliminated; the algorithm ignores them and continues searching for another value of the index i. In some cases, after this operation, several areas of similar size can occur (but only one is the area of the head). TPattern, which is described below, is used to classify the relevant area. A characteristic distribution of intensity (temperature) occurring in an image consistent with the assumptions adopted when designing TPattern helps in the classification process by determining the correct position of TPattern for the maximum matching.
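A minimal sketch of this threshold search is given below, assuming the thermogram is a floating-point temperature map and using the connected-component labeling of SciPy and the Euler-number routine of scikit-image; the exact implementation in the paper is not specified, so this is only an illustration of Eq. (1).

```python
import numpy as np
from scipy import ndimage
from skimage.measure import euler_number

def select_head_area(temp_map, t_upper=28.3, dt=0.25):
    """Search for the binarization threshold minimizing w(i) from Eq. (1)."""
    p0 = temp_map.size                            # P0: image surface (M x N)
    best_w, best_mask = np.inf, None
    for t in np.arange(temp_map.min(), t_upper, dt):
        binary = temp_map > t
        labels, n_regions = ndimage.label(binary)   # I(i): number of areas
        if n_regions == 0:
            continue
        e = euler_number(binary)                    # E(i): Euler number
        if e == 0:                                  # degenerate case, skipped
            continue
        sizes = ndimage.sum(binary, labels, np.arange(1, n_regions + 1))
        pm = sizes.max()                            # PM(i): biggest area
        w = (p0 - pm) / (abs(e) * n_regions)        # Eq. (1)
        if w < best_w:
            best_w = w
            best_mask = labels == (1 + int(np.argmax(sizes)))
    return best_mask  # candidate head area for the best threshold
```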

The next step is to determine the orientation of the head and various areas of the face.

4.2 Analysis of the head area by TPattern

Based on the data related to the size of the head, the anthropometric relationships of the face and the examples shown in the literature [3, 9, 19, 32], a detector hereinafter referred to as TPattern was prepared (Fig. 4). The TPattern detector (whose first version was proposed in [23]) was expanded by adding the analysis of a number of possible positions of the eye area (represented by extra arms/sections of TPattern), whose position is determined by rSK·DLO, where DLO = {0.15, 0.2, 0.25} (previously, the authors used the value of 0.2 and a single segment representing the eye line). With this modification, the results obtained by means of TPattern have been improved and the method has worked correctly on the entire set of thermograms.

Fig. 4 Schematic representation of the TPattern detector and examples of point OG localization. Figures (b) and (c) show the results of detection with TPattern for variable orientation of the head. In this way, the position of the eyebrows and the position of the nose symmetry axis with respect to the head are marked. The value yG is marked in figure (c) as the segment AB

The basic value of TPattern is its arm rSK (Fig. 4). This value is determined based on the height of the head yG and anthropometric data, and equals rSK = 0.3·yG. The arm rSK enables automatic detection of the areas of the eyebrows, nose and eye sockets. In the construction of this detector, it was assumed that the characteristic areas comply with the following relationships:

  • brightness of the eyebrow area is lower than the area of the upper part of the eye sockets,

  • the brightness distribution along the lines of the eyebrow and eye socket areas is symmetric,

  • the brightness of the nose area is lower than that of the eye socket area.

When the eyebrow line and the nose symmetry axis are precisely determined, it is possible to narrow down the search area in the detection of the eye sockets and nose. Figure 4b and c show correct detection results of the head orientation for its different inclination values relative to the axis 0y. As mentioned above, based on the orientation of the segment yG and its length, TPattern adopts initial values for further analysis: rSK and additionally the angle αG. To limit the computation time, the analysis with TPattern is performed at two image resolutions:

  • 1:2 scale, preliminary stage: the angle αTS1 is pre-determined, i.e. αG − 10° ≤ αTS1 ≤ αG + 10° with the step ∆αTS1 = 2°,

  • 1:1 scale, verification stage: the following value is determined: αTS1 − 5° ≤ αTS2 ≤ αTS1 + 5° with the step ∆αTS2 = 1°,

where:

  • αG – orientation of the head determined from the segment representing the head height,

  • αTS1 – TPattern orientation angle in 1:2 scale (preliminary stage),

  • αTS2 – TPattern orientation angle in 1:1 scale (verification stage).

The operation principle of this part of the algorithm is shown in Fig. 5.

Fig. 5 Figure (a) shows the method for calculating the head orientation and height using the active contour. The rough orientation is the angle αG between the segment AB and the axis Y; the length of AB gives the approximate height of the head yG (and rSK in 1:2 scale). Figure (b) shows the operation of TPattern in 1:2 scale: TPattern analyzes the image section (marked with the white contour in image (b), created from the scaled active contour shape shown in blue in image (a)) in the specified range of 20° with respect to αG. Figure (c) shows the final stage of determining the orientation in 1:1 scale; here, TPattern analyzes the range of 10° with respect to the orientation angle αTS1 calculated as shown in (b)

Initially, TPattern (in 1:2 scale) analyzes the image section (marked with white dots in Fig. 5) in the specified range of 20° with respect to αG. Then, in the final step of determining the orientation in 1:1 scale, TPattern analyzes the range of 10° with respect to the set orientation angle αTS1. Based on the inclination angle of the head αTS2 determined in this way and the point OG (Fig. 4), the head orientation and position are standardized. As a result, in the subsequent stages, the algorithm analyzes the area of the head in an upright position, which greatly accelerates the process of analysis.
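For illustration, a minimal sketch of this coarse-to-fine orientation search follows; the scoring function representing the TPattern match is an assumed placeholder, as the paper does not give its implementation.

```python
import numpy as np

def fit_tpattern_orientation(score_fn, alpha_g):
    """Two-stage TPattern orientation search.

    score_fn(scale, angle_deg) is an assumed callable returning the
    TPattern matching score for the head image at the given resolution
    scale and detector orientation (higher means a better match).
    """
    # Preliminary stage, 1:2 scale: alpha_G +/- 10 deg, step 2 deg.
    coarse = np.arange(alpha_g - 10.0, alpha_g + 10.0 + 0.1, 2.0)
    alpha_ts1 = max(coarse, key=lambda a: score_fn(0.5, a))
    # Verification stage, 1:1 scale: alpha_TS1 +/- 5 deg, step 1 deg.
    fine = np.arange(alpha_ts1 - 5.0, alpha_ts1 + 5.0 + 0.1, 1.0)
    alpha_ts2 = max(fine, key=lambda a: score_fn(1.0, a))
    return alpha_ts2  # final head inclination used for normalization
```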

The active contour (balloon model) used at this stage of the algorithm was built from 36 nodes. It was initialized as a circle whose center is at the point marked in blue (Fig. 3c). The active contour grows until the nodes reach the contour of the head; the nodes move as shown in Fig. 3d. The value of the active contour energy is determined using the simple Eq. (2), and the algorithm looks for the case where the energy reaches its maximum value.

$$ {E}_{TOTAL}={\displaystyle \sum_{i=1}^{36}\left({E}_{contour}(i)+{E}_{image}(i)\right)} $$
(2)

where:

  • Econtour(i) – Euclidean distance D between subsequent active contour nodes (Fig. 6),

  • Eimage(i) – pixel value in the binary image after edge detection,

  • i – index of the active contour nodes (Fig. 6).

Fig. 6 Fragment of the active contour with marked nodes (5, 6, 7), angles α and distances D

Another constraint on the active contour is its stiffness, determined by the angles between three subsequent nodes. According to the golden-ratio proportions of the ideal face [13], an ideal ellipse representing the head was prepared. Based on the values of the angles for this ideal ellipse, it was assumed that the angles α(i) between three subsequent nodes of the active contour should have a value between 165° and 180°.
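A minimal sketch of the energy of Eq. (2) and of the stiffness constraint is given below; it assumes the nodes are stored as integer (row, column) coordinates, and it is only an illustration of the two criteria, not the authors' implementation.

```python
import numpy as np

def total_energy(nodes, edge_image):
    """E_TOTAL of Eq. (2) for a closed 36-node balloon contour.

    nodes: (36, 2) int array of (row, col) node positions.
    edge_image: binary image after edge detection.
    """
    e_total = 0.0
    for i in range(len(nodes)):
        nxt = nodes[(i + 1) % len(nodes)]
        e_contour = np.linalg.norm((nodes[i] - nxt).astype(float))  # distance D
        e_image = float(edge_image[nodes[i][0], nodes[i][1]])       # edge response
        e_total += e_contour + e_image
    return e_total

def stiffness_ok(p_prev, p, p_next, lo=165.0):
    """Stiffness constraint: the angle between three consecutive nodes
    must stay between 165 and 180 degrees."""
    v1 = np.asarray(p_prev, float) - np.asarray(p, float)
    v2 = np.asarray(p_next, float) - np.asarray(p, float)
    cos_a = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) >= lo
```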

4.3 Detection of the eye socket areas

After successful detection of the eyebrows, nose and head orientation, the next step is detection of the eye socket areas and their centers. Since the interior of the eye sockets is warmer than their surroundings [19, 31], an edge detection method can be used to determine their outline; the best results were obtained with the Canny edge detector. Previously obtained information about the size and position of the eyebrow lines allowed the potential search area for the eye sockets to be determined. The Hough transform was used to detect the eye sockets. It is commonly applied in this kind of task, but most often in visible light [16, 28, 30, 34]. The biggest disadvantages of this method, namely problems with the selection of ellipse sizes and with their analysis in the case of rotation, were eliminated here. In the calculations, the eye socket ellipse size ranges were adopted depending on the value rSK. The ellipse axis sizes in the ranges xO (longer, horizontal) and yO (shorter, vertical) (Fig. 1) were constrained in the following way:

$$ 0.2\cdot {r}_{sk}\le {x}_O\le 0.4\cdot {r}_{sk} $$
(3)
$$ 0.1\cdot {r}_{sk}\le {y}_O\le 0.2\cdot {r}_{sk} $$
(4)

As a result of segmentation of the area including the eyes and eyebrows (Fig. 1b), the algorithm determines the eye socket centers and the maximum sizes of the ellipses. On this basis, the localization and outline of the areas (WOL, WOP) are marked, which is shown in Fig. 2b. The algorithm presented in [23] has been improved with a new method of curve calculation. To obtain precise segmentation of the curves representing the eyebrows, the algorithm uses a portion of the ellipses representing the eye sockets. The sum of the intensity of the image points included in the curve corresponding to the right and left eyebrows is analyzed in order to determine the minimum value; on this basis, the final position of the eyebrows is determined (Fig. 9a, b and c, bottom images).
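As an illustration of this step, the sketch below combines Canny edge detection with the Hough ellipse transform from scikit-image and enforces the axis ranges of Eqs. (3) and (4). Applying those ranges to the detected half-axes, as well as the accumulator settings, are assumptions made here rather than the authors' implementation.

```python
import numpy as np
from skimage import feature, transform

def detect_eye_sockets(head_img, search_mask, r_sk):
    """Canny edges + Hough ellipse detection constrained by Eqs. (3)-(4).

    head_img: normalized grayscale head image; search_mask: boolean mask
    of the strip below the eyebrow lines; r_sk: TPattern arm length.
    """
    edges = feature.canny(head_img, sigma=2.0) & search_mask
    # Coarse pre-filter on axis lengths; the exact check follows below.
    result = transform.hough_ellipse(edges, accuracy=10, threshold=20,
                                     min_size=int(0.2 * r_sk),
                                     max_size=int(0.2 * r_sk))
    result.sort(order='accumulator')
    sockets = []
    for acc, yc, xc, a, b, orientation in result[::-1]:
        major, minor = max(a, b), min(a, b)
        # Keep ellipses whose axes fall in the ranges of Eqs. (3) and (4).
        if 0.2 * r_sk <= major <= 0.4 * r_sk and 0.1 * r_sk <= minor <= 0.2 * r_sk:
            sockets.append((yc, xc, a, b, orientation))
        if len(sockets) == 2:          # left and right eye socket
            break
    return sockets
```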

4.4 Nose detection

Detection of the characteristic points of the nose in thermal images is a more complex process than in visible light, since the methods proposed for visible light, such as the detection of the nostrils [32] or of their axis of symmetry [33], cannot be applied here. The detection examples used in thermovision and based on the brightness (temperature) in the nose area [35] were also not ideal. The described TPattern detector makes it possible to pre-determine the position of the nose with a precision close to manual localization (Fig. 1, the red rectangle covering the nose). Therefore, the height and symmetry axis of the nose (Fig. 7), and thus the search area, can be relatively easily determined using the structural elements SENL and SENP shown in Fig. 7d. The extreme lower left and right points of the nose were taken as the characteristic ones (Fig. 7a). Figure 7b shows the points determined by the algorithm and two more points which mark the upper range of the paranasal sinuses, i.e. the points ONLG and ONPG. The next image (Fig. 7c) shows the detection of the characteristic points along with marking of the sinus area. The specially prepared structural elements SENL and SENP (Fig. 7d), mentioned above, are used for the detection of the point pairs ONL and ONP. The distribution pattern of brightness (temperature) and the relative position of the left and right elements (the distance D) are taken into account. The analyzed ranges are determined based on the localization and distance of the eye socket centers.

Fig. 7 Assumptions and detection examples of the nose points together with the structural elements SENL and SENP. Assumptions about the localization of the nose end points are marked in figure (a). Figure (b) shows the end points of the paranasal sinuses determined automatically from the geometric relationships obtained as the intersection of the segment AB with the side lines of the nose, which connect the points ONL and ONP with the point OG. The segment AB passes through the coordinates of the lowest point of the previously determined eye socket ellipses

In order to detect the remaining points required for marking the sinuses, the algorithm also determines the points ONLG and ONPG, which are the intersections of the segment AB (coinciding with the lowest point of the eye sockets) with the nose side axes joining the point OG with ONL and OG with ONP (Fig. 7c). The outline of the sinuses is defined by a spline curve. Proper detection of the position of the nose points (ONL, ONP) enables the area of the paranasal sinuses to be determined. Additionally, it enables verification of the face symmetry axis pre-determined by TPattern.

In relation to the research presented earlier [23], after expanding the set of analysed thermograms it was necessary to modify the structural elements SENL and SENP used to locate the characteristic points of the nostrils (ONL and ONP). Additionally, in the studies presented here, the algorithm has been extended to enable segmentation and analysis of the nasal areas WNL and WNP (Fig. 2), which was not considered in the previous studies and is new in this paper compared to [23].
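Since the exact form of SENL and SENP is not reproduced here, the following is a purely hypothetical sketch of how such structural elements could be matched: small templates encoding the expected brightness (temperature) pattern are correlated with the search strip, and candidate pairs are filtered by the mutual distance D.

```python
import cv2
import numpy as np

def locate_nose_points(strip_img, se_nl, se_np, d_min, d_max, top_k=20):
    """Hypothetical matching of the structural elements SE_NL and SE_NP.

    strip_img: float32 search strip around the nose symmetry axis;
    se_nl, se_np: small float32 templates for the left/right nose point
    (assumed stand-ins for the structural elements of Fig. 7d);
    (d_min, d_max): allowed range of the mutual distance D.
    """
    res_l = cv2.matchTemplate(strip_img, se_nl, cv2.TM_CCOEFF_NORMED)
    res_r = cv2.matchTemplate(strip_img, se_np, cv2.TM_CCOEFF_NORMED)
    # Consider only the strongest responses of each template.
    cand_l = np.argsort(res_l, axis=None)[::-1][:top_k]
    cand_r = np.argsort(res_r, axis=None)[::-1][:top_k]
    best = None
    for l in cand_l:
        yl, xl = np.unravel_index(l, res_l.shape)
        for r in cand_r:
            yr, xr = np.unravel_index(r, res_r.shape)
            d = xr - xl                               # distance constraint
            if d_min <= d <= d_max:
                score = res_l[yl, xl] + res_r[yr, xr]
                if best is None or score > best[0]:
                    best = (score, (yl, xl), (yr, xr))
    return best  # (score, O_NL, O_NP) in strip coordinates, or None
```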

4.5 Forehead detection

Forehead detection is based on the information obtained in the previous steps of the algorithm concerning the localization of TPattern. The forehead detection algorithm receives as input parameters the designated eyebrow line and the upper pre-determined outline of the forehead area (Fig. 1b), determined according to the size of TPattern (rSK). Based on the histogram analysis, the most common pixel values occurring in the forehead area were identified and the threshold values were set. These values enable the area of the forehead without hair to be pre-determined. Then, in order to ultimately determine the area of the forehead and the hairline, verification based on the standard deviation in a [5 × 5] neighborhood of each pixel of the potential forehead area was used. Once the hairline is properly detected, it is also important to determine symmetrical areas of the forehead so that they can be compared to each other (e.g. to determine temperature differences between the left and right sides of the forehead). The symmetry axis of the forehead is accurately determined based on the coordinates of the points ONL, ONP (Fig. 7a). Then, as a result of the AND operation between the images of the right and left sides of the forehead, its proper common outline is determined. Proper segmentation of the forehead may be useful in a number of potential applications.
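A minimal sketch of this step follows; the histogram-derived thresholds and the standard-deviation cut-off are assumed inputs, and the ROI midline stands in for the symmetry axis that the paper derives from ONL and ONP.

```python
import numpy as np
from scipy import ndimage

def segment_forehead(roi, t_lo, t_hi, std_cut):
    """Sketch of the forehead/hairline separation.

    roi: float image of the forehead search region;
    t_lo, t_hi: thresholds derived from the histogram of the region;
    std_cut: assumed cut-off for the local 5x5 standard deviation
    (skin is thermally smooth, hair is textured).
    """
    prelim = (roi >= t_lo) & (roi <= t_hi)       # histogram-based skin mask
    # Local standard deviation over a 5x5 window.
    mean = ndimage.uniform_filter(roi, size=5)
    mean_sq = ndimage.uniform_filter(roi * roi, size=5)
    local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    forehead = prelim & (local_std < std_cut)    # verification step
    # Symmetric common outline: AND of the mask with its mirror image
    # (here mirrored about the ROI midline as a simplification).
    return forehead & forehead[:, ::-1]
```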

Examples of applying the analysis of the forehead areas in lie detection using thermal imaging, as well as in improving algorithms for face recognition and even for determining the subjects’ gender in visible light, can be found in the literature [15, 37, 38].

5 Results and discussion

The different areas of the face designated according to the presented relationships were the basis for the analysis of temperature distributions. When the temperatures of the symmetrical areas of the face were compared, the results were reproducible for the studied cases and for a variable head position. Examples of the calculated mean values of the characteristic areas are given in Fig. 8. The included results indicate that, despite the variable setting of the head and the distance of the patient from the camera, the difference between the mean temperatures of the corresponding areas (WCL and WCP, WOL and WOP, WZNL and WZNP, WNL and WNP, respectively; Fig. 2) does not exceed 0.3 °C.
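A minimal sketch of this symmetric-area comparison, assuming the segmentation step yields a boolean mask for each area of the pair:

```python
import numpy as np

def symmetric_difference(temp_map, mask_left, mask_right):
    """Absolute difference of the mean temperatures in a symmetric area
    pair (e.g. W_OL vs W_OP), given boolean masks from the segmentation."""
    return abs(float(temp_map[mask_left].mean()) -
               float(temp_map[mask_right].mean()))
```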

Fig. 8 Examples of temperature measurements of characteristic areas for images obtained in different ways. Regardless of the patient’s position with respect to the camera and the degree of rotation of the head, the maximum difference in the mean value of corresponding areas does not exceed 0.3 °C

Figure 8 shows the results of the mean temperature measurements for the designated areas for thermograms obtained in different ways. The designation of the areas has been adopted in accordance with Fig. 2b. Figure 8 confirms such characteristics of the algorithm as:

  • correct localization of medically important areas despite changes in the position of the head (rotation and varying distance of the patient from the thermal camera),

  • reproducibility of the calculations sufficient to detect medically significant temperature differences between symmetrical areas, i.e. differences below 0.5 °C.

A comparison with other authors’ results is presented below.

5.1 Comparison with other authors’ results

The presented method of analysis of thermal images of the head with fully automatic determination of characteristic areas was compared with two other studies (Martinez B., Binefa X. [22]; Trujillo L., Olague G., Hammoud R., Hernandez B. [35]). These are the only papers found that are closely related to the discussed issue. Other known studies are not associated with a complete analysis of the above-mentioned areas of the face, are not fully automatic or operate in the visible spectrum [3, 19, 32]. Table 1 presents the solutions similar in subject to the presented algorithm, along with a brief discussion of their possibilities.

Table 1 Comparison of methods presented in the literature with the method described in this paper

A detailed description of each method is shown below.

  1. Method 1.

    In paper [35], the authors did not specify a method for assessing the effectiveness of the localization of the facial characteristic points, because the results were directly applied to the recognition of emotions from a thermogram. Therefore, the comparison criterion may be the fact that the solution proposed here accurately determines both the facial characteristic points and the corresponding areas with regard to their anthropometric size, which was not shown for method 1 described by Trujillo L., Olague G. [35].

  2. Method 2.

    In the other paper [22], the authors focused on the process of localization of facial characteristic points in the thermogram, hence it was possible to compare the results. In their studies, a smaller set of images was used and the authors did not describe the algorithm operation for a changing position or orientation of the head. However, they proposed a method for evaluating the effectiveness of the algorithm and determined the criteria accepted as correct localizations. To evaluate the effectiveness of the process, it was necessary to mark the points (eye centers and nostrils) manually (this marking is regarded as the reference) and then compare them with the results obtained with the segmentation algorithm described by the authors. As a result, in method 2, the authors proposed a measure of the error in the localization of the eyes and nostrils δen on the basis of the following formula (5), after standardization of the variables according to the adopted nomenclature (Fig. 2a):

    $$ {\delta}_{en\text{-}eye}=\frac{\max \left(\left\Vert {O}_{OL}-{\widehat{O}}_{OL}\right\Vert, \left\Vert {O}_{OP}-{\widehat{O}}_{OP}\right\Vert \right)}{\left\Vert {O}_{OL}-{O}_{OP}\right\Vert },\qquad {\delta}_{en\text{-}nostril}=\frac{\max \left(\left\Vert {O}_{NL}-{\widehat{O}}_{NL}\right\Vert, \left\Vert {O}_{NP}-{\widehat{O}}_{NP}\right\Vert \right)}{\left\Vert {O}_{NL}-{O}_{NP}\right\Vert } $$
    (5)

    where:

    OOL/NL, OOP/NP are the reference positions marked manually as the centers of the left and right eye/nostril, the symbol “^” denotes the corresponding coordinates determined automatically by the algorithm, and ‖·‖ is the Euclidean norm.

    The authors of the compared method 2 treated localizations with the error δen ≤ 0.15 as correct. In order to compare the algorithm presented here (method 3) with the results for method 2 (described in paper [22]), this error measure was implemented (a minimal sketch of the measure is given below, after this list). To carry out a full comparison, it was necessary to specify manually the reference coordinates of the points in the collected set of 125 thermograms.

    The results are shown in Table 2. It can be observed that the algorithm developed in this paper (method 3) achieves greater efficiency for the criterion δen ≤ 0.15. It is worth emphasizing that the values presented in Table 2 (obtained with the use of the described algorithm) were determined for various images, including cases of variable orientation and scale of the patient.

    Table 2 Comparison of method 2 ([22]) with method 3 described in this paper

    In addition, it should be noted that apart from a significantly improved localization of the facial characteristic points, the algorithm in method 3 also selects the areas corresponding to the localized points. The determination of this multiplicity of areas was achieved by taking into account the anthropometric properties and by setting the size and area of the head in the early stages of the algorithm operation.
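A minimal sketch of the error measure of Eq. (5), referenced above; the point coordinates are assumed to be (row, column) pairs:

```python
import numpy as np

def delta_en(ref_left, ref_right, auto_left, auto_right):
    """Localization error of Eq. (5): the larger of the two point errors
    normalized by the reference inter-point distance; values <= 0.15 are
    counted as correct localizations, following [22]."""
    num = max(np.linalg.norm(np.subtract(ref_left, auto_left)),
              np.linalg.norm(np.subtract(ref_right, auto_right)))
    return num / np.linalg.norm(np.subtract(ref_left, ref_right))

# Example with eye centers given as (row, col):
# delta_en((120, 95), (120, 160), (121, 97), (119, 158))  # ~0.034, correct
```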

In summary, the proposed algorithm offers new possibilities in comparison to the other methods:

  • enables selection of the face areas and characteristic points at different scales and orientations,

  • does not require any prepared training sets,

  • size and range of the extracted areas reflect the changes in the size of the examined object,

  • evaluation of the results based on the equation proposed in [22] indicates the advantage of the proposed method for the value δeye/nostril ≤ 0.15 adopted in [22] (Eq. 5).

Figure 9 shows the final segmentation results for three thermograms. High effectiveness and accuracy of the presented method under changes of the patient’s position and orientation can be observed in the images. The upper section of Fig. 9 presents the TPattern detection for each thermogram. The bottom section presents the head region with precisely marked areas and facial characteristic points after normalization of the position, scale and rotation. These are examples of the thermograms from Fig. 8a, b and c in two variants: with the head in a position close to vertical (Fig. 9a, b) and with α = −21° (Fig. 9c). The presented method for detection and localization of facial elements can be used not only as a tool to measure the temperature in thermograms. It also seems possible to use it in related fields where attempts to analyze thermograms are already being made, e.g. as the first step in the process of face recognition in biometric systems [2, 5, 14, 17, 18, 31], where the main task could be localization of the face features for comparison and recognition. Moreover, it can be used to improve the methods of face analysis in visible light [1, 4, 10, 12, 36] to enable observation in poor illumination and, finally, in the design of human-computer interfaces [7, 20] to track human faces in infrared images.

Fig. 9 Source image and segmentation results for selected cases. The algorithm correctly determines the area of the head and the orientation with the use of TPattern. Then, it presents the standardized image of the head with the marked, selected areas of the face. It can be observed that the segmentation results are reproducible despite changes in the scale, orientation and temperature distribution of the tested subject. As a result of successful segmentation, it is possible to measure the morphological and temperature properties in the relevant areas

6 Conclusions

The paper presents a method for the automatic segmentation of faces in thermograms, used to determine the location of the human head and its characteristic areas. Owing to the applied solutions, the process is completely automatic.

The algorithm does not require user control and calculates the threshold temperature values in order to extract an optimal human silhouette. At this stage, the head area is determined and the influence of clothing or hairstyle is eliminated. The TPattern detector prepared by the authors accurately determines the head orientation and locates its center, which enables normalization of the analyzed head area for further calculations. Thus, the process of segmentation of the characteristic face areas is considerably easier. The effectiveness of the algorithm was calculated based on the criteria proposed in the literature and reached higher values than the reference solution. The results are presented in graphical form. The diversified collection of thermal images made it possible to test and observe the method in typical real situations. Based on the above findings, we believe that the proposed method may be useful in a number of areas such as medicine and biometrics.