
Open Access 13.03.2024 | Original Article

Human face identification after plastic surgery using SURF, Multi-KNN and BPNN techniques

Authors: Tanupreet Sabharwal, Rashmi Gupta

Published in: Complex & Intelligent Systems


Abstract

Facial identification for surgical and non-surgical datasets is gaining popularity. The reason behind this popularity is the growing need for a robust facial recognition system that is resilient to occlusion, spoofing attacks and, most importantly, plastic surgery effects. Plastic procedures are undertaken by individuals to beautify their external appearance, but they are also undertaken by impostors to commit crimes and falsify their true identities. This research work aims at developing a facial recognition system which can distinguish genuine and impostor pairs. The proposed methodology optimizes face detection via a Back-Propagation Neural Network (BPNN) and dimensionality reduction by means of Speeded Up Robust Features (SURF) followed by a Multi-K-Nearest-Neighbor technique. The novelty is the production of a new T-Database which trains the BPNN; thus, the BPNN converges faster and achieves higher recognition. The proposed scheme has not been applied to date on a medically altered dataset. We have applied five distance metrics and integrated them to acquire the T-Dataset, which is fed to the BPNN. The scheme is tested on surgical and non-surgical datasets, and it is deduced that higher recognition is achieved with non-surgical databases as compared to surgical ones. For both surgical and non-surgical datasets, the computational cost attained is moderate.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Introduction

A facial identification system is capable of recognizing an individual from a video/image series. Human identification systems spot individuals by their facial images. These schemes establish the presence of a genuine individual rather than just examining whether a particular detection is valid or not. Identification of an individual is done according to one's facial morphology. Face recognition is used in various domains such as public security systems, marketing, social media, image database investigation, criminal identification, video surveillance and identity verification [23, 26]. Biometric technology has emerged as a promising solution for improving digital security, and fingerprint scans and facial recognition are increasingly used to deter online identity theft. Almost all major smartphones are now equipped with some kind of biometric sensor or scanner; Apple notably replaced its fingerprint scanner with face biometrics.
Figure 1 shows a sample image depicting human facial geometry and morphology. Human face detection is not an easy task because of the variations in facial gestures, personal looks, poses, clutter, occlusion and lighting effects [18]. Due to the variability in illumination, the number of light sources, and the direction of the camera, designing a robust face detection scheme with high identification precision is a complicated job in practice.
A cosmetic therapy is defined as a clinical specialty committed to the reconstruction of facial and bodily imperfections due to birth disorders, trauma, burns, illness, etc. Cosmetic therapy is intended to rectify dysfunctional areas of the human body with a reconstructive temperament. Medical/cosmetic alterations are more popular than ever in the present-day scenario, and a considerable rise is witnessed in spending on medical procedures. An impending career lift is one reason, as some hope a facial renovation (better appearance) could lead to a better job; it has been reported that patients feel a younger appearance will help them survive in the competitive job market. Since some see it as an investment in their careers, they may use funding options, which have allowed more people to undergo medical therapies who might not otherwise have been able to afford them. Finally, there is greater demand for medical/cosmetic/plastic procedures these days, and a majority of people approve of medical alterations; these surgeries are also popular among senior citizens. Different medical procedures on the facial region lead to transformations in skin consistency and global face appearance [17]. The most obvious benefit of medical surgery is the boost in confidence and self-esteem of an individual. A medical alteration can also help to improve a patient's physical health and bodily functions.
Facial medical alterations change external face appearance, which directly affects the robustness of appearance-based face recognition systems. Though these medical surgeries are very useful for beautification and cure, they pose a serious challenge for human identification after surgery, as these procedures transform facial features. There are a variety of local/global medical methods. Local operations include Botox injections, laser hair removal, liposuction (removal of extra fat), blepharoplasty (eyelid surgery), cheek implants, tummy tucks, otoplasty (reshaping of ears), chin extension and rhinoplasty (restructuring of the nose). A face lift is a global surgery used to smooth out wrinkles and reduce signs of aging. Medical alterations which are local in nature permit moderate recognition results because the facial geometry changed is partial (i.e., the cosmetic surgery undertaken does not change the overall facial appearance, as with otoplasty and rhinoplasty), whereas for global alterations the complete face undergoes a rejuvenation (i.e., the overall facial appearance is completely changed, as with rhytidectomy and hormone replacement therapies).
Medical surgery can defeat even the best recognition algorithms. Serious crimes are committed by deceivers who duplicate another person's identity, resulting in great monetary losses [13]. This is a stern wrongdoing and calls for widespread research attention in the biometric recognition and authentication domain toward the development of a robust and unique human identification system [38]. Human recognition is hampered by a variety of factors, including aging effects, pose, illumination, expression, disguise, occlusion, presentation attack, type of operation, and dataset effects [42, 49]. The major contributions (motivation and resolved issues) of the manuscript are presented as follows:
(a)
The manuscript emphasizes the necessity for a reliable recognition system that can handle the difficulties brought on by medical alterations.
 
(b)
The purpose of the manuscript is to discuss the benefits and drawbacks of medical changes. Medical interventions can improve one's physical appearance or treat incidental injuries, but they pose a major security threat when they are utilized to falsify identity.
 
(c)
The type of identification scheme employed and the nature of surgery (local/global) a person has undergone determine the technical explanation for the performance drop.
 
The paper is arranged along these lines. After the introduction, the next section comprises the literature review. “Proposed method” describes the proposed procedure with an outline of SURF, Multi-KNN and BPNN. “Results and discussion” presents the results. Conclusions and future scope in the facial recognition and authentication domain are discussed in “Conclusion”.

Literature review

Face recognition after plastic surgery is an interesting research area, as the entire facial geometry is transformed depending on the nature of surgery [4, 15, 16, 20, 25, 35–37]. Local plastic therapies reflect minor facial feature alterations, while global surgeries change the entire facial geometry and orientation. Face recognition after global plastic surgery is difficult compared to local surgery [8]. Table 1 summarizes state-of-the-art recognition techniques w.r.t. RR, local/global features, texture and 3D aspects. SURF surpassed most of the local feature detection schemes, such as Principal Component Analysis (PCA) for dimensionality reduction, Independent Component Analysis (ICA) and Linear Discriminant Analysis (LDA), including PCA-derived techniques such as Diagonal/Curvelet/Kernel PCA and 2DPCA (two-dimensional PCA). It also performed better than other recognition methods in the literature, namely PCA-DCT, Corr-PIFS, CASNN, and FFNN [7, 10, 21, 29, 31]. Abuzneid et al. [1] proposed a framework for human face recognition using the LBPH descriptor, KNN classification and a back-propagating neural network; their scheme provided a training dataset with distinct patterns based on the correlation between the original training image samples. The above-mentioned references concern non-surgical datasets, whereas the following references concern surgical datasets. Singh et al. [43] presented medical changes as a challenge to facial recognition techniques. To objectively quantify the performance of face identification techniques on a medically altered dataset encompassing facial samples for both global and local therapies, the authors presented an experimental study; its outcomes showed that further work was necessary to create the best facial identification system. Six facial identification methods were used: PCA, Fisher Discriminant Analysis (FDA), Local Feature Analysis (LFA), Circular Local Binary Patterns (CLBP), Speeded Up Robust Features (SURF), and a Neural Network design based on the Two-Dimensional Log Polar Gabor Transform (GNN). Singh et al. [44] investigated identification rates on the medically altered dataset. These methods were chosen because they offered a combination of feature detection and matching approaches based on appearance, facial characteristics, extractors, and quality. Despite combining global and local detection approaches, little correspondence was achieved. Marsico et al. [22] employed correlation-centered facial identification on illumination- and pose-preprocessed samples. Bhatt et al. [5] utilized a progressive granular technique with SURF and CLBP feature detectors to process tessellated image samples. Aggarwal et al. [2] used an integrated combination of part-wise and sparse-based techniques for facial recognition. Lakshmiprabha et al. [19] examined Gabor characteristics, using periocular and face biometrics to develop a multimodal biometric system; the proposed methodology had the benefit of requiring neither a training procedure nor any contextual information from other datasets. Ibrahim et al. [12] offered a comparative examination of numerous recognition schemes applied to pre/post-surgically altered samples, comparing several Gabor-based facial representations, PCA, KPCA (Kernel PCA), Kernel Fisherface (KFA), and other feature abstraction techniques.
With the use of histogram normalization techniques, these strategies were combined. The results showed that gradient-face was the best lighting approach, histruncate was the best histogram normalizing technique, and GKFA was the best feature abstraction method. Mun et al. [24] also examined multimodal biometric systems. Their technique matched facial samples before and after medical changes by utilizing several aspects of the face and periocular regions; local binary patterns were used to extract the features, and PCA was used to reduce dimensionality. The classification process employed Euclidean metrics, and the suggested strategy improved the recognition rate by extracting shape and textural data. Said et al. [40] estimated geometrical face recognition after medical alterations. Regions of interest in the after-surgery samples were localized and their midpoints identified; the geometrical distances between the regions of interest were then computed to ascertain post-surgery geometrical face vectors. Lastly, a minimum-distance classifier was employed to compare the pre-surgery trait vectors of the test image with the post-surgery trait vectors. Pre-surgery feature attributes that gave the greatest correspondence and minimum distance were taken as the best match. Verghis et al. [48] proposed a multi-objective evolutionary granular algorithm that uses granules extracted from a facial image. First, Gaussian and Laplacian-of-Gaussian multi-resolution image pyramids were created from facial data. Second, facial samples were broken up into horizontal and vertical granules of various sizes. The final step involved extracting data from local face regions. The ability to transition between SIFT and EUCLBP features was made possible through evolutionary feature selection, which also made it possible to encode discriminant information for each face granule. Results from the investigation beat cutting-edge techniques, including a commercial system, for matching samples of medically changed faces. In terms of RR, MSE, and F-score, region-based identification techniques produced satisfactory results with moderate to high computation complexity. Bouguila et al. [6] stated that there are limited medical articles discussing the requirement of a robust facial recognition system within the field of plastic surgery. Artificial intelligence is making its way through various aspects of communal and remote areas. The field of plastic surgery takes advantage of widely available AI technologies, i.e., deep learning, big data, and NLP, and of the recent interest of policy makers in supporting the growth of AI [14]. Algorithms based on AI sub-fields are performing well in the facial recognition domain [3]. Suri et al. [46] proposed an approach which independently learnt shape, color and texture representations from a generic surgical dataset. An unsupervised dictionary learning technique based on a stage-by-stage least angle regression approach was used to learn representations. First, a pre-trained DenseNet for face recognition was used, and then a neural network classifier was developed utilizing features projected into the dictionary space. The main limitation in employing deep models for face recognition after surgery is their inherent complexity and GPU requirements (i.e., large processing power). Scientifically rigorous data are required for biometric identification after plastic surgery.
The proposed method achieves a higher identification rate, a lower error rate and a lower computation time (CT), which are requirements for robust face recognition algorithms.
Table 1
Tabular evaluation of state-of-the-art techniques w.r.t. RR, local/global features, texture and 3D aspects

| References | Dataset | Global | Local | Tex | 3D | RR % | Algorithm |
|---|---|---|---|---|---|---|---|
| Singh et al. [43] | NA | Y | Y | Y | N | 40 | PCA, FDA, GF, LFA, LBP, GNN |
| Singh et al. [44] | Plastic surgery facial dataset | Y | Y | Y | N | 40 | PCA, FDA, LFA, CLBP, SURF, GNN |
| De Marsico et al. [22] | Plastic surgery facial dataset | Y | Y | N | N | 70 | PIFS + region-based correlation index |
| Lakshmiprabha et al. [19] | Plastic surgery facial dataset | N | Y | Y | N | 74.4 | Gabor/LBP + PCA + Euclidean distance |
| Mun and Deorankar [24] | Web-available before/after surgery photos | Y | Y | Y | N | — | Multimodal biometric features: PCA (face) + LBP (periocular region) |
| Aggarwal et al. [2] | Plastic surgery facial dataset | N | Y | N | N | 77.9 | Part-wise and sparse representation |
| Liu et al. [20] | Plastic surgery facial dataset | Y | Y | Y | N | 86.1 | Gabor patch classifiers via Rank-Order list Fusion (GPROF) |
| Bhatt et al. [5] | Plastic surgery facial dataset | Y | Y | Y | N | 78.6 | Uniform Circular Local Binary Pattern (UCLBP) + Speeded Up Robust Features (SURF) + genetic algorithm |
| Ibrahim et al. [12] | Plastic surgery facial dataset | Y | N | Y | N | 83.2 | PCA, KPCA, KFA, Gabor |
| Karuppusamy and Ponmuthuramalingam [16] | NA | N | Y | Y | N | — | Extended Uniform Circular Local Binary Pattern (EUCLBP) + SIFT + Particle Swarm Optimization (PSO) |
| Sun et al. [45] | Plastic surgery facial dataset | Y | Y | Y | N | 77.5 | Structural similarity (SSIM) index + weighted patch fusion |
| Said et al. [40] | Plastic surgery facial dataset | N | Y | N | N | 76.1 | Geometrical descriptors of ROIs + minimum distance classifiers |
| Verghis and Bhuvaneshwari [48] | Plastic surgery facial dataset | Y | Y | N | Y | 87.3 | Evolutionary granular algorithm + SIFT and EUCLBP |
| Sabharwal and Gupta [36] | Plastic surgery facial dataset | Y | Y | Y | — | 89.7 | Region-based score level fusion of local facial features |

Proposed method

The proposed technique is explained with a block diagram in Fig. 2. This method demonstrates an enhanced human facial identification system using the SURF feature extractor, Multi-KNN, and a BPNN network. Our main aim is centered on attaining a robust T-Dataset which helps the BPNN to converge rapidly with an enhanced recognition rate. The T-Dataset is built on the correlation among the training samples, not on the intensity of the image samples.

Algorithm

The algorithm for the proposed method is divided into the following segments.

System model

See Fig. 2.

Preprocessing/normalization procedure

(a)
Step one includes a number of pre-processing operations on the raw training/testing image samples, such as reshaping and cropping, to remove the effect of the facial background. Three different databases (AT&T, YALE and the Plastic surgery facial dataset) are used to validate the efficiency of the proposed method. The surgical and non-surgical databases contain image samples of diverse dimensions; details about the image dimensions of all datasets are given in “Results and discussion”.
 
(b)
Clutter and brightness were reduced by converting the image samples to grayscale. To construct a robust face detection system, detected faces pre/post medical therapy were normalized to zero mean and unit variance, accompanied by histogram equalization (to widen image contrast) and Wiener filtering (to restore blurred boundaries).
 
(c)
Wiener filtering executes an optimal trade-off between inverse filtering and noise smoothing: both the blurring and the additive noise are reversed simultaneously. It is a noise smoothing process with good error behavior, minimizing the overall mean-square error, and it produces a linear estimate of the original image. The facial image samples after pre-processing had a dimension of 196 × 224 pixels each. (A minimal sketch of this pipeline follows the list.)
 
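The pre-processing chain above maps directly onto standard library routines. Below is a minimal Python sketch assuming OpenCV and SciPy; the 5 × 5 Wiener window and the file-based interface are illustrative assumptions, not choices specified in the paper.

```python
import cv2
import numpy as np
from scipy.signal import wiener

def preprocess_face(path, size=(196, 224)):
    """Grayscale -> resize -> histogram equalization -> Wiener filter
    -> zero-mean/unit-variance, mirroring steps (a)-(c) above."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)    # reduce clutter and brightness
    img = cv2.resize(img, size)                     # 196 x 224 pixels, as in the paper
    img = cv2.equalizeHist(img)                     # widen image contrast
    img = wiener(img.astype(np.float64), (5, 5))    # inverse-filter/noise-smoothing trade-off
    return (img - img.mean()) / (img.std() + 1e-8)  # zero mean, unit variance
```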

Dimensionality reduction and local facial attribute extraction

Dimensionality reduction is followed by extraction of local facial attributes such as the nose, lips and periocular (eye) regions to match diverse face images of the same individual. This is achieved by the SURF feature descriptor. SURF is a descriptor-based recognition method. It is a local attribute extractor employed for blob detection, image registration, pattern recognition, reconstruction and classification. To locate points of interest, it uses an integer approximation of the determinant of the Hessian blob detector, which can be estimated with integer operations via a pre-computed integral image. Its attribute descriptor is based on the sum of Haar wavelet responses around the point of interest. It can be summarized as follows:
(i)
Detection: square-shaped (box) filters are employed as an approximation of Gaussian filtering. If an integral image is employed, smoothing an image with a box filter is considerably quicker:
$$S(x,y)=\sum_{0<i<x}\;\sum_{0<j<y} I(i,j)$$
(1)
In Eq. (1), S(x, y) denotes the integral image, I is the input image, i and j are indices over the two-dimensional image, and x, y are two-dimensional coordinates. To identify points of interest, the Hessian matrix (blob detector) is employed. The determinant of the Hessian matrix is used as an estimate of the local variation around a point, and points are selected where this determinant is maximal. SURF also employs the determinant of the Hessian for scale selection. Given a point p = (x, y) in I, the Hessian matrix H(p, σ) at point p and scale σ is defined by Eq. (2):
$$H(p,\sigma )=\begin{pmatrix}L_{xx}(p,\sigma )& L_{xy}(p,\sigma )\\ L_{yx}(p,\sigma )& L_{yy}(p,\sigma )\end{pmatrix}$$
(2)
where $L_{xx}(p,\sigma)$ is the convolution of the second-order Gaussian derivative with the image I at point p, and analogously for $L_{xy}$, $L_{yx}$ and $L_{yy}$.
 
(ii)
Localization of interest points: to build the scale space, an image pyramid is used. Image samples are repeatedly smoothed with a Gaussian filter. Filter size up-scaling (9 × 9 → 15 × 15 → 21 × 21 → 27 × 27, etc.) is used to evaluate the scale space rather than iteratively shrinking the image size. Thus, for every additional octave, the filter size increase doubles, while also allowing a doubling of the sampling intervals for the extraction of interest points. Non-maximum suppression in a 3 × 3 × 3 neighborhood is used to pinpoint interest points in the image and over scales. The samples are then sub-sampled to obtain the next level of the pyramid. Several pyramid levels with different mask sizes are evaluated by
$$\sigma_{\text{approx}} = \text{current filter size} \times \frac{\text{base filter scale}}{\text{base filter size}}.$$
 
Feature extraction is performed after identifying the points of interest. The descriptor's goal is to supply a robust description of an image attribute by capturing the intensity distribution of pixels within the vicinity of the point of interest. Detection is followed by matching, i.e., by comparing the descriptors acquired from different image samples [41]. The main motivation for choosing the SURF feature extractor is that SURF performs better than other algorithms on the complete dataset. SURF's performance does not drop even in conditions such as low-light photographs and image samples where only partial faces are visible. With the SURF extractor, matches are computed faster, which improves the quality of the system it is used in. LBPH, for instance, runs well on photographs where the face is posed frontally, but its performance steadily decreases as the dataset becomes more complex and the algorithm is unable to fully extract the features. Increased recognition accuracy is obtained with LBPH features, but at the cost of increased time delay [30]. Due to the above-mentioned factors, SURF is chosen over LBPH to address a challenging research problem, i.e., face recognition after plastic surgery.
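To make the detect-describe-match pipeline concrete, here is a short sketch using OpenCV's SURF implementation (SURF is available in opencv-contrib-python builds with the nonfree module enabled; the file names and the Hessian threshold of 400 are placeholder assumptions). It detects interest points via the Hessian determinant, computes descriptors, and matches them with Lowe's ratio test:

```python
import cv2

img1 = cv2.imread("pre_surgery.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder paths
img2 = cv2.imread("post_surgery.jpg", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)      # Hessian blob detector
kp1, des1 = surf.detectAndCompute(img1, None)                 # interest points + 64-D descriptors
kp2, des2 = surf.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]  # Lowe's ratio test
print(f"{len(good)} matched facial feature points")
```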

Multi-KNN algorithm

Figure 3 demonstrates a conventional face identification scheme using SURF and KNN. The KNN predictor returns the class of minimum distance along with the related locations (pixel values). KNN predicts the most apt pixels neighboring the points of interest determined by the dimensionality reduction and feature extraction process. The KNN predictor returns the extracted facial feature vectors with their corresponding positions in a facial image. It derives descriptors from pixels adjoining a point of interest; each feature vector corresponds to a single point that specifies the center location of a region. KNN uses the Euclidean distance metric given as follows:
$${\text{Euclidean}}(A,B)=\sqrt{\sum_{i=1}^{n}{(A_{i}-B_{i})}^{2}}$$
(3)
In Eq. (3), A and B are the test/train geometrical attribute vectors, n is the vector size and $A_i$ refers to the ith attribute value of vector A. The KNN arrangement takes a pre-operative frontal face image as input. Facial landmarks are then detected and collected as a distance vector, which is fed into a KNN predictor trained on a set of feature vectors. (A minimal sketch of this stage follows.)
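A minimal sketch of this KNN stage, written with scikit-learn (an assumption; the paper does not name a library) and random placeholder vectors standing in for SURF descriptors:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Random placeholder 64-D vectors standing in for SURF feature descriptors.
rng = np.random.default_rng(0)
train_vectors = rng.normal(size=(100, 64))
train_labels = rng.integers(0, 10, size=100)     # subject identities

knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")  # Eq. (3) metric
knn.fit(train_vectors, train_labels)
print(knn.predict(rng.normal(size=(1, 64))))     # class of the nearest neighbors
```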
Based on the SURF representation, we computed the distance between each training sample and all other training image samples utilizing five distance metrics (Mahalanobis Eq. (7), Manhattan Eq. (6), Euclidean Eq. (3), Correlation Eq. (4), Canberra Eq. (5)) [1]. These distance metrics are given as follows:
$${\text{Correlation}}(a,b)=Cov(a,b)/{\sigma }_{a}{\sigma }_{b},$$
(4)
where Cov is the covariance and $\sigma_a$ and $\sigma_b$ are the standard deviations.
$${\text{Canberra}}\left(a,b\right)=\sum _{i=1}^{N}\frac{\left|ai-bi\right|}{\left|ai\right|+\left|bi\right|}.$$
(5)
The Canberra distance metric is a measure of the distance between two points in a vector space:
$$ {\text{Manhattan}} = \sum\limits_{i = 1}^{N} {\left| {a_{i} - b_{i} } \right|} . $$
(6)
The Manhattan metric is another way to measure the distance between two vectors:
$${\text{Mahalanobis}}\left(a,b\right)=\sqrt{(a-b)^{T}{S}^{-1}(a-b)}.$$
(7)
The Mahalanobis metric provides higher recognition than a plain minimum-distance rule by relying on the covariance matrix S between the two vectors (a, b). The five distance metrics were integrated using the square root of the summation of squares (RSS), as shown in Eq. (8), to produce a strong characteristic T-Dataset in a condensed form. Each distance metric has an advantage over the others, so a strength factor α is added to improve the recognition accuracy. The Manhattan and Mahalanobis distance metrics have an advantage over the other metrics; therefore, the strength factors are assigned as Manhattan and Mahalanobis = 0.3, Canberra = 0.2, Correlation and Euclidean = 0.1 (a sketch of this fusion follows Eq. (8)):
$$RS{S}_{\alpha }=\sqrt{\sum_{i=1}^{5}{\alpha }_{i}DI{S}_{i}^{2}},$$
(8)
where $DIS_i$ is one of the distance metrics employed.
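The fusion of Eq. (8) reduces to a few lines. The sketch below uses SciPy's distance routines under the strength factors quoted above; note that SciPy's correlation distance is 1 minus the Pearson correlation, a slight variation on Eq. (4):

```python
import numpy as np
from scipy.spatial import distance

ALPHA = {"manhattan": 0.3, "mahalanobis": 0.3, "canberra": 0.2,
         "correlation": 0.1, "euclidean": 0.1}   # strength factors from the text

def rss_fused_distance(a, b, inv_cov):
    """Eq. (8): square root of the alpha-weighted sum of squared metrics.
    inv_cov is S^-1, the inverse covariance matrix used by Mahalanobis."""
    d = {"euclidean":   distance.euclidean(a, b),             # Eq. (3)
         "manhattan":   distance.cityblock(a, b),             # Eq. (6)
         "canberra":    distance.canberra(a, b),              # Eq. (5)
         "correlation": distance.correlation(a, b),           # 1 - Pearson correlation
         "mahalanobis": distance.mahalanobis(a, b, inv_cov)}  # Eq. (7)
    return np.sqrt(sum(ALPHA[k] * d[k] ** 2 for k in d))
```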

Back-Propagating Neural Networks

A technique called back-propagation, also known as backward propagation of errors, propagates errors backward from the output nodes toward the input nodes. It is a crucial mathematical tool for increasing the precision of predictions. Back-propagation is a very prevalent neural network learning algorithm, as it is conceptually simple and computationally efficient. BPNN classification is extensively used for training neural networks, since BPNN is simple and proficient at computing gradient descent [28].
Figure 4 portrays a BPNN, which is a multilayer neural network comprising an input layer, one or more hidden layers and an output layer. A neural network learns challenging tasks and performs efficiently owing to its hidden layers. Raising the number of hidden layers may or may not enhance accuracy, depending on how complicated the problem is: many hidden layers are the best approach if the goal is to improve accuracy, but if the application's main concern is time complexity, then many hidden layers would not be effective. Back-propagation takes place in this network: the error is identified at the output layer by comparing the target output with the actual output, and it is propagated backward toward the input layer. Training starts by setting the number of layers, neurons and iterations, the threshold value, the input matrix and the expected outcomes from the earlier stage [47]; the weights are also randomly initialized. Figure 5 explains the feeding of the BPNN with the different distance metrics obtained from extracted local facial features for two images of the same person. The sigmoid function, a special case of the logistic function, is used for training; sigmoid functions have a domain of all real numbers, return values in the range (0, 1), and are used to activate the neurons. The BPNN finally generates a set of final matched and optimized facial vectors. Training begins once the BPNN parameters are initialized; after this, the overall system recognition rate (RR) is calculated. (A minimal back-propagation sketch follows.)
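A minimal one-hidden-layer back-propagation network with sigmoid activations, sketched in NumPy; the layer sizes are placeholders, while the learning rate of 0.1 and the 8000-iteration budget follow the training setup reported later in the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BPNN:
    """Minimal one-hidden-layer back-propagation network (a sketch)."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))   # random initial weights
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.lr = lr                                       # training rate 0.1, as in the paper

    def train(self, X, Y, epochs=8000):                    # up to 8000 iterations
        for _ in range(epochs):
            h = sigmoid(X @ self.W1)                       # forward pass: hidden layer
            o = sigmoid(h @ self.W2)                       # forward pass: output layer
            err_o = (Y - o) * o * (1.0 - o)                # output error x sigmoid derivative
            err_h = (err_o @ self.W2.T) * h * (1.0 - h)    # error propagated backward
            self.W2 += self.lr * h.T @ err_o               # gradient-descent weight updates
            self.W1 += self.lr * X.T @ err_h

    def predict(self, X):
        return sigmoid(sigmoid(X @ self.W1) @ self.W2)
```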

Results and discussion

The proposed methodology is validated on three datasets, namely AT&T, YALE and the Plastic surgery facial dataset. The proposed scheme was applied with three different splits: 70% training/30% testing, 90% training/10% testing and 50% training/50% testing samples.

Evaluation metrics

Factors employed for the evaluation of identification schemes are computation time (CT), recognition rate (RR) and expected error rate (EER) [36, 39]. CT is the overall time in seconds, including facial characteristic extraction and matching. Recognition rate (RR) is the proportional resemblance match between pre/post-surgically altered samples or non-surgical samples and is inversely proportional to EER. The false rejection rate (FRR) is the probability of rejecting a genuine person as an impostor. The false acceptance rate (FAR) is the probability of accepting a false match between pre/post-surgical or non-surgical images. The larger the distance between matched feature points, the higher the EER; equivalently, the lower the EER, the higher the recognition rate (RR).
EER is defined as the point where FRR equals FAR:
RR = (total number of true matches/M) × 100, where M is the number of facial samples,
FAR (false acceptance rate) = total false acceptances/total recognition attempts (for testing data),
FRR (false rejection rate) = total false rejections/total recognition attempts (for testing data).
A sketch of these metrics follows.
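These definitions translate directly into code. Below is a minimal sketch under the assumption that match scores are distances (smaller means a better match), with an EER found by sweeping thresholds until FAR and FRR cross:

```python
import numpy as np

def far_frr_rr(genuine, impostor, threshold):
    """FAR, FRR and RR at a single decision threshold, assuming lower
    match scores (e.g., the fused distance of Eq. (8)) mean better matches."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = float(np.mean(impostor < threshold))        # false matches accepted
    frr = float(np.mean(genuine >= threshold))        # true matches rejected
    rr = 100.0 * float(np.mean(genuine < threshold))  # true equivalents / M x 100
    return far, frr, rr

def eer(genuine, impostor, grid):
    """Approximate EER: the threshold where FAR and FRR cross."""
    gaps = [abs(far_frr_rr(genuine, impostor, t)[0]
                - far_frr_rr(genuine, impostor, t)[1]) for t in grid]
    return grid[int(np.argmin(gaps))]
```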

Results obtained with non-surgical databases

AT&T/ORL database

ORL stands for Olivetti Research Laboratory. ‘The ORL Database of Faces’ includes facial image samples taken between April 1992 and April 1994 at the Cambridge University Computer Laboratory. There are ten diverse image samples of 40 distinct subjects. The image samples were taken at diverse moments, with varying illumination, facial gestures (such as open/closed eyes, smiling/not smiling) and facial aspects (such as eyeglasses/no eyeglasses). The facial samples were captured in an upright, frontal position (with tolerance for some side movement). The dimension of each image was 92 × 112 pixels, with 256 grayscale levels per pixel. Figure 6 shows ten different images of the same person with different PIE (pose, illumination and expression) effects from the AT&T database. Figure 7 shows a sample image illustrating the matched extracted facial features between two images of the same person [9].

YALE dataset

The YALE dataset contains 165 grayscale images of 15 subjects, with 11 images per subject, one per facial configuration: center-light, with glasses, happy, sad, without glasses, normal, sleepy, surprised and wink. Samples are in grayscale, and the images are reshaped to 92 × 112 pixels. 75 image samples are used to train the NN and the remaining samples are used to test the system.
Figure 8 shows diverse image samples of the same person with altered pose, illumination and expression (PIE) effects from the YALE database. Figure 9 shows the processed image depicting the corresponding extracted facial features between two images of the same person. For all datasets, after the dimensionality reduction and feature extraction process by SURF, Multi-KNN values are predicted by the diverse distance metrics (which find the most appropriate feature vectors corresponding to the extracted local features) and are then fed to the BPNN network, which optimizes the performance parameters via the back-propagation technique.
Table 2 gives an overview of the recognition rate and expected error rate for the two facial databases. With both databases, the best recognition rate (RR) among the individual metrics is obtained with the Manhattan and Mahalanobis metrics and the minimum with the Euclidean metric. Evaluation parameters are best optimized when maximum values are obtained for RR and minimum values for EER. The Multi-KNN approach gives the best results in terms of recognition rate and expected error rate. Table 3 shows that the proposed methodology employing SURF, Multi-KNN and BPNN yields the utmost recognition precision in contrast to the other facial recognition techniques existing in the literature for non-surgical facial databases.
Table 2
Computation of recognition rate (RR) and expected error rate (EER) for non-surgical facial databases
| Distance metric | RR (%) YALE | EER (%) YALE | RR (%) AT&T | EER (%) AT&T |
|---|---|---|---|---|
| Euclidean | 91 | 1.09 | 94 | 1.06 |
| Canberra | 92 | 1.08 | 96 | 1.04 |
| Correlation | 91 | 1.09 | 95 | 1.05 |
| Manhattan | 97 | 1.03 | 97 | 1.03 |
| Mahalanobis | 96 | 1.04 | 97 | 1.03 |
| Proposed | 97 | 1.03 | 98 | 1.02 |
Table 3
RR (%) and EER (%) analysis in context to existing techniques in literature for YALE and AT&T (non-surgical) databases
| Algorithm | RR (%) YALE | EER (%) YALE | RR (%) AT&T | EER (%) AT&T |
|---|---|---|---|---|
| PCA + BPNN | 85.49 | 1.17 | 88 | 1.14 |
| LDA | 86.75 | 1.14 | 90 | 1.11 |
| PCA, LDA, DCT, ICA | 84, 86, 89, 85 | 1.18, 1.15, 1.10, 1.16 | 88, 89, 92, 88 | 1.16, 1.12, 1.09, 1.14 |
| PCA-DCT Corr-PIFS | 83.58 | 1.16 | 86.8 | 1.15 |
| WT + PCA | 89 | 1.06 | 92 | 1.05 |
| CASNN, FFNN | 86, 87 | 1.18, 1.65 | 87, 80 | 1.15, 1.25 |
| SURF, Multi-KNN, BPNN (proposed scheme) | 97.53 | 1.74 | 98 | 1.02 |

Results obtained with plastic surgery facial database

This dataset contains 1800 pre/post-surgery images of 900 individuals. The database encompasses 519 image pairs for local surgeries and 381 image pairs for global plastic therapies. The dataset contains 74 samples for otoplasty (ear surgery), 60 for brow lift, 192 for rhinoplasty (nose surgery), 105 for blepharoplasty (eyelid surgery), 32 for dermabrasion (skin texture), 56 for other local procedures, 73 for skin peeling and 308 for face-lift/rhytidectomy surgery [27].

Rhytidectomy plastic procedure

Rhytidectomy is a very popular cosmetic surgery procedure used for face-lifting and beautification. It is a global plastic procedure which results in prominent facial feature changes. It involves the elimination of surplus facial skin, which gives a younger-looking appearance through the tightening of facial muscles and tissues. This plastic procedure is also used to reduce visible signs of aging, i.e., to remove wrinkles. Figure 10a and b shows before/after surgery images of face-lift (rhytidectomy) therapy in females. Figure 10c demonstrates the extracted (matched) facial vectors (local facial attributes such as nose, eyes, and mouth) between pre/post-surgery images.

Blepharoplasty plastic procedure

Figure 11a and b shows pre/post-surgery image samples for blepharoplasty therapy. Eyelid surgery is a surgical therapy to enhance the formation of the eyelids. It is a local plastic procedure in which a single facial trait is modified. Surgery can be done on both upper and lower eyelids. This plastic surgery is used to refresh the vicinity surrounding the eyes. Loose, flabby skin that disrupts the usual contour of the upper eyelid is a common reason for this cosmetic surgery. Figure 11c highlights the matched facial vectors between pre- and post-surgery image samples.

Brow lift surgery

A brow lift/forehead lift reduces wrinkle lines that develop across the forehead, as well as those that appear on the bridge of the nose, between the eyes. A brow-lift procedure involves longer incisions than an endoscopic brow lift and is performed in combination with eyelid surgery. The cuts are made as for an upper-eyelid therapy, and the region between the eyebrows is elevated to even out frown lines. It is a widespread plastic procedure undergone by both male and female patients to enhance their external appearance. Figure 12a and b portrays before/after surgery images of brow-lift therapy, whereas Fig. 12c demonstrates the matched facial vectors/attributes between pre/post-surgery images. This is a local plastic/cosmetic procedure, which means that facial features are modified in a very restrained manner.

Skin peeling surgery

Skin peeling is a global plastic procedure which contributes to a remarkable change in the before/after appearance of an individual. It is a form of facial peel. A chemical peel improves and evens the consistency of the skin and is an effective treatment for facial marks, wrinkles, and uneven skin pigmentation. Figure 13a and b demonstrates pre/post-surgery samples for skin peeling therapy. Figure 13c demonstrates the matched facial vectors between before/after surgery samples of an individual.

Discussion w.r.t computational complexity

Higher recognition rates are obtained for local plastic procedures as compared to global ones: for local procedures, slight modifications are performed on facial features, whereas for global procedures the overall facial geometry is altered. Table 4 reports the RR/EER/CT metrics for state-of-the-art identification schemes alongside the proposed scheme. It is clear from Table 4 that the evaluation metrics (RR, EER, CT) are best optimized for the proposed scheme relative to existing recognition schemes. Table 4 compares RR, EER and CT values for diverse local and global cosmetic procedures against existing recognition algorithms. RR and EER are dependent metrics (linked to each other), whereas CT is an independent evaluation metric [36, 39]. Moderate computation time was reported for the proposed scheme compared to other recognition schemes. Evaluation metrics are best optimized with minimum values of EER/CT and higher values of RR. Running time in image processing systems quantifies the amount of time taken to run as a function of the size of the input image. Time complexity is expressed using Big O notation, where n is the input dimension.
Table 4
RR/EER/CT examination (surgical database)
| Recognition technique | RR (%) | EER (%) | CT (s) |
|---|---|---|---|
| GNN [44] | 53.10 | 12.53 | 3.82 |
| SURF [44] | 49.60 | 15.00 | 1.95 |
| Circular-LBP [44] | 47.30 | 16.66 | 2.60 |
| LFA [44] | 37.98 | 25.06 | 3.87 |
| FDA/LDA [44] | 32.58 | 26.42 | 9.37 |
| PCA [44] | 29.70 | 36.42 | 12.17 |
| Correlation scheme [22] | 66.86 | 12.50 | 13.77 |
| Parts/sparse technique [2] | 76.67 | 7.50 | 7.36 |
| Granular approach [5] | 77.50 | 8.33 | 13.63 |
| Gabor/LBP + PCA + Euclidean metric [19] | 74.40 | 10.67 | 10.74 |
| Geometrical FRAPS [40] | 77.30 | 1.30 | 11.85 |
| Evolutionary granular process + SIFT and EUCLBP [48] | 87.30 | 7.83 | 4.74 |
| LBPH + KNN + BPNN [1] | 83.74 | 4.53 | 8.77 |
| Proposed scheme | 90.29 | 1.26 | 5.27 |
Big O notation characterizes systems in terms of overall scalability and efficiency. Running time can be analyzed w.r.t. the scheme used for recognition, the feature type extracted (global/local) and the nature of the surgery undertaken. Singh et al. [44] described six face detection schemes for recognition after medical alterations. Linear Discriminant Analysis (LDA), one of them, operates by reducing the within-class covariance, thereby maximizing the between-class covariance. Its computation encompasses dense matrices and eigen decomposition, which can be demanding both in time and memory. LDA has O(mnt + t³) complexity, m being the number of samples, n the number of attributes and t = min(m, n). PCA's time complexity is O(p²n + p³), where p is the number of attributes and n is the number of data points; the execution time of PCA varies linearly with the size of the dataset. LFA representations are low-dimensional and sparsely distributed; here, object description is given in terms of statistically derived local attributes.
SURF has a theoretical time complexity of O(mn + k), where k is the number of extrema. Local binary patterns are computationally modest, but if the dimension of the attributes surges exponentially with the number of neighbors, the computation complexity rises accordingly. Local facial attributes are likely to improve the match between before/after therapy samples but have an adverse influence on running time. FDA, LFA, SURF, PCA and CLBP extract local attributes, while GNN extracts both (local/global); GNN gave the highest recognition at the expense of amplified computation time. Bhatt et al. [5], Marsico et al. [22], Said et al. [40] and Aggarwal et al. [2] presented approaches where the face is fragmented into several regions of interest to isolate noticeable facial traits. The techniques of Marsico et al. [22], Said et al. [40] and Aggarwal et al. [2] extracted local traits, while Bhatt et al. [5] proposed a granular arrangement accompanied by a genetic system which extracted both kinds of traits (local/global). Marsico et al. [22] estimated recognition after medical surgery by Face Recognition against Expression Variations and Occlusions, which separated the facial area into applicable sections and then coded them using Partitioned Iterated Function Systems (PIFS). Face Analysis for Commercial Entities was used to calculate the image association index, and the Split Face Architecture was utilized to select the feature abstraction method. Said et al. [40] proposed a system which functions in a real-time scenario with low hardware requirements and evaluation time. Local traits result in dense matrices (most elements non-zero), whereas global ones result in sparse matrices (most elements zero). Sparse matrices encompass numerous zero elements and can save memory and accelerate processing, thereby pulling down the computation time.
Aggarwal et al. [2] proposed a region-centered sparse representation. It proceeds by localizing primary facial traits, followed by the formation of a training matrix for each facial part; finally, sparse recognition is performed for each part. Sparse spectral structures (such as Laplacian eigenmaps and LLE) execute eigen-analysis of an n × n matrix. This matrix is sparse, which shrinks the time for the eigen-analysis: eigen-analysis of a sparse matrix takes O(pn²) time, where p is the proportion of non-zero components to total components, the memory cost being O(pn²) as well [11]. Running time can be saved by designing a data structure that traverses only non-zero components. The proposed scheme demonstrates a human identification system using the SURF extractor, Multi-KNN and BPNN. The theoretical complexity of SURF was discussed above. The time complexity of KNN is O(n × m), where n is the number of training samples and m is the number of dimensions in the training set; since n ≫ m, the complexity of the nearest-neighbor search is O(n). The learning phase (back-propagation) is slower than the inference phase (forward propagation), as gradient descent has to be repeated many times; one way of making such algorithms run faster is parallel execution. Tables 5 and 6 compare the identification accuracy for a variety of local and global plastic procedures across the existing techniques in the literature and the proposed scheme. Table 6 shows that the proposed scheme gives acceptable recognition results in comparison to other recognition methods in the literature.
Table 5
Analysis of recognition rate (%) w.r.t handcrafted (traditional) recognition methods
| Surgery | PCA | FDA | LFA | CLBP | SURF | GNN |
|---|---|---|---|---|---|---|
| Dermabrasion | 22.7 | 24.6 | 25.7 | 41.6 | 41.3 | 43.2 |
| Brow lift | 31.0 | 33.0 | 39.8 | 48.6 | 50.0 | 56.6 |
| Otoplasty | 58.9 | 59.2 | 61.0 | 68.3 | 65.1 | 69.9 |
| Blepharoplasty | 30.8 | 36.2 | 40.4 | 51.6 | 52.6 | 60.8 |
| Rhinoplasty | 25.6 | 25.3 | 35.6 | 44.3 | 50.2 | 53.7 |
| Skin peeling | 27.7 | 32.7 | 40.5 | 53.2 | 50.0 | 53.3 |
| Rhytidectomy | 21.1 | 21.2 | 21.8 | 40.4 | 39.0 | 41.5 |
Table 6
RR (%) analysis in relation to surgery type (local/global) and identification scheme
| Surgery | Correlation | Parts/sparse | Granular | Gabor/LBP + PCA + Euclidean metric | Geometrical FRAPS | Evolutionary granular + SIFT and EUCLBP | LBPH + KNN + BPNN | Proposed |
|---|---|---|---|---|---|---|---|---|
| Dermabrasion | 60.96 | 57.37 | 58.2 | 57.37 | 84.6 | 80.6 | 82.8 | 84.8 |
| Brow lift | 60.96 | 70.77 | 71.6 | 70.77 | 83.3 | 84.3 | 81.0 | 84.0 |
| Otoplasty | 74.26 | 84.07 | 84.9 | 84.07 | 85.4 | 79.4 | 86.2 | 89.2 |
| Blepharoplasty | 65.16 | 74.97 | 75.8 | 74.97 | 72.7 | 69.4 | 88.7 | 90.7 |
| Rhinoplasty | 58.06 | 67.87 | 68.7 | 67.87 | 70.8 | 81.3 | 80.4 | 79.4 |
| Skin peeling | 57.66 | 67.47 | 68.3 | 67.47 | 87.2 | 77.2 | 83.0 | 87.0 |
| Rhytidectomy | 45.86 | 55.67 | 56.5 | 55.67 | 65.0 | 73.5 | 68.0 | 76.0 |
The length of time an algorithm takes to complete its instructions is its time complexity; when there are multiple approaches to a problem, it is preferable to choose the most effective algorithm. The term “space complexity” refers to how much memory an algorithm uses. Time and space complexity typically trade off against each other: a system can usually be optimized for only one metric at a time, and an algorithm optimized for time is unlikely to also be efficient in space. It took 25 h to train the BPNN with a training rate of 0.1 and up to 8000 iterations. A personal laptop with an Intel Core i7 CPU running at 3 GHz and 8 GB of built-in RAM was used for the research.
Figure 14 shows a graphical relationship between error and computational cost for the proposed scheme. The shortcoming of the conventional scheme is the computing time, because the test sample must be matched against all training samples (O(t(n)) complexity, t(n) being the computation time). The chief contribution is the attainment of a robust T-Dataset which helps the BPNN to converge faster (i.e., with less computational complexity), with improved accuracy (RR) and low computation error (EER). We assembled a robust T-Dataset relying on the correlation between the training samples, not on the intensity of the image samples. The aim of any facial recognition system is to achieve low training and test errors, but this is not possible in real scenarios (for instance, for identification of surgical and non-surgical facial samples), so there is a trade-off between error and complexity. Over-simplified models give elevated train/test errors and tend to underfit; high errors can be reduced using more complex functions or by accumulating more features, so accuracy (RR) increases at the expense of increased computational complexity. At a certain point, the model becomes too complex and tends to overfit the training data, i.e., it attains low error on the training data but high error on unseen test data. To overcome this, we used a resampling technique (cross-validation) [34] to improve the performance on unseen data (a minimal sketch follows). In a nutshell, with the proposed scheme moderate computational complexity is attained for both non-surgical and surgical facial samples.
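As an illustration of that resampling step, here is a k-fold cross-validation sketch with scikit-learn; the KNN classifier and the random placeholder data are stand-ins for the actual pipeline, not the paper's implementation:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Random placeholder feature vectors and subject labels (stand-ins for the
# fused SURF/Multi-KNN features of the actual pipeline).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 20, size=200)

# 5-fold cross-validation: each sample is held out once, curbing overfitting.
scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```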

Conclusion

Using the SURF feature extractor, Multi-KNN, and BPNN approaches, we have proposed a system for human face identification. A training database with distinctive arrangements focusing on the correlation among the training samples has been obtained via the SURF extractor and Multi-KNN. This was made possible by combining the distance metrics, since each distance metric has an advantage over the others. We increased the recognition rate while lowering computational complexity and, overall, outperformed current recognition methods. The proposed scheme indicates that, using a correlated training database between image samples, higher recognition can be accomplished with conventional feature extraction and dimensionality reduction techniques. The proposed methodology, when tested with the plastic surgery facial database, gives inferior recognition results compared to non-surgical databases. Local plastic surgeries give satisfactory identification rates, but recognition outcomes for global plastic surgeries fall short of expectations. The significance of machine learning algorithms should not be underestimated, since their classifiers learn from prior experience. Our findings show that facial identification, particularly after surgical modifications, is difficult and necessitates ongoing study. The proposed scheme offers moderate computation time relative to other recognition techniques in the literature. We also plan to test deep neural models on the medically altered facial dataset [32, 33]. Future research directions would be to extract more specific local/global facial traits for a better-quality match between different surgical images of the same person; employing an additional distance metric could further improve the matching, the recognition level and the computation complexity. In addition, face detection with the proposed method is performed on still images; it could be extended to 3D data, i.e., videos, where detection becomes significantly more complex. The proposed methodology surpassed present-day state-of-the-art schemes. Identification after medical changes is a tedious task that necessitates constant investigative effort, so ongoing research is needed in the field of facial recognition following medical changes. Given its relevance to biometric authentication, human identification, and the effect of contemporary medical advancements on face recognition, this research would greatly benefit society.

Declarations

Conflict of interest

The authors declare that they have no conflict of interest to report regarding the present study.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
4. Bhatt HS, Bharadwaj S, Singh R, Vatsa M, Noore A (2011) Evolutionary granular approach for recognizing faces altered due to plastic surgery. In: Proceedings of the International Conference on Automatic Face and Gesture Recognition and Workshops, Santa Barbara, pp 720–725. https://doi.org/10.1109/FG.2011.5771337
7. Dhanaseely AJ, Himavathi S, Srinivasan E (2012) Performance comparison of cascade and feed forward neural network for face recognition system. In: International Conference on Software Engineering and Mobile Application Modelling and Development, Chennai, pp 1–6. https://doi.org/10.1049/ic.2012.0154
8. Ebadi M, Rashidy Kanan H, Kalantari M (2020) Face recognition using patch manifold learning across plastic surgery from a single training exemplar per enrolled person. SIViP 14(6):1071–1077
12. Ibrahim RM, Abou-Chadi FEZ, Samra AS (2013) Plastic surgery face recognition: a comparative study of performance. IJCSI Int J Comput Sci Issues 10(2)
13. Ibsen M, Rathgeb C, Fink T, Drozdowski P, Busch C (2021) Impact of facial tattoos and paintings on face recognition systems. IET Biometrics 10(6):706–719
14. Jarvis T, Thornburg D, Rebecca A, Teven C (2020) Artificial intelligence in plastic surgery. Plast Reconstr Surg Global Open 8(10):e3200
16. Karuppusamy P, Ponmuthuramalingam P (2013) Recognizing pre and post surgery faces using multi objective particle swarm optimization. Int J Adv Res Comput Sci Softw Eng 3(10):316–320
17. Koch W, Rettig EM, Sun DQ (2017) Head and neck essentials in global surgery. In: Global surgery. Springer International Publishing, Cham, pp 443–474
18. Lahasan BM, Lutfi SL, San-Segundo-Hernández R (2017) A survey on techniques to handle face recognition challenges: occlusion, single sample per subject and expression. Artif Intell Rev 1–31
21.
23. Mehta H (2009) On innovations in plastic surgery. J Plast Reconstr Aesthet Surg 62(4):437–441
24. Mun M, Deorankar A (2014) Implementation of plastic surgery face recognition using multimodal biometric features. Int J Comput Sci Inf Technol 5(3):3711–3715
26. Oloyede M, Hancke G, Myburgh H (2020) A review on face recognition systems: recent approaches and challenges. Multimedia Tools Appl 79(37–38):27891–27922
30. Raafat M, Younis M (2017) The limitation of pre-processing techniques to enhance the face recognition system based on LBP. Iraqi J Sci 58:355–363
32. Rathgeb C, Dantcheva A, Busch C (2019) Impact and detection of facial beautification in face recognition: an overview. IEEE Access 7:152667–152678
33. Rathgeb C, Dogan D, Stockhardt F, De Marsico M, Busch C (2020) Plastic surgery: an obstacle for deep face recognition? In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp 3510–3517
41. Sikdar R (2021) Attendance system based on face recognition using Python (slideshare.net)
42. Singh R, Agarwal A, Singh M, Nagpal S, Vatsa M (2020) On the robustness of face recognition algorithms against attacks and bias. Proc AAAI Conf Artif Intell 34(09):13583–13589
43. Singh R, Vatsa M, Noore A (2009) Effect of plastic surgery on face recognition: a preliminary study. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp 72–77
45. Sun Y, Wang X, Tang X (2013) Deep convolutional network cascade for facial point detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 3476–3483
46. Suri S, Sankaran M, Vatsa M, Singh R (2018) On matching faces with alterations due to plastic surgery and disguise. In: IEEE Ninth International Conference on Biometrics: Theory, Applications and Systems (BTAS), pp 1–8
48. Verghis TJ, Uma S, Bhuvaneshwari P (2014) A multi-objective evolutionary approach to face matching across plastic surgery. COMPUSOFT Int J Adv Comput Technol 3(2):529–532
49. Zahradnikova B, Duchovicova S, Schreiber P (2018) Facial composite systems: review. Artif Intell Rev 49(1):131–152
Metadata
Title
Human face identification after plastic surgery using SURF, Multi-KNN and BPNN techniques
Authors
Tanupreet Sabharwal
Rashmi Gupta
Publication date
13.03.2024
Publisher
Springer International Publishing
Published in
Complex & Intelligent Systems
Print ISSN: 2199-4536
Electronic ISSN: 2198-6053
DOI
https://doi.org/10.1007/s40747-024-01358-7