1 Introduction
- Do not rely on the existence of a trusted third-party server;
- The attacker cannot associate the acquired facial features with other sensitive data;
- The conversion of facial biometrics should be an irreversible, one-way transformation;
- The computational cost should be low enough for resource-constrained devices and should scale to large-volume data processing.
- We propose a general privacy protection framework for an EFR system, in which the terminal device, the edge network center, and the remote cloud server form a three-level FR architecture. The framework focuses on, and realizes, privacy protection of face data over its entire life cycle.
- We design an LDP algorithm that adaptively allocates the privacy budget according to each principal component's share of the feature information. The edge executes this algorithm after dimensionality reduction of face images, which, on the one hand, protects the face feature data transmitted between the edge and the cloud and, on the other hand, strengthens the privacy of the data stored and the model published on the cloud side.
- We add an authentication mechanism that verifies the legitimacy of terminal devices connecting to the edge network center, providing higher security and ensuring the reliability and quality of the data source.
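The adaptive budget allocation described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function name, the proportional allocation rule, and the unit-sensitivity assumption are all illustrative. Each principal component receives a slice of the total budget \(\varepsilon\) proportional to its explained-variance ratio, and Laplace noise is scaled accordingly.

```python
import numpy as np

def adaptive_ldp_perturb(x, components, explained_var_ratio, epsilon, sensitivity=1.0):
    """Project a feature vector onto principal components, then add Laplace
    noise whose per-component budget is proportional to that component's
    share of the explained variance (illustrative allocation rule)."""
    z = components @ x                                  # dimensionality reduction
    shares = explained_var_ratio / explained_var_ratio.sum()
    budgets = epsilon * shares                          # budgets sum to epsilon
    noise = np.random.laplace(0.0, sensitivity / budgets, size=z.shape)
    return z + noise

# Toy usage: 4-dim input projected onto 3 components.
np.random.seed(0)
x = np.ones(4)
components = np.eye(3, 4)
evr = np.array([0.6, 0.3, 0.1])
perturbed = adaptive_ldp_perturb(x, components, evr, epsilon=1.0)
```

Because the per-component budgets sum to \(\varepsilon\), sequential composition gives the projected vector \(\varepsilon\)-LDP protection overall; components carrying more information receive a larger budget and hence less noise.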
2 Related work
2.1 Encryption for FR
2.2 De-ID for FR
2.3 Perturbation for FR
3 Preliminaries
3.1 Local differential privacy
3.2 Principal component analysis (PCA)
4 System framework
4.1 System model
- The first layer is the local terminals, which collect face images and upload them to the edge.
- The second layer is the edge, which receives the face images uploaded by the local terminals and, after preprocessing, uploads them to the server.
- The third layer is the cloud server, which receives the face image information submitted by the edge, trains the model or uses the trained model for recognition, and feeds the recognition result back to the edge.
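The three-layer data flow above can be summarized in a short sketch. All function names and the preprocessing step are hypothetical stand-ins; real deployments would perform eigenface projection and perturbation at the edge and model inference on the cloud.

```python
def cloud_recognize(features):
    # Layer 3: the cloud trains or queries the model and returns a result.
    # A constant label stands in for actual model inference.
    return {"label": "unknown", "n_features": len(features)}

def edge_preprocess(image):
    # Layer 2: the edge preprocesses the image (here, simple normalization
    # stands in for dimensionality reduction and perturbation) and forwards
    # the features to the cloud.
    features = [p / 255.0 for p in image]
    return cloud_recognize(features)

def terminal_capture(image):
    # Layer 1: the local terminal collects a face image and uploads it
    # to the edge; the recognition result is fed back along the same path.
    return edge_preprocess(image)

result = terminal_capture([0, 128, 255])
```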
4.2 Threat model
4.3 Security goals
5 Our proposed framework
5.1 Encrypted communication and authentication
5.2 Image processing and privacy protection
5.2.1 Constructing eigenface
5.2.2 Eigenface perturbation algorithm
6 Theoretical analysis
6.1 PEPI provides \(\varepsilon \)-LDP protection
6.2 System security analysis
7 Experiments
7.1 Data set
7.1.1 Feature extraction
7.1.2 Eigenface perturbation
7.2 Model training and recognition
7.2.1 Performance variance with respect to \(\varepsilon \)
7.2.2 Performance variance with respect to K
7.3 Security comparison
The last four columns indicate which defensible attacks each scheme can resist.

| References | Methods | Eigenface | Data poisoning | Differential | Man-in-the-middle | Tamper |
| --- | --- | --- | --- | --- | --- | --- |
| Erkin et al. [13] | PH cryptosystem | Non-privacy | \(\checkmark \) | | | |
| Sadehi et al. [18] | PH cryptosystem | Non-privacy | \(\checkmark \) | \(\checkmark \) | | |
| Xiang et al. [14] | FH cryptosystem | Non-privacy | \(\checkmark \) | \(\checkmark \) | | |
| Chamikara et al. [17] | LDP | Privacy | \(\checkmark \) | \(\checkmark \) | | |
| The proposed | LDP, authentication | Privacy | \(\checkmark \) | \(\checkmark \) | \(\checkmark \) | \(\checkmark \) |