1 Introduction
1.1 Related works
1.2 Method used in our work
1.3 Contributions
-
A DCTF-based rogue device identification method is proposed for the first time in this work. We demonstrate that the 2D DCTF is a promising feature for identifying unknown devices with low complexity and high precision.
-
We introduce the GAN for rogue device identification and extend GANomaly into a completely unsupervised model. With the proposed DCTF-GAN method, we can identify rogue devices using only the fingerprint characteristics of legitimate devices.
-
We use a much larger number of devices and conduct more experiments than previous studies of this kind. To our knowledge, no prior work has reported the true negative rate and true positive rate for identification across such a large number of devices.
-
We design a novel threshold selection algorithm and rogue device identification strategy to ensure the accuracy of distinguishing rogue devices from legitimate devices. With our method, the accuracy of identifying rogue devices exceeds 90% at a 20 dB signal-to-noise ratio (SNR) and 95% at 30 dB.
-
Recognized devices, which are known devices used to train the discriminator.
-
Accessing devices, which are unknown devices to be identified, including legitimate devices and rogue devices.
-
Legitimate devices, that is, authorized devices that should be allowed access.
-
Rogue devices, that is, invading devices disguised as legitimate ones. Our main goal is to classify the accessing devices as either legitimate or rogue.
2 DCTF-based RF fingerprint extraction
3 GAN model construction
3.1 Advantages and basic structure of GAN model
-
Bow-tie autoencoder G This network is the generator part of the entire model, composed of an encoder network that maps a high-dimensional tensor x to a low-dimensional tensor z and a decoder network that maps z back to a high-dimensional tensor \(x'\). The generator learns the distribution of the input data and reconstructs it through these networks.
-
Encoder E This sub-network downsamples the reconstructed image \(x'\) into a low-dimensional tensor \(z'\) whose dimension matches that of z. Through its learned parameters, it minimizes the distance between z and \(z'\). This distance forms part of the total loss and serves as feedback to the entire learning process.
-
Discriminator D This network receives the original data x from the input and the forged data \(x'\) from the generator and distinguishes between them. Its structure is the same as that of the encoder network described above, with a sigmoid classification layer added at the end.
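The interplay of the three sub-networks above can be summarized by GANomaly-style training losses. The following is a minimal numpy sketch; the loss weighting factors and array shapes are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def ganomaly_generator_loss(x, x_rec, z, z_rec, f_real, f_fake,
                            w_adv=1.0, w_con=50.0, w_enc=1.0):
    """Generator-side loss in GANomaly-style training.

    x, x_rec      : original and reconstructed images
    z, z_rec      : latent codes from G's encoder and the extra encoder E
    f_real, f_fake: discriminator features for real and forged images
    """
    l_adv = np.mean((f_real - f_fake) ** 2)  # adversarial (feature matching)
    l_con = np.mean(np.abs(x - x_rec))       # contextual (L1 reconstruction)
    l_enc = np.mean((z - z_rec) ** 2)        # encoder (latent consistency)
    return w_adv * l_adv + w_con * l_con + w_enc * l_enc

def anomaly_score(z, z_rec):
    """At test time, the z-to-z' distance itself serves as the anomaly score."""
    return np.mean((z - z_rec) ** 2)
```

Because the model sees only legitimate DCTFs during training, a rogue device's DCTF reconstructs poorly and yields a large z-to-\(z'\) distance, which is what makes the distance usable as an anomaly score.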
3.2 Model improvement
3.3 Threshold selection
4 Experimental design
4.1 Data extraction and plotting DCTF
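A rough sketch of DCTF construction is given below: differential processing of the complex baseband samples followed by a 2D histogram over the complex plane. The differential lag and the histogram range are illustrative assumptions; the exact pre-processing follows the extraction procedure of Sect. 2:

```python
import numpy as np

def plot_dctf(samples, lag=1, bins=65):
    """Build a DCTF as a 2D histogram of differential constellation points.

    samples: complex baseband samples of one signal segment
    lag:     differential interval between the multiplied samples (assumed)
    bins:    DCTF image resolution (the paper uses 65x65 pixel images)
    """
    samples = samples / np.max(np.abs(samples))      # normalization
    diff = samples[lag:] * np.conj(samples[:-lag])   # differential processing
    hist, _, _ = np.histogram2d(diff.real, diff.imag,
                                bins=bins, range=[[-1, 1], [-1, 1]])
    return hist / hist.max()                         # grayscale DCTF image
```

Differential processing cancels the absolute carrier phase, so the resulting trace reflects device-specific hardware impairments rather than the transmitted data.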
4.2 Discriminator training
-
NetG It generates forged images and then downsamples them into a low-dimensional tensor. It consists of the bow-tie autoencoder G and the encoder E, the two generator sub-networks of GANomaly described above.
-
NetD It determines whether an input image is a real one. It corresponds to the discriminator D in GANomaly mentioned above.
-
Since the input image dimension of GANomaly must be an integer multiple of 16, we crop the \(65\times 65\) pixel images to \(64\times 64\) pixels.
-
Unlike conventional CNNs, DCGAN, and GANomaly, which is built on DCGAN, use neither fully connected layers nor pooling layers; strided convolutions perform the downsampling instead.
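The crop from \(65\times 65\) to \(64\times 64\) can be done by trimming the excess rows and columns; a minimal sketch (the paper does not specify which edge is removed, so trimming from the bottom-right is an assumption):

```python
import numpy as np

def crop_to_multiple_of_16(img):
    """Crop an image so each spatial dimension is the largest multiple of 16,
    as required by GANomaly's input layer."""
    h, w = img.shape[:2]
    return img[: h - h % 16, : w - w % 16]
```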
| Layer | Output channels | Output dimension | Kernel, stride, padding | Parameters | Activation |
|---|---|---|---|---|---|
| Input—x | 1 | [64, 64] | – | – | – |
| Convolution-2D | 64 | [32, 32] | [4, 4], 2, 1 | 1024 | LeakyReLU |
| Convolution-2D | 128 | [16, 16] | [4, 4], 2, 1 | 131,072 | – |
| BatchNorm-2D | 128 | [16, 16] | – | 256 | LeakyReLU |
| Convolution-2D | 256 | [8, 8] | [4, 4], 2, 1 | 524,288 | – |
| BatchNorm-2D | 256 | [8, 8] | – | 512 | LeakyReLU |
| Convolution-2D | 100 | [5, 5] | [4, 4], 1, 0 | 409,600 | Sigmoid |
| Output—z | 100 | [5, 5] | – | – | – |
| Convolution transpose-2D | 256 | [8, 8] | [4, 4], 1, 0 | 409,600 | – |
| BatchNorm-2D | 256 | [8, 8] | – | 512 | ReLU |
| Convolution transpose-2D | 128 | [16, 16] | [4, 4], 2, 1 | 524,288 | – |
| BatchNorm-2D | 128 | [16, 16] | – | 256 | ReLU |
| Convolution transpose-2D | 64 | [32, 32] | [4, 4], 2, 1 | 131,072 | – |
| BatchNorm-2D | 64 | [32, 32] | – | 128 | ReLU |
| Convolution transpose-2D | 1 | [64, 64] | [4, 4], 2, 1 | 1024 | Tanh |
| Output—\(x'\) | 1 | [64, 64] | – | – | – |
| Convolution-2D | 64 | [32, 32] | [4, 4], 2, 1 | 1024 | LeakyReLU |
| Convolution-2D | 128 | [16, 16] | [4, 4], 2, 1 | 131,072 | – |
| BatchNorm-2D | 128 | [16, 16] | – | 256 | LeakyReLU |
| Convolution-2D | 256 | [8, 8] | [4, 4], 2, 1 | 524,288 | – |
| BatchNorm-2D | 256 | [8, 8] | – | 512 | LeakyReLU |
| Convolution-2D | 100 | [5, 5] | [4, 4], 1, 0 | 409,600 | – |
| Output—\(z'\) | 100 | [5, 5] | – | – | – |
| Layer | Output channels | Output dimension | Kernel, stride, padding | Parameters | Activation |
|---|---|---|---|---|---|
| Input | 1 | [64, 64] | – | – | – |
| Convolution-2D | 64 | [32, 32] | [4, 4], 2, 1 | 1024 | LeakyReLU |
| Convolution-2D | 128 | [16, 16] | [4, 4], 2, 1 | 131,072 | – |
| BatchNorm-2D | 128 | [16, 16] | – | 256 | LeakyReLU |
| Convolution-2D | 256 | [8, 8] | [4, 4], 2, 1 | 524,288 | – |
| BatchNorm-2D | 256 | [8, 8] | – | 512 | LeakyReLU |
| Convolution-2D | 1 | [5, 5] | [4, 4], 1, 0 | 4096 | Sigmoid |
| Output | 1 | 1 | – | – | – |
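The output dimensions and parameter counts in both tables follow the standard 2D convolution formulas. A quick check, assuming bias-free convolutions (which the listed counts indicate):

```python
def conv_out(size, kernel=4, stride=2, padding=1):
    """Spatial output size of a 2D convolution layer."""
    return (size + 2 * padding - kernel) // stride + 1

def conv_params(in_ch, out_ch, kernel=4):
    """Weight count of a bias-free 2D convolution layer."""
    return in_ch * out_ch * kernel * kernel

# First encoder layer: 64x64 input -> 32x32 output, 1 -> 64 channels
assert conv_out(64) == 32
assert conv_params(1, 64) == 1024          # matches the table
# Final latent layer: 8x8 -> 5x5 with stride 1, no padding
assert conv_out(8, stride=1, padding=0) == 5
assert conv_params(256, 100) == 409_600    # matches the table
# BatchNorm-2D stores a scale and a shift per channel: 2 * 128 = 256
```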
4.3 Device identification
-
Train a discriminator on the DCTFs of each recognized device as that device's fingerprint; these discriminators are stored in the white-list fingerprint database;
-
The DCTFs of each accessing device are sent to the discriminators in the fingerprint database for identification. When an accessing device is judged legitimate, it is allowed to access.
-
Recognized devices RF information collection Extract the signals from the recognized devices and perform pre-processing such as segmentation and normalization; then apply differential processing and plot the DCTFs.
-
Discriminator training The DCTFs from the recognized devices are sent to the GAN and trained multiple times to obtain multiple discriminators. These discriminators are stored in a database to discriminate the DCTFs of accessing devices.
-
Accessing devices RF information collection Collect the signals from the accessing devices and plot their DCTFs. This process is the same as plotting the DCTFs of the recognized devices.
-
Accessing devices identification The DCTFs from the accessing devices are sent to the multiple trained discriminators for discrimination. An accessing device is allowed to access only if all the discriminators judge it legitimate; otherwise, it is considered rogue.
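The all-discriminators rule above can be sketched as follows; the discriminator callables and the anomaly-score threshold are illustrative assumptions (the threshold comes from the selection algorithm of Sect. 3.3):

```python
def identify_device(dctf_images, discriminators, threshold):
    """Grant access (True) only if every trained discriminator judges the
    accessing device's DCTFs legitimate; any rejection marks it rogue."""
    for disc in discriminators:
        score = disc(dctf_images)   # anomaly score for this device's DCTFs
        if score > threshold:       # a single rejection is enough
            return False            # rogue
    return True                     # legitimate
```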
5 Experimental results and analysis
-
Identification of rogue devices From the 54 devices used in the experiment, one is selected as the recognized device, whose DCTFs are used to train the discriminator. The remaining 53 are treated as rogue devices, whose DCTFs are sent to the discriminator for identification in turn. In the rogue device identification experiment, we conducted a full round of these one-to-one experiments, that is, a total of \(54\times 53=2862\) experiments under each condition, to obtain the true negative rate of device identification under different conditions.
-
Identification of legitimate devices From the 54 devices used in the experiment, the DCTFs of one selected device are divided into two sets: a training set used to train the discriminator and a test set sent to the discriminator for testing. Since we only have 54 devices, the results would lack generality if only one round of legitimate device identification were carried out under each condition. We therefore conducted 10 rounds of repeated experiments, that is, a total of \(54\times 10=540\) experiments under each condition, to obtain the true positive rate under different conditions.
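Under this protocol, the true negative rate (rogue devices correctly rejected) is averaged over the 2862 rogue trials and the true positive rate (legitimate devices correctly accepted) over the 540 legitimate trials. As a sketch, with each trial reduced to a boolean access decision:

```python
def true_negative_rate(rogue_decisions):
    """Fraction of rogue-device trials correctly rejected (decision False)."""
    return sum(not d for d in rogue_decisions) / len(rogue_decisions)

def true_positive_rate(legit_decisions):
    """Fraction of legitimate-device trials correctly accepted (decision True)."""
    return sum(legit_decisions) / len(legit_decisions)
```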
5.1 Identification accuracy analysis
-
Multiple training epochs refers to sending the data through a neural network repeatedly, with the network parameters updated in each epoch of training. After several epochs of training, we obtain the discriminator network we need. Its block diagram is shown in Fig. 13.
-
Multiple training times refers to sending the data into the neural network multiple times, performing several epochs of training each time, and obtaining multiple discriminator networks. Its block diagram is shown in Fig. 14.
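The distinction between the two schemes amounts to two nested loops. A sketch, where `make_model` and `train_one_epoch` are hypothetical placeholders for the GAN construction and one epoch of training:

```python
def train_multiple_times(make_model, train_one_epoch, data,
                         n_times=5, n_epochs=20):
    """'Multiple training times': repeat the whole multi-epoch training run
    n_times, each run yielding an independent discriminator."""
    discriminators = []
    for _ in range(n_times):
        model = make_model()            # fresh random initialization per run
        for _ in range(n_epochs):       # inner loop: 'multiple training epochs'
            model = train_one_epoch(model, data)
        discriminators.append(model)
    return discriminators
```

Each run starts from a fresh initialization, so the resulting discriminators differ, which is what makes their joint (all-must-agree) decision stricter than any single one.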