1. Introduction
Image encryption is one of the well-known mechanisms to preserve confidentiality when data are transmitted over an unrestricted public channel. However, public channels are vulnerable to attacks, and hence efficient encryption algorithms must be developed for secure data transfer. In [
1], the authors surveyed ten conventional and five chaos-based encryption techniques applied to three test images of different sizes using various performance metrics, and the important conclusion was that none of the conventional schemes was designed especially for images, and hence none of them depends on the initial image. Therefore, the topic of image encryption remains open, and several researchers have proposed the use of chaotic systems to mask information so that it can be transmitted over a secure channel. In this direction, this paper highlights the usefulness of the Hopfield and Hindmarsh–Rose neural networks to generate chaotic behavior, and their suitability for designing random number generators (RNGs) implemented using field-programmable gate arrays (FPGAs).
In [
2] J.J. Hopfield introduced the neuron model that nowadays is known as the Hopfield neural network. Ten years later, a modified model of Hopfield neural network was proposed in [
3], and applied to information processing. Shortly afterwards, the Hopfield neural network was adapted to generate chaotic behavior in [
4] where the authors explored bifurcation diagrams. In [
5] the simplified Hopfield neuron model was designed to use a sigmoid as the activation function, and three neurons were used to generate chaotic behavior. In addition, the authors performed an optimization process that updates the weights of the neuron interconnections. The Hopfield neuron was combined with a chaotic map in [
6] to be applied in chaotic masking. More recently, the authors in [
7] proposed an image encryption algorithm using the Hopfield neural network. In the same direction, the authors in [
8] detailed the behavior of Hindmarsh–Rose neuron to generate chaotic behavior. Its bifurcation diagrams were described in [
9], and the results were used to select the values of the model to improve chaotic behavior. Hindmarsh–Rose neurons were synchronized in [
10], optimizing the scheme of Lyapunov function with two gain coefficients. In this way, the synchronization region is estimated by evaluating the Lyapunov stability. Two Hindmarsh–Rose neurons were synchronized in [
11], and the system was used to mask information in continuous time. To show that the neurons generate chaotic behavior, one must compute Lyapunov exponents, and for the Hindmarsh–Rose neuron they were evaluated by the TISEAN package in [
12].
The Hopfield neural network has been widely applied in chaotic systems [
13,
14,
15]. This network consists of three neurons, and the authors in [
13] proposed a simplified model by removing the synaptic weight connection between the third and the second neuron in the original Hopfield network. Numerical simulations were carried out considering values from the bifurcation diagrams, and Lyapunov exponents were evaluated to conclude that the simplified model exhibits rich nonlinear dynamical behaviors including symmetry breaking, chaos, periodic windows, antimonotonicity and coexisting self-excited attractors. An FPGA-based modified Hopfield neural network was introduced in [
14], to generate multiple attractors, but no details were given of its hardware design in terms of computer arithmetic. The authors in [
15] showed the existence of hidden chaotic sets in a simplified Hopfield neural network with three neurons. Similar to the Hopfield neural network, the Hindmarsh–Rose neuron is quite useful, for example: using the Hindmarsh–Rose neuron model, the authors in [
16] showed that in the parameter region close to the bifurcation value, where the only attractor of the system is the limit cycle of tonic spiking type, the noise can transform the spiking oscillatory regime to the bursting one. The fractional-order version of the Hindmarsh–Rose neuron was used in [
17], for the synchronization of fractional-order chaotic systems. In [
18], based on two-dimensional Hindmarsh–Rose neuron and non-ideal threshold memristor, a five-dimensional neuron model of two adjacent neurons coupled by memristive electromagnetic induction, was introduced. In a similar way, the authors in [
19] showed the effects of time delay on burst synchronization transitions of a neural network locally modeled by Hindmarsh–Rose neurons. On the one hand, the main drawback of those works was the lack of statistical tests according to the National Institute of Standards and Technology (NIST), as done for other chaotic systems given in [
20,
21,
22], to guarantee the randomness of the chaotic sequences. On the other hand, and in addition to NIST tests, the authors in [
23] recommend improving the key space when using chaotic maps, thus enhancing the image encryption schemes. In this work we show the application of neural networks in the design of random number generators (RNGs), whose binary sequences are applied to implement an image encryption scheme [
24]. This idea has been previously exploited, for example: the Hopfield neural network was used in [
25] to design an RNG, but it showed low randomness. Accordingly, this paper introduces the selection of the best coefficients of both Hopfield and Hindmarsh–Rose neurons, from the bifurcation diagrams, to generate robust chaotic sequences that improve the NIST test results and enhance chaotic image encryption.
Section 2 describes both the Hopfield and the Hindmarsh–Rose neuron models, showing their chaotic behavior.
Section 3 shows simulation results of the cases that generate the best chaotic time series, applying the 4th-order Runge–Kutta method. Bifurcation diagrams are generated to select appropriate values that improve the generation of chaotic time series, which are evaluated using TISEAN in order to verify the Lyapunov exponents.
Section 4 details the FPGA-based implementation of both Hopfield and Hindmarsh–Rose neuron models.
Section 5 shows the selection of the series with the highest values of the positive Lyapunov exponent, which are used to generate binary sequences, and whose randomness is evaluated by performing NIST tests.
Section 6 shows the application of the generated binary sequences to encrypt an image in a chaotic secure communication system, and the success of the RGB image encryption system is confirmed by performing correlation, histogram, variance, entropy, and Number of Pixel Change Rate (NPCR) tests. Finally,
Section 7 summarizes the main results of this work.
2. Mathematical Models of Hopfield and Hindmarsh–Rose Neurons
This section describes the mathematical models of both the Hopfield and the Hindmarsh–Rose neural networks. For instance, the complex dynamics of the Hopfield-type neural network with three neurons are analyzed in [
26], as well as the observation of stable points, limit cycles, single-scroll chaotic attractors and double-scroll chaotic attractors. By varying the parameters, the numerical simulations performed in [
27] show that simple Hopfield neural networks can display chaotic attractors and periodic orbits for different parameters, together with the associated Lyapunov exponent values and bifurcation plots. The Hindmarsh–Rose neural network is analyzed in [
28], and by using the polynomial model previously introduced in [
29], the authors perform a detailed bifurcation analysis of the full fast-slow system for bursting patterns.
2.1. Hopfield Neuron
The Hopfield neural network can be modeled by Equation (1), where v represents the state variables, c is a proportional constant, W is the weights matrix, and the remaining term is associated with the activation function [5].
Commonly, c is set equal to one; when chaotic behavior is desired, the activation function is the hyperbolic tangent and the weights matrix W is modified, whose size depends on the number of neurons. In this work, W has size 3 × 3, meaning that the Hopfield neural network has three state variables, one per neuron. In this way, Equations (2)–(4) describe the model of the three neurons, as shown in [5]. In this case, v in (1) is replaced by a three-element vector containing the state variables x, y, and z, and the control parameter p is set to the value that maximizes the positive Lyapunov exponent (LE+), found by numerical exploration.
Figure 1 and
Figure 2 show the chaotic time series and the attractors obtained by applying the 4th-order Runge–Kutta method.
The equilibrium points are obtained by applying the Newton–Raphson method, which requires the Jacobian of the system, because the neural network contains nonlinear terms such as the hyperbolic tangent. The eigenvalues are obtained by evaluating the Jacobian at each equilibrium point: those associated with the first equilibrium point are 1.9416 and −0.0658 ± j1.8793, while those associated with the two remaining equilibrium points are −0.9870 and 0.5381 ± j1.2861.
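The dynamics described in this subsection can be sketched numerically. The following minimal Python sketch assumes the generic form of the model, dv/dt = −cv + W·tanh(v) with c = 1, and uses a placeholder weight matrix and initial condition (not the exact values of [5]), integrated with the 4th-order Runge–Kutta method:

```python
import numpy as np

# Illustrative sketch of the three-neuron Hopfield model
# dv/dt = -v + W @ tanh(v). The weight matrix and initial condition
# below are placeholder assumptions, not the values used in the paper.
W = np.array([[ 2.0, -1.2,  0.0],
              [ 1.9,  1.7,  1.15],
              [-4.75, 0.0,  1.1]])

def hopfield(v):
    return -v + W @ np.tanh(v)

def rk4_step(f, v, h):
    # One step of the classical 4th-order Runge-Kutta method
    k1 = f(v)
    k2 = f(v + 0.5 * h * k1)
    k3 = f(v + 0.5 * h * k2)
    k4 = f(v + h * k3)
    return v + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

v = np.array([0.1, 0.0, 0.0])   # hypothetical initial condition
h = 0.01                         # step size
series = [v]
for _ in range(5000):
    v = rk4_step(hopfield, v, h)
    series.append(v)
series = np.asarray(series)      # (5001, 3) time series of (x, y, z)
```

Because tanh is bounded, the trajectory remains bounded for any weights, which makes this model well suited to fixed-point hardware later on.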
2.2. Hindmarsh–Rose Neuron
The Hindmarsh–Rose neural network can be modeled by three state variables, as given by Equation (
5). This model is used to analyze the charge and discharge of a neuron, and in addition, when it provides chaotic behavior, its applications can be extended to cryptography, as shown in this work.
In Equation (5), x is associated with the membrane voltage, y is the recovery variable associated with the current, and z is the slow adaptation current. The remaining coefficients are parameters of the neuron, two of which set the time scale; their values are taken from [11].
Figure 3 shows the time series of the state variable
x of Hindmarsh–Rose neuron, and
Figure 4 shows the phase-space portraits.
The equilibrium points are obtained from Equation (5). The eigenvalues associated with the first equilibrium point are 0.261784, 0.0204526, and −0.495936, while those associated with the two remaining equilibrium points are 2.2075 ± j1.5659, −0.002488 ± j0.0001127, and −0.67089 ± j0.4012.
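For concreteness, the Hindmarsh–Rose model can be sketched with the widely used textbook parameter values (a = 1, b = 3, c = 1, d = 5, r = 0.005, s = 4, x_R = −1.6, I = 3.25); these are an assumption for illustration and not necessarily the sets selected in this work:

```python
import numpy as np

# Sketch of the Hindmarsh-Rose neuron; the parameter values below are
# the commonly cited textbook ones, not the sets selected in this work.
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_R, I = 0.005, 4.0, -1.6, 3.25

def hr(x, y, z):
    dx = y - a * x**3 + b * x**2 - z + I   # membrane voltage
    dy = c - d * x**2 - y                  # recovery variable
    dz = r * (s * (x - x_R) - z)           # slow adaptation current
    return dx, dy, dz

# Forward-Euler integration, as used later for the FPGA realization
h = 0.01
x, y, z = 0.1, 0.0, 0.0                    # hypothetical initial condition
trace = []
for _ in range(20000):
    dx, dy, dz = hr(x, y, z)
    x, y, z = x + h * dx, y + h * dy, z + h * dz
    trace.append(x)
```

The slow variable z (governed by the small parameter r) is what produces the bursting patterns visible in the time series of x.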
3. Bifurcation Diagrams and Selection of the Best Values to Generate Enhanced Chaotic Time Series
Bifurcation diagrams are quite useful to find appropriate values of the mathematical models of the neurons, and in this paper, they are generated to find the best Lyapunov exponents and Kaplan–Yorke dimension, which are considered as appropriate metrics to enhance the generation of chaotic time series. In the case of the Hopfield neuron, the state variable
x is selected to plot the bifurcation with respect to the control parameter
p. This process must be performed by varying all the coefficients of the mathematical model, for all the state variables, over several iterations, until some dynamical characteristics of the chaotic system, such as the positive Lyapunov exponent (LE+) and the Kaplan–Yorke dimension [
30], are improved. Varying the weights matrix
W, one can find better characteristics. For example, in this paper the nine elements in
W were varied in the ranges and steps listed in
Table 1. All these cases generated different bifurcation diagrams from which the feasible values were selected. For example,
Figure 5 shows the bifurcation diagram obtained by varying one of the weights, where it can be appreciated that the feasible values to generate chaotic behavior must be lower than −4.5. In this manner, after exploring the bifurcation diagrams by varying the values in the ranges given in
Table 1, three feasible sets of values are given in
Table 2, where it can be appreciated their variations with respect to the original values given in [
5]. The chaotic time series associated with those sets of values, obtained by applying the 4th-order Runge–Kutta method are shown in
Figure 6 for the state variable
x. Those chaotic time series are used to evaluate Lyapunov exponents and Kaplan–Yorke dimension using TISEAN.
Table 3 lists all the Lyapunov exponent values and their associated Kaplan–Yorke dimensions for each case from
Table 2.
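The sweep behind such a bifurcation diagram can be sketched as follows: one coefficient is varied over a range, a transient is discarded, and the local maxima of a state variable are recorded for each parameter value. The weight matrix, the swept entry, and the sweep range below are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Sketch of the bifurcation-diagram procedure for the Hopfield network.
def simulate_x(w, n=4000, h=0.02):
    # Placeholder weight matrix; the swept entry w is an assumption.
    W = np.array([[ 2.0, -1.2,  0.0],
                  [ 1.9,  1.7,  1.15],
                  [ w,    0.0,  1.1]])
    f = lambda v: -v + W @ np.tanh(v)
    v = np.array([0.1, 0.0, 0.0])
    xs = np.empty(n)
    for i in range(n):                      # 4th-order Runge-Kutta
        k1 = f(v)
        k2 = f(v + 0.5 * h * k1)
        k3 = f(v + 0.5 * h * k2)
        k4 = f(v + h * k3)
        v = v + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[i] = v[0]
    return xs

diagram = []
for w in np.linspace(-6.0, -4.0, 9):
    xs = simulate_x(w)[2000:]                       # discard the transient
    mid = xs[1:-1]
    peaks = mid[(mid > xs[:-2]) & (mid > xs[2:])]   # local maxima of x
    diagram.append((w, peaks))
# Plotting (w, peaks) pairs yields one bifurcation slice per w value.
```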
In the case of the Hindmarsh–Rose neural network model given in (
5), one can count eight coefficients that can be varied. In this case, a heuristic process was performed with the goal of improving the chaotic behavior: each of the eight coefficients, among them a, b, k, and s, was varied in steps of 0.001 while observing the degradation of the chaotic behavior. After performing the variations and observing the bifurcation diagrams, two sets of values were found, which are listed in
Table 4. In this manner,
Figure 7 shows the chaotic time series associated with these sets of values.
The simulation of the chaotic time series was performed using fixed initial conditions, and those series were introduced to TISEAN to evaluate the Lyapunov exponents and Kaplan–Yorke dimension that are given in
Table 5. Since the maximum exponent is positive, chaotic behavior is guaranteed. In this case, the set of values HRNset1 is the best because its Kaplan–Yorke dimension is 3, i.e., the ideal value for a three-dimensional dynamical system.
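The Kaplan–Yorke dimension follows directly from the sorted Lyapunov spectrum: it is the largest index j for which the partial sum of exponents stays non-negative, plus the fractional remainder. A short sketch (the example exponents are illustrative, not the measured ones):

```python
# Kaplan-Yorke dimension from a Lyapunov spectrum (illustrative values).
def kaplan_yorke(exponents):
    lam = sorted(exponents, reverse=True)
    s = 0.0
    for j, l in enumerate(lam):
        if s + l < 0:
            # j exponents keep the sum non-negative; add the fraction
            return j + s / abs(l)
        s += l
    return float(len(lam))  # sum never goes negative: dimension = n

# A chaotic 3-D flow typically has a (+, 0, -) spectrum:
print(kaplan_yorke([0.1, 0.0, -0.2]))  # -> 2.5
```

When the positive exponents outweigh the negative one, the formula saturates at 3, which is the ideal value reported for HRNset1.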
4. FPGA-Based Implementation of the Neurons
The hardware implementation of both neurons can be performed from the discretized equations using a specific numerical method. For example, in [
31] one can find the discretization of a dynamical model by applying Forward Euler and 4th-order Runge–Kutta methods. It can be appreciated that there is a trade-off between exactness and hardware resources. Besides, since the Hopfield neural network is a small dynamical system, the 4th-order Runge–Kutta method is used herein to develop the FPGA-based implementation, as shown in
Figure 8, which is based on Equations (
2)–(
4). The details of the numerical method are sketched in
Figure 9, where one can appreciate the block for the hyperbolic tangent function given in Equation (
3), which is implemented as already shown in [
31]. The general architecture shown in
Figure 8 consists of a finite state machine (FSM) that controls the iterations of the numerical method, whose data are saved in the registers (x, y, z). The block labeled Hopfield Chaotic Neuronal Network contains the hardware that evaluates the 4th-order Runge–Kutta method, receiving the data at iteration n, (x_n, y_n, z_n), and providing the data at the next iteration, (x_{n+1}, y_{n+1}, z_{n+1}). The Mux blocks introduce the initial conditions (x_0, y_0, z_0) and afterwards select the values (x_{n+1}, y_{n+1}, z_{n+1}) for all the remaining iterations. The Reg blocks are parallel-in parallel-out arrays that save the data being processed within the dynamical system. The output of the whole architecture provides a binary string associated with a specific state variable (x, y, or z).
The hyperbolic tangent function given in [
31] and used in
Figure 9, is described by Equations (6) and (7), whose coefficient values are given in [31].
Table 6 shows the hardware resources for the implementation of the four cases given in
Table 2 for the Hopfield neuron. The numerical method is the 4th-order Runge–Kutta and the FPGA Cyclone IV EP4CE115F29C7 is used.
In a similar way, the FPGA-based implementation of Hindmarsh–Rose neural network given in Equation (
5), is developed by applying a numerical method. In this case, and applying Forward-Euler method, the hardware description is shown in
Figure 10, where one can appreciate the use of a finite state machine (FSM) to control the iterations of the numerical method, multiplexers to process the initial conditions and afterwards the remaining iterations, registers to save the data of the state variables, and blocks to evaluate the equations discretized by Forward Euler. The whole iterative process to generate the next value of the state variables requires seven clock (CLK) cycles.
In both FPGA-based implementations for the Hopfield and Hindmarsh–Rose neural networks, the computer arithmetic is performed using 32-bit fixed-point notation: format 5.27 (5 integer bits, 27 fractional bits) for the Hopfield network and format 3.29 for the Hindmarsh–Rose network. The FPGA resources for the three sets of values of the Hindmarsh–Rose neural network are listed in
Table 7.
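The 5.27 and 3.29 notations pack each value into a 32-bit word, where the integer after the dot counts the fractional bits. A sketch of the encoding and decoding (the helper names are ours, for illustration):

```python
# Sketch of the fixed-point encoding used on the FPGA: 32-bit words,
# Q5.27 for the Hopfield network (5 integer bits including sign,
# 27 fractional bits) and Q3.29 for the Hindmarsh-Rose network.
FRAC_HOPFIELD = 27
FRAC_HR = 29

def to_fixed(value, frac_bits):
    """Quantize a real value to a signed fixed-point integer word."""
    return int(round(value * (1 << frac_bits)))

def to_float(word, frac_bits):
    """Recover the real value represented by a fixed-point word."""
    return word / float(1 << frac_bits)

x = to_fixed(1.5, FRAC_HOPFIELD)      # 1.5 * 2**27 = 201326592
print(to_float(x, FRAC_HOPFIELD))     # -> 1.5
```

The narrower integer field of the 3.29 format trades dynamic range for resolution, which suits the smaller amplitudes of the Hindmarsh–Rose state variables.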
5. Randomness Test: NIST
In this Section, the results of the NIST tests [
32,
33] for both neural networks are shown. The four cases of Hopfield neurons, and using the state variable
x with 1000 chaotic time series (binary strings) of 1 million bits each, generated the NIST tests given in
Table 8. The results using the original values taken from [
5], the three sets of values given in
Table 2, and the results using the weight matrix from [
14], can be compared. All cases passed the NIST tests with proportions around 99%, and the set of values HNNset2 generated the highest average p-value, 0.7065.
The computer arithmetic for the Hindmarsh–Rose neural network uses the 3.29 format, but since the largest variation occurs in the least significant bits (LSBs), only the 16 LSBs of each 32-bit number were used. The NIST tests were performed using 1000 chaotic time series of 1 million bits each. The results are summarized in
Table 9 including the averages of the
p-Values and proportions.
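The bitstream extraction described above can be sketched as follows: keep only the 16 least significant bits of each 32-bit fixed-point sample, where most of the sample-to-sample variation occurs (the sample words below are arbitrary stand-ins for the FPGA output):

```python
# Sketch of forming a PRNG bitstream from 32-bit fixed-point samples:
# only the 16 least significant bits of each word are kept.
def lsb16_bits(samples):
    bits = []
    for w in samples:
        w &= 0xFFFFFFFF               # interpret as a 32-bit word
        for i in range(15, -1, -1):   # 16 LSBs, most significant first
            bits.append((w >> i) & 1)
    return bits

stream = lsb16_bits([0x12345678, 0x9ABCDEF0])  # 16 bits per sample
```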
6. Image Encryption Application
The binary sequences tested in the previous section can be taken as pseudorandom number generators (PRNGs), and can be used to design a chaotic secure communication system to encrypt images, as shown in
Figure 11. Those PRNGs can be implemented by either the Hopfield or Hindmarsh–Rose neural networks because both provide high randomness. Both neurons can also be implemented using memristors, as shown in [
34], which constitutes another research direction of hardware security. For instance, using the four binary sequences from the FPGA-based implementation of the Hopfield neural network, we show the encryption of three images (Lena, Fruits and Baboon) in
Figure 12. The three RGB images have a resolution of 512 × 512 pixels.
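A common way to apply such PRNG sequences, and a reasonable reading of the scheme in Figure 11, is to XOR each 8-bit pixel with keystream bytes derived from the chaotic bitstream. The sketch below uses random stub data in place of both the FPGA keystream and the test image:

```python
import numpy as np

# Sketch of stream-cipher style image encryption: each RGB channel is
# XOR-ed with keystream bytes. The keystream and image here are random
# stand-ins for the FPGA output and the 512 x 512 test images.
rng = np.random.default_rng(seed=1)
image = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)
keystream = rng.integers(0, 256, size=image.shape, dtype=np.uint8)

cipher = image ^ keystream          # encryption
recovered = cipher ^ keystream      # decryption with the same keystream
assert np.array_equal(recovered, image)
```

Because XOR is its own inverse, the receiver only needs to regenerate the same chaotic keystream (same coefficients and initial conditions) to recover the plain image.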
The correlation analysis is performed using [
35]: Equations (8)–(10), and it provides the values given in Table 10. The first row means that the chaotic time series of state x is used to encrypt R (red), that of y to encrypt G (green), and that of z to encrypt B (blue); the second row means that all of R, G, and B are encrypted using the data from x, and so on.
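The adjacent-pixel correlation test can be sketched as follows: sample random horizontal pixel pairs and compute their Pearson correlation coefficient. Plain natural images score near 1, while a well-encrypted image scores near 0 (the sample count and the gradient test image are illustrative choices):

```python
import numpy as np

# Sketch of the adjacent-pixel correlation metric (horizontal pairs).
def horizontal_correlation(channel, n=2000, seed=0):
    rng = np.random.default_rng(seed)
    h, w = channel.shape
    r = rng.integers(0, h, size=n)
    c = rng.integers(0, w - 1, size=n)
    a = channel[r, c].astype(np.float64)
    b = channel[r, c + 1].astype(np.float64)   # right-hand neighbor
    return np.corrcoef(a, b)[0, 1]

# A horizontal gradient image is maximally correlated:
grad = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
print(horizontal_correlation(grad))  # close to 1
```

Vertical and diagonal correlations are computed analogously by shifting the second sample along the other axes.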
Figure 13 shows the histograms of Lena before and after encryption using HNNset2 from
Table 2. To describe the distribution characteristics of the histograms quantitatively, the variance of the histograms for the three images (Lena, Fruits and Baboon) is calculated according to [
36], and the results are shown in
Table 11.
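One common formulation of the histogram-variance metric (the exact expression of [36] may differ) is the variance of the 256 bin counts of an 8-bit channel; a flat histogram, the goal of a good cipher image, drives it toward zero:

```python
import numpy as np

# Sketch of the histogram-variance metric for an 8-bit channel:
# variance of the 256 gray-level bin counts.
def histogram_variance(channel):
    counts, _ = np.histogram(channel, bins=256, range=(0, 256))
    return counts.var()

# A perfectly flat histogram (every level appears equally often):
uniform = np.arange(256, dtype=np.uint8).repeat(1024)
print(histogram_variance(uniform))  # -> 0.0
```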
The entropy is evaluated by Equation (
11), where each term represents the probability of a given datum. Using 8 bits (256 gray levels),
Table 12 shows the entropies for the three images (Lena, Fruits and Baboon).
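The entropy computation is the standard Shannon formula H = −Σ p_i log2 p_i over the 256 gray levels; an ideally encrypted 8-bit channel approaches H = 8:

```python
import numpy as np

# Sketch of the Shannon entropy of an 8-bit channel.
def entropy8(channel):
    counts = np.bincount(channel.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]                      # ignore empty bins (0*log 0 = 0)
    return -np.sum(p * np.log2(p))

# A perfectly uniform channel reaches the ideal value of 8 bits:
uniform = np.arange(256, dtype=np.uint8).repeat(100)
print(entropy8(uniform))  # -> 8.0
```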
To verify the encryption capability against differential attacks, the NPCR test is evaluated by Equations (
12) and (
13) [
37], where the two cipher images are obtained by encrypting two plain images that differ in only one bit. In this case, using HNNset2 for the Lena, Fruits and Baboon images, the NPCR values are: 99.2672 when using the state variable x, 99.2886 when using the state variable y, and 99.3420 when using z. All NPCR results pass the criterion given in [
38].
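The NPCR metric itself is simply the percentage of pixel positions at which the two cipher images differ; a sketch (with random stand-in cipher images, whose expected score is the ideal value 100·255/256 ≈ 99.61%):

```python
import numpy as np

# Sketch of the NPCR metric: percentage of differing pixel positions
# between two cipher images C1 and C2.
def npcr(c1, c2):
    return 100.0 * np.mean(c1 != c2)

# Two independent random 8-bit images differ at ~255/256 of positions:
rng = np.random.default_rng(seed=7)
c1 = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
c2 = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
print(round(npcr(c1, c2), 2))  # close to 99.61
```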
The binary sequences from the FPGA implementation of the Hindmarsh–Rose neural network shown in
Figure 7, were used as PRNGs to encrypt the Lena, Fruits and Baboon images; the results are shown in
Figure 14.
The correlation analysis between the images and the sets of values from
Table 7 provides the values given in Table 13. The first row means that the chaotic time series of state x is used to encrypt R (red), that of y to encrypt G (green), and that of z to encrypt B (blue); the second row means that all of R, G, and B are encrypted using the data from x, and so on.
Figure 15 shows the histogram of the Lena image before and after encryption using HRNset1 from
Table 4. The variance of the histograms for the three images (Lena, Fruits and Baboon) using HRNset1, is calculated according to [
36], and the results are shown in
Table 14.
The entropy is evaluated by (
11) using HRNset1 with 8 bits (256 gray levels), and
Table 15 shows the entropies for the three images (Lena, Fruits and Baboon).
The key space is equal to 160 bits because each datum is encoded using 32 bits: the initial condition of each state variable counts toward the key, and the step-size of the numerical method, which can also change, is encoded using 32 bits as well. The NPCR test results using HRNset1 for the color images were: 99.60365 when using the state variable x, 99.49608 when using the state variable y, and 98.49586 when using the state variable z.
In both FPGA-based implementations for the Hopfield and Hindmarsh–Rose neural networks, the image is transmitted from a personal computer running MatLab to the FPGA using the serial port RS-232, as described in [
31].
7. Conclusions
The use of two well-known neural networks, the Hopfield and the Hindmarsh–Rose, for image encryption applications has been described. With the help of bifurcation diagrams, new feasible sets of values were proposed in order to generate binary strings with more randomness than the ones previously published in the literature: three sets of values for the Hopfield neuron and two sets for the Hindmarsh–Rose neuron. The chaotic time series were analyzed with TISEAN to compute the Lyapunov exponents and Kaplan–Yorke dimension, and the proposed sets of values outperformed the previously published ones on these metrics.
By applying numerical methods, we described the hardware design of both neurons and listed the FPGA resources required by the Hopfield and Hindmarsh–Rose implementations. The binary strings generated by the FPGA-based implementations of both neurons were taken as PRNGs to encrypt the RGB Lena, Fruits and Baboon images. The success of the encryption system has been confirmed by the results obtained from the correlation, histogram, variance, entropy, and NPCR tests, demonstrating that both neurons are very useful for chaotic image encryption.