Published in: Neural Computing and Applications 15/2024

Open Access 11.02.2024 | Original Article

Fractal dimensions and machine learning for detection of Parkinson’s disease in resting-state electroencephalography

Authors: Utkarsh Lal, Arjun Vinayak Chikkankod, Luca Longo



Abstract

Parkinson’s disease (PD) is an incurable neurological disorder that degenerates the cerebrospinal nervous system and hinders motor functions. Electroencephalography (EEG) signal analysis can provide reliable information regarding PD conditions. However, EEG is a complex, multichannel, and nonlinear signal with noise, which complicates the identification of PD symptoms. A few studies have employed fractal dimension (FD) to extract distinguishing PD features from EEG signals. However, to the best of our knowledge, no exploratory study exists on the relative efficiency of the different FD measures. We aim to conduct a comparative analysis of the various FDs that, as feature extraction measures, can discriminate PD patients who are ON and OFF medication from healthy controls using machine learning (ML) architectures. This study has implemented and analyzed several techniques for segmentation, feature extraction, and ML models. The results show that the k-nearest neighbors (KNN) classifier with Higuchi FD and 90% overlap between segmented windows delivers the highest accuracies, yielding a mean accuracy of \(99.65\pm 0.15\%\) for PD patients ON medication and \(99.45\pm 0.18\%\) for PD patients OFF medication. The model accurately identifies the signs of the disease in resting-state EEG with almost equivalent accuracy in both OFF and ON medication patients. To enhance interpretability, we leveraged XGBoost’s feature importance to generate brain topographic plots. This integration of explainable AI (XAI) enhanced the transparency and comprehensibility of our model’s classifications. Additionally, a comparison between the performance of FD and a few entropy measures has also been drawn to validate the significance of FD as a superior feature extraction measure. This study contributes to the body of knowledge with an architectural pipeline for detecting PD in resting-state EEG while emphasizing fractal dimension as an effective way of extracting salient features from EEG signals.

1 Introduction

Parkinson’s disease (PD) is a long-term progressive neurodegenerative disorder that affects a part of the brain called the substantia nigra, which lies in the midbrain region [1]. The neurons in the midbrain start dying, leading to a dopamine deficit. PD affects more than 10 million people globally [2]. The main symptoms of PD are rhythmic shaking (tremor) that originates in the limbs, slowed movement (bradykinesia), stiffness (cogwheel rigidity), loss of balance, postural instability, and slurred speech [3, 4]. Although PD is an incurable condition, treatments include dopamine boosters, antidepressants, and other medications that help alleviate the symptoms.
PD can manifest in multiple forms, which may remain undetected for several years and gradually progress until the symptoms become discernible [5]. The Unified Parkinson’s Disease Rating Scale (UPDRS) [6] and the Hoehn and Yahr (HY) rating [7] are the two most widely used measures to assess PD. The UPDRS consists of the following scales:
1. Mood, behavior, and mentation evaluation (non-motor experiences).
2. Self-evaluation of day-to-day routine work such as eating, drinking, swallowing, speech, writing, dressing, walking, turning in bed, and general well-being (motor experiences).
3. Monitored evaluation of motor skills by a clinician.
4. Complications of therapy.
These scales are assessed through interviews and medical observation of the patient. The HY system is similar to the UPDRS and has been essentially supplanted by it.
The UPDRS diagnosis is non-automated and is subject to human error [6]. Consequently, there is a growing need to automate the methods for effective PD detection. Machine learning (ML) and deep learning (DL) are immensely useful in building automated pipelines for the detection and diagnosis of neurological conditions like Parkinson’s disease (PD), especially when there is a need to process enormous data volumes that are impossible to analyze manually [8].
However, neurological data exhibit high dimensionality with noise, which impedes the performance of ML and DL models [9–11]. In such cases, a suitable feature extraction technique becomes essential to extract the underlying salient features from the data and yield better results with higher efficiency.
As observed from the current research, two types of feature extraction and analysis methods are broadly used for capturing features from EEG, namely, linear and nonlinear. Plenty of research has been done on Parkinson’s disease (PD) by examining EEG signals, using linear analysis methods such as wavelet transforms [12, 13], Fourier transforms [14, 15], and autoregressive methods [16, 17] to extract features. Among nonlinear methods, numerous studies based on entropy [18, 19], the Lyapunov exponent [20], the Hurst exponent [21, 22], and fractal dimensions (FD) have been proposed to understand crucial aspects of Parkinson’s disease (PD) by deriving insights from the EEG signal. However, fractal dimensions have several key advantages over the previously mentioned measures:
  • Non-Stationary and Nonlinear Analysis—Electroencephalogram (EEG) signals exhibit non-stationary characteristics, implying variations in their properties with respect to time. Additionally, EEG signals are nonlinear (i.e., do not follow a straight-line relationship). Traditional methods of time-series analysis often assume stationarity and linearity, which can lead to inaccuracies when these conditions are not met. Fractal dimensions, however, can handle non-stationary and nonlinear time series, making them more suitable for EEG analysis [23].
  • Understanding Complexity—Parkinson’s disease induces perturbations in global brain activity, which are evidenced by changes in the variability, complexity, and unpredictability of EEG signals [24]. Fractal dimensions, by definition, measure the complexity or "roughness" of a time series. Hence, by analyzing EEG signals using fractal dimensions, researchers can quantify these changes in complexity, which can aid in analyzing Parkinson’s disease efficiently.
  • Time-domain Feature Extraction—Another advantage of fractal dimensions over many other methods is that they can directly extract features from the temporal domain of the EEG signal [25]. In contrast, methods such as partial directed coherence (PDC), Shannon entropy, wavelet transforms, and Fourier transforms can only derive accurate features from the EEG signal’s frequency domain.
  • Simplicity and Efficiency—Despite their ability to capture complex characteristics of EEG signals, fractal dimensions are mathematically straightforward and do not require heavy computational resources [26]. They can be computed directly from the raw EEG data without the need for complex transformations. This makes them efficient for analyzing large datasets or for use in systems where computational resources or processing time may be limited. This dynamic nature of fractal dimensions can be very effective when working with real-time data for use cases that require a quick turnaround time.
Though fractal dimensions provide an effective way of extracting features from the EEG signal, the use of various fractal dimensional methods for detecting Parkinson’s disease has not been deeply explored in previous studies. A discernible gap exists in the literature concerning the comparative efficacy of diverse fractal dimensional methods in accurately capturing the essence of Parkinson’s disease from resting-state EEG signals. Our study aims to fill this gap and shed light on the efficiency of fractal dimensional methods in detecting Parkinson’s disease. In doing so, this study also illustrates how fractal dimensions (FD) can serve as a feature extraction approach for EEG-based studies more broadly.
The objectives designed for this study are as follows:
  • To assess the efficacy of various fractal dimensional methods in extracting features from the EEG signal’s temporal domain to discern Parkinson’s disease (PD) patients from healthy individuals.
  • To determine the optimal machine learning pipeline that is constituted by the best-performing segmentation technique, feature extraction method, and supervised learning algorithm for detecting PD in patients who are ON and OFF medication, with nearly equal performance.
  • To identify and visualize, using explainable AI, the parts of the brain that are impacted more than others.
  • To investigate if fractal dimensions can be used to accurately understand the significance of the brain regions most affected by PD.
The study’s objectives are structured to address the following research question (RQ):
RQ: How effectively can fractal dimensions, as feature extraction measures, discriminate Parkinson’s disease (PD) patients who are ON or OFF medication from healthy controls (HC)?
The rest of this paper is organized as follows. Section 2 comprehensively reviews the state-of-the-art Parkinson’s disease (PD) detection techniques. The overall methodology and solution design to construct an optimal pipeline by experimenting with various pre-processing, segmentation, feature extraction, and machine learning techniques are presented in Sect. 3. Section 4 unveils the study’s findings. These findings are then elaborated upon in Sect. 5. Finally, Sect. 6 derives inferences from the observations while throwing light upon the contribution of this study to the existing literature.
2 Related work

Numerous works have employed machine learning (ML) and deep learning (DL) for PD detection [27–29]. The data retrieved utilizing feature extraction techniques, including partial directed coherence (PDC), have been used to train machine learning (ML) models [30]. PDC is a Granger causality measure applied to EEG data in the frequency domain to infer the magnitude of information flow over the brain [31]. PDC and a random forest (RF) classifier have been implemented for detecting early-stage PD [30]. A novel method combining PDC with the analysis of blood oxygenation level-dependent (BOLD) signal variations has been developed to comprehend the regional peculiarities of oscillatory processes in the human brain and to examine functional connectivity in resting-state EEG-fMRI [31]. The same approach can be applied to detect Parkinson’s disease (PD). Similar to the previous studies that make use of machine learning (ML), deep learning (DL) models have also been utilized for detecting early-stage PD with bio-indicators such as rapid eye movement, olfactory loss, cerebrospinal fluid data, and dopaminergic imaging markers [32]. A detection accuracy of \(96.45\%\) was achieved on 183 healthy individuals and 401 early PD patients. Furthermore, a sensor-based mobility analysis was conducted using wearable sensors to detect bradykinesia, one of PD’s main symptoms, and a convolutional neural network was used to achieve high classification accuracy [33]. Some other studies utilize inertial wearable sensors and employ a combination of supervised and unsupervised learning for an in-depth analysis of mobility-based disorders and symptoms [34]. Without the use of ML and DL models in the studies mentioned in this section, it would not be possible to process the large volume of data, let alone understand the biomarkers that indicate the symptoms of Parkinson’s disease (PD). Therefore, machine learning and deep learning are essential in understanding and automating the diagnosis of neurological disorders, especially Parkinson’s disease (PD).
ML models can be applied to all kinds of neurological data acquired from MRI (magnetic resonance imaging), fMRI (functional MRI), BOLD (blood oxygen level-dependent) signals, PET (positron emission tomography), MEG (magnetoencephalography), and EEG (electroencephalography), which are physiological bio-signals that provide reliable information about the onset of a neurological disorder or condition. EEG is a direct measure of electrical brain activity that captures the voltage fluctuations of the brain on a millisecond scale [35, 36]. EEG’s main advantage over other methods is its high temporal resolution and precision, which can match the speed of cognitive responses. Though MEG has a few advantages over EEG due to its higher sensitivity to superficial cortical activity [37], capturing MEG signals requires expensive apparatus and a magnetically shielded room, making it less feasible than EEG. EEG analysis has been used as an approach to successfully detect many other neurological conditions such as Alzheimer’s [38, 39], epilepsy [40–43], brain stroke [44], schizophrenia [45–47], depression [48, 49], insomnia [50], autism [51], and attention-deficit hyperactivity disorder (ADHD) [52, 53]. From the previous studies, it is reasonably evident that EEG captures the nuances and salient features of the brain on a millisecond scale, which can lead to the detection of neurological disorders. Therefore, EEG has been chosen as the modality for this study, which aims to find the salient biomarkers to accurately and efficiently detect Parkinson’s disease (PD) from the resting-state EEG signal.
Multiple approaches have been designed in the past for the analysis of motor and cognitive impairments in PD by utilizing the EEG signals of patients. Previous research focuses on the analysis of the deterioration of the patient’s motor functions, including the study of gait signals [54–56] and handwriting signals [57–59]. Assessing gait characteristics such as speed, asymmetry, and variability during natural and demanding conditions such as twisting and turning, multi-tasking, and day-to-day routines enhances the sensitivity of gait quantification and sheds light on exactly how gait is impacted in PD and how it progresses through the various stages of PD [60]. Freezing of gait (FOG) is one of the most severe and disabling conditions of PD, wherein patients are unable to move their feet, feeling a sensation as if their feet are frozen. Analysis of the EEG signal has proved to be useful in identifying the onset of FOG, using wavelet decomposition [61–63]. Therefore, by analyzing and detecting gait-related signals (from EEG) or any such motor function impairment at its onset, the progression stage of PD can be detected early and diagnosed before it exacerbates.
Besides the deterioration of motor functions, PD also causes multiple cognitive impairments. These non-motor impairments can be analyzed very efficiently using EEG. In multiple studies, EEG was captured while performing tasks such as listening to music [64] or watching a movie [65], and the emotional response to these stimuli was analyzed through the EEG signal. Numerous clips of the movie expressing various emotions such as tragedy, happiness, romance, and many others were shown to the patient, and their response was observed. Some studies also used odors as stimuli to induce an emotional response to the smell [66–68]. The previous studies designed on the same approach show that PD patients emote less spontaneously to video clips and bad odors [67]. Therefore, it can be inferred that there is an underlying link between Parkinson’s disease (PD) and the emotional response of the patient, which can be attributed to the deficit of dopamine in the brain, as dopamine influences the emotional response of the brain [69].
Degeneration of dopamine-producing neurons in the part of the brain called substantia nigra disrupts effective communication between the cortex and basal ganglia. These effects are captured by the EEG signal [70, 71], but an effective feature extraction method is often required to bring these effects to the foreground. EEG is characterized by nonlinear dynamics; hence, nonlinear methods such as fractal dimensions (FD) and entropies provide reliable and accurate information that is instrumental in distinguishing between a PD patient and healthy control (HC).
Fractal dimensions (FD) have been employed in some previous studies for the analysis of Parkinson’s disease (PD). A few studies, such as [72], utilized optical coherence tomography angiography to obtain images of superficial and deep capillary plexuses and retinal anatomy to investigate the capillary complexity of the retina. Fractal dimensional analysis of retinal capillary complexity was performed to differentiate between PD patients and healthy controls (HC). The experiment was conducted on 25 PD patients and 25 HC. It was observed that PD patients had lower values of fractal dimensions (FD), indicating lower retinal capillary complexity. In another study [22], fractal dimensions of EEG were extracted to analyze the deterioration of emotional response in PD patients. Twenty PD patients, consisting of 10 left side-affected and 10 right side-affected, were recruited along with 10 HC for this experiment. Nonlinear EEG signals were analyzed for identifying emotions using fractal dimensions (FD), entropy, the Hurst exponent, and multiple other feature extraction measures. Classification of PD was performed using fuzzy KNN and support vector machine (SVM). The results revealed that neuronal degeneration might contribute to the decline of emotional responsiveness in PD patients.
As investigated in the studies mentioned above, fractal dimensions have been explored as a good feature extraction measure for the analysis of PD using EEG. However, the previous studies are limited, and there is scope to conduct an exploratory analysis of the various fractal dimensional techniques. Therefore, it is essential to investigate: A) What is the optimal fractal dimensional measure to extract salient PD features from EEG? B) What ML model yields the best performance in detecting PD in both ON Meds and OFF Meds PD patients? and C) Can fractal dimensions accurately identify the parts of the brain severely impacted by PD?

3 Materials and methods

The following research hypothesis was formulated, considering the EEG data, to investigate the robustness of fractal dimensions (FD) as biomarkers for detecting Parkinson’s disease (PD).
IF EEG features are extracted using fractal dimensions (FD) computed over overlapping sliding windows that segment the multichannel EEG signal and are given as input to a classifier, THEN there exists an optimal FD and classifier combination that discriminates Parkinson’s disease (PD) patients who are ON and OFF medication from healthy controls with greater than 99% classification accuracy.

3.1 Dataset

The dataset utilized in this paper is the UC San Diego Resting-State EEG dataset from patients with Parkinson’s disease (PD), which can be accessed from OpenNeuro (dataset accession number: ds002778) [73–77]. The dataset consists of 31 subjects, of which 16 were healthy with an average age of 63.5 ± 9.6 years, and 15 were PD patients with an average age of 63.2 ± 8.2 years. Healthy subjects comprised nine females and seven males, whereas PD patients comprised eight females and seven males. During EEG recording, all subjects were advised to relax and fixate their eyes on a cross displayed on the screen.
For the PD patients, two sessions were conducted on separate days, one on medication (ON Meds) and the other off medication (OFF Meds). During the ON Meds session, patients continued their regular medication regimen, which they had also taken during the preceding 12 h. The healthy subjects went through only one session. The EEG recording was conducted with a 32-channel Biosemi ActiveTwo apparatus, with the electrodes placed in accordance with the international 10–20 standard system, using the channels: Fp1, AF3, F3, F7, FC5, FC1, C3, T7, CP5, CP1, P3, P7, PO3, O1, Oz, Pz, Fp2, AF4, Fz, F4, F8, FC6, FC2, Cz, C4, T8, CP6, CP2, P4, P8, PO4, and O2. The recording was done at a sampling frequency of 512 Hz for around 3 min. Apart from the 32 EEG channels, the dataset also contained eight EXG channels; however, this study used only the 32 EEG channels.

3.2 Pre-processing

A pre-processing pipeline was implemented in Python, making use of the NumPy and MNE [78] libraries (phase A in Fig. 1). A finite impulse response (FIR) filter was used for filtering, with a high-pass cutoff at 1 Hz to mitigate low-frequency variations and a low-pass cutoff at 40 Hz. The FIR filter utilized the Firwin method, incorporating a Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation. The eight EXG channels were then dropped. The duration of the EEG signal was cropped to 150 s, making the signal length uniform across all subjects and leading to the same number of fractal dimensions for every channel. Independent component analysis (ICA) was employed to identify and manually eliminate artifacts from the EEG signal. In EEG, multiple sensors are placed on the scalp to record voltage fluctuations from individual channels simultaneously. However, fluctuations due to heartbeats, eye blinks, muscular activity such as jaw clenching, or head rotation can cause ripples in the EEG recording and introduce irregularities from external source signals, which are undesirable and contaminate the EEG signal. If these source signals maintain non-Gaussian characteristics while exhibiting statistical independence, they can be effectively separated into individual components through ICA. In this study, ICA was employed to decompose the input EEG signal into 20 components using the FastICA algorithm. Components containing artifacts, such as eye blinks and muscle movement, were manually removed, and the EEG signal was then reconstructed without the artifacts. The MNE Python package was leveraged to perform the ICA and the signal reconstruction.
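The pre-processing steps described above can be summarized in a short MNE-Python sketch. This is a minimal illustration rather than the exact study code: the file path and the indices of the artifact components are hypothetical placeholders, and in practice the components to exclude were selected by manual inspection.

```python
# Minimal pre-processing sketch with MNE-Python; the file path and the ICA
# component indices marked for exclusion are hypothetical placeholders.
import mne

raw = mne.io.read_raw_bdf("path/to/subject_rest.bdf", preload=True)

# 1-40 Hz band-pass FIR filter (Hamming-windowed firwin design)
raw.filter(l_freq=1.0, h_freq=40.0, method="fir",
           fir_window="hamming", fir_design="firwin")

# Drop the eight EXG channels and crop to a uniform 150 s
raw.drop_channels([ch for ch in raw.ch_names if ch.startswith("EXG")])
raw.crop(tmin=0, tmax=150)

# FastICA with 20 components; artifact components are removed manually
ica = mne.preprocessing.ICA(n_components=20, method="fastica", random_state=97)
ica.fit(raw)
ica.exclude = [0, 3]              # e.g., blink and jaw-clench components
raw_clean = ica.apply(raw.copy())
```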
The EEG signal was segmented into overlapping sliding windows of 1 s using the NumPy Python package. Two runs of the whole experiment were executed, wherein each run incorporated a sliding window with a different overlap percentage. Sliding windows with 50% and 90% overlap were implemented in this study, and comparisons were drawn utilizing the individual results. Each window had 512 data samples, corresponding to the EEG signal’s sampling rate. The fractal dimension (FD) of each of these windows was calculated using the various feature extraction methods (phase B in Fig. 1). For this purpose, the AntroPy and PyEEG [79] Python packages were employed. Thus, the dimensions of the final table can be represented as (Number of windows × Length of window (in seconds) × Sampling rate × Number of subjects, Number of channels), where the number of channels was 32, the length of the window was 1 s, the sampling rate was 512 Hz, and the number of subjects was 31. For every subject, there were 300 windows for 50% overlap, whereas the number of windows was 1505 for 90% overlap.
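Continuing from the pre-processing sketch above, the following snippet illustrates the overlapping sliding-window segmentation and per-window feature extraction. The use of AntroPy’s higuchi_fd here is only one example of the feature extraction functions employed, and the array shapes assume a (channels × samples) signal at 512 Hz.

```python
# Sliding-window segmentation with a configurable overlap (0.5 or 0.9 in this
# study) followed by per-window, per-channel Higuchi FD computation.
import numpy as np
import antropy as ant

def window_features(data, sfreq=512, win_sec=1.0, overlap=0.9, kmax=10):
    win = int(win_sec * sfreq)                      # 512 samples per window
    step = max(1, int(round(win * (1 - overlap))))  # hop size between windows
    starts = range(0, data.shape[1] - win + 1, step)
    # Resulting feature matrix: (n_windows, n_channels)
    return np.array([[ant.higuchi_fd(data[ch, s:s + win], kmax=kmax)
                      for ch in range(data.shape[0])]
                     for s in starts])

features = window_features(raw_clean.get_data(), overlap=0.9)
```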

3.3 Feature extraction

A fractal is a curve or a geometrical structure with a characteristic recurring pattern that can be symmetric or asymmetric. Examples are Koch’s snowflake, Sierpinski’s triangle, and the Mandelbrot set fractal. The pattern keeps repeating itself when zoomed in to the geometrical structure incrementally [80, 81]. A structure’s fractal dimension is a measure of its complexity. The same concept can be applied to the EEG signal to measure the magnitude of its complexity [82], which is the amount of nonlinear information transmitted over time. In other words, FD represents the signal’s dimension in the temporal domain. A straight line has an FD equal to 1, and as the complexity of the line increases, the FD also increases. Fractal dimensions provide a practical pathway for spatial analysis of a time series.
Along with FD, entropy has also been used as a feature extraction measure to extract relevant features from the EEG signal to analyze PD. Entropy is a concept that has its roots in the field of thermodynamics. Entropy typically represents the disorder of a system, which is described by the probability distributions of molecules in gaseous and fluid systems [83]. Applying entropy to electroencephalographic (EEG) signals provides a way to statistically compute the amount of uncertainty or randomness in a pattern, which is directly proportional to the amount of information in the signal. In this article, spectral entropy and permutation entropy have been used to extract features from EEG signals in the time domain.
The proposed work employed five feature extraction measures: Higuchi’s FD, Katz FD, Petrosian FD, spectral entropy, and permutation entropy. All the mentioned feature extraction methods were implemented using the AntroPy, Pandas, and NumPy Python libraries. A brief overview of each method is given in the following subsections.

3.3.1 Higuchi’s fractal dimension

Formulated by T. Higuchi in 1988, this method constructs multiple time series by sub-sampling the original signal [82]. For each derived time series, the curve’s length is computed, averaged across the entire sample, and then represented on a log scale. The gradient of this plotted curve conveys the Higuchi FD measure of the signal. Initially, considering a signal with data points represented as X(1), X(2), X(3), ..., X(N), this approach crafts a new time series as follows:
$$X_{k}^{m} : X(m),X(m + k),X(m + 2k), \ldots ,X\left( {m + \left\lfloor {\frac{{N - m}}{k}} \right\rfloor k} \right)$$
(1)
where m = 1, 2, 3, ..., k and k = 1, 2, 3, ..., \(k_{max}\). For a given interval length k, there are k unique signal reconstructions. In this context, X represents the original signal, while m and k stand for the starting point and interval length, respectively. For each of these new series, the length of the curve formed by \(X_k^m\) is calculated as follows:
$$\begin{aligned} L_m(k)=\dfrac{\left( \sum _{i=1}^{\lfloor \frac{N-m}{k}\rfloor }\mid X(m+ik)-X(m+(i-1)k)\mid \right) y}{k} \end{aligned}$$
(2)
where y represents the normalization factor as:
$$\begin{aligned} y = \dfrac{(N-1)}{\lfloor \frac{N-m}{k}\rfloor k} \end{aligned}$$
(3)
For a given interval k, the average curve length, denoted L(k), is determined by averaging the k values of \(L_m(k)\) through the formula described below:
$$\begin{aligned} L(k) = \dfrac{\sum _{m=1}^{k}L_m(k)}{k} \end{aligned}$$
(4)
The slope of the linear least squares fit of log(L(k)) plotted against log(1/k) gives the Higuchi FD. Consequently, the mean across k groups of \(L_m(k)\) computed as L(k) in Eq. 4 follows the power law given by:
$$\begin{aligned} \text {log}(L(k)) \propto D\cdot \text {log}(1/k) \end{aligned}$$
(5)
The time series is characterized by a fractal dimension, denoted as D [84]. For the computation of Higuchi FD in this research, a standard \(k_{max}\)=10 was adopted.
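The procedure of Eqs. (1)–(5) can be written compactly in NumPy. The sketch below is for illustration only (the study relied on library implementations such as AntroPy/PyEEG); it uses zero-based indexing and a least-squares fit of \(\log L(k)\) against \(\log (1/k)\).

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension following Eqs. (1)-(5), zero-based indexing."""
    x = np.asarray(x, dtype=float)
    N = x.size
    L = []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):                        # m = 1, ..., k in the text
            n_max = (N - 1 - m) // k              # number of increments in X_k^m
            if n_max < 1:
                continue
            idx = m + np.arange(n_max + 1) * k
            dist = np.abs(np.diff(x[idx])).sum()  # sum of |X(m+ik) - X(m+(i-1)k)|
            y = (N - 1) / (n_max * k)             # normalization factor, Eq. (3)
            Lk.append(dist * y / k)               # L_m(k), Eq. (2)
        L.append(np.mean(Lk))                     # L(k), Eq. (4)
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(L), 1)
    return slope                                  # fractal dimension D, Eq. (5)
```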

3.3.2 Katz fractal dimension

For a signal of N points (\(x_i, y_i\)), the total line length L of the time series is computed as the sum of the Euclidean distances between each successive pair of points. The Katz FD is then computed as follows [85]:
$$\begin{aligned} D=\dfrac{\text {log}(N)}{\text {log}(N)+\text {log}(d/L)} \end{aligned}$$
(6)
where d represents the greatest distance between the first data point (\(x_1, y_1\)) and any other data point in the time series [86]. L and d are defined as follows:
$$\begin{aligned} L= & {} \sum _{i=0}^{N-2} \sqrt{(y_{i+1} - y_i)^2 + (x_{i+1} - x_i)^2} \end{aligned}$$
(7)
$$\begin{aligned} d= & {} \text {max}\left( \sqrt{(x_i - x_1)^2 + (y_i - y_1)^2}\right) \end{aligned}$$
(8)
Thus, the maximum distance d is normalized by the line length L; the logarithm of the number of data points, divided by the sum of that logarithm and log(d/L), gives the Katz fractal dimension of the time series [87].
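A direct NumPy translation of Eqs. (6)–(8), taking the sample index as the abscissa \(x_i\), could look as follows; this is an illustrative sketch rather than the AntroPy implementation actually used.

```python
import numpy as np

def katz_fd(y):
    """Katz fractal dimension following Eqs. (6)-(8)."""
    y = np.asarray(y, dtype=float)
    x = np.arange(y.size)                      # sample index as the x-coordinate
    # Total line length L: Euclidean distances between successive points, Eq. (7)
    L = np.sqrt(np.diff(x) ** 2 + np.diff(y) ** 2).sum()
    # Maximum distance d from the first point to any other point, Eq. (8)
    d = np.sqrt((x - x[0]) ** 2 + (y - y[0]) ** 2).max()
    N = y.size
    return np.log(N) / (np.log(N) + np.log(d / L))   # Eq. (6)
```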

3.3.3 Petrosian fractal dimension

Petrosian’s method delivers a fast computation of the fractal dimension of a time series by translating the series into a binary sequence [88]. Multiple variations of the technique have been implemented, which primarily vary in how the translation into the binary sequence is done. In this paper, the Petrosian C FD algorithm has been employed, wherein successive subdivisions from the series are subtracted, and the binary series is produced based on the difference. A “−1” or a “+1” is assigned for every negative or positive result, respectively, and the number of sign changes is counted [89]. Consider a time series \(x_1\), \(x_2\), \(x_3\), \(\dots\), \(x_N\) and its waveform consisting of a series of points \(y_1\), \(y_2\), \(y_3\), \(\dots\), \(y_N\). The binary matrix \(z_i\) is obtained by converting the sequence into the binary format as follows:
$$\begin{aligned} z_i={\left\{ \begin{array}{ll} 1, &{} \text {if } x_i > \text {mean}(y), i=1, 2, 3,... N \\ -1, &{} \text {if } x_i\le \text {mean}(y), i=1, 2, 3,... N \end{array}\right. } \end{aligned}$$
(9)
The Petrosian FD is calculated as follows:
$$\begin{aligned} D=\dfrac{\text {log}_{10}N}{\text {log}_{10}N+\text {log}_{10}\dfrac{N}{N+0.4N_\triangle }} \end{aligned}$$
(10)
where N is the overall length of the series, and \(N_\triangle\) represents the count of sign changes occurring in the binary series. \(N_\triangle\) is computed as follows:
$$\begin{aligned} N_\triangle =\sum _{i=1}^{N-2}\mid \dfrac{z_{i+1}-z_{i}}{2}\mid \end{aligned}$$
(11)
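Following Eqs. (9)–(11), a compact sketch of the Petrosian FD is given below; the binary sequence is derived by comparison with the signal mean, and sign changes are counted over successive samples.

```python
import numpy as np

def petrosian_fd(x):
    """Petrosian fractal dimension following Eqs. (9)-(11)."""
    x = np.asarray(x, dtype=float)
    N = x.size
    z = np.where(x > x.mean(), 1, -1)           # binary sequence z_i, Eq. (9)
    n_delta = np.abs(np.diff(z) / 2).sum()      # number of sign changes, Eq. (11)
    return np.log10(N) / (np.log10(N) +
                          np.log10(N / (N + 0.4 * n_delta)))   # Eq. (10)
```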

3.3.4 Spectral entropy

Spectral entropy quantifies the signal’s spectral power distribution. It is derived from Shannon’s entropy [90]. Shannon’s entropy measures the average rate at which a stochastic data source produces information, quantifying the uncertainty, or spread, of the underlying probability distribution; it is most often used for assessing the dynamic order of a system [91]. Shannon’s entropy is defined as
$$\begin{aligned} H(X):=-\sum _{x\in X} p(x)\,\text {log}\,p(x) \end{aligned}$$
(12)
where p(x) denotes the probability of a given event.
Spectral entropy, drawing from Shannon’s entropy principles, utilizes the amplitude components of the time-series’ power spectrum to assess entropy. This provides a measure of the spectral intricacy present in the EEG signal. Spectral entropy is the negative of the sum, over all frequencies, of the normalized power in each frequency \(p_f\) multiplied by the logarithm of that power, written equivalently as:
$$\begin{aligned} {SE=\sum _f p_f \text {log}(\frac{1}{p_f})} \end{aligned}$$
(13)
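A simple periodogram-based sketch of Eq. (13) is shown below; normalizing the power spectrum to a probability distribution, and the optional scaling by the log of the number of frequency bins, are common conventions rather than requirements of the definition.

```python
import numpy as np

def spectral_entropy(x, normalize=True):
    """Spectral entropy following Eq. (13), using a periodogram estimate."""
    x = np.asarray(x, dtype=float)
    psd = np.abs(np.fft.rfft(x)) ** 2           # power spectrum
    n_bins = psd.size
    p = psd / psd.sum()                         # normalized spectral power p_f
    p = p[p > 0]                                # avoid log(0)
    se = -np.sum(p * np.log2(p))                # sum_f p_f log(1/p_f)
    return se / np.log2(n_bins) if normalize else se
```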

3.3.5 Permutation entropy

PE operates by identifying the ordinal structures within a signal. For a sequence of m real values \((x_1, x_2,... x_m)\), the ordinal structure elucidates the interplay among its constituents. Through the extraction of a probability distribution of ordinal configurations, PE evaluates the intricacy of the signal by observing the order in which its values appear [92].
Consider a signal consisting of m data points \((x_1, x_2,... x_m)\), where each data point is marked by a distinct index \(k_i \in \{1, 2, 3,... m\}\), such that the greatest element is assigned the value m, the second greatest m-1, and so on. Then, taking an example of a random set of data points \((x_2, x_5, x_9)\), the ordinal pattern would be represented as (2, 5, 9) or simply 259. Furthermore, for these data points, the number of possible ordinal patterns would be the number of possible arrangements of the three numbers: (259, 295, 529, 592, 952, 925). As evident, there are a total of \(3! = 6\) arrangements. Thus, for any set of m distinct data points \((x_1, x_2,... x_m)\), the total number of possible ordinal patterns would be m! [93].
In our investigation, from each epoch with a duration D, the segments \(w(i) = w(i1), w(i2), w(i3), \dots , w(iD)\) are extracted from the primary time series \(x_1, x_2, x_3, \dots , x_N\). This extraction involves selecting consecutive components \(x_i\) that may be interspersed by a latency denoted as \(\tau\) as:
$$\begin{aligned} \tau :x_i, x_{i+\tau }, x_{i+2\tau },..., x_{i+(D-1)\tau } \end{aligned}$$
(14)
The recommended values are D \(\in [3,7]\) and \(\tau = 1\) [92]. Therefore, using D = 3 and \(\tau = 1\) gives a two-dimensional matrix with each column vector containing three elements, wherein there is a lag of one \(\tau\) between successive elements of each vector. The number of column vectors created for a signal with N data points is \(N - (D - 1)\tau\); for example, with D = 3, \(\tau = 1\), and N = 7, the matrix would contain three rows and five columns, with each row one \(\tau\) ahead of the previous row.
Afterwards, the D-dimensional vectors from the matrix are mapped into unique permutations that capture the ordinal rankings of the data:
$$\begin{aligned} \pi = \{r_0, r_1,... r_{D-1}\} = \{0, 1,... D - 1\} \end{aligned}$$
(15)
Following the example above, there are \(3! = 6\) different possible permutations (i.e., ordinal patterns) in total, as follows:
$$\begin{aligned} \begin{array}{c c} \pi _1:&{} {0, 1, 2}\\ \pi _2:&{} {0, 2, 1}\\ \pi _3:&{} {1, 0, 2}\\ \pi _4:&{} {1, 2, 0}\\ \pi _5:&{} {2, 0, 1}\\ \pi _6:&{} {2, 1, 0} \end{array} \end{aligned}$$
(16)
Based on the ordinal positions of the values within the vector, the above permutations assign a pattern to each partitioned vector. For example, consider the following vector:
$$\begin{aligned} \begin{bmatrix} 4 \\ 7 \\ 9 \end{bmatrix} \end{aligned}$$
(17)
The permutation for this vector is \(\pi _1 = \{0, 1, 2\}\) because \(4< 7 < 9\). After finding the respective ordinal patterns, the relative frequency of each permutation is calculated. This is achieved by determining the occurrence rate of a given permutation in the series and normalizing it by the overall count of sequences [94].
In the final step, PE is computed from the probabilities calculated in the previous step as:
$$\begin{aligned} PE_D = -\sum _{i=1}^{D!}p_i\text {log}_2p_i \end{aligned}$$
(18)
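The steps above (embedding with order D and delay \(\tau\), mapping each vector to its ordinal pattern, estimating pattern probabilities, and applying Eq. (18)) can be sketched in NumPy as follows; the normalization by \(\log_2 D!\) is an optional convention.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Permutation entropy following Eqs. (14)-(18)."""
    x = np.asarray(x, dtype=float)
    n_vectors = x.size - (order - 1) * delay
    # Embedding: each row is one D-dimensional vector x_i, x_{i+tau}, ...
    emb = np.array([x[i:i + (order - 1) * delay + 1:delay]
                    for i in range(n_vectors)])
    # Ordinal pattern of each vector (ranks of its elements)
    patterns = np.argsort(emb, axis=1)
    # Relative frequency of each distinct pattern
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    pe = -np.sum(p * np.log2(p))                 # Eq. (18)
    return pe / np.log2(factorial(order)) if normalize else pe
```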

3.4 Classification

The classification was performed on the dataframe of features extracted using FD and entropy measures to distinguish between PD patients and healthy controls. Machine learning algorithms such as KNN, quadratic discriminant analysis (QDA), XGBoost, light gradient boosting machine (LGBM), extra trees classifier (ET), and random forest (RF) were employed to discriminate patients ON and OFF medication from healthy controls. K-fold cross-validation was performed with \(k=15\), which divided the dataset into 15 subsets, of which 14 were used for training and 1 for validating the classifier. This procedure was iterated 15 times with different train and test subset combinations. It was observed that \(k=15\) gave the highest average accuracy among all integer values of k for \(k\le 20\). Two runs of the pipeline were executed for the following cases (a cross-validation sketch is given after this list):
1. PD patients who were off medication vs. healthy controls (HC).
2. PD patients who were on medication vs. healthy controls (HC).
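The 15-fold cross-validated classification can be sketched as below. `X` denotes the (windows × 32 channels) feature table and `y` the binary labels (PD vs. HC); these names, and the use of a stratified split, are illustrative assumptions rather than details reported in the paper.

```python
# 15-fold cross-validation of the best-performing KNN configuration.
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=6, metric="euclidean",
                           weights="uniform", leaf_size=30)
cv = StratifiedKFold(n_splits=15, shuffle=True, random_state=42)
scores = cross_val_score(knn, X, y, cv=cv, scoring="accuracy")
print(f"Mean accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```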

3.4.1 Hyperparameter tuning

The GridSearchCV utility from the scikit-learn Python package was employed to tune the hyperparameters of the models. The following parameters were tuned for the selected models in this study, and the optimal parameter setting for each model is also presented (an illustrative grid-search sketch follows this list):
  • KNN -
    • leaf_size (leaf size passed to the underlying BallTree/KDTree) = 30
    • metric (for calculating the distance between points) = ‘euclidean’
    • n_neighbors (Number of neighbors to use) = 6
    • p (Power parameter for the Minkowski metric) = 2 (p=2 for Euclidean distance)
    • weights (The weight function to use in the prediction task) = ‘uniform’
  • XGBoost -
    • learning_rate or eta (Step size shrinkage to prevent overfitting) = 0.1
    • n_estimators (Number of boosting rounds) = 280
    • max_depth (Maximum depth of a tree) = 8
    • colsample_bytree (Proportion of features to randomly sample for building each tree) = 1
    • reg_alpha (L1 regularization) = 0.05
  • LGBM -
    • learning_rate (Step size shrinkage to prevent overfitting) = 0.1
    • n_estimators (Number of boosting rounds) = 260
    • max_depth (Maximum depth of a tree) = -1
    • min_child_samples (Minimum data instances needed in a child leaf) = 41
    • num_leaves (Number of leaf nodes) = 70
    • reg_lambda (L2 regularization) = 3
    • bagging_fraction = 0.6
    • bagging_freq = 2
    • boosting_type = ‘gbdt’
  • Extra Trees -
    • criterion (Function for measuring the split quality) = ‘gini’
    • n_estimators (Number of trees to grow) = 150
    • max_depth (Maximum depth of a tree) = None
    • min_samples_split (Minimum number of samples required to split an internal node) = 30
    • min_samples_leaf (Minimum count of data points required for a leaf node of a tree) = 15
  • Random Forest -
    • criterion (Function for measuring the split quality) = ‘gini’
    • n_estimators (Number of trees in the forest) = 120
    • max_depth (Maximum depth of the trees) = None
    • min_samples_split (Minimum number of samples required to split an internal node) = 28
    • min_samples_leaf (Minimum count of data points required for a leaf node of a tree) = 10
  • Quadratic Discriminant Analysis -
    • reg_param (Regularization parameter, which introduces bias to the estimated covariance matrices to ensure they are well-conditioned) = 0.1
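As an illustration of the tuning procedure, a minimal grid search over the KNN hyperparameters is sketched below; the grids shown are small example ranges around the reported optima, not necessarily the exact grids searched in the study.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

param_grid = {
    "n_neighbors": [3, 5, 6, 7, 9],
    "metric": ["euclidean", "manhattan"],
    "weights": ["uniform", "distance"],
}
search = GridSearchCV(KNeighborsClassifier(), param_grid,
                      cv=15, scoring="accuracy", n_jobs=-1)
search.fit(X, y)                 # X, y as in the cross-validation sketch above
print(search.best_params_)       # e.g., the optimal KNN settings listed above
```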

3.4.2 Explainable AI

To bolster the credibility and elucidate the decision-making process of the classifiers, techniques from the explainable artificial intelligence (XAI) domain, specifically the concept of feature importance, were integrated into the post-classification stage [95]. A feature importance vector, with each element corresponding to an electroencephalography (EEG) channel, was extracted from the extreme gradient boosting (XGBoost) classifier, enabling the identification of key channels in effectively distinguishing Parkinson’s disease (PD) patients from healthy control (HC) subjects.
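A sketch of this step is given below: an XGBoost model is fitted on the per-channel features, its feature_importances_ vector (one value per EEG channel) is extracted, and the values are drawn as a scalp topography with MNE. The montage construction and the feature table `X` with its labels `y` are assumptions for illustration.

```python
import matplotlib.pyplot as plt
import mne
from xgboost import XGBClassifier

# 32-channel montage matching the recording layout (standard 10-20 positions)
ch_names = ["Fp1", "AF3", "F3", "F7", "FC5", "FC1", "C3", "T7", "CP5", "CP1",
            "P3", "P7", "PO3", "O1", "Oz", "Pz", "Fp2", "AF4", "Fz", "F4",
            "F8", "FC6", "FC2", "Cz", "C4", "T8", "CP6", "CP2", "P4", "P8",
            "PO4", "O2"]
info = mne.create_info(ch_names, sfreq=512.0, ch_types="eeg")
info.set_montage("standard_1020")

# Fit XGBoost on the (windows x channels) feature table and read importances
xgb_model = XGBClassifier(n_estimators=280, max_depth=8, learning_rate=0.1,
                          colsample_bytree=1.0, reg_alpha=0.05)
xgb_model.fit(X, y)
importances = xgb_model.feature_importances_     # one importance per channel

fig, ax = plt.subplots()
mne.viz.plot_topomap(importances, info, axes=ax, show=False)
plt.show()
```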

3.5 Evaluation metrics

The following metrics were employed to assess the performance of the models in this study (a short scikit-learn computation sketch follows the list):
  • Precision: It attempts to answer the question, “Out of all the positive predictions made by the machine learning model, how many were actually correct?”
    $$\begin{aligned} \text {Precision} = \dfrac{\text {True Positives}}{\text {True Positives} + \text {False Positives}} \end{aligned}$$
    (19)
    As represented in Eq. 19, precision is inversely proportional to the number of false positives.
  • Recall: It attempts to answer the question, “Out of all the actual positive values, how many were correctly identified by the machine learning model?"
    $$\begin{aligned} \text {Recall} = \dfrac{\text {True Positives}}{\text {True Positives} + \text {False Negatives}} \end{aligned}$$
    (20)
    As represented in Eq. 20, recall is inversely proportional to the number of false negatives.
  • F1 score: It is the harmonic mean of precision and recall, indicating the model’s overall performance while incorporating the numbers of false negatives and false positives.
    $$\begin{aligned} F1 \text {score} = 2 * \dfrac{\text {Precision} * \text {Recall}}{\text {Precision} + \text {Recall}} \end{aligned}$$
    (21)
  • Accuracy: It answers the question, “Out of all the predictions made by the machine learning model, how many were correct?"
    $$\begin{aligned} \text {Accuracy} = \dfrac{\text {TP} + \text {TN}}{\text {TP} + \text {FP} + \text {TN} + \text {FN}} * 100\% \end{aligned}$$
    (22)
    where TP=True Positive, TN=True Negatives, FP=False Positives, and FN=False Negatives.
    $$\begin{aligned} \text {Accuracy} = \dfrac{N_{\text {correct}}}{N_{\text {total}}} * 100\% \end{aligned}$$
    (23)
  • Area under the ROC curve (AUC): An ROC curve (receiver operating characteristic curve) is a curve representing the performance of a classifier with respect to two parameters: the true-positive rate (TPR), which is synonymous with recall, and the false-positive rate (FPR).
    $$\begin{aligned} \text {TPR}= & {} \dfrac{\text {TP}}{\text {TP} + \text {FN}} \end{aligned}$$
    (24)
    $$\begin{aligned} \text {FPR}= & {} \dfrac{\text {FP}}{\text {FP} + \text {TN}} \end{aligned}$$
    (25)
    The ROC curve plots TPR on the y-axis and FPR on the x-axis at different thresholds. When the threshold is lowered, more elements are classified as positive. AUC quantifies the 2D area under the ROC curve, which represents the total performance measure for all potential classification thresholds.
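For completeness, the sketch below shows how these metrics can be computed for one validation fold with scikit-learn; `y_true`, `y_pred`, and `y_score` (positive-class probabilities for the AUC) are assumed to come from a fitted classifier.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

fold_metrics = {
    "Accuracy":  accuracy_score(y_true, y_pred),
    "Precision": precision_score(y_true, y_pred),
    "Recall":    recall_score(y_true, y_pred),
    "F1 score":  f1_score(y_true, y_pred),
    "AUC":       roc_auc_score(y_true, y_score),
}
print(fold_metrics)
```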

4 Results

4.1 Sliding windowing with 50% overlap

4.1.1 PD patients OFF medication

Table 1 presents the results associated with discriminating PD patients OFF medication against healthy controls. It was observed from the results that Higuchi FD gives the best performance among all feature extraction measures. Every classifier with Higuchi’s FD yielded high accuracies. Figure 4 gives the comparison of the accuracy of the different classifiers with Higuchi’s FD. The KNN classifier with Higuchi’s FD gave the highest accuracy of \(94.45\%\) ((a) in Fig. 2). The accuracies of XGB, LGB, QDA, ET, and RF with Higuchi’s FD were 93.00%, 92.27%, 91.86%, 91.27%, and 90.86%, respectively. Petrosian FD, permutation entropy, and spectral entropy also yielded considerably good accuracies, but lower than Higuchi FD and in that order. Katz FD gave the lowest accuracy.
Moreover, the order of accuracy score for the feature extraction methods was retained in every other classifier, with Higuchi FD performing the best and Katz FD performing the worst. AUC, recall, precision, and F1 score results were in line with the accuracy metric across all the feature extraction–classifier models.
Table 1
Classification results for PD patients OFF Meds vs. HC with 50% overlap

| Classifier model | Feature extraction | Accuracy | AUC | Recall | Precision | F1 score |
|---|---|---|---|---|---|---|
| KNN | Higuchi | 94.45 | 98.38 | 93.61 | 95.65 | 94.61 |
| KNN | Katz | 74.13 | 81.53 | 89.53 | 69.46 | 78.19 |
| KNN | Petrosian | 89.82 | 95.76 | 88.65 | 91.40 | 89.98 |
| KNN | Spec-ent | 86.20 | 92.95 | 90.78 | 84.15 | 87.31 |
| KNN | Perm-ent | 89.67 | 95.57 | 89.41 | 90.79 | 90.08 |
| QDA | Higuchi | 91.86 | 97.32 | 91.37 | 92.87 | 92.11 |
| QDA | Katz | 78.10 | 86.29 | 83.39 | 76.51 | 79.78 |
| QDA | Petrosian | 85.68 | 93.63 | 83.86 | 87.93 | 85.81 |
| QDA | Spec-ent | 84.50 | 92.77 | 90.06 | 82.13 | 85.87 |
| QDA | Perm-ent | 84.94 | 93.15 | 82.98 | 87.69 | 85.23 |
| XGB | Higuchi | 93.00 | 98.13 | 92.41 | 94.07 | 93.22 |
| XGB | Katz | 77.06 | 84.72 | 80.90 | 76.31 | 78.47 |
| XGB | Petrosian | 88.59 | 95.55 | 87.44 | 90.18 | 88.76 |
| XGB | Spec-ent | 87.10 | 94.68 | 88.75 | 86.88 | 87.78 |
| XGB | Perm-ent | 88.03 | 95.28 | 87.31 | 89.61 | 88.42 |
| LGB | Higuchi | 92.27 | 97.81 | 91.21 | 93.82 | 92.47 |
| LGB | Katz | 77.29 | 85.20 | 81.38 | 76.39 | 78.77 |
| LGB | Petrosian | 88.24 | 95.22 | 86.80 | 90.08 | 88.39 |
| LGB | Spec-ent | 86.76 | 94.15 | 88.67 | 86.42 | 87.49 |
| LGB | Perm-ent | 87.70 | 95.04 | 86.12 | 90.01 | 88.00 |
| ET | Higuchi | 91.27 | 97.21 | 88.78 | 94.18 | 91.36 |
| ET | Katz | 75.79 | 83.24 | 83.06 | 73.65 | 78.04 |
| ET | Petrosian | 87.76 | 94.51 | 84.94 | 90.79 | 87.74 |
| ET | Spec-ent | 85.89 | 93.59 | 87.48 | 85.82 | 86.63 |
| ET | Perm-ent | 87.12 | 94.31 | 84.73 | 90.10 | 87.32 |
| RF | Higuchi | 90.86 | 96.48 | 88.94 | 93.27 | 91.01 |
| RF | Katz | 74.76 | 85.26 | 80.09 | 73.56 | 76.66 |
| RF | Petrosian | 86.29 | 93.59 | 83.78 | 89.06 | 86.29 |
| RF | Spec-ent | 84.69 | 92.52 | 86.12 | 84.86 | 85.45 |
| RF | Perm-ent | 85.81 | 93.35 | 84.13 | 88.29 | 86.13 |

4.1.2 PD patients ON medication

Table 2 presents the results associated with discriminating PD patients ON medication against healthy controls. Similar to the observations for PD patients OFF medication, every classifier delivered higher accuracy with Higuchi’s FD than with the other measures. The KNN classifier with Higuchi’s FD performed the best, achieving an average accuracy of \(96.46\%\), while RF performed the worst with \(93.94\%\) accuracy (Table 2 and (b) in Fig. 2). The accuracies of XGB, LGB, QDA, and ET with Higuchi’s FD were 95.26%, 94.81%, 94.18%, and 94.70%, respectively. Petrosian FD, permutation entropy, and spectral entropy also produced considerably good accuracies after Higuchi FD, in that order. Katz FD gave the lowest accuracy, indicating that it failed to capture the relevant PD features of the EEG signal. AUC, recall, precision, and F1 score were similar to the accuracy metric in every feature extraction–classifier model.
Table 2
Classification results for PD patients ON Meds vs. HC with 50% overlap

| Classifier model | Feature extraction | Accuracy | AUC | Recall | Precision | F1 score |
|---|---|---|---|---|---|---|
| KNN | Higuchi | 96.46 | 98.91 | 96.23 | 97.28 | 96.75 |
| KNN | Katz | 76.26 | 83.74 | 92.56 | 71.99 | 80.98 |
| KNN | Petrosian | 90.42 | 96.66 | 93.86 | 89.29 | 91.50 |
| KNN | Spec-ent | 88.42 | 95.20 | 90.83 | 88.26 | 89.49 |
| KNN | Perm-ent | 91.07 | 96.31 | 90.95 | 92.62 | 91.75 |
| QDA | Higuchi | 94.18 | 98.70 | 95.20 | 94.25 | 94.71 |
| QDA | Katz | 79.43 | 86.68 | 84.83 | 79.04 | 81.82 |
| QDA | Petrosian | 87.34 | 94.98 | 88.48 | 88.46 | 88.46 |
| QDA | Spec-ent | 85.40 | 93.08 | 90.67 | 83.79 | 87.06 |
| QDA | Perm-ent | 87.29 | 94.98 | 88.01 | 88.75 | 88.35 |
| XGB | Higuchi | 95.26 | 99.01 | 96.08 | 95.35 | 95.70 |
| XGB | Katz | 78.11 | 85.80 | 83.96 | 77.73 | 80.72 |
| XGB | Petrosian | 89.83 | 96.12 | 91.17 | 90.43 | 90.78 |
| XGB | Spec-ent | 88.42 | 95.20 | 90.83 | 88.26 | 89.49 |
| XGB | Perm-ent | 89.77 | 96.49 | 90.91 | 90.51 | 90.66 |
| LGB | Higuchi | 94.81 | 98.99 | 95.8 | 94.75 | 95.29 |
| LGB | Katz | 78.91 | 86.66 | 84.79 | 78.36 | 81.44 |
| LGB | Petrosian | 88.84 | 96.00 | 90.30 | 89.50 | 89.87 |
| LGB | Spec-ent | 87.77 | 94.93 | 90.31 | 87.61 | 88.91 |
| LGB | Perm-ent | 89.79 | 96.34 | 91.62 | 89.96 | 90.75 |
| ET | Higuchi | 94.70 | 98.77 | 95.32 | 95.05 | 95.17 |
| ET | Katz | 76.63 | 84.64 | 87.02 | 74.45 | 80.24 |
| ET | Petrosian | 88.64 | 95.73 | 89.63 | 89.69 | 89.64 |
| ET | Spec-ent | 86.99 | 94.07 | 90.27 | 86.44 | 88.28 |
| ET | Perm-ent | 88.53 | 95.66 | 89.68 | 89.43 | 89.52 |
| RF | Higuchi | 93.94 | 98.25 | 94.69 | 94.31 | 94.48 |
| RF | Katz | 76.59 | 84.74 | 84.36 | 75.62 | 79.73 |
| RF | Petrosian | 88.31 | 95.22 | 89.39 | 89.35 | 89.35 |
| RF | Spec-ent | 85.95 | 93.50 | 88.27 | 86.24 | 87.20 |
| RF | Perm-ent | 88.14 | 95.43 | 88.96 | 89.35 | 89.12 |
The mean accuracy of every feature extraction measure was calculated across all classifiers ((b) in Fig. 3). Similarly, the mean accuracy of every classifier was calculated across all feature extraction measures ((a) in Fig. 3). Furthermore, from Fig. 4, it can be observed that RF and ET give the smallest mean accuracy difference between OFF Meds and ON Meds across the 15 folds. However, their mean accuracy for both ON and OFF Meds is lower than that of KNN and XGB. Additionally, the variability of RF and ET appears to be much higher than that of KNN and XGB, since their line graphs show larger variation. The classification accuracies obtained for ON Meds were also comparatively higher than those for OFF Meds across feature extraction and classifier models (Figs. 2 and 4).
KNN and XGB give the highest accuracies among all classifiers (Fig. 4). These accuracies are comparable, with KNN giving a slightly higher average accuracy. However, as observed in Fig. 4, the line graph of KNN is flatter than that of XGB and the other classifiers for both ON Meds and OFF Meds, which indicates a lower standard deviation. For QDA, the difference in accuracies between OFF Meds and ON Meds is much larger than for the other classifiers across the 15 folds. Although QDA yields high accuracy for ON Meds, it gives the lowest accuracy for OFF Meds, which indicates that QDA does not generalize accurately over the two datasets, OFF Meds and ON Meds. Thus, it can be inferred that QDA cannot make accurate predictions from the relevant features captured by Higuchi FD from the EEG signal to discriminate between PD patients and HC.
XGBoost (XGB) classifier was chosen as the model to obtain the feature importances since it gave a very high classification accuracy for both ON and OFF Meds datasets. In Python, XGB has an inbuilt feature_importances_ attribute holding the values of the feature importances of the input parameters, making it a more feasible alternative to obtain feature importances than KNN, which does not have such an inbuilt function. Additionally, KNN is a distance-based algorithm, which makes deriving feature importances a complex process.
The feature importance values, when plotted on a topographic plot, throw light upon the regions of the brain that have a greater influence on the classification of PD and serve as differentiating characteristics between PD patients and HC (Fig. 5). This can possibly also imply that the EEG signal from specific areas of the brain plays a more important role in detecting PD than others. These areas may be more impacted by PD than other areas.
There is a clear pattern observed in the topographic plots (Fig. 5), which represent the areas of the brain that have the highest feature importance for detecting Parkinson’s disease (PD) according to the XGB algorithm. However, for Katz FD and spectral entropy, the classification accuracy for distinguishing between PD patients and HC was observed to be extremely low (Fig. 3). Furthermore, as evident from Fig. 5, the regions having higher feature importance for these two measures differ greatly between the ON Meds and OFF Meds topographic plots, showing no clear pattern. The performance of Katz FD and spectral entropy also trails behind, with much lower accuracies, in all classifiers employed in this study (Fig. 2). Therefore, it can be inferred that Katz FD and spectral entropy do not provide reliable information regarding feature importances.
In the topographic plots for Higuchi FD, Petrosian FD, and permutation entropy, red areas, which indicate high feature importance, are commonly found in the midbrain and occipital regions. The midbrain region around channels C3, Cz, T3, and Pz shows higher feature importance than other regions. These electrodes overlie the brain’s motor cortex, so it is to be expected that the fractal dimensions of the EEG channels from this area would have higher feature importance than others. The occipital region is responsible for visual cortical activity. Hence, the topographic plots suggest that PD might also be linked to the visual cortical activity of the brain. Studies such as [96–98] discuss the connection between the retina and Parkinson’s disease (PD).

4.2 Sliding windowing with 90% overlap

4.2.1 PD patients OFF medication

Table 3 presents the results associated with discriminating PD patients OFF medication against healthy controls (HC). The overall performance of all classifiers and feature extraction methods was observed to increase drastically compared to the sliding windowing with \(50\%\) overlap. The results show that Higuchi FD yielded the best performance across all feature extraction measures, similar to the case in the sliding windowing with \(50\%\) overlap. Every classifier with Higuchi FD generated very high accuracies. The KNN classifier with Higuchi FD produced the highest accuracy of \(99.45\%\). Furthermore, the following hyperparameters yielded the best classifier for KNN with Higuchi FD - leaf_size = 30, metric = “euclidean,” n_neighbors = 6, p = 2, and weights = “uniform.” The accuracies yielded by XGB, LGB, QDA, ET, and RF with Higuchi FD were \(97.90\%, 96.22\%, 93.35\%, 98.22\%\), and \(96.99\%\), respectively. The order of accuracy score for the feature extraction methods was retained in every other classifier, with Higuchi FD performing the best while Katz FD performed the worst. AUC, recall, precision, and F1 score results were in line with the accuracy metric across the feature extraction–classifier model.
Table 3
Classification results for PD patients OFF Meds vs. HC with 90% overlap

| Classifier model | Feature extraction | Accuracy | AUC | Recall | Precision | F1 score |
|---|---|---|---|---|---|---|
| KNN | Higuchi | 99.45 | 99.96 | 99.11 | 99.89 | 99.50 |
| KNN | Katz | 83.87 | 91.52 | 87.99 | 83.32 | 85.59 |
| KNN | Petrosian | 99.48 | 99.98 | 99.41 | 99.63 | 99.52 |
| KNN | Spec-ent | 97.19 | 99.52 | 98.39 | 96.54 | 97.45 |
| KNN | Perm-ent | 99.42 | 99.98 | 99.35 | 99.60 | 99.47 |
| QDA | Higuchi | 93.35 | 98.10 | 93.51 | 94.34 | 93.92 |
| QDA | Katz | 80.93 | 89.16 | 86.11 | 80.28 | 83.09 |
| QDA | Petrosian | 87.87 | 95.17 | 87.56 | 89.90 | 88.71 |
| QDA | Spec-ent | 86.69 | 94.12 | 91.04 | 85.56 | 88.21 |
| QDA | Perm-ent | 87.31 | 94.80 | 86.84 | 89.73 | 88.25 |
| XGB | Higuchi | 97.90 | 99.77 | 97.72 | 98.45 | 98.08 |
| XGB | Katz | 83.87 | 91.52 | 87.99 | 83.32 | 85.59 |
| XGB | Petrosian | 95.79 | 99.25 | 95.24 | 96.98 | 96.10 |
| XGB | Spec-ent | 93.79 | 98.45 | 94.63 | 94.06 | 94.34 |
| XGB | Perm-ent | 95.55 | 99.17 | 95.13 | 96.71 | 95.91 |
| LGB | Higuchi | 96.22 | 99.43 | 95.90 | 97.18 | 96.53 |
| LGB | Katz | 82.88 | 90.64 | 87.39 | 82.27 | 84.75 |
| LGB | Petrosian | 93.42 | 98.35 | 92.38 | 95.40 | 93.86 |
| LGB | Spec-ent | 91.20 | 97.30 | 92.43 | 91.57 | 92.00 |
| LGB | Perm-ent | 93.14 | 98.20 | 92.45 | 94.93 | 93.67 |
| ET | Higuchi | 98.22 | 99.86 | 97.50 | 99.25 | 98.37 |
| ET | Katz | 81.87 | 90.27 | 88.34 | 80.33 | 84.14 |
| ET | Petrosian | 97.38 | 99.74 | 96.43 | 98.74 | 97.57 |
| ET | Spec-ent | 94.53 | 98.85 | 94.76 | 95.21 | 94.99 |
| ET | Perm-ent | 97.38 | 99.71 | 96.54 | 98.66 | 97.59 |
| RF | Higuchi | 96.99 | 99.62 | 96.18 | 98.30 | 97.22 |
| RF | Katz | 81.63 | 89.51 | 86.52 | 81.03 | 83.68 |
| RF | Petrosian | 95.98 | 99.33 | 94.88 | 97.68 | 96.26 |
| RF | Spec-ent | 92.52 | 97.89 | 92.79 | 93.49 | 93.13 |
| RF | Perm-ent | 95.33 | 99.12 | 94.23 | 97.18 | 95.68 |

4.2.2 PD patients ON medication

Table 4 presents the results associated with discriminating PD patients ON medication against healthy controls. The overall performance of all classifiers and feature extraction methods was again observed to increase drastically compared to the sliding windowing with \(50\%\) overlap. The results show that Higuchi FD yielded the best performance across all other feature extraction methods, similar to the case in the sliding windowing with \(50\%\) overlap. Every classifier with Higuchi FD generated very high accuracies. KNN classifier with Higuchi FD produced the highest accuracy of \(99.65\%\). Furthermore, the following hyperparameters yielded the best classifier for KNN with Higuchi FD - leaf_size = 30, metric = “Manhattan,” n_neighbors = 3, and weights = “uniform.” The accuracies yielded by XGB, LGB, QDA, ET, and RF with Higuchi FD were \(99.26\%, 94.81\%, 94.18\%, 94.70\%\), and \(93.94\%\), respectively. The order of accuracy score for the feature extraction methods was retained in every other classifier, with Higuchi FD performing the best while Katz FD performed the worst (Fig. 6). AUC, recall, precision, and F1 score results aligned with the accuracy metric across the feature extraction–classifier model.
Table 4
Classification results for PD patients ON Meds vs. HC with 90% overlap

| Classifier model | Feature extraction | Accuracy | AUC | Recall | Precision | F1 score |
|---|---|---|---|---|---|---|
| KNN | Higuchi | 99.65 | 99.95 | 99.59 | 99.77 | 99.68 |
| KNN | Katz | 83.11 | 90.47 | 95.11 | 78.62 | 86.08 |
| KNN | Petrosian | 99.60 | 99.98 | 99.34 | 99.92 | 99.63 |
| KNN | Spec-ent | 97.02 | 99.55 | 98.41 | 96.24 | 97.31 |
| KNN | Perm-ent | 99.57 | 99.98 | 99.37 | 99.85 | 99.61 |
| QDA | Higuchi | 94.18 | 98.70 | 95.20 | 94.25 | 94.71 |
| QDA | Katz | 82.46 | 90.44 | 86.64 | 82.35 | 84.44 |
| QDA | Petrosian | 89.62 | 96.47 | 89.76 | 91.17 | 90.46 |
| QDA | Spec-ent | 86.63 | 94.13 | 91.41 | 85.25 | 88.22 |
| QDA | Perm-ent | 88.83 | 95.99 | 89.00 | 90.48 | 89.73 |
| XGB | Higuchi | 95.26 | 99.01 | 96.08 | 95.35 | 95.70 |
| XGB | Katz | 84.78 | 92.36 | 88.66 | 84.42 | 86.49 |
| XGB | Petrosian | 96.08 | 99.34 | 96.33 | 96.51 | 96.42 |
| XGB | Spec-ent | 94.08 | 98.65 | 95.35 | 93.94 | 94.64 |
| XGB | Perm-ent | 95.62 | 99.25 | 96.12 | 95.91 | 96.01 |
| LGB | Higuchi | 94.81 | 98.99 | 95.88 | 94.75 | 95.29 |
| LGB | Katz | 84.01 | 91.76 | 86.60 | 83.36 | 85.89 |
| LGB | Petrosian | 93.86 | 98.61 | 94.25 | 94.53 | 94.39 |
| LGB | Spec-ent | 92.00 | 97.76 | 93.81 | 91.78 | 92.78 |
| LGB | Perm-ent | 93.75 | 98.55 | 94.36 | 94.26 | 94.31 |
| ET | Higuchi | 94.70 | 98.77 | 95.32 | 95.05 | 95.17 |
| ET | Katz | 83.21 | 91.20 | 89.54 | 81.66 | 85.41 |
| ET | Petrosian | 97.84 | 99.76 | 97.75 | 98.31 | 98.03 |
| ET | Spec-ent | 95.06 | 99.07 | 96.48 | 94.61 | 95.53 |
| ET | Perm-ent | 97.72 | 99.72 | 97.67 | 98.16 | 97.91 |
| RF | Higuchi | 93.94 | 98.25 | 94.69 | 94.31 | 94.48 |
| RF | Katz | 83.25 | 90.86 | 87.25 | 83.11 | 85.13 |
| RF | Petrosian | 96.59 | 99.45 | 96.55 | 97.22 | 96.88 |
| RF | Spec-ent | 93.38 | 98.26 | 94.28 | 93.67 | 93.98 |
| RF | Perm-ent | 95.82 | 99.25 | 95.94 | 96.42 | 96.17 |
The mean accuracy of every feature extraction method was calculated across all classifiers ((b) in Fig. 7). Similarly, (a) in Fig. 7 gives the mean accuracy of every classifier across all feature extraction methods.
XGB performed better than extra trees classifier (ET) for a sliding windowing with \(50\%\) overlap. However, ET delivered slightly higher accuracy surpassing XGB with \(90\%\) overlap (Figs. 7 and 8). Furthermore, QDA gave the lowest mean accuracy when observed as an average across all feature extraction methods for both ON Meds and OFF Meds (Fig. 7), which is a drastic change from \(50\%\) overlapping, where RF yielded the lowest accuracy (Fig. 3). However, with \(90\%\) overlapping, RF’s average accuracy was 92.60%, which is higher than that of LGB and QDA.
From Fig. 8, it is evident that KNN gives the best performance. Also, KNN gives a similar accuracy level for both ON and OFF Meds. KNN’s plot is flatter than the remaining models, indicating low variation in accuracy scores across the k-folds. Likewise, QDA performs the worst with the lowest accuracies and a considerable difference in performance between OFF and ON Meds.
Similar to the feature importance obtained during the execution of a 50% overlapping window, the XGB classifier was chosen to obtain the feature importances since it yielded a very high classification accuracy for both ON and OFF Meds cases.
The channels Cz, O2, O1, C3, Pz, and AF4 were found to have higher feature importance, whereas Fp2, CP2, CP5, FC2, P7, and P3 were of lower importance (Fig. 9). Higuchi FD, Petrosian FD, and permutation entropy show high feature importance in the midbrain and occipital regions of the brain. These three feature extraction methods have higher accuracy than Katz FD and spectral entropy. Hence, an inference can be made that the EEG features derived from the midbrain region, which overlies the motor cortex, and from the occipital region are instrumental in differentiating between a PD patient and a healthy control (HC). The results obtained are identical to those in the \(50\%\) overlapping case.
Figure 10 shows, for each feature extraction method, the accuracy of the sliding windows with \(50\%\) and \(90\%\) overlap averaged across all classifiers. Higuchi FD consistently outperforms the other methods, yielding the highest average accuracy for both windowing schemes. All feature extraction methods gain substantially in accuracy when moving from \(50\%\) to \(90\%\) overlap: the increases for Higuchi FD, Petrosian FD, Katz FD, spectral entropy, and permutation entropy are \(2.63\%\), \(6.98\%\), \(6.06\%\), \(6.33\%\), and \(6.80\%\), respectively.
Another key observation from Fig. 10 is that the gap between Higuchi FD and the other methods narrows markedly when moving from the \(50\%\)- to the \(90\%\)-overlap window: the gap between Higuchi and Petrosian FD is more than \(5\%\) with \(50\%\) overlap but shrinks to less than \(1\%\) with \(90\%\) overlap. A sliding window with \(90\%\) overlap also produces more input instances and therefore takes longer to process than one with \(50\%\) overlap. Based on these observations, Higuchi FD captures the complexities of the EEG signal most efficiently, since it yields above \(90\%\) average accuracy for both \(50\%\) and \(90\%\) window overlap.
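For clarity, the snippet below is a small sketch of sliding-window segmentation showing how the overlap fraction controls the number of windows, and hence input instances, obtained from the same recording; the sampling rate and window length are illustrative assumptions rather than the study's exact settings.

```python
# Sketch: segment a single-channel signal into fixed-length windows with a given
# fractional overlap; 90% overlap yields roughly five times as many windows as 50%.
import numpy as np

def sliding_windows(signal: np.ndarray, win_len: int, overlap: float) -> np.ndarray:
    """Return an array of shape (n_windows, win_len)."""
    step = max(1, int(round(win_len * (1.0 - overlap))))
    starts = range(0, len(signal) - win_len + 1, step)
    return np.stack([signal[s:s + win_len] for s in starts])

fs = 512                                                      # assumed sampling rate (Hz)
signal = np.random.default_rng(0).standard_normal(60 * fs)    # 60 s of toy data
win_len = 2 * fs                                              # assumed 2-s window length

for overlap in (0.5, 0.9):
    windows = sliding_windows(signal, win_len, overlap)
    print(f"overlap = {overlap:.0%}: {windows.shape[0]} windows")
```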

5 Discussion

This study set out to identify the optimal feature extraction and machine learning methods for detecting and analyzing Parkinson's disease (PD) in resting-state EEG. It also aimed to analyze the EEG signal to understand which brain regions are most affected by PD and whether the significance of these regions can be highlighted using fractal dimensions (FD). For this purpose, different segmentation methods, feature extraction measures, and machine learning models were implemented and analyzed extensively, as shown in the previous sections. The results shed light on the efficiency of fractal dimensions (FD) as a means of feature extraction from EEG signals: Higuchi FD considerably outperforms the other methods across all employed machine learning models. Among all the methods implemented in this study, the most effective configuration for detecting PD in resting-state EEG is the following:
  • Segmentation: sliding windows with \(90\%\) overlap.
  • Feature Extraction: Higuchi’s technique to compute the fractal dimension (FD) of every window.
  • Classifier: k-nearest neighbors (KNN).
  • Accuracy yielded: \(99.65\pm 0.15\%\) for “ON Meds vs. HC” and \(99.45\pm 0.18\%\) for “OFF Meds vs. HC.”
The accuracies of almost all feature extraction methods improved drastically with the \(90\%\)-overlap window compared to \(50\%\) (Fig. 10), an increase attributable to the larger number of training instances available at \(90\%\) overlap. However, the boost is less pronounced for Higuchi FD than for the other methods: its performance rises by only \(2.63\%\), whereas the others gain roughly \(6.5\%\) (Fig. 10). Higuchi FD already reaches accuracies between \(90\) and \(95\%\) with the \(50\%\)-overlap window, while the others remain much lower. Thus, Higuchi FD can capture relevant information from the EEG signal even with relatively few training instances. Furthermore, Higuchi's method creates multiple time series by repeatedly sub-sampling the signal within the window, which enables it to extract relevant information accurately even for larger windows with less overlap. Petrosian FD and permutation entropy also perform well, especially with \(90\%\) overlap, but their performance drops with \(50\%\) overlap. At the other end of the spectrum, Katz FD and spectral entropy consistently perform poorly, with Katz FD giving the lowest accuracies by a considerable margin; spectral entropy likewise fails to capture the complexities of the EEG signal in the temporal domain.
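To illustrate the sub-sampling procedure described above, the following is a compact sketch of Higuchi's algorithm: for each scale k it builds k interleaved sub-series, averages their normalized curve lengths L(k), and estimates the fractal dimension as the slope of log L(k) versus log(1/k). The choice kmax = 10 is an assumption, not the study's setting; open-source implementations such as PyEEG [79] follow the same scheme.

```python
# Sketch of Higuchi's algorithm (assumed kmax): build k interleaved sub-series per
# scale, average their normalized curve lengths L(k), and take the fractal dimension
# as the slope of log L(k) versus log(1/k).
import numpy as np

def higuchi_fd(x: np.ndarray, kmax: int = 10) -> float:
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                            # k interleaved sub-series
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            curve = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)     # length normalization factor
            lengths.append(curve * norm / k)
        lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
    return slope

rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(1024)))                  # white noise: FD near 2
print(higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 1024))))    # smooth sine: FD near 1
```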
The topographic plots exemplify explainable AI by illuminating the brain regions with elevated feature importance in the classification of Parkinson's disease by the extreme gradient boosting (XGBoost) classifier. The topo plots in Fig. 9 show patterns similar to those in Fig. 5. Higuchi FD (Fig. 5: a1 and a2), Petrosian FD (Fig. 5: c1 and c2), and permutation entropy (Fig. 5: e1 and e2) show high feature importance over the midbrain and occipital regions, which are the regions adversely affected by PD. Prominent symptoms of PD include rhythmic shaking (tremors), loss of balance, and body stiffness; these motor functions are controlled by the motor cortex, which lies in the midbrain region, the same region that the topo plots of Higuchi FD, Petrosian FD, and permutation entropy highlight with high feature importance. Therefore, it can be inferred that features extracted from the EEG channels over the motor cortex were key contributors to the classification of PD and that these features were captured very accurately by the fractal dimensions. This reinforces that Higuchi FD, Petrosian FD, and permutation entropy extract relevant features and accurately capture the complexities of the EEG signal that are instrumental in differentiating a PD patient from a healthy person. These feature extraction methods can discern that the information captured from the electrodes around the motor cortex plays a more decisive role in the classification of PD than other parts of the brain, except the occipital cortex, which the topo plots also highlight as a region of high feature importance. The high feature importance of the occipital cortex suggests that PD might be related to visual cortical activity, as investigated in [96]. Studies such as [97] show that PD also results in sensory disorders such as loss of visual sharpness, slowed saccadic eye movements, reduced color perception, and a reduced blink reflex. The link between ocular impairments and Parkinson's disease is significant and would be expected to affect the occipital cortex, and this relation has also been captured in the topo plots of this study. In contrast, Katz FD and spectral entropy produce topo plots very different from those of the other methods. Both methods give much lower classification accuracy, which is reflected in their topo plots, since the brain areas with higher feature importance lie around neither the motor cortex nor the occipital cortex. Therefore, Katz FD and spectral entropy cannot accurately capture the complexities of the EEG signal in the time domain.
This study delivers an architectural pipeline for detecting PD in resting-state EEG while emphasizing fractal dimensions (FD) as an effective way of extracting salient features from EEG signals. The final model is robust to the impact of medication on the symptoms of Parkinson's disease (PD) and identifies the signs of the underlying disease in resting-state EEG with almost equal accuracy in patients both OFF and ON medication. There is scope to strengthen this study further and to better understand the efficiency of fractal dimension methods and their application to time-series datasets. For example, windows of varying sizes could be tested to determine the optimal window length for calculating fractal dimensions. Further research could also investigate the ocular symptoms of Parkinson's disease (PD) and how they manifest in the EEG signal captured from the occipital cortex.
A limitation of the proposed model is the manual removal of artifacts from the EEG signal using independent component analysis; the whole pipeline could be executed without human intervention if this artifact removal were automated using deep learning procedures [9]. While EEG offers remarkable temporal resolution capable of capturing nuanced neuronal fluctuations, its spatial resolution is limited, making it challenging to pinpoint precise brain regions and thus to determine the specific areas of the brain associated with Parkinson's disease through EEG. Nevertheless, our adoption of explainable AI, manifested in topographic brain plots, demonstrates the potential to extract pertinent spatial information from EEG data via machine learning. The topographic plots from our study emphasize the pronounced feature importance of the midbrain region in Parkinson's disease detection, consistent with current research, underscoring their efficacy in discerning spatial information.
Moreover, our study demonstrates that EEG signals, when processed using the Higuchi fractal dimension and combined with machine learning models such as KNN, can effectively differentiate between healthy individuals and Parkinson's disease patients. Our approach also distinguishes between patients based on their medication status, an aspect of considerable clinical importance. Some of the potential clinical implications are as follows:
  • Early Detection and Diagnosis: The high accuracy achieved by our model indicates the potential for it to serve as a non-invasive diagnostic tool. Early detection of Parkinson’s disease can enable timely medical intervention, potentially slowing the progression of the disease.
  • Personalized Treatment: The machine learning model can be further fine-tuned with more patient data, facilitating more personalized treatments based on the individual’s EEG patterns.
  • Continuous Patient Monitoring: Wearable EEG devices combined with our detection algorithm can be used for continuous monitoring, enabling real-time feedback to both patients and caregivers about the current neurological state. Fractal dimensions, due to their dynamic nature, are especially adept at capturing biomarkers in real-time data.
  • Telemedicine and Remote Monitoring: Especially pertinent in the present scenario, our approach can be integrated into telemedicine platforms, allowing remote monitoring of Parkinson’s patients and timely interventions based on EEG readings.
  • Research Tool: The model can serve as a tool in clinical research, providing insights into the neurological changes that accompany different stages of Parkinson’s disease or the impacts of experimental treatments. Furthermore, this model can be packaged into software or an electronic platform that clinicians can use as an automated tool that provides preliminary diagnosis or likelihood of a neurodegenerative condition.

6 Conclusion

Parkinson's disease (PD) is a long-term progressive neurodegenerative disorder that causes motor and cognitive impairments. EEG analysis can provide reliable information about the condition of these neurological impairments, and machine learning (ML) is immensely useful for building automated pipelines that diagnose PD from EEG signals. However, noise and artifacts in EEG signals can decrease the efficiency of such pipelines, and traditional feature extraction methods, such as partial directed coherence (PDC), work only in the frequency domain of the EEG signal. Recently, a few studies have employed fractal dimension (FD) and entropy as feature extraction measures in ML pipelines for detecting PD. However, to the best of the authors' knowledge, no exploratory study exists on the various FD and entropy measures used in the feature extraction phase for neural signal analysis. We therefore conducted a comparative analysis of the FD and entropy measures that, as feature extraction methods, can discriminate PD patients who are ON and OFF medication from healthy controls (HC) within an ML pipeline. For this purpose, different segmentation methods, feature extraction measures, and machine learning models were implemented and analyzed extensively to determine the best architectural pipeline. The results show that a larger overlap between segmented windows, which produces more input instances, effectively improves the discriminating capacity. Higuchi FD outperforms the other feature extraction methods across all employed machine learning models. The k-nearest neighbors (KNN) classifier performs consistently well and gives the highest accuracies for both PD cases discussed in this study, yielding mean accuracies of \(99.65\pm 0.15\%\) for PD patients ON medication and \(99.45\pm 0.18\%\) for PD patients OFF medication. The final model is robust to the impact of medication on the symptoms of PD and identifies the signs of the disease in resting-state EEG with almost equal accuracy in patients both OFF and ON medication.
This study contributes to the body of knowledge with an architectural pipeline for detecting PD in resting-state EEG while emphasizing fractal dimensions (FD) as an effective way of extracting salient features from EEG signals. A limitation of the proposed model is the manual removal of artifacts from the EEG signal using independent component analysis (ICA); if this artifact removal is automated using deep learning (DL) procedures in future work, the whole pipeline can be executed without human intervention. Furthermore, the study was conducted on 31 subjects, and the data were collected using 32 EEG channels, so the model needs to be tested on a larger dataset comprising more subjects and EEG channels to fully understand its applicability and usability in the real world.

Declarations

Conflicts of interest

The authors have declared no conflicts of interest in this work.


There were no human participants or animals involved in this work. The authors have no relevant financial or non-financial interests to disclose.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
4. Ferreira-Sánchez MDR, Moreno-Verdú M, Cano-de-la-Cuerda R (2020) Quantitative measurement of rigidity in Parkinson's disease: a systematic review. Sensors (Basel) 20(3):880
6. Hill EJ, Mangleburg CG, Alfradique-Dunham I, Ripperger B, Stillwell A, Saade H, Rao S, Fagbongbe O, von Coelln R, Tarakad A, Hunter C, Dawe RJ, Jankovic J, Shulman LM, Buchman AS, Shulman JM (2021) Quantitative mobility measures complement the MDS-UPDRS for characterization of Parkinson's disease heterogeneity. Parkinson Relat Disorders 84:105–111. https://doi.org/10.1016/j.parkreldis.2021.02.006
8. Oh SL, Hagiwara Y, Raghavendra U, Yuvaraj R, Arunkumar N, Murugappan M, Acharya UR (2020) A deep learning approach for Parkinson's disease diagnosis from EEG signals. Neural Comput Appl 32(15):10927–10933
10. Gómez-López JC, Escobar JJ, Díaz AF, Damas M, Gil-Montoya F, González J (2022) Boosting the convergence of a GA-based wrapper for feature selection problems on high-dimensional data. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '22), pp 431–434. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3520304.3528800
14. Geraedts VJ, Boon LI, Marinus J, Gouw AA, van Hilten JJ, Stam CJ, Tannemaat MR, Contarino MF (2018) Clinical correlates of quantitative EEG in Parkinson disease. Neurology 91(19):871–883. https://doi.org/10.1212/WNL.0000000000006473
15. Jamalullah RS, Mary Gladence L (2023) Application of adaptive Chebyshev and fast-Fourier transform to identify bradykinesia in humans from EEG data features. Int J Intell Syst Appl Eng 11(2):119–125
16. Chiang J, Wang ZJ, McKeown MJ (2009) EEG source extraction by autoregressive source separation reveals abnormal synchronization in Parkinson's disease. In: 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp 1868–1872. https://doi.org/10.1109/IEMBS.2009.5332613
25.
34. Babu A, Ranpariya S, Sinha DK, Mandal D. Deep learning enabled perceptive wearable sensor: an interactive gadget for tracking movement disorder. Adv Mater Technol 2300046. https://doi.org/10.1002/admt.202300046
35. Cohen MX (2014) Analyzing Neural Time Series Data: Theory and Practice. MIT Press
37. Singh SP (2014) Magnetoencephalography: basic principles. Ann Indian Acad Neurol 17(Suppl 1):107–12
38.
39. Tzimourta KD, Christou V, Tzallas AT, Giannakeas N, Astrakas LG, Angelidis P, Tsalikakis D, Tsipouras MG (2021) Machine learning algorithms and statistical approaches for Alzheimer's disease analysis based on resting-state EEG recordings: a systematic review. Int J Neural Syst 31(05):2130002. https://doi.org/10.1142/S0129065721300023
43.
44. Bhattacharjee S, Ghatak S, Dutta S, Chatterjee B, Gupta M (2019) A survey on comparison analysis between EEG signal and MRI for brain stroke detection. In: Abraham A, Dutta P, Mandal JK, Bhattacharya A, Dutta S (eds) Emerging Technologies in Data Mining and Information Security, pp 377–382. Springer, Singapore. https://doi.org/10.1007/978-981-13-1501-5_32
54. Syam V, Safal S, Bhutia O, Singh AK, Giri D, Bhandari SS, Panigrahi R (2023) A non-invasive method for prediction of neurodegenerative diseases using gait signal features. Procedia Comput Sci 218:1529–1541. https://doi.org/10.1016/j.procs.2023.01.131
55. di Biase L, Di Santo A, Caminiti ML, De Liso A, Shah SA, Ricci L, Di Lazzaro V (2020) Gait analysis in Parkinson's disease: an overview of the most accurate markers for diagnosis and symptoms monitoring. Sensors (Basel) 20(12):3529
58. Drotár P, Mekyska J, Smékal Z, Rektorová I, Masarová L, Faundez-Zanuy M (2015) Contribution of different handwriting modalities to differential diagnosis of Parkinson's disease. In: 2015 IEEE International Symposium on Medical Measurements and Applications (MeMeA) Proceedings, pp 344–348. https://doi.org/10.1109/MeMeA.2015.7145225
62. Handojoseno AMA, Shine JM, Nguyen TN, Tran Y, Lewis SJG, Nguyen HT (2012) The detection of freezing of gait in Parkinson's disease patients using EEG signals based on wavelet decomposition. In: 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp 69–72. https://doi.org/10.1109/EMBC.2012.6345873
64. Maggioni E, Arienti F, Minella S, Mameli F, Borellini L, Nigro M, Cogiamanian F, Bianchi AM, Cerutti S, Barbieri S, Brambilla P, Ardolino G (2021) Effective connectivity during rest and music listening: an EEG study on Parkinson's disease. Frontiers in Aging Neuroscience 13. https://doi.org/10.3389/fnagi.2021.657221
67.
68. Sakar BE, Isenkul ME, Sakar CO, Sertbas A, Gurgen F, Delil S, Apaydin H, Kursun O (2013) Collection and analysis of a Parkinson speech dataset with multiple types of sound recordings. IEEE J Biomed Health Inf 17(4):828–834
70. Trist BG, Hare DJ, Double KL (2019) Oxidative stress in the aging substantia nigra and the etiology of Parkinson's disease. Aging Cell 18(6):13031. https://doi.org/10.1111/acel.13031
72. Shi C, Chen Y, Kwapong WR, Tong Q, Wu S, Zhou Y, Miao H, Shen M, Ye H (2020) Characterization by fractal dimension analysis of the retinal capillary network in Parkinson disease. RETINA 40(8)
74. Jackson N, Cole SR, Voytek B, Swann NC (2019) Characteristics of waveform shape in Parkinson's disease detected with scalp electroencephalography. eNeuro 6(3). https://doi.org/10.1523/ENEURO.0151-19.2019
76. Appelhoff S, Sanderson M, Brooks TL, van Vliet M, Quentin R, Holdgraf C, Chaumon M, Mikulan E, Tavabi K, Höchenberger R, Welke D, Brunner C, Rockhill AP, Larson E, Gramfort A, Jas M (2019) MNE-BIDS: organizing electrophysiological data into the BIDS format and facilitating their analysis. J Open Source Softw 4(44):1896. https://doi.org/10.21105/joss.01896
79. Bao FS, Liu X, Zhang C (2011) PyEEG: an open source Python module for EEG/MEG feature extraction. Comput Intell Neurosci 2011:406391
89. Goh C, Hamadicharef B, Henderson GT, Ifeachor EC (2005) Comparison of fractal dimension algorithms for the computation of EEG biomarkers for dementia. In: 2nd International Conference on Computational Intelligence in Medicine and Healthcare (CIMED2005), Lisbon, Portugal. https://hal.inria.fr/inria-00442374
92. Bandt C, Pompe B (2002) Permutation entropy: a natural complexity measure for time series. Phys Rev Lett 88(17):174102
94. Lo Giudice M, Varone G, Ieracitano C, Mammone N, Tripodi GG, Ferlazzo E, Gasparini S, Aguglia U, Morabito FC (2022) Permutation entropy-based interpretability of convolutional neural network models for interictal EEG discrimination of subjects with epileptic seizures vs. psychogenic non-epileptic seizures. Entropy 24(1). https://doi.org/10.3390/e24010102
95. Vilone G, Longo L (2022) A global model-agnostic XAI method for the automatic formation of an abstract argumentation framework and its objective evaluation
Metadata
Title
Fractal dimensions and machine learning for detection of Parkinson's disease in resting-state electroencephalography
Authors
Utkarsh Lal
Arjun Vinayak Chikkankod
Luca Longo
Publication date
11.02.2024
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 15/2024
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-024-09521-4
