Introduction
The retina is the organ that enables humans to capture visual information from the real world. It is a window to the whole body, sharing physiological, embryological, and anatomical characteristics with major organs, including the brain, heart, and kidneys. The retina is a vital source for assessing distinct pathological processes and neurological complications associated with mortality risk. The retina refers to the inner surface of the eyeball opposite the lens, including the optic disc, optic cup, macula, fovea, and blood vessels [
1,
2]. Fundus images are fundus projections captured by a monocular camera on a 2D plane [
3]. Fundus images play an important role in monitoring the health status of the human eye and multiple organs [
4]. Analyzing fundus images and their association with biological traits can support early diagnosis and the prevention of eye diseases. The retina is widely regarded as a gateway for examining neurological complications, as it allows both vascular and neural tissues to be visualized non-invasively. The strong association of the retina with physiology and vitality suggests deeper associations with biological traits such as age and gender. Biological traits can be determined by genes, environmental factors, or a combination of both, and may be qualitative (such as gender or skin color) or quantitative (such as age or blood pressure) [
5]. Biological traits are relevant to a variety of systemic and ocular diseases in individuals [
6]; for instance, females tend to have longer life expectancies than males in similar living environments [
7‐
10]. With increasing age, women with reduced estrogen production are predisposed to degenerative eye diseases, including cataracts and age-related macular degeneration [
11‐
13]. In contrast, males are more likely to suffer from pigment dispersion glaucoma [
14], open-angle glaucoma [
15], and diabetic retinopathy [
16]. Studying the association of biological traits with fundus images is challenging in clinical practice, where experts in the field cannot distinguish male from female fundus images or discern aging information from them. This study utilizes deep learning (DL) algorithms to estimate biological traits and their association with generated fundus images.
Fundus (retinal) images have been studied for classification, disease identification, and analysis using methods ranging from conventional machine learning (ML) to recent DL [
17,
18]. However, much of this work has focused on feature engineering, which involves computing explicit features specified by experts. In contrast, DL is characterized by multiple computational layers that allow an algorithm to learn appropriate predictive features from examples. DL algorithms are continually optimized and reformulated, with enhanced features and improvements, to extend to a wider range of problems [
19‐
21]. DL algorithms have been utilized for the classification and detection of different eye diseases, such as diabetic retinopathy and melanoma, with results comparable to those of human experts. In conventional ML approaches, the relationship between retinal morphology and systemic health has been extrapolated using multivariable regression. However, such methods show limited ability on large and complex datasets [
22,
23]. DL algorithms avoid manual feature engineering and tuning, and make it possible to extract hidden features previously unexplored by conventional methods. DL models have shown significant results on previously challenging tasks and have innovatively associated retinal structure with pathophysiology. They can extract independent features unknown to clinicians; however, they face challenges of explainability and interpretability, which existing work has attempted to address [
24]. DL approaches to fundus image analysis are gaining popularity owing to their ease of implementation and high efficiency [
25]. It has been shown that DL models can capture subtle pixel-level information in luminance and contrast that humans may not differentiate. These findings underscore the ability of DL models to exploit features hidden to humans, and such models can be employed in medical imaging with high efficacy in clinical practice [
26].
In clinical studies, experts in the field cannot discriminate subjects based on their fundus images, which emphasizes the importance of employing DL models. The cause and effect of demographic information in fundus images are not readily apparent to domain experts. In contrast, DL models may enable data-driven discovery of novel approaches to disease biomarker identification and biological trait association. Ophthalmoscopy has therefore been deeply associated with systemic indices of biological traits (such as age and gender) and diseases. In previous studies, age has been estimated from distinct clinical images, such as brain MRI, facial images, and neuroimaging, using machine learning and deep learning [
27‐
30]. For instance, brain MRI and facial images have been used for age prediction emphasizing on the potential of traits estimation from fundus images [
27‐
29,
31]. The excellent performance in age prediction implies that fast, safe, cost-effective, and user-friendly deep learning models are feasible for larger populations. In addition to aging, fundus images have also been associated with sex by applying logistic regression to several features [
32]. These features include papillomacular angle, retinal vessel angles, and retinal artery trajectory. Various studies have shown retinal morphology differences between the sexes, including retinal and choroidal thickness [
33,
34]. The study [
26] reported the fovea as an important region for gender classification. Gender prediction thus became possible, a task infeasible even for ophthalmologists who had spent their whole careers examining the retina [
35]. Thus, results of age and gender estimation may assist in investigating physiological variations in fundus images corresponding to biological traits [
17]. Age estimation and gender classification may not be clinically indispensable, but studying age progression based on learned biological traits hints at the potential of DL for discovering novel associations between traits and fundus images. Implementing DL models uncovers additional features from fundus images, resulting in better biological trait association [
36].
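The logistic-regression approach on retinal features mentioned above can be sketched as follows; the feature values, labels, and training settings here are synthetic and purely illustrative, not taken from any cited study:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=2000):
    # Plain stochastic gradient descent on the logistic loss; w[-1] is the bias.
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            err = sigmoid(z) - yi
            for j in range(len(xi)):
                w[j] -= lr * err * xi[j]
            w[-1] -= lr * err
    return w

def predict(w, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
    return 1 if sigmoid(z) >= 0.5 else 0

# Synthetic, normalized features: [papillomacular angle, retinal artery
# trajectory]; label 1 = male, 0 = female (toy separable data).
X = [[0.2, 0.9], [0.3, 0.8], [0.8, 0.2], [0.9, 0.3]]
y = [0, 0, 1, 1]
w = train_logreg(X, y)
preds = [predict(w, xi) for xi in X]  # expect [0, 0, 1, 1] on this toy data
```

In practice such models are fit on measured retinal parameters from many subjects; the sketch only shows the mechanics of the classifier.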
The successful prediction of age and gender motivates studying age-progression effects and evaluating aging status via fundus images. In the study of [
17], aging effects were investigated while associating cardiovascular risk factors with fundus images. Similarly, large DL models were used for classification and for associating fundus images with physiological traits dependent on patients’ health [
17]. Existing algorithms mainly consider the optic disc’s features for gender prediction, with observations consistent with those of Poplin [
17]. In Poplin’s work, large deep learning models were used to classify sex and other physiological and behavioral traits associated with patient health based on fundus images. Similarly, fundus (retinal) images have been closely related to age and gender, allowing the definition of a ‘retinal age gap’, a potential biomarker for aging and mortality risk [
37].
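The retinal age gap itself is a simple quantity: predicted (retinal) age minus chronological age. A minimal sketch, assuming predictions from a trained age-estimation model (the subject values below are hypothetical):

```python
def retinal_age_gap(predicted_age, chronological_age):
    """Retinal age gap: predicted (retinal) age minus chronological age.
    A positive gap suggests the retina appears 'older' than the subject."""
    return predicted_age - chronological_age

# Hypothetical (predicted_age, chronological_age) pairs from an
# age-estimation model such as the one described in this study.
subjects = [(62.0, 58), (45.5, 49)]
gaps = [retinal_age_gap(p, c) for p, c in subjects]
# gaps → [4.0, -3.5]
```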
The effects of age progression can be visualized in distinct ways, including saliency maps or heat maps over fundus images, which reveal variations difficult for ophthalmologists to observe. Such differential visualization can also be used to distinguish male and female subjects. Following the successful classification of gender from fundus images [
38], our proposed model (FAG-Net) emphasizes the optic disc area and its learned features while training and learning the association with aging. The optic disc was likewise considered the main structure for training our deep learning approaches. The second proposed model (FGC-Net) utilizes this knowledge to generate different fundus images from a single input fundus image, with a list of ages as labels (conditions). The details of the proposed models are illustrated in the methodology section.
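The inference idea behind this conditional generation — one input image, several age conditions, and pairwise subtraction of the outputs — can be sketched as follows; the toy `generate` function below merely stands in for the trained conditional generator, which is described in the methodology section:

```python
# Sketch of conditional generation over ages followed by pairwise
# subtraction to localize where the model changes the image with age.

def generate(image, age):
    # Stand-in for the trained conditional generator: here we simply
    # brighten pixels with age so the sketch runs end to end.
    return [[min(255, px + age) for px in row] for row in image]

def difference_map(img_a, img_b):
    # Pixel-wise absolute difference between two generated versions.
    return [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

fundus = [[10, 20], [30, 40]]   # toy 2x2 grayscale "fundus image"
ages = [30, 50, 70]             # list of age conditions
versions = [generate(fundus, t) for t in ages]
maps = [difference_map(versions[i], versions[i + 1])
        for i in range(len(versions) - 1)]
```

With a real generator, each entry of `maps` highlights the retinal regions most altered between consecutive age conditions.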
In the current study, we first trained and evaluated a DL model (FAG-Net) for trait effects in terms of age and gender estimation. We then proposed a second DL model (FGC-Net) to learn aging effects and embed them for generation. FGC-Net is evaluated with different age values given a single input fundus image, and the resulting generated versions are subtracted to demonstrate the learned effects of age progression. The detailed architecture of both models is illustrated in the methodology section. The rest of the paper is organized as follows: “
Introduction” outlines the existing works, “
Methodology” demonstrates methods, “
Results” illustrates and analyzes results, and “
Conclusion and future directions” concludes the study with future directions.
Literature study
In previous studies, age and gender have been estimated from distinct imaging modalities, such as brain MRI, facial images, and neuroimaging, using machine learning and deep learning [
27‐
30]. Brain MRI and facial images have been used for age prediction, emphasizing the potential of trait estimation from fundus images [
27‐
29,
31]. In the work by Poplin [
17], large deep learning models were used to classify gender and other physiological and behavioral traits associated with patient health based on retinal fundus images. A number of studies have used fundus images for age prediction and gender classification using machine learning [
26‐
29,
31‐
34]. Most of these estimated the age and gender of either healthy or unhealthy subjects. The current study, however, examines age and gender associations with fundus images for both healthy and unhealthy subjects. For age and gender prediction, algorithms ranging from conventional ML to recent deep learning have been employed [
17,
25,
26,
39]. To our knowledge, none of them investigated age-progression effects beyond age prediction and gender classification.
Clinicians are currently unaware of the distinct retinal features varying between males and females, highlighting the importance of deep learning and model explainability. Automated machine learning (AutoML) may enable clinician-driven automated discovery of novel insights and disease biomarkers. Gender has been classified in the study of [
26], in which the area under the receiver operating characteristic curve of the code-free deep learning model was 0.93. The study [
40] estimated biological age with MAE = 3.67 years and cumulative score = 0.39 from a dataset collected for age-related macular degeneration (AMD) [
41]. Subjects in AMD prevalence studies must be over 50 years of age as an inclusion criterion, so the dataset could not cover all age ranges. The study [
42] developed CNN age and sex prediction models from normal participants and those with underlying vascular conditions such as hypertension, diabetes mellitus (DM), or a smoking history, with convincing results for age prediction (R\(^2\) = 0.92): MAE = 3.06 years (normal), 3.46 years (hypertension), 3.55 years (DM), and 2.56 years (smokers); however, R\(^2\) = 0.74 is relatively low for hypertension. The proposed model (FAG-Net) outperforms existing models on the majority of evaluation metrics for both healthy and unhealthy subjects, as tabulated in Tables
2 and
3.
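The evaluation metrics quoted above — MAE, R\(^2\), and the cumulative score, i.e., the fraction of predictions within a threshold of the true age — can be computed as in this sketch; the ages and the 5-year threshold are illustrative:

```python
# Sketch of the age-prediction evaluation metrics used in the literature:
# mean absolute error (MAE), coefficient of determination (R^2), and the
# cumulative score CS(theta) = fraction of |error| <= theta years.

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def cumulative_score(y_true, y_pred, theta=5):
    hits = sum(1 for t, p in zip(y_true, y_pred) if abs(t - p) <= theta)
    return hits / len(y_true)

ages_true = [55, 60, 42, 71]   # illustrative chronological ages
ages_pred = [52, 63, 45, 70]   # illustrative model predictions
print(mae(ages_true, ages_pred))               # 2.5
print(cumulative_score(ages_true, ages_pred))  # 1.0
```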
ML algorithms are widely applied to analyze biological traits with different imaging modalities, such as MRI, facial images, footprints, and so on [
43]. In conventional biological trait estimation, the study [
44] proposed a trait tissue association mapping with human biological traits and diseases. The study [
45] estimated subject age from MRI using PCA [
46] for dimension reduction and relevance vector machine [
47] with a significant score. The study in [
48] applied a new automated machine learning approach to brain MRI to predict age with MAE = 4.612 years. Similarly, Valizadeh et al. [
49] used neural network [
50] and support vector machine [
51] to analyze five anatomical features, achieving high prediction accuracy. Martina et al. [
52] estimated brain age in PNC (Philadelphia Neurodevelopmental Cohort;
n = 1126, age range
\(8-22\) years) using a cross-validation [
53] framework with MAE = 2.93 years. Similarly, the study [
54] used partial least squares regression [
55] to classify gender based on MRI with 97.8% accuracy. According to the study of [
17], machine learning has been leveraged for many years for a variety of classification tasks, including the automated classification of eye disease. However, much of the work has focused on feature engineering, which involves computing explicit features specified by experts.
The relationship between retinal morphology and systemic health has been extrapolated using conventional approaches such as multivariable regression. However, such methods show limited ability on large and complex datasets [
22,
23]. Advances from automatic algorithms to DL avoid manual feature engineering and make it possible to extract hidden features that were previously unexplored. DL models have shown significant results on previously challenging tasks and have innovatively associated retinal structure with pathophysiology. They extract independent features unknown to clinicians but face challenges of explainability and interpretability, which a neuro-symbolic learning study has attempted to address [
24]. Deep learning is a family of machine learning methods characterized by multiple, deep computational layers and has been optimized for images. It has been applied in different domains, particularly disease diagnosis such as melanoma and diabetic retinopathy, achieving accuracy comparable to that of human experts [
56]. The RCMNet model, composed of ResNet18 with a self-awareness mechanism, achieved a decent performance of 83.36% accuracy on a CAR-T cell dataset.
Deep learning approaches to automated retinal image analysis are gaining popularity for their relative ease of implementation and high efficacy [
25]. It has been reported that DL models capture subtle pixel-level luminance variations, which are likely indistinguishable to humans. Such findings underscore the promising ability of deep neural networks to utilize salient features in medical imaging that may remain hidden to domain experts [
26]. Deep learning has shown great strength in medical image analysis. The study [
57] developed a hyperdimensional computing-based algorithm [
58] to classify gender from resting-state and task fMRI from the publicly available Human Connectome Project with an accuracy of 87%. Similarly, Jonsson [
30] presented a novel deep learning approach using residual convolutional neural networks [
59] to predict brain age from T1-weighted MRI with MAE = 3.39 years and \(R^{2} = 0.87\); however, the study lacks a generative capability conditioned on age to evaluate the desired projection.
Most importantly, biological traits such as age and gender have been successfully predicted from fundus images with an area under the curve (AUC) score of 0.97 [
26]. Yamashita performed logistic regression on several features that were identified to be associated with sex [
32]. These features include papillomacular angle, retinal vessel angles and retinal artery trajectory. Various studies have shown retinal morphology differences between the sexes, including retinal and choroidal thickness [
33,
34]. In previous studies, age has been estimated from clinical images via machine learning and deep learning [
27‐
30]. The excellent performance in age prediction implies that fast, safe, cost-effective, and user-friendly deep learning models are feasible for larger populations. Motivated by recent DL concepts such as convolutional neural networks and attention mechanisms, we employ these characteristics in the proposed model to associate biological traits with retinal visuals. State-of-the-art (SOTA) models are limited to learning trait factors in the fed visuals, whereas the proposed model learns both the aging factor and a generative capability to accomplish the desired projection. In contrast to SOTA works, our research seeks to demonstrate the continuous effect of aging in addition to age estimation and gender classification. By incorporating both control and healthy-group subjects, specialists can include age as a condition in the model and retrieve the retinal visuals of a healthy subject. This will not only benefit experts in age estimation, as in SOTA, but will also assist in examination and diagnosis decisions. The proposed models are elaborated in the following section.
Acknowledgements
We thank the support from the National Natural Science Foundation of China 31970752; Science, Technology, Innovation Commission of Shenzhen Municipality JCYJ20190809180003689, JSGG20200225150707332, JCYJ20220530143014032, KCXFZ20211020163813019, ZDSYS20200820165400003, WDZC20200820173710001, WDZC20200821150704001, JSGG20191129110812708; Shenzhen Bay Laboratory Open Funding, SZBL2020090501004; Department of Chemical Engineering-iBHE special cooperation joint fund project, DCE-iBHE-2022-3; Tsinghua Shenzhen International Graduate School Cross-disciplinary Research and Innovation Fund Research Plan, JC2022009; and Bureau of Planning, Land and Resources of Shenzhen Municipality (2022) 207.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.