Open Access 04.04.2023

Deep Convolutional Neural Networks for Detecting COVID-19 Using Medical Images: A Survey

Authors: Rana Khattab, Islam R. Abdelmaksoud, Samir Abdelrazek

Published in: New Generation Computing | Issue 2/2023


Abstract

Coronavirus Disease 2019 (COVID-19), which is caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-COV-2), surprised the world in December 2019 and has threatened the lives of millions of people. Countries all over the world closed worship places and shops, prevented gatherings, and implemented curfews to stand against the spread of COVID-19. Deep Learning (DL) and Artificial Intelligence (AI) can play a great role in detecting and fighting this disease. Deep learning can be used to detect COVID-19 symptoms and signs from different imaging modalities, such as X-Ray, Computed Tomography (CT), and Ultrasound (US) images, which could help in identifying COVID-19 cases as a first step to treating them. In this paper, we review the research studies conducted from January 2020 to September 2022 on deep learning models used in COVID-19 detection. The paper clarifies the three most common imaging modalities (X-Ray, CT, and US), presents and compares the DL approaches used in this detection, and provides future directions for this field in the fight against COVID-19.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Recently, people all over the world have heard about COVID-19. On December 8, 2019, health authorities in China notified the World Health Organization (WHO), an agency of the United Nations whose goal is promoting health [1], about several cases of a novel virus affecting the respiratory system [2]. One month later, on January 7, 2020, WHO named the virus the 2019 Novel Coronavirus (2019-NCOV) [3]. A coronavirus research group then termed it SARS-COV-2 [4]. In late January 2020, the disease was re-titled COVID-19 by WHO as an abbreviation for Coronavirus Disease 2019. Finally, on March 13, 2020, WHO proclaimed COVID-19 a global pandemic [5]. COVID-19 continued its rapid spread and affected more countries. On January 30, 2020, China reported about 12,000 suspected and 7736 confirmed COVID-19 cases; on the same day, suspected cases appeared in 18 countries [6]. Case numbers kept rising: on March 18, 2021, the United States reported 29,260,772 confirmed COVID-19 cases, and the spread continued through 2021 and 2022. Figure 1 shows the total COVID-19 cases and deaths for the most affected countries from January 2020 until November 2022 [7].
The contributions of this survey are highlighted as follows:
1.
Providing a comprehensive study about COVID-19 sources, COVID-19 symptoms, Coronaviruses’ families and their subgroups, and how the virus has been transmitted to humans as shown in Sect. 2.
 
2.
Discussing different approaches for COVID-19 detection, such as blood tests, viral tests, and imaging modalities, and highlighting the main differences among them, as shown in Sect. 2.2.
 
3.
Providing a comparative study of COVID-19 detection through different image types (X-Ray, CT, and Ultrasound) in addition to multi-modal-based detection, illustrating the main features, advantages, and disadvantages of each modality, as highlighted in Sects. 2.2 and 7.
 
4.
Providing a comparative study of more than 100 scientific papers on COVID-19 detection based on their imaging modalities, employed techniques, datasets, limitations, evaluation measures, publication dates, and publication sources, as shown in Tables 3, 4, 5, and 7.
 
5.
Discussing COVID-19 detection using deep learning techniques, highlighting the main deep learning architectures and their characteristics, as shown in Sect. 4; also highlighting the main advantages and limitations of the different deep learning models and how to overcome these limitations, as shown in Sect. 9.
 
6.
Discussing the most frequently used COVID-19 datasets and providing a detailed description of them, as shown in Table 8 and Sect. 8.
 

2 Sources of Coronaviruses’ Families and Their Subgroups

Coronaviruses are classified into four families: alpha \(\alpha\), beta \(\beta\), gamma \(\gamma\), and delta \(\delta\). The beta \(\beta\) group contains the Severe Acute Respiratory Syndrome viruses (SARS-COV and SARS-COV-2) [8]. About 8000 coronavirus cases, mainly SARS-COV, were confirmed between 2002 and 2003. In 2012, WHO reported 2494 positive cases of the Middle East Respiratory Syndrome (MERS); subsequent studies found that MERS came from Arabian camels [9, 10]. Studies also showed that SARS-COV-2 infected 750,000 people in 150 countries with a 4% death rate, whereas SARS-COV had a 9% death rate and affected 26 countries. From these studies, it can be concluded that the real danger of the coronavirus families, especially COVID-19, is their rapid spread. This is due to a specific genetic event in the Receptor-Binding Domain (RBD) of the SARS-COV-2 Spike protein [11]: the RBD closely binds human and bat Angiotensin-Converting Enzyme 2 (ACE2) receptors [12]. As shown in Fig. 2, coronaviruses are transmitted to humans from bats and other wild animals after some changes in their genetic structures, which can threaten human lives [13, 14].

2.1 COVID-19 Symptoms and Signs

COVID-19 spreads rapidly among humans, although the cause of these high spread rates is unknown. The symptoms of COVID-19 include difficulty in breathing, fever, and severe cough [15]. WHO reported other symptoms, such as chest discomfort and bilateral infiltration of the lungs [16-18]. Some symptoms, such as discoloration of fingers or toes and skin rash, appear only in some patients. Older people and people with chronic diseases, which are long-term health conditions that typically develop over time [19], are more prone to COVID-19 infection and may lose their lives as a result [18]. Some studies showed that children below 10 years old have a lower chance of being infected with or dying from COVID-19 [20]: until April 2020, only one death among such young patients had been reported [21-23], and until February 2020, only one baby had been reported with a severe form of the virus [24]. Studies showed that children can catch the virus through contact with people infected with COVID-19 [25].

2.2 COVID-19 Detection

Fighting COVID-19 is not as easy a task as it might be thought, since the virus spreads rapidly among citizens of all countries around the world. Moreover, it can mutate and produce new strains. Therefore, early detection of COVID-19 is the true weapon to beat it. Figure 3 shows that this detection can be achieved through three main approaches: a blood test, a viral test, or analysis of different imaging modalities, such as X-ray, Computed Tomography (CT) scan, and Ultrasound (US) [26].
The blood test is used to detect the existence of antibodies to SARS-COV-2; however, its sensitivity for detecting SARS-COV-2 ranges from 2 to 3% [27]. The viral test has two approaches: the rapid diagnostic test (RDT) and reverse transcription-polymerase chain reaction (RT-PCR). RDT detects antibodies and can give a result in about half an hour. However, it is not recommended for COVID-19 detection, as its accuracy depends on the quality of the sample and it cannot distinguish COVID-19 from other viral pneumonia [28, 29], a contagious lung syndrome produced by a virus such as influenza [30]. RT-PCR is regarded as the most accurate approach for COVID-19 detection [31], but it has some limitations: it is expensive and time-consuming, and it has lower sensitivity in COVID-19 detection than imaging modalities [32], ranging from 50 to 62% [33].
Detecting COVID-19 through imaging is the best way to obtain rapid and accurate results. X-ray images have many advantages that encourage researchers to use them for COVID-19 detection, including their lower cost compared with other imaging modalities and their wide availability. Moreover, the radiation dose during X-ray acquisition is lower than that of CT scans. X-ray imaging is therefore used to detect different diseases, such as lung cancer and cardiac diseases, and its use has spread widely, especially in poor countries [34]. CT scan images, however, have higher quality than X-ray images [35] and thus yield more accurate diagnoses, at the cost of higher prices and greater radiation exposure for patients. X-ray and CT scan images exhibit well-known features for COVID-19 identification: X-ray images show ground-glass opacification in the upper right section of the lung, whereas CT scan images show ground-glass areas in the lower side of the lung, the halo sign, and consolidation areas in the lower lobes [36-40]. Figure 4 compares COVID-19 cases to non-COVID cases for both X-ray and CT imaging features [41].

3 Methodology

The detection of COVID-19 from X-Ray, CT, or US images was reviewed using a variety of DL architectures, particularly CNNs. In this study, we present a comprehensive analysis of the cited research. The survey adopted the Keele [42] and [43] methodology for formulating a systematic literature review, illustrating the research questions, the search strategy, and the datasets and AI tools used in building predictive COVID-19 detection models.

3.1 Research Questions

COVID-19 has emerged as a global issue that required attention. Therefore, AI researchers proposed multiple models for accurate detection of COVID-19. In this study, the following major question was posed:
RQ What role did AI play in the development of accurate models for COVID-19 detection?
The subsequent sub-questions were asked to answer this question:
Sub-RQ1 What are the main approaches for COVID-19 detection?
Sub-RQ2 Which imaging modalities give more accurate results, and what are the advantages and disadvantages of each modality?
Sub-RQ3 How can COVID-19 be detected using AI and what AI tools are used in this detection?
Sub-RQ4 What software tools and datasets are used in building predictive COVID-19 detection models?
Sub-RQ5 How can DL provide a great weapon for fighting COVID-19, and what challenges does it face?

3.2 Search Strategy

When conducting the literature search for this study, we considered studies that tackled automatic COVID-19 detection from a variety of angles. The search technique consisted of four steps: identifying the information sources, creating the search formula, choosing the most relevant primary research, and quality assessment.

3.2.1 Identifying the Information’s Sources

Finding and selecting the information sources that would be used to conduct the systematic review was the first step of the search strategy. We searched several digital libraries to find pertinent publications: Google Scholar (https://scholar.google.com/), PubMed (https://pubmed.ncbi.nlm.nih.gov/), ResearchGate (https://www.researchgate.net/), IEEE Xplore (https://ieeexplore.ieee.org/Xplore/home.jsp), and Springer Link (https://link.springer.com/).

3.2.2 Creating the Search Formula

To find the primary studies, the following search string was defined: (("CAD System*") OR ("deep learning*") OR ("artificial intelligence*") OR ("imaging modality*")) AND ("COVID-19 detection*"). This string was applied to document titles only, and the search phrase was adapted to work in each of the chosen libraries.

3.2.3 Choosing the Most Relevant Primary Research

The study's inclusion criteria were designed to ensure that only works that examined and applied AI and imaging methods to detect COVID-19 were considered. In addition, three exclusion criteria were defined to weed out papers that did not contribute to the research: duplicate documents, studies written in a language other than English, and unobtainable documents. These criteria were used to filter the articles found using the search keyword.

3.2.4 Quality Assessments

All included publications were evaluated for quality based on the research presented in them as well as the exclusion criteria. For this literature study, high-quality papers covering the application of imaging technology and deep learning to identify COVID-19 were chosen. To establish a comprehensive evaluation of study quality, we also developed a quality standard based on the following three factors:
1:
Is information about the datasets and their citations included in the study?
 
2:
Is the data analysis procedure proper?
 
3:
Did accuracy or any other evaluation measures serve as a gauge of the models’ quality?
 
4 Deep Learning Architectures

Deep Learning (DL) algorithms provide better accuracy than classical machine learning algorithms [44]. They can deal with huge amounts of data and raw images to extract knowledge and information without the need to enhance or segment these images [45]. DL algorithms also provide better improvements in image analysis [46]. They are used in disease detection, such as COVID-19 and retinal diseases, which affect the retina, the delicate nerve layer at the back of the eye, and can produce blindness [47]. Additionally, they are used with classification modalities such as Magnetic Resonance Imaging (MRI) images [48-50], where MRI is a medical imaging technology that creates detailed images of the body's organs and tissues by combining a magnetic field with computer-generated radio waves [51]. As shown in the next sections, researchers used DL algorithms such as the Convolutional Neural Network (CNN), which is a type of Artificial Neural Network (ANN). A CNN is made up of four kinds of layers: convolution, pooling, fully connected, and non-linearity. It is an excellent method for improving pattern recognition and image classification performance [52]. To detect the novel coronavirus, some researchers used different CNN architectures, such as the Visual Geometry Group network (VGG), the Residual Convolution Neural Network (ResNet), and the Dense Convolution Network (DenseNet). The selection of a suitable CNN architecture is based on the size and nature of the data.
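The convolution, non-linearity, and pooling layers just listed can be illustrated with a toy, pure-Python sketch (illustration only; real CNNs use optimized tensor libraries, and the image, kernel, and pooling sizes here are arbitrary):

```python
# Toy sketch of three core CNN layer operations: 2D convolution, ReLU
# non-linearity, and max pooling, on tiny lists instead of real image tensors.

def conv2d(image, kernel):
    """Valid (no-padding) 2D cross-correlation of a small image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

def relu(fmap):
    """Non-linearity layer: element-wise max(0, x)."""
    return [[max(0, v) for v in row] for row in fmap]

def max_pool(fmap, size=2):
    """Max pooling with a size x size window and a matching stride."""
    return [[max(fmap[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

image = [[1, 2, 0, 1],
         [0, 1, 3, 1],
         [2, 1, 0, 0],
         [1, 0, 1, 2]]
edge_kernel = [[1, -1],
               [1, -1]]  # crude vertical-edge detector

features = relu(conv2d(image, edge_kernel))  # 3x3 feature map
pooled = max_pool(features)                  # down-sampled feature map
```

A fully connected layer would then flatten `pooled` and feed it to a classifier; stacking many such layers with learned kernels is what the architectures in the following subsections do at scale.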
Recently, CNN architectures have achieved better performance on most complicated tasks, such as medical image analysis and disease detection [26, 53]. In 1998, Yann LeCun designed LeNet, the earliest effective CNN, to detect handwritten digits; it consisted of three convolution layers, two pooling layers, and two fully connected layers [54]. In the next sections, some of the most common CNN architectures are discussed.

4.1 AlexNet

AlexNet was designed by Alex Krizhevsky and is similar to LeNet, but deeper: it has more filters, stacked convolution layers, dropout, and max pooling. In 2012, AlexNet achieved a 17% top-five error rate and won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [55], which was held every year from 2010 to 2017. Many researchers use AlexNet in COVID-19 detection to overcome the problem of overfitting, which happens when a DL model achieves better accuracy on training data than on testing data.

4.2 GoogLeNet

GoogLeNet is deeper than AlexNet: it contains 22 layers (27 if pooling layers are counted). In 2014, GoogLeNet won the ILSVRC contest with a 6.67% top-five error rate. The Inception Module (IM) is the major component of GoogLeNet. This module functions as a tiny network and can learn both spatial and cross-channel (depth-wise) correlations. The IM has various benefits, such as allowing the training of considerably deeper models with roughly ten times fewer learnable parameters. The number of feature maps in an IM's output is configured to be less than in its input, which reduces its dimensionality. In addition to the spatial and depth dimensions, an IM is capable of capturing complicated patterns at various scales [56].

4.3 VGGNet

VGGNet achieved a 7.3% top-five error rate in the ILSVRC contest in 2014. It contains 19 weight layers. VGGNet is simpler than AlexNet; it has three fully connected layers and is therefore used in many fields. VGGNet was developed at Oxford University by the Visual Geometry Group [57]. The architectural simplicity of VGGNet is a benefit; nevertheless, it uses about three times as many parameters as AlexNet. Advanced object identification models are built on the VGG architecture. VGGNet, created as a deep neural network, performs better than baselines on a variety of tasks and datasets beyond ImageNet, and it remains one of the most widely used image recognition architectures today.

4.4 ResNet

ResNet has a residual module that contains a standard layer and a skip connection. ResNet won the ILSVRC contest in 2015 with a 152-layer network, achieving a 3.6% top-five error rate. By linking the input signal of a layer to its output, the skip connection enables that layer's input to propagate across the network; these Residual Units (RUs) allowed a model with 152 layers, which is incredibly deep, to be trained. The skip connection joins a layer's activations to subsequent layers, creating a block, and these residual blocks are stacked to create ResNets. The benefit of including this kind of skip link is that any layer that degrades architecture performance can effectively be skipped. In this way, an extremely deep neural network can be trained without encountering vanishing or exploding gradients [58].
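The skip-connection idea can be sketched in a few lines of Python. As a simplification, a toy fully connected layer stands in for the unit's convolution layers, and all weights and inputs below are made up for illustration:

```python
# Minimal sketch of a ResNet-style residual unit: the skip connection adds the
# layer input x back to the learned residual F(x), so with zero weights the
# unit reduces to the identity mapping and the layer is effectively "skipped".

def relu(x):
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    """A toy fully connected layer standing in for the unit's conv layers."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def residual_unit(x, weights, bias):
    fx = relu(dense(x, weights, bias))     # the "standard layer" F(x)
    return [a + b for a, b in zip(fx, x)]  # skip connection: F(x) + x

# With all-zero weights, the output equals the input: the identity path
# carries the signal (and its gradient) straight through the unit.
x = [1.0, -2.0, 3.0]
zero_w = [[0.0] * 3 for _ in range(3)]
zero_b = [0.0] * 3
assert residual_unit(x, zero_w, zero_b) == x
```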

4.5 Inception

Inception is an image model block that seeks to approximate an ideal local sparse structure in a CNN. The Inception network was first created by a team at Google in 2014 under the name Inception V1. The Inception architecture applies many filters of various sizes at the same level; the idea behind this is to prevent overfitting and to reduce computational expense. It combines several filter sizes into a single image block, rather than being limited to a single filter size, whose output is then passed to the following layer [59].

4.6 Xception

Xception was developed by a Google team using depth-wise separable convolutions. The name Xception is derived from "extreme Inception", so Xception can be considered an extreme interpretation of the Inception modules. The Xception architecture is made up of three structures: the entry flow, the middle flow, and the exit flow, comprising 14 modules in total (four, eight, and two, respectively) and 36 convolution layers. Data first passes through the entry flow, then the middle flow, which is repeated eight times, and finally the exit flow. Note that batch normalization follows every convolution and separable convolution layer [59].

4.7 MobileNet

MobileNet is a widely used CNN-based model for image classification. The primary benefit of adopting the MobileNet architecture is that the model requires significantly less computational effort than a traditional CNN model, making it appropriate for mobile devices and computers with limited computational power. MobileNet uses depth-wise separable convolution layers with ReLU non-linearity, while the final layer is fully connected and followed by a SoftMax classification layer. MobileNet introduces hyper-parameters that trade off latency against accuracy; using them, the model builder can select the appropriate model size for their application while taking the constraints of the problem into account [60, 61].
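The parameter savings of depth-wise separable convolutions can be checked with simple arithmetic (the layer sizes below are arbitrary examples, not MobileNet's actual configuration):

```python
# Back-of-the-envelope comparison of parameter counts for a standard
# convolution versus a MobileNet-style depth-wise separable convolution.
# Symbols: k = kernel size, c_in/c_out = input/output channel counts.

def standard_conv_params(k, c_in, c_out):
    # every output channel has its own k x k x c_in filter
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    depthwise = k * k * c_in  # one k x k filter per input channel
    pointwise = c_in * c_out  # 1 x 1 convolution mixing the channels
    return depthwise + pointwise

k, c_in, c_out = 3, 128, 256
std = standard_conv_params(k, c_in, c_out)        # 294912 parameters
sep = depthwise_separable_params(k, c_in, c_out)  # 33920 parameters
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For this example the separable version needs roughly 8.7 times fewer parameters, which is the source of MobileNet's reduced computational effort.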

4.8 DenseNet

DenseNet is a kind of convolution neural network that achieves a top-five error rate of 6.12% while using fewer parameters and less computation than other cutting-edge CNN architectures such as ResNet. Through the use of Dense Blocks, which directly connect all layers whose feature-map sizes match, DenseNet exploits dense connections between layers. To maintain the feed-forward nature, each layer receives extra inputs from all earlier layers and transmits its own feature-maps to all later layers. In contrast to a standard CNN architecture, which uses L connections between L layers, DenseNet uses L(L+1)/2 direct connections. DenseNets offer a variety of appealing benefits, including alleviation of the vanishing-gradient issue, improved feature propagation, promoted feature reuse, and significantly fewer parameters [62].
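The connection count quoted above is easy to verify with a small sketch:

```python
# The dense-connectivity count for a DenseNet block: a plain feed-forward
# chain of L layers has L connections, while connecting every layer to every
# later layer (plus the block input) gives L(L+1)/2 direct connections.

def dense_connections(num_layers):
    # layer l receives inputs from the block input and all l-1 earlier layers,
    # i.e. l incoming connections; summing 1 + 2 + ... + L gives L(L+1)/2
    return num_layers * (num_layers + 1) // 2

assert dense_connections(1) == 1    # a single layer matches the plain chain
assert dense_connections(5) == 15   # vs. only 5 connections in a plain chain
assert dense_connections(100) == 5050
```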

5 Basic Evaluation Measures Terminologies

In this section, we review some scientific terms and evaluation measures used to evaluate classification performance in COVID-19 detection. The formulas for these measures are shown in Table 2, and Table 1 shows the true-positive and true-negative outcomes on which they are based.
Accuracy: assesses a model's overall performance; the proportion of data samples the model identifies correctly [26].
Recall, Sensitivity, or True Positive Rate (TPR): the proportion of confirmed positive cases that the model correctly predicted [26].
Precision: assesses the model's ability to predict positive samples with reasonable accuracy [63].
Specificity: the proportion of negative instances that the model correctly identified [63].
F1-Score: a statistic that integrates sensitivity and precision into a single measure [64].
Receiver Operating Characteristic (ROC): a graph depicting the trade-off between the true positive rate and the false positive rate across a sequence of cut-off points; the closer the curve is to the upper left corner, the better the classifier [26].
Area Under the Curve (AUC): the area under the ROC curve, used when evaluating a classifier; a perfect classifier would have an AUC of one [26].
Table 1
Measures of true positives and true negatives results

                 Diseased           Non-diseased
Test positive    True positives     False positives
Test negative    False negatives    True negatives
Table 2
Summary of evaluation measures formulas

Evaluation measure    Formula
Accuracy              (TP + TN) / (TP + TN + FP + FN)
Precision             TP / (TP + FP)
Recall                TP / (TP + FN)
AUC                   0.5 * ((TP / (TP + FN)) + (TN / (TN + FP)))
F-Measure             (2 * Precision * Recall) / (Precision + Recall)
Sensitivity           (TP / total diseased) * 100
Specificity           (TN / total non-diseased) * 100
TPR                   TP / (TP + FN)
TNR                   TN / (TN + FP)
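As a sketch, the formulas of Table 2 can be written as small Python helpers over the true/false positive and negative counts of Table 1 (the confusion-matrix values at the end are hypothetical, chosen only for illustration):

```python
# Evaluation measures from Table 2 as functions of the confusion-matrix
# counts in Table 1 (TP, TN, FP, FN).

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):          # also sensitivity / TPR
    return tp / (tp + fn)

def specificity(tn, fp):     # also TNR
    return tn / (tn + fp)

def f1_score(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

def auc_balanced(tp, tn, fp, fn):
    # the balanced form used in Table 2: 0.5 * (TPR + TNR)
    return 0.5 * (recall(tp, fn) + specificity(tn, fp))

# Hypothetical confusion matrix for a COVID-19 vs. normal classifier:
tp, tn, fp, fn = 90, 85, 15, 10
print(accuracy(tp, tn, fp, fn))   # 0.875
print(f1_score(tp, fp, fn))       # ~0.878
```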

6 Data Augmentation and Transfer Learning Terminologies

In this section, we review some data augmentation and transfer learning methods that are used to balance COVID-19 datasets and obtain better performance.
Data augmentation: a regularization method that uses numerous transformations, such as flipping, rotating, shifting, and resizing, to generate a large number of synthetic data samples [26].
Random Image Cropping and Patching (RICAP): creates a new image by cropping and patching a random number of images. As a result, RICAP randomly selects subsets of the original features from the images and discards the rest, increasing the variety of training images [65].
Synthetic Minority Oversampling Technique (SMOTE): a data augmentation technique that oversamples the minority class to balance the data.
Class-weighted entropy: when class weighting is enabled, the loss is computed as a weighted sum over all examples, ensuring that every example contributes proportionally; samples from the smaller classes therefore contribute more to the overall loss [66].
Cost-sensitive learning: a sub-field of imbalanced learning that deals with classification on datasets with skewed class distributions; when a model is trained, the costs of prediction errors (and possibly other costs) are taken into account [67].
ImageNet: a dataset of 14,197,122 labeled photographs of various common items, such as animals, technology, plants, and foods, grouped according to the WordNet hierarchy [26, 68].
Transfer learning: a representation learning idea based on the premise that certain features are common to several tasks. With this strategy, a model trained in one setting is employed to boost generalization in a different setting [69].
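The flip and rotate transformations listed above can be sketched on a tiny 2D "image" given as a list of rows (a toy illustration; real augmentation pipelines operate on image tensors with dedicated libraries):

```python
# Toy sketch of simple data augmentation transforms: each function returns a
# new transformed copy of a small 2D image (list of rows).

def horizontal_flip(img):
    return [row[::-1] for row in img]

def vertical_flip(img):
    return img[::-1]

def rotate_90(img):
    # rotate clockwise: reverse the rows, then transpose
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Yield the original image plus three simple transformed copies."""
    return [img, horizontal_flip(img), vertical_flip(img), rotate_90(img)]

img = [[1, 2],
       [3, 4]]
variants = augment(img)
# one image becomes four training samples, one way of enlarging a
# small or imbalanced COVID-19 training set
```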
7 Literature Review

In this section, literature reviews are provided about COVID-19 detection through CT, X-ray, ultrasound, or multi-modal images by applying AI techniques. The papers included in this literature review are divided into four main categories: COVID-19 detection through X-Ray images, through CT images, through Ultrasound images, and through multi-modal images. Figure 5 illustrates these major categories of the different papers used for COVID-19 detection.
As shown in Fig. 5, these papers were further organized into four groups based on the type of classification and the accuracy achieved: binary classification with accuracy above 90%, binary classification with accuracy below 90%, multi-class classification with accuracy above 90%, and multi-class classification with accuracy below 90%.

7.1 Detection of COVID-19 Through X-Ray Images

7.1.1 Binary Classification and Accuracy Less Than 90%

Hemdan et al. [70] developed the COVIDX-Net framework for COVID-19 detection. This model used seven CNN architectures: VGG19, DenseNet201, InceptionV3, ResNetV2, InceptionResNetV2, MobileNetV2, and Xception. The VGG19 and DenseNet models achieved the best performance for COVID-19 classification, yielding F1-scores of 0.89 and 0.91 for normal and COVID-19 cases, respectively. The dataset consisted of 50 X-ray images (25 normal and 25 COVID-19). Catak et al. [71] developed five deep CNN techniques (VGG19, VGG16, ResNet, DenseNet, and InceptionV3) for identifying COVID-19 from X-ray images. The dataset contained 50 COVID-19 and 50 non-COVID-19 patients in the training phase and 20 of each in the testing phase. VGG16 achieved the highest accuracy of 80%.
Horry et al. [72] applied pre-trained models (Xception, VGG16, VGG19, InceptionV3, and ResNet50) to detect COVID-19 from X-ray images. VGG19 achieved the highest precision of 83%. The dataset contained 115 COVID-19 images. Haghanifar et al. [73] developed a CheXNet-based model using Xception, DenseNet, EfficientNet-B7, and ResNet for classifying X-Ray images into COVID-19, CAP (community-acquired pneumonia), and normal. Their dataset contained 1326 COVID-19, 5000 normal, and 4600 CAP images. The model achieved 87.88% accuracy.

7.1.2 Binary Classification and Accuracy More Than 90%

Ozturk et al. [74] developed an automatic COVID-19 detection model from X-ray images. The proposed CNN consists of 17 convolution layers and supports both binary (COVID vs. no findings) and multi-class classification (COVID vs. no findings vs. pneumonia). The approach used the DarkNet model, the classifier backbone of the You Only Look Once (YOLO) object detection system, an algorithm for detecting visual objects in pictures. It achieved accuracies of 98.08% and 87.02% for binary and multi-class classification, respectively. Apostolopoulos and Mpesiana [75] proposed VGG19 for COVID-19 detection from chest X-ray images, achieving accuracies of 93.48% for binary and 98.75% for multi-class classification. The dataset contained 700 pneumonia, 504 normal, and 224 COVID-19 X-ray images.
Minaee et al. [76] developed the DeepCOVID model based on 5071 X-ray images to distinguish COVID-19 from other lung pneumonia. The proposed model was trained with several CNNs, including ResNet50, DenseNet, and SqueezeNet, and generated a heatmap to localize the regions infected by COVID-19. SqueezeNet achieved the best performance, reaching 95.6% specificity and 100% sensitivity. Narin et al. [77] developed CNN models (ResNet50, ResNet101, ResNet152, InceptionV3, and InceptionResNetV2) on three datasets. All datasets contained 341 COVID-19 images; dataset 1 additionally contained 2800 normal images, dataset 2 contained 1493 viral pneumonia images, and dataset 3 contained 2772 bacterial pneumonia images (pneumonia caused by particular bacteria). ResNet50 achieved the highest accuracy: 96.1%, 99.5%, and 99.7% for datasets 1, 2, and 3, respectively.
Singh et al. [78] developed a detection model based on a CNN tuned with a Multi-Objective Differential Evolution (MODE) algorithm, which extracts important information from the search data during the evolution process using clustering and statistical approaches and then uses it to direct the generation of new populations and local searches [79]. The model achieved 94.65% accuracy.
Pandit et al. [80] proposed a DCNN model for COVID-19 detection using two datasets: the first contained images from 150 different COVID-19 patients, and the second collected daily information about COVID-19 cases for statistical analysis. The proposed approach achieved an accuracy of 93%. Zhang et al. [81] developed a deep anomaly model for screening and detecting COVID-19 from X-ray images, built on 100 images from 70 subjects confirmed with COVID-19 and 1431 images from 1008 subjects confirmed with other pneumonia. The model achieved 96% and 70.65% accuracy for COVID-19 and non-COVID-19 cases, respectively. However, the study has limitations: a 30% false positive rate and 4% of COVID-19 cases missed.
Alqudah et al. [82] developed a COVID-19 identification model based on Support Vector Machine (SVM), Random Forest (RF), and CNN classifiers. Graphical features were extracted using a CNN, and the discrimination between COVID and non-COVID images was performed using the CNN classifier, RF, and SVM. SVM was the least time-consuming; however, in the testing stage, the CNN achieved the best accuracy of 95.2%.
Hossain et al. [83] applied ResNet50 with ten different sets of pre-trained weights to 7262 X-Ray images divided into COVID-19 and normal images. ResNet50 with iNat2021-Mini-SwAV-1K (iNMSwAV) weights achieved the highest scores: 99.17% accuracy, 99.31% precision, and 99.03% sensitivity.

7.1.3 Multi Classification and Accuracy Less Than 90%

Khan et al. [84] proposed the CoroNet model for COVID-19 detection from chest X-ray images, pre-trained on the ImageNet dataset [68]. CoroNet achieved 89.6% accuracy, 93% precision, and 98.2% recall for four classes (COVID-19 vs. viral pneumonia vs. bacterial pneumonia vs. normal). The dataset consisted of 284 COVID-19, 310 normal, 330 bacterial pneumonia, and 327 viral pneumonia chest X-ray images.
Moutounet-Cartan [85] developed a Deep Convolution Neural Network (DCNN) model based on five CNNs (VGG19, VGG16, Xception, Inception v3, and InceptionResNetv2) to detect COVID-19. VGG16 achieved 84.1% accuracy. The dataset contained 327 X-ray images.
Pereira et al. [86] proposed a classification scheme to distinguish COVID-19 from other lung pneumonia, using a pre-trained CNN and resampling algorithms to balance the data. The proposed model achieved an 89% F1-score for COVID-19 detection in hierarchical classification, using a database called RYDLS-20 that contains 1144 chest X-ray images from 7 classes. Nishio et al. [87] developed a Computer-Aided Diagnosis (CAD) system to distinguish between normal, COVID-19, and other pneumonia images. The models were based on the EfficientNet and VGG16 CNNs and combined data augmentation with the Random Image Cropping and Patching (RICAP) technique. The dataset contained 215 COVID-19, 500 normal, and 533 other pneumonia X-ray images. The model achieved 90% sensitivity and 83.6% accuracy.
Rahaman et al. [88] developed a CAD system for COVID-19 detection from X-ray images. The VGG19 model achieved 89.3% accuracy and a 0.90 F1-score. The dataset contained 860 X-ray images.
Loey et al. [89] used deep transfer learning techniques (GoogLeNet, ResNet18, AlexNet, and a GAN) to detect COVID-19 in X-ray images. The dataset consisted of 306 X-ray images, and they considered three cases of it. The first case consists of four groups of the dataset and used GoogLeNet as the main technique for COVID-19 detection, achieving 80.6% testing accuracy. The second case consists of three groups: COVID-19, normal, and bacterial pneumonia. This case used AlexNet as the main transfer model and reached 85.2% accuracy. The third case consists of only two groups, COVID-19 and normal; the main transfer model was GoogLeNet, and it achieved 99.9% validation accuracy and 100% testing accuracy.
Monga et al. [90] applied six different transfer learning approaches: InceptionResNet V2, Xception, VGG19, VGG16, ResNet50 V2, and DenseNet201 for detecting COVID-19 from X-Ray images. Their dataset contained 770 chest X-Ray images divided into three classes: COVID-19, normal, and pneumonia. DenseNet201 achieved the highest performance with 82.8% accuracy.

7.1.4 Multi Classification and Accuracy More Than 90%

Wang et al. [91] implemented the COVID-Net neural network for detecting COVID-19 from X-ray images. It was designed through a combination of machine-driven design exploration and human-driven design principles. The dataset used was COVIDX, consisting of 16,756 chest images obtained from 13,645 patients. The model achieved 92.4% accuracy and 80% sensitivity.
Asif and Wenhui [92] implemented a DCNN model built on Inception v3 to detect COVID-19 from chest X-ray images. This model achieved 96.9% accuracy. There were 864 COVID-19, 1345 viral pneumonia, and 1341 normal chest X-ray images in the dataset. Narayan Das et al. [93] proposed Xception, the extreme version of Inception, to detect COVID-19 in X-ray images. The dataset contained COVID-19 images, pneumonia-positive but COVID-19-negative images, and other infections. The model reached 94.44% accuracy. Afshar et al. [94] developed the COVID-CAPS model, based on a capsule network, to detect COVID-19 in chest X-ray images. This model contained three capsule layers and four convolution layers. Without pre-training, COVID-CAPS achieved 95.7% accuracy and 90% sensitivity; the pre-trained COVID-CAPS achieved 98.3% accuracy and 80% sensitivity.
Ucar and Korkmaz [95] developed a detection model based on deep Bayes-SqueezeNet. The dataset contained 1583 normal, 4290 pneumonia, and 76 COVID-19 X-ray images. The model achieved 98.26% accuracy. Punn and Agarwal [96] developed DL models (InceptionResNet V2, ResNet, DenseNet169, Inception v3, and NASNetLarge) for COVID-19 detection. The dataset included 108 COVID-19 X-ray images, 515 other pneumonia images, and 453 normal images. NASNetLarge reached the highest performance for COVID-19 detection, achieving 98% accuracy and a 99% Area Under the Curve (AUC).
Al-antari et al. [97] developed a CAD system built on the YOLO predictor to distinguish COVID-19 from 8 other diseases based on 50,490 images; it achieved 97.40% classification accuracy. The study used two databases: ChestX-ray8 [98] and an online COVID-19 database. However, the system faced some challenges, such as the unavailability of annotated digital X-ray images and the need for physicians to label the COVID-19 lesion regions. The study's future work is to extend the proposed system to CT images and to use a Generative Adversarial Network (GAN) model for image synthesis. Narayanan et al. [99] developed a recommendation system to differentiate COVID-19 from several other diseases, such as lung cancer, viral pneumonia, and bacterial pneumonia. The developed system was based on six different datasets and used four different CNN models (Inception v3, ResNet50, DenseNet 201, and Xception). The proposed approach was evaluated with both hold-out validation and tenfold cross-validation: tenfold cross-validation achieved 0.99 AUC for COVID-19 detection, whereas hold-out validation achieved 0.94 sensitivity.
Islam et al. [100] developed a COVID-19 detection model by combining a CNN with Long Short-Term Memory (LSTM), a form of DL with feedback connections. Features were extracted by the CNN, and the LSTM used these features for classifying COVID-19. The proposed approach achieved a 98.9% F1-score. However, the approach has some limitations: the sample size was small; the model concentrates solely on posterior-anterior views and is unable to handle anterior-posterior views; and COVID-19 images showing various symptoms are difficult to categorize. The dataset contained 4575 X-ray images split into 3 groups: normal, COVID-19, and other pneumonia. Rahimzadeh and Attar [101] developed a combination model of ResNet50 V2 and Xception to detect COVID-19 in X-ray images. This model used 2 datasets divided into 3 classes, containing 180 COVID-19 images, 6054 pneumonia images, and 8851 images of normal cases. The proposed approach achieved 99.56% accuracy and 80.53% recall for the COVID-19 class; the accuracy over all classes was 91.41%.
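In such CNN-LSTM hybrids, the CNN maps each image (or image region) to a feature vector and the LSTM consumes the resulting sequence, with its final hidden state feeding the classifier. A minimal numpy forward pass of a single LSTM layer, with random vectors standing in for CNN features (weights and sizes are illustrative, not those of [100]):

```python
import numpy as np

rng = np.random.default_rng(3)

def lstm_forward(x_seq, Wx, Wh, b):
    """Forward pass of one LSTM layer over a sequence of feature vectors.
    Wx: (D, 4H), Wh: (H, 4H), b: (4H,); gate order is input, forget, cell, output."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in x_seq:
        z = x @ Wx + h @ Wh + b
        i = sigmoid(z[:H])           # input gate
        f = sigmoid(z[H:2 * H])      # forget gate
        g = np.tanh(z[2 * H:3 * H])  # candidate cell update
        o = sigmoid(z[3 * H:])       # output gate
        c = f * c + i * g            # cell state: long-term memory
        h = o * np.tanh(c)           # hidden state: summary of the sequence so far
    return h                         # final state -> dense softmax head

# One vector per image (or slice), standing in for CNN-extracted features.
D, H, T = 16, 8, 5
x_seq = rng.normal(size=(T, D))
Wx = rng.normal(scale=0.1, size=(D, 4 * H))
Wh = rng.normal(scale=0.1, size=(H, 4 * H))
b = np.zeros(4 * H)
h_final = lstm_forward(x_seq, Wx, Wh, b)
```

The feedback connection is the reuse of `h` and `c` across steps; that is what lets the classifier see information aggregated over the whole feature sequence rather than one image at a time.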
Sethy et al. [102] implemented a combination model for COVID-19 detection. The model combined SVM with one of 13 pre-trained CNN models: AlexNet, Inception v3, DenseNet201, MobileNet v2, GoogLeNet, XceptionNet, InceptionResNet V2, ResNet18, ResNet101, ShuffleNet, VGG16, ResNet50, and VGG19. The dataset contained 381 X-ray images; the combination of ResNet50 with SVM achieved the highest result, 95.33% accuracy. Farooq and Hafeez [103] developed the COVID-ResNet50 model based on the ResNet50 technique to categorize X-ray images into four groups: COVID-19, healthy, bacterial pneumonia, and viral pneumonia. The dataset was COVIDX, presented by the COVID-Net researchers [91], consisting of 1203 normal images, 931 bacterial pneumonia images, 660 viral pneumonia images, and 68 COVID-19 images. The model achieved 96.2% accuracy over all classes of the COVIDX dataset and 100% accuracy for the COVID-19 class. Riahi et al. [104] developed a combination model of Bidimensional Empirical Mode Decomposition (BEMD) with a 3D CNN. The initial X-ray images were decomposed into Intrinsic Mode Functions (IMFs) by BEMD, and the 3D CNN was applied to the video created from the IMFs to detect COVID-19. The dataset was obtained from [100], containing 1802 COVID-19 X-ray images, 1910 normal images, and 272 viral pneumonia X-ray images. The developed model achieved 100% sensitivity and accuracy.
Moujahid et al. [105] applied the Grad-CAM technique and three different transfer learning approaches (VGG16, VGG19, and MobileNet V2) for classifying X-Ray images into pneumonia, normal, or COVID-19 classes. Their dataset contained 1341 normal images, 1345 pneumonia images, and 840 COVID-19 images. VGG19 achieved 96.97% accuracy and a 100% F1-score. ElGannour et al. [106] proposed two concatenation models: the first based on MobileNet V2, DenseNet 201, and ResNet50 V2, and the second based on Xception, Inception V3, and InceptionResNet V2, for classifying X-Ray images into COVID-19, normal, viral pneumonia, and tuberculosis. Their dataset contained 10,399 images. The models achieved 99.80% and 99.71% accuracy for the first and second model, respectively.
Maguolo and Nanni [107] investigated biases in classifying X-ray images in order to identify the most suitable evaluation protocol for COVID-19 detection, since most datasets come from almost the same sources. They used four different datasets and conducted different tests to determine whether the classifier could identify the dataset source. They trained AlexNet on images whose centers were blacked out and whose lung regions were deleted, so that the classifier could learn nothing about the detection task itself and any remaining performance would reflect source-related biases. The results demonstrated that AlexNet could still recognize which source the COVID-19 images came from, achieving 99.97% accuracy. The dataset contained 339,271 X-ray images. Similarly, Cohen et al. [108] studied the generalization performance of models for classifying chest X-ray images that came from the same or different sources. They used the DenseNet model, trained on different datasets A, B, C, and D. The results showed that a model trained on datasets B, C, and D and tested on A performed worse than a model trained on B and tested on B. With the publication of COVID-19 datasets by [109], they tried to merge a COVID-19 dataset into the chest X-ray dataset to classify these images and conduct testing on it. However, the study has limitations; it considers dataset labels only and does not consider patient outcomes. Boulila et al. [110] proposed a new COVID-19 patient monitoring method that protects patient privacy in the setting of Saudi Arabia. It was a secure system for persistent patient monitoring, thanks to the employment of inexpensive wireless devices and a chaos-based substitution-box encryption algorithm, used to track daily activities and identify natural and unusual breathing rates. The system achieved a 99% Number of Pixel Change Rate (NPCR).
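NPCR, the metric reported in [110], measures the fraction of pixel positions whose values differ between two equally sized images, typically a plain image and its encrypted counterpart; values near 100% indicate strong scrambling. A small sketch of the computation (the toy arrays are illustrative):

```python
import numpy as np

def npcr(img_a, img_b):
    """Number of Pixel Change Rate: percentage of pixel positions whose
    values differ between two equally sized images."""
    return 100.0 * np.mean(img_a != img_b)

# Toy example: changing 1 of 16 pixels yields an NPCR of 6.25%;
# a good cipher would push this close to 100% between plain and cipher images.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[0, 0] = 255
rate = npcr(a, b)
```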
Table 3 shows a comparison between these papers.
Table 3
Comparison between papers of X-Ray for COVID-19 detection
Paper
Model technique
Dataset
Evaluation measures
Limitations
Paper type
Publication date
Publication source
[84]
CoroNet (deep convolutional neural network)
284 COVID-19 images 310 normal images 330 bacterial pneumonia images 327 viral pneumonia images
89.6% accuracy 93% precision 98.2% recall
The model was built by a small dataset
Scientific paper
1 Nov 2020
Elsevier
[86]
Pre-trained CNN resampling algorithms
RYDLS-20(1144 images)
89% f1 score
The dataset was imbalanced: 1000 healthy images, 90 COVID-19 images, and the rest belonged to other diseases
Scientific paper
1 Oct 2020
Elsevier
[91]
COVID-Net (open source deep neural network)
COVIDX (16,756 images)
92.4% accuracy 80% sensitivity
Model sensitivity to COVID-19 infections was very low
Scientific paper
11 Nov 2020
Nature Publication Group
[74]
DarkNet
127 COVID-19 images 500 pneumonia images 500 non pneumonia images
98.08% binary classification accuracy 87.02% multi classification accuracy
The model was not validated by large number of images
Scientific paper
1 Jun 2020
Elsevier
[70]
COVIDX-NET, VGG16, DenseNet201 VGG19, MobileNet
50 images
91% f1 score
Dataset was very small; the accuracy of MobileNet should be improved further
Scientific paper
24 Mar 2020
Arxiv
[76]
DeepCOVID (ResNet50, DenseNet121, ResNet18, SqueezeNet)
5071 images
95.6% specificity 100% sensitivity
The dataset was imbalanced
Scientific paper
1 Oct 2020
Elsevier
[87]
CXDX system (Efficient Net, VGG16)
215 COVID-19 images 500 normal images 533 pneumonia images
83.6% accuracy 90% sensitivity
The model was validated only on public datasets, which differ from clinical data, causing overfitting in external validation; the CADx system was not used by clinicians
Scientific paper
16 Oct 2020
Nature Publication Group
[75]
VGG19
700 pneumonia images 504 normal images 224 COVID-19 images
93.48% accuracy
Deeper analysis is required for COVID-19 patients to detect mild symptoms that cannot be visualized in X-Ray images
Scientific paper
Jun 2020
Springer
[85]
DCNN VGG19 VGG16 Xception Inception v3 InceptionResNet V2
125 COVID-19 images 152 normal images 50 pneumonia images
84.1% accuracy
The dataset was very small and imbalanced; the model alone cannot be relied on for COVID-19 detection, and professional diagnosis should be used alongside it
Scientific paper
2 May 2020
arXiv
[92]
DCNN and Inception v3
864 COVID-19 images 1345 viral pneumonia images 1341 normal images
96.9% accuracy
More deep learning approaches should be used for COVID-19 detection
Conference paper
11 Dec 2020
IEEE
[77]
ResNet50 ResNet 101 ResNet 152 Inception v3 Inception ResNet v2
2772 bacterial images 1493 viral pneumonia images 2800 normal images 341 COVID-19 images
96.1% accuracy DB1 99.5% accuracy DB2 99.7% accuracy DB3
Limited COVID-19 images
Scientific paper
24 Aug 2021
Springer
[95]
Deep Bayes-squeezeNet
1583 normal images 4,290 pneumonia images 76 COVID-19 images
98.26% accuracy
The number of COVID-19 images should be higher
Scientific paper
July 2020
Elsevier
[71]
VGG16 VGG19 ResNet DenseNet Inception v3
70 COVID-19 images 70 non-COVID-19 images
80% accuracy
Small dataset size; model accuracy should be higher for COVID-19 detection
Scientific paper
1 Jan 2021
Elsevier
[72]
Xception Inception v3 ResNet50 VGG16 VGG19
115 COVID-19 images
83% precision
COVID-19 precision was low, and the number of COVID-19 samples was very small
Scientific paper
21 April 2020
Engineering Archive
[96]
NASNetLarge Inception v3 DenseNet 169 ResNet InceptionResNet V2
108 COVID-19 images 515 pneumonia images 453 normal images
98% accuracy 99% AUC
More deep learning and pre-processing techniques should be used for better performance; the number of COVID-19 images was very small
Scientific paper
May 2021
Springer
[88]
CAD system depends on 15 different CNN techniques
860 images
90% f1 score 89.3% accuracy
More effective ensemble architectures should be employed; limited X-Ray images
Scientific paper
1 Jan 2020
IOS press
[97]
CAD system depends on YOLO predictor
50,490 images
97.40% accuracy
More help from physicians is needed for labeling the regions containing COVID-19 lesions
Scientific paper
May 2021
Springer
[80]
DCNN
150 COVID-19 patients
93% accuracy
Limited COVID-19 images
Scientific paper
1 May 2021
Elsevier
[81]
Deep anomaly model
100 images of COVID-19 1,431 other pneumonia images
96% accuracy
Small dataset size
Scientific paper
27 Nov 2020
IEEE
[100]
LSTM with CNN
1525 COVID-19 images 1525 pneumonia images 1525 normal images
98.9% F1 score
Small number of samples; the model focused only on the posterior-anterior (PA) view and cannot detect anterior-posterior (AP) views; images with several COVID-19 symptoms cannot be classified well; model performance was not compared with radiologists
Scientific paper
1 Jan 2020
Elsevier
[101]
CNN, ResNet50V2 Xception
6054 pneumonia images 8851 normal images 180 COVID-19 images
91.41% accuracy 80.53% recall
The model did not detect the COVID-19 false class well, which made its COVID-19 precision low; small number of COVID-19 images
Scientific paper
Jan 2020
Elsevier
[82]
CNN SVM Random forest
 
95.2% accuracy
Small dataset size; other features, such as texture features, should be used to improve accuracy
Scientific paper
5 May 2020
JJEE
[89]
GoogLeNet ResNet18 AlexNet GAN
69 COVID-19 images, 79 normal images, 79 bacterial pneumonia images, 79 viral pneumonia images
80.6% first scenario accuracy 85.2% second scenario accuracy 100% third scenario accuracy
Limited dataset size
Scientific paper
20 April 2020
MDPI
[102]
SVM classifier 13 transfer learning models
127 COVID-19 images 127 normal images 127 pneumonia images
95.33% accuracy
Limited dataset size
Scientific paper
2020
MDPI
[103]
COVID-ResNet50
COVIDx dataset (1203 normal images, 931 bacterial pneumonia images, 660 viral pneumonia images, 68 COVID-19 images)
96.2% accuracy
It cannot be employed directly in clinical situations; the model needs more data to be more clinically useful
Scientific paper
31 Mar 2020
arXiv
[107]
AlexNet
339,271 X-Ray images
99.97% accuracy
Limited COVID-19 images
Scientific paper
1 Dec 2021
Elsevier
[105]
VGG16 VGG19 MobileNet
1341 normal images 1345 pneumonia images 840 COVID-19 images
96.97% accuracy 100% F1-Score
Small number of COVID-19 images
Scientific paper
17 Nov 2021
Intelligent Automation and Soft Computing
[106]
Two concatenation models based on MobileNet V2, DenseNet 201, ResNet50 V2, Xception, Inception V3, InceptionResNet V2
10,399 images
99.80% first model accuracy 99.71% second model accuracy
Six publicly accessible datasets were considered in order to build a comparably balanced dataset, due to the scarcity of coronavirus images compared to the other diseases; there was a significant increase in computational complexity and execution time
Scientific paper
29 Dec 2021
MDPI
[73]
CheXNet model based on Xception DenseNet EfficientNet-B7 ResNet
1326 COVID-19 images, 5000 normal images, and 4600 CAP images
87.88% accuracy
Ensemble models must be developed in order to improve the detection of thoracic abnormalities in CXRs utilizing CheXNet; individual sample variability was brought on by changes in data ordering
Scientific paper
7 April 2022
Springer
[83]
ResNet50
7262 X-Ray images
99.17% accuracy 99.31% precision 99.03% sensitivity
 
Scientific paper
1 Jan 2022
Elsevier
[90]
InceptionResNet V2, Xception, VGG19, VGG16,ResNet50 V2, and DenseNet201
770 images
82.8% accuracy
Accuracy should be improved by using a larger image dataset for COVID-19, bacterial pneumonia, and viral pneumonia
Book chapter
15 Mar 2022
Springer

7.2 Detection of COVID-19 Through CT Scan Images

7.2.1 Binary Classification and Accuracy Less Than 90%

Shah et al. [111] created the self-developed CT-Net10 model for classifying CT scan images as COVID-19 or non-COVID-19, which achieved 82.1% accuracy. To reach higher accuracy, the CT scan images were also passed through multiple pre-trained models (VGG16, ResNet50, Inception V3, DenseNet, and VGG19). VGG19 proved superior, achieving 95.52% accuracy. The dataset contained 738 CT scan images, 349 of which were COVID-19 images from 216 patients.
Shuai Wang et al. [112] developed a DL model to investigate the radiographic changes in CT images. The model used a modified Inception transfer learning technique and was trained on 1065 CT images. The model yielded 89.5% accuracy and 0.87 sensitivity. However, it faced challenges: a poor signal-to-noise ratio and complicated data integration affected the efficiency of DL; the training dataset was small, which also affected the efficiency; and the enormous number of variable objects made the classification task difficult. Therefore, their future work is to link clinical and genetic features with the hierarchical features of CT images to enhance diagnosis through multi-model analysis of these features.
Amyar et al. [113] developed a multitask DL model for detecting COVID-19 from chest CT images, determining disease severity through segmentation of the infected region, and performing image reconstruction. The dataset came from multiple hospitals and contained 1369 CT images, obtained from [114]. The model achieved 86% accuracy and 0.93 AUC. Xiong et al. [115] applied an AI-based system to distinguish COVID-19 from other pneumonia. The lung in each chest CT image was first segmented by Hounsfield Unit (HU) thresholding with a value of -320. The segmented region was then input to an EfficientNet B4 deep neural network to classify COVID-19 versus other pneumonia. The dataset contained 512 COVID-19 CT images and 665 non-COVID-19 pneumonia images. The proposed model achieved 87% accuracy and 0.90 AUC.
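HU thresholding of the kind used in [115] exploits the fact that air-filled lung tissue has strongly negative Hounsfield Unit values compared with surrounding soft tissue. A toy sketch (the synthetic slice and its HU values are illustrative; real pipelines add morphological clean-up and component filtering after the threshold):

```python
import numpy as np

def lung_mask(ct_slice_hu, threshold=-320):
    """Crude lung segmentation by HU thresholding: air-filled lung tissue is
    far more radiolucent (more negative HU) than soft tissue, so voxels below
    the threshold form the lung candidate region."""
    return ct_slice_hu < threshold

# Toy slice: soft tissue near 40 HU surrounding a "lung" region near -700 HU.
ct = np.full((64, 64), 40.0)
ct[16:48, 16:48] = -700.0
mask = lung_mask(ct)
lung_fraction = mask.mean()  # fraction of the slice flagged as lung
```

The resulting binary mask is what would be cropped out and fed to the downstream classifier (EfficientNet B4 in [115]).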

7.2.2 Binary Classification and Accuracy More Than 90%

Wang et al. [116] developed the DeCoVNet framework based on UNet and a CNN to detect COVID-19 from CT images. UNet was used to segment the lung, and the deep network then used this segmented region to predict the probability of COVID-19 infection. The model achieved 90.9% accuracy and 95.9% AUC. The dataset contained 219 non-COVID and 313 COVID-19 CT images. However, the study has some limitations: the UNet was trained with imperfect ground-truth masks, the dataset came from only a single hospital, and there was no CAP data in the study, unlike [117], where a CAP dataset was added.
Do and Vu [118] investigated several transfer learning models (VGG16, Xception, Inception V3, DenseNet201, InceptionResNet V2, DenseNet169, VGG19, and DenseNet121) for COVID-19 detection in CT scan images. The dataset contained 397 normal images and 349 COVID-19 images. DenseNet201 reached the highest accuracy in detecting COVID-19, achieving 85% accuracy and 91% recall. Their future work includes investigating a model that stacks multiple architectures and integrating several imaging modalities into a single model.
Attallah et al. [119] developed a CAD system based on multiple CNNs to detect COVID-19 from CT images. The model employed different CNN techniques (GoogLeNet, AlexNet, ResNet18, and ShuffleNet). The dataset, available at [114], contained 347 COVID-19 and 347 non-COVID CT images. The CAD system achieved 94.7% accuracy and 0.98 AUC, which is better than [118]. However, it has some challenges: the need for more training data to differentiate COVID-19 from other pneumonia types, and the lack of support for more segmentation techniques to distinguish the lung from other tissues.
Gozes et al. [120] developed an automated AI-based analysis tool for tracking the progress of COVID-19. The testing stage was conducted on 157 patients from China and the United States. The results showed that the developed system could automatically extract slices with lung opacities and produce both a quantitative opacity measure and a 3D volume visualization of the opacities. The experiment achieved 98.2% sensitivity and 92.2% specificity.
Shan et al. [121] proposed a VB-Net neural network for quantification and segmentation of regions infected with COVID-19 and of the entire lung from chest CT images. They aimed to evaluate disease progression and analyze changes in COVID-19 severity during the treatment period. They used 249 COVID-19 CT images for the training phase and 300 for validation, achieving a 91.6% Dice similarity coefficient between manual and automatic segmentation and a 0.3% mean Percentage Of Infection (POI) estimation error for the entire lung. However, the study has some limitations: the validation dataset was collected from one center and might not represent all cases of COVID-19 from other areas, and the proposed system only quantified COVID-19 infection and did not quantify other pneumonia. Therefore, their future work is to apply transfer learning to enable the system to quantify the severity of other pneumonia.
Chen et al. [122] developed a DL system to distinguish COVID-19 from other pneumonia. The proposed model employed UNet++ and a ResNet50 pre-trained on the ImageNet dataset [68]. The study was built on 20,886 CT images of COVID-19 from 51 patients and 14,469 CT images of other diseases from 55 patients. Owing to the large number of images, the model achieved 100% sensitivity and 95.2% accuracy, which is higher than the accuracy of the model in [119]. Jin et al. [123] proposed an AI system to detect COVID-19 through CT images, building a pipelined model on ResNet50 and 3D UNet++. The dataset was collected from five different centers and contained 723 COVID-19 and 413 non-COVID images. It achieved 94.8% accuracy and 97.4% sensitivity. Abbasian Ardakani et al. [124] developed a CAD system for COVID-19 diagnosis (COVIDiag) based on 306 COVID-19 patients and 306 non-COVID patients. The proposed model used different classifiers (K-Nearest Neighbor (KNN), Decision Tree (DT), Ensemble, SVM, and Naïve Bayes) to detect COVID-19 based on extracted image features, lesion distribution, and ground-glass opacity. The Ensemble classifier was the best among them, achieving 96.5% AUC. Their future work is to develop a model that can estimate the severity of infection in COVID-19 patients. Similarly, Afify et al. [125] developed a CAD system to detect COVID-19 based on 200 CT scan images (100 COVID-19 and 100 non-COVID) obtained from [114]. The CAD system had five stages: lung segmentation through threshold-based segmentation; feature extraction on the segmented region; feature selection using a genetic algorithm; classification of COVID-19 using a decision tree and KNN with k = 3; and, finally, performance analysis of the proposed model, in which KNN achieved 100% accuracy, whereas the decision tree achieved 95% accuracy.
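The KNN classification step used in [124, 125] is easy to sketch directly: a query feature vector receives the majority label of its k nearest training vectors. The block below uses synthetic 5-dimensional features standing in for the handcrafted CT features of those papers (cluster positions and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def knn_predict(train_X, train_y, query, k=3):
    """k-Nearest-Neighbor classification: the query feature vector gets the
    majority label among its k closest training vectors (Euclidean distance)."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest_labels = train_y[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest_labels, return_counts=True)
    return values[np.argmax(counts)]

# Two synthetic feature clusters standing in for non-COVID (0) and COVID-19 (1).
X = np.vstack([rng.normal(0.0, 0.3, (20, 5)), rng.normal(2.0, 0.3, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
label = knn_predict(X, y, query=np.full(5, 2.1), k=3)
```

An odd k (such as the k = 3 used in [125]) avoids ties in binary classification.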
Saeedi et al. [126] developed a CAD system for online detection of COVID-19 from CT scan images: users upload their images, and the system gives them the detection result. The proposed model was based on the DenseNet121 network for reducing image dimensions and used NU-SVM to overcome overfitting problems; it also combined ResNet, MobileNet, and Inception. The developed approach achieved 90.80% recall and 90.61% accuracy. The model was built on 349 COVID-19 and 397 non-COVID-19 images obtained from [114]. Ardakani et al. [127] created a CAD system based on ten pre-trained CNNs (AlexNet, ResNet101, ResNet50, ResNet18, SqueezeNet, GoogLeNet, VGG16, VGG19, MobileNet v2, and Xception) to classify COVID-19 and non-COVID-19 CT images. The dataset consisted of 1020 slices from 86 non-COVID-19 patients and 180 COVID-19 patients. The results showed that Xception and ResNet101 had the highest performance, since both provided 0.994 AUC; however, ResNet101 achieved 100% sensitivity, while the sensitivity of Xception was 98.04%. Acar et al. [128] developed a CAD system based on 7717 CT images to detect COVID-19 cases. The system used Low-Dose CT (LDCT) images together with CNN methods to overcome the noise in low-dose CT images. ResNet50 V2 was used for extracting features, quantum Fourier transform for lung segmentation, and the t-SNE method for assessing the quality of the extracted features. The developed approach achieved 99.5% accuracy and 99% sensitivity for COVID-19 detection.
Swapnarekha et al. [129] applied ResNet50 V2 and DenseNet201 for detecting COVID-19 from CT images. The dataset contained 610 COVID-19 images and 600 non-COVID-19 images. ResNet50 V2 achieved 95.87% accuracy, 91.67% specificity, and 100% sensitivity, while DenseNet201 achieved 97.11% accuracy, 96.67% specificity, and 97.54% sensitivity. Mete et al. [130] applied different deep learning approaches (VGG19, VGG16, AlexNet, Xception, GoogLeNet, ResNet50, SqueezeNet, and ResNet101) as feature extractors on 1345 CT images divided into COVID-19 and non-COVID-19 images. SVM, RF, DT, Naive Bayes (NB), and KNN were used as classifiers of these features. The combination of SVM and ResNet50 had the highest performance, with 96.29% accuracy, a 95.86% F1-score, and 0.9821 AUC. Kogilavan et al. [131] evaluated different deep learning models (Xception, DenseNet121, MobileNet, VGG16, NASNet, and EfficientNet) for detecting COVID-19 from 3873 CT images. VGG16 achieved the best performance with 97.68% accuracy.

7.2.3 Multi-classification and Accuracy Less Than 90%

Xu et al. [132] developed an early screening model of COVID-19. The approach depended on several CNN models and Bayesian functions to detect COVID-19 and calculate the infection probability in CT images. The dataset contained 618 CT images. The developed approach achieved 86.7% accuracy for classifying COVID-19. Wang et al. [133] implemented a DL system for detecting COVID-19. The dataset contained 5372 patients. The model achieved 0.87 AUC; however, it has limitations: it does not consider the prediction of events such as Intensive Care Unit (ICU) admission or death, and CT images with distinct slice thicknesses are included in this study. Therefore, their future work is to convert CT images with different slice thicknesses into a unified slice thickness using a GAN.
Ying et al. [134] implemented a DeepPneumonia model for identifying COVID-19. They developed a Detail Relation Extraction Neural Network (DRE-NET) model based on ResNet50 to extract complex features from images and combined a feature pyramid network with an attention module to classify COVID-19. The dataset consisted of 101 patients with bacterial pneumonia, 86 healthy patients, and 88 COVID-19 patients. The DRE-NET model achieved 86% accuracy and 95% AUC.
Singh et al. [135] developed a DL model based on a CNN and Multi-Objective Differential Evolution (MODE) for COVID-19 detection through CT images. The proposed model achieved higher accuracy than competitive models, such as an Artificial Neural Network (ANN), CNN, and Adaptive Neuro-Fuzzy Inference System (ANFIS), which merges the advantages of both ANN and Fuzzy Logic (FL) [136]. The results showed that the model could be used in real time to distinguish COVID-19 chest CT images from other pneumonia.

7.2.4 Multi-classification and Accuracy More Than 90%

Li et al. [117] developed the COVNet framework, a Three-Dimensional (3D) DL model based on ResNet50, to detect COVID-19, Community-Acquired Pneumonia (CAP), which is an acute lung tissue infection acquired in the community or within 48 h of admission to the hospital [137], and other lung conditions through CT images. The dataset contained 4352 chest CT images. The proposed model achieved 0.96 AUC. However, it has a limitation: it could not categorize the severity of COVID-19. Sharma et al. [138] examined the important role of machine learning techniques in fighting COVID-19 and whether CT scan images could become the first alternative to reverse transcription-polymerase chain reaction (RT-PCR) in detecting COVID-19: Is COVID-19 different from other pneumonia that resides in the lungs? How can COVID-19 CT scan images be distinguished from other kinds of lung CT scan images? To answer these questions, the authors employed customized software built on Microsoft Azure machine learning algorithms. The dataset contained 2200 CT scan images, and the training model, based on a ResNet architecture with Grad-CAM, achieved 91% accuracy.
Jin et al. [36] developed an AI-based system to classify CT images into four classes: COVID-19, CAP, influenza A and B, and non-pneumonia. The dataset was collected from different centers and contained 10,250 scans. The model achieved 97.17% AUC. However, it has a challenge: Guided Grad-CAM did not achieve lesion segmentation; had it done so, phenotype analysis would have worked better on accurately segmented regions. Their future work is to collect more CT images of other lung diseases to achieve higher performance. Zhang et al. [139] developed a Novel Coronavirus Pneumonia (NCP) detection system based on chest CT images from 3777 patients. The system consists of two models: a lung lesion segmentation model and a diagnosis prediction model that took the segmented lung lesion from the CT image as input and classified it as COVID-19, other pneumonia, or normal. The proposed model achieved 0.97 AUC.
Fang et al. [140] studied the travel history of two patients with COVID-19, a 45-year-old woman and a 32-year-old man, to determine whether CT images or RT-PCR had the higher sensitivity for detecting COVID-19. The results showed that CT images were the more effective in COVID-19 detection. Similarly, Xie et al. [141] compared RT-PCR and CT images to determine which had the better accuracy in COVID-19 detection. The results showed that 3% of 167 patients had negative COVID-19 results using RT-PCR, despite CT images showing that these patients were COVID-19-positive. After some days, the results showed that CT images had better sensitivity for COVID-19 detection than RT-PCR. In addition, Bernheim et al. [142] studied the CT images of 121 cases from four different Chinese hospitals. They determined the relationship between symptom onset and the CT scan and characterized the signs of infection. The results showed that disease severity increased with time from first onset.
Zhang et al. [143] developed an analysis system for detection, quantification, and localization of COVID-19 from the chest CT images of 2460 patients. The proposed system could detect the infected region and measure the percentage of infection in the left and right lungs. However, the study has a limitation: the intelligent assistant analysis system must be adjusted manually when identifying typical lesions. Singh et al. [144] proposed the LungINFseg model to segment the COVID-19-infected region in CT images. To estimate the performance of LungINFseg, a comparative study was conducted between LungINFseg and 13 other segmentation models (UNET, SegNet, SQNet, FCN, Inf-Net, ERFNET, ContextNet, FSSNet, DABNet, ESNet, CGNet, EDANet, and MISccn). LungINFseg achieved an 80.34% dice score on a dataset of 1800 annotated slices. As future work, the authors plan to provide accurate COVID-19 severity prediction by integrating an automated CAD system with the proposed model, and to apply the model to other image segmentation problems, such as breast tumor segmentation in ultrasound images.
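The dice score and IoU used to evaluate LungINFseg and similar segmentation models measure the overlap between a predicted lesion mask and the ground-truth mask. A small self-contained sketch of both metrics:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """Intersection over Union (Jaccard index): |A∩B| / |A∪B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy masks: the predicted lesion covers half of the ground truth
gt = np.zeros((4, 4), dtype=int); gt[:2, :] = 1   # 8 positive pixels
pr = np.zeros((4, 4), dtype=int); pr[0, :] = 1    # 4 positive pixels, all inside gt
dice = dice_score(pr, gt)   # 2*4 / (4+8) ≈ 0.667
iou = iou_score(pr, gt)     # 4 / 8 = 0.5
```

Note that Dice is always at least as large as IoU for the same pair of masks, which explains why papers often report a higher dice score than IoU on the same predictions (e.g., 80.34% Dice vs. 68.77% IoU in Table 4).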
Table 4 represents a comparison between these papers.
Table 4
Comparison between papers of CT images for COVID-19 detection
Paper
Model technique
Dataset
Evaluation measures
Limitations
Paper type
Publication date
Publication source
[117]
(COVNET) framework based on ResNet50
1296 COVID-19 CT exams 1735 CAP 1357 non pneumonia CT exams
96% AUC
Couldn’t categorize the severity of COVID-19
Scientific paper
19 Mar 2020
Radiology Society of North America
[111]
CT-net10 based on VGG16 ResNet50 InceptionV3 DenseNet VGG19
738 CT-Scan images 349 images of them were COVID-19
95.52% accuracy
Small dataset size
Scientific paper
1 Feb 2021
Springer
[132]
Several CNNs models bayesian function
219 COVID-19 images 224 influenza A viral pneumonia images 157 healthy cases images
86.7% accuracy
There may be some overlap between COVID-19’s symptoms and those of other pneumonias. The study used only a small number of samples; more training and test samples should be available
Scientific paper
27 June 2020
Elsevier
[116]
(Decov Net) framework U-Net and CNN
219 non-COVID CT images, 313 COVID-19 images
90.9% accuracy 95.9% AUC
The U-Net model could be enhanced by employing 3D segmentation networks and adopting exact ground truth annotated by experts. The dataset came from only one hospital. The study did not include a CAP dataset, which would improve model performance. Due to the algorithm’s deep learning foundation and immature explainability, it operated in a black-box manner when diagnosing COVID-19
Scientific paper
20 May 2020
IEEE
[138]
ResNet Grad-Cam
800 COVID-19 images 600 viral pneumonia images 600 normal images
91% accuracy
There should be more numbers of model samples to improve model performance
Scientific paper
22 July 2020
Springer
[118]
VGG16 VGG19 Inception V3 Inception ResNetV2 Xception DenseNet201 DenseNet21 DenseNet169
397 normal images 349 COVID-19 images
85% accuracy 91% recall
Study contained limited CT images
Scientific book
21 Aug 2020
SPIE
[119]
CAD system based on GoogLeNet AlexNet ResNet18 shuffle-Net SVM as a classifier
347 COVID CT-images 347 non-COVID CT-images
94.7% accuracy 98% AUC
Small sample of COVID-19 images. Model only differentiated COVID-19 from non-COVID-19 and did not distinguish COVID-19 from other pneumonia. The model performance wasn’t compared with trained radiologists
Scientific paper
30 Sep 2020
PeerJ Inc
[133]
automated deep learning system
5372 CT images
87% AUC
The study didn’t consider death and ICU events. The study didn’t contain CT images with multiple slice thicknesses
Scientific paper
6 Aug 2020
European respiratory society
[120]
automated analysis tool based on AI
157 images
98.2% sensitivity 92.2% specificity
Limited dataset size
Scientific paper
24 Mar 2020
Arxiv
[121]
VB-Net neural network
279 COVID-19 CT images
3% mean POI 91.6% Dice similarity coefficient between manual and automatic segmentation
Limited dataset size
Scientific paper
30 Mar 2020
Arxiv
[112]
inception transfer learning technique
325 COVID-19 images 740 viral pneumonia images
89.5% accuracy 87% sensitivity
Small training samples. Model effectiveness has been hampered by a number of factors, including low signal-to-noise ratio and difficult data integration. The examined CT scans came from individuals with significant lung lesions at advanced illness stages
Scientific paper
24 Feb 2021
Springer
[122]
UNet ++ ResNet-50
20,886 COVID-19 images 14,469 CT images for other diseases
95.2% accuracy 100% sensitivity
 
Scientific paper
5 Nov 2020
Nature publishing Group
[113]
U-Net
425 normal images 495 other infection 449 COVID-19 images
86% accuracy 93% AUC
The study had a small samples number
Scientific paper
8 Oct 2020
Elsevier
[115]
EfficientNet B4 deep neural network
521 COVID-19 images 665 non-COVID-19 pneumonia
87% accuracy 90% AUC
Because the radiologists in this study analysed the identical patients twice, first without and then with AI aid, there may be bias in the results. The distribution of the period between the beginning of symptoms and the CT was variable in the COVID-19 cohort. The study had a limited sample size. Patients with COVID-19 pneumonitis had significant baseline differences from those with non-COVID-19 pneumonitis, which could have introduced bias
Scientific paper
22 March 2021
Radiology society of North America
[36]
U-Net ResNet 152 Guided Grad Cam
11,356 CT
97.81% AUC
For improving diagnosis capabilities, gathering more data on more subtypes of pneumonia or other lung diseases would be beneficial. Guided Grad-CAM can only obtain the attention area rather than segmenting the lesion, and phenotype feature analysis would be better served by correct segmentation
scientific paper
9 Oct 2020
Nature publishing Group
[123]
ResNet50 3D U-Net++
723 COVID-19 images 413 non-COVID images
94.8% accuracy 97.4% sensitivity
The study didn’t include dataset for other pneumonia
scientific paper
23 Mar 2020
Cold Spring Harbor Press
[134]
Deepneumonia model, details relation extraction neural network (DRE-NET)model based on ResNet50
101 bacterial pneumonia patients 86 healthy patients 88 COVID-19 patients
86% accuracy 95% AUC
The model can’t resolve the batch effect to produce precise predictions for additional data sources because the training data is still small
scientific paper
11 Mar 2021
IEEE
[139]
NCP system based on AI UNET DRUNET FCN SegNet Deeplabv3
3,777 patients
97% AUC 92.49% accuracy
The study contained small sample size
scientific paper
11 Jun 2020
Elsevier
[124]
CAD system (COVIDag) KNN, decision tree ensemble support vector machine naïve bayes as a classifier
306 COVID-19 patients 306 non-COVID patients
96.5% AUC
Preliminary RT-PCR results could provide erroneous negative results
scientific paper
31 Jan 2021
springer
[125]
CAD system genetic algorithm decision tree KNN as a classifier
100 images for COVID-19 100 images non-COVID
100% KNN accuracy 95% decision tree accuracy
Study contained small dataset size
scientific paper
25 Nov 2020
IIETA
[126]
CAD system DenseNet121 NU-SVM ResNet MobileNet Inception
349 COVID-19 patients 397 non-COVID patients
90.80% recall 90.61% accuracy
Small dataset size
scientific paper
24 Jun 2020
Arxiv
[127]
CAD system based on AlexNet ResNet101 ResNet50 ResNet18 SqueezeNet GoogLeNet VGG16 VGG19 MobileNet-v2 Xception
1,020 slices
99.4% ResNet-101 and Xception AUC 100% ResNet-101 sensitivity 98.04% Xception sensitivity
CAD evaluation and performance weren’t compared with radiologists. Negative RT-PCR results may be uncommon in COVID-19 patients
scientific paper
1 Jun 2020
Elsevier
[128]
ResNet50v2
7,717 CT images
99.5 % accuracy 99% sensitivity
Class imbalance problem
scientific paper
4 Jan 2021
Cold Spring Harbor Laboratory Press
[144]
lungINFseg model based on deep learning
1800 annotated slices
68.77% IOU 80.34% dice score
Study should integrate an automated CAD system to provide better prediction of COVID-19 severity
scientific paper
22 Jan 2021
MDPI
[129]
ResNet50 V2 and DenseNet201
610 COVID-19 images, 600 non-COVID-19 images
96.67% accuracy (DenseNet201)
Small dataset size
Conference paper
5 Sep 2021
Springer
[130]
VGG19, VGG16, AlexNet, Xception, GoogLeNet, ResNet50, SqueezeNet, and ResNet101 as feature extractors. SVM, RF, DT, Naïve Bayes (NB), and KNN were used as classifiers
1345 CT images
SVM and ResNet50 had the highest performance with 96.29% accuracy, 95.86% F1-score, and 0.9821 AUC
The study dataset was collected from only one hospital. The dataset didn’t contain other different pneumonias
scientific paper
1 Sep 2022
Elsevier
[131]
Xception, DenseNet121, MobileNet, VGG16, NASNet, and EfficientNet
3873 CT images
97.68% VGG16 accuracy
Model couldn’t detect the COVID-19 affected areas
scientific paper
1 Feb 2022
Hindawi

7.3 Detection of COVID-19 Through Ultrasound Images

7.3.1 Binary Classification and Accuracy Less Than 90%

Roy et al. [145] developed a model for the automatic analysis of Lung US (LUS) images for COVID-19 detection. The model was built on a deep architecture that identifies regions with pathological artifacts and could localize the disease based on consistency losses. The proposed model could obtain an accurate COVID-19 diagnosis. However, it has some limitations: the dataset is small, it was collected from a single place, and the model needs a heterogeneous dataset to overcome bias. Meanwhile, Karakuş et al. [146] proposed a method to quantify line artifacts in LUS images from 100 images of 9 patients with COVID-19. The model achieved 87% accuracy.

7.3.2 Multi-classification and Accuracy Less Than 90%

Born et al. [147] developed a DL framework to detect COVID-19 from ultrasound (US) images. They employed the POCUS dataset containing three classes of US images: 654, 172, and 277 images for COVID-19, healthy, and bacterial pneumonia, respectively. The developed framework (POCOVID-NET) was based on CNN and VGG16. The framework was evaluated with fivefold cross-validation and achieved 89% accuracy and 96% sensitivity.
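Fivefold cross-validation, the protocol used to evaluate POCOVID-NET, holds out each fifth of the data once for testing while training on the remaining four fifths, then averages the five scores. A minimal NumPy sketch of the fold logic, with a toy nearest-class-mean classifier standing in for the CNN (the data and "model" are entirely synthetic; only the evaluation protocol mirrors the paper):

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

def cross_validate(X, y, train_and_score, k=5):
    """Hold out each fold once; return the mean score over k runs."""
    folds = kfold_indices(len(X), k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(train_and_score(X[train_idx], y[train_idx],
                                      X[test_idx], y[test_idx]))
    return float(np.mean(scores))

def nearest_mean(X_tr, y_tr, X_te, y_te):
    """Toy classifier: assign each sample to the nearest class mean."""
    means = {c: X_tr[y_tr == c].mean() for c in np.unique(y_tr)}
    pred = np.array([min(means, key=lambda c: abs(x - means[c])) for x in X_te])
    return (pred == y_te).mean()

# Perfectly separable 1-D toy data: class 0 at 0.0, class 1 at 10.0
X = np.concatenate([np.zeros(50), np.ones(50) * 10.0])
y = np.concatenate([np.zeros(50, int), np.ones(50, int)])
acc = cross_validate(X, y, nearest_mean, k=5)
```

Averaging over folds gives a less optimistic, lower-variance performance estimate than a single train/test split, which matters for the small ultrasound datasets surveyed here.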
Moore and Gardiner [148] published a paper explaining the importance of LUS images in detecting COVID-19, where US images could be used in the ICU to identify lung conditions. The results showed that LUS images are more sensitive than X-ray and CT images. In addition, US could be used to monitor different lung conditions, which helps detect COVID-19 symptoms. Khalili et al. [149] studied the importance of US images and the COVID-19 findings they reveal, such as unsmooth pleural lines and patchy consolidation. They discussed the advantages of LUS images: LUS could be used in the ICU as an alternative to CT scan images, uses no radiation, and has a lower cost. However, it is less sensitive than CT images; therefore, it could not be used for COVID-19 diagnosis, since it has no ability for lesion detection. Table 5 represents a comparison between these papers.
Table 5
Comparison between papers of US images for COVID-19 detection
Paper
Model technique
Dataset
Evaluation measures
Limitations
Paper type
Publication date
Publication source
[147]
POCOVID-NET CNN VGG16
POCUS dataset (654 COVID-19 images 172 healthy images 277 bacterial pneumonia images)
89% accuracy 96% sensitivity
Small dataset size
Scientific paper
25 April 2020
Arxiv
[145]
Deep architecture and network based on consistency losses and STN
277 LUS videos from 35 patients
70% precision 61% F1-Score 60% Recall
The study did not include a heterogeneous and balanced dataset. Patient demographic features were unknown. All datasets were collected from Italian hospitals, so there was a bias in data collection. There were noisy labels in the dataset
Scientific paper
13 May 2020
IEEE
[146]
Function of nonconvex cauchy with method of line artifact quantification of ultrasound images
100 images from 9 COVID-19 patients
87% accuracy
A clinical drawback of LUS is that line artefact quantification relies on visual estimation, which may make it difficult to adequately depict overall fluid overload or illness severity. Small dataset size
Scientific paper
6 May 2020
Arxiv

7.4 Detection of COVID-19 Through Multi-model Imaging

7.4.1 Binary Classification and Accuracy Less Than 90%

Alom et al. [150] developed deep learning models for different tasks. The first model was for the classification of COVID-19 from X-ray and CT images and was built on the Inception Recurrent Residual Neural Network (IRRNN). The second was for segmenting infected regions in X-ray and CT images for the detection and localization of COVID-19. The dataset consisted of 420 X-ray samples and 267 CT-Scan samples. The model achieved 84.67% and 98.78% COVID-19 detection accuracy for X-ray and CT-Scan, respectively. However, the main limitation of the developed model was the small sample size.

7.4.2 Binary Classification and Accuracy More Than 90%

Mukherjee et al. [41] developed a CNN-tailored Deep Neural Network (DNN) for the detection of COVID-19 from CT-Scan and X-Ray images. The model was built using two different datasets. The first dataset contained 168 COVID-19 X-ray images and 168 non-COVID images that included other diseases, such as SARS and MERS. The second dataset contained 168 COVID-19 CT-Scan images and 168 non-COVID-19 images. This model achieved 96.28% accuracy, 98.08% AUC, and a 0.0208 false-positive rate. Jain et al. [151] applied transfer learning approaches, such as VGG16, MobileNet, Inception, DenseNet121, and ResNet50, for detecting COVID-19 and pneumonia from X-Ray and CT images. VGG16 achieved the highest performance for X-Ray images with 99% accuracy, while DenseNet121 had the best performance for CT images with 97% accuracy.

7.4.3 Multi-classification and Accuracy Less Than 90%

Horry et al. [152] developed a framework for detecting COVID-19 from X-Ray, CT-Scan, and ultrasound images. VGG16 classified images into three classes: normal, COVID-19, and pneumonia. The number of images in each class in the dataset is shown in Table 6. The ultrasound images had the best precision of 100%, whereas X-ray and CT achieved 86% and 84% precision, respectively.
Table 6
Dataset used in the study [152]
Image modality (condition: number of source images)
X-Ray: COVID-19 140, pneumonia 322, normal 60,361
CT: COVID-19 349, non-COVID-19 397
US: COVID-19 399, pneumonia 277, normal 235

7.4.4 Multi-classification and Accuracy More Than 90%

Panwar et al. [153] developed a transfer learning model for detecting COVID-19 using three different imaging datasets: pneumonia X-ray images, COVID-19 X-ray images, and SARS-COV-2 CT-Scan images. The developed VGG16-based model could detect COVID-19 faster than RT-PCR by 2 s. The experiments showed that there was a relation between pulmonary diseases such as COVID-19 and pneumonia. The model achieved 95.6% accuracy.
El Asnaoui and Chawki [154] developed an automated method to distinguish COVID-19 from normal and other pneumonia classes. The authors compared different deep learning models, such as DenseNet201, MobileNetv2, VGG16, VGG19, InceptionResNetv2, ResNet50, and Inceptionv3. These models were developed using 6087 X-Ray and CT-Scan images. The results showed that InceptionResNetv2 had the best performance, achieving an accuracy of 92.18%.
Gour et al. [155] proposed an ensemble model based on VGG19 and Xception for detecting COVID-19 from CT and X-Ray images. The X-Ray dataset contained 3040 chest X-Ray images divided into COVID-19, normal, and pneumonia images. The CT dataset contained 4645 images divided into COVID-19 and no-findings images. The proposed model achieved 97.62% multi-classification sensitivity for X-Ray images, and 98.31% binary-classification sensitivity for CT images.
Sarosh et al. [156] developed a detection and segmentation model of COVID-19 from X-ray and CT-Scan images based on ResNet50, AlexNet, and VGG16. The proposed model aimed to distinguish COVID-19 from other viral and bacterial pneumonia, CAP, and healthy images, and to identify and segment the infected region in order to quantify the size and ratio of infection.
Table 7 shows a comparison between these papers.
Table 7
Comparison between papers of multi-model images for COVID-19 detection
Paper
Model technique
Dataset
Evaluation measures
Limitations
Paper type
Publication date
Publication source
[152]
VGG16
140 X-Ray COVID-19 images 322 X-Ray pneumonia images 60361 X-Ray normal images 349 CT Scan COVID-19 images 397 CT Scan non-COVID images 399 Ultrasound COVID-19 images 277 Ultrasound pneumonia images 235 Ultrasound normal images
100% ultrasound precision 86% X-Ray precision 84% CT-Scan precision
The study contained small dataset size
Scientific paper
14 August 2020
IEEE
[150]
(IRRNN) Inception recurrent residual neural network. NABLA-N network
267 CT-Scan 420 X-Ray images
84.67% X-Ray accuracy 98.78% CT-Scan accuracy
Class imbalance problem. The CT model was tested with only 300 images. The COVID-Seg CT model produced several false-positive detections as a result of the lack of labelled data for lung segmentation in CT for COVID-19
Scientific paper
7 April 2020
Arxiv
[153]
VGG 16 Grad-CAM
1252 COVID-19 CT-Scan images 1230 non-COVID CT images 526 X-Ray pneumonia images 5856 X-Ray images for normal and pneumonia diseases
95.6% accuracy
Model should be tested with radiology images of patients with symptoms pneumonia
Scientific paper
22 Nov 2020
Elsevier
[41]
CNN-tailored DNN
672 images: 168 COVID-19 and 168 non-COVID images for each of CT and X-Ray
96.28% accuracy 98.08% AUC
Small datasets size
Scientific paper
6 Nov 2020
Springer
[154]
DenseNet201 MobileNetv2 VGG16 VGG19 InceptionResNetv2 ResNet50 Inceptionv3
6087 X-Ray and CT-Scan images (1,583 normal images, 213 COVID-19 images, 2780 bacterial pneumonia images, 1,493 coronavirus images)
92.18% accuracy for InceptionResNetv2 88.09% accuracy for DenseNet201
The study had a small dataset. More feature extraction techniques, such as YOLO, should be used to improve performance
Scientific paper
22 May 2020
Taylor and Francis
[155]
ensemble model based on VGG19 and Xception
3040 chest X-Ray images divided into COVID-19, normal, and pneumonia images. CT dataset contained 4,645 images divided into COVID-19 and no-findings images
97.62% multi-classification X-Ray sensitivity 98.31% binary classification CT sensitivity
The study should extend the model dataset for bacterial and viral pneumonia for achieving better performance
Scientific paper
1 Jan 2022
Elsevier
[151]
VGG16, MobileNet, Inception, DenseNet121, and ResNet50
 
VGG16 achieved the highest performance for X-Ray images with 99% accuracy, while DenseNet121 had the best performance for CT images with 97% accuracy
 
book chapter
24 Jun 2022
IEEE

8 Publicly Available Datasets

In the related work in Sect. 7, several datasets were cited. A summary of these datasets is provided in Table 8. This summary includes a reference to the dataset, its name, a brief description, the type of images (X-Ray, CT, or US) included, and the number of COVID-19 samples.
The most frequently cited dataset is the COVID-19 Image Data Collection [109]. This dataset was collected from different sources, such as Radiopaedia.org [157] and the Italian Society of Medical and Interventional Radiology (SIRM) COVID-19 Database [158]. This dataset provides academics working on artificial intelligence with COVID-19 images from several available publications and websites. Each image in this collection has a number of variables, such as sex, date, age, survival, and medical records.
The COVID-19 Radiography Database [159], winner of the COVID-19 Dataset Award, was created by combining data from different sources, such as [98, 109, 158, 160].
The COVNET dataset [117], which contained 4352 chest CT scans, was collected from 3,322 patients in six different hospitals between August 2016 and February 2020.
The most widely used CT dataset is the COVID-CT Dataset [114]. This dataset contained 349 CT COVID-19 images with the clinical findings of 216 COVID-19 patients and 463 non-COVID-19 images. It has metadata about patients, such as disease severity, age, gender, and medical history. The COVID-CT Dataset was used in many studies, such as [119].
The CT dataset in [138] was collected from different sources, such as the COVID-CT Dataset, SIRM, and hospitals in Russia and India, between 1 March 2020 and 25 April 2020.
In May 2021, an open-access chest CT COVID-19 dataset was released [161]. This dataset contained more than 1000 CT COVID-19 images collected from two Iranian university hospitals between March 2020 and January 2021. All images are 512 × 512 pixels stored in DICOM format.
The most used LUS dataset is the POCUS dataset. It was published in May 2020 by Born et al. [147]. It contained 1103 images extracted from 64 videos divided into three groups: 654 COVID-19, 172 healthy, and 277 bacterial pneumonia. In 2021, a new version of the POCUS dataset was released by Born et al. [162]; this updated dataset contained 202 videos of COVID-19, healthy, bacterial, and viral pneumonia cases.
In March 2020, Soldati et al. [163] suggested a 4-level scoring system and a globally defined acquisition technique of LUS for COVID-19 patients. They revealed 30 COVID-19-positive instances in an online database called ICLUS-DB that contained over 60,000 frames. Roy et al. [145] extended this version of ICLUS-DB to 277 videos from 35 patients.
Finally, COVIDX-US dataset was released by Ebadi et al. [164] in March 2021. This dataset contained 12,943 frames from 150 videos. The images of the COVIDX-US dataset were divided into four classes: COVID-19, non-COVID-19, healthy, and other lung diseases.
It is important to mention that various studies used different names for the same dataset. For example, some studies have referred to the COVID-19 Image Data Collection as the Montreal Database. Other datasets are not publicly available, such as the dataset used in [165].
Table 8
Summary of COVID-19 datasets used in the reviewed researches
References
Name
Brief description
Type
COVID-19 samples
[109]
COVID-19 Image Data Collection
Publicly available CXR COVID-19 with continuing update
X-Ray
315
[159]
COVID-19 Radiology Database
Publicly and ongoing updated dataset with more than 21,000 X-Ray images
X-Ray
3616
[91]
COVIDX dataset
Open access database with more than 13,975 CXR images
X-Ray
358
[84]
CoroNet dataset
Open access database containing 1251 CXR images belongs to 4 classes
X-Ray
284
[74]
X-Ray images dataset (DarkNet)
Publicly available CXR dataset belongs to three classes containing 1627 images
X-Ray
127
[98, 166]
Chest X-Ray 8 Chest X-Ray 14
Open access dataset which also called RSNA pneumonia detection challenge data or NIH CXR dataset, containing 108,948 frontal views CXR images
X-Ray
108,948 frontal-view images
[157]
Radiopaedia. org
Open access website that enables to share radiography
X-Ray
20
[158]
SIRM COVID-19 Database
Publicly available dataset collected by Italian Society of Medical and Interventional Radiology (SIRM)
X-Ray
68
[167]
Augmented COVID-19 X-ray Images Dataset
Open access dataset containing 1,824 CXR images belongs to two classes
X-Ray
912
[160, 168, 169]
Labeled Optical Coherence Tomography (LOCT) Chest X-Ray Images (Pneumonia)
Open access chest X-ray images for classifying pneumonia belongs to three classes containing 5863 images
X-Ray
0
[114]
COVID-CT-Dataset
COVID-19 dataset containing 349 CT images from 219 patients
CT
349
[116]
deCOVnet dataset
Publicly available dataset containing 2000 CT images belonging to two classes
CT
313
[117]
COVNET dataset
Open access dataset includes 4,352 CT images belongs to three classes
CT
1296
[170]
CORONACASES
Medical CT images for ten patients
CT
10
[171]
EURORAD.ORG
CT images for more than 10 COVID-19 patients
CT
10
[133]
Wangshuocas COVID-19
Open access dataset belongs to five classes containing 5372 CT images
CT
871
[161]
CT COVID-19 images
Containing more than 1000 CT COVID-19 images
CT
1000
[147]
POCUS dataset
Open access and ongoing dataset with 64 videos resulting 1103 images divided into three classes
LUS
39 videos 654 images
[162]
Enlarged POCUS dataset
Updating version of POCUS dataset containing 202 videos belongs to four classes
LUS
 
[163]
ICLUS-DB
LUS open access dataset includes 60,000 frames
LUS
30 cases
[145]
Extended ICLUS-DB
Updated version of ICLUS-DB extended to 277 videos from 35 patients
LUS
17 cases
[164]
COVIDx-US
Publicly available dataset containing 12,943 frames from 150 videos, divided into four classes
LUS
59 videos

9 Discussion

Based on the data and conclusions presented in the more than 100 explored articles, the findings of the primary search outlined in this paper are presented in this section. The following is a summary of these findings:
Sub-RQ1: What are the main approaches for COVID-19 detection?
As shown in Fig. 3, there are three primary methods for COVID-19 detection: blood tests, virus tests, and analyses of various imaging modalities such as X-Ray, CT scans, and ultrasound (US). The blood test is done to find out whether any SARS-COV-2 antibodies are present. The sensitivity of blood analysis varies from 2% to 3% for the detection of SARS-COV-2. RDT and RT-PCR are the two methods used for the virus test. RDT is used to identify antibodies and can provide a speedy answer in around 30 min. Nevertheless, it is not advised for COVID-19 detection, as its accuracy depends on the sample's quality and it is unable to differentiate between COVID-19 and other viral pneumonia. RT-PCR is considered more reliable than RDT for detecting COVID-19, according to research [31], but it has some restrictions: the process is expensive and time-consuming, and it has lower COVID-19 detection sensitivity than imaging modalities, as its sensitivity ranges from 50 to 62% [33]. Some studies, like [141], compared CT and RT-PCR to determine which method detects COVID-19 with the greatest accuracy. The research studied 167 patients. Despite CT imaging indicating that these patients were positive for COVID-19, the RT-PCR test revealed that 3% of them were negative. After a few days, the results revealed that RT-PCR was less sensitive than CT scans for the detection of COVID-19.
Sub-RQ2: Which imaging modalities gives more accurate results? And what are advantages and disadvantages of each modality?
The most effective technique to get quick and precise findings for COVID-19 detection is imaging. The use of X-ray imaging for COVID-19 detection is encouraged because of its many benefits, including greater accessibility and lower cost compared to other imaging modalities. Additionally, X-ray image acquisition uses less radiation than CT scan acquisition. As a result, it is used to identify several diseases, including lung cancer and cardiac conditions, and its usage has become increasingly common, particularly in developing nations. On the other hand, the quality of CT scan images is superior to that of X-ray scans; consequently, the diagnosis outcomes from CT scan images are more accurate [35]. However, there are significant drawbacks of CT scans, including their high cost and the exposure of patients to more radiation. Regarding US images, Moore and Gardiner [148] and Khalili et al. [149] discussed the importance and main advantages of US images. The results showed that LUS images are less sensitive than CT and X-ray images for COVID-19 detection. However, since US uses no radiation and has a lower cost, it can be used in Intensive Care Units (ICUs) as a helper tool for tracking various lung problems.
Sub-RQ3 How can COVID-19 be detected using AI and what AI tools are used in this detection?
For detecting COVID-19 from X-Ray, CT, or US images, both machine learning and deep learning algorithms have been used. Some studies used multiple machine learning algorithms, such as KNN, SVM, and DT, for detecting COVID-19 through either X-Ray or CT, like Abbasian Ardakani et al. [124] and Afify et al. [125]. Other studies used transfer learning techniques, such as Xception, VGG, and Inception, for better COVID-19 detection performance, like Catak et al. [71]. Many studies used various DL or ML classifiers and compared their performance in detecting COVID-19, like Jain et al. [151] and Abbasian Ardakani et al. [124]. Other researchers, such as Mete et al. [130], applied an ensemble method for detecting COVID-19 by using different DL algorithms, such as VGG16, VGG19, ResNet50, and Xception, as feature extractors and then fed these features to classical ML algorithms, such as SVM, DT, and NB, for classification. Gour et al. [155] proposed an ensemble model based on VGG19 and Xception for detecting COVID-19 from CT and X-Ray images with both binary and multi-class classification. Researchers like Boulila et al. [110] applied AI technology to build a secure system for persistent patient monitoring, enabled by inexpensive wireless devices and a chaos-based substitution-box encryption algorithm, to track daily activities and identify natural and unusual breathing rates. Tables 3, 4, 5, and 7 show the different AI techniques used for COVID-19 detection.
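The "DL feature extractor plus classical ML classifier" pipeline described above can be sketched end to end with synthetic vectors standing in for CNN embeddings. In this hedged illustration, a simple k-nearest-neighbours vote plays the role of the SVM/DT/NB stage; all feature values and class sizes are made up:

```python
import numpy as np

def knn_predict(train_feats, train_labels, test_feats, k=3):
    """Classify each test vector by majority vote of its k nearest
    training vectors (Euclidean distance), standing in for the
    classical-ML stage applied to CNN-extracted features."""
    preds = []
    for f in test_feats:
        d = np.linalg.norm(train_feats - f, axis=1)   # distance to every train vector
        nearest = train_labels[np.argsort(d)[:k]]     # labels of the k closest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])         # majority vote
    return np.array(preds)

# Synthetic 'deep features': two well-separated 8-D clusters
rng = np.random.default_rng(1)
covid_feats = rng.normal(loc=0.0, scale=0.3, size=(30, 8))
normal_feats = rng.normal(loc=3.0, scale=0.3, size=(30, 8))
X_train = np.vstack([covid_feats[:20], normal_feats[:20]])
y_train = np.array([1] * 20 + [0] * 20)
X_test = np.vstack([covid_feats[20:], normal_feats[20:]])
y_test = np.array([1] * 10 + [0] * 10)
acc = (knn_predict(X_train, y_train, X_test) == y_test).mean()
```

In the surveyed studies, `X_train` would instead be the penultimate-layer activations of a pre-trained network such as ResNet50, which is why the classifier stage can stay simple.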
Sub-RQ4 What are software tools and datasets used in building predictive COVID-19 detection model?
The development tools that have been used in creating AI COVID-19 detection models include the Matlab software and the Python programming language. Python has been the most common language for developing AI models. One reason is that Python has plenty of libraries, such as NumPy, Scikit-learn, TensorFlow, PyTorch, and Keras. Using these libraries eases the process of developing different ML and DL models, especially when using development environments such as Google Colab or Kaggle notebooks.
To build efficient AI models, high quality datasets have to be available. Fortunately, multiple datasets of different imaging modalities that can be used to build such models are available. More details about these publicly-available datasets are provided in Sect. 8 and are summarized in Table 8.
Sub-RQ5 How can DL provide a great weapon for fighting COVID-19 and what are the challenges it faced?
Previous studies showed that DL techniques can provide great tools for detecting COVID-19. However, there are some challenges that can affect the accuracy of COVID-19 detection. The first challenge is the class imbalance problem that results from the limited number of COVID-19 images compared to other pneumonia and healthy images. To solve this problem, researchers like Rajaraman and Antani [172] used data augmentation to increase the number of limited images by applying different transformations, such as translation and rotation, to these images. Other researchers, like Ucer and Korkmaz [95], used the SMOTE technique. There are other techniques that can mitigate this problem, such as class-weighted entropy, cost-sensitive learning, and using an equal sample of each class. Researchers should choose the solutions that best fit their data.
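One of the remedies mentioned above, cost-sensitive learning, reweights the loss by inverse class frequency so that the scarce COVID-19 class is not drowned out by the majority classes. A minimal sketch of the standard balanced-weight formula (the class counts below are purely illustrative):

```python
import numpy as np

def balanced_class_weights(labels):
    """Inverse-frequency weights: w_c = n_samples / (n_classes * n_c),
    so the minority class contributes proportionally more to the loss."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# Hypothetical imbalance: 900 'normal' images vs 100 'COVID-19' images
y = np.array([0] * 900 + [1] * 100)
w = balanced_class_weights(y)   # w[0] ≈ 0.556, w[1] = 5.0
```

Frameworks like Keras and PyTorch accept such a dictionary (or the equivalent tensor) directly as per-class loss weights, which makes this the lightest-weight alternative to oversampling techniques such as SMOTE.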
The second challenge that DL models can face is the confidence of the model results; high model scores do not ensure high certainty [173]. According to Ucer and Korkmaz [95], if the model produces results with a high level of uncertainty, it is recommended that human involvement be used to further investigate the results. Ghoshal and Tucker [174] investigated a Bayesian Convolutional Neural Network (BCNN) for calculating the uncertainty in DL models. The developed DL models provided a high or low level of output certainty based on the COVID-19 X-ray input, with accuracy ranging from 86.02 to 89.82%. Therefore, the accuracy of prediction is significantly connected to the degree of uncertainty. In order to increase the level of trust in AI technology and to improve the process of disease diagnosis and treatment, more studies should consider the uncertainty problem in their models' predictions.
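Monte-Carlo dropout is one practical approximation to the Bayesian idea behind BCNNs: dropout is kept active at prediction time, the input is passed through the network many times, and the spread of the outputs is read as uncertainty. A toy NumPy sketch on a single linear "layer" (the features and weights are made up; a real BCNN applies stochastic passes through a full deep network):

```python
import numpy as np

def mc_dropout_predict(features, weights, n_passes=100, p_drop=0.5, seed=0):
    """Run a toy linear classifier repeatedly with random dropout masks
    kept ON at test time; the mean of the sigmoid outputs is the
    prediction and their standard deviation is a simple uncertainty
    estimate, in the spirit of MC-dropout Bayesian approximation."""
    rng = np.random.default_rng(seed)
    probs = []
    for _ in range(n_passes):
        mask = rng.random(weights.shape) >= p_drop          # drop ~half the weights
        logits = features @ (weights * mask) / (1.0 - p_drop)  # rescale to keep expectation
        probs.append(1.0 / (1.0 + np.exp(-logits)))         # sigmoid
    probs = np.array(probs)
    return probs.mean(axis=0), probs.std(axis=0)            # prediction, uncertainty

# Hypothetical 4-D feature vector and weight vector
feats = np.array([[2.0, -1.0, 0.5, 0.0]])
w = np.array([1.5, -0.5, 2.0, 0.1])
mean_p, std_p = mc_dropout_predict(feats, w)
```

A large `std_p` flags a case that should be escalated to a radiologist rather than trusted automatically, which is exactly the human-in-the-loop policy the surveyed studies recommend.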
The third challenge is sample overlap. This overlap arises because many researchers collected their datasets from several online resources; as a result, the same image could appear in both the training and the testing phases. One way to deal with this issue is to perform image similarity analysis to find the images that are duplicated between the training and the testing datasets and remove this duplication. This helps in both reducing overfitting and preventing data sample overuse. Moreover, COVID-19 datasets whose main properties are annotated by radiologists would be even more valuable for developing DL models, as such datasets can improve DL models' predictions and make them more acceptable to physicians in the diagnosis process.
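One simple form of such image similarity analysis is perceptual hashing. The sketch below compares average hashes of grayscale arrays to flag near-duplicates; the hash size, noise level, bit threshold, and synthetic images are illustrative assumptions, not a method taken from the cited studies:

```python
import numpy as np

def average_hash(img, size=8):
    """Block-average the image down to size x size and threshold at the
    mean; near-duplicate images yield almost identical bit patterns."""
    h, w = img.shape
    img = img[: h - h % size, : w - w % size]            # crop to a multiple
    blocks = img.reshape(size, img.shape[0] // size,
                         size, img.shape[1] // size).mean(axis=(1, 3))
    return blocks > blocks.mean()                        # boolean bit grid

def is_duplicate(a, b, max_bits=3):
    """Count two images as duplicates if their 64-bit hashes differ in
    at most max_bits positions (robust to small pixel noise)."""
    return np.count_nonzero(average_hash(a) != average_hash(b)) <= max_bits

rng = np.random.default_rng(1)
train = [rng.random((64, 64)) for _ in range(5)]
near_copy = train[2] + rng.normal(0, 1e-3, (64, 64))     # leaked duplicate
fresh = rng.random((64, 64))                             # genuinely new image

leaked = [i for i, im in enumerate(train) if is_duplicate(im, near_copy)]
```

Running this check between the training and testing splits before training identifies leaked images so that they can be removed from one of the splits.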
The fourth challenge is disease severity. Analysis of COVID-19 images may aid in identifying disease progression and the areas that require immediate assistance. These problems necessitate greater medical engagement in all phases of the development, evaluation, and validation of DL models. For example, the model in [175] could track disease progression and predict from the extracted features whether a patient's condition would worsen.
As shown in the previous studies, most researchers used transfer learning techniques for detecting COVID-19. Some studies used models pre-trained on the ImageNet dataset, such as [84] and [122], while others used models pre-trained on other large image datasets, such as [176]. Although many great efforts have been made to face the spread of COVID-19 and to detect this disease, a number of future directions should be pursued to provide better COVID-19 detection performance. These main future directions include:
1: Building predictive models, besides the detection models, to predict whether individuals will be infected with COVID-19 based on their current locations, their jobs, and the people they contact.
2: Providing more accessible COVID-19 datasets with high-quality images in order to develop models with better performance.
3: Enlarging the publicly-available COVID-19 datasets, most of which contain only small numbers of COVID-19 samples, to help researchers build more accurate detection models.
4: Investigating the selection of suitable neural network architectures for detecting COVID-19. Many studies have used transfer learning to develop COVID-19 detection models; most of these models were pre-trained on the ImageNet dataset, such as [84] and [122], while other models, such as [176] and [83], were pre-trained on different datasets, such as Chest X-ray 14 and iNat2021.

10 Conclusion

This paper presented a comprehensive survey of COVID-19 sources, symptoms, and detection, and of how AI can be used to stand against its spread. It discussed the coronavirus families and their subgroups, the sources and symptoms of COVID-19, and how the virus was transmitted from animals to humans. The paper presented the different DL approaches used in COVID-19 detection through different modalities, namely X-Ray, CT, and US, and provided a comparison between them. It reviewed and compared the DL algorithms that can be used in COVID-19 detection and highlighted their advantages and limitations in order to facilitate future developments in this area. It also highlighted the main features of each imaging modality in detecting COVID-19, discussed the most frequently used COVID-19 datasets, and provided details about each dataset. Moreover, it showed that, to date, there is no definitive treatment for COVID-19. Therefore, future research on COVID-19 detection should continue, in order to uncover all the details of this disease and help in fighting it.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
2. Harapan, H., Itoh, N., Yufika, A., Winardi, W., Keam, S., Te, H., Megawati, D., Hayati, Z., Wagner, A.L., Mudatsir, M.: Coronavirus disease 2019 (COVID-19): a literature review. J. Infect. Public Health 13(5), 667–673 (2020)
3. Hui, D.S., Azhar, E.I., Madani, T.A., Ntoumi, F., Kock, R., Dar, O., Ippolito, G., Mchugh, T.D., Memish, Z.A., Drosten, C., et al.: The continuing 2019-nCoV epidemic threat of novel coronaviruses to global health: the latest 2019 novel coronavirus outbreak in Wuhan, China. Int. J. Infect. Dis. 91, 264–266 (2020)
4. C.S.G. of the International, et al.: The species severe acute respiratory syndrome-related coronavirus: classifying 2019-nCoV and naming it SARS-CoV-2. Nat. Microbiol. 5(4), 536 (2020)
5. Livingston, E., Bucher, K., Rekito, A.: Coronavirus disease 2019 and influenza 2019–2020. JAMA 323(12), 1122 (2020)
6. Burki, T.K.: Coronavirus in China. Lancet Respir. Med. 8(3), 238 (2020)
8. Chang, L., Yan, Y., Wang, L.: Coronavirus disease 2019: coronaviruses and blood safety. Transfus. Med. Rev. 34(2), 75–80 (2020)
11. Shereen, M.A., Khan, S., Kazmi, A., Bashir, N., Siddique, R.: COVID-19 infection: emergence, transmission, and characteristics of human coronaviruses. J. Adv. Res. 24, 91–98 (2020)
12. Tai, W., He, L., Zhang, X., Pu, J., Voronin, D., Jiang, S., Zhou, Y., Du, L.: Characterization of the receptor-binding domain (RBD) of 2019 novel coronavirus: implication for development of RBD protein as a viral attachment inhibitor and vaccine. Cell. Mol. Immunol. 17(6), 613–620 (2020)
13. York, A.: Novel coronavirus takes flight from bats? Nat. Rev. Microbiol. 18(4), 191 (2020)
14. Rabi, F.A., Al Zoubi, M.S., Kasasbeh, G.A., Salameh, D.M., Al-Nasser, A.D.: SARS-CoV-2 and coronavirus disease 2019: what we know so far. Pathogens 9(3), 231 (2020)
15. Chen, N., Zhou, M., Dong, X., Qu, J., Gong, F., Han, Y., Qiu, Y., Wang, J., Liu, Y., Wei, Y., et al.: Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study. Lancet 395(10223), 507–513 (2020)
16. Zhu, N., Zhang, D., Wang, W., Li, X., Yang, B., Song, J., Zhao, X., Huang, B., Shi, W., Lu, R., et al.: A novel coronavirus from patients with pneumonia in China, 2019. N. Engl. J. Med. (2020)
17. Gralinski, L.E., Menachery, V.D.: Return of the coronavirus: 2019-nCoV. Viruses 12(2), 135 (2020)
18. Hu, B., Guo, H., Zhou, P., Shi, Z.-L.: Characteristics of SARS-CoV-2 and COVID-19. Nat. Rev. Microbiol. 19(3), 141–154 (2021)
19. Bernell, S., Howard, S.W.: Use your words carefully: what is a chronic disease? Front. Public Health 4, 159 (2016)
20. Morand, A., Fabre, A., Minodier, P., Boutin, A., Vanel, N., Bosdure, E., Fournier, P.: COVID-19 virus and children: what do we know? Elsevier (2020)
21. Wei, M., Yuan, J., Liu, Y., Fu, T., Yu, X., Zhang, Z.-J.: Novel coronavirus infection in hospitalized infants under 1 year of age in China. JAMA 323(13), 1313–1314 (2020)
22. Team, E.: The epidemiological characteristics of an outbreak of 2019 novel coronavirus diseases (COVID-19)-China, 2020. China CDC Week. 2(8), 113 (2020)
24. Tian, S., Hu, N., Lou, J., Chen, K., Kang, X., Xiang, Z., Chen, H., Wang, D., Liu, N., Liu, D., et al.: Characteristics of COVID-19 infection in Beijing. J. Infect. 80(4), 401–406 (2020)
25. Jiehao, C., Jin, X., Daojiong, L., Zhi, Y., Lei, X., Zhenghai, Q., Yuehua, Z., Hua, Z., Ran, J., Pengcheng, L., et al.: A case series of children with 2019 novel coronavirus infection: clinical and epidemiological features. Clin. Infect. Dis. 71(6), 1547–1551 (2020)
26. Alghamdi, H., Amoudi, G., Elhag, S., Saeedi, K., Nasser, J.: Deep learning approaches for detecting COVID-19 from chest X-ray images: a survey. IEEE Access (2021)
29. Ying, L., Yue-ping, L., Bo, D., Feifei, R., Yue, W., Jinya, D., Qianchuan, H.: Diagnostic indexes of a rapid IgG/IgM combined antibody test for SARS-CoV-2. MedRxiv (2020)
30. Ruuskanen, O., Lahti, E., Jennings, L.C., Murdoch, D.R.: Viral pneumonia. The Lancet 377(9773), 1264–1275 (2011)
31. Wang, D., Hu, B., Hu, C., Zhu, F., Liu, X., Zhang, J., Wang, B., Xiang, H., Cheng, Z., Xiong, Y., et al.: Clinical characteristics of 138 hospitalized patients with 2019 novel coronavirus-infected pneumonia in Wuhan, China. JAMA 323(11), 1061–1069 (2020)
32. From, T.: Observation: brief research report
33. He, J.-L., Luo, L., Luo, Z.-D., Lyu, J.-X., Ng, M.-Y., Shen, X.-P., Wen, Z.: Diagnostic performance between CT and initial real-time RT-PCR for clinically suspected 2019 coronavirus disease (COVID-19) patients outside Wuhan, China. Respir. Med. 168, 105980 (2020)
34. Speidel, M.A., Wilfley, B.P., Star-Lack, J.M., Heanue, J.A., Van Lysel, M.S.: Scanning-beam digital X-ray (SBDX) technology for interventional and diagnostic cardiac angiography. Med. Phys. 33(8), 2714–2727 (2006)
35. Ai, T., Yang, Z., Hou, H., Zhan, C., Chen, C., Lv, W., Tao, Q., Sun, Z., Xia, L.: Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology 296(2), 32–40 (2020)
36. Jin, C., Chen, W., Cao, Y., Xu, Z., Tan, Z., Zhang, X., Deng, L., Zheng, C., Zhou, J., Shi, H., et al.: Development and evaluation of an artificial intelligence system for COVID-19 diagnosis. Nat. Commun. 11(1), 1–14 (2020)
37. Li, X., Zeng, X., Liu, B., Yu, Y.: COVID-19 infection presenting with CT halo sign. Radiol. Cardiothorac. Imaging 2(1), e200026 (2020)
38. Lin, T.-Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117–2125 (2017)
39. Shi, H., Han, X., Jiang, N., Cao, Y., Alwalid, O., Gu, J., Fan, Y., Zheng, C.: Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: a descriptive study. Lancet Infect. Dis. 20(4), 425–434 (2020)
40. Zhou, S., Wang, Y., Zhu, T., Xia, L., et al.: CT features of coronavirus disease 2019 (COVID-19) pneumonia in 62 patients in Wuhan, China. AJR Am. J. Roentgenol. 214(6), 1287–1294 (2020)
41. Mukherjee, H., Ghosh, S., Dhar, A., Obaidullah, S.M., Santosh, K., Roy, K.: Deep neural network to detect COVID-19: one architecture for both CT scans and chest X-rays. Appl. Intell. 51(5), 2777–2789 (2021)
42. Keele, S., et al.: Guidelines for performing systematic literature reviews in software engineering. Technical report, ver. 2.3, EBSE (2007)
43. Al-Asadi, M.A., Tasdemir, S.: Using artificial intelligence against the phenomenon of fake news: a systematic literature review. In: Combating Fake News with Computational Intelligence Techniques, pp. 39–54 (2022)
44. Zemouri, R., Zerhouni, N., Racoceanu, D.: Deep learning in the biomedical applications: recent and future status. Appl. Sci. 9(8), 1526 (2019)
45. Attallah, O., Sharkas, M.A., Gadelkarim, H.: Deep learning techniques for automatic detection of embryonic neurodevelopmental disorders. Diagnostics 10(1), 27 (2020)
46. Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., Van Der Laak, J.A., Van Ginneken, B., Sánchez, C.I.: A survey on deep learning in medical image analysis. Med. Image Anal. 42, 60–88 (2017)
47. Cunha-Vaz, J.: The blood-retinal barrier in the management of retinal disease: EURETINA award lecture. Ophthalmologica 237(1), 1–10 (2017)
48. Muhammad, K., Khan, S., Del Ser, J., De Albuquerque, V.H.C.: Deep learning for multigrade brain tumor classification in smart healthcare systems: a prospective survey. IEEE Trans. Neural Netw. Learn. Syst. 32(2), 507–522 (2020)
49. Liu, J., Pan, Y., Li, M., Chen, Z., Tang, L., Lu, C., Wang, J.: Applications of deep learning to MRI images: a survey. Big Data Min. Anal. 1(1), 1–18 (2018)
50. Seeböck, P., Orlando, J.I., Schlegl, T., Waldstein, S.M., Bogunović, H., Klimscha, S., Langs, G., Schmidt-Erfurth, U.: Exploiting epistemic uncertainty of anatomy segmentation for anomaly detection in retinal OCT. IEEE Trans. Med. Imaging 39(1), 87–98 (2019)
52. Albawi, S., Mohammed, T.A., Al-Zawi, S.: Understanding of a convolutional neural network. In: 2017 International Conference on Engineering and Technology (ICET), pp. 1–6 (2017). IEEE
53. Al-Asadi, M.A., Tasdemir, S.: Medical image analysis using machine learning techniques: a systematic review. In: Machine Learning and Deep Learning in Efficacy Improvement of Healthcare Systems, pp. 137–153 (2022)
54. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
55. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25 (2012)
56. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)
57. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
58. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
59. Polat, Ö.: Detection of COVID-19 from chest CT images using Xception architecture: a deep transfer learning based approach. Sakarya Univ. J. Sci. 25(3), 813–823 (2021)
60. Liu, X., Jia, Z., Hou, X., Fu, M., Ma, L., Sun, Q.: Real-time marine animal images classification by embedded system based on MobileNet and transfer learning. In: OCEANS 2019-Marseille, pp. 1–5 (2019). IEEE
61. Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H.: MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)
62. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017)
65. Takahashi, R., Matsubara, T., Uehara, K.: RICAP: random image cropping and patching data augmentation for deep CNNs. In: Asian Conference on Machine Learning, pp. 786–798 (2018). PMLR
68. Fei-Fei, L., Deng, J., Li, K.: ImageNet: constructing a large-scale image database. J. Vis. 9(8), 1037 (2009)
69. Li, T., Han, Z., Wei, B., Zheng, Y., Hong, Y., Cong, J.: Robust screening of COVID-19 from chest X-ray via discriminative cost-sensitive learning. arXiv preprint arXiv:2004.12592 (2020)
70. Hemdan, E.E.-D., Shouman, M.A., Karar, M.E.: COVIDX-Net: a framework of deep learning classifiers to diagnose COVID-19 in X-ray images. arXiv preprint arXiv:2003.11055 (2020)
71. Sahinbas, K., Catak, F.O.: Transfer learning-based convolutional neural network for COVID-19 detection with X-ray images. In: Data Science for COVID-19, pp. 451–466. Elsevier (2021)
72. Horry, M.J., Chakraborty, S., Paul, M., Ulhaq, A., Pradhan, B., Saha, M., Shukla, N.: X-ray image based COVID-19 detection using pre-trained deep learning models (2020)
73. Haghanifar, A., Majdabadi, M.M., Choi, Y., Deivalakshmi, S., Ko, S.: COVID-CXNet: detecting COVID-19 in frontal chest X-ray images using deep learning. Multimed. Tools Appl. 1–31 (2022)
74. Ozturk, T., Talo, M., Yildirim, E.A., Baloglu, U.B., Yildirim, O., Acharya, U.R.: Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 121, 103792 (2020)
75. Apostolopoulos, I.D., Mpesiana, T.A.: COVID-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys. Eng. Sci. Med. 43(2), 635–640 (2020)
76. Minaee, S., Kafieh, R., Sonka, M., Yazdani, S., Soufi, G.J.: Deep-COVID: predicting COVID-19 from chest X-ray images using deep transfer learning. Med. Image Anal. 65, 101794 (2020)
77. Narin, A., Kaya, C., Pamuk, Z.: Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Anal. Appl. 24(3), 1207–1220 (2021)
78. Singh, D., Kumar, V., Yadav, V., Kaur, M.: Deep neural network-based screening model for COVID-19-infected patients using chest X-ray images. Int. J. Pattern Recognit. Artif. Intell. 35(03), 2151004 (2021)
79. Wang, X., Tang, L.: An adaptive multi-population differential evolution algorithm for continuous multi-objective optimization. Inf. Sci. 348, 124–141 (2016)
80. Pandit, M.K., Banday, S.A., Naaz, R., Chishti, M.A.: Automatic detection of COVID-19 from chest radiographs using deep learning. Radiography 27(2), 483–489 (2021)
81. Zhang, J., Xie, Y., Pang, G., Liao, Z., Verjans, J., Li, W., Sun, Z., He, J., Li, Y., Shen, C., et al.: Viral pneumonia screening on chest X-rays using confidence-aware anomaly detection. IEEE Trans. Med. Imaging 40(3), 879–890 (2020)
82. Alqudah, A.M., Qazan, S., Alquran, H., Qasmieh, I.A., Alqudah, A.: COVID-19 detection from X-ray images using different artificial intelligence hybrid models. Jordan J. Electr. Eng. 6(2), 168–178 (2020)
83. Hossain, M.B., Iqbal, S.H.S., Islam, M.M., Akhtar, M.N., Sarker, I.H.: Transfer learning with fine-tuned deep CNN ResNet50 model for classifying COVID-19 from chest X-ray images. Inf. Med. Unlocked 30, 100916 (2022)
84. Khan, A.I., Shah, J.L., Bhat, M.M.: CoroNet: a deep neural network for detection and diagnosis of COVID-19 from chest X-ray images. Comput. Methods Programs Biomed. 196, 105581 (2020)
85. Moutounet-Cartan, P.G.: Deep convolutional neural networks to diagnose COVID-19 and other pneumonia diseases from posteroanterior chest X-rays. arXiv preprint arXiv:2005.00845 (2020)
86.
Zurück zum Zitat Pereira, R.M., Bertolini, D., Teixeira, L.O., Silla, C.N., Jr., Costa, Y.M.: Covid-19 identification in chest x-ray images on flat and hierarchical classification scenarios. Comput. Methods Programs Biomed. 194, 105532 (2020)CrossRef Pereira, R.M., Bertolini, D., Teixeira, L.O., Silla, C.N., Jr., Costa, Y.M.: Covid-19 identification in chest x-ray images on flat and hierarchical classification scenarios. Comput. Methods Programs Biomed. 194, 105532 (2020)CrossRef
87.
Zurück zum Zitat Nishio, M., Noguchi, S., Matsuo, H., Murakami, T.: Automatic classification between covid-19 pneumonia, non-covid-19 pneumonia, and the healthy on chest x-ray image: combination of data augmentation methods. Sci. Rep. 10(1), 1–6 (2020)CrossRef Nishio, M., Noguchi, S., Matsuo, H., Murakami, T.: Automatic classification between covid-19 pneumonia, non-covid-19 pneumonia, and the healthy on chest x-ray image: combination of data augmentation methods. Sci. Rep. 10(1), 1–6 (2020)CrossRef
88.
Zurück zum Zitat Rahaman, M.M., Li, C., Yao, Y., Kulwa, F., Rahman, M.A., Wang, Q., Qi, S., Kong, F., Zhu, X., Zhao, X.: Identification of covid-19 samples from chest x-ray images using deep learning: A comparison of transfer learning approaches. J. Xray Sci. Technol. 28(5), 821–839 (2020) Rahaman, M.M., Li, C., Yao, Y., Kulwa, F., Rahman, M.A., Wang, Q., Qi, S., Kong, F., Zhu, X., Zhao, X.: Identification of covid-19 samples from chest x-ray images using deep learning: A comparison of transfer learning approaches. J. Xray Sci. Technol. 28(5), 821–839 (2020)
89.
Zurück zum Zitat Loey, M., Smarandache, F., Khalifa, M.N.E.: Within the lack of chest covid-19 x-ray dataset: a novel detection model based on gan and deep transfer learning. Symmetry 12(4), 651 (2020)CrossRef Loey, M., Smarandache, F., Khalifa, M.N.E.: Within the lack of chest covid-19 x-ray dataset: a novel detection model based on gan and deep transfer learning. Symmetry 12(4), 651 (2020)CrossRef
90.
Zurück zum Zitat Monga, J.S., Champawat, Y.S., Kharb, S.: Transfer learning based multiclass classification for covid-19 detection using chest x-rays. In: Computer Vision and Robotics, pp. 213–227. Springer, (2022) Monga, J.S., Champawat, Y.S., Kharb, S.: Transfer learning based multiclass classification for covid-19 detection using chest x-rays. In: Computer Vision and Robotics, pp. 213–227. Springer, (2022)
91.
Zurück zum Zitat Wang, L., Lin, Z.Q., Wong, A.: Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images. Sci. Rep. 10(1), 1–12 (2020) Wang, L., Lin, Z.Q., Wong, A.: Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images. Sci. Rep. 10(1), 1–12 (2020)
92.
Zurück zum Zitat Asif, S., Wenhui, Y., Jin, H., Jinhai, S.: Classification of covid-19 from chest x-ray images using deep convolutional neural network. In: 2020 IEEE 6th International Conference on Computer and Communications (ICCC), pp. 426–433 (2020). IEEE Asif, S., Wenhui, Y., Jin, H., Jinhai, S.: Classification of covid-19 from chest x-ray images using deep convolutional neural network. In: 2020 IEEE 6th International Conference on Computer and Communications (ICCC), pp. 426–433 (2020). IEEE
93.
Zurück zum Zitat Das, N.N., Kumar, N., Kaur, M., Kumar, V., Singh, D.: Automated deep transfer learning-based approach for detection of covid-19 infection in chest x-rays. Irbm (2020) Das, N.N., Kumar, N., Kaur, M., Kumar, V., Singh, D.: Automated deep transfer learning-based approach for detection of covid-19 infection in chest x-rays. Irbm (2020)
94.
Zurück zum Zitat Afshar, P., Heidarian, S., Naderkhani, F., Oikonomou, A., Plataniotis, K.N., Mohammadi, A.: Covid-caps: A capsule network-based framework for identification of covid-19 cases from x-ray images. Pattern Recogn. Lett. 138, 638–643 (2020)CrossRef Afshar, P., Heidarian, S., Naderkhani, F., Oikonomou, A., Plataniotis, K.N., Mohammadi, A.: Covid-caps: A capsule network-based framework for identification of covid-19 cases from x-ray images. Pattern Recogn. Lett. 138, 638–643 (2020)CrossRef
95.
Zurück zum Zitat Ucar, F., Korkmaz, D.: Covidiagnosis-net: Deep bayes-squeezenet based diagnosis of the coronavirus disease 2019 (covid-19) from x-ray images. Med. Hypotheses 140, 109761 (2020)CrossRef Ucar, F., Korkmaz, D.: Covidiagnosis-net: Deep bayes-squeezenet based diagnosis of the coronavirus disease 2019 (covid-19) from x-ray images. Med. Hypotheses 140, 109761 (2020)CrossRef
96.
Zurück zum Zitat Punn, N.S., Agarwal, S.: Automated diagnosis of covid-19 with limited posteroanterior chest x-ray images using fine-tuned deep neural networks. Appl. Intell. 51(5), 2689–2702 (2021)CrossRef Punn, N.S., Agarwal, S.: Automated diagnosis of covid-19 with limited posteroanterior chest x-ray images using fine-tuned deep neural networks. Appl. Intell. 51(5), 2689–2702 (2021)CrossRef
97.
Zurück zum Zitat Al-Antari, M.A., Hua, C.-H., Bang, J., Lee, S.: Fast deep learning computer-aided diagnosis of covid-19 based on digital chest x-ray images. Appl. Intell. 51(5), 2890–2907 (2021)CrossRef Al-Antari, M.A., Hua, C.-H., Bang, J., Lee, S.: Fast deep learning computer-aided diagnosis of covid-19 based on digital chest x-ray images. Appl. Intell. 51(5), 2890–2907 (2021)CrossRef
98.
Zurück zum Zitat Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., Summers, R.M.: Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2097–2106 (2017) Wang, X., Peng, Y., Lu, L., Lu, Z., Bagheri, M., Summers, R.M.: Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2097–2106 (2017)
99.
Zurück zum Zitat Narayanan, B.N., Hardie, R.C., Krishnaraja, V., Karam, C., Davuluru, V.S.P.: Transfer-to-transfer learning approach for computer aided detection of covid-19 in chest radiographs. AI 1(4), 539–557 (2020)CrossRef Narayanan, B.N., Hardie, R.C., Krishnaraja, V., Karam, C., Davuluru, V.S.P.: Transfer-to-transfer learning approach for computer aided detection of covid-19 in chest radiographs. AI 1(4), 539–557 (2020)CrossRef
100.
Zurück zum Zitat Islam, M.Z., Islam, M.M., Asraf, A.: A combined deep cnn-lstm network for the detection of novel coronavirus (covid-19) using x-ray images. Inf. Med. Unlocked 20, 100412 (2020)CrossRef Islam, M.Z., Islam, M.M., Asraf, A.: A combined deep cnn-lstm network for the detection of novel coronavirus (covid-19) using x-ray images. Inf. Med. Unlocked 20, 100412 (2020)CrossRef
101.
Zurück zum Zitat Rahimzadeh, M., Attar, A.: A modified deep convolutional neural network for detecting covid-19 and pneumonia from chest x-ray images based on the concatenation of xception and resnet50v2. Inf. Med. Unlocked 19, 100360 (2020)CrossRef Rahimzadeh, M., Attar, A.: A modified deep convolutional neural network for detecting covid-19 and pneumonia from chest x-ray images based on the concatenation of xception and resnet50v2. Inf. Med. Unlocked 19, 100360 (2020)CrossRef
102.
Zurück zum Zitat Sethy, P.K., Behera, S.K.: Detection of coronavirus disease (covid-19) based on deep features (2020) Sethy, P.K., Behera, S.K.: Detection of coronavirus disease (covid-19) based on deep features (2020)
103.
Zurück zum Zitat Farooq, M., Hafeez, A.: Covid-resnet: A deep learning framework for screening of covid19 from radiographs. arXiv preprint arXiv:2003.14395 (2020) Farooq, M., Hafeez, A.: Covid-resnet: A deep learning framework for screening of covid19 from radiographs. arXiv preprint arXiv:​2003.​14395 (2020)
104.
Zurück zum Zitat Riahi, A., Elharrouss, O., Al-Maadeed, S.: Bemd-3dcnn-based method for covid-19 detection. Comput. Biol. Med. 142, 105188 (2022)CrossRef Riahi, A., Elharrouss, O., Al-Maadeed, S.: Bemd-3dcnn-based method for covid-19 detection. Comput. Biol. Med. 142, 105188 (2022)CrossRef
105.
Zurück zum Zitat Moujahid, H., Cherradi, B., Al-Sarem, M., Bahatti, L., Eljialy, B.A., Alsaeedi, A., Saeed, F.: Combining cnn and grad-cam for covid-19 disease prediction and visual explanation. Intell. Autom. Soft Comput. 32(2), 723–745 (2021)CrossRef Moujahid, H., Cherradi, B., Al-Sarem, M., Bahatti, L., Eljialy, B.A., Alsaeedi, A., Saeed, F.: Combining cnn and grad-cam for covid-19 disease prediction and visual explanation. Intell. Autom. Soft Comput. 32(2), 723–745 (2021)CrossRef
106.
Zurück zum Zitat El Gannour, O., Hamida, S., Cherradi, B., Al-Sarem, M., Raihani, A., Saeed, F., Hadwan, M.: Concatenation of pre-trained convolutional neural networks for enhanced covid-19 screening using transfer learning technique. Electronics 11(1), 103 (2021)CrossRef El Gannour, O., Hamida, S., Cherradi, B., Al-Sarem, M., Raihani, A., Saeed, F., Hadwan, M.: Concatenation of pre-trained convolutional neural networks for enhanced covid-19 screening using transfer learning technique. Electronics 11(1), 103 (2021)CrossRef
107.
Zurück zum Zitat Maguolo, G., Nanni, L.: A critic evaluation of methods for covid-19 automatic detection from x-ray images. Inf. Fusion 76, 1–7 (2021)CrossRef Maguolo, G., Nanni, L.: A critic evaluation of methods for covid-19 automatic detection from x-ray images. Inf. Fusion 76, 1–7 (2021)CrossRef
108.
Zurück zum Zitat Cohen, J.P., Hashir, M., Brooks, R., Bertrand, H.: On the limits of cross-domain generalization in automated x-ray prediction. In: Medical Imaging with Deep Learning, pp. 136–155 (2020). PMLR Cohen, J.P., Hashir, M., Brooks, R., Bertrand, H.: On the limits of cross-domain generalization in automated x-ray prediction. In: Medical Imaging with Deep Learning, pp. 136–155 (2020). PMLR
110.
Zurück zum Zitat Boulila, W., Shah, S.A., Ahmad, J., Driss, M., Ghandorh, H., Alsaeedi, A., Al-Sarem, M., Saeed, F.: Noninvasive detection of respiratory disorder due to covid-19 at the early stages in Saudi Arabia. Electronics 10(21), 2701 (2021)CrossRef Boulila, W., Shah, S.A., Ahmad, J., Driss, M., Ghandorh, H., Alsaeedi, A., Al-Sarem, M., Saeed, F.: Noninvasive detection of respiratory disorder due to covid-19 at the early stages in Saudi Arabia. Electronics 10(21), 2701 (2021)CrossRef
111.
Zurück zum Zitat Shah, V., Keniya, R., Shridharani, A., Punjabi, M., Shah, J., Mehendale, N.: Diagnosis of covid-19 using ct scan images and deep learning techniques. Emerg. Radiol. 28(3), 497–505 (2021)CrossRef Shah, V., Keniya, R., Shridharani, A., Punjabi, M., Shah, J., Mehendale, N.: Diagnosis of covid-19 using ct scan images and deep learning techniques. Emerg. Radiol. 28(3), 497–505 (2021)CrossRef
112.
Zurück zum Zitat Wang, S., Kang, B., Ma, J., Zeng, X., Xiao, M., Guo, J., Cai, M., Yang, J., Li, Y., Meng, X., et al.: A deep learning algorithm using ct images to screen for corona virus disease (covid-19). Eur. Radiol. 31(8), 6096–6104 (2021)CrossRef Wang, S., Kang, B., Ma, J., Zeng, X., Xiao, M., Guo, J., Cai, M., Yang, J., Li, Y., Meng, X., et al.: A deep learning algorithm using ct images to screen for corona virus disease (covid-19). Eur. Radiol. 31(8), 6096–6104 (2021)CrossRef
113.
Zurück zum Zitat Amyar, A., Modzelewski, R., Li, H., Ruan, S.: Multi-task deep learning based ct imaging analysis for covid-19 pneumonia: Classification and segmentation. Comput. Biol. Med. 126, 104037 (2020)CrossRef Amyar, A., Modzelewski, R., Li, H., Ruan, S.: Multi-task deep learning based ct imaging analysis for covid-19 pneumonia: Classification and segmentation. Comput. Biol. Med. 126, 104037 (2020)CrossRef
114.
Zurück zum Zitat Zhao, J., Zhang, Y., He, X., Xie, P.: Covid-ct-dataset: a ct scan dataset about covid-19. 490. arXiv preprint arXiv:2003.13865 (2020) Zhao, J., Zhang, Y., He, X., Xie, P.: Covid-ct-dataset: a ct scan dataset about covid-19. 490. arXiv preprint arXiv:​2003.​13865 (2020)
115.
Zurück zum Zitat Bai, H., Wang, R., Xiong, Z., Hsieh, B., Chang, K., Halsey, K., Tran, T., Choi, J., Wang, D., Shi, L., et al.: Erratum: Artificial intelligence augmentation of radiologist performance in distinguishing covid-19 from pneumonia of other origin at chest ct (radiology (2020) 296 3 (e156–e165)). Radiology, 225–225 (2021) Bai, H., Wang, R., Xiong, Z., Hsieh, B., Chang, K., Halsey, K., Tran, T., Choi, J., Wang, D., Shi, L., et al.: Erratum: Artificial intelligence augmentation of radiologist performance in distinguishing covid-19 from pneumonia of other origin at chest ct (radiology (2020) 296 3 (e156–e165)). Radiology, 225–225 (2021)
116.
Zurück zum Zitat Wang, X., Deng, X., Fu, Q., Zhou, Q., Feng, J., Ma, H., Liu, W., Zheng, C.: A weakly-supervised framework for covid-19 classification and lesion localization from chest ct. IEEE Trans. Med. Imaging 39(8), 2615–2625 (2020)CrossRef Wang, X., Deng, X., Fu, Q., Zhou, Q., Feng, J., Ma, H., Liu, W., Zheng, C.: A weakly-supervised framework for covid-19 classification and lesion localization from chest ct. IEEE Trans. Med. Imaging 39(8), 2615–2625 (2020)CrossRef
117.
Zurück zum Zitat Li, L., Qin, L., Xu, Z., Yin, Y., Wang, X., Kong, B., Bai, J., Lu, Y., Fang, Z., Song, Q., et al.: Using artificial intelligence to detect covid-19 and community-acquired pneumonia based on pulmonary ct: evaluation of the diagnostic accuracy. Radiology 296(2), 65–71 (2020)CrossRef Li, L., Qin, L., Xu, Z., Yin, Y., Wang, X., Kong, B., Bai, J., Lu, Y., Fang, Z., Song, Q., et al.: Using artificial intelligence to detect covid-19 and community-acquired pneumonia based on pulmonary ct: evaluation of the diagnostic accuracy. Radiology 296(2), 65–71 (2020)CrossRef
118.
Zurück zum Zitat Do, C., Vu, L.: An approach for recognizing covid-19 cases using convolutional neural networks applied to ct scan images. In: Applications of Digital Image Processing XLIII, vol. 11510, p. 1151034 (2020). International Society for Optics and Photonics Do, C., Vu, L.: An approach for recognizing covid-19 cases using convolutional neural networks applied to ct scan images. In: Applications of Digital Image Processing XLIII, vol. 11510, p. 1151034 (2020). International Society for Optics and Photonics
119.
Zurück zum Zitat Attallah, O., Ragab, D.A., Sharkas, M.: Multi-deep: a novel cad system for coronavirus (covid-19) diagnosis from ct images using multiple convolution neural networks. PeerJ 8, 10086 (2020)CrossRef Attallah, O., Ragab, D.A., Sharkas, M.: Multi-deep: a novel cad system for coronavirus (covid-19) diagnosis from ct images using multiple convolution neural networks. PeerJ 8, 10086 (2020)CrossRef
120.
Zurück zum Zitat Gozes, O., Frid-Adar, M., Greenspan, H., Browning, P.D., Zhang, H., Ji, W., Bernheim, A., Siegel, E.: Rapid ai development cycle for the coronavirus (covid-19) pandemic: Initial results for automated detection & patient monitoring using deep learning ct image analysis. arXiv preprint arXiv:2003.05037 (2020) Gozes, O., Frid-Adar, M., Greenspan, H., Browning, P.D., Zhang, H., Ji, W., Bernheim, A., Siegel, E.: Rapid ai development cycle for the coronavirus (covid-19) pandemic: Initial results for automated detection & patient monitoring using deep learning ct image analysis. arXiv preprint arXiv:​2003.​05037 (2020)
121.
Zurück zum Zitat Shan, F., Gao, Y., Wang, J., Shi, W., Shi, N., Han, M., Xue, Z., Shen, D., Shi, Y.: Lung infection quantification of covid-19 in ct images with deep learning. arXiv preprint arXiv:2003.04655 (2020) Shan, F., Gao, Y., Wang, J., Shi, W., Shi, N., Han, M., Xue, Z., Shen, D., Shi, Y.: Lung infection quantification of covid-19 in ct images with deep learning. arXiv preprint arXiv:​2003.​04655 (2020)
122.
Zurück zum Zitat Chen, J., Wu, L., Zhang, J., Zhang, L., Gong, D., Zhao, Y., Chen, Q., Huang, S., Yang, M., Yang, X., et al.: Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography. Sci. Rep. 10(1), 1–11 (2020) Chen, J., Wu, L., Zhang, J., Zhang, L., Gong, D., Zhao, Y., Chen, Q., Huang, S., Yang, M., Yang, X., et al.: Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography. Sci. Rep. 10(1), 1–11 (2020)
123.
Zurück zum Zitat Jin, S., Wang, B., Xu, H., Luo, C., Wei, L., Zhao, W., Hou, X., Ma, W., Xu, Z., Zheng, Z., et al.: Ai-assisted ct imaging analysis for covid-19 screening: Building and deploying a medical ai system in four weeks. MedRxiv (2020) Jin, S., Wang, B., Xu, H., Luo, C., Wei, L., Zhao, W., Hou, X., Ma, W., Xu, Z., Zheng, Z., et al.: Ai-assisted ct imaging analysis for covid-19 screening: Building and deploying a medical ai system in four weeks. MedRxiv (2020)
124.
Zurück zum Zitat Abbasian Ardakani, A., Acharya, U.R., Habibollahi, S., Mohammadi, A.: Covidiag: a clinical cad system to diagnose covid-19 pneumonia based on ct findings. Eur. Radiol. 31(1), 121–130 (2021)CrossRef Abbasian Ardakani, A., Acharya, U.R., Habibollahi, S., Mohammadi, A.: Covidiag: a clinical cad system to diagnose covid-19 pneumonia based on ct findings. Eur. Radiol. 31(1), 121–130 (2021)CrossRef
125.
Zurück zum Zitat Afify, H.M., Darwish, A., Mohammed, K.K., Hassanien, A.E.: An automated cad system of ct chest images for covid-19 based on genetic algorithm and k-nearest neighbor classifier. Ingénierie des Systèmes d Inf. 25(5), 589–594 (2020)CrossRef Afify, H.M., Darwish, A., Mohammed, K.K., Hassanien, A.E.: An automated cad system of ct chest images for covid-19 based on genetic algorithm and k-nearest neighbor classifier. Ingénierie des Systèmes d Inf. 25(5), 589–594 (2020)CrossRef
126.
Zurück zum Zitat Saeedi, A., Saeedi, M., Maghsoudi, A.: A novel and reliable deep learning web-based tool to detect covid-19 infection from chest ct-scan. arXiv preprint arXiv:2006.14419 (2020) Saeedi, A., Saeedi, M., Maghsoudi, A.: A novel and reliable deep learning web-based tool to detect covid-19 infection from chest ct-scan. arXiv preprint arXiv:​2006.​14419 (2020)
127.
Zurück zum Zitat Ardakani, A.A., Kanafi, A.R., Acharya, U.R., Khadem, N., Mohammadi, A.: Application of deep learning technique to manage covid-19 in routine clinical practice using ct images: Results of 10 convolutional neural networks. Comput. Biol. Med. 121, 103795 (2020)CrossRef Ardakani, A.A., Kanafi, A.R., Acharya, U.R., Khadem, N., Mohammadi, A.: Application of deep learning technique to manage covid-19 in routine clinical practice using ct images: Results of 10 convolutional neural networks. Comput. Biol. Med. 121, 103795 (2020)CrossRef
128.
Zurück zum Zitat Acar, E., Öztoprak, B., Reşorlu, M., Daş, M., Yılmaz, İ., Öztoprak, İ.: Efficiency of artificial intelligence in detecting covid-19 pneumonia and other pneumonia causes by quantum fourier transform method. medRxiv, 2020–12 (2021) Acar, E., Öztoprak, B., Reşorlu, M., Daş, M., Yılmaz, İ., Öztoprak, İ.: Efficiency of artificial intelligence in detecting covid-19 pneumonia and other pneumonia causes by quantum fourier transform method. medRxiv, 2020–12 (2021)
129.
Zurück zum Zitat Swapnarekha, H., Behera, H.S., Nayak, J., Naik, B.: Deep densenet and resnet approach for covid-19 prognosis: Experiments on real ct images. In: Computational Intelligence in Pattern Recognition, pp. 731–747. Springer, (2022) Swapnarekha, H., Behera, H.S., Nayak, J., Naik, B.: Deep densenet and resnet approach for covid-19 prognosis: Experiments on real ct images. In: Computational Intelligence in Pattern Recognition, pp. 731–747. Springer, (2022)
130.
Zurück zum Zitat Oğuz, Ç., Yağanoğlu, M.: Detection of covid-19 using deep learning techniques and classification methods. Inf. Process. Manag. 59(5), 103025 (2022)CrossRef Oğuz, Ç., Yağanoğlu, M.: Detection of covid-19 using deep learning techniques and classification methods. Inf. Process. Manag. 59(5), 103025 (2022)CrossRef
131.
Zurück zum Zitat Kogilavani, S., Prabhu, J., Sandhiya, R., Kumar, M.S., Subramaniam, U., Karthick, A., Muhibbullah, M., Imam, S.B.S.: Covid-19 detection based on lung ct scan using deep learning techniques. Comput. Math. Methods Med. 2022 (2022) Kogilavani, S., Prabhu, J., Sandhiya, R., Kumar, M.S., Subramaniam, U., Karthick, A., Muhibbullah, M., Imam, S.B.S.: Covid-19 detection based on lung ct scan using deep learning techniques. Comput. Math. Methods Med. 2022 (2022)
132.
Zurück zum Zitat Xu, X., Jiang, X., Ma, C., Du, P., Li, X., Lv, S., Yu, L., Ni, Q., Chen, Y., Su, J., et al.: A deep learning system to screen novel coronavirus disease 2019 pneumonia. Engineering 6(10), 1122–1129 (2020)CrossRef Xu, X., Jiang, X., Ma, C., Du, P., Li, X., Lv, S., Yu, L., Ni, Q., Chen, Y., Su, J., et al.: A deep learning system to screen novel coronavirus disease 2019 pneumonia. Engineering 6(10), 1122–1129 (2020)CrossRef
133.
Zurück zum Zitat Wang, S., Zha, Y., Li, W., Wu, Q., Li, X., Niu, M., Wang, M., Qiu, X., Li, H., Yu, H., et al.: A fully automatic deep learning system for covid-19 diagnostic and prognostic analysis. Eur. Respir. J. 56(2) (2020) Wang, S., Zha, Y., Li, W., Wu, Q., Li, X., Niu, M., Wang, M., Qiu, X., Li, H., Yu, H., et al.: A fully automatic deep learning system for covid-19 diagnostic and prognostic analysis. Eur. Respir. J. 56(2) (2020)
134.
Zurück zum Zitat Song, Y., Zheng, S., Li, L., Zhang, X., Zhang, X., Huang, Z., Chen, J., Wang, R., Zhao, H., Chong, Y., et al.: Deep learning enables accurate diagnosis of novel coronavirus (covid-19) with ct images. IEEE/ACM Trans. Comput. Biol. Bioinf. 18(6), 2775–2780 (2021)CrossRef Song, Y., Zheng, S., Li, L., Zhang, X., Zhang, X., Huang, Z., Chen, J., Wang, R., Zhao, H., Chong, Y., et al.: Deep learning enables accurate diagnosis of novel coronavirus (covid-19) with ct images. IEEE/ACM Trans. Comput. Biol. Bioinf. 18(6), 2775–2780 (2021)CrossRef
135.
Zurück zum Zitat Singh, D., Kumar, V., Kaur, M., et al.: Classification of covid-19 patients from chest ct images using multi-objective differential evolution-based convolutional neural networks. Eur. J. Clin. Microbiol. Infect. Dis. 39(7), 1379–1389 (2020)CrossRef Singh, D., Kumar, V., Kaur, M., et al.: Classification of covid-19 patients from chest ct images using multi-objective differential evolution-based convolutional neural networks. Eur. J. Clin. Microbiol. Infect. Dis. 39(7), 1379–1389 (2020)CrossRef
136.
Zurück zum Zitat Chopra, S., Dhiman, G., Sharma, A., Shabaz, M., Shukla, P., Arora, M.: Taxonomy of adaptive neuro-fuzzy inference system in modern engineering sciences. Comput. Intell. Neurosci. 2021 (2021) Chopra, S., Dhiman, G., Sharma, A., Shabaz, M., Shukla, P., Arora, M.: Taxonomy of adaptive neuro-fuzzy inference system in modern engineering sciences. Comput. Intell. Neurosci. 2021 (2021)
137.
Zurück zum Zitat Chellammal, M.R.: Bacterial pneumonia. Res. J. Pharm. Technol. 7(8), 942–945 (2014) Chellammal, M.R.: Bacterial pneumonia. Res. J. Pharm. Technol. 7(8), 942–945 (2014)
138.
Zurück zum Zitat Sharma, S.: Drawing insights from covid-19-infected patients using ct scan images and machine learning techniques: a study on 200 patients. Environ. Sci. Pollut. Res. 27(29), 37155–37163 (2020)CrossRef Sharma, S.: Drawing insights from covid-19-infected patients using ct scan images and machine learning techniques: a study on 200 patients. Environ. Sci. Pollut. Res. 27(29), 37155–37163 (2020)CrossRef
139.
Zurück zum Zitat Zhang, K., Liu, X., Shen, J., Li, Z., Sang, Y., Wu, X., Zha, Y., Liang, W., Wang, C., Wang, K., et al.: Clinically applicable ai system for accurate diagnosis, quantitative measurements, and prognosis of covid-19 pneumonia using computed tomography. Cell 181(6), 1423–1433 (2020)CrossRef Zhang, K., Liu, X., Shen, J., Li, Z., Sang, Y., Wu, X., Zha, Y., Liang, W., Wang, C., Wang, K., et al.: Clinically applicable ai system for accurate diagnosis, quantitative measurements, and prognosis of covid-19 pneumonia using computed tomography. Cell 181(6), 1423–1433 (2020)CrossRef
140.
Zurück zum Zitat Fang, Y., Zhang, H., Xu, Y., Xie, J., Pang, P., Ji, W.: Ct manifestations of two cases of 2019 novel coronavirus (2019-ncov) pneumonia. Radiology (2020) Fang, Y., Zhang, H., Xu, Y., Xie, J., Pang, P., Ji, W.: Ct manifestations of two cases of 2019 novel coronavirus (2019-ncov) pneumonia. Radiology (2020)
141.
Zurück zum Zitat Xie, X., Zhong, Z., Zhao, W., Zheng, C., Wang, F., Liu, J.: Chest ct for typical 2019-ncov pneumonia: relationship to negative rt-pcr testing. Radiology (2020) Xie, X., Zhong, Z., Zhao, W., Zheng, C., Wang, F., Liu, J.: Chest ct for typical 2019-ncov pneumonia: relationship to negative rt-pcr testing. Radiology (2020)
142.
Zurück zum Zitat Bernheim, A., Mei, X., Huang, M., Yang, Y., Fayad, Z.A., Zhang, N., Diao, K., Lin, B., Zhu, X., Li, K., et al.: Chest ct findings in coronavirus disease-19 (covid-19): relationship to duration of infection. Radiology (2020) Bernheim, A., Mei, X., Huang, M., Yang, Y., Fayad, Z.A., Zhang, N., Diao, K., Lin, B., Zhu, X., Li, K., et al.: Chest ct findings in coronavirus disease-19 (covid-19): relationship to duration of infection. Radiology (2020)
143.
Zurück zum Zitat Zhang, H.-T., Zhang, J.-S., Zhang, H.-H., Nan, Y.-D., Zhao, Y., Fu, E.-Q., Xie, Y.-H., Liu, W., Li, W.-P., Zhang, H.-J., et al.: Automated detection and quantification of covid-19 pneumonia: Ct imaging analysis by a deep learning-based software. Eur. J. Nucl. Med. Mol. Imaging 47(11), 2525–2532 (2020)CrossRef Zhang, H.-T., Zhang, J.-S., Zhang, H.-H., Nan, Y.-D., Zhao, Y., Fu, E.-Q., Xie, Y.-H., Liu, W., Li, W.-P., Zhang, H.-J., et al.: Automated detection and quantification of covid-19 pneumonia: Ct imaging analysis by a deep learning-based software. Eur. J. Nucl. Med. Mol. Imaging 47(11), 2525–2532 (2020)CrossRef
144.
Zurück zum Zitat Kumar Singh, V., Abdel-Nasser, M., Pandey, N., Puig, D.: Lunginfseg: Segmenting covid-19 infected regions in lung ct images based on a receptive-field-aware deep learning framework. Diagnostics 11(2), 158 (2021)CrossRef Kumar Singh, V., Abdel-Nasser, M., Pandey, N., Puig, D.: Lunginfseg: Segmenting covid-19 infected regions in lung ct images based on a receptive-field-aware deep learning framework. Diagnostics 11(2), 158 (2021)CrossRef
145.
Zurück zum Zitat Roy, S., Menapace, W., Oei, S., Luijten, B., Fini, E., Saltori, C., Huijben, I., Chennakeshava, N., Mento, F., Sentelli, A., et al.: Deep learning for classification and localization of covid-19 markers in point-of-care lung ultrasound. IEEE Trans. Med. Imaging 39(8), 2676–2687 (2020)CrossRef Roy, S., Menapace, W., Oei, S., Luijten, B., Fini, E., Saltori, C., Huijben, I., Chennakeshava, N., Mento, F., Sentelli, A., et al.: Deep learning for classification and localization of covid-19 markers in point-of-care lung ultrasound. IEEE Trans. Med. Imaging 39(8), 2676–2687 (2020)CrossRef
146.
Zurück zum Zitat Karakuş, O., Anantrasirichai, N., Aguersif, A., Silva, S., Basarab, A., Achim, A.: Line artefact quantification in lung ultrasound images of covid-19 patients via non-convex regularisation (2020) Karakuş, O., Anantrasirichai, N., Aguersif, A., Silva, S., Basarab, A., Achim, A.: Line artefact quantification in lung ultrasound images of covid-19 patients via non-convex regularisation (2020)
147.
Zurück zum Zitat Born, J., Brändle, G., Cossio, M., Disdier, M., Goulet, J., Roulin, J., Wiedemann, N.: Pocovid-net: automatic detection of covid-19 from a new lung ultrasound imaging dataset (pocus). arXiv preprint arXiv:2004.12084 (2020) Born, J., Brändle, G., Cossio, M., Disdier, M., Goulet, J., Roulin, J., Wiedemann, N.: Pocovid-net: automatic detection of covid-19 from a new lung ultrasound imaging dataset (pocus). arXiv preprint arXiv:​2004.​12084 (2020)
148.
Zurück zum Zitat Moore, S., Gardiner, E.: Point of care and intensive care lung ultrasound: a reference guide for practitioners during covid-19. Radiography 26(4), 297–302 (2020)CrossRef Moore, S., Gardiner, E.: Point of care and intensive care lung ultrasound: a reference guide for practitioners during covid-19. Radiography 26(4), 297–302 (2020)CrossRef
149.
Zurück zum Zitat Haseli, S., Iranpour, P., et al.: Lung ultrasound in covid-19 pneumonia: prospects and limitations. Acad. Radiol. 27(7), 1044–1045 (2020)CrossRef Haseli, S., Iranpour, P., et al.: Lung ultrasound in covid-19 pneumonia: prospects and limitations. Acad. Radiol. 27(7), 1044–1045 (2020)CrossRef
150.
Zurück zum Zitat Alom, M.Z., Rahman, M., Nasrin, M.S., Taha, T.M., Asari, V.K.: Covid_mtnet: Covid-19 detection with multi-task deep learning approaches. arXiv preprint arXiv:2004.03747 (2020) Alom, M.Z., Rahman, M., Nasrin, M.S., Taha, T.M., Asari, V.K.: Covid_mtnet: Covid-19 detection with multi-task deep learning approaches. arXiv preprint arXiv:​2004.​03747 (2020)
151.
Zurück zum Zitat Jain, P., Santhanalakshmi, S.: Diagnosing covid-19 and pneumonia from chest ct-scan and x-ray images using deep learning technique. In: 2022 2nd International Conference on Intelligent Technologies (CONIT), pp. 1–9 (2022). IEEE Jain, P., Santhanalakshmi, S.: Diagnosing covid-19 and pneumonia from chest ct-scan and x-ray images using deep learning technique. In: 2022 2nd International Conference on Intelligent Technologies (CONIT), pp. 1–9 (2022). IEEE
152.
Zurück zum Zitat Horry, M.J., Chakraborty, S., Paul, M., Ulhaq, A., Pradhan, B., Saha, M., Shukla, N.: Covid-19 detection through transfer learning using multimodal imaging data. Ieee Access 8, 149808–149824 (2020)CrossRef Horry, M.J., Chakraborty, S., Paul, M., Ulhaq, A., Pradhan, B., Saha, M., Shukla, N.: Covid-19 detection through transfer learning using multimodal imaging data. Ieee Access 8, 149808–149824 (2020)CrossRef
153.
Zurück zum Zitat Panwar, H., Gupta, P., Siddiqui, M.K., Morales-Menendez, R., Bhardwaj, P., Singh, V.: A deep learning and grad-cam based color visualization approach for fast detection of covid-19 cases using chest x-ray and ct-scan images. Chaos Solitons Fractals 140, 110190 (2020)MathSciNetCrossRef Panwar, H., Gupta, P., Siddiqui, M.K., Morales-Menendez, R., Bhardwaj, P., Singh, V.: A deep learning and grad-cam based color visualization approach for fast detection of covid-19 cases using chest x-ray and ct-scan images. Chaos Solitons Fractals 140, 110190 (2020)MathSciNetCrossRef
154.
El Asnaoui, K., Chawki, Y.: Using x-ray images and deep learning for automated detection of coronavirus disease. J. Biomol. Struct. Dyn. 39(10), 3615–3626 (2021)
155.
Gour, M., Jain, S.: Automated covid-19 detection from x-ray and ct images with stacked ensemble convolutional neural network. Biocybern. Biomed. Eng. 42(1), 27–41 (2022)
156.
Sarosh, P., Parah, S.A., Mansur, R.F., Bhat, G.M.: Artificial intelligence for covid-19 detection—a state-of-the-art review. arXiv preprint arXiv:2012.06310 (2020)
159.
Chowdhury, M.E., Rahman, T., Khandakar, A., Mazhar, R., Kadir, M.A., Mahbub, Z.B., Islam, K.R., Khan, M.S., Iqbal, A., Al Emadi, N., et al.: Can ai help in screening viral and covid-19 pneumonia? IEEE Access 8, 132665–132676 (2020)
161.
Shakouri, S., Bakhshali, M.A., Layegh, P., Kiani, B., Masoumi, F., Ataei Nakhaei, S., Mostafavi, S.M.: Covid19-ct-dataset: an open-access chest ct image repository of 1000+ patients with confirmed covid-19 diagnosis. BMC Res. Notes 14(1), 1–3 (2021)
162.
Born, J., Wiedemann, N., Cossio, M., Buhre, C., Brändle, G., Leidermann, K., Aujayeb, A.: L2 Accelerating COVID-19 differential diagnosis with explainable ultrasound image analysis: an AI tool. BMJ Publishing Group Ltd (2021)
163.
Soldati, G., Smargiassi, A., Inchingolo, R., Buonsenso, D., Perrone, T., Briganti, D.F., Perlini, S., Torri, E., Mariani, A., Mossolani, E.E., et al.: Proposal for international standardization of the use of lung ultrasound for patients with covid-19: a simple, quantitative, reproducible method. J. Ultrasound Med. 39(7), 1413–1419 (2020)
164.
Ebadi, A., Xi, P., MacLean, A., Tremblay, S., Kohli, S., Wong, A.: Covidx-us—an open-access benchmark dataset of ultrasound imaging data for ai-driven covid-19 analytics. arXiv preprint arXiv:2103.10003 (2021)
165.
Gomes, J.C., Barbosa, V.A.d.F., Santana, M.A., Bandeira, J., Valença, M.J.S., de Souza, R.E., Ismael, A.M., dos Santos, W.P.: Ikonos: an intelligent tool to support diagnosis of covid-19 by texture analysis of x-ray images. Res. Biomed. Eng. 1–14 (2020)
167.
Alqudah, A.M.: Augmented covid-19 x-ray images dataset (2020)
168.
Kermany, D., Zhang, K., Goldbaum, M., et al.: Labeled optical coherence tomography (oct) and chest x-ray images for classification. Mendeley Data 2(2) (2018)
169.
Kermany, D.S., Goldbaum, M., Cai, W., Valentim, C.C., Liang, H., Baxter, S.L., McKeown, A., Yang, G., Wu, X., Yan, F., et al.: Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 172(5), 1122–1131 (2018)
172.
Rajaraman, S., Antani, S.: Training deep learning algorithms with weakly labeled pneumonia chest x-ray data for covid-19 detection. medRxiv (2020)
173.
Gal, Y., Ghahramani, Z.: Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In: International Conference on Machine Learning, pp. 1050–1059 (2016). PMLR
174.
Ghoshal, B., Tucker, A.: Estimating uncertainty and interpretability in deep learning for coronavirus (covid-19) detection. arXiv preprint arXiv:2003.10769 (2020)
175.
Duchesne, S., Gourdeau, D., Archambault, P., Chartrand-Lefebvre, C., Dieumegarde, L., Forghani, R., Gagne, C., Hains, A., Hornstein, D., Le, H., et al.: Tracking and predicting covid-19 radiological trajectory using deep learning on chest x-rays: Initial accuracy testing. medRxiv (2020)
176.
Mangal, A., Kalia, S., Rajgopal, H., Rangarajan, K., Namboodiri, V., Banerjee, S., Arora, C.: Covidaid: Covid-19 detection using chest x-ray. arXiv preprint arXiv:2004.09803 (2020)
Metadata
Title
Deep Convolutional Neural Networks for Detecting COVID-19 Using Medical Images: A Survey
Authors
Rana Khattab
Islam R. Abdelmaksoud
Samir Abdelrazek
Publication date
04.04.2023
Publisher
Springer Japan
Published in
New Generation Computing / Issue 2/2023
Print ISSN: 0288-3635
Electronic ISSN: 1882-7055
DOI
https://doi.org/10.1007/s00354-023-00213-6