Published in: EURASIP Journal on Wireless Communications and Networking 1/2019

Open Access 01-12-2019 | Research

Small sample-based disease diagnosis model acquisition in medical human-centered computing

Authors: Xueqing Jia, Tao Luo, Sheng Ren, Kehua Guo, Fangfang Li

Published in: EURASIP Journal on Wireless Communications and Networking | Issue 1/2019


Abstract

With the development of wireless communications and networks, human-centered computing (HCC) has attracted considerable attention in recent years throughout the medical field. HCC can provide an effective integration of various medical auxiliary diagnosis models using machine learning algorithms. In medical HCC, deep learning has demonstrated its powerful ability in the field of computer vision. However, image processing based on deep learning usually requires a large amount of labeled data, which demands significant resources because the labeling must be completed by doctors, and it is difficult to collect large amounts of data for some rare diseases. Therefore, how to use deep learning to obtain an effective auxiliary diagnosis model from a small sample or zero sample data set has become an important issue in the study of medical auxiliary diagnosis. We propose an auxiliary diagnosis model acquisition method based on a variational auto-encoder and zero sample augmentation technology, and we design an incremental update training program based on wireless communications and networks to obtain the auxiliary diagnosis model, addressing the difficulty of collecting a large amount of valid data. The experimental results show that the model obtained by this method from a small sample or zero sample data set can effectively diagnose the types of skin diseases, which helps doctors make better judgments.
Abbreviations
Conv
Convolution
KL
Kullback-Leibler

1 Introduction

With the development of wireless communications and networks, human-centered computing (HCC) has recently gained much attention in the medical area. In this field, people (including doctors and patients) produce a large amount of data as a basis for medical auxiliary diagnosis, and HCC provides an effective integration of various auxiliary diagnosis models using machine learning algorithms, significantly improving the efficiency of medical auxiliary diagnosis. Based on intelligent HCC and with the support of wireless communication and network technology, a hospital can develop various auxiliary diagnosis applications to alleviate the shortage of medical resources.
At present, the rise of artificial intelligence has allowed an increasing number of technologies in the medical field to use machines to help people complete some tasks [1]. When humans work with machines in HCC, images are the most commonly used information carrier: they are not only the most direct carrier for human beings to understand things but also the most important carrier for communication with machines. This is especially true in the medical field because images are more intuitive than sound and text. Clinically, doctors usually diagnose diseases by observing a large number of medical images and then combining their own experience to judge the disease type and reach a diagnosis [2]. With an increasing number of outpatients and a lack of professional doctors, doctors' workloads are growing, which may lead to misdiagnosis due to overlooked details or inexperience; thus, patients' diseases cannot be treated effectively [3]. Therefore, providing a smarter auxiliary diagnosis model for doctors has become a very important issue.
In recent years, deep learning has shown great ability in many fields, especially computer vision [4]. Medical image processing based on deep learning has also become a prominent research area in the field of medical auxiliary diagnosis [5]. The medical imaging-assisted diagnostic systems developed by iFlytek, Tencent MIAIS, and Ali Health can assist diagnosis in specific medical fields (CT imaging of the lungs, early screening of esophageal cancer, etc.) [6]. Deep learning methods generally require a large amount of data [7]. However, medical image data involve patient privacy, so hospitals cannot publicize the data [8]. It is therefore difficult to obtain large amounts of data on some rare diseases, and some diseases do not easily attract the attention of patients. Many patients do not go to the hospital because they are busy, they ignore the progression of the disease, or for other reasons, so hospitals are unable to collect a large amount of effective data. For example, skin diseases such as rosacea, acne, and whelk are mostly found on the face, including the nose tip, nose ala, cheek, and forehead [9]. The majority of patients are teenagers, and most have mild infections with mild itching and rough skin; therefore, patients often ignore this kind of disease. However, these dermatoses are closely related to the presence of Demodex [10]. In severe cases, the skin lesions worsen and affect the patient's quality of life.
Given that it is difficult to obtain a large amount of valid data for some diseases, this paper proposes a method of obtaining a disease auxiliary diagnosis model based on a variational auto-encoder and zero sample augmentation technology. This method collects a very small amount of disease data, or even no disease data, from hospitals and generates a large amount of data by using a variational auto-encoder and a zero sample augmentation learning model (ZSAL) to expand the data set. It then uses the generated data set to obtain a discrimination model that identifies the types of diseases. In this process, only a very small amount of collected data is used, and the generated images are used to acquire the auxiliary diagnosis model. The model is mainly intended for the auxiliary diagnosis of dermatologists, providing a reference for doctors' decision-making, producing more accurate diagnosis results, and helping patients obtain timely and effective treatment.
Our key contributions are as follows: (1) This paper proposes a small sample expansion method based on a variational auto-encoder and a zero sample augmentation learning model that can generate a large number of effective samples for a disease from a small sample or even zero samples. (2) We trained the medical auxiliary diagnosis model on the expanded samples, which effectively improved the accuracy of the model. (3) The proposed method is applied to the diagnosis of skin diseases and can help doctors make better judgments.
The remainder of this paper is organized as follows. Section 2 describes the related works. Section 3 describes the work process and key technologies, including the variational auto-encoder, zero sample augmentation learning model, data expansion, and model acquisition methods. Section 4 provides the experimental results. Finally, Section 5 presents the study’s conclusions and outlines future work.

2 Related work

The heavy workload of doctors has led to an increasing number of computer auxiliary diagnosis systems. Many clinical diseases require image recognition to assist doctors in diagnosis, and images have become very important reference information in the diagnostic process. The use of a computer-aided system can greatly improve the convenience of disease diagnosis. Researchers have been studying this for some time and have explored many different methods applied in various directions of the medical field, such as image segmentation [11], boundary detection [12], and feature extraction [13]. Deep learning has developed rapidly and has become the mainstream method in the field of computer vision [14, 15]. Compared with previous approaches, it can automatically learn the deep features of images, which express image information more comprehensively and make better use of the images [16]. Therefore, there are an increasing number of explorations of deep learning in disease auxiliary diagnosis, and its application in the field of medical imaging is becoming increasingly widespread. In 2018, Cell published a cover article about a system for diagnosing eye diseases and pneumonia based on deep learning whose accuracy can rival that of top-level doctors in related specialties [17].
Skin disease auxiliary diagnosis technology based on deep learning can improve the efficiency and accuracy of doctors in diagnosing diseases. Esteva et al. proposed the use of deep convolutional neural networks for the classification of skin lesions [18]. The method uses dermoscopic images, clinical lesion images, and the skin disease labels as input and carries out end-to-end training to obtain a deep convolutional neural network that differentiates keratinocyte carcinoma from benign seborrheic keratosis and malignant melanoma from common nevi. Codella et al. proposed a method for identifying melanoma in dermoscopy images by combining deep learning, sparse coding, and support vector machine learning algorithms [19]. The advantages of the proposed method are unsupervised learning in the field of dermoscopic images and a feature transfer from the field of natural photographs, eliminating the need to label data in the target task in order to learn good features. The feature transfer also allows the system to make an analogy between dermoscopic images and natural images. Codella et al. further proposed a system that combines the latest developments in deep learning with established machine learning methods to create a collection of methods that can segment skin lesions, as well as analyze the detected regions and the surrounding tissues for melanoma [20]. In another study, recurrent neural network layers were integrated into a fully convolutional neural network: the convolutional layers capture local features, while the recurrent layers model the semantic context dependencies in the image, which enhances skin discrimination in complex background situations [21]. An end-to-end network for human skin detection has been developed to improve the accuracy of skin detection in the preprocessing step and to help detect changes in the pigmentation of the skin.
Medical auxiliary diagnosis methods based on deep learning have achieved some excellent results. However, the potential of deep learning in this field has not been fully realized. Most existing studies focus on lethal diseases; there are few studies on common diseases, even though early detection and correct diagnosis are very important for many of them. Additionally, these methods usually require a large amount of valid data. Because some diseases are rare, hospital data are isolated, and some diseases lead to few hospital visits, it is often difficult to collect large amounts of effective data.
To solve the above problems, this paper proposes a small and zero sample-based auxiliary diagnosis model acquisition method. We use a variational auto-encoder and ZSAL to generate medical images to expand the training data set. The generated medical image data set is not a simple copy of the seed data; it contains some new features. Training the medical auxiliary diagnosis model on the expanded data can effectively improve the accuracy of the model. We then use the trained model in the auxiliary diagnosis of skin disease, which can improve the efficiency and accuracy of doctors' diagnoses. This method requires only a small amount of data to effectively acquire the auxiliary diagnosis model, which is important for rare diseases or diseases with small or zero samples.

3 Methods

3.1 Variational auto-encoder and zero sample augmentation learning model

3.1.1 Variational auto-encoder

In research on image classification using deep convolutional neural networks, when medical data samples are difficult to collect in large quantities, data enhancement is usually used to expand the sample set, and training is then combined with transfer learning to obtain the image classification model. However, the data set generated by traditional data enhancement methods is a simple transformation and combination of existing features, with no new features generated. If the seed data set is small, the expanded data set generated by traditional data enhancement is insufficient to train a medical auxiliary diagnosis model.
Generative models can overcome this limitation of traditional data enhancement methods. The data generated by a generative model have a high similarity with the original samples and can expand the sample set for image classification. Compared with images generated by traditional data augmentation techniques, training on data generated by a generative model reduces overfitting and improves the recognition accuracy of the samples. Compared to other generative models, the mean and variance of the hidden variables of the variational auto-encoder can be obtained by neural networks, so its training process is easy to control [22]. In addition, the variational auto-encoder adds a KL (Kullback-Leibler) divergence term that balances the training process of the decoder and improves the quality of the generated data set. It can automatically generate data similar to the sample set, create new samples, and expand the sample set. Its specific structure is shown in Fig. 1.
The structure of the variational auto-encoder includes an encoding process and a decoding process. The encoding part is a mean-variance calculation module, which includes two neural networks: one calculates the mean and the other calculates the variance, and together they define a probability distribution from which the hidden variables are sampled. The encoding part is also called the recognition model. The decoding part maps the probability distribution of the hidden variables back to a probability distribution similar to that of the original data and is called the generation model.
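As an illustration of this encode-sample-decode flow, the following minimal Python/TensorFlow sketch shows how a mean and variance produced by the encoder can be turned into a sampled hidden variable together with a KL divergence term; the latent dimension and helper name are illustrative assumptions and are not taken from the paper.

```python
import tensorflow as tf

LATENT_DIM = 32  # illustrative latent size; the paper does not report this value

def encode_and_sample(encoder_features):
    """Map encoder features to a mean and log-variance, sample a hidden
    variable z with the reparameterization trick, and compute the KL term
    that pulls the distribution toward a standard normal prior."""
    z_mean = tf.keras.layers.Dense(LATENT_DIM)(encoder_features)
    z_log_var = tf.keras.layers.Dense(LATENT_DIM)(encoder_features)
    eps = tf.random.normal(shape=tf.shape(z_mean))      # standard Gaussian noise
    z = z_mean + tf.exp(0.5 * z_log_var) * eps          # sampled hidden variable
    kl = -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var),
                      axis=-1))
    return z, kl

# Example: features for a batch of 4 images, each encoded into a 64-dim vector.
z, kl = encode_and_sample(tf.random.normal([4, 64]))
```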
We input the small medical data set into the constructed variational auto-encoder for generation, expand the small sample data set, and then use the generated data set to obtain a diagnosis model that can be used to assist doctors. In this paper, we assume that we have collected X types of small sample disease data sets. For testing purposes, each small sample data set is divided into two parts: the seed data set Si of size l used for generation and the test data set Ti of size n used to test the model.

3.1.2 Zero sample augmentation learning model

If it is difficult to collect any data from the hospital, ZSAL can solve this problem. In ZSAL, the computer generates an expanded data set in batches according to the basic disease information provided by the expert doctor. The whole implementation process of ZSAL is shown in Fig. 2.
The steps of ZSAL are as follows: (1) A medical image that does not contain lesions is selected by the expert doctor as the background image (BGI). (2) The expert doctor selects the range of the lesion in the BGI. (3) The expert doctor depicts texture features such as the contour of the lesion on the background image. (4) The expert doctor selects a base color from the palette to fill the lesion region. (5) The affine transformation, color gradient filling, diversity transformation, and fuzzy fusion methods are used to generate a zero sample augmentation data set.

3.2 Work process

Medical images with only a small amount of labeled data are collected from hospitals, and the seed data set is then selected. When the collected data set is small, the seed data set is input into the constructed variational auto-encoder for image expansion in order to obtain a medical auxiliary diagnosis model with high precision. If no image data of the disease can be collected, the ZSAL method is used for data expansion. The effective models can then be stored on a cloud server for use by doctors [23].
As shown in Fig. 3, the process of this method consists of four steps: (1) collecting data: in the small sample method, collecting labeled medical image data sets from the hospital and selecting the seed data set; in the zero sample method, collecting background medical images; (2) extending data: in the small sample method, inputting the seed data set into the designed variational auto-encoder to generate images; in the zero sample method, using ZSAL to generate a medical image data set; (3) training the model: using the extended data set from the previous step to train a deep neural network and obtain the medical auxiliary diagnosis model; (4) testing the model: running the trained medical auxiliary diagnosis model on the test set to evaluate its performance.

3.3 Image generation and model acquisition method

3.3.1 Image generation and model acquisition method based on small sample

In order to expand the data, it is necessary to construct an effective variational auto-encoder. There are many choices for multi-layer neural networks, such as multi-layer perceptron, convolutional neural networks, or recurrent neural networks. In this paper, we choose a convolutional neural network to construct a variational auto-encoder model including an encoder and a decoder. The specific construction of the model is shown in Table 1.
Table 1
Variational auto-encoder model construction details

Encoder
  Layer 1: Conv, 16 kernels, kernel size 4, step size 2, ReLU
  Layer 2: Conv, 32 kernels, kernel size 4, step size 2, ReLU
  Layer 3: Conv, 64 kernels, kernel size 4, step size 2, ReLU
  Layer 4: Conv, 128 kernels, kernel size 4, step size 2, ReLU
  Layer 5: Full conn, ReLU
Decoder
  Layer 1: Full conn, ReLU
  Layer 2: Full conn, ReLU
  Layer 3: Conv_trans, 4 kernels, kernel size 4, step size 2, ReLU
  Layer 4: Conv_trans, 4 kernels, kernel size 4, step size 2, ReLU
  Layer 5: Conv_trans, 4 kernels, kernel size 4, step size 1, ReLU
  Layer 6: Conv_trans, 4 kernels, kernel size 4, step size 1, ReLU
  Layer 7: Full conn, Sigmoid
In Table 1, convolution (Conv) denotes a convolutional layer, convolution transpose (Conv_trans) denotes a deconvolution layer, and Full conn denotes a fully connected layer. In the variational auto-encoder model constructed in this paper, the encoder contains four convolutional layers, and the decoder contains four deconvolution layers. The number of kernels in the encoder's convolutional layers doubles at each layer (from 16 to 128). In addition to the four deconvolution layers, the decoder has three fully connected layers. If the encoder and decoder are too shallow, the quality of the generated images will not be high enough; if they are too deep, the execution efficiency of the variational auto-encoder will be too low and it will have difficulty converging. Therefore, we chose the network structure shown in Table 1, which ensures image quality while retaining high execution efficiency.
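As a concrete reading of Table 1, the sketch below builds the encoder and decoder with Keras layers; the input image size, latent dimension, fully connected widths, and the reshape before the transposed convolutions are not specified in the table and are therefore assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE, LATENT_DIM = 64, 32   # assumed; Table 1 does not fix these values

def build_encoder():
    """Encoder of Table 1: four stride-2 convolutions (16 to 128 kernels,
    kernel size 4) followed by a fully connected layer; the mean/variance
    heads are added here as in a standard variational auto-encoder."""
    inp = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
    x = inp
    for filters in (16, 32, 64, 128):
        x = layers.Conv2D(filters, kernel_size=4, strides=2,
                          padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)              # layer 5: full conn
    z_mean = layers.Dense(LATENT_DIM)(x)
    z_log_var = layers.Dense(LATENT_DIM)(x)
    return tf.keras.Model(inp, [z_mean, z_log_var], name="encoder")

def build_decoder():
    """Decoder of Table 1: two fully connected layers, four transposed
    convolutions (4 kernels, kernel size 4, step sizes 2, 2, 1, 1), and a
    final fully connected layer with a sigmoid activation."""
    z = layers.Input(shape=(LATENT_DIM,))
    x = layers.Dense(256, activation="relu")(z)               # layer 1
    x = layers.Dense(8 * 8 * 128, activation="relu")(x)       # layer 2
    x = layers.Reshape((8, 8, 128))(x)
    for stride in (2, 2, 1, 1):                               # layers 3-6
        x = layers.Conv2DTranspose(4, kernel_size=4, strides=stride,
                                   padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    out = layers.Dense(IMG_SIZE * IMG_SIZE * 3, activation="sigmoid")(x)  # layer 7
    out = layers.Reshape((IMG_SIZE, IMG_SIZE, 3))(out)
    return tf.keras.Model(z, out, name="decoder")
```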
After the model is built, image generation can be performed. The specific steps are as follows: (1) Load the training data set, which is the seed data set collected in this paper. (2) Normalize the images to make the image size uniform. (3) Input the normalized data into the encoder of the variational auto-encoder. (4) The encoder maps the input data to the mean and variance. (5) The hidden variable and its probability distribution are obtained from the mean and the variance. (6) Input the hidden variable into the decoder and decode it through a Gaussian distribution to output the data. (7) Train the model with stochastic gradient iterations until clear images are generated. We use the incremental update training method to improve the accuracy of the auxiliary model: if the accuracy of the current model is low, it is improved gradually by increasing the amount of generated data until it becomes stable [24].
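The incremental update idea can be summarized schematically as follows; the generation, training, and evaluation routines are passed in as placeholders, and the step size and stopping tolerance are assumptions rather than values reported in the paper.

```python
def incremental_update(generate_fn, train_fn, evaluate_fn,
                       start_size=100, step=100, max_size=1000, tol=0.005):
    """Schematic incremental update loop: keep enlarging the generated data
    set and retraining until the test accuracy stops improving noticeably."""
    history, prev_acc, size = [], None, start_size
    while size <= max_size:
        data = generate_fn(size)      # generate `size` images with the auto-encoder
        model = train_fn(data)        # retrain the auxiliary diagnosis model
        acc = evaluate_fn(model)      # accuracy on the held-out test set
        history.append((size, acc))
        if prev_acc is not None and acc - prev_acc < tol:
            break                     # accuracy has become stable
        prev_acc, size = acc, size + step
    return history
```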

3.3.2 Image generation and model acquisition method based on zero sample

In the ZSAL method, there are five steps: the morphological operation, affine transformation, color filling, fuzzy fusion, and diversity transformation; after these steps, a zero sample augmentation data set is generated. Firstly, the morphological operation is used to preprocess the contour: we binarize the image of the contour curve, and the binarized image is then subjected to a morphological closing operation [25]. The closing operation smooths the contours of the object, bridging narrow discontinuities and filling slender gaps. Secondly, since the contour depicted by the doctor is relatively simple, the contour is transformed by affine transformation to enrich the diversity of the lesion [26]. The affine transformation mainly includes the translation, rotation, scaling, tilting, and flipping operations. The affine transformation algorithm is shown in Algorithm 1.
In Algorithm 1, by randomly generating the parameters (θ, Δx, Δy), many new contours can be obtained, which realizes the diversity of the lesion. This is the precondition for effective feature extraction in the subsequent classifier, and data augmentation by the affine transformation does not destroy the original features. The time complexity of the whole algorithm is O(N·|CC|), where N is the number of generated transformed contours (CCATs) and |CC| is the number of points on the original contour curve (CC).
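Algorithm 1 itself is not reproduced here; the following NumPy sketch only illustrates the random affine step it describes, applying a random rotation θ and translation (Δx, Δy) to a contour. The rotation about the centroid, the shift range, and the helper name are assumptions; the scaling, tilting, and flipping operations mentioned above could be added in the same way.

```python
import numpy as np

def random_affine_contour(contour, rng=None, max_shift=20.0):
    """Apply a random rotation (theta) and translation (dx, dy) to a contour
    given as an (N, 2) array of (x, y) points."""
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.uniform(0.0, 2.0 * np.pi)
    dx, dy = rng.uniform(-max_shift, max_shift, size=2)
    center = contour.mean(axis=0)                      # rotate around the centroid
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return (contour - center) @ rot.T + center + np.array([dx, dy])

# Example: derive several transformed contours (CCATs) from one contour (CC);
# the overall cost is O(N * |CC|) as stated above.
cc = np.array([[10.0, 10.0], [40.0, 12.0], [35.0, 30.0], [15.0, 28.0]])
ccats = [random_affine_contour(cc) for _ in range(5)]
```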
Thirdly, we propose a color gradient filling method to fill the lesion color. We suppose that Σ is the shape enclosed by the CCAT; along the same line segment, the color changes evenly. After these steps, we obtain a background image and separate images containing the lesion. Fourthly, we design a fuzzy fusion algorithm that allows the fused image to transition naturally at the edge of the lesion: we use the mean color as the fused color within a range of K pixels from the edge of the contour so that the edge of the fused image has some fuzziness. Finally, to enrich the diversity of the background image and enhance the robustness of the model, we add noise and randomly flip and crop the fused image. These operations further improve the diversity of the generated images.
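A simplified sketch of the fusion step is given below: the contour region is filled with a lesion color, and a band of roughly K pixels inside the contour edge is blended with the background so that the edge becomes fuzzy. For brevity it uses a uniform fill and an equal-weight blend, whereas the text above describes a color gradient fill and a mean-color band; the function name and parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def fuzzy_fuse(background, lesion_color, contour, k=5):
    """Fill the contour region with `lesion_color` on `background` (H x W x 3,
    uint8) and soften a band of about k pixels inside the contour edge."""
    fused = background.copy()
    mask = np.zeros(background.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [contour.astype(np.int32)], 255)              # lesion region
    inner = cv2.erode(mask, np.ones((3, 3), np.uint8), iterations=k)
    edge_band = (mask > 0) & (inner == 0)                            # ~k-pixel edge band
    fused[mask > 0] = lesion_color                                   # fill the lesion
    fused[edge_band] = (0.5 * np.asarray(lesion_color)               # blend the edge band
                        + 0.5 * background[edge_band]).astype(background.dtype)
    return fused
```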
After a data set is generated with ZSAL, we construct a zero sample convolutional classifier (ZSCC) and use the data set to train the classifier. Considering the low complexity of the data set, a very deep network structure would not generalize well to the classification of actual images. Therefore, a convolutional network with five convolutional layers is designed to classify the data set. In addition, the images are scaled to the same size as the input of the convolutional network, which also plays a role in data augmentation. In the experiments, the accuracy of the classifier in the actual auxiliary diagnosis shows that this network structure is effective.
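The paper specifies only that the ZSCC has five convolutional layers; a hypothetical Keras sketch of such a network is shown below, with the filter counts, pooling, and dense head chosen as illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_zscc(input_size=128, n_classes=2):
    """Hypothetical zero sample convolutional classifier (ZSCC) with five
    convolutional layers for disease/no-disease classification."""
    model = models.Sequential(name="zscc")
    model.add(tf.keras.Input(shape=(input_size, input_size, 3)))
    for filters in (16, 32, 64, 64, 128):          # five convolutional layers
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(2))
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```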

4 Experimental results

4.1 Data collection and experimental environment configuration

In the experiments of this paper, the variational auto-encoder is used to generate data in the small sample method, while the zero sample method generates data with ZSAL. The small sample model is trained using Google's Inception V3 in combination with a model-based transfer learning method [27], and the zero sample model is trained with the proposed ZSCC. To verify the validity of these models, we chose a skin disease for the experiments. The experimental data are real clinical data from the Third Xiangya Hospital at Central South University.
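For the small sample model, a typical way to combine Inception V3 with model-based transfer learning is sketched below: the ImageNet-pretrained convolutional base is frozen and only a new classification head is trained on the generated images. The head structure and training settings are assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import InceptionV3

def build_small_sample_model(n_classes=2):
    """Transfer learning sketch: frozen Inception V3 base plus a new head."""
    base = InceptionV3(weights="imagenet", include_top=False,
                       input_shape=(299, 299, 3))
    base.trainable = False                          # keep pretrained weights fixed
    inputs = layers.Input(shape=(299, 299, 3))
    x = base(inputs, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```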
Big data technology is widely used in various fields [28], and an increasing number of new technologies are applied to the collection and storage of medical data, such as wireless sensor networks and blockchain technology [29, 30]. In the experiment of the small sample method, there are two kinds of labeled skin disease images, each of which contains a seed data set of 5 images and a test data set of 50 images. One kind consists of laser scanning confocal microscopy images with demodicosis; the other consists of laser scanning confocal microscopy images without demodicosis. In the experiment of the zero sample method, expert doctors provide some prior information, such as a description of the contour of the lesion, which is used to generate data. The seed data sets collected for the small sample method are shown in Fig. 4.
The experimental operating environment is as follows: (1) In the small sample method, the variational auto-encoder is constructed on an Ubuntu Server 16.04 x64 system and trained on an NVIDIA Titan Xp 12G graphics card. The training and testing of the model are completed on a macOS system with a 2.7 GHz Intel Core i5 CPU and 8.00 GB of memory. The training and testing process is based on the deep learning framework TensorFlow and is implemented in Python. (2) In the zero sample method, the experiments are performed on a Windows system with 8 GB of memory and Intel(R) HD Graphics 630. All the zero sample experiments are implemented in Python. The user interface for the expert doctor is designed with PyQt5, and the neural network is built with the Keras framework.
To verify the validity of the proposed methods, this paper mainly tests the effectiveness of the variational auto-encoder and ZSAL, the impact of the generated data under different iteration training times on the small sample model, the accuracy of the ZSAL method, and the influence of the category ratio on the ZSAL.

4.2 The effectiveness of the variational auto-encoder and ZSAL

The seed data sets collected in the small sample method are input into the constructed variational auto-encoder for training, and the generated results are shown in Figs. 5 and 6. The structure, texture, brightness, and features of the generated images are all very close to those of the original data. On the other hand, we mark the corresponding positions of the original data and the generated images with different shapes. As seen in Figs. 5 and 6, some new features appear in the generated data; from Fig. 5, it can be seen that the shape in the generated image differs from that of the original data.
In the zero sample method, the doctor is required to complete the process of selecting the disease background image, selecting the lesion range, depicting the lesion, and selecting the base color of the lesion in the user interface, and then the computer automatically completes the generation of the virtual disease data set by using ZSAL. In this experiment, 600 virtual demodicosis images were generated in batches, including 300 images of disease and 300 images of non-disease. Figure 7 shows the original image and generated image of demodicosis.

4.3 The impact of the generated data under different iteration training times

The first experiment shows that the data generated by the variational auto-encoder achieve good results. Training on generated data of different quality gives the small sample model different discriminatory abilities. Therefore, this experiment verifies the impact of data generated with different numbers of training iterations on the small sample model. First, the seed data sets with and without demodicosis are input into the variational auto-encoder, and a series of generated data sets are obtained by setting different numbers of training iterations. These generated data sets are used to train different auxiliary diagnosis models. When approximately 300 images are generated, the accuracy of the trained model is relatively high and tends to be stable; therefore, in this experiment, the size of each generated data set is set to 300 images. Then, different models are obtained by using the data generated with different numbers of training iterations. Finally, these models are evaluated on the test data sets. The accuracy of the different models is shown in Fig. 8.
As seen from Fig. 8, when the number of training iterations is small, the accuracy of the demodicosis discrimination model trained on the generated data is relatively low, because the quality of the generated images is very low and they do little to improve the model. As the number of training iterations increases, the accuracy of the model obtained from the generated data also improves, because the quality of the generated images progressively increases and they can effectively improve the model. When the number of iterations is 18,500, the accuracy of the model reaches 94%. Therefore, the experimental results show that the data quality obtained with different numbers of training iterations differs, which directly affects the accuracy of the resulting model.

4.4 Multi-clinical patient experiment based on the ZSAL

To further demonstrate the accuracy of the ZSAL method, we conducted multiple clinical patient tests using undiagnosed cases. We used the data of 204 patients examined for demodicosis at the Third Xiangya Hospital at Central South University and selected an image from each patient's medical record as test data.
The doctor, independent from the ZSAL method, diagnosed the 204 images. ZSAL used zero sample augmentation to obtain 600 images, including 300 images of disease and 300 images of non-disease. We trained the ZSCC and tested it on the 204 images of the real data set. The results of the doctor's diagnosis and the results of the ZSAL diagnosis are shown in Fig. 9.
From Fig. 9, we can see the diagnostic results of the doctor and ZSAL, in which the blue squares indicate cases of demodicosis (positive) and the orange squares indicate cases without demodicosis (negative). The squares marked with an X are misclassified cases. We assume that the doctor has 100% accuracy in the diagnosis of demodicosis. Among the 204 actual images, there are 22 false negatives and 11 false positives, which indicates that the ZSAL classifier is more likely to misdiagnose a case of demodicosis as no demodicosis than the reverse. The accuracy is 83.82%, and the Youden index is 0.682. Given that only one background image from the actual data set is used in the ZSAL model, this test result shows that ZSAL achieves a good diagnostic performance.
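The reported figures can be checked with the standard definitions of accuracy and the Youden index (sensitivity + specificity − 1). Since the paper does not report the positive/negative split of the 204 images, the 130/74 split used below is only a hypothetical split consistent with the reported accuracy and Youden index.

```python
def accuracy_and_youden(tp, fn, tn, fp):
    """Accuracy and Youden index from a binary confusion matrix."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return accuracy, sensitivity + specificity - 1.0

# 22 false negatives and 11 false positives among 204 images (reported);
# 130 positives / 74 negatives is a hypothetical split consistent with
# the reported accuracy of 83.82% and Youden index of 0.682.
print(accuracy_and_youden(tp=130 - 22, fn=22, tn=74 - 11, fp=11))
```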

4.5 The influence of the category ratio on the ZSAL

For a binary classification, it is generally considered that the ratio of each category in the training set should be close to 50%, which is most beneficial for the classifier. The so-called category ratio imbalance problem occurs when the number of images in each category is very different. Intuitively, in the case where the total number of images in the training set is the same, the imbalance of the category ratios is unfavorable for the classification. However, in this experiment, since the no demodicosis images are generated from the same background image, they are quite similar. Thus, compared with the no demodicosis images, the demodicosis images are more important for classification.
In this experiment, the number of images in the training set (that is, the number of images generated by ZSAL) is set to 300, 600, and 900, and the ratio of the images containing demodicosis is set from 0.2 to 0.9 in order. The obtained accuracy and Youden index are shown in Fig. 10.
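For reference, the class compositions implied by these settings can be enumerated directly; the snippet below only reproduces the arithmetic of the experimental configurations.

```python
# Training-set compositions for totals of 300/600/900 generated images and
# demodicosis ratios from 0.2 to 0.9, as described above.
for total in (300, 600, 900):
    for ratio in (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9):
        n_disease = int(total * ratio)      # images containing demodicosis
        n_healthy = total - n_disease       # no-demodicosis images
        print(f"total={total:3d}  ratio={ratio:.1f}  "
              f"demodicosis={n_disease:3d}  no demodicosis={n_healthy:3d}")
```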
Interestingly, Fig. 10 confirms this expectation. As the ratio of the demodicosis category increases, the accuracy and the Youden index increase. When the ratio of images containing demodicosis reaches 0.9, the best evaluation indices are obtained. This suggests that it is not necessary to generate too many no demodicosis images and that, when ZSAL is used in the future, generating diseased images as roughly 90% of the total generated images is a good choice.

5 Discussion

Deep learning methods have achieved a number of results in the field of medical auxiliary diagnosis. However, they mostly focus on situations where a large amount of labeled data can be collected; there are relatively few studies on cases where only a small amount of data, or even no data, can be collected. In this paper, we proposed using a variational auto-encoder and the ZSAL method to generate images to expand the training data sets and used a very small number of confocal laser scanning microscopy images of demodicosis for testing. In the tests, the accuracy of the model obtained by the proposed method was high, which shows that the proposed method can effectively obtain an auxiliary diagnosis model when only a small amount of data, or even no data, can be collected. These methods can be used to expand the demodicosis image set and to obtain an auxiliary diagnosis model for the diagnosis of demodicosis.
Our next step is to verify whether this method can be extended to the acquisition of auxiliary diagnosis models for other diseases in different clinical situations. At the same time, we will explore the feasibility of the model in clinical practice by using the obtained model for clinical auxiliary diagnosis.

Acknowledgements

The authors would like to thank the Third Xiangya Hospital for providing the dataset and the National Engineering Laboratory of Medical Big Data Application Technology of Central South University for providing the hardware environment. The zero sample-based disease diagnosis model acquisition in this paper extends the conference paper "ZSAL: Zero Shot Augmentation Learning for Medical Imaging" by Tao Luo, presented at the 13th IEEE International Conference on Big Data Science and Engineering.

Competing interests

The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Literature
1. J. Wang, Y. Gao, W. Liu, W.B. Wu, S.J. Lim, An asynchronous clustering and mobile data gathering schema based on timer mechanism in wireless sensor networks. CMC-Computers Materials & Continua 58, 711–725 (2019)
2. G. Kehua, L. Ting, H. Runhe, et al., DDA: a deep neural network-based cognitive system for IoT-aided dermatosis discrimination. Ad Hoc Netw. 80, 95–103 (2018)
3. K. Guo, D. Liu, T. Li, et al., MADP: an open and scalable medical auxiliary diagnosis platform. Comput. Sci. Eng., 1–1 (2018)
4. Y. Tu, Y. Lin, J. Wang, J.U. Kim, Semi-supervised learning with generative adversarial networks on digital signal modulation classification. Comput. Mater. Continua 55, 243–254 (2018)
5. J. Stoitsis, I. Valavanis, S.G. Mougiakakou, et al., Computer aided diagnosis based on medical image processing and artificial intelligence methods. Nucl. Instrum. Methods Phys. Res. Sect. A (Accelerators, Spectrometers, Detectors and Associated Equipment) 569(2), 591–595 (2006)
6. K. Guo, Y. He, X. Kui, P. Sehdev, T. Chi, R. Zhang, J. Li, LLTO: towards efficient lesion localization based on template occlusion strategy in intelligent diagnosis. Pattern Recognit. Lett. 116, 225–232 (2018)
7. E. De Bezenac, A. Pajot, P. Gallinari, Deep learning for physical processes: incorporating prior scientific knowledge (2017)
8. L. Qi, R. Wang, S. Li, Q. He, X. Xu, C. Hu, Time-aware distributed service recommendation with privacy-preservation. Inf. Sci. 480, 354–364 (2019)
9. V.D.P. Dave, D.S. Kris, S. Philippe, Use of polyethylene glycol in inflammation related topical disorders or diseases and wound healing (2009), pp. 374–375
10. J. Doyen, S. Demoulin, K. Delbecque, et al., Vulvar skin disorders throughout lifetime: about some representative dermatoses. BioMed Res. Int. 2014, 1–6 (2014)
11. F. Xie, A.C. Bovik, Automatic segmentation of dermoscopy images using self-generating neural networks seeded by genetic algorithm. Pattern Recognit. 46, 1012–1019 (2013)
12. S. Aurora et al., Model-based classification methods of global patterns in dermoscopic images. IEEE Trans. Med. Imaging 33, 1137–1147 (2014)
13. A.G. Isasi et al., Melanomas non-invasive diagnosis application based on the ABCD rule and pattern recognition image processing algorithms. Comput. Biol. Med. 41, 742–755 (2011)
14. R.K. Sinha, R. Pandey, R. Pattnaik, Deep learning for computer vision tasks: a review (2018)
15. S. Ren, D.K. Jain, K. Guo, T. Xu, T. Chi, Towards efficient medical lesion image super-resolution based on deep residual networks. Signal Process. Image Commun. 75, 1–10 (2019)
16. Z. Zhang, Y.B. Li, C. Wang, M.Y. Wang, Y. Tu, J. Wang, An ensemble learning method for wireless multimedia device identification. Security and Communication Networks (2018)
17. D.S. Kermany, M. Goldbaum, W. Cai, et al., Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 172(5), 1122–1131.e9 (2018)
18. A. Esteva et al., Dermatologist-level classification of skin cancer with deep neural networks. Nature 542, 115–118 (2017)
19. N. Codella, J. Cai, M. Abedini, et al., Deep learning, sparse coding, and SVM for melanoma recognition in dermoscopy images, in Machine Learning in Medical Imaging (Springer International Publishing, Munich, 2015), pp. 118–126
20. N. Codella et al., Deep learning ensembles for melanoma recognition in dermoscopy images. IBM J. Res. Dev. 61, 5:1–5:15 (2017)
21. S.S. Han et al., Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. J. Invest. Dermatol. 138(7), 1529–1538 (2018)
22. I.J. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., Generative adversarial nets, in International Conference on Neural Information Processing Systems (MIT Press, Boston, 2014), pp. 2672–2680
24. K. Guo, T. Xu, X. Kui, R. Zhang, T. Chi, iFusion: towards efficient intelligence fusion for deep learning from real-time and heterogeneous data. Inf. Fusion 51, 215–223 (2019)
25. M.L. Comer, E.J. Delp, Morphological operations for color image processing. J. Electron. Imaging 8(3), 279–289 (1999)
26. S. Jimbo, A. Maruoka, Expanders obtained from affine transformations, in Proceedings of the Seventeenth Annual ACM Symposium on Theory of Computing (1985), pp. 88–97
27. L. Qi, W. Dou, W. Wang, G. Li, H. Yu, S. Wan, Dynamic mobile crowdsourcing selection for electricity load forecasting. IEEE Access 6, 46926–46937 (2018)
28. Z. Xiaokang, L. Wei, W.K. I-Kai, et al., Academic influence aware and multidimensional network analysis for research collaboration navigation based on scholarly big data. IEEE Trans. Emerg. Top. Comput., 1–1 (2018)
29. J. Wang, Y. Gao, X. Yin, F. Li, H.J. Kim, An enhanced PEGASIS algorithm with mobile sink support for wireless sensor networks. Wirel. Commun. Mob. Comput. (2018)
30. Y.J. Ren, Y.P. Liu, S. Ji, A.K. Sangaiah, J. Wang, Incentive mechanism of data storage based on blockchain for wireless sensor networks. Mob. Inf. Syst. (2018)
