Published in: EURASIP Journal on Wireless Communications and Networking 1/2019

Open Access 01-12-2019 | Research

A Preprocessing Algorithm Based on Heterogeneity Detection for Transmitted Tissue Image

Authors: Chengcheng Zhang, Baoju Zhang, Gang Li, Ling Lin, Cuiping Zhang, Fengjuan Wang

Abstract

In hyperspectral transmission imaging (here mainly transmission breast imaging), the strong scattering of tissue blurs the image and weakens the image signal, which hinders the detection of heterogeneity in tissue. In this paper, we designed a simulation experiment to collect phantom images, and we propose and verify a joint preprocessing algorithm suited to transmission tissue images, combining single-channel frame accumulation with an edge enhancement algorithm. The results show that the PSNR of the phantom image is increased to 57.3 dB and the edges of the phantom image processed by the joint preprocessing algorithm are preserved; the standard deviation is 19.8998 higher than that of the original image, that is, the contrast is greatly improved. In our previous work, when images were fed to an object detection algorithm based on deep learning, the detection accuracy for images processed by this algorithm was higher than for unprocessed images; the mAP reached 99.9%. Therefore, the preprocessing algorithm in this paper provides a highly compatible and simpler preprocessing method for heterogeneity detection in multispectral tissue images, which improves the detection accuracy of heterogeneity to some extent. It may also offer a new way to improve the quality of such multispectral and hyperspectral transmission tissue images.
Notes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Abbreviations
Faster R-CNN
Faster regions with convolutional neural networks features
LED
Light-emitting diode
mAP
Mean average precision
MSE
Mean square error
MSNR
Mean signal-to-noise ratio
PMMA
Polymethyl methacrylate
PSNR
Peak signal-to-noise ratio
RGB
Red, green, and blue
SNR
Signal-to-noise ratio

1 Introduction

Multispectral imaging has been applied to many fields, from biological tissue characterization to environmental monitoring [1–3]. In modern medical testing, with breast tumors occurring more frequently and in younger patients [4], scholars have begun to explore simpler, lower-cost screening methods for breast tissue lesions. Hyperspectral mammography [5] may provide a means for early self-examination of breast tumors, but the strong scattering characteristics of the tissue result in a low signal-to-noise ratio (SNR) and insufficient clarity of the tissue image, which prevents accurate detection of heterogeneity [5] (lesion tissue in the breast that is distinct from normal breast tissue). In general, the study of low-resolution problems in multispectral transmission images can be divided into two aspects: the acquisition end and the preprocessing end. On the acquisition side, Jian et al. [6] designed an adaptive multispectral imaging system to correct the wavefront error of the illumination light and obtain high-resolution images of biological tissue. Rousset et al. [7] demonstrated an adaptive multispectral acquisition scheme based on time-resolved single-pixel imaging, which provides low-cost, high-quality multispectral images. Yang et al. [8] optimized illumination using shape-function signals to increase the dynamic range of multispectral imaging systems based on light-emitting diodes (LEDs), thereby improving the grayscale resolution of multispectral images. Li et al. [9] proposed and demonstrated that multi-wavelength “synergy effects” in LED multispectral images obtained by frequency-division modulation can be used to improve the image quality of each waveband.
In hyperspectral transmission imaging, the frame accumulation technique, which has been successfully applied to various low-light-level image detection devices, is one of the most effective methods for enhancing weak transmission image signals. On the preprocessing side, the main methods for addressing low SNR and low contrast are wavelet-transform filtering, combined space-time-domain filtering, and other classical image denoising methods [10–13]. However, filtering methods may smooth the image and lose edge details. Gang et al. [14] greatly enhanced the SNR of low-light-level transmission images by combining frame accumulation with shaping-signal technology, improving the detection sensitivity of the transmission image. Starting from the preprocessing side, this paper adds image synthesis and an edge enhancement algorithm on top of the frame accumulation of single-channel images, yielding a method that better matches heterogeneity detection in transmitted tissue images and greatly improves image quality.
Aiming at the characteristics of tissue images, this paper designed a simulation experiment to collect multispectral phantom [5] images and derive a joint preprocessing algorithm for multispectral transmission tissue images. The experiment used a liquid and a solid as the phantoms of breast tissue and heterogeneous tissue, respectively. A polymethyl methacrylate (PMMA) [15] container with a transmittance of up to 96% was used to hold the liquid phantom for the first time. We combined frame-accumulated averaging with edge enhancement and image synthesis to obtain the joint preprocessing algorithm, which matches heterogeneity detection better. Finally, the collected multispectral transmission phantom images were processed by this algorithm to obtain high-quality images. The experimental results show that the peak signal-to-noise ratio (PSNR) of the image processed by the joint preprocessing algorithm is increased to 57.3 dB, and the image SNR is improved by 2.60 dB, which is 1.11 dB more than with filtering alone. The SNR and the contrast at the edges are significantly improved. The standard deviation of the processed image is 19.8998 higher than that of the original image. This preprocessing algorithm has also been indirectly verified in our previous work [16]: we used the faster regions with convolutional neural network features (Faster R-CNN) [17] object detection [18] algorithm, based on deep learning, to detect heterogeneity in images processed by the preprocessing algorithm, reaching 99.9% mean average precision (mAP) [19]. Moreover, the detection accuracy for transmission images processed by the algorithm is higher than for those without it. Therefore, we propose a preprocessing algorithm, based on the frame accumulation technique, that is suitable for heterogeneity detection in transmission breast tissue images.
The algorithm plays a good role in locating the edges of heterogeneous tissues, so it may improve the heterogeneity detection accuracy of multispectral and hyperspectral transmission tissue images to some extent.
2 Methods

Self-examination of the human body may be strongly affected by the environment and the individual, and biological tissue has comparatively strong scattering and absorption characteristics, which inevitably leads to weak signals and insufficient image clarity. In addition, we kept the experimental environment at a low light level to reduce the influence of external illumination on the transmission experiment. The frame accumulation technique, which has been successfully applied to various low-light-level image detection devices, is one of the most effective methods for enhancing weak transmission image signals. Hence, this paper uses frame accumulation to improve the image SNR and grayscale resolution and obtain high-precision transmission phantom images.

2.1 Frame accumulation technique

Gray level is at the core of image accuracy and the sensitivity of heterogeneity detection. The higher the grayscale resolution of an image, the richer its information, and the more favorable it is for the classification and analysis of tissues. Frame accumulation can enhance the gray level and the grayscale resolution to some extent [14]. In image processing, multi-frame accumulation adds the gray values of corresponding pixels of two or more frames captured at different times to obtain their time-averaged image, which can multiply the SNR of the image while avoiding the edge loss caused by filtering.
The SNR of a single frame is:
$$ \mathrm{SNR} = x_s^2 / \sigma_n^2 $$
(1)
where \( x_s \) is the image signal, \( x_n \) is the noise, and \( \sigma_n^2 \) is the variance of \( x_n \).
The frame-accumulated SNR is:
$$ \mathrm{SNR}' = \left(\sum\limits_{i=1}^{m} x_{s_i}\right)^2 \bigg/ D\left(\sum\limits_{i=1}^{m} x_{n_i}\right) $$
(2)
In static imaging, the image signal of each frame is the same, \( x_{s_1} = x_{s_2} = \dots = x_{s_m} = x_s \), and the random noise of each frame, \( x_{n_1}, x_{n_2}, \cdots, x_{n_m} \), is independent, so:
$$ D\left(\sum \limits_{i=1}^m{x}_{n_i}\right)=m{\sigma}_n^2 $$
(3)
$$ \mathrm{SNR}' = m^2 x_s^2 / \left(m \sigma_n^2\right) = m\,\mathrm{SNR} $$
(4)
That is, the SNR is m times that of a single frame.
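The m-fold SNR gain can be checked numerically. The sketch below is an illustration we added, not the authors' code; the frame size, noise level, and m are arbitrary assumptions. It averages m noisy frames of a static scene and compares the empirical SNR before and after:

```python
import numpy as np

rng = np.random.default_rng(0)

# Static scene: identical signal in every frame, independent zero-mean
# noise per frame (the assumptions behind Eqs. (1)-(4)).
signal = np.full((64, 64), 100.0)
sigma = 10.0
m = 100  # number of accumulated frames

frames = [signal + rng.normal(0.0, sigma, signal.shape) for _ in range(m)]
accumulated = np.mean(frames, axis=0)  # frame-accumulated average

def snr_db(est, truth):
    """Empirical SNR in dB: signal power over residual-noise variance."""
    return 10 * np.log10(np.mean(truth ** 2) / np.var(est - truth))

gain = snr_db(accumulated, signal) - snr_db(frames[0], signal)
print(f"SNR gain from {m}-frame averaging: {gain:.1f} dB")
```

Averaging m frames divides the noise variance by m, so the expected gain is 10 log10(m), i.e., about 20 dB for m = 100.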

2.2 Laplace enhancement

The edges of the transmission phantom image are blurred, which makes it impossible to locate them accurately and is very unfavorable for subsequent heterogeneity detection. Therefore, we perform edge enhancement on the frame-accumulated image.
The essence of image blurring is that the image has undergone an averaging or integral operation, so it can be restored by an inverse operation. Differential operations highlight image details and make the image clearer; second-order differentiation has stronger edge-localization ability and a better sharpening effect than first-order differentiation. The basic method of using a second-order differential operator is to define a discrete form of the second-order differential and then generate a filter template from this form to convolve with the image. An isotropic filter has a response independent of the direction of abrupt changes in the image, i.e., it is rotation invariant: when the original image is rotated by 90°, the details (mutations) detectable at a point in the original image can still be detected after rotation. The Laplacian is the simplest isotropic differential operator, so this paper uses this second-order linear differential operator to enhance the edges.
For two-dimensional images, the Laplacian is defined as:
$$ {\nabla}^2f=\frac{\partial^2f}{\partial {x}^2}+\frac{\partial^2f}{\partial {y}^2} $$
(5)
This equation is expressed in discrete form to suit digital image processing. The second-order partial differentials in the x and y directions follow from the definition of the digital second-order difference:
$$ \begin{array}{c} \dfrac{\partial^2 f}{\partial x^2} = f(x+1,y) + f(x-1,y) - 2f(x,y) \\[2mm] \dfrac{\partial^2 f}{\partial y^2} = f(x,y+1) + f(x,y-1) - 2f(x,y) \end{array} $$
(6)
Combining these with the definition of the Laplacian according to formulas (5) and (6), the digital realization of the two-dimensional Laplacian is obtained by adding the two components:
$$ \nabla^2 f = \left[f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1)\right] - 4f(x,y) $$
(7)
That is, the Laplacian at a point is the sum of the gray levels of its top, bottom, left, and right neighbors minus four times the gray level of the point itself. Similarly, if all signs are inverted according to the alternative definition of the second-order differential, every gray value in the above formula takes a minus sign and the coefficients become −1, −1, −1, −1, 4. The above describes the four-neighborhood Laplacian operator. Rotating this operator by 45° and adding it to the original operator yields the eight-neighborhood operator; that is, the Laplacian result is the difference between the sum of the eight pixels surrounding a pixel and eight times the middle pixel. For ease of programming, the Laplacian is represented as a mask (template). Figure 1a shows the discrete Laplacian mask defined by Eq. (7), and Fig. 1b shows its extended mask. The Laplacian operator enhances regions of grayscale mutation in an image and suppresses regions where the grayscale changes slowly. Therefore, superimposing the original image with the Laplacian image preserves the sharpening effect while restoring the background information. The Laplacian image is subtracted from the original image if the definition used has a negative center coefficient. The final Laplacian sharpening formula then follows from Eqs. (5)–(7):
$$ g(x,y) = \begin{cases} f(x,y) - \nabla^2 f(x,y), & k < 0 \\ f(x,y) + \nabla^2 f(x,y), & k > 0 \end{cases} $$
(8)
where k is the center coefficient of the Laplacian mask.
For the multispectral tissue images in this experiment, the best results were obtained using the two masks (a) and (b) in Fig. 1, whose center coefficients are k = −4 and k = −8, respectively.
As the form of the mask suggests, the Laplace operation makes a bright spot in a darker area of the image even brighter. This property is very advantageous for the grayscale transition points of low-light-level transmission tissue images, as it makes the mutation points at heterogeneous tissue more prominent. Because the Laplacian amplifies noise while enhancing edges, we apply Gaussian filtering to denoise the image before the Laplace operation. In this paper, we use the extended Laplacian mask of Fig. 1b to sharpen the red, green, and blue (RGB) three-channel color image to enhance edge detail, and then use the mask with center coefficient k = −4 to enhance each single-channel image.
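As a concrete sketch (our illustration, not the paper's implementation; the toy image, border handling, and omission of the Gaussian pre-filter are assumptions), the four-neighborhood mask of Eq. (7) and the k < 0 branch of Eq. (8) can be applied with NumPy alone:

```python
import numpy as np

def laplacian4(img):
    """Four-neighborhood Laplacian of Eq. (7) with replicated borders:
    sum of the four neighbors minus four times the center pixel."""
    p = np.pad(img.astype(float), 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1])

def sharpen(img):
    """Eq. (8) with a negative center coefficient (k = -4): subtract
    the Laplacian image from the original image."""
    return np.clip(img.astype(float) - laplacian4(img), 0, 255)

# Toy example: a bright square on a dark background. Pixels just inside
# the square's edge are pushed brighter, flat regions are unchanged.
img = np.zeros((32, 32))
img[10:22, 10:22] = 128
sharp = sharpen(img)
```

In the paper's pipeline a Gaussian filter precedes this step; it is omitted here for brevity.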

3 Experiment

3.1 Experimental device

Figure 2 depicts a schematic diagram of the experimental equipment from our previous work [14]. The device consists of the following components: an LED light source (0.5 W), the phantom, a mobile phone (model: HUAWEI Mate 9, frame rate: 59 fps, image resolution: 1080 × 1920), a computer (connected to the mobile phone) for image handling, and shading cloth.
According to the absorption spectrum of water, blue light (460 nm) has the strongest transmission in water; therefore, synthetic light combining wavelengths of 460 nm and 560 nm was used in the experiment to transmit the phantom.
Composition of the phantom: the related literature [20] shows that fat emulsion can be regarded as a scattering medium. Fat emulsion consists of soybean oil (mainly esters of higher fatty acids and glycerol), lecithin, glycerin, and water. The main chemical components of milk are 87.5% water, 3.5–4.2% fat (composed of glycerin and fatty acids), 2.8–3.4% protein, 4.6–4.8% lactose, and a small amount of inorganic salt. In view of the similar composition of fat emulsion and milk, and the structural similarity between pork tissue and human breast tissue, we used a 1:3 mixture of milk and water in the experiment and adjusted the concentration within the error tolerance. The mixture serves as a breast tissue simulating fluid, and a piece of raw pork of a certain thickness is suspended in the solution as diseased breast tissue (heterogeneous tissue). We use a flat cuboid PMMA container with a transmittance of up to 96% (the translucency of the breast is higher than that of other human tissues) to hold the tissue simulating fluid.

3.2 Get images

Based on the experimental setup, the phantom images were obtained using the following procedure:
1.
Adjust and fix the distances between the phantom, the mobile phone, and the light source; turn on the light source; cover with the shading cloth; and turn on the camera to record video of the phantom for 10 min.
 
2.
The video from step (1) is transferred to the computer and the frames are extracted; a total of n = 35200 original phantom images were obtained after frames with gross errors were removed.
 
3.
Each extracted image \( x_i \) is separated into RGB channels to obtain three single-channel images \( x_{i,j} \); thus m = 3n single-channel grayscale images are obtained, in preparation for the single-channel frame accumulation in the preprocessing.
 

3.3 Joint preprocessing algorithm

After the original images are separated into three single-channel grayscale images, the single-channel images are cropped, frame-accumulated, denoised and filtered, and synthesized (information fusion) into a full-color image, which then undergoes further edge enhancement.
1.
In this paper, a total of 35200 frames are extracted, each of size 1080 × 1920. Because the storage space required for later processing is large and the image size affects the speed of the post-processing algorithms, we crop away the uncorrelated, noise-laden edge parts of the image without affecting the accuracy or quality of the algorithm. The size is accordingly cut to 500 × 360. Figure 3 shows the schematic of image cropping.
 
2.
Frame-accumulated averaging of the single-channel images: frame accumulation is applied to every 100 frames of each single-channel sequence to improve the image gray scale and increase the SNR. For each group of single-channel images \( x_{i,j} \), frame-accumulated averaging is performed every N frames:
 
$$ \begin{array}{c} \sum\limits_{k=i}^{i+N-1} x_{k,1} = x_{i,1} + x_{i+1,1} + \cdots + x_{i+N-1,1} \\ \sum\limits_{k=i}^{i+N-1} x_{k,2} = x_{i,2} + x_{i+1,2} + \cdots + x_{i+N-1,2} \\ \sum\limits_{k=i}^{i+N-1} x_{k,3} = x_{i,3} + x_{i+1,3} + \cdots + x_{i+N-1,3} \end{array}, \quad 0 < i < n-1,\; N > 1 $$
(9)
The averaged result after accumulation is:
$$ \bar{x}_{t,j} = \frac{1}{N} \sum\limits_{k=i}^{i+N-1} x_{k,j}, \quad t = 1, 2, \dots, \left\lfloor \frac{n}{N} \right\rfloor,\; j = 1, 2, 3 $$
(10)
In this experiment, N is taken as 100. From formulas (9) and (10), 352 × 3 single-channel images \( \bar{x}_{t,j} \) are obtained after frame-accumulated averaging. The grayscale images are then further denoised by a mean filter and a Gaussian filter designed for the phantom image features.
3.
Image synthesis: the single-channel grayscale images \( \bar{x}_{t,j} \) produced by operations (1) and (2) are synthesized into a three-channel full-color image \( x_t \).
 
4.
The Laplacian mask and its extended mask described in Section 2.2 are superimposed on the image to enhance edge detail.
 
Steps (1)–(4) together form a preprocessing method that matches well with heterogeneity detection in transmission tissue images, which we call the joint preprocessing algorithm.
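Steps (2) and (3) can be sketched as follows (our illustrative code, not the authors' implementation; the synthetic input frames and the omission of cropping, filtering, and the Laplacian step are simplifications):

```python
import numpy as np

def accumulate_and_synthesize(frames, N=100):
    """Per-channel N-frame accumulated averaging (Eqs. (9)-(10)) followed
    by recombination of the channel averages into color images.
    `frames` is a list of HxWx3 arrays; the floor(n/N) groups of Eq. (10)
    are formed by dropping any remainder frames."""
    stack = np.stack(frames).astype(float)             # (n, H, W, 3)
    n_used = (stack.shape[0] // N) * N
    groups = stack[:n_used].reshape(n_used // N, N, *stack.shape[1:])
    return groups.mean(axis=1)                         # (n//N, H, W, 3)

# e.g. 350 synthetic frames with N = 100 yield 3 averaged color images
frames = [np.random.randint(0, 256, (36, 50, 3), dtype=np.uint8)
          for _ in range(350)]
out = accumulate_and_synthesize(frames, N=100)
```

Because the averaging is independent per channel, splitting into single-channel images and re-synthesizing afterwards is numerically equivalent to averaging the color frames directly, which the vectorized form above exploits.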
The term “multispectral image” as used here means that we transmitted the phantom with two spectral wavelengths and obtained a dual-band image rather than a single-wavelength image. Although the number of transmission bands does not reach the “10 or more wavelengths” sometimes claimed for the field, it is not single-wavelength transmission. The point is that the preprocessing method studied in this paper is based on dual-wavelength rather than single-wavelength transmission, so the method is likely applicable to image processing with more spectral bands; in that sense, “multispectral” images are the intended target of the preprocessing method.

4 Results and analysis

Comparing the experimental results, we find that the SNR of the phantom image processed by the joint preprocessing algorithm is increased, and the grayscale resolution and edge contrast are improved to some extent.

4.1 Frame accumulation analysis

By calculating the SNRs of the original image and the cropped image, we find that the noise of the cropped image is lower and its SNR is higher. The PSNR of the cropped image reaches 50.96 dB, indicating that cropping removes noise to a certain extent and increases the SNR (Table 1).
Table 1 The SNRs of the original single-channel grayscale image and of single-channel images processed by each preprocessing method

Image | SNR/dB
Mean filtered image | 7.3427
Gaussian filtered image | 7.3637
100 frames of accumulated image | 8.0141
Image processed by the joint preprocessing algorithm | 8.4817
Original image | 5.8794
The frame accumulation stage significantly improves the quality of the multispectral tissue images. Calculation shows that the SNR of the frame-accumulated image is increased by 2.13 dB, which accords with the theoretical derivation in Section 2.1 of this paper.
Following the derivation in Section 2.1, we can also approximate the 3000-frame accumulated image as a ground-truth image and use a no-reference image quality evaluation method [21] to assess the single-channel frame-accumulated images of this experiment. The results in Table 2 show the mean signal-to-noise ratio (MSNR) of the original image and of the frame-accumulated images.
Table 2 The mean signal-to-noise ratios (MSNRs) of the original single-channel image and the N-frame accumulated images [21]

Image | MSNR/dB
Original image | 38.5706
100 frames of accumulated image | 47.6169
1000 frames of accumulated image | 59.0384
Since existing display devices have only 256 gray levels, images with higher gray levels cannot be displayed directly. To show the effect of frame accumulation, we normalized the image and then linearly stretched it to 256 gray levels. Figure 4 compares the effect of frame accumulation with the original image.
As seen in Fig. 4, the gray scale of the single-channel frame-accumulated image is greatly improved, and the gray-level distribution is more uniform and dense. Therefore, frame accumulation increases the gray scale and improves the grayscale resolution.
PSNR is the most widely used objective measure for evaluating image quality, but practical tests show that PSNR results are not entirely consistent with the visual quality perceived by the human eye: images with higher PSNR may look worse than images with lower PSNR. This is because the eye's sensitivity to error is not absolute and perception is affected by many factors, whereas PSNR is based on the error between corresponding pixels, i.e., on error-sensitive image quality evaluation, and does not account for the visual characteristics of the human eye. Let \( f_{ij} \) denote a pixel of the original blurred multispectral image and \( f'_{ij} \) a pixel of the enhanced image, with image size M × N; then:
$$ \mathrm{MSE} = \frac{\sum\limits_{0 \le i < M} \sum\limits_{0 \le j < N} \left(f_{ij} - f'_{ij}\right)^2}{M \times N} $$
(11)
$$ \mathrm{PSNR} = 10 \log_{10} \frac{\left(2^{\mathrm{bits}} - 1\right)^2}{\mathrm{MSE}} $$
(12)
MSE is the mean square error; the larger the PSNR, the smaller the image distortion. In image processing, image quality is generally considered excellent when the PSNR is above 40 dB, good at 30–40 dB (distortion perceptible but acceptable), poor at 20–30 dB, and unacceptable below 20 dB.
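Eqs. (11) and (12) translate directly into code; the sketch below is our illustration (the reference image and noise level are synthetic assumptions):

```python
import numpy as np

def psnr_db(reference, test, bits=8):
    """PSNR per Eqs. (11)-(12) for images with 2**bits gray levels."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10 * np.log10((2 ** bits - 1) ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (100, 100)).astype(float)
noisy = np.clip(ref + rng.normal(0.0, 5.0, ref.shape), 0, 255)
value = psnr_db(ref, noisy)  # roughly mid-30s dB for sigma = 5
```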
Table 3 shows that frame accumulation significantly enhances the single-channel image quality. The PSNR of the frame-accumulated image is higher than that of the filtered images; that is, the image quality after frame accumulation exceeds that after filtering, and its PSNR of 57.3278 dB is well above 40 dB.
Table 3 The PSNRs of the frame-accumulated image, its mean image, and single-channel images processed by conventional filtering methods

Image | PSNR/dB
Mean filtered image | 24.1376
Gaussian filtered image | 24.5873
Mean and Gaussian filtered image | 24.2158
Frame cumulative averaged image | 25.0075
100 frames of accumulated image | 57.3278
In addition, we also applied median filtering, Gaussian filtering, and wavelet-transform denoising to the transmission image; the effects are shown in Figs. 5 and 6.
From the comparison of these figures, the quality improvement of the transmitted phantom images processed by the preprocessing algorithm is significant relative to the other filtering methods.
In addition, we use the standard deviation to represent the contrast of an image; it is an important indicator of the effect of image enhancement. The larger the standard deviation of the image pixel matrix, the higher the contrast of the image. Calculation shows that the standard deviation of the original image in this experiment is 56.9473, and that of the image processed by the preprocessing algorithm is 76.8471, i.e., 19.8998 higher than the original, a large increase in contrast. Figure 7 compares a color phantom image processed by the preprocessing algorithm with the original image.
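As a small sanity check of the standard deviation as a contrast measure (our illustration; the synthetic image and the linear stretch factor are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic low-contrast image: gray values clustered around 128.
original = rng.normal(128.0, 20.0, (64, 64)).clip(0, 255)

# Linear contrast stretch about the mean: spreading the gray values
# increases the pixel standard deviation, i.e., the contrast.
stretched = (1.4 * (original - original.mean()) + original.mean()).clip(0, 255)

print(f"std before: {original.std():.2f}, after: {stretched.std():.2f}")
```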

4.2 Edge detection

We use three edge detection methods to evaluate the effect of the joint preprocessing algorithm:
a. Edge detection is performed using the Prewitt operator and the Sobel operator.
b. We used non-maximum suppression and set the same gradient threshold for the images before and after preprocessing; the edges were then extracted to obtain the set of boundary points, giving the edge detection images in Fig. 8.
The gradient threshold algorithm is as follows:
$$ m(i,j) = \begin{cases} 255, & g(i,j) > p \\ 0, & g(i,j) \le p \end{cases}, \quad i = 0, 1, 2, \cdots, 502,\; j = 0, 1, 2, \cdots, 362,\; p > 0 $$
(13)
where m(i, j) is the gray value at a point of the output image, g(i, j) is the gradient value at that point, and p is the threshold. In this paper, p = 5 is set to obtain the four edge detection images of Fig. 9.
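A minimal version of this thresholding (our sketch; the central-difference gradient operator and the test image are assumptions, and edge pixels are written as 255) is:

```python
import numpy as np

def edge_map(img, p=5.0):
    """Binary edge image per Eq. (13): pixels whose gradient magnitude
    exceeds the threshold p are set to 255, all others to 0."""
    gy, gx = np.gradient(img.astype(float))   # central differences
    magnitude = np.hypot(gx, gy)
    return np.where(magnitude > p, 255, 0).astype(np.uint8)

# Vertical step edge: only the columns around the step survive.
img = np.zeros((20, 20))
img[:, 10:] = 100.0
edges = edge_map(img, p=5.0)
```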
Figure 9 shows that (d) has the largest set of boundary points, revealing the most detail under the same gradient threshold. In addition to the boundary of the illuminated circle, the corners, and the bright arcs at the 12 o'clock and 6 o'clock positions, the central region of the heterogeneous body also shows more edge points than the original image and presents the outline of the heterogeneous body. Therefore, the overall and content edges of the image are preserved and sharpened by the preprocessing algorithm, which is crucial for extracting edge features of the transmitted image in heterogeneity detection.

4.3 Comparison of heterogeneity detection results

In previous work, we performed heterogeneity detection on the data set obtained with the preprocessing algorithm and achieved an mAP of 99.9% using a Faster R-CNN training model [16]. This also indicates that the preprocessing algorithm in this paper improves the detection accuracy for heterogeneous bodies.
We used three kinds of image data to test the trained deep model: (a) preprocessed images from the test split of the experimental data set, (b) preprocessed images outside the data set, and (c) images that were neither preprocessed nor in the data set. The detection accuracies were 85.6%, 75.1%, and 66.7%, respectively, i.e., a > b > c, which indicates that detection accuracy is affected by image SNR and resolution to some extent. It also verifies that heterogeneity detection accuracy is higher for images processed by the joint preprocessing algorithm.

5 Discussion

Based on the experiment and on single-channel frame accumulation, this paper proposes a preprocessing algorithm that is well matched with heterogeneity detection in transmitted breast tissue images. Compared with traditional filtering methods, the algorithm greatly improves the SNR and contrast and improves the quality of the transmitted phantom image. Since the algorithm is based on simulated experimental images, there are still gaps between our simulation and real applications, although the design of the simulated experiment and diseased phantoms closely resembles the real environment and diseased breast tissue. The algorithm therefore has certain limitations in actual scenarios and may not perform as well as on the experimental images. In view of this limitation, subsequent work will carry out experiments and research on real data based on this algorithm. Despite these limitations, the algorithm provides a reference direction for studying the processing of real lesion images in the future, and it may become a method for improving the quality of multispectral transmission tissue images and similarly blurred images.

6 Conclusions

In this paper, we designed a transmission phantom image acquisition experiment based on the characteristics of breast tissue. For the first time, a high-transmittance PMMA container was used to hold the liquid phantom, and we proposed a joint preprocessing algorithm based on frame accumulation and edge enhancement. High-quality transmission tissue images suitable for heterogeneity detection were obtained after processing with the algorithm. The experimental results show that the PSNR of the processed image is increased to 57.3 dB, and the SNR is 2.60 dB higher than that of the original image. The edges are preserved and sharpened, and the standard deviation is 19.8998 higher than that of the original image, showing a significant increase in contrast.

Acknowledgements

Baoju Zhang acknowledges funding from NYSFC and TSF. This work was partially supported by the Natural Youth Science Foundation of China (NYSFC-61401310) and the Tianjin Science Foundation (TSF-18JCYBJC86400).

Competing interests

The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Literature
2. R. Peyret, F. Khelifi, A. Bouridane, S. Al-Maadeed, Automatic diagnosis of prostate cancer using multispectral based linear binary pattern bagged codebooks. Paper presented at the 2017 2nd International Conference on Bio-Engineering for Smart Technologies (BioSMART), 2017.
5. X. Yang, G. Li, L. Lin, Assessment of spatial information for hyperspectral imaging of lesion. Paper presented at Proceedings of the SPIE, 100242O (2016), p. 9.
Metadata
Publisher: Springer International Publishing
DOI: https://doi.org/10.1186/s13638-019-1534-x
