Automatic blood detection in capsule endoscopy video
Adam Novozámský, Jan Flusser, Ilja Tachecí, Lukáš Sulík, Jan Bureš, Ondřej Krejcar
Journal of Biomedical Optics 21(12), 126007. Open Access. Published 9 December 2016.
Abstract
We propose two automatic methods for detecting bleeding in wireless capsule endoscopy videos of the small intestine. The first one uses solely the color information, whereas the second one incorporates assumptions about the blood spot shape and size. The key original idea is the definition of a new color space that provides good separability of blood pixels and the intestinal wall. The two methods can be applied individually, or their results can be fused for the final decision. We evaluate their individual performance and various fusion rules on real data manually annotated by an endoscopist.

1.

Introduction

Wireless capsule video endoscopy (WCE) is a noninvasive diagnostic tool for small bowel investigation that has been used in clinical practice since 2001. WCE places only a small burden on the patient. The patient swallows a cylindrical plastic capsule about 10×25 mm in size (depending on the manufacturer), which contains a digital video camera, an LED light source, a signal transmitter, and a battery. The capsule travels through the gastrointestinal tract by peristaltic contractions, captures images, and wirelessly transmits them in real time to an external console worn by the patient. The images are recorded and stored in the console memory and can be uploaded to a computer for visual inspection or automatic analysis immediately after the monitoring has been completed. Current capsules take frames at a rate between two and six per second on average, which results in many thousands of images (typically up to 60,000) and >10 h of video per patient. (Fortunately, current batteries are powerful enough to supply the light source, camera, and transmitter the whole time.) The primary use of capsule endoscopy is to examine areas of the small intestine that are difficult to display by other types of endoscopy. WCE has been successfully applied in the detection of small bowel bleeding sources, Crohn's disease, complications of coeliac disease, small bowel tumors, and nonsteroidal anti-inflammatory drug-induced enteropathy.

The main obstacle to routine usage of WCE is that the visual evaluation of the video is very time consuming. It is supposed to be done by a trained endoscopist. Even though the software provided by the capsule producers offers certain supporting tools to simplify and speed up the process, it still takes at least 1 h of full concentration of the evaluator. Because the pathology may be visible in only a very few frames and, hence, can be missed easily, the importance of the human factor is apparent.

The goal of this paper is to propose a technique for detecting frames suspected of containing bleeding. We do not aim to develop a fully automatic tool for bleeding detection that would replace the doctor. Rather, the method should preprocess the video, identify and export suspected frames, and prepare them for visual inspection, whereas the other frames are skipped and not sent for inspection at all. This may significantly reduce the evaluation time while the final decision is still left to the endoscopist. This intended goal predetermines the required properties of the method: it should be fast and should provide a high true-positive (TP) rate while keeping the false-positive (FP) rate reasonably low.

In Sec. 2, we present a short survey of relevant literature. Section 3 describes the proposed algorithms. In Sec. 4, we present the WCE technical parameters and implementation details of the methods. Section 5 contains experimental evaluation on real data.

2.

Literature Survey

The first research articles on WCE video analysis appeared soon after WCE had been introduced into clinical practice. Since bleeding detection is a very frequent requirement and appears to be easily achievable (which is, however, not generally true, as the literature survey shows), the largest group of papers has dealt with this challenge. The main problem is that blood spots and traces do not have any typical texture or shape, and the blood color may vary widely from light red through dark red to brown, which makes the blood difficult to distinguish from other objects and from the intestinal wall. This diversity in color depends on the type of disease, the bleeding time,1 the position of the capsule, and the surrounding conditions.2 If the parameters have been trained on a certain data collection, the color-based methods perform well on this set, but their generalization to other patients or even to another capsule manufacturer is questionable. Nevertheless, color-based features remain the main indicators for classification. The approaches adopted by various authors differ mainly in the color space in which they work. Traditional color spaces such as RGB and HSV have been commonly used, but some authors proposed specialized color spaces. Most of the methods extract the features at the pixel level,3–5 others use image blocks,1 and some methods work with the image as a whole.6

The first blood-detection solution came from one of the capsule producers, the Israeli medical technology company Given Imaging. Its second-generation capsule had embedded software for the detection of bleeding lesions called the suspected blood indicator (SBI). The SBI detected "red areas" in the frames and marked those frames as potentially bloody. However, the study by Signorelli et al.7 showed a low performance of the SBI, with TP, true-negative, FP, and false-negative results of 40.9%, 70.7%, 69.2%, and 42.6%, respectively. Other studies2,8 reached similar conclusions and confirmed that the SBI does not reduce the time required for interpretation of WCE, which was its main goal. This shortcoming motivated the design of new algorithms for detecting the presence of blood. They can be categorized into three main groups according to the size of the frame area they work with.

Pixel-wise methods do not use any local context and process the individual pixels independently. Their main advantages are simplicity and speed. Some of them work in the RGB color space with a spectrum transformation4 or use simple thresholding9 with the threshold found by a support vector machine. However, as we discuss below in Sec. 3.1, RGB space does not exhibit good discriminability in tests on more patients. Shah et al.3 proposed using only the hue component of the hue, saturation, intensity (HSI) representation, and other authors followed their approach. Penna et al.5 tried to improve the accuracy by edge masking with the Mumford–Shah functional; Mohanapriya and Sangeetha10 used the gray-level co-occurrence matrix and classified the pixels by neural networks. In Ref. 6, the color features are extracted with the help of K-means clustering.

Block-wise methods divide each frame into blocks of n×n pixels and calculate color features for each block. Each block is first evaluated separately; then additional criteria for adjacent blocks may be applied.1 The performance of block-wise methods is controlled by the block size n. For large n, the methods converge to the image-wise approach and become robust but lose sensitivity to small blood spots. If n approaches one, the methods converge to the pixel-wise approach: they become less stable but more sensitive. A choice of n=5 is recommended as a compromise.

Image-wise methods provide global features that describe the whole frame. However, this approach cannot discover small bleeding areas. Still, some authors classified the frames in this way, e.g., Lv et al.11 used a spatial pyramid of color invariant histograms.

There are, of course, other image processing methods designed for WCE data analysis. For instance, Gueye et al.12 used WCE for automatic detection of cancerous and precancerous colon lesions. They consider polyps, inflammation, tumors, and bleeding areas as the suspect regions to be detected. Since the primary features are texture and shape, the authors employed the scale invariant feature transform (SIFT)13 as the descriptor. Although they reported promising results in polyp and inflammation detection, their approach cannot be adopted for blood detection in the small intestine: in that case, the texture and the distribution of the key points detected by the SIFT method are not distinctive features for classification. The same is true for methods that use local binary patterns (LBP) for WCE image description.14,15 They capture the texture of the intestinal wall, which is not discriminative for bleeding detection.

3.

Proposed Technique

The aim of this paper is to develop and implement methods for blood detection that recognize blood in the frame regardless of its particular color and size. Special attention is paid to detection of small blood spots (with a diameter around 5 pixels or less), which cannot be detected by most of the earlier methods. We propose two different methods for blood detection, which can be used either individually or their results can be fused by various fusion rules. Let us call these methods A and B in the sequel.

3.1.

Method A

Method A works pixel-wise and is based solely on color. However, it does not work in the standard RGB space because it is known from both the literature1 and our experiments that the RGB space does not provide sufficient discriminability of blood pixels (see Fig. 1). We define a new color space such that the separability of blood pixels and the intestinal wall is maximized. The study we performed on 15 patients shows that an appropriate color space can be defined, as shown below in the first step. The complete algorithm can be summarized as follows:

  • 1.

    K = min(1 − R, 1 − G, 1 − B),
    M = 1 − G − K,
    where R, G, and B ∈ ⟨0, 255⟩. This color space is similar to the popular CMYK space. Pixels with a low value in green and high values in red and blue are well separated.

  • 2.

    R1 = √(G² + B²),
    Rn = 0       if R1 = 0 and R < 128,
         255     if R1 = 0 and R ≥ 128,
         R/R1    if R1 ≠ 0.
    This transform emphasizes the red channel.

  • 3. The classification criterion C is defined as

    C = Rn · M.
    The number of pixels in the frame whose C-value exceeds 200 is denoted NC.

  • 4. Finally, NC is compared to a user-defined threshold t, "the required number of blood pixels." If NC ≥ t, the frame is classified as positive (i.e., as one that may contain blood).
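Below is a minimal sketch of method A in Python/NumPy (the authors' implementation is in C and is not published). We additionally assume that the formulas operate on the 0-to-255 channel scale, i.e., we read "1" in step 1 as the channel maximum 255; the pixel threshold C > 200 and the frame threshold t follow the paper.

```python
import numpy as np

def method_a(rgb, t=5, c_thresh=200):
    """Classify one frame. rgb: H x W x 3 uint8 array.
    Returns (is_positive, n_c), where n_c is the blood-pixel count N_C."""
    R, G, B = (rgb[..., i].astype(np.float64) for i in range(3))

    # Step 1: CMYK-like components (assumption: channels kept on 0..255,
    # so "1" in the paper's formulas is read as the channel maximum 255).
    K = np.minimum(np.minimum(255.0 - R, 255.0 - G), 255.0 - B)
    M = 255.0 - G - K

    # Step 2: red-emphasizing ratio R_n with the paper's special cases.
    R1 = np.sqrt(G ** 2 + B ** 2)
    with np.errstate(divide="ignore", invalid="ignore"):
        Rn = np.where(R1 > 0, R / R1, np.where(R < 128, 0.0, 255.0))

    # Step 3: criterion C and the count of above-threshold pixels.
    C = Rn * M
    n_c = int(np.count_nonzero(C > c_thresh))

    # Step 4: frame-level decision against the user-defined threshold t.
    return n_c >= t, n_c
```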

Fig. 1

(a)–(c) The separability of blood (red histogram) against the background (green histogram) in various color bands. The histograms were calculated over several hundred manually selected blood and blood-free patches. None of the RGB channels provides a sufficient discriminability. (d) The new color space separates the blood very well even in one dimension, given by the value of C. The empirically selected decision threshold on this training data is about 200. This value was used in all experiments in this paper.


The main advantage of method A is its speed because it does not contain any high-level operations. As we will see in the experimental section, the method provides a good TP rate.

3.2.

Method B

Method B uses a more sophisticated approach that is based not only on pixel colors but also incorporates the assumption that the blood in the frame forms a continuous region (or a few such regions). In other words, it eliminates isolated pixels or small spots whose color is similar to blood but that probably do not represent actual bleeding. Thanks to this, method B achieves a low FP rate, albeit at the expense of computation time. On the other hand, it generally yields a higher false-negative rate than A.

The algorithm consists of four main steps:

  • 1. The Canny edge detector16 is applied to find closed-boundary regions.

  • 2. Morphological erosion is applied to remove small regions. The term “small” is given by a user-defined maximum diameter.

  • 3. The input image is converted to HSV. Pixels whose color falls within the interval of blood colors, defined in advance by training, are masked.

  • 4. The intersection of the outputs of step 2 and step 3 is classified as a blood spot.

For illustration of the individual steps, see Fig. 2.
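A rough OpenCV sketch of this pipeline is given below. It is not the authors' implementation; in particular, the HSV blood interval (hsv_lo, hsv_hi) is a placeholder that would have to be learned from annotated training frames, and the Canny thresholds are illustrative.

```python
import cv2
import numpy as np

def method_b(bgr, min_diameter=5,
             hsv_lo=(0, 120, 70), hsv_hi=(10, 255, 255)):
    # Step 1: Canny edges, morphologically closed to approximate
    # closed-boundary regions.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE,
                              np.ones((3, 3), np.uint8))

    # Fill the closed boundaries to obtain candidate regions.
    regions = closed.copy()
    cnts, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(regions, cnts, -1, 255, thickness=cv2.FILLED)

    # Step 2: erosion removes regions smaller than the given diameter.
    kernel = np.ones((min_diameter, min_diameter), np.uint8)
    regions = cv2.erode(regions, kernel)

    # Step 3: mask pixels whose HSV color falls into the trained
    # blood-color interval (placeholder values here).
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    blood_mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))

    # Step 4: the intersection of both masks is reported as a blood spot.
    return cv2.bitwise_and(regions, blood_mask)
```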

Fig. 2

Blood detection by method B. (a) Input image. (b) Output of the Canny detector. (c) Approximate closed-boundary regions. (d) Morphological operation. (e) In parallel, the input image (a) is converted to HSV and potential blood pixels are masked. (f) The output created by the intersection of (d) and (e).


4.

Technical Parameters and Implementation

In our study, we worked with two WCE systems: PillCam SB3 by Given Imaging17 and EndoCapsule developed by Olympus.18 The former capsule produces video at a spatial resolution of 256×256 pixels with a variable sampling frequency, where the frames-per-second (fps) rate fluctuates from 0.5 to 3; the latter yields 288×288 video at a stable rate of 2 fps. Both videos are stored in proprietary compression formats based on motion JPEG. We decoded these formats and extracted the individual video frames.

Both companies also provide simple blood-detection software along with the capsule. This software apparently works in RGB space, trying to detect frames containing an above-threshold number of pixels of "blood color" (details of the implemented algorithms are not available). As we already mentioned, the "blood color" varies significantly among patients, which is the main drawback of these simple algorithms. Nevertheless, both the Olympus and the Given Imaging software packages are very fast, have a comfortable graphical interface [see Figs. 3(b) and 3(a)], and may be useful for an approximate overview, but they are not suitable for rigorous analysis.

Fig. 3

Screen shots of commercial software: (a) Given Imaging and (b) Olympus.


Both our methods were implemented in the C language and include a user-friendly graphical user interface (GUI) to provide the doctor with maximum comfort. The input parameter t can be easily changed by a GUI slider (see Fig. 4 for a screenshot). Note that t becomes effective only in the last step of the algorithm. Hence, steps 1 to 3 are performed automatically and only once, regardless of the particular value of t. When t changes, only a single-number comparison is performed for each frame, and the results are displayed in real time. This is a big advantage of method A, as the sketch below illustrates.
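A sketch of why the slider reacts in real time (method_a refers to the function sketched in Sec. 3.1; video_frames is an assumed list of decoded frames, not the authors' API):

```python
# Heavy steps 1-3 run once per video; only the counts N_C are stored.
n_c = [method_a(frame, t=1)[1] for frame in video_frames]

def positive_frames(t):
    """Re-evaluated instantly whenever the GUI slider moves."""
    return [i for i, n in enumerate(n_c) if n >= t]
```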

Fig. 4

Screen shot of our software solution. The frames classified as positive are marked by a small red bar in the time axis. The thickness of the bar is proportional to the number of blood pixels. The frames can be visualized for inspection (left) and for interactive contrast enhancement if necessary (right).


5.

Experiments

We tested the performance of the methods on real WCE data. We used videos of 15 patients with lengths from 12,000 to 20,000 frames each. The patients suffered from bleeding into the small intestine of various intensities and extents. Some patients bled in more than one place.

All videos were first annotated manually by an experienced endoscopist, who selected altogether 390 frames showing bleeding of various types and extents (see Fig. 5 for examples). Among them, he marked 339 frames as "serious bleeding." He then selected another 1500 frames with no bleeding (see Fig. 6 for examples). Some of these have an appearance that could be misinterpreted as bleeding because of the presence of red spots. We did not use this knowledge for training but solely as ground truth for evaluating the performance of the methods.

Fig. 5

Examples of frames containing blood annotated by a doctor.


Fig. 6

Examples of blood-free frames annotated by a doctor.


Fig. 7

Examples of hard-to-detect blood frames. (a) Found by A, missed by B. (b) Found by B, missed by A. (c) Not found.


The results are summarized in Tables 1–3. They show the performance of A and B used individually, along with the performance of various fusion rules. The rule A∧B means that the frame must be classified as a blood frame by both A and B; A+B means that at least one method must classify the frame as positive; A−B requires the frame to be labeled positive by A but negative by B (and B−A vice versa). The second column of Table 1 gives the TP rate achieved on the 339 serious blood frames. The third column shows the same rate calculated over all 390 blood frames. The last column shows the FP rate evaluated on the set of 1500 blood-free frames. Both A and B require a few user-defined parameters. The main parameter is the number of suspected pixels t in method A: Table 1 is for t=1, Table 2 for t=5, and Table 3 for t=10. The fusion rules reduce to elementary boolean operations, as the sketch below shows.
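A minimal sketch of the fusion rules (a and b denote the boolean per-frame decisions of methods A and B; the rule names mirror the table rows):

```python
def fuse(a: bool, b: bool, rule: str) -> bool:
    """Combine the per-frame decisions of methods A and B."""
    return {
        "A^B": a and b,       # flagged by both (A∧B in the tables)
        "A+B": a or b,        # flagged by at least one method
        "A-B": a and not b,   # flagged by A but not by B
        "B-A": b and not a,   # flagged by B but not by A
    }[rule]
```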

Table 1

Sensitivity t=1 pixel.

Method | TP-significant (%) | TP-all (%) | FP (%)
A | 86.43 | 85.64 | 20.20
A−B | 28.91 | 31.54 | 15.93
B | 58.11 | 54.62 | 5.93
B−A | 0.59 | 0.51 | 1.67
A∧B | 57.52 | 54.10 | 4.27
A+B | 87.02 | 86.15 | 21.87

Table 2

Sensitivity t=5 pixels.

Method | TP-significant (%) | TP-all (%) | FP (%)
A | 81.12 | 80.26 | 15.13
A−B | 26.25 | 28.97 | 11.27
B | 58.11 | 54.62 | 5.93
B−A | 3.24 | 3.33 | 2.07
A∧B | 54.87 | 51.28 | 3.87
A+B | 84.37 | 83.59 | 17.20

Table 3

Sensitivity t=10 pixels.

Method | TP-significant (%) | TP-all (%) | FP (%)
A | 75.81 | 74.87 | 12.80
A−B | 22.42 | 25.13 | 9.53
B | 58.11 | 54.62 | 5.93
B−A | 4.72 | 4.87 | 2.67
A∧B | 53.39 | 49.74 | 3.27
A+B | 80.53 | 79.74 | 15.47

Based on this experiment, we can deduce the following conclusions.

  • Method A, if used individually, yields good TP and FP rates even if the blood spots are very small (2 to 5 pixels). Method B used individually is significantly worse in terms of the TP rate. With the fusion rule A∧B, the FP rate falls below 5%, but the TP rate does not exceed 60%, which is a poor trade-off. With the fusion A+B, both rates are acceptable to the endoscopists: a TP rate above 80% and an FP rate around 20%. This combination appears optimal when computing complexity (and consequently processing time) is not a crucial criterion. If time plays an important role, then A alone is the best choice. On a common PC, it can process >80 fps.

  • The dependence of the performance of A on the parameter t is relatively mild. Both the TP and FP rates decrease (with some random fluctuations) as t increases. Since a high TP rate is the primary criterion, we recommend setting t between 1 and 3 pixels. Note that method B does not depend on t at all.

  • If the fusion A+B is to be used, the best choice of t is about 5 pixels.

We also compared our approach with a state-of-the-art method.19 Its TP rate on the 390 blood frames was 58%, whereas its FP rate on the 1500 negative frames was 41%. Clearly, our method performs much better under most parameter settings. In Fig. 7, we show some examples of frames that are difficult to detect.

Tables 1–3 show our ability to detect bleeding in an isolated frame without taking the time context into account. In reality, however, blood in the intestine always appears in more than one consecutive frame. We therefore evaluated the same experiments once again with a modified methodology: a detection is marked as TP if a ground-truth positive frame lies within its 5-s neighborhood. This corresponds to how the method is actually used: the detected frames are displayed, and the doctor checks their short-time neighborhood. The recalculated success rates (see Tables 4–6) clearly demonstrate the excellent performance of our method.
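A minimal sketch of the relaxed scoring; the ±5-s window follows the paper, whereas the fixed-fps conversion from seconds to frames is a simplifying assumption (variable-rate videos would need per-frame timestamps instead):

```python
def tp_with_time_context(detected, ground_truth, fps=2.0, window_s=5.0):
    """Count a detection as TP if a ground-truth bleeding frame index
    lies within +-window_s seconds of the detected frame index."""
    w = int(round(window_s * fps))          # window size in frames
    gt = set(ground_truth)
    return sum(1 for d in detected
               if any((d + k) in gt for k in range(-w, w + 1)))
```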

Table 4

Sensitivity t=1 pixel—at least one frame within ±5  s.

Method | TP-significant (%) | TP-all (%) | FP (%)
A | 98.23 | 96.41 | 20.20
A−B | 11.50 | 13.59 | 15.93
B | 86.73 | 82.82 | 5.93
B−A | 0.00 | 0.00 | 1.67
A∧B | 86.73 | 82.82 | 4.27
A+B | 98.23 | 96.41 | 21.87

Table 5

Sensitivity t=5 pixels—at least one frame within ±5  s.

Method | TP-significant (%) | TP-all (%) | FP (%)
A | 96.76 | 94.10 | 15.13
A−B | 10.91 | 12.31 | 11.27
B | 86.73 | 82.82 | 5.93
B−A | 0.88 | 1.03 | 2.07
A∧B | 85.84 | 81.79 | 3.87
A+B | 97.64 | 95.13 | 17.20

Table 6

Sensitivity t=10 pixels—at least one frame within ±5  s.

Method | TP-significant (%) | TP-all (%) | FP (%)
A | 92.04 | 89.74 | 12.80
A−B | 8.26 | 9.74 | 9.53
B | 86.73 | 82.82 | 5.93
B−A | 2.95 | 2.82 | 2.67
A∧B | 83.78 | 80.00 | 3.27
A+B | 94.99 | 92.56 | 15.47

The last statistic illustrates the saving of time needed for visual inspection. The third column of Table 7 gives the duration of the original video. The last three columns show the time spent by a doctor (as a percentage of the original time) when checking only the positive frames, depending on the choice of the parameter t (we assume the frames are displayed at a 3 fps rate). The t=5 column corresponds to the recommended choice of the parameter. The time saving, of course, depends on the extent of bleeding; in our study, the inspection time varied from 1% to almost 90% of the original time, the latter in the case of a patient with extensive bleeding throughout the digestive tract.
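As a worked example based on Table 7: for Pat 2 with t=5, only 1.37% of the 56:31 video has to be inspected, i.e., roughly 0.0137 × 3391 s ≈ 46 s of viewing at the assumed 3 fps display rate.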

Table 7

Reducing the inspection time by the proposed method.

Patient | No. of frames | Video time | t=1 (%) | t=5 (%) | t=10 (%)
Pat 2 | 10,174 | 56:31 | 3.47 | 1.37 | 0.66
Pat 8 | 32,750 | 3:01:56 | 2.41 | 1.76 | 1.51
Pat 9 | 31,389 | 2:54:23 | 4.94 | 2.14 | 1.15
Pat 7 | 11,140 | 1:01:53 | 2.84 | 2.20 | 1.65
Pat 14 | 63,629 | 5:53:29 | 2.58 | 2.41 | 2.32
Pat 12 | 13,322 | 1:14:00 | 6.52 | 3.44 | 2.13
Pat 5 | 19,078 | 1:45:59 | 6.81 | 4.25 | 3.03
Pat 13 | 63,992 | 5:55:30 | 4.64 | 4.38 | 4.22
Pat 4 | 16,909 | 1:33:56 | 9.41 | 5.81 | 3.88
Pat 6 | 25,504 | 2:21:41 | 11.72 | 9.08 | 7.09
Pat 11 | 19,742 | 1:49:40 | 10.81 | 9.80 | 9.17
Pat 1 | 13,432 | 1:14:37 | 14.89 | 10.36 | 7.86
Pat 10 | 29,832 | 2:45:44 | 22.83 | 18.92 | 16.59
Pat 15 | 24,762 | 2:17:34 | 30.84 | 27.21 | 24.93
Pat 3 | 87,075 | 8:03:45 | 90.17 | 87.28 | 85.50
Median | 24,762 | 2:17:34 | 6.81 | 4.38 | 3.88

6.

Conclusion

In this paper, we proposed two automatic methods for detecting bleeding in WCE video of the small intestine. The first one uses solely the color information, whereas the second one incorporates assumptions about the blood spot shape and size. The key original idea is the definition of a new color space that provides good separability of blood pixels and the intestinal wall.

We tested both methods individually as well as in various combinations and evaluated the results on a large test set manually annotated by an endoscopist. The conclusion is that method A gives good results in terms of both TP and FP rates. The fusion rule A+B further improves this result because method B is based on different assumptions, but A+B is significantly slower than A alone.

We compared the proposed method with the method published in Ref. 19, and our method proved superior. An objective comparison with other methods published in the literature is unfortunately not possible because each author uses their own dataset and different hardware and software platforms; additionally, most authors have not made their code publicly available.

We believe the presented technique significantly reduces the time endoscopists must spend on expert visual assessment. The method is currently in use at the University Hospital Hradec Kralove, Charles University, Czech Republic.

Disclosures

No conflicts of interest, financial or otherwise, are declared by the authors.

Acknowledgments

The study was supported by the IGA NT 13532-4/2012 research grant provided by the Czech Ministry of Health Care.

References

1. Y.-G. Lee and G. Yoon, "Real-time image analysis of capsule endoscopy for bleeding discrimination in embedded system platform," Int. J. Med. Health Biomed. Bioeng. Pharm. Eng. 5(11), 583–587 (2011).

2. S. C. Park et al., "Sensitivity of the suspected blood indicator: an experimental study," World J. Gastroenterol. 18, 4169–4174 (2012). http://dx.doi.org/10.3748/wjg.v18.i31.4169

3. S. K. Shah, J. K. Lee, and M. E. Celebi, "Classification of bleeding images in wireless capsule endoscopy using HSI color domain and region segmentation," in Proc. of 2007 New England American Society for Engineering Education Conf. (2007).

4. Y. S. Jung et al., "Active blood detection in a high resolution capsule endoscopy using color spectrum transformation," in 2008 Int. Conf. on BioMedical Engineering and Informatics, 859–862 (2008).

5. B. Penna et al., "A technique for blood detection in wireless capsule endoscopy images," in 17th European Signal Processing Conf., 1864–1868 (2009).

6. Y. Yuan, B. Li, and M. Q. H. Meng, "Bleeding frame and region detection in the wireless capsule endoscopy video," IEEE J. Biomed. Health Inform. 20, 624–630 (2016). http://dx.doi.org/10.1109/JBHI.2015.2399502

7. C. Signorelli et al., "Sensitivity and specificity of the suspected blood identification system in video capsule enteroscopy," Endoscopy 37, 1170–1173 (2005). http://dx.doi.org/10.1055/s-2005-870410

8. P.-N. D'Halluin et al., "Does the suspected blood indicator improve the detection of bleeding lesions by capsule endoscopy?," Gastrointestinal Endoscopy 61(2), 243–249 (2005). http://dx.doi.org/10.1016/S0016-5107(04)02587-8

9. T. Ghosh, S. A. Fattah, and K. A. Wahid, "Automatic bleeding detection in wireless capsule endoscopy based on RGB pixel intensity ratio," in 2014 Int. Conf. on Electrical Engineering and Information Communication Technology (ICEEICT), 1–4 (2014).

10. P. Mohanapriya and M. Sangeetha, "An efficient approach to detect bleeding region in GI tract using segmentation and classification techniques," Int. J. Adv. Inf. Commun. Technol. 1(1), 153–159 (2014).

11. G. Lv, G. Yan, and Z. Wang, "Bleeding detection in wireless capsule endoscopy images based on color invariants and spatial pyramids using support vector machines," in 2011 Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society, 6643–6646 (2011). http://dx.doi.org/10.1109/IEMBS.2011.6091638

12. L. Gueye et al., "Automatic detection of colonoscopic anomalies using capsule endoscopy," in 2015 IEEE Int. Conf. on Image Processing (ICIP), 1061–1064 (2015). http://dx.doi.org/10.1109/ICIP.2015.7350962

13. D. G. Lowe, "Object recognition from local scale-invariant features," in Proc. of the Seventh IEEE Int. Conf. on Computer Vision, 1150–1157 (1999).

14. B. Li and M. Q. H. Meng, "Computer-aided detection of bleeding regions for capsule endoscopy images," IEEE Trans. Biomed. Eng. 56, 1032–1039 (2009). http://dx.doi.org/10.1109/TBME.2008.2010526

15. Q. Zhao and M. Q. H. Meng, "Polyp detection in wireless capsule endoscopy images using novel color texture features," in 2011 9th World Congress on Intelligent Control and Automation (WCICA), 948–952 (2011).

16. J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 679–698 (1986). http://dx.doi.org/10.1109/TPAMI.1986.4767851

17. Given Imaging, "Rapid for PillCam software," http://www.givenimaging.com/ (May 2016).

18. Olympus, "EndoCapsule 10 software client," http://www.olympus-europa.com/medical/ (May 2016).

19. A. A. Al-Rahayfeh and A. A. Abuzneid, "Detection of bleeding in wireless capsule endoscopy images using range ratio color," (2010).

Biography

Adam Novozámský received his MSc degree in informatics from the Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University, Prague, in 2010. He is currently pursuing his PhD with the Institute of Information Theory and Automation Cooperating Institute of Czech Technical University, Prague. His research interests include medical imaging, image segmentation, and image forensics.

Jan Flusser received his MSc degree in mathematical engineering from Czech Technical University, Prague, Czech Republic, in 1985, his PhD in computer science in 1990, and his DrSc degree in 2001, and he became a full professor in 2004. Since 1985, he has been with the Institute of Information Theory and Automation, Czech Academy of Sciences (director of the Institute from 2007 to 2017). His research interests include moments and moment invariants, image registration, image fusion, multichannel blind deconvolution, and super-resolution imaging.

Ilja Tachecí received his MD degree in general medicine from Charles University (Faculty of Medicine in Hradec Kralove) in 1999 and his PhD in internal medicine in 2010. He is currently deputy head for medical education at the 2nd Department of Internal Medicine, University Hospital in Hradec Kralove. His research interests cover invasive and experimental gastrointestinal endoscopy focused on small bowel diseases.

Lukáš Sulík received his MSc degree in applied informatics from the Faculty of Informatics and Management, University of Hradec Kralove, Czech Republic, in 2016. He is currently in postgraduate study at the Center of Basic and Applied Research of the same university. His interests are in image processing.

Jan Bureš graduated from Charles University in 1979. He has been affiliated with the 2nd Department of Medicine, Charles University Faculty of Medicine and University Hospital in Hradec Kralove since 1979. He became an associate professor in 1995 and a professor of internal medicine in 2002. He was elected a fellow of the Czech Medical Academy (FCMA) in 2014. Currently, he is the head of the Academic Department of Internal Medicine. His clinical and experimental research is directed at digestive endoscopy, gastrointestinal microbiome, and inflammatory bowel disease.

Ondřej Krejcar received his MSc degree in control and information systems from the Technical University of Ostrava, Ostrava, Czech Republic, in 2002, his PhD in technical cybernetics in 2008, and became an associate professor in 2011 in technical cybernetics at the same university. He is currently a vice-dean for science and research at the Faculty of Informatics and Management of the University of Hradec Kralove. His research interests include biomedicine, image segmentation and recognition, video processing, biometrics, technical cybernetics, and ubiquitous computing.

© 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
Adam Novozámský, Jan Flusser, Ilja Tachecí, Lukáš Sulík, Jan Bureš, and Ondřej Krejcar, "Automatic blood detection in capsule endoscopy video," Journal of Biomedical Optics 21(12), 126007 (9 December 2016). https://doi.org/10.1117/1.JBO.21.12.126007