2014 | Original Paper | Book Chapter
Computer-Aided Diagnosis of Abdominal Aortic Aneurysm after Endovascular Repair Using Active Learning Segmentation and Texture Analysis
Authored by: G. García, J. Maiora, A. Tapia, M. Graña, M. De Blas
Published in: XIII Mediterranean Conference on Medical and Biological Engineering and Computing 2013
Endovascular repair is a minimally invasive alternative to open surgical therapy. In the long term, complications such as prosthesis displacement or leaks into the aneurysm sac (endoleaks) may appear, influencing the evolution of the treatment. The objective of this work is to develop a Computer-Aided Diagnosis (CAD) system for the automated classification of EndoVascular Aneurysm Repair (EVAR) progression from Computed Tomography Angiography (CTA) images. The CAD system is based on the extraction of texture features from segmented aneurysm thrombus samples and their subsequent classification. Image segmentation is produced by an interactive Active Learning procedure based on Random Forest (RF) classification. An initial set of labeled points is required to train the RF; the human operator is then presented with the most uncertain unlabeled voxels and selects some of them for inclusion in the training set, after which the RF classifier is retrained. Three texture-analysis methods, the gray level co-occurrence matrix (GLCM), the gray level run length matrix (GLRLM), and the gray level difference method (GLDM), were applied to each ROI to obtain texture features. Classification of the ROIs is carried out by an ensemble of SVM classifiers; the final decision is based on a voting scheme applied across the outputs of the individual SVMs. The performance of the classifier was evaluated using 10-fold cross validation, yielding average accuracy of 93.84% ± 0.29, sensitivity of 94.72% ± 0.23, and specificity of 91.48% ± 0.35.
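The interactive Active Learning loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the voxel feature vectors, labels, query batch size, and uncertainty measure (margin between the two most probable classes) are all assumptions, and the human operator is replaced by a synthetic oracle.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for voxel feature vectors; the real system would
# derive these from the CTA volume. Labels are a hidden ground truth
# that plays the role of the human operator (the "oracle").
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

labeled = list(rng.choice(len(X), size=20, replace=False))  # initial seed labels
unlabeled = [i for i in range(len(X)) if i not in labeled]

rf = RandomForestClassifier(n_estimators=50, random_state=0)
for _ in range(5):  # a few active-learning rounds
    rf.fit(X[labeled], y[labeled])
    proba = rf.predict_proba(X[unlabeled])
    # Uncertainty: small margin between the two class probabilities.
    margin = np.abs(proba[:, 0] - proba[:, 1])
    query = np.argsort(margin)[:10]  # the 10 most uncertain voxels
    # The operator would label these interactively; here the oracle is y.
    # Pop in descending index order so earlier positions stay valid.
    for q in sorted(query, reverse=True):
        labeled.append(unlabeled.pop(q))

accuracy = rf.score(X, y)
```

Querying low-margin voxels concentrates the operator's labeling effort on the segmentation boundary, which is where an RF trained on a small seed set is least reliable.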
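Of the three texture methods, the GLCM is the simplest to sketch: count how often pairs of gray levels co-occur at a fixed spatial offset, normalize to a joint probability, and derive scalar features. The function below is a hedged sketch, assuming a single horizontal offset and only two Haralick-style features (contrast and energy); the paper's feature set and offsets are not specified here.

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Compute a gray level co-occurrence matrix (GLCM) for one offset
    and return (contrast, energy). `img` holds integer gray levels in
    [0, levels); `offset` is the (row, col) displacement of the pair."""
    dr, dc = offset
    glcm = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[img[r, c], img[r + dr, c + dc]] += 1
    glcm /= glcm.sum()                      # joint probability of level pairs
    i, j = np.indices((levels, levels))
    contrast = float(np.sum(glcm * (i - j) ** 2))  # local intensity variation
    energy = float(np.sum(glcm ** 2))              # texture uniformity
    return contrast, energy
```

A uniform region gives contrast 0 and energy 1, while a checkerboard maximizes contrast; this is the kind of discriminative signal the thrombus ROIs are expected to carry.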
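The final stage, an SVM ensemble combined by majority voting and evaluated with 10-fold cross validation, can be sketched with scikit-learn's `VotingClassifier`. The member kernels, feature vectors, and labels below are illustrative assumptions; the paper does not specify its SVM configuration here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Synthetic stand-in for per-ROI texture feature vectors (GLCM/GLRLM/GLDM
# descriptors in the real pipeline) with an illustrative binary outcome.
X = rng.normal(size=(200, 6))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

# Ensemble of SVMs; hard voting takes the majority class across members,
# mirroring the voting scheme described in the abstract.
ensemble = VotingClassifier(
    estimators=[
        ("linear", SVC(kernel="linear")),
        ("rbf", SVC(kernel="rbf")),
        ("poly", SVC(kernel="poly", degree=2)),
    ],
    voting="hard",
)

scores = cross_val_score(ensemble, X, y, cv=10)  # 10-fold cross validation
mean_acc = scores.mean()
```

`cross_val_score` with `cv=10` reproduces the evaluation protocol: the mean and spread of the ten fold accuracies correspond to the reported average-plus-deviation figures.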