
About this book

This book gathers papers presented at VipIMAGE 2017, the VI ECCOMAS Thematic Conference on Computational Vision and Medical Image Processing. It highlights invited lectures and full papers presented at the conference, which was held in Porto, Portugal, on October 18–20, 2017. These international contributions provide comprehensive coverage of the state of the art in the following fields: 3D Vision, Computational Bio-Imaging and Visualization, Computational Vision, Computer Aided Diagnosis, Surgery, Therapy and Treatment, Data Interpolation, Registration, Acquisition and Compression, Industrial Inspection, Image Enhancement, Image Processing and Analysis, Image Segmentation, Medical Imaging, Medical Rehabilitation, Physics of Medical Imaging, Shape Reconstruction, Signal Processing, Simulation and Modelling, Software Development for Image Processing and Analysis, Telemedicine Systems and their Applications, Tracking and Analysis of Movement, and Deformation and Virtual Reality.

In addition, it explores a broad range of related techniques, methods and applications, including: trainable filters, bilateral filtering, statistical, geometrical and physical modelling, fuzzy morphology, region growing, grabcut, variational methods, snakes, the level set method, the finite element method, the wavelet transform, multi-objective optimization, the scale invariant feature transform, Laws’ texture-energy measures, expectation maximization, Markov random fields, the bootstrap, feature extraction and classification, support vector machines, random forests, decision trees, deep learning, and stereo vision.

Given its breadth of coverage, the book offers a valuable resource for academics, researchers and professionals in Biomechanics, Biomedical Engineering, Computational Vision (image processing and analysis), Computer Sciences, Computational Mechanics, Signal Processing, Medicine and Rehabilitation.

Table of Contents

Frontmatter

Invited Lectures

Frontmatter

Interactive Browsing Systems for Large Image Collections

Image database navigation systems provide an interesting approach to managing and finding images in large image repositories. Here, an image collection is visualised, typically based on features extracted from the images, in such a way that similar images are placed close to each other in an arrangement that aids understanding and supports interactive browsing of the image set. In this paper, we provide an overview of image browsing systems, focussing on some of the systems that we have developed in our lab to enable effective and efficient interactive exploration of large image collections.

Gerald Schaefer

Quantitative MR Image Analysis for Brain Tumor

This paper presents an integrated quantitative MR image analysis framework that includes all necessary steps: MRI inhomogeneity correction, feature extraction, multiclass feature selection and multimodality abnormal brain tissue segmentation. We first derive a mathematical algorithm to compute a novel generalized multifractional Brownian motion (GmBm) texture feature. We then demonstrate the efficacy of multiple multiresolution texture features, including regular fractal dimension (FD) texture and stochastic textures such as multifractional Brownian motion (mBm) and GmBm features, for robust tumor and other abnormal tissue segmentation in brain MRI. We evaluate these texture and associated intensity features to effectively delineate multiple abnormal tissues within and around the tumor core, and stroke lesions, using large-scale public and private datasets.

Zeina A. Shboul, Sayed M. S. Reza, Khan M. Iftekharuddin

Contributed Papers

Frontmatter

Foot Pressure Distribution of Patients with Hallux Valgus During Walking up and Down Stairs

Hallux Valgus is one of the most common deformities in the orthopedic context, with a direct influence on gait. For this pathology, plantar pressure distribution analysis can be important as a complementary method of evaluation and diagnosis, as well as in the follow-up of the patient’s rehabilitation. This study is framed in the context of the distribution of plantar pressure in patients with Hallux Valgus with clinical indication for surgical treatment. The work involves gait analysis in nine volunteers. The plantar pressure distribution was collected with an insole measurement system, following a defined protocol based on walking up and down stairs. The data were recorded at two moments: the day before surgery and ninety days after surgery. The analysis of the results, focused on the pathologic foot, shows that foot pressure analysis, following an up- and downstairs protocol, can play an important role in surgical follow-up during the post-surgery phase.

Linda Pinto, Luis Roseiro, Luís Margalho, Francisco Gomes, Tiago Roseiro, Pedro Carvalhais

Minimisation of Acquisition Time in a TOF PET/CT Scanner Without Compromising Image Quality

Significant improvements have been made in Positron Emission Tomography (PET) to enhance image quality, namely the development of time-of-flight (TOF) technology. This technique helps to localize the emission point of the beta plus-emitter (β+) radiopharmaceutical inside the body, allowing better lesion contrast and leading to shorter scan times. The main goal of this study is to investigate the shortest acquisition time that does not compromise image quality, in both a NEMA body phantom and patients, using a TOF PET/Computed Tomography (PET/CT) scanner and the radionuclide Gallium-68 (68Ga). Image quality parameters and quantification in terms of standardized uptake value (SUV) were assessed. A time between 45 and 60 s per bed position is proposed for future clinical practice.

J. Oliveira, R. Parafita, S. Branco

A Variational Model for Image Artifact Correction Based on Wasserstein Distance

Uneven illumination is a recurrent problem in image processing, essentially due to malfunctioning image acquisition sensors or external interference. In this paper we propose a variational model for nonuniform illumination correction that incorporates a penalty term performing the intensity distribution transfer between two pre-defined sub-regions of the input scalar image, one uniformly illuminated and the other nonuniformly illuminated. This term, representing the illumination correction, is a Wasserstein distance. It corresponds to the optimal permutation minimizing the cost of rearranging the intensity distribution of the nonuniformly illuminated sub-region into that of the uniformly illuminated one. Simultaneously, the variational model carries out a regularization of the image by means of a total variation penalty term, to reduce noise. The effectiveness of the model is illustrated on several images.

Isabel Narra Figueiredo, Luís Pinto, Gil Gonçalves, Björn Engquist
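
Since the two sub-region intensity distributions are one-dimensional, the Wasserstein penalty used here admits a closed-form evaluation via sorting (the optimal transport plan in 1D is the monotone rearrangement). The following Python sketch illustrates this computation; it is an illustration only, not the authors’ implementation, and the squared cost and the resampling step are assumptions:

```python
import numpy as np

def wasserstein_1d(a, b):
    """Squared 2-Wasserstein distance between two 1D intensity samples.

    In one dimension the optimal transport plan is the monotone
    rearrangement, so sorting both samples and pairing quantiles
    index-by-index yields the optimal matching.
    """
    a = np.sort(np.asarray(a, dtype=float).ravel())
    b = np.sort(np.asarray(b, dtype=float).ravel())
    n = min(a.size, b.size)  # resample so both quantile lists match in length
    qa = np.interp(np.linspace(0, 1, n), np.linspace(0, 1, a.size), a)
    qb = np.interp(np.linspace(0, 1, n), np.linspace(0, 1, b.size), b)
    return np.mean((qa - qb) ** 2)

# Toy usage: compare a well-lit patch with a darkened copy of it
rng = np.random.default_rng(0)
uniform_patch = rng.normal(0.6, 0.05, 1000)  # uniformly illuminated intensities
dark_patch = uniform_patch * 0.5             # nonuniformly illuminated region
print(wasserstein_1d(uniform_patch, dark_patch))
```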

Semi-supervised Bayesian Source Separation of Scintigraphic Image Sequences

Many diagnostic methods using scintigraphic image sequences require decomposition of the sequence into tissue images and their time-activity curves. The standard procedure for this task is still manual selection of regions of interest (ROIs), which can be highly subjective due to their overlaps and poor signal-to-noise ratio. This can be overcome by automatic decomposition; however, the results may not have good physiological meaning. In this contribution, we combine these approaches in a semi-supervised procedure based on Bayesian blind source separation, with the possibility of manual interaction after each run until an acceptable solution is obtained. The manual interaction consists of manual ROI placement, whose position is used to modify the corresponding prior parameters of the model. The performance of the proposed method is studied on a real scintigraphic image sequence, as well as on the estimation of a specific diagnostic parameter on a representative dataset of 10 scintigraphic sequences.

Lenka Bódiová, Ondřej Tichý, Václav Šmídl

Cluster Analysis of Functional Neuroimages Using Data Reduction and Competitive Learning Algorithms

In the present work we use pattern vectors derived from Statistical Parametric Maps, generated from a group of artificial and in-house collected fMRI data, to conduct cluster analysis. Two clustering algorithms, the self-organizing map (SOM) and growing neural gas (GNG), are selected to explore inherent properties in the brain functional data. In our experimental context, SOM and GNG show comparable behavior; however, GNG prevails in the management of large data sets. An exploratory, descriptive analysis is conducted on in-house collected data clustered by GNG, and the results are detailed in the paper.

Alberto A. Vergani, Samuele Martinelli, Elisabetta Binaghi

Development of Activities for Human-Robot Interaction: Preliminary Results

The objective of the work described in this paper is to develop an interactive environment between a child and the humanoid robot ZECA. The robot is the mediator in the game, greeting and encouraging the child. The environment consists of two different activities, one to identify geometric figures and the other to identify colours, each with two levels of difficulty. In the first level, the player must fill the empty spaces on a board with the correct figures/colours; in the second level, a piece (figure or colour) is requested by the robot and the player must present the corresponding piece. The image processing was developed in C++ with the OpenCV library. An interface was created using Qt Creator, through which the user can control the execution of the activity. The application was tested with typically developing children between the ages of 3 and 5 years. The children reacted very positively, and the tests allowed optimizing the experimental setup. The next step in the research is to test the system with children with autism spectrum disorders. The goal is to promote social and academic skills in these children by using a robot as a game partner.

Pedro Costa, Helder Freitas, Filomena Soares, João Sena Esteves

Soft Computing Based Technique for Optic Disc and Cup Detection in Digital Fundus Images

The cup-to-disc ratio is an important measure to diagnose glaucoma. This measure can be automatically computed from the segmentation of the optic disc and cup in eye-fundus images. In this paper, a novel segmentation algorithm based on Soft Computing techniques is presented. These techniques are able to handle the imprecision and uncertainty present in the determination of the boundaries of these structures. The algorithm is composed of three main steps: vessel segmentation and removal, inpainting, and optic disc and cup boundary localisation. The preliminary results show the potential of this approach, which obtains a visually accurate segmentation of both structures in the images of the DRIVE database.

P. Bibiloni, M. González-Hidalgo, S. Massanet, A. Mir, D. Ruiz-Aguilera
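
Once binary masks of the disc and cup are available, the cup-to-disc ratio itself is a short computation. A minimal sketch follows (the area-based definition and mask names are assumptions; vertical-diameter variants of the ratio are also common in practice):

```python
import numpy as np

def cup_to_disc_ratio(cup_mask, disc_mask):
    """Area-based cup-to-disc ratio from binary segmentation masks."""
    disc_area = np.count_nonzero(disc_mask)
    if disc_area == 0:
        raise ValueError("empty disc segmentation")
    return np.count_nonzero(cup_mask) / disc_area

# Toy example: a cup occupying 25% of the disc area
disc = np.zeros((100, 100), dtype=bool)
disc[20:80, 20:80] = True
cup = np.zeros_like(disc)
cup[35:65, 35:65] = True
print(cup_to_disc_ratio(cup, disc))  # 0.25
```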

Automatic Segmentation of the Lumen in Magnetic Resonance Images of the Carotid Artery

The segmentation of the lumen and vessel wall in Magnetic Resonance (MR) images of carotid arteries represents a crucial step towards the evaluation of cerebrovascular diseases. However, the automatic segmentation of the lumen is still a challenge due to the usual low quality of the images and the presence of elements that compromise the accuracy of the results. In this article, we describe a fully automatic method to identify the location of the lumen in MR images of the carotid artery. A circularity index is used to assess the roundness of the regions identified by the K-means algorithm in order to obtain the one with the maximum value, i.e. the potential lumen region. Then, an active contour algorithm is employed to refine the boundary of the region found. The method achieved a maximum Dice coefficient of 0.91 ± 0.04 and 0.74 ± 0.16 in 181 postcontrast 3D-T1-weighted and 181 proton density-weighted MR images, respectively. Therefore, the method seems to be promising for identifying the correct location of the lumen in MR images.

Danilo Samuel Jodas, Aledir Silveira Pereira, João Manuel R. S. Tavares
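
A common circularity index for picking the lumen candidate among labelled regions is 4π·area/perimeter², which equals 1 for a perfect disc. The sketch below shows such a selection step using scikit-image; the exact index and implementation used by the authors may differ:

```python
import numpy as np
from skimage import measure

def most_circular_region(label_image):
    """Return the label and score of the region maximizing 4*pi*area/perimeter^2."""
    best_label, best_score = None, -1.0
    for region in measure.regionprops(label_image):
        if region.perimeter == 0:
            continue
        circularity = 4.0 * np.pi * region.area / region.perimeter ** 2
        if circularity > best_score:
            best_label, best_score = region.label, circularity
    return best_label, best_score

# Toy image: an elongated rectangle (label 1) vs. a disc (label 2)
img = np.zeros((64, 64), dtype=int)
img[5:13, 5:45] = 1                              # elongated region
yy, xx = np.ogrid[:64, :64]
img[(yy - 40) ** 2 + (xx - 40) ** 2 <= 144] = 2  # disc-like region
print(most_circular_region(img))                 # the disc scores higher
```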

Adaptive Bias Field Correction: Application on Abdominal MR Images

Segmentation of medical images is one of the most important phases of disease diagnosis. The accuracy, robustness and stability of the results obtained by image segmentation are a major concern. Many segmentation methods rely on absolute intensity values, which are affected by a bias term due to the inhomogeneous field in magnetic resonance images. The main objective of this paper is twofold: (1) to show the efficiency of an energy minimization based approach, which uses intrinsic component optimization, on abdominal magnetic resonance images; and (2) to propose an adaptive method to stop the optimization automatically. The proposed method can control the value of the energy functional and stops the iteration efficiently. Comparisons with two previous state-of-the-art methods indicate a better performance of the proposed method.

Evgin Goceri, Esther Dura, Juan Domingo Esteve, Melih Gunay

Super-Resolution Reconstruction of Plane-Wave Ultrasound Imaging Based on the Improved CNN Method

Plane wave imaging (PWI) can cover the entire image region using a single plane wave transmission. This time-saving imaging mode, however, provides poor imaging resolution and contrast. It is highly desirable for PWI to compensate for this weakness in imaging quality while maintaining the ultrafast imaging speed. In this paper, we propose a multi-scale convolutional neural network (CNN) model to improve the quality of PWI. To further increase the convergence rate and robustness of the CNN, a feedback system was added to the iteration process of the stochastic parallel gradient descent (SPGD) optimization. Three different types of data, including simulation, phantom and real human data, were used in the experiment, with each class containing 150 pairs of data. The proposed method produced a 52% improvement in the peak signal-to-noise ratio (PSNR) and a 4-fold improvement in the structural similarity index measurement (SSIM) compared with the original images. Moreover, the proposed method not only guarantees global convergence, but also improves the convergence rate, with a 15% reduction in the total elapsed time.

Zixia Zhou, Yuanyuan Wang, Jinhua Yu, Wei Guo, Zhenghan Fang

N-D Point Cloud Registration for Intensity Normalization on Magnetic Resonance Images

Magnetic resonance imaging (MRI) is a non-invasive inspection method widely used in the clinical environment and is therefore one of the research hotspots in computer-aided medical diagnosis. However, due to disparities in imaging protocols as well as magnetic field strength, the variation of intensity across different MRI scanners results in reduced performance of automatic image analysis and diagnosis procedures. This paper aims at forming a non-rigid intensity transforming function to normalize the intensities of MRI images. The transforming function is obtained from an N-Dimensional (N-D) point cloud, which is formed of weighted sub-region intensity distributions. The proposed method consists of five parts: pre-alignment, sub-region standard intensity estimation, weighted N-D point cloud generation, spline-based transforming function interpolation, and final image normalization. This novel method not only avoids the intensity distortion caused by inconsistent bright-dark relations between tissues in target and reference images, but also reduces the dependence on the accuracy of multi-modality MRI image registration. The experiments were conducted on a database of 10 volunteers scanned with two different MRI scanners and three modalities. We show that the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were both enhanced compared with the histogram-matching method as well as the joint histogram registration method. It can be concluded that the intensities of MRI images acquired from different scanners can be normalized well using the proposed method, so that multi-center/multi-machine correlation can be easily carried out in MRI image acquisition and analysis.

Yuan Gao, Jiawei Pan, Yi Guo, Jinhua Yu, Jun Zhang, Daoying Geng, Yuanyuan Wang

An Area-Based Measure of Directional Convexity for Grayscale Images

Convexity is a well-known shape descriptor that can be used for various applications of digital image processing. The concept of directional convexity of binary images has recently been studied in the fields of binary tomography and image segmentation. It has also been investigated how the amount of convexity can be measured in the binary case. In this paper we present two area-based directional convexity measures for grayscale images and study their properties (e.g., rotation and scale invariance) on artificial and real-life images. We also compare the rankings of images provided by both measures. The aim is to reveal the advantages and drawbacks of the measures affecting the quality of the image processing tasks they are used for.

Péter Bodnár, Péter Balázs
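
For the binary special case, a simple area-based horizontal convexity measure checks, per row, how much of the span between the first and last object pixel is filled; the grayscale measures of the paper generalize this idea. A binary-only Python sketch (the grayscale extension is not reproduced here):

```python
import numpy as np

def horizontal_convexity(binary):
    """Area-based horizontal convexity of a binary image, in [0, 1].

    A row is horizontally convex if the object fills the whole run between
    its leftmost and rightmost pixels; the measure is the object area
    divided by the total area of these row-wise hulls.
    """
    obj, hull = 0, 0
    for row in np.asarray(binary, dtype=bool):
        cols = np.flatnonzero(row)
        if cols.size == 0:
            continue
        obj += cols.size
        hull += cols[-1] - cols[0] + 1
    return obj / hull if hull else 1.0

solid = np.zeros((8, 8), dtype=bool)
solid[2:6, 1:7] = True
holey = solid.copy()
holey[2:6, 3] = False                 # a vertical gap breaks every row
print(horizontal_convexity(solid))    # 1.0 (horizontally convex)
print(horizontal_convexity(holey))    # ~0.83
```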

Analysis of Crowdsourced Images for Flooding Detection

Crowdsourced images taken at near ground level present a new source of data for real-time flooding detection. In this new study, crowdsourced images taken in the city of Norfolk, Virginia are analyzed to extract the inundated area. In the proposed analysis, images of the same flooded roads in the crowdsourced images are obtained under dry conditions for comparison and detection of the flooding. A few preprocessing steps are used to normalize the image sets with and without flooding, which are then registered considering the image without flooding as reference. After the registration, an algorithm pipeline is developed to extract the flooded area in the crowdsourced images. The method accounts for reflections on the standing water due to nearby landmarks and overhead clouds/sky. First, the flooded area with reflections from nearby landmarks on the water is identified. Then, the algorithm uses the detected flooded area as a seed to detect the rest of the flooding with reflections from overhead clouds/sky, using the saturation channel of the hue, saturation, and value (HSV) color model. The proposed algorithm proceeds in this order because the reference images do not allow detecting areas with reflections from the overhead clouds/sky. The novelty of the proposed algorithm involves using a new source of data, crowdsourced images, and detecting the flooded area with reflections of nearby landmarks and overhead clouds/sky. The proposed algorithm is tested on real images, and a quantitative evaluation (detection and discrimination accuracy) based on a ground truth is also presented.

Megan A. Witherow, Mohamed I. Elbakary, Khan M. Iftekharuddin, Mecit Cetin
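
The saturation-based step exploits the fact that sky and cloud reflections on standing water appear with low saturation in HSV space. A rough OpenCV sketch of that cue is shown below; the threshold value and the morphological clean-up are assumptions, not the authors’ parameters:

```python
import cv2
import numpy as np

def low_saturation_mask(bgr_image, sat_threshold=40):
    """Mask pixels with low HSV saturation, a cue for water surfaces
    reflecting the overhead sky/clouds."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = (hsv[:, :, 1] < sat_threshold).astype(np.uint8) * 255
    # Remove speckle with a small morphological opening
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Usage (file name is illustrative):
# mask = low_saturation_mask(cv2.imread("flooded_street.jpg"))
```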

Adaptive Differential Pulse Coding for ECG Signal Compression

The electrocardiogram (ECG) signal is the recording of the electrical activity of the human heart. Compression of the ECG signal is highly beneficial for wireless transmission as well as storage. A new algorithm for ECG signal compression is proposed in this paper. The algorithm is based on the observation that the ECG signal in the steady state is very stable, with highly correlated successive pulses. Thus, the algorithm performs differential encoding between each new pulse and a stored reference pulse. This idea is inspired by video compression techniques, where inter-frame changes are very limited; therefore, a high signal compression ratio can be obtained. The performance of the introduced technique is evaluated and compared to state-of-the-art techniques, characterized by the compression ratio (CR) and the percentage root-mean-square difference (PRD). The algorithm achieved a CR of 105 with a PRD below 1.3%. Moreover, the comparison with other existing ECG compression methods demonstrated the superiority of the proposed algorithm.

M. Soliman, Ahmed El-Rafei, Mohamed El-Nozahi, Hani Ragai
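
The core idea, encoding each pulse as a residual against a stored reference pulse in analogy with inter-frame video coding, can be sketched in a few lines. This is illustrative only; pulse detection, alignment, quantization and entropy coding, which produce the reported CR, are omitted, while the PRD formula is the standard one:

```python
import numpy as np

def encode_pulse(pulse, reference):
    """Differential encoding: only the residual vs. the reference is kept."""
    return pulse - reference

def decode_pulse(residual, reference):
    return reference + residual

def prd(original, reconstructed):
    """Percentage root-mean-square difference between two signals."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

# Toy steady-state ECG: successive pulses nearly identical to the reference
t = np.linspace(0, 1, 200)
reference = np.exp(-((t - 0.5) ** 2) / 0.002)         # idealized QRS-like pulse
pulse = reference + 0.01 * np.sin(2 * np.pi * 3 * t)  # small beat-to-beat change
residual = encode_pulse(pulse, reference)             # near zero, highly compressible
print(prd(pulse, decode_pulse(residual, reference)))  # 0.0 (lossless here)
```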

Space-Variant TV Regularization for Image Restoration

We propose two new variational models aimed at outperforming the popular total variation (TV) model for image restoration with $$L_2$$ and $$L_1$$ fidelity terms. In particular, we introduce a space-variant generalization of the TV regularizer, referred to as $$TV_p^{SV}$$, where the so-called shape parameter $$p$$ is automatically and locally estimated by applying a statistical inference technique based on the generalized Gaussian distribution. The restored image is efficiently computed using an alternating direction method of multipliers procedure. We validated our models on images corrupted by Gaussian blur and two important types of noise, namely additive white Gaussian noise and impulsive salt-and-pepper noise. Numerical examples show that the proposed approach is particularly effective and well suited for images characterized by a wide range of gradient distributions.

A. Lanza, S. Morigi, M. Pragliola, F. Sgallari
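
For orientation, the space-variant regularizer named in the abstract can be written, in a form consistent with its description (the paper’s exact discrete definition may differ), as

$$TV_p^{SV}(u) = \sum_{i=1}^{n} \Vert (\nabla u)_i \Vert_2^{p_i}, \quad 0 < p_i \le 2,$$

where each local exponent $$p_i$$ is estimated from a generalized Gaussian fit to the gradient magnitudes in a neighbourhood of pixel $$i$$; setting $$p_i = 1$$ everywhere recovers the classical TV regularizer.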

Effective Colour Reduction Using Grey Wolf Optimisation

Colour reduction algorithms allow displaying or processing true colour images using a limited palette of distinct colours. Clearly, the colours that make up the palette are important, as they determine the quality of the resulting image. Colour quantisation can also be seen as an optimisation problem where the task is to identify those colours that will lead to the best possible resulting image quality. In this paper, we utilise a recent meta-heuristic optimisation algorithm, Grey Wolf Optimisation, for colour reduction of images. Experimental results on a benchmark set of images confirm that our approach performs significantly better than other, purpose-built colour quantisation algorithms.

Gerald Schaefer, Punjal Agarwal, M. Emre Celebi
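
For readers unfamiliar with the meta-heuristic, the following generic Grey Wolf Optimisation sketch applies the standard position-update equations to a palette encoded as a flat vector, with quantisation error as fitness. All parameter choices and the encoding are assumptions for illustration, not the authors’ configuration:

```python
import numpy as np

rng = np.random.default_rng(42)

def quantisation_mse(palette, pixels):
    """Mean squared error of mapping each pixel to its nearest palette colour."""
    d = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
    return np.mean(np.min(d, axis=1) ** 2)

def grey_wolf_optimise(pixels, n_colours=8, n_wolves=15, iters=100):
    dim = n_colours * 3
    wolves = rng.uniform(0, 255, (n_wolves, dim))  # candidate palettes
    fitness = lambda w: quantisation_mse(w.reshape(-1, 3), pixels)
    for t in range(iters):
        scores = np.array([fitness(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(scores)[:3]]  # three best wolves
        a = 2.0 * (1 - t / iters)                            # decreases from 2 to 0
        for i in range(n_wolves):
            x = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                x += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(x / 3.0, 0, 255)  # average of the three pulls
    return wolves[np.argmin([fitness(w) for w in wolves])].reshape(-1, 3)

# Toy usage on random RGB pixels (replace with pixels sampled from a real image)
pixels = rng.uniform(0, 255, (500, 3))
print(grey_wolf_optimise(pixels, n_colours=4, iters=30).round(1))
```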

UCID-RAW – A Colour Image Database in Raw Format

Virtually all multimedia and imaging applications and algorithms require validation and evaluation, yet benchmarking and evaluation have proven to be difficult and challenging. This is partly due to the amount of work involved in performing an appropriate and convincing test, but is also often hindered by the fact that there are relatively few test datasets that are publicly available. In this paper, we present UCID-RAW, an image database comprising 10,000 colour images. Importantly, all images in the dataset are captured and preserved in raw format; that is, stored in a way that maintains the actual information that the camera sensors recorded, without any alteration or processing. The dataset is consequently useful for the evaluation of a variety of applications, including image compression, image retrieval, steganography and image forensics, and in particular also demosaicing, colour correction, gamma and contrast correction, dynamic range compression, and device calibration and characterisation.

Gerald Schaefer

Radioembolization with 90Y-Labeled Glass Microspheres: Analytical Methods for Patient-Personalized Voxel-Based Dosimetry

Radioembolization (RE) is a radiation treatment based on intra-arterial injection of Yttrium-90 (90Y) labeled microspheres, recognized as an efficient clinical practice for liver neoplasms, either primary or metastatic. As current practice, the RE procedure follows the guidelines of the microsphere manufacturers. At the Champalimaud Centre for the Unknown (CCU), Champalimaud Foundation (CF), we have been using 90Y-labeled glass microspheres (Therasphere®) in our patients referred for hepatic RE treatments. Following the Therasphere® guidelines, the calculation of the 90Y activity to be injected through the hepatic artery is based on the formalism of the MIRD (Medical Internal Radiation Dose) committee of the Society of Nuclear Medicine and Molecular Imaging (SNMMI). The calculation model is based on the prescribed absorbed dose and the liver volume (total, lobe or segment), not considering any patient-specific characteristics that can influence the microsphere deposition. Accurate absorbed dose calculation in the liver is crucial for treatment planning and for establishing the dose-effect correlation between tumor control and normal tissue complications. Therefore, our research group has been developing patient-personalized voxel-based dosimetry analysis of Single Photon Emission Computed Tomography (SPECT) images obtained from the patient injected with Technetium-99m Macroaggregated Albumin (99mTc-MAA) in a pre-treatment phase. The use of 99mTc-MAA as a 90Y microsphere surrogate is one of the biggest challenges in hepatic RE, due to the intrinsic differences between particles (MAA vs glass microspheres) and their consequent distribution in the liver parenchyma. Therefore, we acquire post-treatment 90Y Positron Emission Tomography (PET) images to investigate the predictive power of the pre-treatment MAA dose distributions in comparison with the post-treatment 90Y dose maps. Two analytical methods have been used in our analysis: the gamma-index (γ-index) test, as a numerical measure of the agreement between two image datasets, and Dose-Volume Histograms (DVH), to quantify the absorbed dose in each previously defined volume of interest (VOI), i.e., the total liver, the normal functioning liver tissue and the planning target volume (PTV). We can conclude so far that patients with multiple PTVs undergoing hepatic RE (total or lobar) can be simulated by the MAA. For these patients, optimized pre-treatment voxel-based dosimetry seems to be suitable. Our hypothesis (ongoing work) is that patients with few PTVs may also benefit from personalized pre-treatment dosimetry (segment or tumor) and optimization based on MAA.

P. Ferreira, R. Parafita, P. S. Girão, P. L. Correia, D. C. Costa
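
For background, the manufacturer’s MIRD-based mono-compartment calculation referred to above prescribes the injected activity from the desired mean absorbed dose $$D$$ and the target liver mass $$M$$; in its commonly cited simplified form (stated here for orientation, neglecting the lung shunt correction),

$$A\,[\mathrm{GBq}] = \frac{D\,[\mathrm{Gy}] \times M\,[\mathrm{kg}]}{50\,[\mathrm{Gy \cdot kg/GBq}]},$$

so that, for example, prescribing 120 Gy to a 1.5 kg lobe corresponds to 3.6 GBq. The voxel-based dosimetry developed by the authors refines precisely this volume-level, patient-agnostic assumption.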

Minimisation of Equivalent Dose to the Extremities During PET Radiopharmaceuticals Dispensing

Positron Emission Tomography-Computed Tomography (PET/CT) has emerged as a tool for early diagnosis and staging in several pathologies, particularly cancer. The radiopharmaceuticals used in this technique are beta plus-emitters (β+) and emit gamma photons (γ) of 511 keV, considerably higher than in conventional nuclear medicine, implying that the conventional amounts of lead and tungsten in shields are not enough for appropriate radiation protection of the practitioners. In this work, two commercially available PET radiopharmaceutical (PET-Rph) dispensers are compared. Results demonstrate that both instruments are robust and accurate, although not precise. Also, particular care needs to be taken while working with PET-Rph, namely in the arrangement of the material and equipment on the working bench.

J. Oliveira, J. Hunter, E. Carolino, F. Lucena

CNR and PSNR Evaluation Between 2D FFDM and 3D Tomosynthesis Images Using PMMA Plates

In Brazil, breast cancer is the leading cause of cancer death in women. In order to improve the early detection and survival rate for this disease, a new additional examination technique was created: tomosynthesis. In this recent technology, commonly referred to as 3D mammography, the X-ray tube is rotated, generating many slice images of the breast and increasing breast cancer detection, mainly in dense breasts. The goal of this work is to evaluate and compare the contrast-to-noise (CNR) and peak signal-to-noise (PSNR) ratios of 2D FFDM (Full-Field Digital Mammography) and 3D tomosynthesis images using polymethylmethacrylate (PMMA) plates. We observed that the CNR values for the 2D images were always higher than for the 3D images, the opposite of what happened with the PSNR, which was higher for the tomosynthesis images. Besides that, we noticed that the thickness of the PMMA plates and the calculated CNR values are inversely proportional, probably due to scattered radiation. From this work, it was possible to evaluate the differences in contrast level between the two types of image tested, which motivates us to investigate these types of images further, in a larger database and with different contrast measures. We would like to propose one or a set of digital processing techniques that increase the contrast and reduce the noise in 2D mammographic images.

Pedro Cunha Carneiro, Ricardo de Lima Thomaz, Ana Claudia Patrocinio, Adriano de Oliveira Andrade
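
Both figures of merit compared in this study have standard definitions. The sketch below shows how they can be computed from signal and background regions of interest; the ROI-based formulation and the use of the background standard deviation as the noise term are common conventions, not necessarily the exact ones used here:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio between a signal ROI and a background ROI."""
    return (np.mean(signal_roi) - np.mean(background_roi)) / np.std(background_roi)

def psnr(reference, test, max_value=None):
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    ref = np.asarray(reference, dtype=float)
    mse = np.mean((ref - np.asarray(test, dtype=float)) ** 2)
    if max_value is None:
        max_value = ref.max()
    return 10.0 * np.log10(max_value ** 2 / mse)

# Toy usage with synthetic ROIs
rng = np.random.default_rng(1)
background = rng.normal(100, 5, (50, 50))
signal = rng.normal(130, 5, (50, 50))
print(cnr(signal, background))                               # ~6
print(psnr(signal, signal + rng.normal(0, 2, signal.shape)))
```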

Corpus Callosum 2D Segmentation on Diffusion Tensor Imaging Using Growing Neural Gas Network

The Corpus Callosum (CC) segmentation on Magnetic Resonance Images (MRI) is of utmost importance for the study of neurodegenerative diseases, since it is the largest white matter brain structure, interconnecting the two cerebral hemispheres. Operator-independent segmentation methods are desirable, even though such a task is complex due to shape and intensity variation among subjects, especially on low resolution images such as Diffusion-MRI. This paper proposes an automatic CC segmentation approach on Diffusion Tensor Imaging (DTI). The method uses the Growing Neural Gas (GNG) network, an unsupervised machine learning algorithm, on the fractional anisotropy map. The proposed method obtained a Dice coefficient of 0.88 in experiments using DTI of fifty human subjects, while other segmentation approaches obtained Dice results below 0.73. Although the GNG network has five parameters to be set, it requires no user intervention and was the only method that successfully detected and segmented the CC on the entire experimental dataset.

Giovana S. Cover, William G. Herrera, Mariana P. Bento, Leticia Rittner

Pixel-Based Classification Method for Corpus Callosum Segmentation on Diffusion-MRI

The Corpus Callosum (CC) is an important brain structure whose volume and variations in shape are correlated with diseases like Alzheimer’s, schizophrenia, dyslexia, epilepsy and multiple sclerosis. CC segmentation is a necessary step in both clinical and research studies. The CC is commonly studied using structural Magnetic Resonance Imaging (MRI); evaluation and segmentation on Diffusion-MRI are important because relevant fiber and tissue information is present in these images, although this is challenging and rarely considered. In this work, a pixel-based classifier on Diffusion-MRI (directly in Diffusion-Weighted Imaging) using a Support Vector Machine is proposed for CC segmentation. A subsampling technique, based on K-means clustering, is used to treat the intrinsically unbalanced pixel classification problem. The STAPLE algorithm is used to estimate a silver standard, and a quantitative analysis is performed using the sensitivity, specificity and Dice coefficient metrics. Our method reached a median value of $$88\%$$ in Dice coefficient, had no initialization or parameters to be set, and was compared with two state-of-the-art approaches, showing a higher CC detection rate.

William G. Herrera, Giovana S. Cover, Leticia Rittner
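
The three evaluation metrics used against the STAPLE silver standard are plain confusion-matrix quantities over binary masks. A short illustrative sketch (not the authors’ evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Dice coefficient, sensitivity and specificity for two binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.count_nonzero(pred & truth)
    tn = np.count_nonzero(~pred & ~truth)
    fp = np.count_nonzero(pred & ~truth)
    fn = np.count_nonzero(~pred & truth)
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity

truth = np.zeros((32, 32), bool)
truth[8:24, 8:24] = True
pred = np.zeros_like(truth)
pred[10:26, 8:24] = True  # a slightly shifted prediction
print(segmentation_metrics(pred, truth))
```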

Facial Temperature Recovery After Ice Therapy: A Comparative Study Based on Thermography Evaluation

Thermography is a non-radiating and contact-free technology that can be used to monitor skin temperature. The efficiency and safety of thermography make it a useful tool for detecting and locating thermal changes in the skin surface, characterized by increases or decreases in temperature. This work intends to be a contribution to the use of thermography as a methodology for the evaluation of skin temperature in the context of orofacial biomechanics. The study aims to identify the oscillations of skin temperature in the hemiface region over the masseter muscle and to estimate the time required to restore the initial temperature after the application of an ice stimulus. Using an infrared camera, a data acquisition protocol was followed with a group of volunteers in a controlled environment. The thermal stimulus involved the use of an ice volume, and the skin surface temperature was recorded in two distinct situations, namely without further stimulus and with the addition of a complementary stimulus obtained by chewing gum. The results show that recovery is faster with the addition of the stimulus, and may guide clinicians regarding the pre- and post-operative times of ice therapy, in the presence or absence of a mechanical stimulus that increases muscle function.

Ana Dionísio, Luis Roseiro, Júlio Fonseca, Luis Margalho, Pedro Nicolau

Hybrid Image Registration of Endoscopic Robotic Capsule (ERC) Images Using Vision-Inertial Sensors Fusion

In this paper, a hybrid registration technique, which takes advantage of the data fusion of multiple sensors integrated inside a robotic endoscopic capsule, is proposed to enhance registration accuracy and construct the 3D trajectory of the endoscopic robotic capsule. The proposed hybrid technique extracts motion information of the endoscopic capsule from the image sequences captured by the capsule’s camera and combines it with the Inertial Measurement Unit readings, to simultaneously localize and map the path traveled by the capsule device. Furthermore, the performance of three different bundle adjustment techniques was evaluated for mapping and registering endoscopic capsule images: (i) global bundle adjustment, (ii) local bundle adjustment and (iii) inertial-integrated local bundle adjustment. The performance of the three bundle adjustment techniques was compared in terms of number of iterations, elapsed time, and initial and final errors, while varying the temporal distances between images and the sliding window size. Experimental results show that a 3D map can be precisely constructed to represent the position of the capsule inside the colon. The proposed method allows the operator to register several maps of endoscopic images toward the reconstruction of the full colon model.

Yasmeen Abu-Kheil, Lakmal Seneviratne, Jorge Dias

Segmentation of Heavily Clustered Cell Nuclei in Histopathological Images

Automated cell nuclei determination in stained images is of utmost importance for diagnosis. In this work, we propose a novel, efficient and accurate image segmentation technique for densely clustered, overlapping cell nuclei. Firstly, we extract the cell body (foreground) from the background using global thresholding followed by local thresholding. Then, we employ a fusion of the seeded region growing technique and a level-set algorithm. The initial seed points need to be selected accurately and precisely in order to generate appropriate outcomes from the region growing framework. The initial contours for the level-set evolution rely heavily on the output of this adaptive region growing approach and some morphological operations. Finally, a global Gaussian distribution with several means and variances is employed in an enhanced edge-based level-set approach for precise nuclei segmentation. We performed our analysis on Nissl-stained EMF-exposed and SHAM-exposed cell images. The proposed framework is very capable of extracting the cell nuclei from stained cell images. Experimental outcomes reveal that our approach outperforms existing state-of-the-art techniques for cell nuclei extraction and segmentation.

Rahul Singh, Mukta Sharma, Mahua Bhattacharya

Image Denoising with Convolutional Neural Networks for Percutaneous Transluminal Coronary Angioplasty

Percutaneous transluminal coronary angioplasty (PTCA) requires X-ray images employing a high radiation dose with a high concentration of contrast media, leading to the risk of radiation-induced injury and nephropathy. These drawbacks can be reduced by using lower doses of X-rays and contrast media, with the disadvantage of noisier PTCA images. In this paper, convolutional neural networks were used to denoise low-dose PTCA-like images, built by adding artificial noise to high-dose images. MSE and SSIM based loss functions were tested and compared, visually and quantitatively, for different types and levels of noise. The results showed promising performance for the denoising task.

Marco Pavoni, Yongjun Chang, Örjan Smedby
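
A minimal PyTorch sketch of the MSE-loss variant of such a denoiser is given below. The architecture, depth and training details are placeholders rather than the network used in the paper, and the SSIM-based variant would simply swap the criterion:

```python
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    """Tiny residual denoiser: the network predicts the noise and subtracts it."""
    def __init__(self, channels=1, features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, x):
        return x - self.net(x)  # residual learning of the noise component

model = DenoisingCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()  # an SSIM-based loss would be substituted here

# One training step on synthetic data standing in for PTCA-like patches
clean = torch.rand(4, 1, 64, 64)               # "high dose" patches
noisy = clean + 0.1 * torch.randn_like(clean)  # artificial low-dose noise
loss = criterion(model(noisy), clean)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss.item())
```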

The Importance of SPECT Imaging Attenuation Correction During Treatment Planning for 90Y-labeled Glass Microspheres Liver Radioembolization

Nowadays, liver radioembolization with 90Y microspheres represents an innovative treatment for unresectable hepatic tumors. 90Y radioembolization treatment planning is performed by means of 99mTc-MAA SPECT imaging. However, photon interactions within human tissues and, in particular, 99mTc $$\gamma$$-ray attenuation may significantly compromise the quantitative accuracy of SPECT images. The aim of this study was to validate a patient-specific, Computed Tomography (CT)-based attenuation correction of SPECT images, by employing and analyzing both phantom models and clinical liver SPECT images. The results suggest that the implementation of attenuation correction during the iterative process of SPECT image reconstruction results in qualitatively and quantitatively improved images, with higher contrast and increased count density profiles. Moreover, compared to uncorrected images, attenuation-corrected SPECT images presented a count distribution more proportional to the real radiopharmaceutical concentration, both in the phantom model and in the human liver. Thus, the implemented methodology will make it possible to improve liver radioembolization treatment planning based on 99mTc-MAA SPECT imaging, and its efficacy, while minimizing complications.

Laura Demino, Paulo Ferreira, Francisco P. M. Oliveira, Durval C. Costa

Developments on Finite Element Methods for Medical Image Supported Diagnostics

Variational image-processing models offer high-quality processing capabilities for imaging. They have been widely developed and used in the last two decades, enriching the fields of mathematics as well as information science. Mathematically, several tools are needed: energy optimization, regularization, partial differential equations, level set functions, and numerical algorithms. In this work we consider a second-order variational model for solving medical image problems. The aim is to recover, as far as possible, fine features of the initial image and to identify medical pathologies. The approach consists of constructing a regularized functional and locally analysing the obtained solution. Parameter selection is performed at the discrete level in the framework of the finite element method. We present several numerical simulations to test the efficiency of the proposed approach.

A. Almeida, J. I. Barbosa, A. Carvalho, M. A. R. Loja, R. Portal, J. A. Rodrigues, L. Vieira

Brain Tumor Segmentation of Normal and Pathological Tissues Using K-mean Clustering with Fuzzy C-mean Clustering

Segmentation of brain tumors from magnetic resonance imaging is a time-consuming and critical task due to the unpredictable characteristics of tumor tissues. In this paper, we propose a new tissue segmentation algorithm that segments brain MR images into gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), tumor and edema. It is crucial to segment the normal and pathological tissues simultaneously for treatment planning. The K-means clustering algorithm has minimal computation time, while fuzzy c-means clustering has advantages in terms of accuracy on soft tissues. We therefore integrate the K-means clustering algorithm with the fuzzy c-means clustering algorithm for segmenting brain magnetic resonance images. First, we segment the abnormal region from the $$T_{2}$$-weighted FLAIR modality based on the K-means clustering algorithm integrated with the fuzzy c-means algorithm. In the next stage, we segment the tumor from the $$T_{1}$$-weighted contrast-enhanced modality $$T_{1ce}$$. We used $$T_{1}$$, $$T_{1c}$$, $$T_{2}$$ and FLAIR images of 60 subjects suffering from high-grade and low-grade glioma, and 20 $$T_{1}$$-weighted anatomical models of normal brains.

Ravi Shanker, Mahua Bhattacharya
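
The fuzzy c-means half of the hybrid scheme alternates between a membership update and a centroid update. A compact sketch on 1D intensities follows; the fuzzifier m = 2, the random initialization and the toy data are assumptions:

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, iters=50, eps=1e-9):
    """Fuzzy c-means on 1D intensities; returns centroids and memberships."""
    rng = np.random.default_rng(0)
    centroids = rng.choice(x, n_clusters, replace=False).astype(float)
    u = None
    for _ in range(iters):
        # Distance of every sample to every centroid
        d = np.abs(x[:, None] - centroids[None, :]) + eps
        # Membership update: u_ik proportional to d_ik^(-2/(m-1))
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=1, keepdims=True)
        # Centroid update: fuzzily weighted mean of the samples
        um = u ** m
        centroids = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return centroids, u

# Toy intensities mimicking three tissue peaks (e.g. CSF / GM / WM)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(mu, 5, 300) for mu in (40, 110, 180)])
centroids, u = fuzzy_c_means(x)
print(np.sort(centroids))  # recovers roughly 40, 110, 180
```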

Automatic Classification of Ulcers Through Visual Spectrum Image

Chronic wounds are a health condition that constitutes a threat to public health and the economy, having a detrimental effect on patients’ quality of life and high treatment costs. Since chronic wounds do not follow a well-ordered reparative process, the anatomic and functional integrity of the damaged tissue is not restored, so the establishment of an appropriate treatment based on an accurate assessment of the state of the healing process is of extreme importance. The aim of this research is to create an image processing methodology that characterizes chronic ulcers, providing information about their area and tissue composition. The developed solution, which incorporates a flood fill algorithm to segment the ulcer and performs wound area calculation based on a calibration marker introduced during image collection, was tested on diabetic foot ulcers. It allowed the characterization of 97% of the 200 ulcers tested, with high correlation with the clinical assessment, and with lower subjectivity, wound contamination probability and costs than conventional solutions.

Rita A. Frade, Ricardo Vardasca, Rui Carvalho, Joaquim Mendes
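
The area computation from the calibration marker reduces to a scale conversion between pixels and physical units. A sketch under the assumption that the marker’s physical area is known (names and values are illustrative):

```python
import numpy as np

def wound_area_cm2(wound_mask, marker_mask, marker_area_cm2):
    """Convert a segmented wound's pixel count to cm^2 using a calibration
    marker of known physical size lying in the same image plane."""
    cm2_per_pixel = marker_area_cm2 / np.count_nonzero(marker_mask)
    return np.count_nonzero(wound_mask) * cm2_per_pixel

# Toy usage: a 2 cm x 2 cm marker imaged as a 50 x 50 pixel square
marker = np.zeros((500, 500), bool)
marker[:50, :50] = True
wound = np.zeros_like(marker)
wound[100:300, 100:250] = True
print(wound_area_cm2(wound, marker, marker_area_cm2=4.0))  # 48.0 cm^2
```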

Body Navigation via Robust Segmentation of Basic Structures

Segmentation of internal organs from Computed Tomography images often uses intensity and shape properties. We introduce a navigation system based on robust segmentation of body tissues such as the spine, body surface and lungs. Pose estimation of an investigated tissue can be performed using this algorithm, and it can also be used as supporting information for various segmentation algorithms. The precision of liver segmentation based on a Bayes classifier is shown in this paper and compared with state-of-the-art methods using the SLIVER07 dataset.

Miroslav Jirik, Vaclav Liska

Using the FDK Algorithm to Reconstruct Low Contrast Images Generated by Monte Carlo Simulation of Sediment Imaging

In the field of geology, the experimental or computational simulation of the sediment deposition process is widely used to characterize regions and possible findings. Noninvasive methods are the best way to evaluate sediments, in order not to destroy the information of the sediment deposition structure. The use of computed tomography (CT) techniques to obtain images of sediments is evaluated here. CT is analyzed as a two-stage process: the scan of the test body to produce a set of X-ray images, and the reconstruction step, where the set of X-ray images is processed by a computational algorithm to generate cross-sectional images of the scanned object. To test the method, the X-ray images are generated using the Monte Carlo method with the XRMC tool, and the simulated images are 3D reconstructed using the Feldkamp, Davis and Kress algorithm. The reconstruction planes and the volume generated in the simulation appeared very similar after a visual comparison for all tested volumes. However, a statistical analysis showed that there was no statistical coincidence between the simulated dimensions and the reconstruction. The results presented show the possibility of using CT to study sediments in tanks or rivers.

J. S. Domínguez, G. Hoff, J. T. de Assis

Mechatronics Supported Virtual Bronchoscopy for Navigation in Bronchoscopy of Peripheral Respiratory Tree

The purpose of this paper is to test the conjecture that data related to manual maneuvers with the handle of a bronchoscope catheter provide sufficient information for the task of estimating the position, orientation and flexure of the distal end of the catheter. To test the conjecture, three devices were developed and mounted on the handle of a catheter and on the external opening of the working channel of a bronchoscope. The devices recorded shifts of the catheter, rotations of the catheter handle and shifts of the ring of the catheter, directly related to the flexure of the tip of the catheter. In laboratory experiments it has been shown that the shift of the catheter can be monitored with an accuracy of 2 mm, and the catheter tip rotation and flexure angles can be monitored with accuracies of 10° and 20°, respectively. The accuracy of the developed devices compares favorably with the accuracy of the current standard methods for supporting peripheral bronchoscopy.

Dariusz Michalski, Tomasz Nabagło, Józef Tutaj, Wojciech Mysiński, Rafał Petryniak, Damian Pietrzyk, Wadim Wojciechowski, Zbisław Tabor

The Underrated Dimension: How 3D Interactive Mammography Can Improve Breast Visualization

Breast tissue superposition and parenchymal density are known as Digital Mammography’s (DM) main limitations. More expensive and case-specific tools such as MRI and ultrasound imaging may be used to address this problem, but 2D DM remains the most practical and cost-effective approach. Digital Breast Tomosynthesis (DBT) has the ability to overcome both problems. However, the images produced by this technique are affected by the limited resolution between slices, which, combined with the lack of a suitable viewing mode, makes it difficult to visualize the true 3D structure of the breast. This is unfortunate, since static stereo 3D representations can improve real lesion detection. In this paper, we propose a new interactive visualization approach to DBT that explores all three dimensions of the volume data. Our approach allows combining DBT slices to generate a 3D representation of the breast in order to improve the radiologist’s depth perception. Preliminary results suggest that this alternative has the potential to achieve similar visual enhancement of lesions, as well as to reduce the time required to locate and classify them.

Soraia F. Paulo, João Martins, Ana M. Mota, Elisa Melo Abreu, João Niza, Nuno Matela, Joaquim A. Jorge, Daniel S. Lopes

Biopsy Procedure Applied in MentorEye Molecular Surgical Navigation System

This article describes the biopsy procedure implemented in the MentorEye system, which aims at supporting surgeons during complex oncological treatment. The MentorEye system mainly comprises a virtual planning phase using a 3D virtual reality display, medical content presentation with the help of head mounted displays and, finally, the fusion of different image modalities such as CT (Computed Tomography) and fluorescence. Tumor treatment involves an individual approach, and in some cases a biopsy is required to determine further procedure steps. The limited time during real surgery and the restricted visibility mean that initial preparation is needed. In this paper we show the virtual planning algorithm steps, with their subsequent intraoperative realization under the control of the optical navigation system.

Marcin Majak, Magdalena Zuk, Ewelina Swiatek-Najwer, Michal Popek, Piotr Pietruski

The Rigid Registration of CT and Scanner Dataset for Computer Aided Surgery

The main aim of this work was to perform rigid registration of Computed Tomography (CT) and scanner datasets. The surgeon applies CT and scanner datasets in computer aided surgery and performs registration in order to visualize the location of the surgical instrument on screen. It is a well-known fact that the registration procedure is crucial for efficient computer aiding of surgery. The selected algorithm should take into account the types of datasets, the required accuracy and the calculation time. The algorithms are classified based on various criteria, e.g. precision (coarse and fine registration) and types of point sets (sets of pairs of corresponding points – the so-called point-point method; unorganized sets of points – so-called surface registration). The paper presents exemplary results of applying the following algorithms: Landmark Transform (point-point registration), two methods of the uninitialized Iterative Closest Point type (surface registration) and a hybrid method. The evaluated factors were the distance error (mean, minimal and maximal value) and the running time of each algorithm. The algorithms were tested on various datasets: (1) two similar datasets from Computed Tomography (one geometrically transformed); (2) a Computed Tomography dataset and a cloud of points recorded using a 3D Artec Space Spider scanner. In the first case the mean error values equaled 102.08 mm – 121.70 mm for the uninitialized ICP methods, 0.005 mm for the Landmark Transform method, and 0.0003 mm for the hybrid method. The slowest algorithms in our tests were the ICP methods; the hybrid algorithm was faster, and the Landmark Transform method was the fastest. In the second case the distance errors were evaluated at four selected points, and the smallest errors were 23.21 mm for the uninitialized ICP method, 0.69 mm for the Landmark Transform, and 9.03 mm for the hybrid method. All algorithms were relatively slow for these large datasets; the fastest was the Landmark Transform. In the second part of the research we analysed the Target Registration Error (TRE) for the fused Computed Tomography and scanner-recorded datasets. The TRE values equaled 0.7 mm – 2.8 mm. The results of CT – scanner dataset registration highly depend on the similarity of the sets, especially their overlap, but also on their resolutions and uniformity.

Ewelina Świątek-Najwer, Magdalena Żuk, Marcin Majak, Michał Popek
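
The point-point (Landmark Transform) step has a closed-form least-squares solution via the SVD-based Kabsch/Procrustes method. A sketch of that solution (illustrative; presumably not the exact implementation used in the paper):

```python
import numpy as np

def rigid_landmark_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    computed with the SVD-based Kabsch solution."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy usage: recover a known rotation and translation from 4 fiducials
rng = np.random.default_rng(3)
src = rng.normal(size=(4, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([10.0, -5.0, 2.0])
R, t = rigid_landmark_register(src, dst)
print(np.allclose(dst, src @ R.T + t))  # True
```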

Evaluation of Calibration Procedure for Stereoscopic Visualization Using Optical See-Through Head Mounted Displays for a Complex Oncological Treatment

In this paper we present the MentorEye system, which supports complex oncological procedures. In the past, the whole medical content and virtual plan were presented in the operating theater using external monitors. This basic approach was quite difficult for surgeons, because it distracted them from the real operation and, more importantly, to follow the virtual plan’s realization they lost direct eye contact with the operating field. The huge development in head mounted display (HMD) technology creates an opportunity to use this kind of presenter intraoperatively. First of all, HMDs are quite small and user friendly, and content can be presented from an egocentric point of view. In this paper, we show different HMD models and their potential usage in computer aided surgery. Finally, the calibration procedure is validated to support virtual plan visualization directly in front of the surgeon’s eyes.

Magdalena Zuk, Marcin Majak, Ewelina Swiatek-Najwer, Michal Popek, Zbigniew Kulas

Lesion Classification in Mammograms Using Convolutional Neural Networks and Transfer Learning

Computer-Aided Detection/Diagnosis (CAD) tools were created to assist the detection and diagnosis of early stage cancers, decreasing the false negative rate and improving radiologists’ efficiency. Convolutional Neural Networks (CNNs) are one example of deep learning algorithms that have proved successful in image classification. In this paper we study the application of CNNs to the classification of lesions in mammograms. One major problem in the training of CNNs for medical applications is the large dataset of images that is often required but seldom available. To solve this problem, we use a transfer learning approach based on three different networks that were pre-trained on the Imagenet dataset. We then investigate the performance of these pre-trained CNNs and two types of image normalization to classify lesions in mammograms. The best results were obtained using the Caffe reference model for the CNN with no image normalization.

Ana Perre, Luís A. Alexandre, Luís C. Freire

Saliency Maps for Localization of Liver Lesions

The density of a significant number of liver tumors does not contrast strongly with the density of the surrounding liver tissue. Therefore, the use of conventional automatic segmentation methods does not provide satisfactory results. Thresholding-based algorithms are not able to detect the right threshold value, region growing algorithms cannot deal with the noisy nature of the data, etc. Even more advanced algorithms cannot deal with such problems. For example, active contours often leak out of the tumorous area due to the low contrast at its border. Thus, a new, fully automatic approach was designed that is based on saliency maps and Markov random fields. This approach can deal with the nature of medical data and is able to localize liver lesions with satisfactory precision.

Tomáš Ryba, Miloš Železný

A Dual-Modal CT/US Kidney Phantom Model for Image-Guided Percutaneous Renal Access

Percutaneous renal access (PRA) is a crucial step in some minimally invasive kidney interventions. During this step, the surgeon inserts a needle through the skin until it reaches the kidney target site, using fluoroscopy and ultrasound imaging. Recently, new concepts of enhanced image-guided interventions have been introduced for these procedures. However, their validation remains a challenging task. Phantom models have been presented to solve this challenge, using realistic anatomies in a controlled environment. In this work, we evaluate the accuracy of a porcine kidney phantom for the validation of novel dual-modal computed tomography (CT)/ultrasound (US) image-guided strategies for PRA. A porcine kidney was combined with a tissue mimicking material (TMM) and implanted fiducial markers (FM). While the TMM mimics the surrounding tissues, the FM are used to accurately assess the registration errors between the US and CT images, providing a valid ground truth. US and CT image acquisitions of the phantom model were performed, and the FM were manually selected on both images. A rigid alignment was performed between the selected FM, presenting a root-mean-square error of 1.1 mm. Moreover, the kidney was manually segmented, presenting volumes of 203 ml and 238 ml for CT and US, respectively. The initial results are promising towards achieving a realistic kidney phantom model for developing new strategies for PRA, but further work to improve the manufacturing process and to introduce motion and anatomical artifacts into the phantom is still required.

João Gomes-Fonseca, Alice Miranda, Pedro Morais, Sandro Queirós, António C. M. Pinho, Jaime C. Fonseca, Jorge Correia-Pinto, Estêvão Lima, João L. Vilaça

Automatic Liver Tumor Characterization Using LAVA DCE-MRI Images

Dynamic contrast enhanced MRI images play a crucial role in liver tumor characterization in daily clinical practice. However, this task can be very time-consuming due to the many and varied tumor types. In this paper we present an automatic liver tumor characterization method that consists of two main parts: registration of the MRI images and a supervised learning-based classification using the Random Forest method. Our dataset contained 10 benign and 30 malignant liver tumor cases. Manual tumor contours were determined by a well-trained physician. Although we used relatively small training and test sets, the presented results can be considered promising. Our preliminary results showed that colorectal carcinoma metastases (CRC) can be separated from other tumor types with an average accuracy of 96% (±8%). Furthermore, other mixed tumor types were successfully classified as non-CRC cases with high accuracy.

Szabolcs Urbán, Attila Tanács
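
A minimal scikit-learn sketch of the Random Forest classification stage is shown below. The features and labels are synthetic placeholders; in the paper the per-tumor features derive from the registered DCE-MRI phases:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder features, e.g. per-phase enhancement statistics inside the contour
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))         # 40 tumors, 6 dynamic-phase features
y = np.array([0] * 20 + [1] * 20)    # synthetic labels (1 = CRC, 0 = non-CRC)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy
print(scores.mean(), scores.std())
```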

Segmenting MR Images by Level-Set Algorithms for Perspective Colorectal Cancer Diagnosis

Segmentation is an essential and crucial step in interpreting medical images for possible treatment. Medical image segmentation is a very challenging procedure, as a medical image may show different structures of the same organ in different image modalities and may also show different features in different image slices of the same modality. In this work, we present a comparison of segmentation algorithms based on level set methods, viz. the Caselles, Chan & Vese, Li, Lankton, Bernard, and Shi algorithms. We assessed these algorithms on our T2-weighted colorectal MR images using the Dice coefficient, which measures the similarity between the reference sketched by a specialist and the segmentation result produced by each algorithm. In addition, the computational time taken by each algorithm to perform the segmentation was also computed. Our results on average Dice coefficient and average computation time demonstrate that Bernard has the lowest average Dice coefficient and the highest computational complexity, followed by Li, which has the second-lowest Dice coefficient and second-highest computational complexity. Lankton achieved satisfactory results on average Dice coefficient and computational complexity, followed by Chan & Vese and Shi, whereas the Caselles algorithm outperforms all the others with respect to average Dice coefficient and computational time.

Mumtaz Hussain Soomro, Gaetano Giunta, Andrea Laghi, Damiano Caruso, Maria Ciolina, Cristiano De Marchis, Silvia Conforto, Maurizio Schmid

Virtual Application to Prevent Repetitive Strain Injuries in Hands

Repetitive movements of the hands may cause strain injuries that negatively influence employee performance. The lack of awareness about preventing this type of work injury often leads employers to believe that the costs associated with prevention are unnecessary expenditures. The objective of this paper is thus to develop a serious game for preventing repetitive strain injuries in hands. The game activities were developed in the Unity 3D software using a 3D sensor, the Intel RealSense 3D Camera F200. There are two sets of exercises, warm-up and stretching exercises, to be executed during the playing activity. Three game scenarios were developed. The tests performed so far allow us to conclude that the proposed system correctly detects hand movements.

Hélder Freitas, Vítor Carvalho, Filomena Soares, Demétrio Matos

Monitoring of Bioelectrical and Biomechanical Signals in Taekwondo Training: First Insights

Taekwondo is an Olympic combat sport that has gained much popularity in recent years, and Portugal is no exception to the substantial growth of this martial art. Currently there are about 4500 practitioners affiliated with the national federation. Several have already produced excellent results not only in national but also in international competitions, recognized by the International Olympic Committee of Portugal, the Taekwondo European Union and the World Taekwondo Federation (WTF). This work is a joint project between the University of Minho, the Sporting Club de Braga Taekwondo section and the National Technical Team of the Portuguese Federation of Taekwondo. It is aimed at developing a system to analyse, monitor and quantify athlete performance in real time. The developed system should help athletes improve their technique and performance by monitoring heart rate, analysing and identifying the technical movements performed during training, comparing the various movements performed, and quantifying variables related to the athlete's performance. Visual cues for heart rate and performance are given by LEDs. With this system, we propose a contribution to innovation and development in Taekwondo training.

Bruno Amaro, Joel Antunes, Pedro Cunha, Filomena Soares, Vítor Carvalho, Hélder Carvalho

Recording of Occurrences Through Image Processing in Taekwondo Training: First Insights

Nowadays, the most common method of evaluating an athlete's performance in the martial art of Taekwondo is still a manual one, in which the coach analyses videos collected during the athlete's training. Besides being time-consuming, this method is prone to errors. Aiming to improve training, this project presents a new method for recording occurrences and recognizing the movements of athletes in real time during Taekwondo training. To achieve this purpose, the Microsoft Kinect sensor was used together with image processing techniques. This project arises as a collaboration between the University of Minho, the School of Technology of the Polytechnic Institute of Cávado and Ave, and the Sporting Club de Braga Taekwondo section, Portugal. The authors believe that the proposed system may improve athletes' performance and the development of Taekwondo training techniques.

Tiago Pinto, Emanuel Faria, Pedro Cunha, Filomena Soares, Vítor Carvalho, Hélder Carvalho

iBoccia: A Framework to Monitor the Boccia Gameplay in Elderly

The growth of the elderly population has an enormous effect on the health care system of a country, as it drives rapid growth in assistance and care, and the inherent costs for this population class are higher than for younger classes. Today's paradigm focuses on reducing these costs by promoting a healthier lifestyle across all population classes. A more active lifestyle is thus a concern for the elderly population, as it has been shown to reduce, for example, the risk of coronary problems. The stimulus for physical activity is now stronger, and several monitoring devices are available to keep track of the activity performed. Following this trend, the present paper presents a hybrid approach that employs wearable devices, the Mio Fuse band and the pandlet, and a non-wearable device, the Kinect camera, to monitor elderly people during a Boccia game scenario. Preliminary tests were performed in the laboratory. The results include data collected on a main movement used during Boccia gameplay.

Vinícius Silva, João Ramos, Filomena Soares, Paulo Novais, Pedro Arezes, Filipe Sousa, Joana Silva, António Santos

Innovative Analysis of 3D Pelvis Coordination on Modified Gait Mode

This study presents an innovative analysis, in the time, frequency and phase domains, of the pelvis angular oscillation in the transverse (T), sagittal (S) and coronal (C) planes, assessing its coordination during stiff knee gait (SKG) and slow running (SR) in comparison with normal gait (NG). A case study of an adult male of 70 kg mass and 1.86 m height is considered. Computer vision is used, with 8 Qualisys 100 Hz cameras tracking the positions of the right and left anterior and posterior superior iliac spines (RAsis, LAsis, RPsis, LPsis) over one complete stride during NG, SKG and SR. 3D position coordinates are obtained from the 2D image coordinates of multiple cameras using the direct linear transformation (DLT). Inverse kinematics is performed using the Cartesian position data of RAsis, LAsis, RPsis and LPsis and a model scaled to the subject's dimensions. The coordination of the angles, angular velocities and angular accelerations of the pelvis oscillation in the T, S and C planes was assessed using linear and cross-correlation analysis (LCA, CCA), the fast Fourier transform (FFT) and phase space analysis (PSA). The results point to the value of complementary time, frequency and phase analyses over entire series of human movement data, such as the assessment of pelvis coordination in different gait modes.
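To make the cross-correlation step concrete, a minimal sketch comparing two angle time series; the sinusoids below are synthetic stand-ins for the plane oscillations, not the study's motion-capture data.

```python
# Normalized cross-correlation between two pelvic angle series,
# in the spirit of the CCA step described above.
import numpy as np

fs = 100.0                                      # camera rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
transverse = np.sin(2 * np.pi * 1.0 * t)        # synthetic plane angle
sagittal = np.sin(2 * np.pi * 1.0 * t - 0.4)    # phase-lagged copy

a = transverse - transverse.mean()
b = sagittal - sagittal.mean()
xcorr = np.correlate(a, b, mode="full") / (np.std(a) * np.std(b) * len(a))
lags = np.arange(-len(a) + 1, len(a)) / fs
print(f"peak correlation {xcorr.max():.2f} at lag {lags[xcorr.argmax()]*1000:.0f} ms")
```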

C. Rodrigues, M. V. Correia, J. M. C. S. Abrantes, J. Nadal, M. A. B. Rodrigues

Out-of-Core Progressive Web-Based Rendering of Triangle Meshes

The visualization of large volumes of data has been explored in several knowledge domains, such as remote sensing, medicine, meteorology and biology, among others. In traditional data visualization techniques, data is stored, processed and rendered locally on the client machine, which may require expensive computational resources in terms of storage space and processing power. This work presents and discusses a methodology for out-of-core remote rendering of large three-dimensional triangle meshes. Users interact with the developed visualization tool through requests sent to a server, directly manipulating the data volumes in their own Web browser.

Thiago F. de Moraes, Paulo H. J. Amorim, Jorge V. L. da Silva, Helio Pedrini

Issues on the Simulation of Geometric Fractures of Bone Models

The simulation of realistic fracture cases on geometric models representing bone structures is an almost unexplored field of research. Such fractured models have many applications in computer-assisted methods that support specialists in fracture reduction interventions. For instance, the generation of specific fracture patterns can provide uncommon cases for training simulators or can even be used to improve machine-learning applications. This paper focuses on the issues to be considered in the generation of fractures on geometric models that represent bone structures. The main recent contributions to fracturing geometric models are examined, and the challenges in applying real bone fracture patterns to geometric models are presented. Moreover, different alternatives for evaluating the results obtained by geometric fracture generation algorithms applied to bone structures are shown. Finally, the potential applications of the virtual generation of specific bone fractures are described.

Félix Paulano-Godino, J. Roberto Jiménez-Pérez, Juan J. Jiménez-Delgado

Multifractal Detrended Fluctuation Analysis of Eye-Tracking Data

In this contribution, we perform detrended fluctuation analysis on eye movement data obtained using an eye tracker, with different subjects performing a set of distinct cognitive tasks. We define three different paradigms: spotting differences between two similar pictures, answering questions in a multiple-choice questionnaire, and performing the Trail Making Test to measure subject attention. Using multifractal detrended fluctuation analysis (MDFA) we evaluate the Hurst exponent, the multifractal spectrum and the correlation coefficient of the subjects' eye movements under these different cognitive tasks. Under these paradigms, the MDFA presents a characteristic $$f(\alpha )$$ spectrum, showing that the organizational structure of the data reflects the ocular activities of the subjects under the different tasks. An interpretation of the MDFA spectrum features provides insights into the internal cognitive strategies deployed in each of the paradigms.
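As background, a simplified monofractal DFA sketch (the paper uses the multifractal extension, MDFA, which repeats this estimate over a range of moments); the white-noise input and scale choices are illustrative only.

```python
# Simplified detrended fluctuation analysis: integrate the signal,
# detrend within windows, and fit the log-log scaling of the RMS
# fluctuation to obtain a scaling exponent.
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    y = np.cumsum(x - np.mean(x))               # integrated profile
    flucts = []
    for s in scales:
        rms = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            trend = np.polyval(np.polyfit(np.arange(s), seg, 1), np.arange(s))
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

x = np.random.default_rng(0).normal(size=4096)  # white noise -> exponent ~ 0.5
print(f"DFA exponent: {dfa_exponent(x):.2f}")
```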

M. L. Freije, A. A. Jimenez Gandica, J. I. Specht, G. Gasaneo, C. A. Delrieux, B. Stošic, T. Stošic, R. de Luis-Garcia

Estimating the Patient-Specific Relative Stiffness Between a Hepatic Lesion and the Liver Parenchyma

This paper presents a novel non-invasive methodology to obtain the patient-specific relative stiffness between a hepatic lesion and the liver parenchyma in vivo. This relative stiffness can be used as a biomarker of the type of lesion and, together with the rest of the pathological information, can be used to plan a biopsy, an image-guided intervention or a radiation therapy. The relative stiffness is estimated by means of a finite element simulation of the breathing process, embedded in an optimization routine based on genetic algorithms. This routine is aimed at finding the patient-specific relative stiffness between a hepatic lesion and the liver parenchyma for the proposed model. The feasibility of the proposed methodology was proved first on a synthetic case, where the relative stiffness factor between the liver and a tumour was known and the algorithm performed a blind search for it. After that, the methodology was applied to real cases. The results show the good performance of the proposed methodology, since the deformations of the tumours were detected with a low mean error.
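A toy genetic-algorithm loop in the spirit of the optimization routine described above; `forward_model` is a synthetic one-line stand-in for the finite element breathing simulation, so everything here is a sketch of the search strategy, not the authors' solver.

```python
# Toy GA: evolve candidate stiffness ratios so a (fake) forward model
# matches an "observed" tumour deformation.
import numpy as np

rng = np.random.default_rng(0)
true_ratio = 4.0                                    # lesion/parenchyma stiffness
forward_model = lambda r: 10.0 / r                  # placeholder for the FE simulation
observed = forward_model(true_ratio)

pop = rng.uniform(0.5, 20.0, size=50)               # candidate stiffness ratios
for generation in range(40):
    fitness = -np.abs(forward_model(pop) - observed)     # smaller error is fitter
    parents = pop[np.argsort(fitness)][-10:]             # elitist selection
    children = rng.choice(parents, 40) + rng.normal(0, 0.3, 40)  # mutation
    pop = np.concatenate([parents, np.clip(children, 0.5, 20.0)])

best = pop[np.argmin(np.abs(forward_model(pop) - observed))]
print(f"estimated relative stiffness: {best:.2f}")
```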

S. Martinez-Sanchis, M. J. Rupérez, E. Nadal, D. Borzacchiello, C. Monserrat, E. Pareja, S. Brugger, R. López-Andújar

Patient-Specific Study of a Stenosed Carotid Artery Bifurcation Using Fluid–Structure Interactive Simulation

Atherosclerosis at the carotid bifurcation is a major risk factor for stroke. A computational model incorporating transient wall deformation of carotid arteries was developed to assess the influence of artery compliance on wall shear stress (WSS). Clinical data were obtained using ultrasound. Two patients were studied, one presenting a mild-grade carotid stenosis along the internal carotid artery (ICA) and the other with no visible stenosis. It is hoped that patient-specific biomechanical analyses will aid diagnosis and help assess the rupture potential of any particular lesion.

Nelson Pinho, Marco Bento, Luísa C. Sousa, Sónia Pinto, Catarina F. Castro, Carlos C. António, Elsa Azevedo

Pattern Recognition in Macroscopic and Dermoscopic Images for Skin Lesion Diagnosis

Pattern recognition in macroscopic and dermoscopic images is a challenging task in skin lesion diagnosis. The search for better-performing classification has been a relevant issue in image pattern recognition. Hence, this work focuses on skin lesion pattern recognition in macroscopic and dermoscopic images. For pattern recognition in macroscopic images, a computational approach was developed to detect skin lesion features according to asymmetry, border, colour and texture properties, and to diagnose types of skin lesions, i.e. nevus, seborrheic keratosis and melanoma. In this approach, an anisotropic diffusion filter is applied to enhance the input image, and an active contour model without edges is used to segment the enhanced image. Finally, a support vector machine is used to classify each feature property according to its clinical principles, as well as to discriminate between the different types of skin lesions. For pattern recognition in dermoscopic images, classification models based on ensemble methods and input feature manipulation are used. Feature subsets are used to manipulate the input features and to ensure the diversity of the ensemble models. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The experiments performed allowed analysing the effectiveness of the developed approaches for pattern recognition in macroscopic and dermoscopic images, with very promising results.
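A hedged sketch of the ensemble idea, i.e. classifiers trained on different feature subsets and combined by majority voting; scikit-learn SVMs stand in for the optimum-path forest classifier used in the paper, and the data are synthetic.

```python
# Majority voting over classifiers trained on manipulated feature subsets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=24, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

subsets = [slice(0, 8), slice(8, 16), slice(16, 24)]   # manipulated inputs
votes = []
for sub in subsets:
    model = SVC().fit(Xtr[:, sub], ytr)                # one base learner per subset
    votes.append(model.predict(Xte[:, sub]))

majority = (np.mean(votes, axis=0) >= 0.5).astype(int) # majority voting
print(f"ensemble accuracy: {(majority == yte).mean():.2f}")
```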

Roberta B. Oliveira, Aledir S. Pereira, João Manuel R. S. Tavares

Design Hints for Efficient Robotic Vision - Lessons Learned from a Robotic Platform

Interest in autonomous vehicles has steadily increased in recent years. A number of tasks, like lane tracking and semaphore detection and decoding, are key features for a self-driving robot. This paper presents a path detection and tracking algorithm using the Inverse Perspective Mapping and Hough Transform methods combined with real-time vision techniques, and a semaphore recognition system based on color segmentation. An evaluation of the proposed algorithm is performed, and a comparison between the results obtained with the real-time techniques is also presented. The suggested architecture was put to the test on an autonomous driving robot that competed in the Portuguese autonomous vehicle competition “Festival Nacional de Robótica”. The overall lane tracking algorithm takes about 1.4 ms per image, almost 60 times faster than the first algorithm tested, with good accuracy, showing a translation error below 0.03 m and a rotation error below 5$$^\circ $$. The real-time semaphore recognition takes about 0.35 ms to detect a semaphore and achieved a perfect score in the laboratory tests performed.
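An illustrative OpenCV pipeline for the Hough-based line detection step (edge detection followed by a probabilistic Hough transform); the input file and all thresholds are placeholders, and the Inverse Perspective Mapping step is omitted for brevity.

```python
# Edge detection + probabilistic Hough transform for lane-like lines.
import cv2
import numpy as np

frame = cv2.imread("road.png")                 # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)               # illustrative thresholds

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=10)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)  # overlay detected lines
cv2.imwrite("road_lines.png", frame)
```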

Valter Costa, Peter Cebola, Armando Sousa, Ana Reis

Co-reference Analysis Through Descriptor Combination

NELL (Never-Ending Language Learning) is the first never-ending learning system presented in the literature. It was designed to build a knowledge base autonomously, reading the web 24 hours a day, 7 days a week. As such, co-reference analysis has a crucial role in NELL's learning paradigm. In this paper, we propose a method for combining different feature vectors in order to solve the co-reference resolution problem. To this end, an optimization task is devised using meta-heuristic techniques to maximize the separability of samples in the feature space, with the optimization process guided by the accuracy of an Optimum-Path Forest classifier on a validation set. The experiments showed that the proposed methodology can obtain much better results than individual feature extraction algorithms.

A. F. Mansano, E. R. Hruschka, J. P. Papa

Automatic Identification of Pollen in Microscopic Images

A system for the identification of pollen grains in bright-field microscopic images is presented in this work. The system is based on segmentation of the raw images and binary classification for 3 types of pollen grain. The segmentation method developed tackles a major difficulty of the problem: the existence of clustered pollen grains in the initial binary images. Two different SVM classification kernels are compared to identify the 3 pollen types. The method presented in this paper is able to provide a good estimate of the number of pollen grains of Olea europaea (relative error of 1.3%) in microscopic images. For the two other pollen types tested (Corylus and Quercus), the results were not as good (relative errors of 14.5% and 20.3%, respectively).

Elisabete M. D. S. Santos, André R. S. Marcal

A Workbench for Biomedical Applications Based on Image Analysis

Unraveling the underlying mechanisms involved in single and collective cell migration, organization, tissue formation and regeneration is one of the spearheading research topics worldwide. To overcome the intrinsic difficulties associated with realistic 3D environments, a multidisciplinary framework combining computer simulations and experiments is proposed. The success of this approach relies on result validation (quantitative comparison: computational vs. experimental), so we have polished different techniques and computational tools to obtain measurements that are as accurate as possible. For that, we take advantage of our own custom-designed microfluidic devices and perform thorough image and statistical analyses.

Carlos Borau, Cristina del Amo, Jesús Asín, Nieves Movilla, Mar Cóndor, José Manuel García-Aznar

Learning Digital Image Processing Concepts with Simple Scilab Graphical User Interfaces

Digital Image Processing is an extraordinary and fascinating world. However, to discover its enchantments it is necessary first to understand its fundamentals and formulations. In this article, we describe a set of simple Graphical User Interfaces (GUIs), developed in Scilab using the “Scilab Image and Video Processing Toolbox” package. These interfaces aim to facilitate the learning of the concepts of an Image Processing course, as well as to promote the interest of students. The GUIs described in this article cover the transformation of truecolor into grayscale images, the manipulation of grayscale images using radiometric transformations, histograms, spatial- and frequency-domain filtering, image restoration and image segmentation.
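The GUIs themselves are written in Scilab; for readers without Scilab, an analogous truecolor-to-grayscale conversion and histogram in Python (the luminance weights are the usual ITU-R BT.601 coefficients, and the input file name is a placeholder).

```python
# Truecolor -> grayscale conversion plus a gray-level histogram.
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("photo.png").convert("RGB"), dtype=float)  # placeholder file
gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
hist, _ = np.histogram(gray, bins=256, range=(0, 255))
print("most frequent gray level:", int(hist.argmax()))
```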

L. Francisco, C. Campos

A Database-Driven Software Framework for Industrial Data Acquisition and Processing

In industrial quality inspection, data acquisition and processing are often crucial steps. A huge amount of data gathered from various sensors and different modalities must be transferred, processed, stored, retrieved, analysed and visualized, while speed is another critical issue. A further aim is to minimize the expertise needed by the user to control the inspection process. To satisfy all of these requirements, hardware and software issues have to be taken into account simultaneously. In this paper we present a prototype which effectively realizes such a system. To ensure flexibility and improve stability, we establish no direct connection between the different system software modules; they communicate via a relational database management system. GPU nodes are used for computationally intensive tasks. We present design and implementation details, and also report an industrial case study.
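A toy illustration of the decoupling idea, with an in-memory SQLite database standing in for the RDBMS; the table and column names are invented for the example and are not the prototype's schema.

```python
# Modules communicate only through the database: an acquisition module
# enqueues measurements, and a processing module polls for pending rows.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, payload TEXT, done INTEGER DEFAULT 0)")

# acquisition module: insert new measurements
db.executemany("INSERT INTO tasks (payload) VALUES (?)", [("scan_001",), ("scan_002",)])
db.commit()

# processing module: fetch pending work and mark it done, with no direct coupling
pending = db.execute("SELECT id, payload FROM tasks WHERE done = 0").fetchall()
for task_id, payload in pending:
    print("processing", payload)
    db.execute("UPDATE tasks SET done = 1 WHERE id = ?", (task_id,))
db.commit()
```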

Gábor Petrovszki, Péter Balázs

Interactive Tablets for 3D Medical Image Exploration

Physicians take advantage of desktop and mobile software to perform their work, although current visualization systems still rely on interaction approaches that do not go beyond 2D interfaces. The advent of portable devices, such as tablets, coupled with the growing need to explore and apply 3D image manipulation techniques, motivated our team to develop a tool for 3D medical image exploration and visualization using tangible and spatially aware touch interfaces running on mobile devices. Our approach was validated with user tests using a medical image visualization prototype. Our results show that, for 3D manipulation, mobile devices can improve the experience in comparison with traditional techniques. Testing the proposed system with healthcare professionals will be performed as future work.

Vasco Pires, Miguel Belo, Carlos Sousa, Joaquim Jorge, Daniel Simões Lopes

Thematic Session Papers – Advanced Techniques for Image-Based Numerical Simulation in Biomedical Applications

Frontmatter

Modeling the Mechanical Behavior of the Breast Tissues Under Compression in Real Time

This work presents a data-driven model to simulate the mechanical behavior of breast tissues in real time. The aim of this model is to speed up some multimodal registration algorithms, as well as some image-guided interventions. Ten virtual breast phantoms were used in this work. Their deformation during a mammography was computed off-line using the finite element method. Three machine learning models were trained with the data from those simulations and then used to predict the deformation of the breast tissues. The models were a decision tree and two ensemble methods (extremely randomized trees and random forest). Four experiments were designed to assess the performance of these models. The mean 3D Euclidean distance between the nodal displacements predicted by the models and those extracted from the FE simulations was used for the assessment. The mean error committed by the three models was under 3 mm for all the experiments, although extremely randomized trees performed better than the other two models. Breast compression prediction takes on average 0.05 s, 0.33 s and 0.43 s with the decision tree, random forest and extremely randomized trees respectively, thus proving the suitability of the three models for clinical practice.
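A sketch of the surrogate-model idea: an extremely randomized trees regressor learns nodal displacements from input parameters, evaluated with the mean 3D Euclidean distance. The data below are synthetic placeholders, not the FE simulation outputs.

```python
# Extremely randomized trees as a real-time surrogate for FE displacements.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))                    # e.g. load and material parameters
Y = X @ rng.normal(size=(6, 3)) + 0.01 * rng.normal(size=(500, 3))  # (x, y, z) displacement

Xtr, Xte, Ytr, Yte = train_test_split(X, Y, random_state=0)
model = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(Xtr, Ytr)
err = np.linalg.norm(model.predict(Xte) - Yte, axis=1).mean()  # mean 3D Euclidean distance
print(f"mean Euclidean error: {err:.4f}")
```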

M. J. Rupérez, F. Martínez-Martínez, M. Martínez-Sober, M. A. Lago, D. Lorente, P. R. Bakic, A. J. Serrano-López, S. Martínez-Sanchis, C. Monserrat, J. D. Martín-Guerrero

Towards Image-Based Analysis of the Liver Perfusion Using a Hierarchical Flow Model

The paper summarizes our activities in modelling tissue perfusion using a multilevel approach based on information retrievable from CT and micro-CT images. We focus on the liver tissue, for which perfusion modelling is of great interest for both medical research and clinical practice. The blood flow in the liver is characterized at several scales, for which different models are used. Flows in the upper hierarchies, represented by larger branching vessels, are described using simple 1D models based on the Bernoulli equation extended by Poiseuille correction terms to account for the viscous pressure losses. To describe flows in smaller vessels and in the tissue parenchyma, we propose a 3D continuum model of a porous medium defined in terms of hierarchically matched compartments characterized by hydraulic permeabilities. The 1D models corresponding to the portal and hepatic veins are coupled with the 3D model through point sources, or sinks. For the lowermost level, representing the quasi-periodic lobular structure, we apply the homogenization method, which provides the permeability features of the hepatic sinusoids considered as a double porosity. In the paper we discuss several approaches, which can be combined, to determine the flow model parameters. Model validation using realistic geometries reconstructed from CT scans is also discussed.
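For orientation, the viscous correction in such 1D models typically takes the textbook Hagen-Poiseuille form for a straight segment of radius $$R$$, length $$L$$, flow rate $$Q$$ and blood viscosity $$\mu $$,

$$\Delta p_{visc} = \frac{8 \mu L}{\pi R^{4}}\, Q,$$

so that the extended Bernoulli balance between the segment ends reads

$$p_1 + \tfrac{1}{2} \rho v_1^{2} = p_2 + \tfrac{1}{2} \rho v_2^{2} + \Delta p_{visc}.$$

These are the standard forms; the exact correction terms used in the paper may differ.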

Eduard Rohan, Vladimír Lukeš, Jana Turjanicová, Miroslav Jiřík

Finite Element Model Set-up of Colorectal Tissue for Analyzing Surgical Scenarios

Finite Element Analysis (FEA) has gained extensive application in the medical field, for example in soft tissue simulation. In particular, colorectal simulations can be used to understand the interaction with the surrounding tissues, or with instruments used in surgical procedures. Although several works have considered small displacements resulting from the forces exerted on adjacent tissues, FEA applied to colorectal surgical scenarios is still a challenge. Therefore, this work provides a sensitivity analysis on three geometric models, bearing in mind different bioengineering tasks. A set of simulations has been performed using three mechanical models: linear elastic, hyper-elastic with a Mooney-Rivlin material model, and hyper-elastic with a Yeoh material model.
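For reference, the incompressible strain-energy functions usually associated with these names are, in their standard textbook forms (the paper's particular variants and coefficients may differ),

$$W_{MR} = C_{10}\,(I_1 - 3) + C_{01}\,(I_2 - 3), \qquad W_{Yeoh} = \sum_{i=1}^{3} C_{i0}\,(I_1 - 3)^{i},$$

where $$I_1$$ and $$I_2$$ are the first and second invariants of the left Cauchy-Green deformation tensor and $$C_{10}, C_{01}, C_{i0}$$ are material constants.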

Robinson Guachi, Fabiano Bini, Michele Bici, Francesca Campana, Franco Marinozzi

Thematic Session Papers – Advances in Lung CT Image Processing

Frontmatter

Radiomics-Based Recognition of Metastatic and Histopathological Patterns of Lung Cancer

Lung cancer is the leading cause of cancer-related deaths in the world, and its poor prognosis varies markedly according to tumor staging. Tumor histopathology and computed tomography (CT) features have been used as prognostic factors, but they still present challenges. This work addresses the problem of lung cancer pattern recognition in terms of histopathology and nodal and distant metastasis, using radiomic CT image features and machine learning classifiers. We retrospectively analyzed 52 tumors and semiautomatically segmented the CT images. Tumors were characterized by clinical factors and quantitative image attributes of gray level, histogram, texture, shape, and volume. Three classifiers used the relevant selected features to perform the analysis. An artificial neural network presented stable performance on pattern recognition, obtaining areas under the receiver operating characteristic curve of 0.90 for histopathology, 0.88 for nodal metastasis, and 0.98 for distant metastasis. The radiomic pattern recognition presented high performance and great potential to aid lung cancer diagnosis and prognosis.

José Raniery Ferreira Junior, Federico Enrique Garcia Cipriano, Alexandre Todorovic Fabro, Marcel Koenigkam-Santos, Paulo Mazzoncini de Azevedo-Marques

Effects of Preprocessing in Slice-Level Classification of Interstitial Lung Disease Based on Deep Convolutional Networks

Several preprocessing methods are applied to the automatic classification of interstitial lung disease (ILD). The proposed methods are used on the inputs to an established convolutional neural network in order to investigate the effect of those preprocessing techniques on slice-level classification accuracy. Experimental results demonstrate that the proposed preprocessing methods combined with a deep learning approach outperformed deep learning on the original images without preprocessing.

Yongjun Chang, Örjan Smedby

Thematic Session Papers – Application of Image Analysis in Musculoskeletal Radiology

Frontmatter

Automated Assessment of Hallux Valgus in Radiographic Images

The purpose of the study was to develop an automated method for measuring selected angular variables characterizing the foot skeleton, based on dorsoplantar projection radiographs. The study was a retrospective analysis of radiographic data. In total, 50 dorsoplantar projection radiographs of weight-bearing feet were analyzed (24 left and 26 right feet) from 32 patients (23 female, 9 male). Various quantities were measured to assess the severity of hallux valgus. The measurements were performed manually and with the automated method designed in the study, and the two were correlated. Repeated manual measurements were additionally performed to determine the variability of manual assessment of hallux valgus. High correlation between manual and automated measurements was observed, and the accuracy of the framework is comparable with that of manual measurements.
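A minimal sketch of an angular measurement from landmarks, e.g. the angle between two bone axes, each defined by two (x, y) points; the coordinates below are invented for the example and do not come from the study.

```python
# Angle between two landmark-defined axes, as used in hallux valgus grading.
import numpy as np

def axis_angle_deg(p1, p2, q1, q2):
    u = np.subtract(p2, p1)                 # first axis (e.g. metatarsal)
    v = np.subtract(q2, q1)                 # second axis (e.g. proximal phalanx)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

print(f"angle = {axis_angle_deg((0, 0), (10, 100), (10, 100), (35, 180)):.1f} deg")
```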

Tomasz Gąciarz, Wadim Wojciechowski, Zbisław Tabor

Pattern Recognition of Inflammatory Sacroiliitis in Magnetic Resonance Imaging

The standard reference for evaluating active inflammation of the sacroiliac joints in spondyloarthritis is magnetic resonance imaging (MRI). However, visual evaluation may be challenging for specialists due to clinical variability. In order to improve the diagnosis of inflammatory sacroiliitis, we have used image processing and machine learning techniques to recognize inflammatory patterns in the sacroiliac joints in spectral attenuated inversion recovery (SPAIR) T2-weighted MRI, using gray-level, texture and spectral features. Pattern recognition was performed with the ReliefF method for attribute selection and the classifiers K-nearest neighbors (with 5 values of K), Multilayer Perceptron artificial neural network, Naive Bayes, Random Forest, and Decision Tree J48. Classification was assessed by the area under the ROC (receiver operating characteristic) curve (AUC), sensitivity and specificity, with 10-fold cross-validation. K-nearest neighbors with K = 5 obtained the best performance, with an AUC of up to 0.96.

Matheus Calil Faleiros, José Raniery Ferreira Junior, Eddy Zavala Jens, Vitor Faeda Dalto, Marcello Henrique Nogueira-Barbosa, Paulo Mazzoncini de Azevedo-Marques

Stress-Based Femur Fracture Risk Evaluation from Bone Densitometry

Osteoporosis, characterised by a decrease in bone mineral density, is responsible for millions of fractures. The reference test for this disease is bone densitometry, whose diagnosis, based on statistical parameters obtained from an X-ray image, evaluates the patient's risk of fracture. Its results are not accurate enough, since about 40% of patients diagnosed with a low fracture risk end up suffering an osteoporotic fracture. Therefore, it seems reasonable to look for a technique that allows diagnoses of greater accuracy. The authors have developed an analysis technique, called the Cartesian grid Finite Element Method (cgFEM), based on the use of Cartesian meshes, that allows the generation of Finite Element (FE) models directly from medical images. This method has been adapted to obtain an indication of the stress distribution on the femur, which enriches the information available to evaluate the risk of osteoporotic failure. The multidisciplinary collaboration in this work has allowed analysing densitometries from a large set of patients. The preliminary results show that the method may improve the evaluation of the risk of femur fracture due to osteoporosis.

E. Nadal, J. J. Ródenas, J. J. Sánchez-Taroncher, A. Alberich-Bayarri, L. Martí-Bonmati

Characterization of Bone Microarchitecture Extracted from MR and MDCT. Feature Analysis Validated Against a Synthetic Trabecular Bone Phantom

Several pathological conditions carry a high risk of bone fracture, which is associated with high economic and health costs. In order to improve understanding of the bone microarchitecture, this work evaluates the accuracy of a bone analysis methodology and compares it to the reference standard under different imaging modalities. The results indicate a good agreement between the results of the bone analysis methodology obtained from MDCT and MR scans and the reference standard, so these modalities can be used for bone quality assessment in clinical practice.

Amadeo Ten-Esteve, Fabio García-Castro, Raúl García-Marcos, Luis Martí-Bonmatí, M. Ángeles Pérez, Ángel Alberich-Bayarri

Thematic Session Papers – Computational Vision and Image Processing Applied to Dental Medicine

Frontmatter

Evaluation of Two Denture Adhesives Removal Techniques Using Image Processing

Denture adhesives improve the quality of life of their users. However, patients unanimously report that adhesives are difficult to remove from both the denture and the oral tissues, and only four articles address the removal of commercial denture adhesives. The objective of the present study is to use image processing to evaluate the adhesive removal protocols recommended by the adhesive producers. Thirty pink acrylic discs were made, and denture adhesive was applied to the outer face in a uniform layer of 1 g per plate. After a 45-minute period in natural saliva, the excess was removed with a denture brush. The adhesive was then colored using a green food coloring, with dipping for 30 s followed by removal of the excess dye, and the samples were photographed. Total surface area and pigmented areas were measured using image processing software (Image Tool 3.0, University of Texas Health Science Center, Texas, USA). After each measurement, each protocol was performed. The samples were divided into two groups of 15 units. In group 1 (n = 15), a denture brush and water were used, with 3 longitudinal movements along the same axis and in the same direction. In group 2 (n = 15), the discs were submerged in water with a denture cleansing tablet for 3 min and brushed with 3 longitudinal movements. Quantitative analysis was again performed, and statistical analyses were carried out (p = 0.05). Based on the results obtained in our study, and considering the limitations of in vitro studies, we conclude that neither of the techniques advocated by the producers is sufficient for the total removal of adhesive. Water brushing obtained the less efficient results, while the combination of immersion in an alkaline peroxide solution followed by brushing, despite still showing low efficacy, presents much better results. More studies in this field must be carried out to achieve better removal methods and results. Image processing is a valid tool as an objective means of measuring the efficacy of different denture adhesive removal techniques.

C. F. Almeida, M. Sampaio-Fernandes, J. Reis-Campos, J. M. Rocha, M. H. Figueiral, J. Sampaio-Fernandes

Validation of a Numerical Model Representative of an Oral Rehabilitation with Short Implants

A numerical model representative of an oral rehabilitation with short implants was subjected to experimental validation. Electronic speckle pattern interferometry (ESPI) was chosen as the experimental technique. The numerical and experimental models exhibited similar behavior, allowing for the intended validation.

J. Ferreira, M. Vaz, J. Oliveira, A. Correia, A. Reis

Jaw Tracking Device and Methods of Analysis of Patient’s Specific TMJ Kinematics

The recording of the patient's jaw motion is not a new problem; history shows that dentists have used various devices. The Jaw Tracking Device developed and presented here comprises a computerized mechanical system instrumented with sensor technology and a man-machine interface, providing a quick, friendly and reliable setup for both doctors and patients. This system is used to conduct, acquire, store and analyze movements of the lower jaw as well as of the temporomandibular joint (TMJ). Scanning models of both jaws using intraoral or stationary scanners produces two STL meshes associated with the specific patient. The ability to display and analyze the motion of the patient's jaw represented by STL models has been extended to volumetric information in the form of DICOM data obtained during a CBCT scan. The combination of STL and DICOM images driven by the same TMJ kinematics allows researchers to see the interaction between the internal parts of the joint in vivo, without invasive methods and with minimal X-ray exposure.

Yevsey Gutman, John Keller

Thematic Session Papers – Computer Vision in Robotics

Frontmatter

A Study on Face Identification for an Outdoor Identity Verification System

As one of the most successful applications of image analysis and understanding, face recognition has received significant attention, especially during the past several years. One of the many uses of face recognition is building access control, where a person has one or several photos associated with an identification document (also known as identity verification). This paper focuses on the use of face recognition methods in the context of an identity verification system to be used under natural light. Experimental results are presented using the most important detection and recognition algorithms, taking into consideration several problems: ageing, face rotation, the sensor used and illumination. Some pre-processing techniques are proposed, using face alignment and auto-calibration of camera parameters. The results using these pre-processing algorithms are then compared and discussed.

Daniel P. F. Lopes, António J. R. Neves

Human-Robot Interaction Based on Gestures for Service Robots

Gesture recognition is very important for human-robot interfaces. In this paper, we present a novel depth-based method for gesture recognition to improve the interaction with a service robot, an autonomous shopping cart mostly used by people with reduced mobility. In the proposed solution, the identification of the user is already implemented by the software present on the robot, where a bounding box focused on the user is extracted. Based on the analysis of the depth histogram, the distance from the user to the robot is calculated and the user is segmented from the background. Then, a region growing algorithm is applied to remove all other objects in the image. A threshold technique is applied again to the original image to obtain all the objects in front of the user. Intersecting the threshold-based segmentation result with the region-growing result, we obtain candidate objects for the user's arms. After applying a labelling algorithm to obtain each object individually, Principal Component Analysis is computed for each one to obtain its center and orientation. Using that information, we intersect the silhouette of the arm with a line, obtaining the upper point of the intersection, which indicates the hand position. A Kalman filter is then applied to track the hand and, based on state machines describing gestures (Start, Stop, Pause), we perform gesture recognition. We tested the proposed approach in a real scenario with different users and obtained an accuracy of around 89.7%.
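A minimal sketch of the PCA step, i.e. computing the center and orientation of a labelled blob (a candidate arm region) from its pixel coordinates; the diagonal bar of pixels below is a synthetic stand-in for a segmented arm.

```python
# Centre and principal-axis orientation of a binary blob via PCA.
import numpy as np

mask = np.zeros((100, 100), bool)
for i in range(60):
    mask[20 + i, 20 + i // 2] = True            # synthetic elongated blob

ys, xs = np.nonzero(mask)
pts = np.column_stack([xs, ys]).astype(float)
center = pts.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov((pts - center).T))
major = eigvecs[:, np.argmax(eigvals)]          # principal axis
angle = np.degrees(np.arctan2(major[1], major[0]))
print(f"center = {center.round(1)}, orientation = {angle:.1f} deg")
```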

Patrick de Sousa, Tiago Esteves, Daniel Campos, Fábio Duarte, Joana Santos, João Leão, José Xavier, Luís de Matos, Manuel Camarneiro, Marcelo Penas, Maria Miranda, Ricardo Silva, António J. R. Neves, Luís Teixeira

Thematic Session Papers – Emotions Classification from EEG Signals

Frontmatter

A Brain Computer Interface by EEG Signals from Self-induced Emotions

The human-computer interface (HCI) has become more and more important in the last few years, mainly due to technological advances and to new possibilities for helping disabled people. Brain Computer Interfaces (BCI) represent a subset of HCI systems which use measurements of voluntary brain activity to drive a communication system, mainly useful for severely disabled people. Electroencephalography (EEG) has been used intensively for the measurement of electrical signals related to brain activity. BCI usage requires the activation of mental tasks, which can be driven by external stimulation (often audio-visual) or by autonomous activation (for example, thinking of moving an arm to signal a binary command). In the last few years, a new activation paradigm has been used, consisting of autonomous brain activation through self-induced emotions recalled on an autobiographical basis. In the present paper, we describe the state of the art of a BCI system based on self-induced emotions, from the activation paradigm to the signal classification strategies used and the final graphic interface. Moreover, we discuss its extension toward a multi-emotional paradigm.

Paolo Di Giamberardino, Daniela Iacoviello, Giuseppe Placidi, Matteo Polsinelli, Matteo Spezialetti

Pain and Stress Reactions in Neurohormonal, Thermographic and Behavioural Studies in Calves

Dehorning of cattle is painful, but remains a commonly performed procedure in horned breeds, as it reduces injuries from social contact and especially improves safety for stock-persons. Dehorned cattle also require less space during feeding or transport. To date, a large proportion of cattle are disbudded or dehorned, in most cases without proper pain relief. Understanding and recognizing the signs and factors causing pain in farm animals is an area that has been continually developing.

P. Cwynar, M. Soroko, R. Kupczyński, A. Burek, K. Pogoda-Sewerniak

Thematic Session Papers – Image Analysis and Machine Learning for Skin Ulcers

Frontmatter

Volume Estimation of Skin Ulcers: Can Cameras Be as Accurate as Laser Scanners?

Cavity volume is an important clinical index for assessing the healing process and the effectiveness of the treatment applied to chronic ulcers. Recently, 3D scanners have proven effective in tracking the evolution of ulcer volume; however, photogrammetry presents itself as a low-cost and portable alternative. We conducted an inter-laboratory comparative study between photogrammetric and 3D-scanner-based volume estimation of small skin ulcers. A total of 20 virtual models of Cutaneous Leishmaniasis ulcers were generated using a commercial laser scanner and a full-HD portable camera. The reconstruction from videos was performed using commercial and open-source software (i.e., Agisoft PhotoScan and VisualSFM). The results revealed similar performance, with median deviations of 16.18% and 21.10% (compared to the 3D-scan-based volume estimation) using VisualSFM and PhotoScan, respectively. In addition, both methods proved similarly efficient in the assessment of healing ulcers when compared to the 3D scanner.

Omar Zenteno, Eduardo González, Sylvie Treuillet, Benjamin Castañeda, Braulio Valencia, Alejandro Llanos, Yves Lucas

Optical Imaging Technology for Wound Assessment: A State of the Art

Wound assessment still often relies on manual measurements by clinical staff. However, optical imaging has demonstrated its efficiency in assessing wound 3D geometry and the biological status of its coloured tissues. For widespread adoption in hospitals, convenient and low-cost devices are required, as wound care is entrusted to nurses and wound prevalence is high among patients. For accurate diagnosis, multimodal devices are suitable, and advanced technology now addresses compact thermal, hyperspectral and range imaging issues. Commercial devices available for wound care are somewhat less advanced, but the gap will be filled rapidly, and the economic pressure for health monitoring at home will have a great impact on the solutions available in the coming years.

Yves Lucas, Sylvie Treuillet

Light-Tissue Interaction Model for the Analysis of Skin Ulcer Multi-spectral Images

Skin ulcers (SU) are one of the most frequent causes of consultation in primary health-care units (PHU) in tropical areas. However, the lack of specialized physicians in those areas leads to improper diagnosis and management of patients. There is thus a need to develop tools that help guide physicians toward a more accurate diagnosis. Multi-spectral imaging systems are a potential non-invasive tool for the analysis of skin ulcers. With these systems it is possible to acquire optical images at different wavelengths, which can then be processed by means of mathematical models based on optimization approaches. The processing of such images leads to the quantification of the main components of the skin; in the case of skin ulcers, these components could be correlated with the different stages of wound healing during the follow-up of an ulcer. This article presents the processing of a multi-spectral image of a skin ulcer caused by Leishmaniasis, one of the most prominent diseases in tropical areas. The image processing is performed by means of a light-tissue interaction model that treats the skin as a semi-infinite layer. The model, together with an optimization approach, allows quantifying the main light-absorbing and scattering skin parameters in the visible and near-infrared range. The results show significant differences between healthy and unhealthy areas of the image.
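A heavily simplified inversion sketch in the spirit of such optimization approaches: fitting chromophore concentrations to a measured spectrum with non-negative least squares under a Beer-Lambert-type linear model. The extinction spectra below are made up for illustration; the paper uses a full light-tissue interaction model, not this linear one.

```python
# Fit "chromophore" concentrations to a synthetic absorbance spectrum.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(450, 950, 20)                 # nm
eps = np.column_stack([                                 # fake extinction spectra
    np.exp(-(wavelengths - 560) ** 2 / 2e3),            # "oxy-haemoglobin"
    np.exp(-(wavelengths - 760) ** 2 / 4e3),            # "deoxy-haemoglobin"
    np.ones_like(wavelengths) * 0.2,                    # flat "scattering" term
])
true_c = np.array([1.2, 0.5, 0.8])
measured = eps @ true_c + 0.01 * np.random.default_rng(0).normal(size=20)

c, _ = nnls(eps, measured)                              # non-negative fit
print("recovered concentrations:", c.round(2))
```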

July Galeano, Pedro Jose Tapia-Escalante, Sandra Milena Pérez-Buitrago, Yesid Hernández-Hoyos, Luisa Fernánda Arias-Muñoz, Artur Zarzycki, Johnson Garzón-Reyes, Franck Marzani

LED-based System for the Quantification of Oxygen in Skin: Proof of Concept

Imaging technologies have the potential to supplement conventional diagnosis and follow-up of ulcers by providing detailed information on skin components imperceptible to visual inspection. Clinical investigations have shown that several factors, including reduced oxygen delivery and disturbed metabolism, can impair the wound healing process; for that reason, it is desirable to assess the distribution of tissue oxygenation around a lesion. In this sense, this work describes the development and preliminary tests of a LED-based multispectral imaging system to measure changes in oxygen saturation.

Pérez Sandra, Tapia Pedro, Galeano July, Zarzycki Artur, Garzón Johnson, Marzani Franck

Surface Acoustic Wave Propagation Using Crawling Waves Technique in High Frequency Ultrasound

Several tropical diseases generate cutaneous lesions with different elastic properties from normal tissue. A number of non-invasive elastography techniques for detecting the mechanical properties of tissue have been created in the last decades. Quantitative information is mainly obtained by harmonic elastography, which is distinguished by producing shear wave propagation. When wave propagation occurs near a boundary region, surface acoustic waves (SAW) are found. This work presents the crawling waves elastography technique implemented on a high-frequency ultrasound (HFUS) system for the estimation of SAW speed and its relationship with the elastic modulus. Experiments are conducted to measure SAW speed in a homogeneous phantom with a solid-water interface for theoretical validation. Afterwards, ex vivo experiments on pork thigh were performed to show SAW propagation in animal tissue. Preliminary results demonstrate the presence of SAW propagation in phantoms and skin tissue, and show how wave speed should be correctly adjusted according to the coupling medium for elastography applications.
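As background for the speed-to-stiffness conversion: in a nearly incompressible soft tissue, the shear wave speed relates to the shear modulus $$\mu $$ and density $$\rho $$ by $$c_s = \sqrt{\mu / \rho}$$, a surface (Rayleigh-type) acoustic wave travels slightly slower, $$c_{SAW} \approx 0.95\, c_s$$, and with $$E \approx 3\mu$$ the Young's modulus follows as $$E \approx 3 \rho c_s^{2}$$. These are textbook relations; the paper's adjustment for the coupling medium may modify these values.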

Ana Cecilia Saavedra, Fernando Zvietcovich, Benjamin Castaneda

Multimodal Viewing Interface for Skin Ulcers (Leish-MUVI)

Multimodal image exploration in medicine has proven to efficiently improve screening and diagnosis. The IMPULSO project (IMage Processing for ULcerS in trOpical areas) performed an acquisition campaign to collect a multimodal image database of Cutaneous Leishmaniasis ulcers in Peru. The database includes color images, 3D scans (scatter plots, surfaces), ultrasound images (US) and hyperspectral cubes (HSI) acquired from the same ulcers. In this paper, we present a graphical interface for the simultaneous visualization/exploration of two different modalities. A data set of 5 patients, scanned once every 7 days during their 28-day treatment, for a total of 20 ulcers, was processed. Color images were overlaid onto the ulcers' superficial information from ultrasound-based 3D models using a projective transformation. Our hypothesis is that the complementarity of these imaging techniques may help to improve diagnosis and monitoring of the ulcers. The initial results revealed that it is possible to evaluate ultrasonic and visual features in the same area and to correlate the superficial features observed in the color images with the sub-dermal information provided by ultrasound.

Ru Zhang, Omar Zenteno, Sylvie Treuillet, Benjamin Castaneda

Thematic Session Papers – Imaging and Image processing in Ophthalmology

Frontmatter

Automatization of Eye Fundus Vessel Width Measurements

Many diseases can be detected early from eye fundus images using several different features, one of which is the artery-to-vein ratio. Width measurements are made on the main vessels. The aim of this automatization process is to create a fully automated method for eye fundus analysis. The fully automated system consists of blood vessel tree extraction and optic nerve disc detection, in order to perform the measurements at the standard location. Since vessel tree extraction is complicated in some situations, the vessel measurement algorithm is developed independently of the extracted blood vessel tree, and the tree is used only for direction enhancement and only when it is available. Vessel measurements are compared with manual measurements performed by an expert; the automated measurements do not differ from the expert's measurements at the 95% confidence level. For this investigation, the Optomed OY digital mobile eye fundus camera Smartscope M5 PRO was used, but the algorithm can be applied to any type of eye fundus image.

Giedrius Stabingis, Jolita Bernatavičienė, Gintautas Dzemyda, Alvydas Paunksnis, Povilas Treigys, Ramutė Vaičaitienė, Lijana Stabingienė

Exploratory Study on Direct Prediction of Diabetes Using Deep Residual Networks

Diabetes is threatening the health of many people in the world. People may be diagnosed with diabetes only when symptoms or complications, such as diabetic retinopathy, start to appear. Retinal images reflect the health of the circulatory system and are considered a cheap and patient-friendly source of information for diagnosis purposes. Convolutional neural networks have significantly enhanced the performance of conventional image processing techniques by dispensing with inconsistent hand-crafted feature extraction pipelines and learning informative features automatically from the data. In this work we explore the possibility of using deep residual networks, one of the state-of-the-art convolutional architectures, to diagnose diabetes directly from retinal images, without using any blood glucose information. The results indicate that convolutional networks are able to capture informative differences between healthy and diabetic patients, and that it is possible to differentiate between these two groups using only the retinal images. The performance of the proposed method is significantly higher than that of human experts.

Samaneh Abbasi-Sureshjani, Behdad Dashtbozorg, Bart M. ter Haar Romeny, François Fleuret

Automated Blood Vessel Extraction Based on High-Order Local Autocorrelation Features on Retinal Images

Automated blood vessel detection in retinal images is an important step in the development of pathology analysis systems. This paper describes an automated blood vessel extraction method using high-order local autocorrelation (HLAC) features on retinal images. Although HLAC features are shift-invariant, they are sensitive to image rotation; therefore, the method was improved by adding HLAC features computed on a polar-transformed image. We previously proposed a method using HLAC features, pixel-based features and three filters, but had not investigated feature selection or the choice of machine learning method. This paper therefore discusses effective features and machine learning methods. We tested eight methods combining an extension of the HLAC features, the addition of 4 kinds of pixel-based features, different preprocessing techniques, and 3 kinds of machine learning methods: a general artificial neural network (ANN), a network using two ANNs, and a Boosting algorithm. As a result, our previously proposed method performed best: when tested on the “Digital Retinal Images for Vessel Extraction” (DRIVE) database, the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis reached 0.960.

Yuji Hatanaka, Kazuki Samo, Kazunori Ogohara, Wataru Sunayama, Chisako Muramatsu, Susumu Okumura, Hiroshi Fujita

Analysis of Retinal Vascular Biomarkers for Early Detection of Diabetes

This paper presents an automated retinal vessel analysis system for the measurement and statistical analysis of vascular biomarkers. The proposed retinal vessel enhancement, segmentation, optic disc and fovea detection algorithms provide fundamental tools for extracting the vascular network within a predefined region of interest (ROI). Based on that, the artery/vein classification, vessel caliber, curvature and fractal dimension measurement tools are used to assess the quantitative vascular biomarkers: width, tortuosity, and fractal dimension. A statistical analysis of the extracted geometric biomarkers is set up using a dataset provided by the Maastricht Study, with the aim of exploring the associations between different vessel biomarkers and type 2 diabetes mellitus. A linear regression analysis is used to model the relationships between the different factors. The results indicate that the vascular biomarker variables are associated with diabetes. These findings demonstrate the feasibility of applying the proposed pipeline to further analysis of vessel biomarkers for computer-aided diagnosis.

Jiong Zhang, Behdad Dashtbozorg, Fan Huang, Tos T. J. M. Berendschot, Bart M. ter Haar Romeny

Validation Study on Retinal Vessel Caliber Measurement Technique

Changes in retinal vessel caliber are associated with several diseases, such as diabetes and hypertension. The robust assessment of abnormality in vessels of different sizes is a challenging task. In this paper, we propose a robust and reliable method for the measurement of retinal vessel caliber. The method is validated on a dataset in which optic-disc-centered images were acquired using 6 different fundus cameras with repeated acquisitions. The results are compared with the semi-automatic software IVAN, and the relative errors are similar.

Fan Huang, Behdad Dashtbozorg, Jiong Zhang, Alexander Yeung, Tos T. J. M. Berendschot, Bart M. ter Haar Romeny

Automatic Detection of Spontaneous Venous Pulsations Using Retinal Image Sequences

This paper proposes a method for the automated detection and parameterization of spontaneous venous pulsation using raw video data acquired with a retinal video-ophthalmoscope. The magnitude of spontaneous venous pulsation has been shown to correlate with the occurrence of glaucoma. Based on this relation, a method is proposed that might help to detect glaucoma via the detection of spontaneous venous pulsation.

Michal Hracho, Radim Kolar, Jan Odstrcilik, Ivana Liberdova, Ralf P. Tornow

3D Mapping of Choroidal Thickness from OCT B-Scans

The choroid is the middle layer of the eye globe, located between the retina and the sclera, and choroidal thickness has been shown to be an indicator of multiple eye diseases. Optical Coherence Tomography (OCT) is an imaging technique that allows the visualization of tomographic images of near-surface tissues, such as those in the eye globe. The automatic calculation of choroidal thickness reduces the subjectivity of manual image analysis, as well as the time required for large-scale measurements. In this paper, a method for the automatic estimation of choroidal thickness from OCT images is presented. The pre-processing of the images focuses on noise reduction, shadow removal and contrast adjustment. The inner and outer boundaries of the choroid are delineated sequentially, resorting to a minimum path algorithm supported by new dedicated cost matrices. The choroidal thickness is given by the distance between the two boundaries. The data are then interpolated and mapped onto an infrared image of the eye fundus. The method was evaluated by calculating the error as the distance from the automatically estimated boundaries to the boundaries delineated by an ophthalmologist. The error of the automatic segmentation was low and comparable to the differences between manual segmentations from different ophthalmologists.
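A sketch of a minimum-path boundary search on a cost matrix, in the spirit of the delineation described above: dynamic programming across columns, allowing the row to move by at most one pixel per column. The random cost matrix is a placeholder for the paper's dedicated cost matrices.

```python
# Minimum-cost path across the columns of a cost matrix (one row per column).
import numpy as np

rng = np.random.default_rng(0)
cost = rng.uniform(size=(60, 200))            # rows x A-scans (columns)
cost[30, :] = 0.0                             # plant a cheap "boundary" at row 30

acc = cost.copy()                             # accumulated cost
for c in range(1, cost.shape[1]):
    prev = acc[:, c - 1]
    up = np.roll(prev, 1); up[0] = np.inf     # forbid wrap-around at the edges
    down = np.roll(prev, -1); down[-1] = np.inf
    acc[:, c] += np.minimum(np.minimum(up, prev), down)

rows = np.empty(cost.shape[1], dtype=int)     # backtrack from cheapest endpoint
rows[-1] = int(acc[:, -1].argmin())
for c in range(cost.shape[1] - 2, -1, -1):
    r = rows[c + 1]
    lo, hi = max(r - 1, 0), min(r + 2, cost.shape[0])
    rows[c] = lo + int(acc[lo:hi, c].argmin())
print("boundary rows (first 10 columns):", rows[:10])
```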

Simão P. Faria, Susana Penas, Luís Mendonça, Jorge A. Silva, Ana Maria Mendonça

Retinal Image Quality Assessment by Mean-Subtracted Contrast-Normalized Coefficients

The automatic assessment of the visual quality of eye fundus images is an important task in retinal image analysis. A novel quality assessment technique is proposed in this paper: we compute Mean-Subtracted Contrast-Normalized (MSCN) coefficients on local spatial neighborhoods of a given image and analyze their distribution. It is known that for natural images this distribution is approximately Gaussian, while distortions of different kinds perturb this regularity. The combination of MSCN coefficients with a simple measure of local contrast allows us to design a simple but effective retinal image quality assessment algorithm that successfully discriminates between good- and low-quality images, while delivering a meaningful quality score. The proposed technique is validated on a recent database of quality-labeled retinal images, obtaining results aligned with state-of-the-art approaches at a low computational cost.
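A minimal sketch of the MSCN computation: each pixel is normalized by a Gaussian-weighted local mean and standard deviation, and departures of the resulting histogram from Gaussianity flag distortion. The window width and stabilizing constant below are illustrative, not necessarily the paper's settings.

```python
# Mean-Subtracted Contrast-Normalized (MSCN) coefficients of an image.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image: np.ndarray, sigma: float = 7 / 6, c: float = 1.0) -> np.ndarray:
    mu = gaussian_filter(image, sigma)                                   # local mean
    sigma_local = np.sqrt(np.abs(gaussian_filter(image * image, sigma) - mu * mu))
    return (image - mu) / (sigma_local + c)                              # normalize

img = np.random.default_rng(0).uniform(0, 255, size=(128, 128))
coeffs = mscn(img)
print(f"MSCN variance: {coeffs.var():.3f}")   # the distribution shape is the quality cue
```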

Adrian Galdran, Teresa Araújo, Ana Maria Mendonça, Aurélio Campilho

A Simple Physical Representation for Saccadic Eye Movement Data

In this paper we employ a driven harmonic oscillator to satisfactorily describe data corresponding to ocular movements produced during a visual search task. The data were acquired with an EyeLink 1000. We describe the details of the oscillations in the eye's rotation angles, between 4 and 40$$^{\circ }$$, observed before and after saccadic and microsaccadic moments. We also analyse the efficiency in the use of energy of the associated harmonic oscillator.
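For reference, a driven harmonic oscillator for the rotation angle $$\theta(t)$$ is commonly written as

$$\ddot{\theta}(t) + 2 \zeta \omega_0\, \dot{\theta}(t) + \omega_0^{2}\, \theta(t) = F(t),$$

where $$\omega_0$$ is the natural frequency, $$\zeta$$ the damping ratio and $$F(t)$$ the driving term. This is the standard textbook form; the paper's exact parameterization may differ.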

J. I. Specht, M. L. Freije, A. L. Frapiccini, R. de Luis Garcia, G. Gasaneo

Multi-layer 3D Simultaneous Retinal OCT Layer Segmentation: Just-Enough Interaction for Routine Clinical Use

All current fully automated retinal layer segmentation methods fail in some subset of clinical 3D Optical Coherence Tomography (OCT) datasets, especially in the presence of appearance-modifying retinal diseases like Age-related Macular Degeneration (AMD), Diabetic Macular Edema (DME), and others. In the presence of local or regional failures, the only current remedy is to edit the obtained segmentation in a slice-by-slice manner. This is a very tedious and time-demanding process, which prevents the use of quantitative retinal image analysis in a clinical setting. In turn, the non-existence of reliable retinal layer segmentation methods substantially limits the use of precision medicine concepts in retinal-disease applications of clinical ophthalmology. We report a new non-trivial extension of our previously reported LOGISMOS-based simultaneous multi-layer 3D segmentation of retinal OCT images. In this new approach, automated segmentation of up to 9 retinal layers defined by 10 surfaces is followed by visual inspection of the segmentation results and by employment of minimally interactive correction steps that invariably lead to successful segmentation, thus yielding reliable quantification. The novel aspect of this “Just-Enough Interaction” (JEI) approach for retinal OCT relies on a 2-stage coarse-to-fine segmentation strategy, during which the operator interacts with the LOGISMOS graph-based segmentation algorithm by suggesting desired but approximate locations of the layer surfaces in 3D, rather than performing manual slice-by-slice corrections. As a result, the efficiency of the reliable analysis has been improved dramatically, with a more than 10-fold speedup compared to traditional retracing approaches. In an initial testing set of 40 3D OCT datasets from glaucoma, AMD, DME, and normal subjects, clinically accurate segmentation was achieved in all analyzed cases after 5.3 ± 1.4 min/case devoted to JEI modifications. We estimate that reaching the same performance using slice-by-slice editing in the regions of local segmentation failures would require at least 60 min of expert-operator time for the 9 segmented retinal layers. Our JEI-LOGISMOS approach to segmentation of retinal 3D OCT images is now employed in a larger clinical-research study to determine its usability on a larger sample of OCT image data.

Kyungmoo Lee, Honghai Zhang, Andreas Wahle, Michael D. Abràmoff, Milan Sonka
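
LOGISMOS itself builds a multi-surface 3D graph; as a much-simplified illustration of graph-based optimal surface detection (a toy, not the authors' algorithm), the sketch below finds a single terrain-like boundary in a 2D cost image by dynamic programming under a hard smoothness constraint:

import numpy as np

def detect_surface(cost, max_shift=1):
    # Pick one row per column of a 2D cost image, minimizing total cost
    # subject to |z[x+1] - z[x]| <= max_shift (hard smoothness constraint).
    Z, X = cost.shape
    dp = cost.copy()
    back = np.zeros((Z, X), dtype=int)
    for x in range(1, X):
        for z in range(Z):
            lo, hi = max(0, z - max_shift), min(Z, z + max_shift + 1)
            prev = dp[lo:hi, x - 1]
            k = int(np.argmin(prev))
            dp[z, x] += prev[k]
            back[z, x] = lo + k
    z = int(np.argmin(dp[:, -1]))
    surface = [z]
    for x in range(X - 1, 0, -1):      # backtrack the optimal path
        z = back[z, x]
        surface.append(z)
    return np.array(surface[::-1])

# Toy usage: a dark band in noise marks the "layer boundary".
img = np.random.rand(64, 128)
img[30, :] -= 2.0
print(detect_surface(img, max_shift=1)[:10])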

Thematic Session Papers – Imaging and Simulation Techniques for Cardiovascular Diseases

Frontmatter

An Automatic Method for Aortic Segmentation Based on Level-Set Methods Using Multiple Seed Points

Thoracic Aortic Aneurysm (TAA) is an enlargement of the aortic lumen at chest level. An accurate assessment of the geometry of the enlarged vessel is crucial when planning vascular interventions. This study developed an automatic method to extract the geometry of the aorta and supra-aortic vessels from computerized tomography (CT) images. The proposed method consists of a fast-marching level-set method for detection of the initial aortic region from multiple seed points automatically selected along the pre-extracted vessel centerline, and a level-set method for extraction of the detailed aortic geometry from the initial aortic region. The automatic method was implemented inside Endosize (Therenva, Rennes), a commercially available software package used for planning minimally invasive techniques. The performance of the algorithm was compared with that of the existing Endosize segmentation method (based on a region growing approach). For this comparison, a CT dataset from an open-source data repository (Osirix Advanced Imaging in 3D, 2016) was used. Results showed that, whilst the segmentation time increased (956 s for the new method, 0.308 s for the existing one), the new method produced a more accurate aortic segmentation, particularly in the region of the supra-aortic branches. Further work to examine the efficacy of the proposed method should include a statistical study of performance across many datasets.

Massimiliano Mercuri, Andrew J. Narracott, DR Hose, Cemil Göksu
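
A seeded fast-marching front propagation of the kind described can be sketched with the scikit-fmm package (an assumed stand-in; the authors' implementation lives inside Endosize). The volume, speed map, seed coordinates and arrival-time threshold are all illustrative:

import numpy as np
import skfmm  # scikit-fmm

image = np.random.rand(64, 64, 64)          # stand-in for a CT volume
# Speed map: fast inside bright (contrast-filled) voxels, slow elsewhere.
speed = 1.0 / (1.0 + np.exp(-(image - 0.5) * 10))

# phi < 0 marks the seeds; here, points along a (hypothetical)
# pre-extracted vessel centerline.
phi = np.ones_like(image)
for z, y, x in [(10, 32, 32), (32, 32, 32), (54, 32, 32)]:
    phi[z, y, x] = -1.0

arrival = skfmm.travel_time(phi, speed)     # arrival-time map
initial_region = arrival < 5.0              # threshold -> initial aortic mask
print(initial_region.sum(), "voxels in the initial region")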

Analysis of Speckle Pattern Quality and Uncertainty for Cardiac Strain Measurements Using 3D Digital Image Correlation

Measurement of full-field cardiac strain using optical methods has potential application in validating ultrasound measurements ex vivo, confirming their suitability for accurate in vivo strain imaging. This study describes the use of an effective technique to create a speckle pattern over the surface of a porcine heart for ex vivo experiments with the 3D digital image correlation (DIC) method. We characterised the quality of the speckle pattern applied to the cardiac surface by analysis of speckle size, and evaluated the baseline uncertainty of the 3D-DIC technique using a zero-strain test, applying rigid-body motion to position the marked sample at four locations. Strain errors were reported at a high spatial resolution (~128 µm) and were evaluated over a range of subset sizes. For subset sizes greater than 29 pixels the strain error was less than 1%, making the baseline uncertainty of the 3D-DIC system acceptable for measuring strains of the order of 10% on the cardiac surface.

Paolo Ferraiuoli, John W. Fenner, Andrew J. Narracott
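
Speckle size is commonly estimated from the half-maximum width of the central peak of the image autocorrelation; the NumPy sketch below illustrates that generic approach (not necessarily the authors' exact estimator), with blurred binary noise standing in for a real speckle photograph:

import numpy as np
from scipy.ndimage import gaussian_filter

def mean_speckle_size(img):
    # Width (pixels) of the autocorrelation's central peak at half maximum.
    f = img.astype(float) - img.mean()
    acf = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(f)) ** 2).real)
    acf /= acf.max()
    row = acf[acf.shape[0] // 2]        # profile through the peak
    above = np.where(row >= 0.5)[0]     # samples above half maximum
    return above[-1] - above[0] + 1

# Toy speckle: the blur radius sets the apparent speckle size.
speckle = gaussian_filter((np.random.rand(512, 512) > 0.5).astype(float), 3)
print("mean speckle size ~", mean_speckle_size(speckle), "px")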

The Ring Vortex: A Candidate for a Liquid-Based Complex Flow Phantom for Medical Imaging

Flow-informed diagnosis of cardiovascular diseases requires the accurate and specific interpretation of complex flow patterns acquired by medical imaging systems. Satisfactory imaging performance is assured through calibration and validation against known reference flows, but in the domain of complex flows these suffer from numerous limitations. The hypothesis of the present work is that the ring vortex combines a degree of complexity comparable to pathological flows with a characterizability comparable to simple flows. This is explored through a combination of experiment and theory involving ring vortex production in a water tank. Measurements confirm that, despite the complexity of this vortical flow, it is stable, reproducible, predictable and controllable. The flow is sufficiently well behaved that it is consistent with some flow imaging standards, and consequently deserves consideration as a candidate for a complex flow phantom.

Simone Ferrari, Simone Ambrogio, Adrian Walker, Andrew J. Narracott, John W. Fenner

Assessing Cardiac Tissue Function via Action Potential Wave Imaging Using Cardiac Displacement Data

The ability to visualize action potentials deep within the walls of the heart has important applications. It enables the identification of regions of electrically and mechanically compromised tissue that can mark the location(s) of infarcted and ischemic myocardial tissue, and also permits the visualization of normal and abnormal action potential wave propagation patterns for use in both clinical and cardiac research settings. Recently, we have been investigating the possibility of using 4-D mechanical deformation data, obtained either from MRI or ultrasound images, to reverse-calculate these action potential patterns [2, 4, 5]. This idea has also been studied by Konofagou et al. [6], who used mixed time and space second derivatives in the displacement fields to identify the location of action potentials. While this mixed-derivative method should be effective for spatially one-dimensional action potentials, it is less effective when propagation of the waves is fundamentally three-dimensional.

Niels F. Otani, Dylan Dang, Shusil Dangi, Mike Stees, Suzanne M. Shontz, Cristian A. Linte
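
The mixed time-space second derivative mentioned above is easy to state concretely; this toy NumPy sketch applies it to a synthetic 1D travelling wave (the wave model and grid are invented for illustration):

import numpy as np

# u(t, x): displacement sampled over time and one spatial axis.
t = np.linspace(0, 1, 200)
x = np.linspace(0, 10, 100)
T, X = np.meshgrid(t, x, indexing="ij")
u = np.tanh(5 * (T - X / 10))              # toy travelling activation wave

du_dx = np.gradient(u, x, axis=1)
d2u_dtdx = np.gradient(du_dx, t, axis=0)   # mixed second derivative

# Its magnitude peaks near the moving wavefront, which is what makes it
# usable as an activation marker for 1D propagation.
front = np.abs(d2u_dtdx).argmax(axis=1)
print(front[:10])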

Thematic Session Papers – Imaging of Flows in Lab-on-Chip Devices

Frontmatter

Imaging of Healthy and Malaria-Mimicked Red Blood Cells in Polydimethylsiloxane Microchannels for Determination of Cells Deformability and Flow Velocity

Imaging analysis techniques have been extensively used to obtain crucial information on blood phenomena in the microcirculation. In the present study, we intend to mimic the effects of malaria on red blood cells (RBCs) by changing their properties using different concentrations of glutaraldehyde solution. The effects of the disease in stiffening RBCs were evaluated using polydimethylsiloxane microchannels comprising 10 µm wide contractions, by measuring the cells’ deformability and the flow velocity in healthy and modified conditions. The obtained results show a decrease in RBC deformability and in flow velocity in the presence of glutaraldehyde, when compared to the behavior of healthy RBC samples. Therefore, it can be concluded that, using image analysis (ImageJ and PIVlab), it is possible to measure the deformability of the RBCs and the flow velocity and, consequently, to obtain a correlation between the differences in velocities/deformabilities in the microchannels. In the future, this correlation can be used to relate RBC behavior to the various stages of malaria. This study can be a starting point for the development of new malaria diagnostic systems towards point-of-care lab-on-a-chip devices.

Liliana Vilas Boas, Rui Lima, Graça Minas, Carla S. Fernandes, Susana O. Catarino
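
A common definition of the deformation index is DI = (a − b)/(a + b), with a and b the major and minor axes of an ellipse fitted to each cell; a scikit-image sketch (illustrative, not the authors' ImageJ pipeline):

import numpy as np
from skimage.measure import label, regionprops

def deformation_indices(mask):
    # DI per labeled cell from the fitted-ellipse axis lengths.
    out = []
    for region in regionprops(label(mask)):
        a, b = region.major_axis_length, region.minor_axis_length
        if a + b > 0:
            out.append((a - b) / (a + b))
    return np.array(out)

# Toy binary mask with one elongated "cell".
mask = np.zeros((64, 64), dtype=bool)
mask[30:34, 20:44] = True
print(deformation_indices(mask))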

A Comparative Study of Image Processing Methods for the Assessment of the Red Blood Cells Deformability in a Microfluidic Device

Red blood cell (RBC) deformability is a highly relevant mechanical property, whose variations are associated with diseases such as diabetes and malaria. The present study therefore aims to compare different image processing methods for assessing RBC deformability in continuous flow, measured in a polydimethylsiloxane (PDMS) microchannel composed of inner pillars with 15 μm spacing. The images were acquired with a high-speed camera and analyzed with the ImageJ software, tracking the RBCs and measuring their deformation index (DI). Additionally, to understand the performance of the software, a comparison was performed between different image processing tools provided by ImageJ, and the best methods for the deformation measurements were selected, considering the number of measured RBCs and their DIs. The results show that the image processing methods significantly affect the number of measured RBCs and their DIs; studies focused on deformability measurements therefore need to take the effect of the image processing methods into account, to avoid losing relevant information in the images.

Vera Faustino, Susana O. Catarino, Diana Pinho, Graça Minas, Rui Lima
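
The sensitivity to the choice of image processing method can be reproduced in miniature by counting cells under different global thresholds; a scikit-image sketch with smooth synthetic blobs standing in for RBC images:

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu, threshold_yen, threshold_mean
from skimage.measure import label

rng = np.random.default_rng(0)
img = gaussian_filter(rng.random((256, 256)), 4)   # toy cell-like blobs

# Each thresholding rule yields a different cell count from the same image.
for name, thresh in [("otsu", threshold_otsu), ("yen", threshold_yen),
                     ("mean", threshold_mean)]:
    mask = img > thresh(img)
    print(name, "-> cells detected:", label(mask).max())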

Visualization and Measurement of the Cell-Free Layer (CFL) in a Microchannel Network

In recent years, in vitro blood studies have revealed several significant hemodynamic phenomena that have played a key role in recent developments of biomedical microdevices for cell separation, sorting and analysis. However, the blood flow phenomena occurring in complex geometries, such as microchannel networks, have not been fully understood. Thus, it is important to investigate in detail the blood flow behavior occurring in microchannel networks. In the present study, using a high-speed video microscopy system, we have used two working fluids with different haematocrits (1% Hct and 15% Hct) and investigated the cell-free layer (CFL) in a microchannel network composed of asymmetric bifurcations. Using the Z Project method from the image analysis software ImageJ, it was possible to conclude that the successive bifurcations and confluences influence the formation of the CFL, not only along the upper and lower walls of the microchannel but also in the region immediately downstream of the confluence apex.

D. Bento, C. S. Fernandes, A. I. Pereira, J. M. Miranda, R. Lima
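
ImageJ's Z Project (minimum-intensity) operation has a one-line NumPy equivalent, and the CFL thickness can then be read from a cross-channel intensity profile; the array below is a random stand-in for a high-speed video stack:

import numpy as np

# stack: (frames, height, width); moving RBCs are dark against bright plasma.
stack = np.random.rand(500, 120, 400)

# Minimum-intensity projection over time: every pixel ever crossed by a cell
# stays dark, so the bright margins that remain are the cell-free layers.
proj = stack.min(axis=0)

# CFL thickness from a cross-channel profile at one axial position.
profile = proj[:, 200]
core = profile < 0.5 * profile.max()      # pixels traversed by cells
upper_cfl = np.argmax(core)               # bright rows above the cell core
lower_cfl = np.argmax(core[::-1])         # bright rows below the cell core
print("upper CFL:", upper_cfl, "px, lower CFL:", lower_cfl, "px")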

Numerical Simulation of Hyperelastic Behaviour in Aneurysm Models

An aneurysm is a fragile region of the wall of a blood vessel that causes it to form a bulge. In limiting situations, this weakening can lead to rupture of the vessel.

J. Ribeiro, C. S. Fernandes, R. Lima

Red Blood Cells (RBCs) Visualisation in Bifurcations and Bends

Bifurcating networks are commonly found in nature. One example is the microvascular system, composed of blood vessels consecutively branching into daughter vessels, driving the blood into the capillaries, where the red blood cells (RBCs) are responsible for delivering O2 and taking up cell waste and CO2. In this preliminary study, we explore a microfluidic bifurcating geometry inspired by such biological models, for investigating RBC partitioning as well as RBC-plasma separation favored by the consecutive bifurcating channels. A biomimetic design rule [1] based on Murray’s law [2] was used to set the channels’ dimensions along the network, which consists of consecutive bifurcating channels of reducing diameter. The ability to apply differential flow resistances by controlling the flow rates at the end of the network allowed us to monitor the formation of a cell-free layer (CFL) for different flow conditions at haematocrits of 1% and 5%. We have also compared the values of CFL thickness determined directly by measurement on the projection image created from a stack of images, or indirectly by analyzing the intensity profile in the same projection. The results obtained from this study confirm the potential to study RBC partitioning along bifurcating networks, which could be of particular interest for the separation of RBCs from plasma in point-of-care devices.

Joana Fidalgo, Diana Pinho, Rui Lima, Mónica S. N. Oliveira
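
Murray's law, cited above as the basis of the design rule, states that the cube of the parent vessel diameter equals the sum of the cubes of the daughter diameters; for symmetric bifurcations each generation therefore shrinks by a factor of 2^(1/3). The inlet width below is a made-up example:

# Murray's law: parent**3 == sum(daughter**3 for each daughter branch).
parent = 100.0                        # hypothetical inlet width, micrometres
for generation in range(5):
    print(f"generation {generation}: {parent:.1f} um")
    parent /= 2.0 ** (1.0 / 3.0)      # symmetric split into two daughters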

Thematic Session Papers – Infrared Thermal Imaging in Biomedicine

Frontmatter

Thermal Imaging Improves the Accuracy of Estimation of Human Resistance to Sudden Hypoxia

New methods were developed for assessing the resistance of adults, and of fetuses inside the womb, to acute hypoxia. The essence of the invented methods is to determine the duration of the period during which the limbs and thorax remain in a stable stationary position, and the infrared emission properties of the fingertip pads, during artificial apnea, a cuff occlusion test and/or critical clinical situations (massive blood loss, fetal hypoxia, etc.). The obtained results indicate the high diagnostic and prognostic value of these technical solutions. To obtain the relevant recordings, the use of ultrasonic and/or infrared video registration is proposed.

Aleksandr Urakov, Natalia Urakova

Multi Regression Analysis of Skin Temperature Variation During Cycling Exercise

In recent years, infrared thermography (IRT) has become a popular technique for determining human skin temperature during exercise [1–3]. IRT has several applications in sport science, such as injury detection, thermophysiology assessment, sports clothing assessment/design, and equestrian sport, among others [3]. However, IRT in sports is still a recent topic and there are many fundamental discussions concerning different methodological aspects, one of them being the analysis of the thermal data [4].

Jose Ignacio Priego Quesada, Rosario Salvador Palmer, Pedro Pérez-Soriano, Joan Izaguirre, Rosa Mª Cibrián Ortiz de Anda

Infrared Thermography Versus Conventional Image Techniques in Pediatrics: Cases Study

The use of infrared thermography has been shown to be useful in several areas. Its applicability in medicine is based on the fact that the skin spontaneously and continuously emits infrared radiation, whose distribution over the body is symmetrical in a healthy individual. Infrared thermography can offer an alternative to X-rays for a large number of diseases related to peripheral vascularization. In these cases, infrared thermography can avoid the use of biologically ionizing radiation. This is of special interest in pediatric patients who, because of their age, are more radiosensitive. We present a prospective descriptive study of 3 case studies of children with inflammatory/infectious cutaneous, osteoarticular and vascular (hemangioma) pathology. The objective of this study is twofold: on the one hand, to evaluate the use of infrared thermography for the diagnosis and follow-up of these patients, through quantitative and qualitative analysis of the temperature differences between symmetric zones and, on the other hand, to evaluate the correlation with other imaging techniques (ultrasonography, computerized tomography, magnetic resonance).

Olga Benavent Casanova, Francisco Núñez Gómez, Jose Ignacio Priego Quesada, Rosa Mª Cibrián Ortiz de Anda, Rolando González-Peña, Teresa Cuenca Bandín, Rosario Salvador Palmer

Infrared Thermography. An in Vitro Study on Its Use as Diagnostic Test in Dentistry

Pulp tissue consists of richly vascularized and innervated tissue with a very small circulatory access zone (the apical foramen) [1]. The amount and quality of pulp tissue can only be determined using histological techniques, which imply necrosis of the tissue [2].

Ana Mª Paredes, Leopoldo Forner, Rosa Cibrián, José Ignacio Priego, Rosario Salvador Palmer, Leonor del Castillo, Carmen Llena

Multi-spectral Face Recognition System

Face recognition has been actively pursued as a research area for the last five decades [1], due to increasing crime rates and health concerns, especially in cases where security systems fail, e.g. in the case of disguise [2]. Therefore, this issue is worth solving to help control the crime rate and make people feel more secure, by establishing the actual identity of a subject while taking advantage of both the visible and thermal imaging domains. The main challenges to the face recognition process are variations in illumination, pose, expression and disguise [3]. The proposed framework helps solve this problem using intensityhist and RLBP (ITR) features for classifying a facial patch as usable or unusable, local binary patterns (LBP) for feature extraction for facial recognition purposes, and the Mahalanobis cosine as the distance measure. The proposed framework is tested using a multi-spectral facial dataset (I2BVSD), which contains images from 75 different subjects in both the visible and thermal domains. Results obtained by the proposed framework are better than those of frameworks reported on other face recognition datasets, and better than the Anavrta [4] framework, which has also been tested on this (I2BVSD) dataset. This methodology can be employed to identify febrile subjects at crowded places such as airports, helping to prevent the spread of pandemics.

H. Ahmed, M. Umair, A. Murtaza, U. I. Bajwa, R. Vardasca
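
The LBP features mentioned above can be sketched with scikit-image; the histogram settings and the plain cosine similarity below are illustrative simplifications (the paper uses Mahalanobis cosine on its own feature set):

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, P=8, R=1):
    # Uniform-LBP histogram of a grayscale facial patch, a typical
    # texture descriptor used in face recognition.
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
a = lbp_histogram(rng.integers(0, 256, (64, 64), dtype=np.uint8))
b = lbp_histogram(rng.integers(0, 256, (64, 64), dtype=np.uint8))
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print("cosine similarity:", cos)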

Characterization of Thermographic Normality of Horse Extremities

Proper use of thermography in equine veterinary medicine to detect abnormalities requires determination of the normal temperature distribution. The aim of the study was to define basal temperature distributions in the regions of the hoof and sole in healthy horses. The study included 12 clinically healthy horses. The images were analysed numerically through statistical analysis, defining temperature differences objectively. The sensitivity of the analysis allowed the detection of unexpected subclinical injuries of the hoof, making it a valuable diagnostic tool for the early detection of inflammation. The study concluded that the results are clinically relevant and that thermography is a reliable diagnostic tool for the early detection of abnormalities and subclinical injuries.

Irene Díez Artigao, Sergio Díez Domingo, Rosa Cibrián Ortiz de Anda

Skin Temperature Bilateral Differences at Upper Limbs and Joints in Healthy Subjects

The absence of reference skin temperature values for the upper limbs and joints hinders the wider usage of infrared thermal imaging as a complementary method for the diagnosis and treatment assessment of musculoskeletal conditions affecting the upper extremities. In this research, 40 healthy participants were screened with infrared imaging using an internationally accepted capture protocol, and regions of interest such as the arm, forearm, hand, shoulder, elbow and wrist, in both anterior and posterior views, were characterized in terms of mean temperature distribution and bilateral differences. It was found that the highest bilateral difference, 0.5 ± 0.3 °C, occurred at the anterior forearm region; a difference of 0.5 °C is suggested as the threshold for further investigation of a pathological state.

Ricardo Vardasca, Maria T. Restivo, Joaquim Mendes

Physiological Changes of the Horse Musculoskeletal System During Physiological Effort Measured by Infrared Thermography

Despite progress in equine sports science which has brought improved training programmes, little is known about the basic physiology of the musculature in the exercising horse. Using electromyography, Harrison et al. [1] demonstrated a range of activation times for the different limb muscles, which was found to be dependent on the gait of the horse. However, electromyography is unsuitable for large studies due to its complex nature.

Maria Soroko, Kevin Howell, Krzysztof Dudek, Izabela Wilk, Iwona Janczarek

Infrared Thermography Protocol for the Diagnosis and Monitoring of the Diabetic Foot: Preliminary Results

The diabetic foot, according to the International Consensus on the Diabetic Foot, is an infection, ulceration or destruction of the deep tissues related to neurological alterations and peripheral vascular disease in the lower limbs [1]. This pathology represents an important public health problem because affected patients can suffer amputation and even death [2].

Jose Ignacio Priego Quesada, María Benimeli, Lucía Carbonell, Rosa Mª Cibrián, Rosario Salvador, Rolando González-Peña, Mª Carmen Blasco, M. Fe Mínguez, Pedro Retorta, Cecili Macián

Segmentation of Infrared Images Using Stereophotogrammetry

Image Segmentation has historically been a difficult part of Image Analysis in Infrared Imaging. Solutions include using a low-emissivity or cold material as a background or using a separate visual camera to perform image analysis tasks such as with the FLIR One. The proposed method utilises Stereophotogrammetry to obtain a depth map which is subsequently refined and segmented into fore- and background to give an accurate depiction of the object being imaged. The resulting 3D model can then be viewed on a computer for further analysis.

Benjamin Kluwe, David Christian, Marius Miknis, Peter Plassmann, Carl Jones
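
A minimal version of the disparity-then-threshold idea, using OpenCV's semi-global matcher on a synthetic stereo pair; the scene, matcher settings and disparity threshold are all invented for illustration:

import numpy as np
import cv2

# Synthetic pair: a bright square shifted horizontally plays the near object.
left = np.full((240, 320), 40, np.uint8)
right = left.copy()
left[80:160, 120:200] = 200
right[80:160, 112:192] = 200           # 8 px disparity

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=32, blockSize=9)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

# Nearer surfaces have larger disparity, so thresholding the depth map
# splits fore- from background, as in the pipeline described above.
foreground = disparity > 4.0
print("foreground pixels:", int(foreground.sum()))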

Skin Temperature in Diabetic Foot Patients: A Study Focusing on the Angiosome Concept

The effect of peripheral artery disease on the skin temperature of diabetic foot patients is not known. In this study, skin temperature was assessed in patients with an established diagnosis of diabetic foot, all with neuropathy, with or without peripheral artery disease. Thermograms of feet with neuropathy and peripheral vascular disease were compared with thermograms of feet with neuropathy but without peripheral artery disease. Skin temperature was lower in feet with neuropathy and peripheral artery disease, and the differences were statistically significant (p < 0.05) in most regions of interest.

Adérito Seixas, Kurt Ammer, Rui Carvalho, João Paulo Vilas-Boas, Ricardo Vardasca, Joaquim Mendes

Infrared Thermal Imaging as an Assessment Tool in a Rehabilitation Program Following an Ankle Sprain

Lower limb trauma has an increasing incidence and often results in significant physical impairment and disability. Rehabilitation plays a significant role in pain relief and in restoring physical function. The aim of our study was to analyze the benefits of rehabilitation programs in terms of functional improvement in patients with posttraumatic ankle-foot status, and to monitor functional progress following rehabilitation using infrared thermal imaging assessment. We evaluated 22 patients with ankle sprains (stage 3 or 4), aged between 19 and 78 years, treated at the National Institute of Rehabilitation - 3rd Clinic between September and December 2016. All patients followed a two-week individualized rehabilitation program including laser therapy, cryo-ultrasound, TENS, therapeutic massage and physical therapy. The subjects were evaluated clinically and functionally before and after the program. For clinical and functional assessment, we used the VAS pain scale, the Foot Function Index and thermal infrared imaging. The results were analyzed from a biostatistical point of view. Our study demonstrated the usefulness of thermal infrared imaging in monitoring the evolution of pain and functioning following rehabilitation in patients with posttraumatic ankle-foot sequelae.

Nica Adriana Sarah, Nartea Roxana, Meiu Lili, Constantinovici Mariana, Mologhianu Gilda, Ojoga Florina, Mitoiu Brindusa

Skin Temperature of the Foot: A Comparative Study Between Familial Amyloid Polyneuropathy and Diabetic Foot Patients

Skin temperature regulation depends on autonomic nervous system function, which may be impaired in patients with neuropathy. Studies reporting thermographic assessment of patients with an established diagnosis of Diabetic Foot (DF) are scarce, and such information is completely absent for patients suffering from Transthyretin Familial Amyloid Polyneuropathy (TTR-FAP). The aim of this study is to compare skin temperature distribution in patients with DF and TTR-FAP. Thermograms of the dorsal and plantar surfaces were compared. Skin temperature was higher in the diabetic foot group and the differences were statistically significant (p < 0.05) in both regions of interest.

Adérito Seixas, Maria do Carmo Vilas-Boas, Rui Carvalho, Teresa Coelho, Kurt Ammer, João Paulo Vilas-Boas, Ricardo Vardasca, João Paulo Silva Cunha, Joaquim Mendes

Towards the Automatic Detection of Hand Fingertips and Phalanges in Thermal Images

Manual analysis of thermal images and definition of regions of interest (ROI) of the hands is a tedious and time-consuming task. Towards the automatic detection of anatomical thermal ROIs, algorithms to automatically detect fingertips and phalanges were developed using MATLAB®. Two strategies for fingertip detection were applied. The two algorithms were developed using 48 hand thermal images (size 320 × 240) as a training set, while 12 formed the test set. The algorithms’ evaluation metric assesses the hand binarization and edge extraction, as well as the fingertip and phalange detection. This evaluation metric showed a hit score of 70.2% for the first approach and 88.6% for the second method. The first approach shows a reasonable relationship between the average temperature of the detected phalanges and the manually marked phalanges (R2 = 0.6477), while the second approach presents a much stronger linear correlation (R2 = 0.9551), in which the variables are highly associated, demonstrating its efficiency. Future work includes improving the algorithms by using a larger training set.

Elsa Sousa, Ricardo Vardasca, Joaquim Mendes, António Costa-Ferreira
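
One simple way to realize the binarization-plus-fingertip step, sketched here with scikit-image (Otsu threshold, hand contour, contour points farthest from the centroid); a crude stand-in, not the authors' MATLAB algorithms:

import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import find_contours

def fingertip_candidates(thermal, n_peaks=5):
    # Binarize, trace the largest contour, and return the contour points
    # farthest from the centroid as fingertip candidates.
    mask = thermal > threshold_otsu(thermal)
    contour = max(find_contours(mask.astype(float), 0.5), key=len)
    centroid = contour.mean(axis=0)
    dist = np.linalg.norm(contour - centroid, axis=1)
    order = np.argsort(dist)[::-1]
    return contour[order[:n_peaks]]

# Toy "hand": a warm disk (palm) with one warm finger-like bar.
img = np.zeros((120, 120))
yy, xx = np.ogrid[:120, :120]
img[(yy - 80) ** 2 + (xx - 60) ** 2 < 900] = 1.0
img[20:55, 57:63] = 1.0
print(fingertip_candidates(img, n_peaks=1))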

Pre-drilling vs. Self-drilling of Pin Bone Insertion – A Thermography Experimental Evaluation

This work presents an experimental in-vitro study, involving fresh pork bone, which compares the temperature generated in the bone when using a medical drill bit, a self-drilling tapered half-pin and a self-drilling cylindrical half-pin. The study compares the insertion of pins after a pre-drilling process with their direct insertion. The bone drilling process was performed in a milling machine with control of the feed rate and rotation speed. The temperature measurement was made using an infrared thermography camera. Comparing the maximum temperatures obtained with the different procedures, pre-drilling before pin insertion clearly suggests an advantage in minimizing the appearance of bone tissue necrosis.

M. Ghazali, L. Roseiro, A. Garruço, L. Margalho, F. Expedito

Thermographic Evaluation of the Saxophonists’ Embouchure

The orofacial complex is the primary link between the instrument and the instrumentalist when performing the musician’s embouchure. The contact point is established between the saxophonist’s lower lip, the upper maxillary dentition and the mouthpiece. The functional demands on the saxophone player and the consequent application of forces with excessive pressure can significantly influence the orofacial structures. A thermographic evaluation was performed on an anatomical zone vital for the embouchure, the lip of the saxophonist. Substantial temperature changes occurred between before and after playing the saxophone. The specificity of the embouchure regarding the position of the lower lip inside the oral cavity, and the anatomy and position of the central lower incisors, may be some of the factors behind the temperature differences observed in the thermographic evaluation.

Joana Cerqueira, Miguel Pais Clemente, Gilberto Bernardes, Henk Van Twillert, Ana Portela, Joaquim Gabriel Mendes, Mário Vasconcelos

Thematic Session Papers – Meta-learning in Deep Learning

Frontmatter

Using Metalearning for Parameter Tuning in Neural Networks

Neural networks have been applied as a machine learning tool in many different areas. Recently, they have gained increased attention with what is now called deep learning. Neural network algorithms have several parameters that need to be tuned in order to maximize performance. The definition of these parameters can be a difficult, extensive and time-consuming task, even for expert users. One approach that has been successfully used for algorithm and parameter selection is metalearning. Metalearning consists of applying machine learning algorithms to (meta)data from machine learning experiments, in order to map the characteristics of the data to the performance of the algorithms. In this paper we study how a metalearning approach can be used to obtain a good set of parameters to learn a neural network for a given new dataset. Our results indicate that with metalearning we can successfully learn, from past learning tasks, classifiers that are able to define appropriate parameters.

Catarina Félix, Carlos Soares, Alípio Jorge, Hugo Ferreira
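
The mapping from dataset characteristics to good parameter values can be sketched as a regression problem over a meta-dataset; everything below (the chosen meta-features, the target hyperparameter, all numbers) is hypothetical, and only the shape of the approach is the point:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Meta-dataset: one row per past experiment. Hypothetical meta-features
# (n_samples, n_features, class entropy) map to the best hyperparameter
# value found on that dataset.
meta_X = np.array([[1000, 20, 0.9], [200, 5, 0.4], [5000, 100, 0.99],
                   [800, 50, 0.7], [150, 10, 0.5]])
best_learning_rate = np.array([0.01, 0.1, 0.001, 0.01, 0.1])

meta_model = RandomForestRegressor(n_estimators=50, random_state=0)
meta_model.fit(meta_X, best_learning_rate)

# A new dataset's meta-features yield a recommended starting configuration.
new_dataset = np.array([[900, 30, 0.8]])
print("suggested learning rate:", meta_model.predict(new_dataset)[0])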

Impact of Feature Selection on Average Ranking Method via Metalearning

Selecting appropriate classification algorithms for a given dataset is crucial and useful in practice, but is also full of challenges. In order to maximize performance, users of machine learning algorithms need methods that can help them identify the most relevant features in datasets, select algorithms and determine their appropriate hyperparameter settings. In this paper, a method of recommending classification algorithms is proposed. It is oriented towards the average ranking method, combining algorithm rankings observed on prior datasets to identify the best algorithms for a new dataset. Our method uses a special case of data mining workflow in which algorithm selection is preceded by a feature selection method (CFS).

Salisu Mamman Abdulrahman, Miguel Viana Cachada, Pavel Brazdil
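
The average ranking method itself reduces to averaging per-dataset ranks and sorting; a tiny NumPy sketch with invented algorithm names and rankings:

import numpy as np

# rankings[i, j]: rank of algorithm j on past dataset i (1 = best).
algorithms = ["SVM", "RandomForest", "kNN", "NaiveBayes"]
rankings = np.array([[1, 2, 3, 4],
                     [2, 1, 4, 3],
                     [1, 3, 2, 4],
                     [3, 1, 2, 4]])

avg_rank = rankings.mean(axis=0)
order = np.argsort(avg_rank)   # recommended trial order for a new dataset
for j in order:
    print(f"{algorithms[j]}: average rank {avg_rank[j]:.2f}")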

A Deep Learning Architecture for Histology Image Classification with Curriculum Learning

Inspired by curriculum learning, we present a two-tiered approach to CNN training. First, we learn an intermediate representation, called a pixel labeling. The pixel labels capture low-level details and textures within objects, providing a more complete semantic description that is used as input to a subsequent image classifier. The two learning tasks can be considered together as a single deep network. Extensive experiments show that our architecture, which includes an intermediate layer, substantially outperforms fine-tuned CNN models trained without an intermediate target, even when the two networks have an identical overall topology and number of parameters. We demonstrate our approach on a histology image classification task.

Chia-Yu Kao, Mallika Madduri, Leonard McMillan
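
A minimal PyTorch rendering of the two-tiered idea (a pixel-labeling stage feeding an image classifier, trainable jointly as one network); the layer sizes and topology are placeholders, not the paper's architecture:

import torch
import torch.nn as nn

class TwoTierNet(nn.Module):
    # Stage 1 predicts a per-pixel label map (the intermediate target);
    # stage 2 classifies the image from that map.
    def __init__(self, n_pixel_classes=4, n_image_classes=2):
        super().__init__()
        self.pixel_labeler = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_pixel_classes, 1))       # logits per pixel
        self.classifier = nn.Sequential(
            nn.Conv2d(n_pixel_classes, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_image_classes))

    def forward(self, x):
        pixel_logits = self.pixel_labeler(x)
        image_logits = self.classifier(pixel_logits.softmax(dim=1))
        return pixel_logits, image_logits

# Curriculum-style training would fit stage 1 on pixel labels first,
# then train the classifier on top; both heads share one graph.
net = TwoTierNet()
pixel_logits, image_logits = net(torch.randn(2, 3, 64, 64))
print(pixel_logits.shape, image_logits.shape)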

Thematic Session Papers – Shape Analysis in Medical Imaging

Frontmatter

Integrated 3D Anatomical Model for Automatic Myocardial Segmentation in Cardiac CT Imagery

Segmentation of epicardial and endocardial boundaries is a critical step in diagnosing cardiovascular function in heart patients. The manual tracing of organ contours in Computed Tomography Angiography (CTA) slices is subjective, time-consuming and impractical in a clinical setting. We propose a novel multi-dimensional automatic edge detection algorithm based on shape priors and principal component analysis. Inspired by the work of Tsai et al. [3] and Yezzi et al. [1], we have developed a highly customized parametric model for implicit representations of segmenting curves (3D) for the Left Ventricle (LV), Right Ventricle (RV) and Epicardium (Epi), used simultaneously to achieve myocardial segmentation. We have extended the Chan-Vese [4] image modeling framework to segment four regions simultaneously, with high-level constraints enabling the modeling of complex cardiac anatomical structures to automatically guide the segmentation of endo-/epicardial boundaries. Test results on 30 short-axis CTA datasets show robust segmentation with errors (mean ± std mm) of (1.46 ± 0.41), (2.06 ± 0.65) and (2.88 ± 0.59) for LV, RV and Epi, respectively.

Navdeep Dahiya, Anthony Yezzi, Marina Piccinelli, Ernest Garcia
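
The underlying Chan-Vese model is available in scikit-image for the basic two-region case; the paper's contribution is the extension to four coupled regions with shape priors, which the library sketch below does not attempt (the toy image and mu value are illustrative):

import numpy as np
from skimage.segmentation import chan_vese

# Toy slice: a bright circular region on a dark, noisy background
# stands in for the blood pool in a short-axis image.
yy, xx = np.ogrid[:128, :128]
r = np.sqrt((yy - 64) ** 2 + (xx - 64) ** 2)
img = np.where(r < 30, 1.0, 0.1) + 0.05 * np.random.randn(128, 128)

# Plain two-region Chan-Vese level-set segmentation.
seg = chan_vese(img, mu=0.1)
print("segmented foreground fraction:", seg.mean())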

A Threefold Deformation Decomposition in Shape Analysis for Medical Imaging: Spherical, Deviatoric and Non Affine Components

Statistical Shape Analysis, which includes classic Geometric Morphometrics (GM), is often based on landmarks and is frequently used to describe shape and shape changes in biological studies. In this context, changes in shape are typically analyzed separately from changes in size, measured in most cases with centroid size (CS) [1]. Changes in shape are projected into a common linear space, the tangent space to the consensus, and decomposed into affine and non-affine components. The non-affine component can in turn be decomposed into a series of local deformations (partial warps). This approach relies on the assumption that shapes are only slightly scattered in the shape space. Under these conditions the difference between centroid size and m-volume is barely appreciable. In medical imaging, and in soft tissues in general, bodies can undergo very large deformations, involving also large changes in size that are, from a mechanical point of view, strictly coupled with the change in shape. The cardiac example analyzed in the present paper shows changes in volume that can reach 60% when comparing systole and diastole, coupled with severe longitudinal, radial and torsional strains. Because the ventricle’s volume, together with its pressure, is one of the most important descriptors of the pumping heart function, it is natural to consider the size change as a volume change. In fact, when dealing with such large size differences, CS and volume behave differently. In recent years the emerging disciplines of Diffeomorphometry and the related Functional Anatomy aim to replace classic GM through the use of diffeomorphisms, more suitable for describing soft tissue changes [2, 3]. On the other hand, these descriptions are very sophisticated but ignore some significant synthetic properties of the decomposition of the deformation. The goal of the present work is to show that standard GM tools (landmarks, Thin Plate Spline, and the related decomposition of the deformation) can be generalized to better describe the very large deformations of biological tissues, without losing a synthetic description. In particular, the classical decomposition of the tangent space to the shape space into affine and non-affine components [4] is enriched to also include the change in size, so as to give a complete description of the tangent space to the size-and-shape space. The proposed generalization is formulated by means of a new Riemannian metric describing the change in size as a change in volume rather than a change in CS. This leads to a redefinition of some aspects of Kendall’s size-and-shape space without abandoning Kendall’s original formulation. The new formulation is discussed by analyzing 3D heart ventricular shapes coming from 3D Speckle Tracking Echocardiography. We evaluate the performance of different methods in distinguishing control (healthy) subjects from patients affected by Hypertrophic Cardiomyopathy.

Valerio Varano, Paolo Piras, Luciano Teresi, Stefano Gabriele, Ian L. Dryden, Paola Nardinocchi, Antonietta Evangelista, Concetta Torromeo, Paolo Emilio Puddu
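
For reference, centroid size, the standard size measure being contrasted with volume above, is the root summed squared distance of the k landmarks from their centroid:

$$CS(X) = \sqrt{\sum_{i=1}^{k} \lVert x_i - \bar{x} \rVert^{2}}$$

For shapes scattered only slightly in shape space, the difference between CS and m-volume is barely appreciable; it is exactly this assumption that breaks down over the large volume excursions of the cardiac cycle.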

Distortion Minimizing Geodesic Subspaces in Shape Spaces and Computational Anatomy

The estimation of finite-dimensional nonlinear submanifolds representing shape samples is of paramount importance in many applications. The Distortion Minimizing Geodesic Submanifold (DMGS) approach allows the selection of the most accurate submanifolds in terms of distortion under a dimensionality constraint for shape spaces. We show that the computation of DMGS is widely compatible with the Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework and the varifold distortion for application to computational anatomy. It allows the estimation of finite-dimensional geodesic submanifolds in the difficult situation where we do not assume any one-to-one correspondence between shapes (parametrisation invariance). Unlike regular Tangent PCA, the computation of DMGS does not need to deal with the classical balance between the deformation cost from the template to the target and the resulting distortion. On the contrary, the greedy minimization of the distortion under dimensionality constraints, hiding the deformation metric in the exponential map, suggests a new way to select between alternative metrics and shape spaces under the unifying point of view of the dimension/distortion curves, in the spirit of the rate/distortion curves of information theory. Proofs of concept on 2D and 3D experiments are discussed.

Benjamin Charlier, Jean Feydy, David W. Jacobs, Alain Trouvé

Transporting Deformations via Integration of Local Strains

Transporting deformations from one template to a different one is a typical task of shape analysis. In particular, such a transport is necessary when performing group-wise statistical analyses in Shape or Size-and-Shape Spaces. A typical example is when one is interested in separating the difference in function from the difference in shape. The key point is: given two different templates $$\mathscr{B}_X$$ and $$\mathscr{B}_Y$$, each undergoing its own deformation, and describing these two deformations with the diffeomorphisms $$\varPhi_X$$ and $$\varPhi_Y$$, when is it possible to say that they are experiencing the same deformation? Given a correspondence between the points of $$\mathscr{B}_X$$ and $$\mathscr{B}_Y$$ (i.e. a bijective map), a naive answer could be that the displacement vector $$\mathbf{u}$$ associated with each corresponding point couple is the same. In this manuscript, we assume a different viewpoint: two templates undergo the same deformation if, for each corresponding point couple of the two templates, the condition $$\mathbf{C}_X := \nabla^T \varPhi_X \nabla \varPhi_X = \nabla^T \varPhi_Y \nabla \varPhi_Y =: \mathbf{C}_Y$$ holds or, in other words, if the local metric (non-linear strain) induced by the two diffeomorphisms is the same at all corresponding points.

Franco Milicchio, Stefano Gabriele, Gianluca Acunzo
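
The condition above compares the right Cauchy-Green tensors of the two deformations; computing C = F^T F from a sampled map is direct, as in this NumPy sketch with a toy affine map (for which C comes out spatially constant):

import numpy as np

# Phi: a deformation sampled on a grid, Phi(x, y) -> (x', y').
n = 64
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.stack([X + 0.1 * Y, 1.2 * Y])           # toy affine map

# Deformation gradient F = grad(Phi), then right Cauchy-Green C = F^T F.
F = np.empty((2, 2, n, n))
for i in range(2):
    F[i, 0] = np.gradient(phi[i], x, axis=0)
    F[i, 1] = np.gradient(phi[i], x, axis=1)
C = np.einsum("ki...,kj...->ij...", F, F)        # C = F^T F at every point

# Two templates "experience the same deformation" when their C fields
# agree at corresponding points.
print(C[:, :, 10, 10])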

Backmatter
