
2019 | Book

Bioinformatics and Biomedical Engineering

7th International Work-Conference, IWBBIO 2019, Granada, Spain, May 8-10, 2019, Proceedings, Part II

Edited by: Ignacio Rojas, Prof. Olga Valenzuela, Fernando Rojas, Dr. Francisco Ortuño

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

The two-volume set LNBI 11465 and LNBI 11466 constitutes the proceedings of the 7th International Work-Conference on Bioinformatics and Biomedical Engineering, IWBBIO 2019, held in Granada, Spain, in May 2019.

The 97 papers presented in these proceedings were carefully reviewed and selected from 301 submissions. The papers are organized in topical sections as follows:

Part I: High-throughput genomics: bioinformatics tools and medical applications; omics data acquisition, processing, and analysis; bioinformatics approaches for analyzing cancer sequencing data; next generation sequencing and sequence analysis; structural bioinformatics and function; telemedicine for smart homes and remote monitoring; clustering and analysis of biological sequences with optimization algorithms; and computational approaches for drug repurposing and personalized medicine.

Part II: Bioinformatics for healthcare and diseases; computational genomics/proteomics; computational systems for modelling biological processes; biomedical engineering; biomedical image analysis; and biomedicine and e-health.

Table of Contents

Frontmatter

Bioinformatics for Healthcare and Diseases

Frontmatter
Developing a DEVS-JAVA Model to Simulate and Pre-test Changes to Emergency Care Delivery in a Safe and Efficient Manner

Patient overcrowding in the Emergency Department (ED) is a major problem in medical care worldwide, predominantly due to time and resource constraints. Simulation modeling is an economical approach to solving complex healthcare problems. We employed a Discrete Event System Specification-JAVA (DEVS-JAVA) based model to simulate and test changes in emergency service conditions with the overall goal of improving ED patient flow. We first developed a system based on ED data from South Carolina hospitals and then ran simulations on four case scenarios: 1. the optimum number of doctors and patients needed to reduce the average time of assignment (average discharge time = 33 min); 2. the optimum number of patients to reduce the average discharge time (average wait time = 150 min); 3. the optimum number of patients to reduce the average time to direct a patient to critical care (average wait time = 58 min); 4. the optimum number of patients to reduce the average time to direct a patient to another hospital (average wait time = 93 min). In these four simulations, 4 patients were discharged utilizing 3 doctors; 5 patients could be discharged from the ED to home; 2 patients could be transferred from the ED to critical care; and 3 patients could be transferred from the ED to another hospital. Our results suggest that the DEVS-JAVA simulation method is highly effective for solving time-dependent healthcare problems.
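The queueing behaviour the abstract describes can be illustrated with a minimal discrete-event sketch: patients arrive, wait for the first free doctor, and are discharged after a fixed service time. This is not the authors' DEVS-JAVA model; the arrival times and the 33-minute service time below are hypothetical values chosen only to mirror the first scenario.

```python
import heapq

def simulate_ed(arrival_times, service_time, doctors):
    """Minimal discrete-event sketch of an ED queue: patients arrive at
    given times and are served FIFO by the first free doctor.
    Returns each patient's discharge time in arrival order.
    Illustrative only; not the authors' DEVS-JAVA model."""
    free_at = [0] * doctors            # time each doctor becomes free
    heapq.heapify(free_at)
    discharges = []
    for arrival in sorted(arrival_times):
        doctor_free = heapq.heappop(free_at)
        start = max(arrival, doctor_free)  # wait if no doctor is free
        finish = start + service_time
        heapq.heappush(free_at, finish)
        discharges.append(finish)
    return discharges

# Hypothetical scenario: 4 patients, 3 doctors, 33-minute service time;
# the 4th patient must wait until the first doctor frees up at t=33.
times = simulate_ed([0, 5, 10, 15], service_time=33, doctors=3)
print(times)  # [33, 38, 43, 66]
```

The heap keeps the earliest-free doctor at the front, which is the core event-scheduling idea that DEVS formalisms generalize.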

Shrikant Pawar, Aditya Stanam
Concept Bag: A New Method for Computing Concept Similarity in Biomedical Data

Biomedical data are a rich source of information and knowledge, not only for direct patient care, but also for secondary use in population health, clinical research, and translational research. Biomedical data are typically scattered across multiple systems, and syntactic and semantic data integration is necessary to fully utilize their potential. This paper introduces new algorithms devised to support automatic and semi-automatic integration of semantically heterogeneous biomedical data. The new algorithms combine data mining and biomedical informatics methods to create “concept bags” in the same way that “bags of words” are used in data mining and text retrieval. The methods are highly configurable and were tested in five different ways on different types of biomedical data. They performed well in computing similarity between medical terms and data elements, both critical for semi-automatic and automatic data integration operations.
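One natural instantiation of the concept-bag idea is to represent each term as a set of concept codes and score overlap with a set similarity such as Jaccard. This is a sketch of the general idea, not the paper's exact metric, and the concept codes below are hypothetical examples.

```python
def concept_bag_similarity(bag_a, bag_b):
    """Jaccard similarity between two bags (sets) of concept codes.
    A sketch of the 'concept bag' idea; the paper's exact similarity
    measure and concept vocabularies may differ."""
    a, b = set(bag_a), set(bag_b)
    if not a and not b:
        return 1.0            # two empty bags: treat as identical
    return len(a & b) / len(a | b)

# Hypothetical concept codes for two related medical terms
heart_attack = {"C0027051", "C0018787"}
mi           = {"C0027051", "C0018787", "C0002962"}
print(round(concept_bag_similarity(heart_attack, mi), 2))  # 0.67
```

Two terms with different surface forms but overlapping concept sets score high, which is what makes the approach useful for matching semantically heterogeneous data elements.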

Richard L. Bradshaw, Ramkiran Gouripeddi, Julio C. Facelli
A Reliable Method to Remove Batch Effects Maintaining Group Differences in Lymphoma Methylation Case Study

The amount of biological data is increasing, and its analysis is becoming one of the most challenging topics in the information sciences. Before starting an analysis, it is important to remove unwanted variability due to factors such as year of sequencing, laboratory conditions, and use of different protocols. This is a crucial step: if the variability is not assessed before the analysis of interest, the results may be misleading and the conclusions invalid. The literature suggests several established mathematical models, but experience shows that applying them to high-throughput data with a non-uniform study design is not straightforward and may in many cases introduce a false signal. It is therefore necessary to develop models that remove the effects that can negatively influence the study while preserving biological meaning. In this paper we report a new case study on lymphoma methylation data and propose a suitable pipeline for its analysis.

Giulia Pontali, Luciano Cascione, Alberto J. Arribas, Andrea Rinaldi, Francesco Bertoni, Rosalba Giugno
Predict Breast Tumor Response to Chemotherapy Using a 3D Deep Learning Architecture Applied to DCE-MRI Data

Purpose: Many breast cancer patients receiving chemotherapy do not achieve a positive response. The main objective of this study is to predict the intra-tumor breast cancer response to neoadjuvant chemotherapy (NAC), providing an early prediction that avoids unnecessary treatment sessions for non-responding patients. Method and material: Three-dimensional dynamic contrast-enhanced magnetic resonance images (DCE-MRI) were collected for 42 patients with local breast cancer. This retrospective study is based on data provided by our collaborating radiology institute in Brussels. According to the pathological complete response (pCR) ground truth, 14 of these patients responded positively to chemotherapy and 28 did not. A convolutional neural network (CNN) model was used to classify responsive and non-responsive patients. The classification uses a two-branch CNN architecture that takes as inputs three views of two aligned DCE-MRI cropped volumes acquired before and after the first chemotherapy. The data were split into 20% for validation and 80% for training, and cross-validation was used to evaluate the proposed CNN model. The area under the receiver operating characteristic curve (AUC) and accuracy were used to assess the model’s performance. Results: The proposed CNN architecture predicted the breast tumor response to chemotherapy with an accuracy of 91.03% and an AUC of 0.92. Discussion: Although the number of subjects remains limited, relevant results were obtained by using data augmentation and three-dimensional tumor DCE-MRI. Conclusion: Deep CNN models can be exploited to solve breast cancer follow-up related problems, and the obtained model could be applied to future clinical data beyond breast images.

Mohammed El Adoui, Stylianos Drisis, Mohammed Benjelloun
Data Fusion for Improving Sleep Apnoea Detection from Single-Lead ECG Derived Respiration

This work presents two algorithms for detecting apnoeas from the single-lead electrocardiogram-derived respiratory signal (EDR). The first algorithm is based on frequency analysis of the EDR amplitude variation using the Lomb-Scargle periodogram; the second detects sleep apnoeas through temporal analysis of the EDR amplitude variation. Both algorithms achieve accuracies of around 90%. To improve the robustness of the detection process, we propose fusing the results of both techniques through Dempster-Shafer evidence theory. After fusing the EDR-based algorithm results, 84% of the detected apnoeas have a confidence level above 90%.
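The fusion step combines the two detectors' outputs with Dempster's rule of combination. A minimal sketch over the frame {apnoea, normal}, with an explicit ignorance mass, looks like the following; the mass assignments are hypothetical, not the paper's.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {'apnoea','normal'}
    (plus the ignorance set 'either') with Dempster's rule: multiply
    masses of intersecting hypotheses and renormalize by 1 - conflict.
    The mass values used below are hypothetical."""
    hypotheses = ["apnoea", "normal", "either"]
    combined = {h: 0.0 for h in hypotheses}
    conflict = 0.0
    for h1 in hypotheses:
        for h2 in hypotheses:
            product = m1[h1] * m2[h2]
            if h1 == h2:
                combined[h1] += product
            elif "either" in (h1, h2):   # intersection with ignorance
                combined[h1 if h2 == "either" else h2] += product
            else:                        # disjoint hypotheses -> conflict
                conflict += product
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two detectors that both lean towards 'apnoea' on the same minute
freq_detector = {"apnoea": 0.7, "normal": 0.2, "either": 0.1}
time_detector = {"apnoea": 0.6, "normal": 0.3, "either": 0.1}
fused = dempster_combine(freq_detector, time_detector)
print(round(fused["apnoea"], 3))  # 0.821
```

When both sources agree, the fused belief exceeds either individual mass, which is how fusion raises the confidence level of concordant detections.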

Ana Jiménez Martín, Alejandro Cuevas Notario, J. Jesús García Domínguez, Sara García Villa, Miguel A. Herrero Ramiro
Instrumented Shoes for 3D GRF Analysis and Characterization of Human Gait

The main objective of this work is to develop a computerized system based on instrumented shoes to characterize and analyze the human gait. The system uses instrumented shoes connected to a personal computer to provide the 3D Ground Reaction Forces (GRF) patterns of the human gait. This system will allow a much more objective understanding of the clinical evolution of patients, enabling a more effective functional rehabilitation of a patient’s gait. The sample rate of the acquisition system is 100 Hz and the system uses wireless communication.

João P. Santos, João P. Ferreira, Manuel Crisóstomo, A. Paulo Coimbra
Novel Four Stages Classification of Breast Cancer Using Infrared Thermal Imaging and a Deep Learning Model

According to a recent study conducted in 2016, 2.8 million women worldwide had already been diagnosed with breast cancer; moreover, the medical care of a patient with breast cancer is costly, and given that cost and the value of preserving citizens’ health, breast cancer prevention has become a public health priority. Several techniques have emerged over the past 60 years, such as mammography, which is frequently used for breast cancer diagnosis. However, mammography can yield false positives, where the diagnosis is contradicted by another technique, and its potential side effects may encourage patients and physicians to look for other diagnostic methods. This article presents a novel technique based on InceptionV3 coupled to k-Nearest Neighbors (InceptionV3-KNN) and a dedicated module that we named “StageCancer.” The technique classifies breast cancer into four stages (T1: non-invasive breast cancer; T2: the tumor measures up to 2 cm; T3: the tumor is larger than 5 cm; and T4: the full breast is covered by cancer).

Sebastien Mambou, Ondrej Krejcar, Petra Maresova, Ali Selamat, Kamil Kuca
Comparison of Numerical and Laboratory Experiment Examining Deformation of Red Blood Cell

In this work we deal with a numerical model of the red blood cell (RBC), which is compared to a real RBC. Our aim is to verify the accuracy of the model by comparing the behavior of simulated RBCs with that of real cells in equivalent situations. For this comparison, we chose a microfluidic channel with narrow constrictions that had already been explored in laboratory conditions with real RBCs. The relation between a cell’s deformation and its velocity is the basis of comparison between the simulated and real RBCs. We conclude that the velocity-deformation dependence is similar in the two compared experiments.

Kristina Kovalcikova, Ivan Cimrak, Katarina Bachrata, Hynek Bachraty
Levenberg-Marquardt Variants in Chrominance-Based Skin Tissue Detection

The Levenberg-Marquardt method is a very useful tool for solving nonlinear curve-fitting problems, and it is also a promising alternative for weight adjustment in feed-forward neural networks. By forcing the Hessian matrix to stay positive definite, the parameter λ also turns the algorithm into two well-known variants: steepest descent and Gauss-Newton. Given the computation time, the results achieved by these methods differ while minimizing the sum of squared errors with an acceptable accuracy rate in skin tissue recognition. In this paper we therefore implement these variants in network training on a set of tissue samples taken from the SFA human skin database. The RGB images from the set are converted into the YCbCr color space, and the networks are trained individually with each method to create weight arrays that minimize the squared errors between the pixel values and the function output. The images, which show hands on computer keyboards, are analyzed to find skin tissue, with the goals of achieving high accuracy at low computation time and of comparing the methods.
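The preprocessing step converts RGB pixels to YCbCr, whose chrominance channels (Cb, Cr) cluster skin tones more tightly than raw RGB. The full-range ITU-R BT.601 transform used in JPEG is one common formulation; the paper's exact constants may differ.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion (JPEG variant).
    One common formulation; the paper's exact constants may differ."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

# A typical light skin-tone pixel (hypothetical values)
y, cb, cr = rgb_to_ycbcr(224, 172, 150)
print(round(y), round(cb), round(cr))  # 185 108 156
```

Because luminance Y absorbs most illumination variation, a chrominance-based classifier can threshold or learn on (Cb, Cr) alone.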

Ayca Kirimtat, Ondrej Krejcar, Ali Selamat
A Mini-review of Biomedical Infrared Thermography (B-IRT)

Infrared thermography (IRT) is a non-destructive imaging technique used to reveal temperature differences on the surfaces of the human body or of objects. When used for biomedical purposes, it measures the radiation emitted by the surfaces of the human body. In this research, we present Biomedical Infrared Thermography (B-IRT) applications with various measurement methods, analysis types, analysis schemes, and study types from the existing literature in a detailed literature matrix. A mini-review of 30 studies is presented, focusing on substantial features and backgrounds. Finally, recent advances and future opportunities are discussed to highlight the high potential of IRT in biomedical applications.

Ayca Kirimtat, Ondrej Krejcar, Ali Selamat
Mathematical Modeling and Docking of Medicinal Plants and Synthetic Drugs to Determine Their Effects on Abnormal Expression of Cholinesterase and Acetyl Cholinesterase Proteins in Alzheimer

Alzheimer’s is a neurodegenerative disease that typically begins slowly and worsens over time. Around 70% of the risk is considered hereditary, with numerous genes usually involved. Objective: there is no adequate medication for the disease. At present, the most acknowledged Alzheimer’s treatment is cholinesterase inhibitors, which inactivate the acetylcholinesterase enzyme to increase acetylcholine levels in the brain. Medicinal plants are also used against Alzheimer’s. Here, an in silico attempt is made to compare medicinal plants with synthetic drugs to determine the efficacy of medicinal plants in the treatment of Alzheimer’s; data are collected for both medicinal plants and commercial drugs. Interactions of compounds with proteins are determined, and side effects are calculated and compared to find the best among them. Mathematical modeling of all the drugs is performed to determine their effects on protein expression. We observe that none of the synthetic drugs interacts with acetylcholinesterase, while the chemical constituents of medicinal plants show stronger interactions with both cholinesterase and acetylcholinesterase than the synthetic drugs. The mathematical modeling of the compounds also confirmed the inhibitory effects of the medicinal plant compounds on the proteins, whereas the synthetic drugs showed an increase in the protein expression levels. On the basis of these results, we suggest that these chemical constituents are better suited as remedies against Alzheimer’s.

Shaukat Iqbal Malik, Anum Munir, Ghulam Mujtaba Shah, Azhar Mehmood
Definition of Organic Processes via Digital Monitoring Systems

Digital monitoring systems in medical practice are among the instruments for determining organic processes. Various kinds of organic processes, such as childbirth, growth, development, eating, ageing, and dying, can be witnessed by digital monitoring systems. The aim is to establish knowledge about working bodies and to minimize the spread of illness or negative processes. The systems witness human actions, the functioning of invisible organs, and very small changes in human organisms. Witnessing is linked with measuring and with judgemental codes in computer programs. Sensors convert the state of the body into numeric parameters, measure the state of the organism, and compare it with required levels. As a result, digital monitoring systems mark changes and transgressions of norms in individual organic processes. In contemporary medicine, reaching certain numeric parameters has become a strategy for improving organic processes, and device witnessing connected with measuring and judgemental codes is an instrument for it. Doctors or patients look at the results of bodily functioning and make their own decisions based on them. The specificity of witnessing organic processes through digital monitoring systems lies in its digital nature: digital code is the basis for witnessing and measuring. It is created to preserve the body’s normal functioning, so digital code presents a new reality (order) that helps to understand the state of the organism. In this paper we address the question of how digital code correlates with the principles of the body’s functioning. What are the positive and negative aspects of digital determination? How can digital monitoring systems advance the functioning of the organism, and at what cost?

Svetlana Martynova, Denis Bugaev
Detection of Pools of Bacteria with Public Health Importance in Wastewater Effluent from a Municipality in South Africa Using Next Generation Sequencing and Metagenomics Analysis

Wastewater effluents always carry potential human health risks, as diverse pathogenic microorganisms are harboured in them, especially if they are untreated or poorly treated. They release pathogens into the environment, and these may find their way into the food cycle. This paper reports the findings of our research, which focused on characterizing microorganisms from a municipal final wastewater effluent that receives the bulk of its spent water from a research farm. High-throughput sequencing on an Illumina MiSeq instrument and metagenomics analysis showed a high abundance of microbial genes, dominated by Bacteria (99.88%) but also containing Archaea (0.07%) and Viruses (0.05%). Most prominent in the bacterial group is Proteobacteria (86.6%), a major phylum containing a wide variety of pathogens such as Escherichia, Salmonella, Vibrio, and Helicobacter. Further analysis showed that the genus Thauera was the most abundant across all 6 data sets, while Thiomonas and Bacteroides propionicifaciens were also well represented. The presence of some of the detected bacteria, such as Corynebacterium crenatum, indicated degradation and/or fermentation in the effluent, evidenced by fouling during sampling. Notable pathogens classified as critical by the World Health Organization (WHO) criteria for research and development, including Acinetobacter sp., Escherichia coli, and Pseudomonas sp., were being released to the environment in the effluent. Our results suggest a potential influence of wastewater effluent on the microbial community structure of the receiving water bodies and the environment, as well as possible effects on individuals exposed to the effluents. The evidence from this study suggests an imminent public health problem that may become sporadic if the discharged effluent is not properly treated. This situation is also a potential contributor of antimicrobial resistance genes to natural environments.

Anthony Ayodeji Adegoke, Emmanuel Adetiba, Daniel T. Babalola, Matthew B. Akanle, Surendra Thakur, Anthony I. Okoh, Olayinka Ayobami Aiyegoro

Computational Genomics/Proteomics

Frontmatter
Web-Based Application for Accurately Classifying Cancer Type from Microarray Gene Expression Data Using a Support Vector Machine (SVM) Learning Algorithm

Intelligent optimization algorithms have been widely used to deal with complex nonlinear problems. In this paper, we have developed an online tool for accurate cancer classification using a Support Vector Machine (SVM) algorithm, which can predict a lung cancer type with an accuracy of approximately 95%. Based on the user specifications, we wrote this suite in Python and HTML, backed by a MySQL relational database. A Linux server supporting the CGI interface hosts the application and database; the hardware requirements on the server side are moderate. Bounds and ranges have also been considered and need to be used according to the user instructions. The developed web application is easy to use: data can be quickly entered and retrieved, and it is accessible through any web browser connected to the firewall-protected network. We have provided adequate server and database security measures. Notable advantages of this system are that it runs entirely in the web browser with no client software needed, that it runs on an industry-standard server supporting major operating systems (Windows, Linux and OSX), and that it supports uploading external files. The developed application will help researchers utilize machine learning tools for classifying cancer and its related genes. Availability: the application is hosted on our personal Linux server and can be accessed at: http://131.96.32.330/login-system/index.php .

Shrikant Pawar
Associating Protein Domains with Biological Functions: A Tripartite Network Approach

Protein domains are key determinants of protein function. However, a large number of domains have no recorded functional annotation. These domains of unknown function (DUFs) are a recognised problem, and efforts have been made to remedy this situation, including the use of data such as structural and sequence similarity and annotation data such as that of the Gene Ontology (GO) and the Enzyme Commission. Here, we present a new approach based on tripartite network analysis to assign functional terms to DUFs. We combine functional annotation at the protein level, taken from GO, KEGG, Reactome and UniPathway, with structural domain annotation taken from the CATH-Gene3D resource. We validate our method using 10-fold cross-validation and find it performs well when assigning annotation from the UniPathway, Reactome and GO resources, but less well for KEGG. We also explored a finer functional subclassification of CATH superfamilies (FunFams), but these families were found to be too specific in this context.

Elena Rojano, James Richard Perkins, Ian Sillitoe, Christine Orengo, Juan Antonio García Ranea, Pedro Seoane
Interaction of ZIKV NS5 and STAT2 Explored by Molecular Modeling, Docking, and Simulations Studies

ZIKV NS5 has been associated with inhibition of type I IFN during the host antiviral response. The protein-protein interaction may promote the proteasomal degradation of STAT2, although the entire mechanism is still unknown. In this study, a three-dimensional model of the full STAT2 protein (C-score = -0.62) was validated. Likewise, the top-scored docked NS5-STAT2 complex is presented among several other models; the top model shows a total stabilizing energy for the complex of -77.942 kcal mol^-1 and a Gibbs binding free energy of -4.30 kcal mol^-1. The analysis of the complex revealed that the interaction is limited to three domains, the N-terminal domain of STAT2 and the MTase and Thumb domains of NS5, all located in the ordered regions of these proteins. The key residues in the interaction interface that appeared most frequently among the models are stabilized by electrostatic interactions, hydrophobic interactions, salt bridges, and ionic interactions. Our findings therefore support the preliminary experimental observations reported in the literature and provide additional structural characterization that will help drug design efforts against ZIKV NS5.

Gerardo Armijos-Capa, Paúl Pozo-Guerrón, F. Javier Torres, Miguel M. Méndez
Entropy-Based Detection of Genetic Markers for Bacteria Genotyping

Genotyping is necessary for the discrimination of bacterial strains. However, methods such as multilocus sequence typing (MLST) or minim typing (mini-MLST) use a combination of several genes. In this paper, we present an augmented method for typing Klebsiella pneumoniae using highly variable fragments of its genome, identified from the entropy of individual sequence positions. Our method employs both coding and non-coding parts of the genome. These findings may lead to a decrease in the number of variable parts used in genotyping and make laboratory methods faster and cheaper.
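The position-entropy idea can be sketched directly: for each alignment column, compute the Shannon entropy of the base distribution; high-entropy columns are candidate variable fragments. This is an illustrative sketch of the selection criterion, not the paper's full pipeline, and the toy alignment is hypothetical.

```python
import math
from collections import Counter

def positional_entropy(sequences):
    """Shannon entropy (bits) of each column of an alignment of
    equal-length sequences. High-entropy positions are candidate
    variable fragments for genotyping. Illustrative sketch only."""
    entropies = []
    for column in zip(*sequences):
        counts = Counter(column)
        total = len(column)
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(max(0.0, h))  # guard against -0.0 for conserved columns
    return entropies

# Toy alignment: position 0 is fully variable, positions 1-2 conserved,
# position 3 has two equally frequent bases.
seqs = ["ACGT", "TCGT", "GCGA", "CCGA"]
print(positional_entropy(seqs))  # [2.0, 0.0, 0.0, 1.0]
```

Ranking positions by this score and keeping the top fraction yields a compact set of discriminative loci, in the spirit of reducing the variable parts needed for typing.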

Marketa Nykrynova, Denisa Maderankova, Vojtech Barton, Matej Bezdicek, Martina Lengerova, Helena Skutkova
Clustering of Klebsiella Strains Based on Variability in Sequencing Data

Genotyping is a method necessary to distinguish between strains of bacteria. Using whole sequences for analysis is a computationally demanding and time-consuming approach. We establish a workflow to convert sequences into a numerical signal representing their variability. After segmentation, using only parts of the signals, they still retain enough information to reproduce the clustering structure.

Vojtech Barton, Marketa Nykrynova, Matej Bezdicek, Martina Lengerova, Helena Skutkova
Addition of Pathway-Based Information to Improve Predictions in Transcriptomics

The diagnosis and prognosis of cancer are among the more critical challenges that modern medicine confronts. In this sense, personalized medicine aims to use data from heterogeneous sources to estimate the evolution of the disease for each specific patient in order to fit the most appropriate treatments. In recent years, DNA sequencing data have boosted cancer prediction and treatment by supplying genetic information that has been used to design genetic signatures or biomarkers, leading to a better classification of the different subtypes of cancer as well as to a better estimation of the evolution of the disease and the response to diverse treatments. Several machine learning models have been proposed in the literature for cancer prediction. However, the efficacy of these models can be seriously affected by the imbalance between the high dimensionality of the gene expression feature sets and the number of samples available, known as the curse of dimensionality. Although linear predictive models may give worse performance rates than more sophisticated non-linear models, they have the main advantage of being interpretable, and the use of domain-specific information has proved useful for boosting the performance of multivariate linear predictors in high-dimensional settings. In this work, we design a set of linear predictive models that incorporate domain-specific information from genetic pathways for effective feature selection. By combining these linear models with other classical machine learning models, we obtain state-of-the-art performance rates in the prediction of vital status on a public cancer dataset.

Daniel Urda, Francisco J. Veredas, Ignacio Turias, Leonardo Franco
Common Gene Regulatory Network for Anxiety Disorder Using Cytoscape: Detection and Analysis

Data mining, computational biology, and statistics are unified in the vast research area of bioinformatics. Within this diverse field, protein-protein interaction (PPI) is crucial for functional biological progress. In this work, an investigation has been carried out using several modules of the data mining process. It detects and analyzes the gene regulatory network and the PPI network for anxiety disorders, and from this investigation a novel pathway has been found. Numerous studies have shown a strong association of diabetes, kidney disease, and stroke with the most damaging anxiety disorders. We therefore expect this research to open a new horizon in several areas of life science as well as bioinformatics.

Md. Rakibul Islam, Md. Liton Ahmed, Bikash Kumar Paul, Sayed Asaduzzaman, Kawsar Ahmed
Insight About Nonlinear Dimensionality Reduction Methods Applied to Protein Molecular Dynamics

The advance of molecular dynamics (MD) techniques has made this method common in studies of the physicochemical and conformational properties of proteins. However, the analysis may be difficult, since MD generates many conformations with high dimensionality. Among the methods used to address this problem, machine learning has been used to find a lower-dimensional manifold, called the “intrinsic dimensionality space”, which is embedded in a high-dimensional space and represents the essential motions of proteins. To identify this manifold, the Euclidean distances between intra-molecular Cα atoms of each conformation were used. The approach combined dimensionality reduction (AutoEncoder, Isomap, t-SNE, MDS, Spectral and PCA methods) with the Ward algorithm to group similar conformations and find representative structures. The findings show that the Spectral and Isomap methods were able to generate low-dimensionality spaces providing good insights into the class separation of conformations. As they are nonlinear methods, the generated low-dimensional spaces represent protein motions better than the PCA embedding, so they can be considered alternatives to full MD analyses.
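The feature construction step described above (pairwise Euclidean distances between intramolecular Cα atoms per conformation) can be sketched as follows; the 4-residue coordinates are toy values, and real pipelines would feed these vectors to the dimensionality reduction methods.

```python
import math
from itertools import combinations

def ca_distance_features(ca_coords):
    """Flatten one conformation into its pairwise Euclidean distances
    between C-alpha atoms - the rotation/translation-invariant feature
    vector used as input for dimensionality reduction. Toy coordinates."""
    return [math.dist(p, q) for p, q in combinations(ca_coords, 2)]

# Toy 4-residue conformation: (x, y, z) per C-alpha atom
conformation = [(0, 0, 0), (3, 0, 0), (3, 4, 0), (0, 4, 0)]
features = ca_distance_features(conformation)
print(features)  # [3.0, 5.0, 4.0, 4.0, 5.0, 3.0]
```

A trajectory of N residues yields N(N-1)/2 distances per frame, which is why the resulting space is high-dimensional and motivates the manifold-learning step.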

Vinicius Carius de Souza, Leonardo Goliatt, Priscila V. Z. Capriles

Computational Systems for Modelling Biological Processes

Frontmatter
HIV Drug Resistance Prediction with Categorical Kernel Functions

Antiretroviral drugs are a very effective therapy against HIV infection. However, the high mutation rate of HIV permits the emergence of variants that can be resistant to the drug treatment. In this paper, we propose the use of categorical kernel functions to predict the resistance to 18 drugs from virus sequence data. These kernel functions are able to take into account particularities of HIV data, such as allele mixtures, and to integrate additional knowledge about the major resistance-associated protein positions described in the literature.
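A categorical kernel can handle an allele mixture naturally by treating each sequence position as a set of amino acids and scoring set overlap. The formulation below (mean per-position Jaccard overlap) is a hypothetical sketch of this idea; the paper's kernels may be weighted differently or incorporate position-specific knowledge.

```python
def overlap_kernel(seq_a, seq_b):
    """Categorical kernel between two protein sequences in which each
    position is a set of amino acids (a singleton, or an allele
    mixture). Scores the mean Jaccard overlap per position.
    A hypothetical formulation; the paper's kernels may differ."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    score = 0.0
    for a, b in zip(seq_a, seq_b):
        score += len(a & b) / len(a | b)
    return score / len(seq_a)

# Position 2 of the first sequence is an allele mixture {I, V}
s1 = [{"M"}, {"K"}, {"I", "V"}, {"L"}]
s2 = [{"M"}, {"R"}, {"V"},      {"L"}]
print(overlap_kernel(s1, s2))  # 0.625
```

A kernel matrix built this way can be passed to any kernel machine (e.g. an SVM with a precomputed kernel) to predict resistance labels.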

Elies Ramon, Miguel Pérez-Enciso, Lluís Belanche-Muñoz
Deciphering General Characteristics of Residues Constituting Allosteric Communication Paths

Allostery is one of the most important processes in molecular biology, by which proteins transmit information from one functional site to another, frequently distant, site. Information on ligand binding or on post-translational modification at one site is transmitted along an allosteric communication path to another functional site, allowing regulation of protein activity. A detailed analysis of the general character of allosteric communication paths is therefore extremely important: it enables a better understanding of the mechanism of allostery and can be used for the design of new generations of drugs. Considering all the PDB-annotated allosteric proteins (from the AlloSteric Database, ASD) belonging to four different classes (kinases, nuclear receptors, peptidases and transcription factors), this work attempts to decipher consistent patterns in the residues constituting the allosteric communication sub-system (ACSS). The thermal fluctuations of hydrophobic residues in ACSSs were found to be significantly higher than those in the non-ACSS parts of the same proteins, while polar residues showed the opposite trend. Basic and hydroxyl residues were found to be slightly more predominant than acidic and amide residues in ACSSs, and hydrophobic residues were found extremely frequently in kinase ACSSs. Despite having different sequences and ACSSs of different lengths, the ACSSs were found to be structurally quite similar to each other, suggesting a preferred structural template for communication. ACSS structures recorded low RMSD and high Akaike Information Criterion (AIC) scores among themselves. While the ACSS networks for all groups of allosteric proteins showed low degree centrality and closeness centrality, the betweenness centrality magnitudes revealed non-uniform behavior. Though cliques and communities could be identified within the ACSSs, a maximal common subgraph over all ACSSs could not be generated, primarily due to the diversity in the dataset. Barring one particular case, the entire ACSS for any class of allosteric proteins did not demonstrate “small world” behavior, though sub-graphs of the ACSSs, in certain cases, were found to form small-world networks.
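The closeness-centrality measure used in the network analysis above can be computed on an unweighted residue-contact graph with a breadth-first search. The chain-like toy graph below stands in for a communication path; it is illustrative only, not data from the study.

```python
from collections import deque

def closeness_centrality(graph, node):
    """Closeness centrality of `node` in a connected unweighted graph
    given as {node: set(neighbours)}: (n - 1) divided by the sum of
    shortest-path distances from `node` to every other node.
    Toy residue-contact graph; illustrative only."""
    dist = {node: 0}
    queue = deque([node])
    while queue:                      # breadth-first search
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(graph) - 1) / total if total else 0.0

# Toy chain of residues A-B-C-D standing in for a communication path
path = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
print(closeness_centrality(path, "B"))  # (4-1)/(1+1+2) = 0.75
```

Chain-like paths give low closeness for end residues and higher values in the middle, consistent with the generally low centralities reported for path-shaped ACSS networks.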

Girik Malik, Anirban Banerji, Maksim Kouza, Irina A. Buhimschi, Andrzej Kloczkowski
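The centrality measures discussed in the abstract are straightforward to compute on a residue-contact network. As a toy illustration (the residue labels and contacts below are invented, not taken from the paper's dataset), degree and closeness centrality might be sketched as:

```python
from collections import deque

def degree_centrality(adj):
    """Degree centrality: fraction of the other nodes each node touches."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj, v):
    """Closeness: (reachable nodes) / (sum of BFS shortest-path distances)."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total > 0 else 0.0

# Hypothetical contact network: nodes are residues, edges are contacts
adj = {
    'A': {'B', 'C'},
    'B': {'A', 'C', 'D'},
    'C': {'A', 'B'},
    'D': {'B', 'E'},
    'E': {'D'},
}
deg = degree_centrality(adj)
clo = {v: closeness_centrality(adj, v) for v in adj}
print(deg['B'], clo['B'])
```

Low values of these measures across an ACSS, as reported above, indicate a sparsely connected communication path rather than a hub-dominated one.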
Feature (Gene) Selection in Linear Homogeneous Cuts

A layer of formal neurons ranked on the basis of given learning data sets can linearize these sets. This means that such sets become linearly separable as a result of transforming the feature vectors forming them through the ranked layer. After the transformation by the ranked layer, each learning set can be separated by a hyperplane from the union of the other learning sets. A ranked layer can be designed from formal neurons as a result of multiple homogeneous cuts of the learning sets by separating hyperplanes. Each separating hyperplane should cut off a large number of feature vectors from only one learning set. Successive separating hyperplanes can be found through the minimization of convex and piecewise-linear (CPL) criterion functions. The regularized CPL criterion functions can also be involved in feature selection tasks during successive cuts.

Leon Bobrowski, Tomasz Łukaszuk
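The core idea of a homogeneous cut, a hyperplane whose positive side contains vectors from only one learning set, can be illustrated without the CPL machinery. This is a deliberately simplified, axis-aligned sketch (the paper minimizes CPL criterion functions instead; the data below is invented):

```python
def best_clean_cut(X, y, target):
    """Find an axis-aligned cut x[j] > t that cuts off as many
    'target'-class vectors as possible while cutting off none
    from the other classes (a 'clean' homogeneous cut)."""
    best = (0, None, None)  # (count, feature index, threshold)
    n_feat = len(X[0])
    for j in range(n_feat):
        others = [x[j] for x, c in zip(X, y) if c != target]
        t = max(others)  # everything strictly above t is purely 'target'
        count = sum(1 for x, c in zip(X, y) if c == target and x[j] > t)
        if count > best[0]:
            best = (count, j, t)
    return best

# Toy feature vectors (e.g. two gene-expression features) and class labels
X = [(0.2, 1.0), (0.4, 2.0), (0.3, 3.5), (0.9, 3.8), (0.8, 0.5)]
y = ['a', 'a', 'b', 'b', 'a']
count, j, t = best_clean_cut(X, y, target='b')
print(count, j, t)
```

Repeating such cuts, each removing the vectors it cut off, yields the ranked layer described above; the CPL formulation generalizes this to arbitrary (non-axis-aligned) hyperplanes.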
DNA Sequence Alignment Method Based on Trilateration

The effective comparison of biological data sequences is an important and challenging task in bioinformatics. The sequence alignment process itself is a way of arranging DNA sequences in order to identify similar regions that may be a consequence of functional, structural or evolutionary relations between them. A new, effective and unified method for sequence alignment on the basis of trilateration, called the CAT method, using C (cytosine), A (adenine) and T (thymine) benchmarks, is presented in this paper. This method suggests solutions to three major problems in sequence alignment: creating a constant favorite sequence, reducing the number of comparisons with the favorite sequence, and unifying/standardizing the favorite sequence by defining benchmark sequences.

Veska Gancheva, Hristo Stoev
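The trilateration idea, locating a sequence by its distances to fixed benchmarks, can be sketched in a few lines. This is only an illustrative signature (the actual CAT method's benchmark construction and distance measure are defined in the paper; here the benchmarks are simply homopolymers and the distance is a mismatch count):

```python
def distance_to_benchmark(seq, base):
    """Toy distance of a DNA sequence to a homopolymer benchmark:
    the number of positions that differ from the benchmark base."""
    return sum(1 for ch in seq if ch != base)

def cat_coordinates(seq):
    """Map a sequence to its distances to the C, A and T benchmarks,
    a cheap trilateration-style signature for pre-filtering comparisons."""
    return tuple(distance_to_benchmark(seq, b) for b in 'CAT')

s1 = 'CATTAG'
s2 = 'CATTAA'
print(cat_coordinates(s1), cat_coordinates(s2))
```

Sequences whose coordinate triples differ greatly need not be aligned in full, which is one way such benchmarks can reduce the number of pairwise comparisons.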
Convolutional Neural Networks for Red Blood Cell Trajectory Prediction in Simulation of Blood Flow

Computer simulations of blood flow in microfluidic devices are an important tool for making their development and optimization more efficient. These simulations quickly become limited by their computational complexity. Analysis of large output data by machine learning methods is a possible solution to this problem. In this paper we apply deep learning methods, namely convolutional neural networks (CNNs), for the description and prediction of red blood cells’ trajectories, which is crucial in modeling blood flow. We evaluated several types of CNN architectures, formats of their input data and learning methods on simulation data inspired by a real experiment. The results we obtained establish a starting point for further use of deep learning methods in reducing the computational demands of microfluidic device simulations.

Michal Chovanec, Hynek Bachratý, Katarína Jasenčáková, Katarína Bachratá
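The basic mechanics of a convolutional trajectory predictor, sliding a kernel over a window of past positions and reading out a prediction, can be shown in miniature. This sketch is not the paper's architecture (which uses trained multi-layer CNNs); the kernel and readout weights here are hand-picked for illustration:

```python
def conv1d(x, kernel):
    """Valid 1-D convolution (cross-correlation) over a sequence."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def relu(x):
    return [max(0.0, v) for v in x]

def predict_next(positions, kernel, weights, bias):
    """Tiny CNN-style predictor: conv -> ReLU -> linear readout."""
    hidden = relu(conv1d(positions, kernel))
    return sum(w * v for w, v in zip(weights, hidden)) + bias

# Hypothetical past x-coordinates of a cell along the channel
track = [0.0, 0.1, 0.25, 0.45, 0.7]
kernel = [-1.0, 1.0]             # finite-difference "velocity" filter
h = relu(conv1d(track, kernel))  # per-step displacements
# Readout that extrapolates the last position by a weighted last velocity
pred = predict_next(track, kernel, [0.0, 0.0, 0.0, 1.2], track[-1])
print(h, pred)
```

A trained CNN learns the kernel and readout weights from simulation data instead of using a hand-crafted velocity filter, but the forward pass has this shape.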
Platform for Adaptive Knowledge Discovery and Decision Making Based on Big Genomics Data Analytics

In recent years, researchers and analysts worldwide have come to regard big data as a revolution in scientific research and one of the most promising trends; it has given impetus to the intensive development of methods and technologies for its investigation and has resulted in the emergence of a new paradigm for scientific research: Data-Intensive Scientific Discovery (DISD). The paper presents a platform for adaptive knowledge discovery and decision making tailored to the target of scientific research. The major advantage is the automatic generation of hypotheses and options for decisions, as well as verification and validation utilizing standard data sets and the expertise of scientists. The platform is implemented on the basis of a scalable framework and a scientific portal providing access to the knowledge base and the software tools, as well as opportunities for knowledge sharing and technology transfer.

Plamenka Borovska, Veska Gancheva, Ivailo Georgiev
Simulation Approaches to Stretching of Red Blood Cells

In this article we give a brief overview of the elastic parameters of our red blood cell (RBC) model. Next we discuss the importance of calibrating these parameters to obtain behaviour close to that of the actual RBC. For this purpose we used a widely known experiment in which RBCs are stretched by attaching silica beads to their membrane and applying force to these beads. The main focus of this article is how to model this stretching process and, more precisely, how to choose between several options available for simulating the optical tweezers stretching. We compared three approaches, “points”, “hat” and “ring”, and based on simulation results selected the “ring” approach. Afterwards we computed working “rings” for different cell discretizations.

Alžbeta Bohiniková, Katarína Bachratá
Describing Sequential Association Patterns from Longitudinal Microarray Data Sets in Humans

DNA microarray technology provides a powerful vehicle for exploring biological processes on a genomic scale. Machine-learning approaches such as association rule mining (ARM) have proven very effective in extracting biologically relevant associations among different genes. Despite the usefulness of ARM, time relations among associated genes cannot be modeled with a standard ARM approach, even though temporal information is critical for understanding regulatory mechanisms in biological processes. Sequential rule mining (SRM) methods have instead been proposed for mining temporal relations in temporal data. Although successful, existing SRM applications on temporal microarray data have been exclusively designed for in vitro experiments in yeast, and no extension to in vivo data sets has been proposed to date. Contrary to what happens with in vitro experiments, when dealing with microarray data derived from humans or animals, “subject variability” is the main issue to address, so that databases include multiple sequences instead of a single one. A wide variety of SRM approaches could be used to handle these particularities. In this study, we propose an adaptation of the SRM method “CMRules” to extract sequential association rules from temporal gene expression data derived from humans. In addition to the data mining process, we further propose the validation of extracted rules through the integration of results with external resources of biological knowledge (functional and pathway annotation databases). The employed data set consists of temporal gene expression data collected at three different time points during the course of a dietary intervention in 57 subjects with obesity (data set available under identifier GSE77962 in the Gene Expression Omnibus repository).
Published by Vink [1], the original clinical trial investigated the effects on weight loss of two different dietary interventions (a low-calorie diet or a very-low-calorie diet). In conclusion, the proposed method demonstrated a good ability to extract sequential association rules with biological relevance within the context of obesity. Thus, the application of this method could be successfully extended to other longitudinal microarray data sets derived from humans.

Augusto Anguita-Ruiz, Alberto Segura-Delgado, Rafael Alcala, Concepción Maria Aguilera, Jesus Alcala-Fernandez
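The key twist the abstract describes, counting a sequential rule across many per-subject sequences rather than within one long sequence, can be sketched directly. This is a simplified support/confidence count, not the CMRules algorithm itself, and the gene-event labels below are invented toy data:

```python
def rule_stats(sequences, antecedent, consequent):
    """Support and confidence of the sequential rule
    'antecedent occurs, then consequent occurs later',
    counted across subjects (one event sequence per subject)."""
    has_ante = 0
    holds = 0
    for seq in sequences:
        if antecedent in seq:
            has_ante += 1
            if consequent in seq[seq.index(antecedent) + 1:]:
                holds += 1
    support = holds / len(sequences)
    confidence = holds / has_ante if has_ante else 0.0
    return support, confidence

# Hypothetical expression events at three time points, one list per subject
subjects = [
    ['LEP_up', 'ADIPOQ_down', 'TNF_down'],
    ['LEP_up', 'TNF_down', 'ADIPOQ_up'],
    ['ADIPOQ_down', 'LEP_up', 'TNF_down'],
    ['TNF_down', 'LEP_up', 'ADIPOQ_down'],
]
sup, conf = rule_stats(subjects, 'LEP_up', 'TNF_down')
print(sup, conf)
```

Rules passing support and confidence thresholds would then be checked against functional and pathway annotation databases, as the abstract proposes.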

Biomedical Engineering

Frontmatter
Low Resolution Electroencephalographic-Signals-Driven Semantic Retrieval: Preliminary Results

Nowadays, there is high interest in brain-computer interface (BCI) systems, and there are multiple approaches to developing them. Lexico-semantic (LS) classification from electroencephalographic (EEG) signals is one of them, and it remains an open and little-explored research field. LS depends on how each person forms concepts and on their context; therefore, no universal fingerprint of LS, nor its spatial location in the brain, has been demonstrated, as both depend on the variability of brain plasticity and other changes over time. In this study, an analysis of LS from EEG signals was carried out. The Emotiv Epoc+ was used for EEG acquisition from three participants reading 36 different words. The subjects were characterized through two surveys (Beck’s depression inventory and an emotion test) to establish their emotional state and their depression and anxiety levels. The signals were processed to discriminate semantic category and to decode individual words (4 pairs of words were selected for this study). The methodology was executed as follows: first, the signals were pre-processed, decomposed into sub-bands (δ, θ, α, β, and γ) and standardized. Then, feature extraction was applied using linear and non-linear statistical measures and the Discrete Wavelet Transform calculated from the EEG signals, generating the feature space termed set-1. Also, principal component analysis was applied to reduce the dimensionality, generating the feature space termed set-2. Finally, both sets were tested independently by multiple classifiers based on the support vector machine and k-nearest neighbor. These were validated using 10-fold cross-validation, achieving results above 95% accuracy, which demonstrates the capability of the proposed mechanism for decoding LS from a reduced number of EEG signals acquired with a portable acquisition system.

Miguel Alberto Becerra, Edwin Londoño-Delgado, Oscar I. Botero-Henao, Diana Marín-Castrillón, Cristian Mejia-Arboleda, Diego Hernán Peluffo-Ordóñez
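The final classification-and-validation step above follows a standard pattern: features per trial, a kNN (or SVM) classifier, and 10-fold cross-validation. A minimal dependency-free sketch on synthetic two-class “EEG feature” points (the features, classes and separability are invented, not the paper's data):

```python
import random

def knn_predict(train, query, k=3):
    """Classify a query point by majority vote of its k nearest
    training points (squared Euclidean distance)."""
    nearest = sorted(train, key=lambda p: sum((a - b) ** 2
                                              for a, b in zip(p[0], query)))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

def cross_validate(data, folds=10, k=3):
    """Plain k-fold cross-validation accuracy for the kNN above."""
    data = list(data)
    random.Random(0).shuffle(data)
    correct = 0
    for f in range(folds):
        test = [d for i, d in enumerate(data) if i % folds == f]
        train = [d for i, d in enumerate(data) if i % folds != f]
        correct += sum(knn_predict(train, x, k) == y for x, y in test)
    return correct / len(data)

# Synthetic 2-D feature points for two word classes
rng = random.Random(1)
data = ([((rng.gauss(0, 0.3), rng.gauss(0, 0.3)), 'word_A') for _ in range(30)]
        + [((rng.gauss(2, 0.3), rng.gauss(2, 0.3)), 'word_B') for _ in range(30)])
acc = cross_validate(data, folds=10, k=3)
print(acc)
```

In practice the feature vectors would be the wavelet/statistical measures of set-1 or the PCA projections of set-2 rather than two synthetic coordinates.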
A Simpler Command of Motor Using BCI

The Brain-Computer Interface (BCI) discipline has recently witnessed an unprecedented boom thanks to technological evolution, which has led to impressive results, such as the control of drones or realistic articulated arms by thought. These outputs are the result of complex algorithms whose conception required years of research. This paper shows that there is a simpler way to achieve a binary BCI in a minimum of time by using the brain-switch paradigm, motor imagery, and the Fourier transform. In the present work we demonstrate how it was possible to control a motor by thought, using a digitally recorded EEG from an untrained volunteer at the hospital. From this EEG tracing we extracted, on a single electrode, a unique feature: the “Mu” frequency related to motor imagery.

Yacine Fodil, Salah Haddab
A Biomechanical Model Implementation for Upper-Limbs Rehabilitation Monitoring Using IMUs

Rehabilitation is of great importance in helping patients recover their autonomy after a stroke. It requires an assessment of the patient’s condition based on their movements. Inertial Measurement Units (IMUs) can be used to provide a quantitative measure of human movement for evaluation. In this work, three systems for joint angle determination are proposed, two of them based on IMUs and the third on a vision system. We have evaluated the accuracy and performance of the proposals by analyzing human arm movements. Finally, drift correction is assessed in long-term trials. Results show errors of 3.43% for the vision system and 1.7% for the IMU-based methods.

Sara García de Villa, Ana Jimenéz Martín, Juan Jesús García Domínguez
Non-generalized Analysis of the Multimodal Signals for Emotion Recognition: Preliminary Results

Emotions are mental states associated with stimuli; they have a relevant impact on people’s lives and are correlated with their physical and mental health. Different studies have focused on emotion identification under the assumption that there is a universal fingerprint of the emotions. However, this is still an open field, and some authors have rejected this proposal, in contrast with many results that can be considered inconclusive even though some of them achieved high performance in identifying certain emotions. In this work, an analysis of emotion identification per individual based on physiological signals, using the well-known MAHNOB-HCI-TAGGING database, is carried out, under the premise that there is no universal fingerprint, following the results of a previous meta-analytic investigation of emotion categories. The methodology applied is as follows: first, the signals were filtered, normalized and decomposed into five bands (δ, θ, α, β, γ); then a feature extraction stage was carried out using multiple statistical measures calculated from the discrete wavelet transform, cepstral coefficients, among others. Dimensionality reduction of the feature space was applied using the ReliefF selection algorithm. Finally, classification was carried out using support vector machines and k-nearest neighbors, and performance was measured using 10-fold cross-validation, achieving high performance of up to 99%.

Edwin Londoño-Delgado, Miguel Alberto Becerra, Carolina M. Duque-Mejía, Juan Camilo Zapata, Cristian Mejía-Arboleda, Andrés Eduardo Castro-Ospina, Diego Hernán Peluffo-Ordóñez
Relations Between Maximal Half Squat Strength and Bone Variables in a Group of Young Overweight Men

The purpose of this study was to investigate the relationships between maximal half-squat strength and bone variables in a group of young overweight men. 76 young overweight men (aged 18 to 35 years) voluntarily participated in this study. Weight and height were measured, and body mass index (BMI) was calculated. Body composition, bone mineral content (BMC), bone mineral density (BMD) and geometric indices of hip bone strength were determined for each individual by dual-energy X-ray absorptiometry (DXA). Maximal half-squat strength was measured on a classical fitness machine (Smith machine), respecting the instructions of the National Strength and Conditioning Association (NSCA). Maximal half-squat strength was positively correlated to whole-body (WB) BMC (r = 0.37; p < 0.01), WB BMD (r = 0.29; p < 0.05), L1–L4 BMC (r = 0.43; p < 0.001), L1–L4 BMD (r = 0.42; p < 0.001), total hip (TH) BMC (r = 0.30; p < 0.01), TH BMD (r = 0.26; p < 0.05), femoral neck (FN) BMD (r = 0.32; p < 0.01), FN cross-sectional area (CSA) (r = 0.44; p < 0.001), FN cross-sectional moment of inertia (CSMI) (r = 0.27; p < 0.05), FN section modulus (Z) (r = 0.37; p < 0.001) and FN strength index (SI) (r = 0.33; p < 0.01). After adjusting for lean mass, maximal half-squat strength remained significantly correlated to WB BMC (p = 0.003), WB BMD (p = 0.047), L1–L4 BMC (p < 0.001), L1–L4 BMD (p < 0.001), TH BMC (p = 0.046), FN BMD (p = 0.016), FN CSA (p < 0.001), FN Z (p = 0.003) and FN SI (p < 0.001). The current study suggests that maximal half-squat strength is a positive determinant of BMC, BMD and geometric indices of hip bone strength in young overweight men.

Anthony Khawaja, Patchina Sabbagh, Jacques Prioux, Antonio Pinti, Georges El Khoury, Rawad El Hage
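The “after adjusting for lean mass” step above is a partial correlation: correlate the residuals of both variables after regressing each on the confounder. A minimal sketch on invented toy numbers (the values below are not the study's data):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def residuals(y, z):
    """Residuals of y after removing a least-squares fit on z."""
    n = len(y)
    mz, my = sum(z) / n, sum(y) / n
    beta = (sum((a - mz) * (b - my) for a, b in zip(z, y))
            / sum((a - mz) ** 2 for a in z))
    return [b - (my + beta * (a - mz)) for a, b in zip(z, y)]

def partial_corr(x, y, z):
    """Correlation of x and y adjusted for z (e.g. lean mass)."""
    return pearson(residuals(x, z), residuals(y, z))

strength = [80, 95, 110, 120, 130, 150]          # half-squat strength, kg (toy)
bmd      = [1.01, 1.05, 1.10, 1.12, 1.15, 1.22]  # BMD, g/cm^2 (toy)
lean     = [50, 55, 60, 63, 66, 72]              # lean mass, kg (toy)
r = pearson(strength, bmd)
pc = partial_corr(strength, bmd, lean)
print(r, pc)
```

If the adjusted correlation stays away from zero, the strength-bone association is not explained by lean mass alone, which is the pattern the study reports.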
Brain Hematoma Segmentation Using Active Learning and an Active Contour Model

Traumatic brain injury (TBI) is a massive public health problem worldwide. Accurate and fast automatic brain hematoma segmentation is important for TBI diagnosis, treatment and outcome prediction. In this study, we developed a fully automated system to detect and segment hematoma regions in head Computed Tomography (CT) images of patients with acute TBI. We first over-segmented brain images into superpixels and then extracted statistical and textural features to capture the characteristics of the superpixels. To overcome the shortage of annotated data, an uncertainty-based active learning strategy was designed to adaptively and iteratively select the most informative unlabeled data to be annotated for training a Support Vector Machine (SVM) classifier. Finally, the coarse segmentation from the SVM classifier was incorporated into an active contour model to improve the accuracy of the segmentation. In our experiments, the proposed active learning strategy achieved a result comparable to regular machine learning with 5 times fewer labeled data. Our proposed automatic hematoma segmentation system achieved an average Dice coefficient of 0.60 on our dataset, in which patients come from multiple health centers and present multiple levels of injury. Our results show that the proposed method can effectively overcome the challenge of a limited and highly varied dataset.

Heming Yao, Craig Williamson, Jonathan Gryak, Kayvan Najarian
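Two of the building blocks above are compact enough to show directly: the Dice coefficient used for evaluation, and the uncertainty-sampling rule at the heart of active learning (pick the unlabeled samples whose classifier score is closest to the decision boundary). The masks and scores below are invented toy data:

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks (flat 0/1 lists):
    2*|intersection| / (|pred| + |truth|)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2 * inter / size if size else 1.0

def most_uncertain(scores, n=1):
    """Uncertainty sampling: indices of the unlabeled samples whose
    decision score is closest to the boundary (score = 0)."""
    return sorted(range(len(scores)), key=lambda i: abs(scores[i]))[:n]

pred  = [1, 1, 0, 0, 1, 0]   # hypothetical predicted mask
truth = [1, 0, 0, 1, 1, 0]   # hypothetical ground-truth mask
d = dice(pred, truth)
picks = most_uncertain([-1.5, 0.1, 0.9, -0.05], n=2)
print(d, picks)
```

In the full pipeline, the picked superpixels would be sent to an annotator, added to the training set, and the SVM retrained, repeating until the labeling budget is spent.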
DYNLO: Enhancing Non-linear Regularized State Observer Brain Mapping Technique by Parameter Estimation with Extended Kalman Filter

The underlying activity in the brain can be estimated using methods based on discrete physiological models of the neural activity. These models involve parameters for weighting the estimated source activity of previous samples; however, those parameters are subject- and task-dependent. This paper introduces a dynamical non-linear regularized observer (DYNLO), implementing an Extended Kalman Filter (EKF) to estimate the model parameters of the dynamical source activity over the neural activity reconstruction performed by a non-linear regularized observer (NLO). The proposed methodology has been evaluated on real EEG signals using a realistic head model. The results have been compared with least squares (LS) for model parameter estimation with NLO and with the multiple sparse priors (MSP) algorithm for source estimation. The correlation coefficient and relative error between the original EEG and the EEG estimated from the source reconstruction were inspected; the results show an improvement of the solution in terms of these measurements and a reduction of the computational time.

Andrés Felipe Soler Guevara, Eduardo Giraldo, Marta Molinas
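The EKF at the core of the approach alternates a model-based prediction with a measurement update. A scalar sketch with invented process and measurement models (the paper's state is the parameter vector of the source dynamics, not this toy amplitude):

```python
import math

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of a scalar extended Kalman filter.
    f/h are the process/measurement models, F/H their derivatives
    evaluated at the current estimate; Q/R are the noise variances."""
    # Predict
    x_pred = f(x)
    P_pred = F(x) * P * F(x) + Q
    # Update
    Hx = H(x_pred)
    K = P_pred * Hx / (Hx * P_pred * Hx + R)   # Kalman gain
    x_new = x_pred + K * (z - h(x_pred))       # correct with innovation
    P_new = (1 - K * Hx) * P_pred
    return x_new, P_new

# Toy model: slowly decaying source amplitude, mildly nonlinear sensor
f = lambda x: 0.95 * x
F = lambda x: 0.95
h = lambda x: x + 0.1 * math.sin(x)
H = lambda x: 1 + 0.1 * math.cos(x)

x, P = 0.0, 1.0
for z in [0.8, 0.76, 0.72, 0.69]:   # hypothetical observations
    x, P = ekf_step(x, P, z, f, F, h, H, Q=1e-4, R=1e-2)
print(x, P)
```

The estimate converges toward the observed level while the covariance P shrinks, which is the mechanism DYNLO uses to track the subject- and task-dependent parameters online.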
Analysis of the Behavior of the Red Blood Cell Model in a Tapered Microchannel

Red blood cells are flexible during their movement in microchannels; they adapt easily to their immediate environment and eventually return to their relaxed shape as soon as the environment exerts no forces on the cell. This behaviour is determined by the elastic properties of the red cell’s membrane, which must be carefully taken into consideration when creating a computational model of the red blood cell. In our work we use a previously developed model of the red blood cell that employs five basic elastic moduli, each representing a different part of the overall elastic properties. The aim of this work is to assess the validity of such a model. To this end we analyse the behaviour of cells in tapered channels. By adapting to the flow conditions, a cell begins to perform a certain type of repetitive motion. We focus on tank-treading and stretching. In this article we show that the model of the red blood cell is capable of reproducing these motions, and we present quantitative and qualitative measures comparing the experimental and simulation data. We provide the analysis and description of these movements, and we study the effect of the cell’s initial position on the type of motion performed.

Mariana Ondrusova, Ivan Cimrak
Radiofrequency Ablation for Treating Chronic Pain of Bones: Effects of Nerve Locations

The present study aims at evaluating the effects of the target nerve’s location relative to the bone tissue during continuous radiofrequency ablation (RFA) for chronic pain relief. A generalized three-dimensional heterogeneous computational model comprising muscle, bone and the target nerve has been considered. The continuous RFA has been performed through a monopolar needle electrode placed parallel to the target nerve. Finite-element-based coupled thermo-electric analysis has been conducted to predict the electric field and temperature distributions as well as the lesion volume attained during continuous RFA application. The quasi-static approximation of Maxwell’s equations has been used to compute the electric field distribution, and the Pennes bioheat equation has been used to model the heat transfer phenomenon during RFA of the target nerve. The electrical and thermo-physical properties considered in the present numerical study have been acquired from well-characterized values available in the literature. The protocol of the RFA procedure has been adopted from the United States Food and Drug Administration (FDA) approved commercial devices available on the market and reported in previous clinical studies. Temperature-dependent electrical conductivity along with a piecewise model of blood perfusion have been considered to correlate with in-vivo scenarios. The numerical simulation results presented in this work reveal a strong dependence of the lesion volume on the location of the target nerve relative to the considered bone. It is expected that the findings of this study will assist in providing a priori critical information to clinical practitioners for enhancing the success rate of the continuous RFA technique in addressing chronic pain problems of bones.

Sundeep Singh, Roderick Melnik
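For reference, the two governing equations named above take their standard textbook forms (the paper's exact coefficients and temperature-dependent property models are not reproduced here):

```latex
% Quasi-static approximation of Maxwell's equations for the RF voltage V
\nabla \cdot \left( \sigma \nabla V \right) = 0,
\qquad Q_{RF} = \sigma \, \lvert \nabla V \rvert^{2}

% Pennes bioheat equation for the tissue temperature T
\rho c \frac{\partial T}{\partial t} =
  \nabla \cdot \left( k \nabla T \right)
  + \rho_{b} c_{b} \omega_{b} \left( T_{b} - T \right)
  + Q_{m} + Q_{RF}
```

Here σ is the electrical conductivity, k the thermal conductivity, ρc the volumetric heat capacity of tissue, ω_b the blood perfusion rate, T_b the arterial blood temperature, and Q_m the metabolic heat source; the Joule heating term Q_RF couples the electrical problem to the thermal one.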

Biomedical Image Analysis

Frontmatter
Visualization and Cognitive Graphics in Medical Scientific Research

The aim of the research is to apply methods of structural analysis of multidimensional data using different approaches to visualizing the results of experimental studies. To solve applied problems, the authors used NovoSparkVisualizer (demo) system, as well as the R scripting language. The leading approach to the study of this problem is the mapping of multidimensional experimental data in the form of generalized graphic images on the basis of original methods and approaches developed by the authors. The paper presents the results of solving two applied problems illustrating the effectiveness of the method for visualization of multidimensional experimental data: (1) analysis of the dynamics of the physiological state of pregnant women; (2) study of breathing parameters in patients with bronchial asthma. In the first case, the use of cognitive graphics tools made it possible to propose an effective way of displaying the dynamics of the state of the bio-object (for example, comparing the patient’s condition before and after treatment). In the second case, it helped to reveal some previously unknown patterns of physiological response of the bronchial-pulmonary system to psycho-physiological exposure. The results of the research allow the authors to state that the methods and approaches presented in the paper can be regarded as promising directions in the analysis and presentation of multidimensional experimental data.

Olga G. Berestneva, Olga V. Marukhina, Sergey V. Romanchukov, Elena V. Berestneva
Detection of Static Objects in an Image Based on Texture Analysis

The article deals with the design of a method for the automatic detection of static objects, namely non-moving cilia, in images captured by an optical microscope. The search algorithm is based on texture description methods. The texture of the image is described by statistical values; it can be observed that the background texture, cells and cilia have different statistical parameters. Based on these differing statistical parameters, classification was performed separately for each texture parameter. The resulting classification then assigns each pixel to the group to which it was most frequently assigned. Finally, the obtained mask was adjusted by morphological operations to obtain the boundary of the area that the algorithm automatically evaluated as cilia. This work is supported by medical specialists from the Jessenius Faculty of Medicine in Martin (Slovakia), and the proposed tools would fill a gap in diagnostics in the field of respirology in Slovakia.

Frantisek Jabloncik, Libor Hargas, Jozef Volak, Dusan Koniar
Artefacts Recognition and Elimination in Video Sequences with Ciliary Respiratory Epithelium

Ciliary respiratory epithelium analysis is performed to detect possible malfunction of cilia. Moving cilia are investigated and their movement is automatically evaluated; the areas with moving cilia are marked in video sequences. When moving cilia are searched for, false detections can occur in some cases, meaning that an area with no cilia is marked as ciliated epithelium. These errors are caused by artefacts, the most frequent of which are erythrocytes and air bubbles. The article deals with techniques that help to find the artefacts responsible for false detections of movement. The techniques used for artefact detection are based on pattern matching and geometrical matching. The results of the designed algorithms are compared in the conclusion of this article. The main idea of this work is to create a complex diagnostic tool for the evaluation of ciliated epithelium in airways. This work is supported by medical specialists from the Jessenius Faculty of Medicine in Martin (Slovakia), and the proposed tools would fill a gap in diagnostics in the field of respirology in Slovakia.

Libor Hargas, Zuzana Loncova, Dusan Koniar, Frantisek Jabloncik, Jozef Volak
Cross Modality Microscopy Segmentation via Adversarial Adaptation

Deep learning techniques have been successfully applied to automatically segment and quantify cell-types in images acquired from both confocal and light sheet fluorescence microscopy. However, the training of deep learning networks requires a massive amount of manually-labeled training data, which is a very time-consuming operation. In this paper, we demonstrate an adversarial adaptation method to transfer deep network knowledge for microscopy segmentation from one imaging modality (e.g., confocal) to a new imaging modality (e.g., light sheet) for which no or very limited labeled training data is available. Promising segmentation results show that the proposed transfer learning approach is an effective way to rapidly develop segmentation solutions for new imaging methods.

Yue Guo, Qian Wang, Oleh Krupa, Jason Stein, Guorong Wu, Kira Bradford, Ashok Krishnamurthy
Assessment of Pain-Induced Changes in Cerebral Microcirculation by Imaging Photoplethysmography

Evaluation of the dynamics of cerebral blood flow during surgical treatment and during experiments with laboratory animals is an important component of the assessment of the physiological reactions of a body. None of the modern methods of hemodynamic assessment allows this to be done in continuous registration mode. Here we propose to use green-light imaging photoplethysmography (IPPG) for the assessment of cortical hemodynamics. The technique is capable of contactless assessment of dynamic parameters, synchronized with the heart rate, that reflect the state of cortical blood flow. To demonstrate the feasibility of the technique, we carried out a joint analysis of the dynamics of systemic arterial pressure and IPPG synchronized with the electrocardiogram during visceral or somatic stimulation in an anesthetized rat. IPPG parameters were estimated from video recordings of the open rat brain without dissecting the dura mater. We found that both visceral and somatic painful stimulation result in short-term hypotension with a simultaneous increase in the amplitude of blood pulsations (BPA) in the cerebral cortex. BPA changes were bigger in the primary somatosensory cortical area, but they correlated with other areas of the cortex. Apparently, the change of BPA is in a reciprocal relationship with variations of mean systemic and pulse pressure in the femoral artery, which is probably a consequence of a cerebrovascular reflex regulating the cerebral blood flow. Therefore, BPA reflects dynamic properties of cortical microcirculation that are synchronized with systemic arterial pressure and depend on cortical area specialization.

Alexei A. Kamshilin, Olga A. Lyubashina, Maxim A. Volynsky, Valeriy V. Zaytsev, Oleg V. Mamontov
Detection of Subclinical Keratoconus Using Biometric Parameters

The validation of innovative methodologies for diagnosing keratoconus in its earliest stages is of major interest in ophthalmology. So far, subclinical keratoconus diagnosis has been made by combining several clinical criteria that allowed the definition of indices and decision trees, which proved to be valuable diagnostic tools. However, further improvements need to be made in order to reduce the risk of ectasia in patients who undergo corneal refractive surgery. The purpose of this work is to report a new subclinical keratoconus detection method based on the analysis of certain biometric parameters extracted from a custom 3D corneal model. This retrospective study includes two groups: the first composed of 67 patients with healthy eyes and normal vision, and the second composed of 24 patients with subclinical keratoconus and normal vision as well. The proposed detection method generates a 3D custom corneal model using computer-aided graphic design (CAGD) tools and corneal surface data provided by a corneal tomographer. Defined bio-geometric parameters are then derived from the model and statistically analysed to detect any minimal corneal deformation. The metric which showed the highest area under the receiver operating characteristic (ROC) curve was the posterior apex deviation. The new method detected differences between healthy and subclinical keratoconus corneas with abnormal corneal topography and normal spectacle-corrected vision, enabling an integrated tool that facilitates easier diagnosis and follow-up of keratoconus.

Jose Sebastián Velázquez-Blázquez, Francisco Cavas-Martínez, Jorge Alió del Barrio, Daniel G. Fernández-Pacheco, Francisco J. F. Cañavate, Dolores Parras-Burgos, Jorge Alió
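The ROC-based comparison above ranks candidate metrics by their area under the curve, which equals the probability that a random case outscores a random control (the Mann-Whitney rank statistic). A minimal sketch on invented measurement values (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank statistic: the probability
    that a random positive sample outscores a random negative one
    (ties count as half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical 'posterior apex deviation' values for the two groups
keratoconus = [0.31, 0.42, 0.38, 0.27]   # subclinical group (toy)
healthy     = [0.12, 0.18, 0.29, 0.15]   # control group (toy)
a = auc(keratoconus, healthy)
print(a)
```

Computing this statistic for each bio-geometric parameter and keeping the one with the largest AUC is how a single best discriminator, such as the posterior apex deviation reported above, would be selected.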
A Computer Based Blastomere Identification and Evaluation Method for Day 2 Embryos During IVF/ICSI Treatments

Human embryo evaluation is one of the most important challenges in in vitro fertilization (IVF) programs. The morphokinetic and morphology parameters of the early cleaving embryo are of critical clinical importance. This stage spans the first 48 h post-fertilization, in which the embryo divides into smaller blastomeres at specific time-points. The morphology, in combination with the symmetry of the blastomeres, seems to be a powerful feature with strong prognostic value for embryo evaluation. To date, the identification of these features has been based on human inspection at timed intervals, at best using camera systems that simply work as surveillance systems without any precise alerting and decision-support mechanisms. This paper aims to develop a computer vision technique to automatically detect and identify the most suitable cleaving embryos (preferably at Day 2 post-fertilization) for embryo transfer (ET) during treatments. To this end, texture and geometrical features were used to localize and analyze the whole cleaving embryo in 2D grayscale images captured during in-vitro embryo formation. Because of the ellipsoidal nature of blastomeres, the contour of each blastomere is modeled with an optimal fitting ellipse and the mean eccentricity of all ellipses is computed. The mean eccentricity, in combination with the number of blastomeres, forms the feature space on which the final criterion for the embryo evaluation is based. Experimental results with low-quality 2D grayscale images demonstrated the effectiveness of the proposed technique and provided evidence of a novel automated approach for predicting embryo quality.

Charalambos Strouthopoulos, Athanasios Nikolaidis
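Once each blastomere contour has been fitted with an ellipse, the feature computation described above reduces to a one-line formula: eccentricity from the two semi-axes, averaged over all cells. A sketch with invented axis lengths (the ellipse fitting itself, e.g. a least-squares contour fit, is not shown):

```python
import math

def eccentricity(major, minor):
    """Eccentricity of an ellipse from its semi-axes:
    0 for a circle, approaching 1 as the ellipse elongates."""
    a, b = max(major, minor), min(major, minor)
    return math.sqrt(1 - (b / a) ** 2)

def mean_eccentricity(blastomeres):
    """Average eccentricity over a list of (major, minor) axis pairs."""
    return sum(eccentricity(a, b) for a, b in blastomeres) / len(blastomeres)

# Hypothetical fitted-ellipse axes for a 4-cell Day 2 embryo (pixels)
cells = [(20.0, 19.0), (21.0, 18.5), (19.5, 19.5), (22.0, 17.0)]
feat = (len(cells), mean_eccentricity(cells))  # the 2-D feature space
print(feat)
```

A Day 2 embryo with four nearly circular blastomeres (count close to 4, mean eccentricity close to 0) would score well under the evaluation criterion the abstract describes.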
Detection of Breast Cancer Using Infrared Thermography and Deep Neural Networks

We present a preliminary analysis of the use of convolutional neural networks (CNNs) for the early detection of breast cancer via infrared thermography. The two main challenges of using CNNs are having a large set of images at one’s disposal and the required processing time. The thermographies were obtained from Vision Lab and the calculations were implemented using the Fast.ai and PyTorch libraries, which offer excellent results in image classification. Different architectures of convolutional neural networks were compared, and the best results were obtained with resnet34 and resnet50, reaching a predictive accuracy of 100% in blind validation. Other architectures also provided high classification accuracies. Deep neural networks provide excellent results in the early detection of breast cancer via infrared thermographies, with technical and computational resources that can be easily implemented in medical practice. Further research is needed to assess the probabilistic localization of the tumor regions using larger sets of annotated images and to assess the uncertainty of these techniques in the diagnosis.

Francisco Javier Fernández-Ovies, Edwin Santiago Alférez-Baquero, Enrique Juan de Andrés-Galiana, Ana Cernea, Zulima Fernández-Muñiz, Juan Luis Fernández-Martínez
Development of an ECG Smart Jersey Based on Next Generation Computing for Automated Detection of Heart Defects Among Athletes

Heart defects remain one of the leading causes of death in the world. As much as sporting activities and exercise are considered beneficial for an individual's health, there are also a few significant drawbacks, as large populations of athletes suffer from heart defects. This is especially pronounced among those who participate in strenuous activities for long periods of time. One of the oldest and most useful diagnostic tools for heart disease is the electrocardiograph. However, such devices remain bulky, heavy, and non-wearable, and they often require medical professionals to interpret the electrocardiogram. Accordingly, this paper presents the preliminary results of the implementation of a prototype ECG Smart Jersey using next generation computing (including IoT, machine learning, and an Android app) for the automated detection of heart defects among athletes. The prototype is a proof of concept that could be further enhanced to provide wearable, automated ECG monitoring, analysis, and interpretation for athletes, in order to reduce the burden of sudden deaths among them.

Emmanuel Adetiba, Ekpoki N. Onosenema, Victor Akande, Joy N. Adetiba, Jules R. Kala, Folarin Olaloye

Biomedicine and e-Health

Frontmatter
Analysis of Finger Thermoregulation by Using Signal Processing Techniques

The analysis of finger thermoregulation aids the diagnosis of some disabling pathologies. Although finger thermoregulation can be analyzed either over the complete finger geometry or at the fingertips alone, there is comparatively little experimental evidence about how the thermoregulation of these regions differs. This work presents an analysis based on computer vision and signal processing techniques applied to thermal infrared images, to identify regions inside the finger geometry that exhibit different thermoregulation patterns in response to a controlled cold stimulation.
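The rewarming of a skin region after cold stimulation is often characterized by an exponential recovery curve; as a hedged illustration (the abstract does not specify its signal model), a time constant for one region could be estimated from its per-frame temperature signal like this:

```python
import math

def rewarming_time_constant(times, temps, t_recovery):
    """Estimate the exponential rewarming time constant tau for a skin
    region after cold stimulation, assuming T(t) = T_inf - A * exp(-t/tau).

    `t_recovery` is the steady-state temperature T_inf the region recovers
    to; it must exceed every sample in `temps`.
    """
    # Linearize: ln(T_inf - T(t)) = ln(A) - t / tau, then fit the slope
    # by ordinary least squares.
    y = [math.log(t_recovery - temp) for temp in temps]
    n = len(times)
    mt = sum(times) / n
    my = sum(y) / n
    slope = (sum((t - mt) * (v - my) for t, v in zip(times, y))
             / sum((t - mt) ** 2 for t in times))
    return -1.0 / slope
```

Regions with markedly different time constants would then exhibit the distinct thermoregulation patterns the work aims to identify.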

María Camila Henao Higuita, Macheily Hernández Fernández, Delio Aristizabal Martínez, Hermes Fandiño Toro
Comparative Study of Feature Selection Methods for Medical Full Text Classification

There is a lot of work in text categorization using only the title and abstract of papers. However, a full paper contains a much larger amount of information that could be used to improve text classification performance. The potential benefits of using full texts come with an additional problem: the increased size of the data sets. To cope with this increase, we performed an assessment study on the use of feature selection methods for full text classification. We compared two existing feature selection methods (Information Gain and Correlation) and a novel method called k-Best-Discriminative-Terms. The assessment was conducted using the Ohsumed corpora. We made two sets of experiments: using title and abstract only; and using full text. The results show that the novel method does not perform well on small amounts of text, such as title and abstract, but performs much better on the full text data sets and requires a much smaller number of attributes.
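The abstract does not detail k-Best-Discriminative-Terms; as an illustration only, here is a minimal sketch of Information Gain (one of the two baseline methods compared) used to keep the k highest-scoring terms:

```python
import math

def entropy(counts):
    """Shannon entropy (bits) of a class-count distribution."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def information_gain(docs, labels, term):
    """Information gain of `term` for the class variable.

    `docs` is a list of token sets; `labels` is a parallel list of labels.
    """
    classes = sorted(set(labels))
    base = entropy([labels.count(c) for c in classes])
    with_term = [l for d, l in zip(docs, labels) if term in d]
    without = [l for d, l in zip(docs, labels) if term not in d]
    cond = 0.0
    for subset in (with_term, without):
        if subset:
            cond += (len(subset) / len(docs)
                     * entropy([subset.count(c) for c in classes]))
    return base - cond

def k_best_terms(docs, labels, vocabulary, k):
    """Keep the k vocabulary terms with the highest information gain."""
    return sorted(vocabulary,
                  key=lambda t: information_gain(docs, labels, t),
                  reverse=True)[:k]
```

On full texts the vocabulary is far larger, which is why a filter of this kind can cut the attribute count so sharply.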

Carlos Adriano Gonçalves, Eva Lorenzo Iglesias, Lourdes Borrajo, Rui Camacho, Adrián Seara Vieira, Célia Talma Gonçalves
Estimation of Lung Properties Using ANN-Based Inverse Modeling of Spirometric Data

Spirometry is the most commonly used test of lung function because the forced expiratory flow-volume (FV) curve is effort-independent and simultaneously sensitive to pathological processes in the lungs. Despite this, a method for the estimation of respiratory system parameters based on this association has not yet been proposed. The aim of this work was to explore a feedforward neural network (FFNN) approximating the inverse mapping between the FV curve and respiratory parameters. To this end, a sensitivity analysis of the reduced model for forced expiration was carried out, showing its local identifiability and the importance of particular parameters. This forward model was then applied to simulate spirometric data (8000 elements), used for training, validating, optimizing and testing the FFNN. The suboptimal FFNN structure had 52 input neurons (for spirometric data), two hidden nonlinear layers with 30 and 20 neurons respectively, and 10 output neurons (for parameter estimates). The total relative error of estimation of individual parameters was between 11 and 28%. Parameter estimates yielded by this inverse FFNN will be used as starting points for a more precise local estimation algorithm.
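A minimal sketch of the 52-30-20-10 architecture described above (forward pass only; the weight values, initialization range, and hidden-layer nonlinearity here are illustrative assumptions, not the trained network from the paper):

```python
import math
import random

def dense(x, weights, biases, activation=math.tanh):
    """One fully connected layer: y_j = f(sum_i w[j][i]*x[i] + b[j])."""
    return [activation(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def make_layer(n_in, n_out, rng):
    """Random small weights and zero biases (placeholder for trained values)."""
    w = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

def inverse_ffnn(fv_samples, layers):
    """Map 52 samples of the flow-volume curve to 10 parameter estimates."""
    x = fv_samples
    for i, (w, b) in enumerate(layers):
        last = i == len(layers) - 1
        # Linear output layer for regression, nonlinear hidden layers.
        x = dense(x, w, b, activation=(lambda v: v) if last else math.tanh)
    return x

rng = random.Random(0)
net = [make_layer(52, 30, rng), make_layer(30, 20, rng), make_layer(20, 10, rng)]
```

In the paper's setting the 10 outputs would seed a subsequent local estimation algorithm rather than serve as final estimates.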

Adam G. Polak, Dariusz Wysoczański, Janusz Mroczka
Model of the Mouth Pressure Signal During Pauses in Total Liquid Ventilation

Total liquid ventilation (TLV) is an innovative experimental method of mechanical ventilation in which the lungs are totally filled with a breathable perfluorochemical liquid (PFC). The main objective is to develop a method to estimate the alveolar pressure from a mouth pressure measurement during pauses in liquid ventilation. Experimental results show that the measured mouth pressure is affected by disturbances (damped oscillations due to fluid-structure tube resonances, and cardiogenic oscillations). Numerical analysis of the mouth pressure $$P_Y$$ yields a fractional-order model with $$\alpha = 0.7$$ for the alveolar pressure.
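Fractional-order models of order $$\alpha$$ are commonly discretized with the Grünwald-Letnikov scheme; the following sketch (an assumption about the discretization, not necessarily the authors' method) shows the weights and the resulting discrete fractional derivative for $$\alpha = 0.7$$:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights c_k = (-1)^k * C(alpha, k), computed with
    the recurrence c_k = c_{k-1} * (1 - (alpha + 1) / k), c_0 = 1."""
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
    return c

def fractional_derivative(signal, alpha, dt):
    """Discrete GL approximation of the order-alpha derivative of `signal`
    sampled with step `dt`."""
    c = gl_weights(alpha, len(signal))
    return [sum(c[k] * signal[i - k] for k in range(i + 1)) / dt ** alpha
            for i in range(len(signal))]
```

For alpha = 0.7 the first weights are 1, -0.7, -0.105, ..., so each output sample is a slowly decaying weighted history of the pressure signal, which is what gives the fractional model its memory.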

Jonathan Vandamme, Mathieu Nadeau, Julien Mousseau, Jean-Paul Praud, Philippe Micheau
Backmatter
Metadata
Title
Bioinformatics and Biomedical Engineering
Edited by
Ignacio Rojas
Prof. Olga Valenzuela
Fernando Rojas
Dr. Francisco Ortuño
Copyright year
2019
Electronic ISBN
978-3-030-17935-9
Print ISBN
978-3-030-17934-2
DOI
https://doi.org/10.1007/978-3-030-17935-9