2013 | Book

Biomedical Engineering Systems and Technologies

5th International Joint Conference, BIOSTEC 2012, Vilamoura, Portugal, February 1-4, 2012, Revised Selected Papers

Editors: Joaquim Gabriel, Jan Schier, Sabine Van Huffel, Emmanuel Conchon, Carlos Correia, Ana Fred, Hugo Gamboa

Publisher: Springer Berlin Heidelberg

Book Series: Communications in Computer and Information Science

About this book

This book constitutes the thoroughly refereed post-conference proceedings of the 5th International Joint Conference on Biomedical Engineering Systems and Technologies, BIOSTEC 2012, held in Vilamoura, Portugal, in February 2012. The 26 revised full papers presented together with one invited lecture were carefully reviewed and selected from a total of 522 submissions. The papers cover a wide range of topics and are organized in four general topical sections on biomedical electronics and devices; bioinformatics models, methods and algorithms; bio-inspired systems and signal processing; health informatics.

Table of Contents

Frontmatter

Invited Paper

Frontmatter
Biomedical 2D and 3D Imaging: State of Art and Future Perspectives
Abstract
The increasing and rapidly evolving role of 2D and 3D vision in biomedical science and technology is herein presented, based on the experience of our Laboratory and of its start-ups in recent years. Applications to ophthalmology, dentistry, forensic science and prosthetic technology are discussed.
Giovanna Sansoni, Franco Docchio

Part I: Biomedical Electronics and Devices

Frontmatter
Dry and Water-Based EEG Electrodes in SSVEP-Based BCI Applications
Abstract
This paper evaluates whether water-based and dry contact electrode solutions can replace gel ones in measuring electrical brain activity by the electroencephalogram (EEG). The quality of the signals measured by three setups (dry, water, and gel), each using 8 electrodes, is estimated for the case of a brain-computer interface (BCI) based on steady state visual evoked potential (SSVEP). Repetitive visual stimuli in the low (12 to 21 Hz) and high (28 to 40 Hz) frequency ranges were applied. Six people, who had different hair lengths and types, participated in the experiment. For people with shorter hair styles, the performance of water-based and dry electrodes comes close to that of the gel ones in the optimal setting. On average, a classification accuracy of 0.63 for dry and 0.88 for water-based electrodes is achieved, compared to the 0.96 obtained for gel electrodes. The theoretical maximum of the average information transfer rate across participants was 23 bpm for dry, 38 bpm for water-based and 67 bpm for gel electrodes. Furthermore, the convenience level of all three setups was seen as comparable. These results demonstrate that, with an optimized headset and electrode design, dry and water-based electrodes can replace gel ones in BCI applications where lower communication speed is acceptable.
Vojkan Mihajlović, Gary Garcia-Molina, Jan Peuscher
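For orientation, information transfer rates of the kind quoted in this abstract (in bits per minute, bpm) are conventionally computed with Wolpaw's formula from the number of targets, the classification accuracy and the time per selection. The Python sketch below illustrates that formula only; the number of targets and the selection time in the example are assumptions, not values taken from the paper.

    import math

    def wolpaw_itr(n_targets, accuracy, selection_time_s):
        """Bits per minute for an N-target BCI (standard Wolpaw formula)."""
        n, p = n_targets, accuracy
        bits = math.log2(n)
        if p > 0:
            bits += p * math.log2(p)
        if p < 1:
            bits += (1 - p) * math.log2((1 - p) / (n - 1))
        return bits * 60.0 / selection_time_s

    # Illustrative parameters only: 6 targets, 2 s per selection, 96% accuracy
    print(round(wolpaw_itr(6, 0.96, 2.0), 1), "bits/min")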
Nitrite Biosensing Using Cytochrome C Nitrite Reductase: Towards a Disposable Strip Electrode
Abstract
This paper presents the results of a primary study that aims to produce miniaturized biosensing devices for nitrite analysis in clinical samples. Following our previous works regarding the development of amperometric nitrite biosensors using the nitrite reducing enzyme (ccNiR) from Desulfovibrio desulfuricans ATCC 27774, here we aimed at reducing the size of the experimental set-up according to the specific needs of biomedical applications. For this, thick-film strip electrodes made of carbon conductive inks deposited on plastic supports were modified with the ccNiR enzyme, previously mixed with the conductive graphite ink. Firstly, though, the electrode preparation was optimized (enzyme amount, organic solvent and curing temperature). Then, the biocompatibility of ccNiR with these harsh treatments and the analytical performance of the modified electrodes were evaluated by cyclic voltammetry. Finally, the carbon paste screen-printed electrodes were coated with the ccNiR/carbon ink composite, displaying a good sensitivity (5.3 × 10⁻⁷ A·µM⁻¹·cm⁻²) within the linear range of 0.001–1.5 mM.
Cátia Correia, Marcelo Rodrigues, Célia M. Silveira, José J. G. Moura, Estibaliz Ochoteco, Elena Jubete, M. Gabriela Almeida
A Real-Time and Portable Bionic Eye Simulator
Abstract
The Monash Vision Group is developing a bionic eye based on an implantable cortical visual prosthesis. The visual prosthesis aims to restore vision to blind people by electrical stimulation of the visual cortex of the brain. Due to the expected naivety of early prostheses, there is a need for the development of innovative pre-processing of scene information in order to provide the most intuitive representation to the user. However, in order to explore solutions to this need prior to the availability of functional implants, a simulator system is required. In this paper, we present a portable, real-time simulator and psychophysical evaluation platform that we have developed, called the ‘HatPack’. It makes use of current neurophysiological models of visuotopy and overcomes limitations of existing systems. Using the HatPack, which is compiled into a neat, wearable package, we have conducted preliminary psychophysics testing, which has shown the significance of available greyscale intensity levels and frame rates. A learning effect associated with repeated trials was also made evident.
Horace Josh, Benedict Yong, Lindsay Kleeman
Pathogen Detection Using Magnetoelastic Biosentinels
Abstract
Biosentinels, used to detect, signal, and capture pathogenic bacteria, are discussed. The biosentinel is based on magnetically soft magnetoelastic resonators coated with a selective and specific biorecognition layer. The biosentinels are actuated, monitored, and controlled wirelessly by external magnetic fields. The biosentinels mimic the function of naturally occurring biological defensive systems, such as white blood cells, seeking out and capturing pathogenic bacteria. After binding with the target pathogen, the mass of the biosentinel increases, causing the resonant frequency to decrease and providing instantaneous detection of the pathogen. The biosentinels require no on-board power, harvesting electromagnetic energy from the surroundings for propulsion, navigation, and signaling the detection of target pathogens.
Howard Clyde Wikle III, Suiqiong Li, Aleksandr Simonian, Bryan A. Chin
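As background for the resonance-based detection principle described in this abstract, the magnetoelastic sensor literature commonly relates the fundamental resonant frequency of a freely vibrating strip and the shift caused by a small bound mass as follows (standard relations, not necessarily the exact model used by the authors):

\[ f_0 = \frac{1}{2L}\sqrt{\frac{E}{\rho\,(1-\nu^2)}}, \qquad \Delta f \approx -\frac{f_0}{2}\,\frac{\Delta m}{M}, \]

where \(L\), \(E\), \(\rho\) and \(\nu\) are the length, Young's modulus, density and Poisson ratio of the resonator, \(M\) its mass, and \(\Delta m\) the mass of captured pathogens; the negative sign expresses the frequency decrease upon binding mentioned above.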
Multi-source Harvesting Systems for Electric Energy Generation on Smart Hip Prostheses
Abstract
The development of smart orthopaedic implants is being considered as an effective solution to ensure their long-lasting life span. The availability of electric power to supply active mechanisms of smart prostheses has remained a critical problem. This paper reports the first implementation of a new concept of energy harvesting systems applied to hip prostheses: the multi-source generation of electric energy. The reliability of the power supply mechanism is strongly increased by the application of this new concept. Three vibration-based harvesters, operating in true parallel to harvest energy during human gait, were implemented on a Metabloc TM hip prosthesis to validate the concept. They were designed to use the angular movements on the flexion-extension, abduction-adduction and inward-outward rotation axes, over the femoral component, to generate electric power. The performance of each generator was tested for different amplitudes and frequencies of operation. Electric power up to 55 μJ/s was harvested. A smart hip prosthesis equipped with this system can remain functional even if two of the generators are damaged. Furthermore, the harvesters are safe and autonomous throughout the life span of the implant.
Marco P. Soares dos Santos, Jorge A. F. Ferreira, A. Ramos, Ricardo Pascoal, Raul Morais dos Santos, Nuno M. Silva, José A. O. Simões, M. J. C. S. Reis, António Festas, Paulo M. Santos
An Integrated Portable Device for the Hand Functional Assessment in the Clinical Practice
Abstract
The functionality of the human hand is of paramount importance for the daily life activities of a subject. Several chronic diseases, such as systemic sclerosis and rheumatoid arthritis, can cause localized lesions on the hands, leading to disability. In these cases, the evaluation of hand functionality is a necessary step for setting up the therapeutic and rehabilitation program. This research presents a novel device tackling this problem, allowing the evaluation of hand dexterity and strength on 4 simple rehabilitation exercises. Controlled in real time by a wirelessly connected PC, on which a C++ graphical interface for the physician enables user-friendly management of the assessment, the device provides hitherto unavailable measurements. A first evaluation of the device in a real outpatient rheumatology clinic has been performed, and the preliminary results reveal the potential of the approach.
Danilo Pani, Gianluca Barabino, Alessia Dessì, Matteo Piga, Iosto Tradori, Alessandro Mathieu, Luigi Raffo

Part II: Bioinformatics Models, Methods and Algorithms

Frontmatter
Forests of Latent Tree Models to Decipher Genotype-Phenotype Associations
Abstract
Genome-wide association studies have revolutionized the search for genetic influences on common genetic diseases such as diabetes, obesity, asthma, cardio-vascular diseases and some cancers. In particular, population aging and increasing health care costs require that further investigations be pursued to design scalable and efficient tools. The high dimensionality and complexity of genetic data hinder the detection of genetic associations. To decrease the risks of missing the causal factor and of discovering spurious associations, machine learning offers an attractive alternative framework to classical statistical approaches. A novel class of probabilistic graphical models (PGMs) has recently been proposed, the forest of latent tree models (FLTM), to reach a trade-off between faithful modeling of data dependences and tractability. In this chapter, we assess the potential of this model to detect genotype-phenotype associations. The FLTM-based contribution is first put into the perspective of PGM-based works meant to model the dependences in genetic data; then the contribution is considered from the technical viewpoint of LTM learning, with the vital objective of scalability in mind. We then present the systematic and comprehensive evaluation conducted to assess the ability of the FLTM model to detect genetic associations through latent variables. Realistic simulations were performed under various controlled conditions. In this context, we present a procedure tailored to correct for multiple testing. We also show and discuss results obtained on real data. Besides guaranteeing data dimension reduction through latent variables, the FLTM model is empirically shown to capture indirect genetic associations with the disease: strong associations are evidenced between the disease and the ancestor nodes of the causal genetic marker node in the forest; in contrast, very weak associations are obtained for other latent variables. Finally, we discuss the prospects of the model for association detection at genome scale.
Christine Sinoquet, Raphaël Mourad, Philippe Leray
Laser Doppler Flowmeters Prototypes: Monte Carlo Simulations Validation Paired with Measurements
Abstract
Two new laser Doppler flowmeter prototypes are herein validated with Monte Carlo simulations paired with measurements. The first prototype is a multi-wavelength laser Doppler flowmeter with different spaced detection fibres that will add depth discrimination capabilities to laser Doppler flowmetry skin monitoring. The other prototype is a self-mixing based laser Doppler flowmeter for brain perfusion estimation. Monte Carlo simulations in a phantom consisting of moving fluid as well as in a skin model are proposed for the first prototype validation. We obtain a good correlation between simulations and measurements. For the second prototype validation, Monte Carlo simulations are carried out on a rat brain model. We show that the mean measurement depth in the rat brain with our probe is 0.15 mm. This positioning is tested in vivo where it is shown that the probe monitors the blood flow changes.
Edite Figueiras, Anne Humeau-Heurtier, Rita Campos, Ricardo Oliveira, Luís F. Requicha Ferreira, Frits de Mul
Simulation of Prokaryotic Genome Evolution Subjected to Mutational Pressures Associated with DNA Replication
Abstract
Each of the two differently replicated DNA strands (leading and lagging) is subjected to a distinct mutational pressure associated with its synthesis. To simulate the influence of these pressures on gene and genome evolution, we developed a computer model in which protein coding sequences were mutated according to the direct pressure (of the strand on which they were located), the reverse pressure (of the opposite strand), and the changing pressure (when the two pressures were applied alternately). Simulated genomes were eliminated upon the occurrence of stop codons in the gene sequences or the loss of their coding properties. The selection against stop codons appeared more deleterious than the selection for the coding signal. The leading strand pressure eliminated more genes because of coding signal loss, whereas the lagging strand pressure generated more stop codons. Generally, the reverse and changing pressures destroyed the coding signal more weakly than the direct pressure did.
Paweł Błażej, Paweł Mackiewicz, Stanisław Cebrat
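A minimal Python sketch of the simulation loop described in this abstract is given below; the substitution matrix, the mutation rate and the handling of coding-signal loss are placeholders and do not reproduce the authors' model.

    import random

    STOP = {"TAA", "TAG", "TGA"}

    def mutate(seq, pressure, mu=1e-3):
        """One round of mutation under a nucleotide substitution matrix `pressure`
        (pressure[b] maps base b to a dict of replacement probabilities)."""
        out = []
        for b in seq:
            if random.random() < mu:
                repl = pressure[b]
                out.append(random.choices(list(repl), weights=list(repl.values()))[0])
            else:
                out.append(b)
        return "".join(out)

    def has_internal_stop(seq):
        """Elimination criterion from the abstract: a stop codon appearing
        inside the reading frame (the terminal stop codon is excluded)."""
        return any(seq[i:i + 3] in STOP for i in range(0, len(seq) - 3, 3))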
Single Tandem Halving by Block Interchange
Abstract
We address the problem of finding the minimal number of block interchanges required to transform a duplicated unilinear genome into a single tandem duplicated unilinear genome. We provide a formula for the distance as well as a polynomial time algorithm for the sorting problem. This is the extended version of [1].
Antoine Thomas, Aïda Ouangraoua, Jean-Stéphane Varré
Fast RNA Secondary Structure Prediction Using Fuzzy Stochastic Models
Abstract
Computational prediction of RNA secondary structures has been an active area of research over the past decades and since become of great relevance for practical applications in structural biology. To date, many popular state-of-the-art prediction tools have the same worst-case time and space requirements of \(\mathcal{O}(n^3)\) and \(\mathcal{O}(n^2)\) for sequence length n, limiting their applicability for practical purposes. Accordingly, biologists are interested in getting results faster, where a moderate loss of accuracy would willingly be tolerated in favor of saving a significant amount of computation time. Motivated by these facts, we invented a novel algorithm for predicting the secondary structure of RNA molecules that manages to reduce the worst-case time complexity by a linear factor to \(\mathcal{O}(n^2)\), while on the other hand it is still capable of producing highly accurate results. Basically, the presented method relies on a probabilistic statistical sampling approach which is actually based on an appropriate stochastic context-free grammar (SCFG): for any given input sequence, it generates a random set of candidate structures (from the ensemble of all feasible foldings) according to a “noisy” distribution (obtained by heuristically approximating the inside-outside values for the input sequence), such that finally a corresponding prediction can be efficiently derived. Notably, this method may be employed with different sampling strategies. Therefore, we not only consider a popular common strategy but also introduce a novel one that is supposed to fit especially well in connection with fuzzy stochastic models. A major advantage of the proposed prediction approach is that sampling can easily be parallelized on modern multi-core architectures or grids. Furthermore, it can be done in-place, that is only the best (here most probable) candidate structure(s) generated so far need(s) to be stored and finally collected. The combination of these two benefits immediately allows for an efficient handling of the increased sample sizes that are often necessary to achieve competitive prediction accuracy in connection with the noisy distribution.
Markus E. Nebel, Anika Scheid
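The in-place sampling idea mentioned in this abstract (only the best candidate generated so far needs to be stored) can be sketched as follows; sample_structure and score stand in for the SCFG-based sampler and the probability evaluation, which are not reproduced here.

    def predict_by_sampling(sample_structure, score, n_samples=1000):
        """Draw candidate foldings from a stochastic model and keep only the
        best-scoring one seen so far, so memory stays constant and the loop
        parallelizes trivially across cores."""
        best, best_score = None, float("-inf")
        for _ in range(n_samples):
            cand = sample_structure()   # e.g. one dot-bracket string
            s = score(cand)             # e.g. estimated probability of the candidate
            if s > best_score:
                best, best_score = cand, s
        return best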
A Vaccination Strategy Based on a State Feedback Control Law for Linearizing SEIR Epidemic Models
Abstract
A vaccination strategy for fighting against the propagation of epidemic diseases within a host population is proposed. A SEIR epidemic model is used to describe the propagation of the illness. This compartmental model divides the population into four classes by taking into account their status related to the infection. In this way, susceptible, exposed, infectious and recovered populations are included in the model. The vaccination strategy is based on a continuous-time nonlinear control law synthesized via an exact feedback input-output linearization approach. The asymptotic eradication of the infection from the host population under such a vaccination is proved. Moreover, the positivity and stability properties of the controlled system are investigated.
S. Alonso-Quesada, M. De la Sen, A. Ibeas
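For readers unfamiliar with the compartmental model, a standard SEIR formulation with a vaccination input \(V(t)\) reads as follows (the exact model variant and the feedback-linearizing control law are given in the chapter itself):

\[
\begin{aligned}
\dot S &= \mu N - \mu S - \beta \frac{S I}{N} - V(t), &\qquad
\dot E &= \beta \frac{S I}{N} - (\mu + \sigma) E,\\
\dot I &= \sigma E - (\mu + \gamma) I, &\qquad
\dot R &= \gamma I - \mu R + V(t),
\end{aligned}
\]

with transmission rate \(\beta\), incubation rate \(\sigma\), recovery rate \(\gamma\) and birth/death rate \(\mu\); the control objective is to drive the exposed and infectious compartments to zero.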
Medical Imaging as a Bone Quality Determinant and Strength Indicator of the Femoral Neck
Abstract
Early diagnosis of osteoporosis is a key factor in preventive medicine for this clinically silent bone pathology. The most severe manifestation of osteoporotic bone loss is encountered in hip fractures and, therefore, this study represents an effort to associate bone quality of the femoral neck region with fragility fracture risk through FEA-supported imaging techniques. The concepts of Magnetic Resonance Imaging (MRI), Computed Tomography (CT) and Dual-energy X-ray Absorptiometry (DXA) are introduced, along with their limitations in defining bone quality and calculating the apparent bone strength. As DXA dominates surgeons’ preference for evaluating bone mineral density in the hip region, in vivo measurements of this method, supported by ex-vivo uniaxial compression tests and FEA-based calculations, are employed to determine a fracture risk indicator of the femoral neck versus bone mineral density (BMD).
Alexander Tsouknidas, Nikolaos Michailidis, Kleovoulos Anagnostidis
Parallel GPGPU Evaluation of Small Angle X-Ray Scattering Profiles in a Markov Chain Monte Carlo Framework
Abstract
Inference of protein structure from experimental data is of crucial interest in science, medicine and biotechnology. Low-resolution methods, such as small angle X-ray scattering (SAXS), play a major role in investigating important biological questions regarding the structure of proteins in solution.
To infer protein structure from SAXS data, it is necessary to calculate the expected experimental observations given a protein structure, by making use of a so-called forward model. This calculation needs to be performed many times during a conformational search. Therefore, computational efficiency directly determines the complexity of the systems that can be explored.
We present an efficient implementation of the forward model for SAXS with full hardware utilization of Graphics Processor Units (GPUs). The proposed algorithm is orders of magnitude faster than an efficient CPU implementation, and implements a caching procedure employed in the partial forward model evaluations within a Markov chain Monte Carlo framework.
Lubomir D. Antonov, Christian Andreetta, Thomas Hamelryck
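The generic forward model for SAXS is the Debye formula, \(I(q) = \sum_{i}\sum_{j} f_i f_j \,\sin(q r_{ij})/(q r_{ij})\). A plain NumPy version is sketched below for orientation only; the chapter's coarse-grained model, caching procedure and GPU kernels are not shown, and the q-independent form factors are a simplification.

    import numpy as np

    def debye_intensity(coords, form_factors, q_values):
        """Debye formula over all particle pairs; coords is (n, 3),
        form_factors is (n,), q_values is an iterable of scattering momenta."""
        diff = coords[:, None, :] - coords[None, :, :]
        r = np.sqrt((diff ** 2).sum(-1))                      # pairwise distances
        ff = form_factors[:, None] * form_factors[None, :]
        intensities = []
        for q in q_values:
            x = q * r
            s = np.where(x > 0, np.sin(x) / np.where(x > 0, x, 1.0), 1.0)  # sinc, sinc(0)=1
            intensities.append((ff * s).sum())
        return np.array(intensities)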

Part III: Bio-inspired Systems and Signal Processing

Frontmatter
Assessment of Gait Symmetry and Gait Normality Using Inertial Sensors: In-Lab and In-Situ Evaluation
Abstract
Quantitative gait analysis is a powerful tool for the assessment of a number of physical and cognitive conditions. Unfortunately, the costs involved in providing in-lab 3D kinematic analysis to all patients are prohibitive. Inertial sensors such as accelerometers and gyroscopes may complement in-lab analysis by providing cheaper gait analysis systems that can be deployed anywhere. The present study investigates the use of inertial sensors to quantify gait symmetry and gait normality. The system was evaluated in-lab against 3D kinematic measurements, and in-situ against clinical assessments of hip-replacement patients. Results show that the system not only correlates well with kinematic measurements but also corroborates various quantitative and qualitative measures of recovery and health status of hip-replacement patients.
Anita Sant’ Anna, Nicholas Wickström, Helene Eklund, Roland Zügner, Roy Tranberg
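One common way to quantify gait symmetry from a single trunk-mounted accelerometer is the ratio of the autocorrelation peaks at the step and stride periods. The sketch below illustrates that generic approach and is not necessarily the measure used in the chapter; step_lag and stride_lag would be estimated beforehand from the dominant autocorrelation peaks.

    import numpy as np

    def autocorr(x, lag):
        x = x - x.mean()
        return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

    def gait_symmetry(vertical_acc, step_lag, stride_lag):
        """Regular, symmetric gait yields autocorrelation peaks of similar
        height at the step lag and the stride lag (ratio close to 1)."""
        ad_step = autocorr(vertical_acc, step_lag)
        ad_stride = autocorr(vertical_acc, stride_lag)
        return ad_step / ad_stride if ad_stride else float("nan")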
MRI TV-Rician Denoising
Abstract
Recent research on the reconstruction of magnitude Magnetic Resonance Images (MRI) from the inverse Fourier transform of complex (Gaussian-contaminated) data sets focuses on the proper modeling of the resulting Rician-noise-contaminated data. In this paper we consider a variational Rician denoising model for MRI data sets that we solve by a semi-implicit numerical scheme, which leads to the resolution of a sequence of Rudin, Osher and Fatemi (ROF) models. The (iterated) resolution of these well-posed numerical problems is then proposed for Total Variation (TV) Rician denoising. For numerical comparison we also consider a direct semi-implicit approach for the primal problem, which amounts to considering some (regularizing) approximating problems. Synthetic and real MR brain images are then denoised, and the results show the effectiveness of the new method in both accuracy and speed.
Adrian Martin, Juan-Francisco Garamendi, Emanuele Schiavi
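For reference, the ROF model mentioned in this abstract and the TV-Rician fidelity it is coupled with are usually written as follows (notation and normalization may differ slightly from the chapter):

\[
\min_u \int_\Omega |\nabla u|\,dx + \frac{\lambda}{2}\int_\Omega (u-f)^2\,dx
\qquad\text{and}\qquad
\min_u \int_\Omega |\nabla u|\,dx + \lambda\int_\Omega \left(\frac{u^2}{2\sigma^2} - \log I_0\!\left(\frac{u f}{\sigma^2}\right)\right)dx,
\]

where \(f\) is the noisy magnitude image, \(\sigma\) the noise level and \(I_0\) the modified Bessel function of the first kind; the semi-implicit scheme reduces the second (Rician) problem to a sequence of problems of the first (ROF) type.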
Applying ICA in EEG: Choice of the Window Length and of the Decorrelation Method
Abstract
Blind Source Separation (BSS) approaches for multi-channel EEG processing are popular, and in particular Independent Component Analysis (ICA) algorithms have proven their ability for artefact removal and source extraction for this very specific class of signals. However, the blind aspect of these techniques implies well-known drawbacks. As these methods are based on statistics estimated from the data and rely on a hypothesis of signal stationarity, the length of the window is crucial and has to be chosen carefully: large enough to get reliable estimates and short enough to respect the rather non-stationary nature of EEG signals. In addition, another issue concerns the plausibility of the resulting separated sources. Indeed, some authors have suggested that some ICA algorithms give more physiologically plausible results than others. In this paper, we address both issues by comparing four popular ICA algorithms (namely FastICA, Extended InfoMax, JADER and AMICA). First, we propose a new criterion aiming to evaluate the quality of the decorrelation step of the ICA algorithms. This criterion leads to a heuristic rule for the minimal sample size that guarantees statistically robust results. Next, we show that for this minimal sample size ensuring constant decorrelation quality we obtain quasi-constant ICA performance for some but not all tested algorithms. Extensive tests have been performed on simulated data (i.i.d. sub- and super-Gaussian sources mixed by random mixing matrices) and plausible data (macroscopic neural population models placed inside a three-layer spherical head model). The results globally confirm the proposed rule for minimal data length and show that the use of sphering as the decorrelation step might significantly change the global performance of some algorithms.
Gundars Korats, Steven Le Cam, Radu Ranta, Mohamed Hamid
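The decorrelation (sphering) step whose quality the proposed criterion evaluates is, in its standard form, a whitening of the channel covariance. A minimal NumPy sketch follows; the criterion itself and the ICA algorithms compared in the chapter are not reproduced.

    import numpy as np

    def sphere(X, eps=1e-12):
        """Sphering used before ICA: remove the channel means and whiten so
        that the covariance of the output becomes the identity. X is channels x samples."""
        Xc = X - X.mean(axis=1, keepdims=True)
        cov = Xc @ Xc.T / Xc.shape[1]
        d, E = np.linalg.eigh(cov)
        W = E @ np.diag(1.0 / np.sqrt(d + eps)) @ E.T   # whitening matrix
        return W @ Xc, W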
Electrical Impedance Properties of Deep Brain Stimulation Electrodes during Long-Term In-Vivo Stimulation in the Parkinson Model of the Rat
Abstract
Deep brain stimulation (DBS) is an invasive therapeutic option for patients with Parkinson’s disease (PD), but the mechanisms behind it are not yet fully understood. Animal models are essential for basic DBS research, because cell-based in-vitro techniques are not complex enough. However, the geometric difference between rodents and humans creates problems in transferring the stimulation conditions. For rodents, the development of miniaturized mobile stimulators and adapted electrodes is desirable. We implanted uni- and bipolar platinum/iridium electrodes in rats and were able to establish chronic instrumentation of freely moving rats (3 weeks). We measured the impedance of unipolar electrodes in-vivo to characterize the influence of electrochemical processes at the electrode-tissue interface. During the encapsulation process, the real part of the electrode impedance at 10 kHz doubled after 12 days and increased almost 10-fold after 22 days. An outlook is given on the quantification of the DBS effect by sensorimotor behavioral tests.
Kathrin Badstübner, Thomas Kröger, Eilhard Mix, Ulrike Gimsa, Reiner Benecke, Jan Gimsa
Comparison between Thermal and Visible Facial Features on a Verification Approach
Abstract
A comprehensive performance analysis of a thermal and visible face verification system based on the Scale-Invariant Feature Transform (SIFT) algorithm with a vocabulary tree is presented in this work, providing a verification scheme that scales efficiently to a large number of features. The image database is formed from front-view thermal images, which contain the 2-dimensional facial temperature distributions of different individuals, and the corresponding visible images, comprising 1,476 thermal images and 1,476 visible images equally split into two modalities: face and head. The SIFT features are not only invariant to image scale and rotation but also provide robust matching across changes in illumination or the addition of noise. Descriptors extracted from local regions are hierarchically arranged in a vocabulary tree using the k-means algorithm as the clustering method. This provides a larger and more discriminative vocabulary, which leads to a performance improvement. The verification quality is evaluated through a series of independent experiments, showing the power of the system, which satisfactorily verifies the identity of the database subjects and overcomes limitations such as dependency on illumination conditions and facial expressions. A comparison between head and face verification is made for both ranges. The approach reached accuracy rates of 97.60% for thermal head images compared with 88.20% for thermal face verification, and, in the visible range, 99.05% for visible head images compared with 97.65% for visible face verification. In these experiments the visible range gives better accuracy than the thermal range and, independently of range, head images provide the most discriminative information.
Carlos M. Travieso, Marcos del Pozo-Baños, Jesús B. Alonso
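The hierarchical k-means vocabulary tree mentioned in this abstract can be sketched as below (Python with scikit-learn). SIFT descriptor extraction (e.g. with OpenCV) is assumed to have been done beforehand, and the branching factor and depth are illustrative values, not those of the paper.

    import numpy as np
    from sklearn.cluster import KMeans

    def build_vocab_tree(descriptors, branch=10, depth=3, min_size=50):
        """Recursive k-means over SIFT descriptors (a simplified vocabulary tree);
        the leaves act as visual words used to quantize and compare images."""
        node = {"center": descriptors.mean(axis=0), "children": []}
        if depth == 0 or len(descriptors) < max(min_size, branch):
            return node                                    # leaf node
        km = KMeans(n_clusters=branch, n_init=10, random_state=0).fit(descriptors)
        for k in range(branch):
            subset = descriptors[km.labels_ == k]
            if len(subset):
                node["children"].append(
                    build_vocab_tree(subset, branch, depth - 1, min_size))
        return node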

Part IV: Health Informatics

Frontmatter
Multiparameter Sleep Monitoring Using a Depth Camera
Abstract
In this study, a depth analysis technique was developed to monitor the user’s breathing rate, sleep position, and body movement during sleep without any physical contact. A cross-section method was proposed to detect the user’s head and torso from the sequence of depth images. In the experiment, eight participants were asked to change their sleep position (supine and side-lying) every fifteen breathing cycles on the bed. The results showed that the proposed method is promising for detecting the head and torso across various sleeping postures and body shapes. In addition, a realistic over-night sleep monitoring experiment was conducted. The results demonstrated that the system is promising for monitoring sleep under realistic conditions, and the measurement accuracy was better than in the first experiment. This study is important for providing a non-contact technology to measure multiple sleep conditions and to assist users in better understanding their sleep quality.
Meng-Chieh Yu, Huan Wu, Jia-Ling Liou, Ming-Sui Lee, Yi-Ping Hung
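As a generic illustration of non-contact breathing-rate estimation from depth data (the chapter's cross-section method for head and torso detection is not reproduced), one can track the mean depth of the detected torso region over time and take the dominant frequency in the respiratory band:

    import numpy as np

    def breathing_rate(torso_depth, fps):
        """Breaths per minute from the per-frame mean torso depth (the chest
        rises and falls periodically); a simple FFT-peak estimate."""
        x = np.asarray(torso_depth, dtype=float)
        x = x - x.mean()
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
        band = (freqs >= 0.1) & (freqs <= 0.7)       # roughly 6 to 42 breaths/min
        if not band.any():
            return float("nan")                      # recording too short
        return 60.0 * freqs[band][np.argmax(spectrum[band])]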
Integration of a Heart Rate Prediction Model into a Personal Health Record to Support the Telerehabilitation Training of Cardiopulmonary Patients
Abstract
Chronic obstructive pulmonary disease (COPD) and coronary artery disease are severe diseases with increasing prevalence. Studies show that regular endurance exercise training affects the health state of patients positively. Heart rate (HR) is an important parameter that helps physicians and (tele-)rehabilitation systems to assess and control exercise training intensity and to ensure the patients’ safety during training. On the basis of 668 training sessions (325 F, 343 M), we created linear models predicting the training HR in five application scenarios. Personal Health Records (PHRs) are tools that support users in entering, managing and sharing their own health data, but the usage of current products suffers from interoperability and acceptance problems. To overcome these problems, we implemented a PHR that is physically located in the user’s home environment and that uses the predictive linear models to support physicians during the training plan creation process. The prediction accuracy of the model varies from a median root mean square error (RMSE) of ≈11 in the training plan creation scenario down to ≈3.2 in the scenario where the prediction takes place at the beginning of a training phase.
Axel Helmer, Riana Deparade, Friedrich Kretschmer, Okko Lohmann, Andreas Hein, Michael Marschollek, Uwe Tegtbur
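A minimal version of a linear training-HR model and the RMSE used to report its accuracy might look as follows; the actual predictors and application scenarios used in the chapter are not reproduced here.

    import numpy as np

    def fit_linear_hr_model(X, y):
        """Least-squares fit of y ~ X b, e.g. training heart rate predicted from
        features such as workload, age or resting HR (illustrative only)."""
        Xb = np.column_stack([np.ones(len(X)), X])   # add intercept column
        coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        return coef

    def rmse(y_true, y_pred):
        return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))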
Exploiting Cloud-Based Personal Health Information Systems in Practicing Patient Centered Care Model
Abstract
The introduction of new emerging healthcare models, such as patient-centered care, pharmaceutical care, and the chronic care model, is changing how people think about health and about patients themselves. These healthcare models need technology solutions that support co-operation within the patient’s healthcare team, provide a platform for sharing the patient’s healthcare data among the healthcare team, and provide a mechanism for disseminating relevant educational material to the patient and the healthcare team. Unfortunately, current health information technology solutions only provide the connection between patients and healthcare providers, and thus do not support the new emerging healthcare models. Instead, cloud-based healthcare delivery models will potentially have more impact on developing appropriate technology for the new healthcare models. In this paper, we describe our work on designing a personal health information system which supports patient remote monitoring as well as the new emerging healthcare models. The key idea is to develop the system by integrating relevant e-health tools through a shared ontology and to exploit the flexibility of cloud computing in its implementation. In developing the ontology we have used semantic web technologies such as OWL and RDF.
Juha Puustjärvi, Leena Puustjärvi
Online Social Networks Flu Trend Tracker: A Novel Sensory Approach to Predict Flu Trends
Abstract
Seasonal influenza epidemics cause several million cases of illness and about 250,000 to 500,000 deaths worldwide each year. Other pandemics, like the 1918 “Spanish Flu”, may turn into devastating events. Reducing the impact of these threats is of paramount importance for health authorities, and studies have shown that effective interventions can be taken to contain the epidemics if early detection can be made. In this paper, we introduce Social Network Enabled Flu Trends (SNEFT), a continuous data collection framework which monitors flu-related messages on online social networks such as Twitter and Facebook and tracks the emergence and spread of influenza. We show that text mining significantly enhances the correlation between online social network (OSN) data and the Influenza-like Illness (ILI) rates provided by the Centers for Disease Control and Prevention (CDC). For accurate prediction, we implemented an auto-regression with exogenous input (ARX) model which uses current OSN data and CDC ILI rates from previous weeks to predict current influenza statistics. Our results show that, while previous ILI data from the CDC offer a true (but delayed) assessment of a flu epidemic, OSN data provide a real-time assessment of the current epidemic condition and can be used to compensate for the lack of current ILI data. We observe that the OSN data are highly correlated with the ILI rates across different regions within the USA and can be used to effectively improve the accuracy of our prediction. Therefore, OSN data can act as a supplementary indicator to gauge influenza within a population and help to discover flu trends ahead of the CDC.
Harshavardhan Achrekar, Avinash Gandhe, Ross Lazarus, Ssu-Hsin Yu, Benyuan Liu
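The ARX model referred to in this abstract combines CDC ILI rates from previous weeks with the current OSN signal. A least-squares sketch is given below; the lag orders are illustrative, not the paper's.

    import numpy as np

    def fit_arx(ili, osn, p=2, q=1):
        """ILI_t ~ a_1*ILI_{t-1} + ... + a_p*ILI_{t-p}
                 + b_0*OSN_t + ... + b_{q-1}*OSN_{t-q+1} + c, fitted by least squares."""
        ili, osn = np.asarray(ili, float), np.asarray(osn, float)
        rows, targets = [], []
        for t in range(max(p, q), len(ili)):
            past_ili = ili[t - p:t][::-1]            # previous p weeks of CDC ILI rates
            cur_osn = osn[t - q + 1:t + 1][::-1]     # current and recent OSN signal
            rows.append(np.concatenate([past_ili, cur_osn, [1.0]]))
            targets.append(ili[t])
        coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
        return coef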
Clustering of Human Sleep Recordings Using a Quantile Representation of Stage Bout Durations
Abstract
In this paper, a condensed representation of stage bout durations based on the q-quantiles of the duration distributions is used as a basis for the discovery of duration-related patterns in human sleep data. A collection of 244 all-night hypnograms is studied. Quartiles (q = 4) provide a good tradeoff between representational detail and sample variation. 15 descriptive variables are obtained that correspond to the bout duration quartiles of wake after sleep onset, NREM stage 1, NREM stage 2, slow wave sleep, and REM sleep. EM clustering is used to identify distinct groups of hypnograms based on stage bout durations. Each group is shown to be characterized by bout duration quartiles of specific sleep stages, with statistically significant differences among groups (p < 0.05). Several sleep-related and health-related variables are shown to be significantly different among the bout duration groups found through clustering. In contrast, multivariate linear regression fails to yield good predictive models based on the same bout duration variables used in the clustering analysis. This work demonstrates that machine learning techniques are capable of uncovering naturally occurring dynamical patterns in sleep data that also provide sleep-based indicators of health.
Chiying Wang, Francis W. Usher, Sergio A. Alvarez, Carolina Ruiz, Majaz Moonis
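The 15-variable quartile representation and the EM clustering described in this abstract can be sketched as follows (Python with scikit-learn); the stage labels and the number of clusters are illustrative placeholders.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    STAGES = ("WASO", "N1", "N2", "SWS", "REM")   # wake after sleep onset, NREM 1/2, SWS, REM

    def bout_quartile_features(bouts_by_stage):
        """Per-recording feature vector: the three quartile cut points (q = 4)
        of the bout-duration distribution of each stage -> 5 x 3 = 15 values."""
        feats = []
        for s in STAGES:
            feats.extend(np.percentile(bouts_by_stage[s], [25, 50, 75]))
        return np.array(feats)

    def cluster_recordings(feature_matrix, n_groups=3):
        """EM clustering (Gaussian mixture) of the hypnogram feature vectors."""
        return GaussianMixture(n_components=n_groups, random_state=0).fit_predict(feature_matrix)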
Touch and Speech: Multimodal Interaction for Elderly Persons
Abstract
This paper reports our work on the development and evaluation of a multimodal interactive guidance system for navigating elderly persons in hospital environments. A list of design guidelines has been proposed and implemented in our system, addressing the needs of designing multimodal interfaces for elderly persons. Meanwhile, the central component of an interactive system, the dialogue manager, has been developed according to a unified dialogue modelling method, which combines conventional recursive-transition-network-based generalized dialogue models with classic agent-based dialogue theory, and is supported by a formal-language-based development toolkit. To evaluate the developed multimodal interactive system, the touch and speech input modalities of the current system were assessed in an experimental study with 31 elderly participants. The overall positive results on the effectiveness, efficiency and user satisfaction of both modalities confirm our proposed guidelines, approaches and frameworks for interactive system development. Despite the slightly different results, there is no significant evidence for one preferred modality. Thus, further study of their combination is considered necessary.
Cui Jian, Hui Shi, Frank Schafmeister, Carsten Rachuy, Nadine Sasse, Holger Schmidt, Volker Hoemberg, Nicole von Steinbüchel
Data Integration Solution for Organ-Specific Studies: An Application for Oral Biology
Abstract
The human oral cavity is a complex ecosystem where multiple interactions occur and whose comprehension is critical in understanding several disease mechanisms. In order to comprehend the composition of the oral cavity at a molecular level, it is necessary to compile and integrate the biological information resulting from specific techniques, especially from proteomic studies of saliva. The objective of this work was to compile and curate a specific group of proteins related to the oral cavity, providing a tool to conduct further studies of the salivary proteome. In this paper we present a platform that integrates in a single endpoint all available information for proteins associated with the oral cavity. The proposed tool allows researchers in biomedical sciences to explore microorganisms, proteins and diseases, constituting a unique tool to analyse meaningful interactions for oral health.
José Melo, Joel P. Arrais, Edgar Coelho, Pedro Lopes, Nuno Rosa, Maria José Correia, Marlene Barros, José Luís Oliveira
Backmatter
Metadata
Title
Biomedical Engineering Systems and Technologies
Editors
Joaquim Gabriel
Jan Schier
Sabine Van Huffel
Emmanuel Conchon
Carlos Correia
Ana Fred
Hugo Gamboa
Copyright Year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-38256-7
Print ISBN
978-3-642-38255-0
DOI
https://doi.org/10.1007/978-3-642-38256-7
