
About this Book

This book constitutes revised selected papers from the Second International Workshop on Brain-Inspired Computing, BrainComp 2015, held in Cetraro, Italy, in July 2015.
The 14 papers presented in this volume were carefully reviewed and selected for inclusion in this book. They cover brain structure and function; computational models and brain-inspired computing methods with practical applications; high-performance computing; and visualization for brain simulations.

Table of Contents

Frontmatter

Human Brainnetome Atlas and Its Potential Applications in Brain-Inspired Computing

Brain atlases are considered the cornerstone of neuroscience, but most available atlases lack fine-grained parcellations and provide no information about functionally important connectivity. Recently developed methodologies and computerized brain-mapping techniques now make it possible to explore the structure, function, and spatio-temporal changes of the human brain. The human Brainnetome Atlas is an in vivo map that includes fine-grained functional brain subregions and detailed anatomical and functional connection patterns for each area. These features should enable researchers to describe the large-scale architecture of the human brain more accurately. Using the human Brainnetome Atlas, researchers could simulate and model brain networks using informatics and simulation technologies to elucidate the basic organizing principles of the brain. Others could use this same atlas to design novel neuromorphic systems inspired by the architecture of the brain. Therefore, this cutting-edge human Brainnetome Atlas paves the way for constructing an even more fine-grained atlas of the human brain and offers potential applications in brain-inspired computing.

Lingzhong Fan, Hai Li, Shan Yu, Tianzi Jiang

Workflows for Ultra-High Resolution 3D Models of the Human Brain on Massively Parallel Supercomputers

Human brain atlases [1] are indispensable tools for achieving a better understanding of the multilevel organization of the brain: they integrate and analyze data from different brains, sources, and modalities while considering the functionally relevant topography of the brain [4]. The spatial resolution of most of these electronic atlases is in the range of millimeters, which does not allow information to be integrated at the level of cortical layers, columns, microcircuits, or cells. Therefore, in 2013 we introduced the first BigBrain data set with an isotropic resolution of 20 µm. This data set makes it possible to specify morphometric parameters of human brain organization, which serve as a "gold standard" for neuroimaging data obtained at lower spatial resolution. In addition, it provides an essential basis for realistic brain models in structural analysis and simulation [2]. To generate further, even higher-resolution data sets of the human brain, we developed an improved and more efficient data-processing workflow that employs high-performance computing for the 3D reconstruction of histological data sets. To facilitate the analysis of intersubject variability at the microscopic level, the new processing framework was applied to reconstruct a second BigBrain data set comprising 7676 sections. Efficiently processing such large data sets with a complex, nested reconstruction workflow on a large number of compute nodes required optimized distributed processing workflows as well as parallel programming. Detailed documentation of the processing steps and of the complex interdependencies between the data sets at each level of the multi-step reconstruction workflow was essential to enable transformations to images of the same histological sections obtained at even higher spatial resolution.
We have addressed these challenges and achieved efficient high-throughput processing of thousands of images of histological sections, combined with sufficient flexibility, based on an effective, successive coarse-to-fine hierarchical processing scheme.

Hartmut Mohlberg, Bastian Tweddell, Thomas Lippert, Katrin Amunts

Towards Large-Scale Fiber Orientation Models of the Brain – Automation and Parallelization of a Seeded Region Growing Segmentation of High-Resolution Brain Section Images

To understand the microscopic organization of the human brain, including its cellular and fiber architectures, it is a necessary prerequisite to build virtual models of the brain on a sound biological basis. 3D Polarized Light Imaging (3D-PLI) provides a window for analyzing the fiber architecture and the fibers' intricate interconnections at microscopic resolution. Considering the complexity and the sheer size of the human brain, with its nearly 86 billion nerve cells, 3D-PLI is challenging with respect to data handling and analysis in the terabyte to petabyte range, and inevitably requires supercomputing facilities. Parallelization and automation of image-processing steps open up new perspectives for speeding up the generation of new high-resolution models of the human brain, providing groundbreaking insights into the brain's three-dimensional microarchitecture. Here, we describe the implementation and performance of a parallelized, semi-automated seeded region growing algorithm used to classify tissue and background components in up to one million 3D-PLI images acquired from an entire human brain. This algorithm is an important element of a complex UNICORE-based analysis workflow that ultimately aims at extracting spatial fiber orientations from 3D-PLI measurements.
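As an illustration of the core idea of seeded region growing, the following minimal Python sketch grows a tissue region from a single seed pixel. It is a simplified, hypothetical stand-in for the chapter's parallelized implementation: the image, seed, and threshold are invented toy values, and the real workflow operates on millions of high-resolution section images.

```python
from collections import deque

def seeded_region_growing(image, seed, threshold):
    """Grow a region from `seed`, absorbing 4-connected pixels whose
    intensity differs from the seed intensity by at most `threshold`."""
    rows, cols = len(image), len(image[0])
    seed_value = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_value) <= threshold):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Toy 4x4 "section": bright tissue (~200) on dark background (~10).
img = [[200, 200,  10,  10],
       [200, 210,  10,  10],
       [ 10,  10,  10,  10],
       [ 10,  10,  10, 220]]
tissue = seeded_region_growing(img, (0, 0), threshold=30)
```

The bright pixel at (3, 3) is not collected because it is not connected to the seed region, which is exactly why seeding matters for separating tissue from background artifacts.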

Anna Lührs, Oliver Bücker, Markus Axer

Including Gap Junctions into Distributed Neuronal Network Simulations

Contemporary simulation technology for neuronal networks enables the simulation of brain-scale networks using neuron models with a single or a few compartments. However, distributed simulations at full cell density still lack the electrical coupling between cells via so-called gap junctions, due to the absence of efficient algorithms for simulating gap junctions on large parallel computers. The difficulty is that gap junctions require an instantaneous interaction between the coupled neurons, whereas the efficiency of simulation codes for spiking neurons relies on delayed communication. In a recent paper [15] we described a technology to overcome this obstacle. Here, we give an overview of the challenges of including gap junctions in a distributed simulation scheme for neuronal networks and present an implementation of the new technology available in the NEural Simulation Tool (NEST 2.10.0). Subsequently, we introduce the usage of gap junctions in model scripts, as well as benchmarks assessing the performance and overhead of the technology on the supercomputers JUQUEEN and K computer.
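To make the "instantaneous interaction" problem concrete, the following toy sketch couples two passive membranes through a gap-junction current. All parameters and the passive membrane model are invented for illustration; NEST's actual neurons and its distributed solution method are far more sophisticated than this serial Euler loop, but the sketch shows why every update needs the partner's current voltage rather than a delayed copy.

```python
import numpy as np

def simulate_pair(e_rest, g_gap=0.5, tau=10.0, dt=0.01, steps=5000):
    """Two passive membranes leaking toward their resting potentials,
    coupled by a gap junction with current I_gap = g * (V_partner - V_self).
    The coupling is instantaneous: every step uses the partner's *current*
    voltage, which is what delayed-communication schemes cannot provide."""
    e = np.asarray(e_rest, dtype=float)
    v = e.copy()                              # start each cell at rest
    for _ in range(steps):
        i_gap = g_gap * (v[::-1] - v)         # instantaneous mutual coupling
        v = v + dt * ((e - v) / tau + i_gap)  # explicit Euler step
    return v

v = simulate_pair([-70.0, -50.0])
# Steady-state difference: (E0 - E1) / (1 + 2 * g_gap * tau) = -20/11 mV,
# i.e. the junction pulls the two voltages most of the way together.
```

Solving for the fixed point of the two coupled equations gives the steady-state difference quoted in the comment; a simulator that delays the coupling term would converge to a different (wrong) trajectory.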

Jan Hahne, Moritz Helias, Susanne Kunkel, Jun Igarashi, Itaru Kitayama, Brian Wylie, Matthias Bolten, Andreas Frommer, Markus Diesmann

Designing Workflows for the Reproducible Analysis of Electrophysiological Data

The workflows that span the experimental recording of neuronal data up to the publication of figures illustrating neuroscientific analysis results are interwoven and complex. Unfortunately, current implementations of such workflows in electrophysiological research are far from automated, and software supporting this goal is largely in development or missing. In consequence, the level of reproducibility of data analysis is poor compared to other scientific disciplines. Although the problem is well known and leads to ineffective, unsustainable science, no solution is in sight in terms of a complete, provenance-tracked workflow. Here, we outline the principal challenges that complicate the design of workflows for electrophysiological research. We detail how existing tools can be integrated to form partial workflows that address some of these challenges. On the basis of a concrete workflow implementation, we discuss open questions and urgently needed software components.

Michael Denker, Sonja Grün

Finite-Difference Time-Domain Simulation for Three-Dimensional Polarized Light Imaging

Three-dimensional Polarized Light Imaging (3D-PLI) is a promising technique to reconstruct the nerve fiber architecture of human post-mortem brains from birefringence measurements of histological brain sections with micrometer resolution. To better understand how the reconstructed fiber orientations are related to the underlying fiber structure, numerical simulations are employed. Here, we present two complementary simulation approaches that reproduce the entire 3D-PLI analysis: First, we give a short review on a simulation approach that uses the Jones matrix calculus to model the birefringent myelin sheaths. Afterwards, we introduce a more sophisticated simulation tool: a 3D Maxwell solver based on a Finite-Difference Time-Domain algorithm that simulates the propagation of the electromagnetic light wave through the brain tissue. We demonstrate that the Maxwell solver is a valuable tool to better understand the interaction of polarized light with brain tissue and to enhance the accuracy of the fiber orientations extracted by 3D-PLI.
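To give a flavor of the Finite-Difference Time-Domain method that the chapter's 3D Maxwell solver is built on, here is a minimal 1D free-space FDTD sketch in normalized units. It is not the chapter's solver (which is 3D and models birefringent tissue); grid size, source position, and pulse shape are invented, and the Courant number is set to 1 so the pulse travels exactly one cell per step.

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=90, src=100):
    """1D free-space Yee/FDTD scheme in normalized units at the 'magic'
    time step (Courant number 1): E and H live on staggered grids and
    are advanced in leapfrog fashion."""
    ez = np.zeros(n_cells)        # electric field at cell centers
    hy = np.zeros(n_cells - 1)    # magnetic field on the staggered grid
    for n in range(n_steps):
        hy += np.diff(ez)                            # H update from curl E
        ez[1:-1] += np.diff(hy)                      # E update from curl H
        ez[src] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source
    return ez

ez = fdtd_1d()
# The injected pulse splits into two counter-propagating waves travelling
# one cell per step, so after 90 steps the peaks sit near cells 40 and 160.
```

The full 3D solver generalizes these two curl updates to all six field components and adds anisotropic material tensors for the birefringent myelin sheaths.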

Miriam Menzel, Markus Axer, Hans De Raedt, Kristel Michielsen

Visual Processing in Cortical Architecture from Neuroscience to Neuromorphic Computing

Primate cortices are organized into different layers, which constitute a compartmental structure on a functional level. We show how composite structural elements form building blocks that define canonical elements for columnar computation in cortex. As a further abstraction, we define a dynamical three-stage model of a cortical column that allows us to investigate the dynamic response properties of cortical algorithms, e.g., feedforward signal integration as feature detection filters, lateral feature grouping, and the integration of modulatory (feedback) signals. Using such a multi-stage cortical model, we investigate the detection and integration of spatio-temporal motion measured by event-based (frame-less) cameras. We demonstrate how the canonical neural circuit can improve such representations using normalization and feedback, and develop key computational elements to map such a model onto neuromorphic hardware (IBM's TrueNorth chip). This is a step towards implementing real-time, energy-efficient neuromorphic optical-flow detectors based on realistic principles of computation in cortical columns.
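The interplay of feedback modulation and normalization mentioned above can be sketched in a single static computation. This is a strongly simplified, hypothetical stand-in for the chapter's dynamical three-stage column model: the gain and constant are invented, and the key property shown is that feedback multiplicatively enhances existing drive but cannot create activity where there is none.

```python
import numpy as np

def column_response(drive, feedback=None, gain=2.0, sigma=0.1):
    """Toy canonical column stage: feedforward drive is multiplicatively
    modulated by feedback, then divisively normalized by pooled activity."""
    drive = np.asarray(drive, dtype=float)
    fb = np.zeros_like(drive) if feedback is None else np.asarray(feedback, dtype=float)
    modulated = drive * (1.0 + gain * fb)          # feedback enhances, never creates
    return modulated / (sigma + modulated.sum())   # divisive normalization over the pool

plain = column_response([4.0, 1.0, 1.0])
biased = column_response([4.0, 1.0, 1.0], feedback=[0.0, 1.0, 0.0])
```

In the biased case, feedback on the second feature raises its normalized response at the expense of its competitors, which is the basic mechanism by which top-down signals can disambiguate motion measurements.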

Tobias Brosch, Stephan Tschechne, Heiko Neumann

Bio-Inspired Filters for Audio Analysis

Nowadays, much is known about the functions of the components of the human auditory system. Computational models of these components are widely accepted and have recently inspired the work of researchers in pattern recognition and signal processing. In this work we present a novel filter, which we call COPE (Combination of Peaks of Energy), inspired by the way sound waves are converted into neuronal firing activity on the auditory nerve. A COPE filter creates a model of the pattern of neural activity generated by a sound of interest and is able to detect the same pattern and modified versions of it. We apply the proposed method to the task of event detection for road surveillance. For the experiments, we use a publicly available data set, the MIVIA road events data set. The results that we achieve (a recognition rate of 94% and a false positive rate below 4%), together with a comparison with existing methods, demonstrate the effectiveness of the proposed bio-inspired filters for audio analysis.
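The idea of modeling a sound as a constellation of energy peaks and re-detecting it can be sketched as follows. This is a deliberately simplified illustration, not the published COPE filter: the peak extraction, the hard matching tolerance, and the toy energy maps are all invented for the example (the real filter works on Gammatone-like energy patterns with soft, weighted scoring).

```python
def extract_peaks(energy, thresh=0.5):
    """Per-channel local maxima of an energy map (channels x time bins),
    returned as (time, channel) points -- a crude stand-in for the
    pattern of firing activity on the auditory nerve."""
    peaks = []
    for ch, row in enumerate(energy):
        for t in range(1, len(row) - 1):
            if row[t] > thresh and row[t] >= row[t - 1] and row[t] > row[t + 1]:
                peaks.append((t, ch))
    return peaks

def cope_score(model, peaks, dt_tol=1):
    """Slide the model (per-channel time offsets) over a peak map and
    return the best fraction of model points matched within dt_tol bins."""
    if not peaks:
        return 0.0
    times = [t for t, _ in peaks]
    best = 0.0
    for t0 in range(min(times), max(times) + 1):
        hits = sum(
            any(ch == c and abs(t0 + dt - t) <= dt_tol for t, c in peaks)
            for dt, ch in model
        )
        best = max(best, hits / len(model))
    return best

# Prototype sound: 3 channels, one energy peak per channel.
proto = [[0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
         [0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
         [0, 0, 0, 0, 0, 0, 1, 0, 0, 0]]
proto_peaks = extract_peaks(proto)
model = [(t - 5, ch) for t, ch in proto_peaks]   # offsets from earliest peak

shifted = [(13, 0), (15, 1), (14, 2)]   # same pattern, 8 bins later
partial = [(13, 0)]                     # only one matching channel
```

A time-shifted copy of the pattern scores 1.0 while an unrelated single peak scores only 1/3, which captures why the filter tolerates modified versions of the sound of interest.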

Nicola Strisciuglio, Mario Vento, Nicolai Petkov

Sophisticated LVQ Classification Models - Beyond Accuracy Optimization

Learning vector quantization (LVQ) models are among the most successful machine learning classifiers. LVQs are intuitively designed and generally allow an easy interpretation according to the class-dependent prototype principle. Originally, LVQs optimize the classification accuracy during adaptation, which can be misleading for imbalanced data. Further, the application may require that other statistical classification evaluation measures be considered, e.g. sensitivity and specificity, as frequently demanded in biomedical applications. In this article we present recent approaches to modifying LVQ so that such sophisticated evaluation measures can be integrated as objectives to be optimized. In particular, we show that all differentiable functions built from contingency tables can be incorporated into an LVQ scheme, as well as receiver operating characteristic curve optimization.

Thomas Villmann

Classification of FDG-PET Brain Data by Generalized Matrix Relevance LVQ

We apply Generalized Matrix Learning Vector Quantization (GMLVQ) to the classification of Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) brain data. The aim is to achieve accurate detection and discrimination of neurodegenerative syndromes such as Parkinson's Disease, Multiple System Atrophy, and Progressive Supranuclear Palsy. Image data are pre-processed and analysed in terms of low-dimensional representations obtained by Principal Component Analysis in the Scaled Subprofile Model approach. The performance of the GMLVQ classifiers is evaluated in a leave-one-out framework. Comparison with earlier results shows that GMLVQ and a Support Vector Machine with linear kernel achieve comparable performance, while both outperform a C4.5 decision tree classifier.
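The heart of GMLVQ is its adaptive distance: d(x, w) = (x - w)^T Λ (x - w) with a learned relevance matrix Λ = Ω^T Ω, under which examples are assigned the label of the nearest prototype. The following sketch shows only this classification step with hand-picked toy prototypes and a fixed Ω (the actual method learns Ω and the prototypes from data, and the chapter applies it to PCA-reduced FDG-PET features):

```python
import numpy as np

def gmlvq_distance(x, w, omega):
    """Adaptive squared distance d(x, w) = (x - w)^T Omega^T Omega (x - w)."""
    diff = omega @ (x - w)
    return float(diff @ diff)

def classify(x, prototypes, labels, omega):
    """Assign x the label of its nearest prototype under the GMLVQ metric."""
    d = [gmlvq_distance(x, w, omega) for w in prototypes]
    return labels[int(np.argmin(d))]

prototypes = [np.array([0.0, 0.0]), np.array([1.0, 4.0])]  # classes A and B
labels = ["A", "B"]
x = np.array([0.9, 0.5])            # class B along the relevant first axis

omega_learned = np.array([[1.0, 0.0]])  # relevance concentrated on feature 1
omega_naive = np.eye(2)                 # plain squared Euclidean distance
```

With the plain Euclidean metric the irrelevant second feature dominates and x is misclassified as A; the relevance transform suppresses that feature and recovers the correct label B, which is exactly the advantage the matrix relevance scheme brings to noisy imaging features.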

M. Biehl, D. Mudali, K. L. Leenders, J. B. T. M. Roerdink

A Cephalomorph Real-Time Computer

Although the domain of hard real-time systems has been thoroughly elaborated in academia, architectural issues have not received the attention they deserve, and in most cases off-the-shelf computer systems are used as execution platforms, with no guarantee that they can meet the specified temporal requirements. Therefore, a novel asymmetrical multiprocessor architecture for embedded control systems is presented. Resembling the structure of the human brain, it inherently supports temporal determinism.

Wolfgang A. Halang

Towards the Ultimate Display for Neuroscientific Data Analysis

This article aims to give some impulses for a discussion about what an "ultimate" display should look like to support the neuroscience community in an optimal way. In particular, we take a look at immersive display technology. Since its hype in the early 1990s, immersive Virtual Reality has undoubtedly been adopted as a useful tool in a variety of application domains and has proven its potential to support the process of scientific data analysis. Yet it is still an open question whether such non-standard displays make sense in the context of neuroscientific data analysis. We argue that the potential of immersive displays lies neither in the raw pixel count alone nor in other hardware-centric characteristics. Instead, we advocate the design of intuitive and powerful user interfaces for direct interaction with the data, which support the multi-view paradigm in an efficient and flexible way and, finally, provide interactive response times even for huge amounts of data and when dealing with multiple datasets simultaneously.

Torsten Wolfgang Kuhlen, Bernd Hentschel

Sentiment Analysis and Affective Computing: Methods and Applications

New computing technologies, such as affective computing and sentiment analysis, are attracting strong interest in different fields, such as marketing, politics and, recently, the life sciences. Possible applications in the latter field include the detection and monitoring of depressive states, mood disorders, and anxiety conditions. This paper aims to provide an introductory overview of affective computing and sentiment analysis through a discussion of the main processing techniques and applications. The paper concludes with a discussion of a new approach, based on the integration of sentiment analysis and affective computing, intended to achieve a more accurate and reliable detection of emotions and feelings for applications in the life sciences.

Barbara Calabrese, Mario Cannataro

Deep Representations for Collaborative Robotics

Collaboration is an essential feature of human social interaction. Briefly, when two or more people agree on a common goal and a joint intention to reach that goal, they have to coordinate their actions to engage in joint actions, planning their courses of action according to the actions of the other partners. The same holds for teams in which the partners are people and robots, resulting in a collection of technical questions that are difficult to answer. Human-robot collaboration requires the robot to coordinate its behavior with the behaviors of the humans at different levels, e.g., the semantic level, the level of content and behavior selection in the interaction, and low-level aspects such as the temporal dynamics of the interaction. This forces the robot to internalize information about the motions, actions, and intentions of the other partners, and about the state of the environment. Furthermore, collaborative robots should select their actions taking into account additional human-aware factors such as safety, reliability, and comfort. Current cognitive systems are usually limited in this respect, as they lack the rich dynamic representations and the flexible human-aware planning capabilities needed to succeed in tomorrow's human-robot collaboration tasks. In this paper, we provide a tool for addressing this problem by using the notion of deep hybrid representations and the facilities that this common state representation offers for the tight coupling of planners at different layers of abstraction. Deep hybrid representations encode the robot and environment state, but also a robot-centric perspective of the partners taking part in the joint activity.

Luis J. Manso, Pablo Bustos, Juan P. Bandera, Adrián Romero-Garcés, Luis V. Calderita, Rebeca Marfil, Antonio Bandera

Backmatter
