Large-scale Fiber Orientation Models Derived from 3D-Polarized Light
Imaging
M. Axer
Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich,
Germany
3D
Polarized Light Imaging (3D-PLI) is a neuroimaging technique that has opened up
new avenues to study the complex architecture of nerve fibers in postmortem brains.
The spatial orientations of the fibers are derived from birefringence measurements
of unstained histological brain sections that are interpreted by a
voxel-based analysis, i.e. each voxel is assigned a single 3D fiber orientation
vector. Hence, the fundamental data structure provided by 3D-PLI is a
comprehensive 3D vector field (fiber orientation map, FOM) for each brain
section. On the basis of image registration of a substantial number of serial
FOMs, large-scale 3D models of the local fiber orientations are created. Such
models are unique data sets (i) to bridge between macroscopic descriptions of the brain’s fiber architecture gained from diffusion MRI and microscopic descriptions gained from microscopy techniques, and (ii) to extract realistic
input information for neural network simulations.
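To make the voxel-based analysis concrete, the sketch below fits the standard sinusoidal 3D-PLI signal model I(rho) = I0/2 * (1 + sin(2*rho - 2*phi) * sin(delta)) to a single voxel's intensity profile and returns a unit fiber orientation vector for the FOM; the retardation-to-inclination mapping and all parameter names are illustrative assumptions, not the exact processing pipeline used at Jülich.

```python
import numpy as np

def fit_fiber_orientation(intensities, rho, rel_thickness=1.0):
    """Estimate the 3D fiber orientation in one voxel from an intensity
    profile I(rho) measured at polarimeter rotation angles rho, assuming
    I(rho) = I0/2 * (1 + sin(2*rho - 2*phi) * sin(delta))."""
    # Discrete Fourier analysis at twice the rotation angle
    a0 = intensities.mean()
    a2 = 2.0 * np.mean(intensities * np.cos(2 * rho))
    b2 = 2.0 * np.mean(intensities * np.sin(2 * rho))

    phi = 0.5 * np.arctan2(-a2, b2) % np.pi            # in-plane fiber direction
    retardation = np.sqrt(a2**2 + b2**2) / a0          # |sin(delta)|
    delta = np.arcsin(np.clip(retardation, 0.0, 1.0))
    # Illustrative mapping delta ~ delta_max * cos(alpha)^2 -> out-of-plane inclination alpha
    alpha = np.arccos(np.sqrt(np.clip(delta / (np.pi / 2 * rel_thickness), 0.0, 1.0)))

    # Unit vector entering the fiber orientation map (FOM)
    return np.array([np.cos(alpha) * np.cos(phi),
                     np.cos(alpha) * np.sin(phi),
                     np.sin(alpha)])

# Synthetic single-voxel profile measured at 18 rotation angles
rho = np.linspace(0, np.pi, 18, endpoint=False)
I = 0.5 * (1 + np.sin(2 * rho - 2 * 0.7) * np.sin(0.9))
print(fit_fiber_orientation(I, rho))
```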
The
constituents of the 3D-PLI methodology, such as the polarimetric setup, image
processing algorithms and volume reconstruction techniques, are particularly
developed to address whole human brain analysis at the micrometer scale, i.e.
to handle TByte to PByte sized data sets. The presentation will demonstrate
the key elements developed along the 3D-PLI processing pipeline from the
measurement towards the first reconstructed large-scale 3D fiber orientation
brain models.
Back to Session II
Combination Of Receptive Fields (CORF): A novel computational simple cell
model with application to contour detection and delineation
G. Azzopardi
University of Malta, Malta
The pioneering work of Hubel and Wiesel, which led to the discovery of simple and complex cells in the visual cortex of cats, has been an inspiration to a large body of
research in computational neuroscience and computer vision. We introduced a
computational model of a simple cell called Combination of Receptive Fields or
CORF for brevity. A CORF model uses as afferent inputs the responses of model
LGN cells whose center-surround receptive fields are aligned in a co-linear
manner, and combines their output with a weighted geometric mean. It achieves
properties that are typical of real simple cells, including push-pull inhibition, contrast-invariant orientation tuning and cross-orientation suppression, which are not exhibited by the Gabor model. We
demonstrated the effectiveness of the proposed CORF model in a contour
detection task, which is believed to be the primary biological role of simple
cells. We used two benchmark data sets (RuG and Berkeley) of images with
natural scenes and showed that it outperforms the Gabor function model with
(and without) surround suppression and Canny contour detectors. Moreover, the
CORF model is also very effective for the delineation of vessels in retinal
fundus images.
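As a rough illustration of the combination principle (a toy sketch, not the authors' implementation), the following code combines half-wave-rectified difference-of-Gaussians responses, sampled at collinearly arranged positions, with a weighted geometric mean; subunit placement, weights and all parameters are assumptions chosen only to show the idea.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def dog_response(image, sigma, on_center=True):
    """Model LGN cell: center-surround difference of Gaussians, half-wave rectified."""
    dog = gaussian_filter(image, sigma) - gaussian_filter(image, 2.0 * sigma)
    return np.maximum(dog if on_center else -dog, 0.0)

def corf_like_response(image, theta=0.0, sigma=2.0, n_subunits=5, spacing=3.0):
    """Toy CORF-style simple cell: ON and OFF DoG subunits arranged collinearly
    along the preferred orientation theta, on opposite sides of an edge,
    combined by a weighted geometric mean (an AND-like combination)."""
    on = dog_response(image, sigma, True)
    off = dog_response(image, sigma, False)
    offsets = spacing * (np.arange(n_subunits) - (n_subunits - 1) / 2.0)
    responses, weights = [], []
    for t in offsets:
        w = np.exp(-t**2 / (2.0 * (spacing * n_subunits / 3.0)**2))
        for resp, side in ((on, +sigma), (off, -sigma)):
            dy = t * np.sin(theta) + side * np.cos(theta)
            dx = t * np.cos(theta) - side * np.sin(theta)
            responses.append(shift(resp, (dy, dx), order=1))
            weights.append(w)
    weights = np.asarray(weights)
    log_stack = np.log(np.stack(responses) + 1e-9)
    return np.exp(np.tensordot(weights, log_stack, axes=1) / weights.sum())

# A vertical step edge excites the cell tuned to the edge orientation
img = np.zeros((64, 64)); img[:, 32:] = 1.0
response = corf_like_response(img, theta=np.pi / 2)
```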
Back to Session III
Deep representations for collaborative robotics
P. Bustos, L.J. Manso, J.P. Bandera, A.
Romero-Garcés, L.V. Calderita,
A. Bandera
University of Malaga, Spain
Enabling
autonomous mobile manipulation robots to collaborate with people is a challenging
research field with a wide range of applications, future scenarios and high
economic impact. Collaboration means working with a partner to reach a common goal, and it involves both individual actions and joint actions with that partner.
Human-robot collaboration requires, at least, two conditions to be efficient:
a) a common plan, usually underdefined, for all involved partners; and b) for
each partner, the capability of inferring the intentions of the other in
order to coordinate the common behavior. This is a hard problem for robotics,
as people often behave unpredictably and execute their tasks in feasible and flexible, but non-optimal, ways. People can change their minds about their envisaged goal, or interrupt a task without giving legible reasons.
Assuming that recognized intentions will be uncertain, it is interesting for
collaborative robots to behave proactively and to internalize adaptive models
about the human partners' abilities and intentions. Furthermore, collaborative
robots should select their actions taking into account additional human-aware
factors such as safety, reliability and comfort. Current cognitive
systems are usually limited in this respect as they lack the rich dynamic
representations and the flexible human-aware planning capabilities needed to
succeed in tomorrow's human-robot collaboration tasks. In this work, we
propose to address this problem by using the notion of deep hybrid
representations and the facilities that this common state representation
offers for the tight coupling of planners on different layers of abstraction.
Deep hybrid representations encode the robot and environment state, but also
a robot-centric perspective of the partners taking part in the joint
activity. The representation is organized at several layers of abstraction in
the perceptual and motor domains, ranging from continuous limb motions to symbolically
coded activities.
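Purely as an illustration of what such a layered, robot-centric state could look like in code (all names and fields here are hypothetical and not the authors' actual data model), consider:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class GeometricLayer:
    """Continuous, metric part of the shared state: poses and limb configurations."""
    robot_pose: Tuple[float, float, float] = (0.0, 0.0, 0.0)      # x, y, yaw
    partner_poses: Dict[str, Tuple[float, float, float]] = field(default_factory=dict)
    joint_angles: List[float] = field(default_factory=list)

@dataclass
class SymbolicLayer:
    """Discrete, planner-facing part of the state: predicates and coded activities."""
    predicates: List[str] = field(default_factory=list)           # e.g. "holding(robot, cup)"
    active_activity: str = ""                                     # symbolically coded joint activity

@dataclass
class DeepHybridState:
    """One shared representation that planners at different abstraction levels read and write."""
    geometric: GeometricLayer = field(default_factory=GeometricLayer)
    symbolic: SymbolicLayer = field(default_factory=SymbolicLayer)

state = DeepHybridState()
state.geometric.partner_poses["person_1"] = (1.2, 0.4, 3.1)
state.symbolic.predicates.append("reachable(person_1, cup)")
state.symbolic.active_activity = "hand_over(cup, person_1)"
```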
Back to Session V
Analysis of Large Scale Networks
C. Bekas
Foundations of Cognitive Computing, IBM Research – Zurich
In
recent years, graph analytics has become one of the most important and ubiquitous
tools for a wide variety of research areas and applications.
Indeed,
modern applications such as brain simulations (neuronal nets), ad hoc wireless
telecommunication networks, or social networks, have dramatically increased
the number of nodes of the involved graphs, which now routinely range in the
tens of millions, reaching into the billions in notable cases such as brain networks. We discuss novel near-linear (O(N)) cost methods for sparse
graphs with N nodes. Key analytics include graph simplifications and
comparisons as well as node importance, which form a powerful arsenal for a
deep understanding of the characteristics of the networks at hand.
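As a generic illustration of what "near-linear node importance" can mean in practice (this is a textbook power-iteration ranking, not the specific IBM methods referred to in the talk), each iteration below costs one sparse matrix-vector product, i.e. O(number of edges):

```python
import numpy as np
import scipy.sparse as sp

def node_importance(adj, damping=0.85, iters=50):
    """PageRank-style node importance by power iteration on a sparse graph.
    One sparse matrix-vector product per iteration keeps the cost near-linear
    even for graphs with millions of nodes."""
    n = adj.shape[0]
    out_deg = np.asarray(adj.sum(axis=1)).ravel()
    out_deg[out_deg == 0] = 1.0                      # avoid division by zero for sink nodes
    P = sp.diags(1.0 / out_deg) @ adj                # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = damping * (P.T @ r) + (1.0 - damping) / n
    return r / r.sum()

# Small example graph (0 -> 1 -> 2 -> 0, plus 0 -> 2)
rows, cols = [0, 1, 2, 0], [1, 2, 0, 2]
A = sp.csr_matrix((np.ones(4), (rows, cols)), shape=(3, 3))
print(node_importance(A))
```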
Back to Session V
Prototype based relevance learning and its application in the bio-medical
domain
M. Biehl
University of Groningen, The Netherlands
Prototype
based machine learning techniques are briefly introduced. The focus is on relevance
learning schemes for the identification of the most significant features in the
context of classification tasks.
As
an example, we present the analysis of a medical data set by means of
Generalized Matrix Relevance Learning Vector Quantization. The concrete
application concerns the early diagnosis of Rheumatoid Arthritis based on
cytokine expression data.
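For readers unfamiliar with the method, the fragment below sketches the core of GMLVQ: a parameterized quadratic distance d(x, w) = (x - w)^T Lambda (x - w) with Lambda = Omega^T Omega, whose diagonal yields the feature relevances. The training of prototypes and Omega (gradient descent on the GLVQ cost) is omitted, and all numbers are toy values, not the cytokine data.

```python
import numpy as np

def gmlvq_distance(x, w, omega):
    """Adaptive distance of Generalized Matrix Relevance LVQ:
    d(x, w) = (x - w)^T Lambda (x - w) with Lambda = Omega^T Omega."""
    diff = omega @ (x - w)
    return float(diff @ diff)

def classify(x, prototypes, labels, omega):
    """Nearest-prototype classification under the learned metric."""
    return labels[int(np.argmin([gmlvq_distance(x, w, omega) for w in prototypes]))]

def feature_relevances(omega):
    """Normalized diagonal of Lambda: overall relevance of each input feature."""
    lam = omega.T @ omega
    return np.diag(lam) / np.trace(lam)

# Toy usage with an (untrained) random relevance matrix
rng = np.random.default_rng(0)
omega = rng.normal(size=(4, 4))
prototypes = np.array([[0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0]])
labels = ["control", "RA"]
print(classify(np.array([0.9, 1.1, 0.8, 1.0]), prototypes, labels, omega))
print(feature_relevances(omega))
```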
Back to Session IV
An introduction and practitioner’s guide to Learning
Vector Quantization and Relevance Learning (Tutorial)
M. Biehl
University of Groningen, The Netherlands
This tutorial provides a brief introduction to
distance or similarity-based
systems in the context of supervised learning. The
so-called Learning Vector Quantization (LVQ), in which classes are
represented by prototype vectors, will serve as a particularly intuitive
example framework for distance based classification.
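To make the prototype picture concrete before turning to adaptive metrics, here is a minimal LVQ1 sketch (a standard textbook scheme with squared Euclidean distance; learning rate, epochs and data are arbitrary toy choices):

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=30, seed=0):
    """Basic LVQ1: the winning prototype is attracted to a correctly
    classified sample and repelled from a misclassified one."""
    rng = np.random.default_rng(seed)
    W = prototypes.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.sum((W - X[i])**2, axis=1)      # squared Euclidean distances
            k = int(np.argmin(d))                  # winning prototype
            sign = 1.0 if proto_labels[k] == y[i] else -1.0
            W[k] += sign * lr * (X[i] - W[k])
    return W

# Toy two-class problem with one prototype per class
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W0 = np.array([[0.5, 0.5], [3.5, 3.5]])
W = lvq1_train(X, y, W0, proto_labels=np.array([0, 1]))
print(W)
```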
A key step in the design of a classifier is, in this
context, the choice of an appropriate distance or similarity measure. In the
elegant framework of relevance learning, parameterized
distance measures are employed which are optimized
in the data-driven training process.
Benchmark problems and real-world applications will be presented, and the practical analysis of data sets will be illustrated in terms of computer demos.
Back to Session VII
Keeping brain-inspired computing fed
G. Dasika
ARM, Texas, USA
The
human brain's ability comes from its rich and dense connectivity and not just
from the power of individual neurons. Brain-inspired computing has similar
needs to the brain: infrastructure for getting data easily from one place to another and for maintaining synergy between all the individual nodes is vital for
a good solution. This talk will focus on ARM's research efforts in this area,
from understanding the similarly-scaled high-performance computing space to
accelerating machine learning and computer vision on mobile phones and
embedded systems.
Back to Session III
Reproducible Workflows for the Analysis of Electrophysiological Data
M. Denker
Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced
Simulation (IAS-6), Jülich Research Centre and JARA, Germany
The
unprecedented degree of complexity in electrophysiological experiments has
reached a level where well-structured data analysis workflows have become a
necessity. Here, we present recent community efforts aimed at strengthening
the reproducibility and collaborative potential of analysis workflows.
The
availability of metadata is of extreme relevance for reproducible science and the correct interpretation of results. We describe the sources of complexity in electrophysiological experiments, and demonstrate how to
organize the resulting metadata in an easily accessible, machine-readable way
using the odML (open metadata Markup Language) framework [2].
Another
key component of such a workflow is the availability of well-tested analysis
methods for data processing. We introduce the Electrophysiology Analysis
Toolkit (Elephant) [3] as a recent community-centered initiative to develop
an analysis framework for multi-scale activity data based on common data
representations provided by the Neo library [4].
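A minimal usage sketch of these libraries is shown below (API details such as keyword names differ between Neo/Elephant versions, and the metadata handling with odML is omitted here):

```python
import quantities as pq
import neo
from elephant.statistics import mean_firing_rate
from elephant.conversion import BinnedSpikeTrain
from elephant.spike_train_correlation import correlation_coefficient

# Two toy spike trains in Neo's common data representation
st1 = neo.SpikeTrain([0.1, 0.6, 1.2, 1.9] * pq.s, t_stop=2.0 * pq.s)
st2 = neo.SpikeTrain([0.2, 0.7, 1.3, 1.8] * pq.s, t_stop=2.0 * pq.s)

print(mean_firing_rate(st1))                           # firing rate of one unit
binned = BinnedSpikeTrain([st1, st2], bin_size=100 * pq.ms)
print(correlation_coefficient(binned))                 # pairwise correlation matrix
```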
We demonstrate how these tools can be used to
assist a complex analysis workflow in a study where we analyze massively
parallel neuronal activity recorded using a 10-by-10 electrode array
(Blackrock Microsystems) in three monkeys trained in a delayed reach-to-grasp
task [5]. We introduce measures to detect classes of spatial wave-like [6]
patterns of the oscillatory beta (12-40 Hz) activity across the array. The observed
patterns correlate strongly with the instantaneous beta amplitude. In
combination with previous results [7], this raises the hypothesis that the
power of beta oscillations is indicative of the spatio-temporal organization
of precise pair-wise spike synchronization.
References
[1] Denker et al. (2012) Front Neuroinform Conf Abstr: Neuroinformatics
[2] Grewe et al. (2011) Front Neuroinform
[3] http://neuralensemble.org/elephant/
[4] Garcia et al. (2014) Front Neuroinform 8:10
[5] Riehle et al. (2013) Front Neural Circuits 7:48
[6] Rubino et al. (2006) Nat Neurosci 9:154
[7] Denker et al. (2011) Cereb Cortex 21:2681
Acknowledgements
Helmholtz Portfolio Theme Supercomputing and Modeling for the Human Brain (SMHB), EU grant 604102 (Human Brain Project, HBP), G-Node (BMBF Grant 01GQ1302), ANR-GRASP, Neuro_IC2010, CNRS-PEPS, Riken-CNRS Research Agreement.
Back to Session II
Characterization of network states from multi-electrode recordings in human
and monkey cerebral cortex
A. Destexhe
Centre National de la Recherche Scientifique, France
Inhibitory
neurons form about 25% of all neurons in cerebral cortex, but they are surprisingly
relegated to a very secondary role by most computational studies. We show here results from analyses
demonstrating that inhibitory neurons play a very important role in explaining neural dynamics and their timing relations.
We
analyzed multi-electrode (Utah) array recordings from human and monkey
cortex, and in many cases, the single units can be separated into "fast
spiking" (FS) and "regular spiking" (RS) cells, based on spike
shape. Because Utah arrays are very dense (100 electrodes spaced at 400 µm), many pairs of neurons
show a functional interaction and can be identified as excitatory or
inhibitory, which in general corresponds well to RS and FS cells,
respectively. Analyzing their behavior
during different brain states, inhibitory neurons are found to be tightly
balanced with excitatory neurons, for all brain states except during
seizures, where the excitatory-inhibitory balance breaks down. We also show that inhibitory units are more tightly
correlated with local field potentials, and furthermore, they are the most
reliable predictors of brain oscillations. Finally, these data are consistent with previous
intracellular recordings, showing that action potentials are most tightly
related to inhibitory conductances. Overall, these findings show that inhibitory
neurons play much more than a secondary role, and they may even be linked to information
processing at large scales, because they are the only type of neurons showing
persistent pairwise correlations over large cortical distances.
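The separation of units by spike shape can be illustrated with a simple waveform-width criterion (a crude sketch; the threshold value and the toy waveform below are made up, and the actual analysis uses more careful criteria):

```python
import numpy as np

def classify_unit(mean_waveform, sampling_rate_hz):
    """Crude separation of putative fast-spiking (FS, narrow spike) and
    regular-spiking (RS, broad spike) units from the mean spike waveform,
    using the trough-to-peak duration."""
    trough = int(np.argmin(mean_waveform))
    peak = trough + int(np.argmax(mean_waveform[trough:]))
    width_ms = (peak - trough) / sampling_rate_hz * 1000.0
    return "FS" if width_ms < 0.4 else "RS"   # illustrative threshold (~0.4 ms)

# Toy waveform: sharp trough followed by a slower positive deflection
t = np.arange(0, 0.003, 1 / 30000.0)          # 3 ms at 30 kHz
waveform = -np.exp(-((t - 0.0008) / 0.0002)**2) + 0.4 * np.exp(-((t - 0.0016) / 0.0005)**2)
print(classify_unit(waveform, 30000.0))
```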
Work supported by BrainScales and the Human Brain
Project.
Back to Session IV
Necessity and feasibility of brain-scale simulation or My brain is finite
M. Diesmann
(1) Institute of Neuroscience and Medicine (INM-6) and Institute for
Advanced Simulations (IAS-6) Forschungszentrum Juelich and JARA, Germany
(2) Department of Psychiatry, Psychotherapy and Psychosomatics, Medical
Faculty, RWTH Aachen University, Germany
(3) Department of Physics, Faculty 1, RWTH Aachen University, Germany
The
explanatory power of models of brain networks comprising about a cubic millimeter
of brain tissue is limited because individual nerve cells receive 50% of
their inputs from non-local sources and brain processes are typically
distributed across multiple brain areas. A prominent computational unit of
the brain is the neuron, non-linearly transforming the input received from
10,000 contacts with other nerve cells, called synapses. The activity of
neurons is experimentally accessible and their relation to behavior differs
even for neighboring cells. The fluctuations in the population activity drive
mesoscopic signals such as the local field potential (LFP). For the
construction of mathematical models this raises two questions: (1) Are
brain-scale simulations at the resolution of neurons and synapses feasible
with upcoming computer technology, and (2) Are full-scale simulations
required, or can researchers work with downscaled substitutes? This
contribution first discusses the simulation technology available for petascale
computers [1], its limitations, and the promise of exascale systems. In the
second part we provide formal arguments why brain networks are generally
irreducible in a non-trivial sense [2]. While first-order statistical
measures like the firing rate are easily conserved, maintaining second-order
measures like correlations already imposes severe constraints. This is
relevant because correlations directly interact with synaptic plasticity, and
underlie population fluctuations, thereby determining mesoscopic measures of
brain activity. The theory also provides insight into scaling in the opposite
direction, revealing how in the case of the brain, the intuition gained from
the limit of infinite system size fails, despite its success in explaining
properties in many other physical systems. The formalism exposes that
correlations follow from a measure of effective interaction rather than
directly from the anatomy. This explains why the effective connectivity is
state-dependent and the functional properties of the system are constrained
by its structure but cannot be inferred from anatomy alone.
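For orientation, the snippet below shows the kind of spiking-network simulation that the NEST technology discussed here supports, at a toy scale far below the brain-scale regime addressed in the talk (NEST 3 naming assumed; all parameters are arbitrary):

```python
import nest

# Small random network driven by Poisson noise (illustrative scale only)
nest.ResetKernel()
neurons = nest.Create("iaf_psc_alpha", 1000)
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
spikes = nest.Create("spike_recorder")

nest.Connect(noise, neurons, syn_spec={"weight": 10.0})
nest.Connect(neurons, neurons,
             conn_spec={"rule": "fixed_indegree", "indegree": 100},
             syn_spec={"weight": 1.0, "delay": 1.5})
nest.Connect(neurons, spikes)

nest.Simulate(200.0)                              # simulate 200 ms
print(nest.GetStatus(spikes, "n_events"))         # total number of recorded spikes
```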
www.nest-initiative.org
www.csn.fz-juelich.de
[1] Kunkel S, Schmidt M, Eppler JM, Plesser HE, Masumoto G, Igarashi J, Ishii S, Fukai T, Morrison A, Diesmann M, Helias M (2014) Front Neuroinform 8:78
[2] van Albada S, Helias M, Diesmann M (2014) arXiv:1411.4770 [q-bio.NC]
Back to Session III
Pinwheel-inspired multi-orientation scores: contextual models and computer
vision applications
B. ter Haar Romeny
Eindhoven University of Technology, Netherlands / Northeastern University,
Shenyang, China
The
discovery of the cortical pinwheel structure has sparked many models for multi-orientation
contextual analysis. We lift 2D images to complex valued invertible 3D
functions called orientation scores by including an orientation dimension in
the domain. In the extended domain of positions and orientations (identified
with the Euclidean motion group SE(2)) many computer vision operations can be
redefined. The transform can be made invertible by exploiting kernels
reconstructed from angular segments in the FFT domain. We introduce
left-invariant evolution equations on orientation scores, and consider the
diffusion equation on SE(2), which is a useful equation for curve
enhancement, and the convection-diffusion equation, which is a useful equation
for curve completion.
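The lifting step can be illustrated with a few lines of code: the sketch below stacks responses to rotated anisotropic filters into a function U(x, y, theta). It is only a crude stand-in for the invertible cake-wavelet construction in the Fourier domain mentioned above; kernel shapes and parameters are arbitrary.

```python
import numpy as np
from scipy.ndimage import rotate, convolve

def ridge_kernel(size=21, sigma_long=6.0, sigma_short=1.5):
    """Elongated ridge detector: negative second derivative across the short
    axis of an anisotropic Gaussian (up to scaling)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 / (2 * sigma_long**2) + yy**2 / (2 * sigma_short**2)))
    return (1.0 - yy**2 / sigma_short**2) * g

def orientation_score(image, n_orientations=12):
    """Lift a 2D image to a 3D stack U(x, y, theta) by filtering with rotated
    copies of the kernel, one slice per sampled orientation."""
    base = ridge_kernel()
    score = np.zeros(image.shape + (n_orientations,))
    for k in range(n_orientations):
        kern = rotate(base, 180.0 * k / n_orientations, reshape=False, order=1)
        score[..., k] = convolve(image, kern, mode="nearest")
    return score

# A diagonal line responds most strongly in the orientation slice that matches it
img = np.eye(64)
U = orientation_score(img)
print(np.argmax(U[32, 32, :]))
```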
Different numerical implementations and approximations will be compared for efficiency, against the exact solution, and against the use of Gabor kernels.
The theory gives an elegant basis for contextual processing through association
fields and affinity.
Powerful
biomedical applications are shown, with emphasis on a variety of retinal vessel
analysis problems, such as robust enhancement of crossing and branching
vessels by multi-orientation vesselness, vessel tracking in complex
configurations (high curvature, closely parallel, low contrast etc.),
disentangling bifurcations and cross-overs, vessel curvature quantification,
excellent optic disk detection in RGB and laser scanning cameras by SE(2)
template matching and wave front propagation.
Back to Session III
A Cephalomorph Real-time Computer
W. Halang
FernUniversität in Hagen, Germany
Although the domain of hard real-time systems has been thoroughly elaborated, architectural issues have not yet received the attention they deserve. In practice, just off-the-shelf
computer systems are used as execution platforms. With a minimum of features,
viz. process peripherals, user-accessible interrupt lines and general
multitasking operating systems, they are adapted to work as embedded systems.
This all too primitive approach leads to many problems, since almost all
conventional hardware and software features, while improving average
performance, do little for or even worsen the prospects for predictable
real-time performance. Hence, they must be considered harmful.
As
a remedy, a novel asymmetrical multiprocessor architecture for automation systems
is presented. It is derived from the structure of the human brain, which
consists of cerebrum, midbrain, diencephalon, cerebellum and extended spinal
cord. The signals to and from various parts of the body are transmitted via
the spinal marrow, which has some similarities with a computer bus. At the
brain's side, the nerves of the spinal marrow end in the extended spinal
cord, being closely connected to midbrain, diencephalon and cerebellum. The
latter four organs perform involuntary and routine reflex functions, and are an
important switching site between the nerves of the body and those of the
brain. Furthermore, the immediate reflex centre is located here. In contrast
to this, the other information processing functions of higher complexity,
such as the evaluation of sensory impressions, control of voluntary actions and
all intellectual tasks, are performed by the cerebrum.
Following this pattern, a computer designed to meet the specific requirements of real-time
operation is organised. The concept provides for an asymmetrical system
consisting of two dissimilar processors. The first one may be a classical von
Neumann computer. It executes user application tasks and outer supervisor
shell services, such as data exchange with peripherals and file management,
provided in the form of independent tasks or subroutines called by the user
tasks. Being the creative part, the task processor corresponds to the
cerebrum. The operating system kernel is clearly and physically separated from
the outer-layer tasks. It is allocated to a co-processor dedicated to event,
time and task management, communication and synchronisation. Although important
and actually controlling the operation of the task processor, these functions
are routine and would impose unnecessary burden to the latter. Thus, the
kernel processor corresponds to the brain's reflex centre.
Back to Session V
Brainnetome Atlas: A New Brain Atlas Based on Connectivity Profiles
T. Jiang
Institute of Automation, The Chinese Academy of Sciences, China
The brain atlas is considered to be a cornerstone of basic neuroscience and clinical research. However, existing atlases lack finer-grained parcellation and do not provide functionally important connectivity information. Over the past thirty years, remarkable advances in multimodal neuroimaging techniques have rapidly improved our understanding of the organization and function of the human brain. The introduction of a framework for identifying brain subdivisions from in vivo connectivity architecture has opened the door to neuroanatomical studies at the macroscale. In this lecture, we present a new brain atlas, the brainnetome atlas, which is constructed from brain connectivity profiles. The brainnetome atlas is defined in vivo, with finer-grained brain subregions and with anatomical and functional connection profiles. Here we first give a brief introduction to the history of brain
atlas development. Then we present the basic ideas of the brainnetome atlas
and the procedure to construct this atlas. After that, some parcellation results
of representative brain areas will be presented, including brain areas with heterogeneous as well as homogeneous cytoarchitecture. We
also give a brief presentation on how to use the brainnetome atlas to address
issues in neuroscience and clinical research: for example, how to determine the boundary of Wernicke’s area, what the organization of Broca’s area is across languages, what the mechanism of visuospatial attention lateralization is, and what new findings can be made with the brainnetome atlas for basic and clinical neuroscience issues.
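In general terms, connectivity-based parcellation of the kind underlying such an atlas can be sketched as clustering voxels by their connectivity fingerprints (a deliberately simplified illustration with synthetic data; the actual atlas construction pipeline is considerably more involved):

```python
import numpy as np
from sklearn.cluster import KMeans

def parcellate(connectivity, n_subregions=3, seed=0):
    """Cluster the voxels of a brain area by their connectivity fingerprints:
    rows = voxels, columns = connection strength to a set of target regions.
    Voxels with similar profiles end up in the same subregion."""
    km = KMeans(n_clusters=n_subregions, n_init=10, random_state=seed)
    return km.fit_predict(connectivity)

# Synthetic example: 300 voxels, 20 target regions, three hidden subregions
rng = np.random.default_rng(0)
profiles = np.vstack([rng.normal(loc=m, scale=0.5, size=(100, 20))
                      for m in (0.0, 1.0, 2.0)])
labels = parcellate(profiles, n_subregions=3)
print(np.bincount(labels))
```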
Back to Session I
About the (Non-)Sense of Immersion in Neuroscientific Data Analysis
T. Kuhlen
JSC at Forschungszentrum Juelich and RWTH, Aachen, Germany
Since
its hype in the early 90’s, Virtual Reality has undoubtedly been adopted as a
useful tool in a variety of application domains, e.g. product development, training,
and psychology. Furthermore, Immersive Visualization – defined as a combination
of Virtual Reality techniques with Scientific Visualization methods – has
proven its potential to support the process of scientific data analysis.
First, Immersive Visualization promises faster, more comprehensive
understanding of complex, spatial-temporal relationships owing to
head-tracked, stereoscopic rendering and large field of regard. Second, it
would provide a more natural user interface, specifically for spatial
interaction. In some domains of Simulation Science, like Computational Fluid
Dynamics, success stories of fully-fledged solutions as well as systematic
studies have already proven the potential of Immersive Visualization to
significantly enhance explorative analysis processes.
It
is still an open question however, whether or not Immersive Visualization techniques
make sense in the context of neuroscientific visualization. In particular,
the installation and maintenance of high-end immersive VR systems, like
CAVEs, is quite expensive. Operating immersive systems requires an expert
team that provides both hardware management and user support. Finally,
scientists must physically travel to the VR lab; it typically isn’t “just
next door.” This creates an additional entry barrier for using such systems.
In
order to become a widely accepted part of a neuroscientist’s daily work, immersive
analysis tools will have to provide significant added values. In particular,
they should feature intuitive user interfaces and concise visualization
metaphors. Most importantly they must integrate seamlessly with existing
workflows. All in all, the talk aims to give some impulses for a discussion about what the “ultimate” display and interaction techniques should look like in order to support the neuroscience community in an optimal way.
Back to Session IV
Stereoscopic 3D Visualization as a Service
M. Kunze
Heidelberg University, Germany
Synchrotron
X-ray micro-tomography enables the investigation of morphological problems
by allowing internal structures in optically dense organisms to be observed
non-invasively in 3D/4D. Evaluation of the data, however, is very complex. In
particular, the automated classification of internal structures is only
possible in close cooperation between biologists, imaging and computing experts.
The ASTOR project addresses these challenges and concentrates on the
following objectives:
1) High-resolution, high-speed tomography of living and dynamic systems.
2) Simplified segmentation of tomographic data in 3D and 4D.
3) An online portal for morphological studies based on cloud and big data technologies.
The
talk introduces a modern framework combining Big Data, HPC and cloud
computing technologies in a biology related experimental setup. High
resolution stereoscopic 3D/4D visualization of complex poly-structured data
sets has been realized based on virtual machines with support of NVIDIA K2
GPUs. The underlying cloud computing model enables visualization as a
service, even to remote scientists over wide area connections. The
corresponding portal implements a self-service environment to manage large
scale computing needs on-demand. Besides on-premise operation, the advent of
large size virtual machines with GPU support in the public cloud has the potential
to implement an even more versatile and flexible hybrid model in the future.
Back to Session VI
Creating the HPC and Data Analytics Infrastructure for the Human Brain
Project
T. Lippert
JSC at Forschungszentrum Juelich, Germany
The HBP, the Human Brain Project, is one of two European flagship projects foreseen to run for 10 years. The HBP aims at creating an open, European, neuroscience-driven infrastructure for simulation and big-data-aided modelling and
research with a credible user program. The goal of the HBP is to progressively
understand structure and functionality of the human brain, strongly based on
a reverse engineering philosophy. In addition, it aims at advancements in
digital computing by means of brain inspired algorithms with the potential to
create completely novel analogue computing technology called neuromorphic computing.
The HBP simulation and data analytics infrastructure will be based on a
federation of supercomputer and data centers contributing to specific
requirements of neuroscience in a complementary manner. It will encompass a
variety of simulation services and data analytics services ranging from the
molecular level towards synaptic and neuronal levels up to cognitive and
robotic models. The major challenge is that HBP research will require exascale
capabilities for computing, data integration and data analytics. Mastering
these challenges requires a huge interdisciplinary software and hardware
co-design effort including neuroscientists, physicists, mathematicians, and
computer scientists on an international scale. The HBP is a long-term endeavor
and thus puts large emphasis on educational and training aspects. The maturity
of a service is critical, and it is important to differentiate between an
early prototype, the development phase, and the delivery of services, in
order to assess capability levels. The services and infrastructures of the
HBP will successively include more European partners, in particular PRACE
sites and EUDAT data services, and will be made available step by step to the
pan-European neuroscience community.
Back to Session V
A new cognitive architecture for bidirectional
perception-reasoning-action loop closing
A. J. Palomino, R. Marfil, J. P. Bandera and A.
Bandera
University of Malaga, Spain
An
autonomous robot placed in a real world has to deal with a lot of visual information.
At the same time, the agent has to address different actions, different tasks
that vary over time, while reacting to unexpected situations. When developing a
perception system for such a robot, some key questions come up: is it
possible to modify the way a robotic agent perceives the world depending on
its current responsibilities? And, vice versa, are new interesting objects
able to modify the ongoing task? How can perception and reasoning
interoperate simultaneously in an autonomous robot?
Biological
vision systems present an interesting set of features of adaptability and robustness.
These
features allow them to analyse and process the visual information of a complex
scene in a very efficient manner. Research in Psychology and Physiology
demonstrates that the efficiency of natural vision has foundations in visual
attention, which is a process that filters out irrelevant information and
limits processing to items that are relevant to the present task.
In
the past few years, emphasis has increased in the development of robot vision
systems that are inspired by the model of natural vision. This approach is especially
useful when developing a social robot, that is, an embodied agent which is
part of a heterogeneous community of people and other robots. In this case,
in addition to the increased efficiency mentioned above, the agent is able to
process the visual information in the same way that people do. Furthermore,
mobile robots are often carelessly placed in complex environments where they have to apply their knowledge to figure out what needs to be attended to and when, and what to do in correspondence with visual feedback.
This
work presents a novel attention-based cognitive architecture for a social robot.
This architecture aims to join perception and reasoning considering a double
imbrication: the current task biases the perceptual process whereas perceived
items determine the behaviours to be accomplished. Therefore, the proposed
architecture represents a bidirectional solution to the perception-reasoning-action
loop closing problem. The proposal is divided into two levels of performance,
employing an Object-Based Visual Attention model as perception system and a
general purpose Planning Framework at the top deliberative level. On the one
hand, the most relevant elements in the scene are selected, taking into
consideration not only their intrinsic features but also the constraints
provided by the ongoing behaviour and context. On the other hand, perceived
items determine the tasks that can be executed at each moment, following a
need-based approach. Thereby, the tasks that better fit the perceived
environment are more likely to be executed. The architecture has been tested
using a real and unrestricted environment that involves a real robot,
time-varying tasks and daily life situations.
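The double imbrication can be caricatured in a few lines: bottom-up salience and a top-down, task-dependent bias are mixed to decide what is attended next (all numbers, names and the mixing rule are invented for illustration and do not describe the proposed architecture):

```python
import numpy as np

def select_focus(objects, task_relevance, w_bottom_up=0.5, w_top_down=0.5):
    """Object-based attention sketch: rank candidate objects by a mixture of
    bottom-up salience (intrinsic features) and top-down relevance to the
    ongoing task, and return the most attended object."""
    scores = {}
    for name, salience in objects.items():
        scores[name] = (w_bottom_up * salience
                        + w_top_down * task_relevance.get(name, 0.0))
    return max(scores, key=scores.get), scores

# Toy scene while the robot executes a "serve coffee" behaviour
objects = {"cup": 0.4, "window": 0.9, "person": 0.6}          # bottom-up salience
task_relevance = {"cup": 1.0, "person": 0.7}                  # top-down bias from the planner
print(select_focus(objects, task_relevance))
```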
Back to Session V
Simulation and Modeling for 3D Polarized Light Imaging
Miriam Menzel
Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich,
Germany
The
neuroimaging technique 3D Polarized Light Imaging (3D-PLI) reconstructs the
spatial orientations of nerve fibers in post-mortem brains from birefringence
measurements of histological brain sections. In recent years, 3D-PLI has
proven its potential to map the fiber architecture of a whole human brain
with micrometer resolution.
As
the 3D-PLI measurement is interpreted by a voxel-based analysis, each
measured tissue voxel is assigned a single fiber orientation vector. In order
to better understand how the derived orientation vectors are related to the
underlying fiber structure of the measured brain tissue and to improve the
accuracy and reliability of the reconstructed fiber orientations, numerical simulations
are employed.
Here,
we present two complementary simulation approaches that reproduce the entire
3D-PLI analysis starting from synthetic fiber arrangements and ending with
measurement-like tissue images: The first simulation approach uses the Jones
matrix calculus and models the birefringent myelin sheaths surrounding the
nerve fibers as a series of optical retarder elements. The second simulation
approach uses a 3D Maxwell solver and computes the propagation of the
polarized light wave through the tissue sample based on a finite-difference
time-domain algorithm.
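To give an impression of the first approach, the sketch below chains Jones matrices for an assumed polarimeter arrangement (rotating polarizer, quarter-wave plate, tissue voxel modeled as one retarder, rotating analyzer) and reproduces the sinusoidal intensity profile analysed in 3D-PLI; the exact optical train and conventions of the real setup may differ.

```python
import numpy as np

def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def polarizer(a):
    """Jones matrix of an ideal linear polarizer with transmission axis at angle a."""
    return rot(a) @ np.array([[1.0, 0.0], [0.0, 0.0]]) @ rot(-a)

def retarder(delta, a):
    """Jones matrix of a wave retarder with phase shift delta and fast axis at angle a."""
    J0 = np.array([[np.exp(-1j * delta / 2), 0.0], [0.0, np.exp(1j * delta / 2)]])
    return rot(a) @ J0 @ rot(-a)

def pli_intensity(rho, phi, delta):
    """Transmitted intensity at rotation angle rho for a tissue voxel with
    in-plane fiber direction phi and retardation delta (assumed setup)."""
    E = rot(rho) @ np.array([1.0 + 0j, 0.0])       # light behind the rotating polarizer
    E = retarder(np.pi / 2, rho + np.pi / 4) @ E   # quarter-wave plate -> circular light
    E = retarder(delta, phi) @ E                   # birefringent myelin (one retarder element)
    E = polarizer(rho + np.pi / 2) @ E             # rotating analyzer
    return float((E @ E.conj()).real)

rho = np.linspace(0, np.pi, 18, endpoint=False)
profile = [pli_intensity(r, phi=np.deg2rad(40), delta=1.0) for r in rho]
print(np.round(profile, 3))   # sinusoidal profile from which phi and delta are recovered
```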
The
presentation will demonstrate that both simulation methods are valuable tools
to better understand the interaction of polarized light with brain tissue and
to test hypotheses on the underlying fiber structure of brain tissue. In this
way, the simulations help to improve the reliability of the extraction of
nerve fiber orientations with 3D-PLI.
Back to Session II
A principled approach to developing extremely scalable neuronal
network simulators
A. Morrison
Research Centre Jülich, Germany
Today, simulation technology
for spiking neuronal networks offers manifold possibilities to computational
neuroscientists -- from studying small networks on laptops in an interactive
fashion to exploring the dynamics of brain-scale models on supercomputers
[1,2]. In the case of the NEST simulator [3], the development of simulation
code that enables such a wide range of applications goes hand in hand with the
development of efficient methods that enable the systematic analysis of
different simulator components with respect to memory usage and run time for
different regimes of process counts. In the last couple of years, the
fundamental data structures of NEST have undergone major changes in order to
meet the memory requirements of contemporary supercomputers. The redesign of
the neuronal and connection infrastructure was guided by a model of the
simulator's memory usage [4], which has since become established as a useful
development tool. Similarly, the novel performance model of NEST's simulation
phase allows the analysis of different simulator components with respect to
run time [5,6]. To bridge the gap between the analytical models and the
benchmarks for evaluating new implementations on supercomputers, we have
developed the dry-run mode of NEST, which emulates a large-scale simulation
on a single process and hence saves precious supercomputer resources.
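As a flavour of what such a memory model looks like, here is a deliberately simplified version (all byte counts are invented placeholders, not NEST's real figures; the term that grows with the total network size on every process is exactly the kind of overhead the data-structure redesign removed):

```python
def memory_per_process(n_neurons, n_procs, k_per_neuron=10_000,
                       b_neuron=1_000, b_connection=32, b_overhead_per_node=64):
    """Illustrative memory-usage model in the spirit of Kunkel et al. (2012):
    estimate the memory one MPI process needs for an N-neuron network with
    K incoming connections per neuron, distributed over M processes."""
    neurons_local = n_neurons / n_procs                       # neurons stored locally
    connections_local = n_neurons * k_per_neuron / n_procs    # connections stored locally
    node_overhead = n_neurons * b_overhead_per_node           # structures scaling with total N
    return (neurons_local * b_neuron
            + connections_local * b_connection
            + node_overhead)

# How does the per-process footprint change when scaling from 1k to 1M processes?
for m in (10**3, 10**4, 10**5, 10**6):
    gib = memory_per_process(n_neurons=10**9, n_procs=m) / 2**30
    print(f"{m:>8} processes: ~{gib:,.1f} GiB per process")
```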
[1] Helias M, Kunkel S, Masumoto G, Igarashi J, Eppler JM, Ishii S, et al. (2012) Supercomputers ready for use as discovery machines for neuroscience. Front Neuroinform 6:26
[2] Kunkel S, Schmidt M, Eppler JM, Plesser HE, Masumoto G, Igarashi J, Ishii S, Fukai T, Morrison A, Diesmann M, Helias M (2014) Spiking network simulation code for petascale computers. Front Neuroinform 8:78
[3] Gewaltig M-O, Diesmann M (2007) NEST (NEural Simulation Tool). Scholarpedia 2:1430
[4] Kunkel S, Potjans TC, Eppler JM, Plesser HE, Morrison A, Diesmann M (2012) Meeting the memory challenges of brain-scale simulation. Front Neuroinform 5:35
[5] Schenck W, Adinetz AV, Zaytsev YV, Pleiter D, Morrison A (2014) Performance model for large-scale neural simulations with NEST. Extended abstract for the poster session at SC14, New Orleans, LA, USA
[6] Adinetz AV, Baumeister PF, Böttiger H, Hater T, Maurer T, Pleiter D, Schenck W, Schifano SF (2015) Performance Evaluation of Scientific Applications on POWER8. In: Jarvis SA, Wright SA, Hammond SD (eds), High Performance Computing Systems - Performance Modeling, Benchmarking, and Simulation, Lecture Notes in Computer Science 8966, Springer, pp 24-45
Back to Session III
Form and motion analysis in cortical architecture - from neuroscience
to neuromorphic computing
H. Neumann
University of Ulm, Germany
Principles
of neural processing of visual information have been investigated in numerous
experimental and theoretical modeling studies. The identification of generic
structural as well as computational principles is a driving force for the
development of biologically inspired computer vision algorithms and neuromorphic
computing principles. Here, a biologically inspired network architecture is presented that builds upon generic principles of cortical organization.
I
will discuss several examples of our modeling investigations in the light of
canonical processing principles and their potential to serve the future development of neuromorphic computing mechanisms. Event-based sensors are free from frame-based processing, utilizing principles of the spike-emitting retina.
It is demonstrated how motion can be robustly detected on the basis of such
address-event representations. Furthermore, localized fast moving structures in
the sensory input generate so-called speedline representations, or motion
streaks, in the form pathway. Sub-cortical and cortical areas are bidirectionally
coupled in general. We suggest a canonical model of convergent feedforward
and re-entrant feedback signal flows to enhance driving signals by top-down
and cross-channel expectations and predictions. The model circuit accounts
for signal integration at single cells and columns of cells. Such a mechanism
is further augmented by lateral recurrent interaction from a pool of cells
that realizes context sensitive normalization of activities. Taken together,
such model mechanisms serve as a basis for building complex model systems to
explain experimental data and transfer to applications as well.
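A minimal sketch of such a stage, with modulatory (non-driving) feedback followed by pool normalization, might look as follows; the constants and the uniform pool are arbitrary illustrations rather than the model's actual parameters:

```python
import numpy as np

def canonical_stage(driving, feedback, pool_weights, lam=2.0, eps=0.01):
    """One canonical processing stage (sketch): a feedforward driving input is
    enhanced by re-entrant feedback in a modulatory fashion (feedback alone
    cannot create activity) and then divisively normalized by a pool of cells."""
    enhanced = driving * (1.0 + lam * feedback)      # modulating, not driving, feedback
    pool = pool_weights @ enhanced                   # activity of the normalization pool
    return enhanced / (eps + pool)                   # divisive (shunting-like) normalization

# Toy example: 8 orientation channels at one location
driving = np.array([0.1, 0.2, 0.9, 0.3, 0.1, 0.0, 0.1, 0.2])
feedback = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # top-down expectation of channel 2
pool_weights = np.full((8, 8), 1.0 / 8)                          # uniform normalization pool
print(np.round(canonical_stage(driving, feedback, pool_weights), 3))
```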
Work supported by DFG & BMBF
Back to Session III
A framework for Neuroscience data visualization within the HBP
L. Pastor
Universidad Rey Juan Carlos, Madrid, Spain
The
complexity of the human brain, the different levels at which it can be studied,
the number of Neuroscience research groups working worldwide and the speed at
which new data is being produced are all factors that contribute to making the understanding of the brain’s structure and function one of the biggest challenges science is confronting nowadays. In order to advance towards this goal,
scientists need new tools that can speed up the analysis and understanding
process, given the fact that data is being gathered at an ever increasing
pace. This presentation will focus on a framework for designing visualization
solutions adapted to the specific problems of brain research.
Back to Session IV
The primary visual cortex as a sub-Riemannian geometrical engine
A. Sarti
CNRS, France
The
functional architecture of the primary visual cortex plays a key role at all levels
of visual perception. A geometric model of V1 will be presented in terms of
the rototranslation Lie group equipped with a sub-Riemannian metric. Local
integral curves model association fields and are neurally implemented by
horizontal connectivity. This geometrical structure is learned from the symmetries of the visual stimuli and accounts for a number of perceptual phenomena. Amodal completion and inpainting are performed by means of minimal surfaces in the group. Modal completion, i.e. the completion of the Kanizsa triangle, is accomplished in terms of the natural gauge field on the group. Finally,
visual perceptual units are naturally segregated by means of spectral
analysis in the rototranslation Lie group.
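For orientation, the sub-Riemannian structure referred to here is, in standard notation (conventions may differ from the speaker's), generated by the horizontal vector fields on SE(2) and the associated length functional:

```latex
X_1 = \cos\theta\,\partial_x + \sin\theta\,\partial_y , \qquad
X_2 = \partial_\theta , \qquad
\dot\gamma(t) = u_1(t)\,X_1(\gamma(t)) + u_2(t)\,X_2(\gamma(t)) , \qquad
L(\gamma) = \int_0^1 \sqrt{u_1(t)^2 + \beta^2\,u_2(t)^2}\; dt .
```

Horizontal integral curves of X_1 and X_2 model the association fields, and the completion phenomena mentioned above correspond to length- or area-minimizing objects with respect to this metric.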
Back to Session III
Design Space of Neurosimulations
F. Schürmann
Ecole Polytechnique Federale de Lausanne, Geneva, Switzerland
To a large degree, computational modeling of neural tissue has been a modeling challenge rather than a simulation challenge, in the sense that the actual formalisms are highly dependent on which spectrum of observables the models intend to describe, and the approaches are thus plentiful and heavily debated. There is little agreement as to which level of detail should be considered, despite the fact that neurons are commonly considered the main computational elements. Accordingly, when it comes to simulation, i.e. the exploration of the time course of the aforementioned models, the necessary data structures, algorithms and rate-limiting steps vary profoundly. This talk will try to map some
portion of this design space.
Back to Session III
System Architecture for Exascale
K. Solchenbach
Director Pathfinding Europe, Intel, Germany
In order to build exascale systems, several challenges need to be solved:
· The performance/energy ratio has to improve by an order of magnitude
· A new memory architecture is needed
· Applications have to become highly scalable, supporting 1M+ cores
Intel
is working on future system architectures, including many-core nodes, high-bandwidth
interconnects, and new memory concepts.
To design future systems, Intel works with the partner and customer community, in particular in Europe. The Intel Exascale Labs in Belgium, France, Germany and Spain are collaborations with leading European HPC organisations to address
the above challenges and to define the requirements for future HPC systems.
These systems won’t be pure number crunchers any more; they will solve
problems in a mix of HPC, high performance analytics, and data-centric
computing.
In
the presentation we will describe the basic principles of future exascale architectures
and present some results of the European Exascale Labs.
Back to Session V
Building an HPX Asynchronous Multi-Neuronal Brain Model
T. Sterling and M. Anderson
Indiana University, USA
The
human brain comprises approximately 89 billion neurons, each on average with a degree of connectivity of ten thousand, creating a network of 10^15 links. Brain
science is at its inchoate phase as humanity is only beginning to relate the
structures of the neo-cortex, the limbic systems, the cerebellum, and other
definable subsystems to human mental behavior. Modeling the brain is an exascale
problem with potentially a billion billion operations per second. But in
truth it is far more complicated than that with every neuron performing an
intricate function in time and space. It is so hard that some experts in the
field assert that it is premature to attempt to simulate even a small subset
of the total structure. They may be right. A major challenge is the
asynchrony and the uncertainty of the actions of the neurons and the
distributions of their signals as well as the effects they have on downstream
receptor neurons. HPX is a runtime system developed to support event-driven
dynamic adaptive execution for runtime control of resource management and
task scheduling. It supports a global address space and advanced
synchronization functions. Together, the semantics of state and control
enabled by HPX make possible a new generation of brain-inspired simulation to
explore the frontiers of brain science. Indiana University is preparing to
develop such a simulation framework based on HPX. This presentation will
describe the basic strategy embodied by this simulation project and the way
that it is being developed.
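A back-of-the-envelope calculation with the numbers quoted above shows why this is an exascale problem (the per-synapse byte and operation counts are rough assumptions, for scale only):

```python
# Rough, order-of-magnitude estimate of the scale of the problem described above
neurons = 89e9                         # ~89 billion neurons
synapses_per_neuron = 1e4              # average connectivity of ~10,000
links = neurons * synapses_per_neuron  # ~8.9e14, i.e. ~10^15 links

bytes_per_synapse = 16                 # assumed minimal per-link state (weight, delay, ...)
ops_per_synapse_per_s = 1e3            # assumed event rate times work per event

print(f"links:   {links:.2e}")
print(f"memory:  {links * bytes_per_synapse / 1e15:.1f} PB (at {bytes_per_synapse} B/link)")
print(f"compute: {links * ops_per_synapse_per_s:.1e} ops/s (approaching exascale)")
```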
Back to Session III
Memory Hierarchy and Data Optimization within Neuroinformatics
A. Tate
Cray, Inc., USA
As
the cost of moving data begins to dominate the cost of performing arithmetic,
Cray's hardware roadmap presents an increasingly varied memory hierarchy offering,
including high-bandwidth or on-package memories, non-volatile and solid-state
memories, traditional CPU cache architectures, burst-buffer technology and an
array of storage options. Applications can benefit from some or all levels of
this hierarchy depending on their specific access and persistence
requirements. To take full advantage of such complex memory systems,
improvements in software must also be developed. This talk will detail Cray
R&D efforts and product solutions relating to memory hierarchy, data
optimization, modeling, and data-centric software infrastructure. We will
describe and emphasize how this data problem extends to neuroinformatic applications,
specifically through Cray's involvement with the early phases of the Human Brain Project.
Back to Session VI
Hebbian Learning of Classification Models - Beyond Accuracy Optimization
T. Villmann
University of Applied Sciences Mittweida, Germany
Hebbian
learning is a fundamental paradigm in neural network learning. A powerful family of approaches for classification learning on vector data based on this principle is the family of learning vector quantizers (LVQ). These algorithms adapt weight
vectors of modified perceptron neurons to learn classification tasks.
Geometrically, the weight vectors can be seen as prototypes distributed in
the data space according to the presented stimuli (data vectors). In this setting
learning takes place as data dependend attraction and repulsion of the weight
vectors. Thereby, the goal of LVQ classification
learning is to maximize the classification accuracy or to minimize the
approximated Bayes error. In the talk we will discuss alternatives for LVQ algorithms. In particular,
we will focus on other statistical quality measures like precision and recall
or ROC-optimization as well as on incorporation of reject options during the
learning process. These abilities extend the range of possible applications
in classification learning for LVQ while keeping the original idea of Hebbian learning.
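A toy version of a reject option in a prototype-based classifier is sketched below; it reuses the relative-distance margin familiar from GLVQ-type cost functions, but the threshold and the idea of applying it only at decision time are simplifications compared with incorporating rejection into the learning process as discussed in the talk.

```python
import numpy as np

def classify_with_reject(x, prototypes, labels, reject_threshold=0.2):
    """LVQ-style classification with a simple reject option: if the two closest
    prototypes carry different labels and are almost equally far away (relative
    distance margin below the threshold), the classifier abstains."""
    d = np.sum((prototypes - x)**2, axis=1)
    order = np.argsort(d)
    best, second = order[0], order[1]
    mu = (d[second] - d[best]) / (d[second] + d[best])   # relative margin in [0, 1]
    if labels[best] != labels[second] and mu < reject_threshold:
        return "reject"
    return labels[best]

prototypes = np.array([[0.0, 0.0], [2.0, 2.0]])
labels = ["A", "B"]
print(classify_with_reject(np.array([0.2, 0.1]), prototypes, labels))   # clearly class A
print(classify_with_reject(np.array([1.0, 1.0]), prototypes, labels))   # ambiguous -> reject
```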
Back to Session IV
|