Computational Neuroscience

Dynamic synapses

The common understanding of long-term memory is that it is stored in the synaptic connections between neurons, in such a way that memory retrieval occurs as the relaxation of the neural activity to a constant spiking pattern that represents the memory. This idea was put forward by Hopfield (1982) and others as the attractor neural network. Synaptic dynamics challenges this mechanism, since persistent pre-synaptic activity typically weakens the synaptic strength. Including short-term synaptic plasticity in an attractor neural network turns memories into metastable states that rapidly switch from one to the next, depending on the sensory context. This work provides some insight into the puzzle of how the brain, viewed as a dynamical system, is able to build stable representations of the world while at the same time being capable of effortlessly switching between them. With J. Torres (University of Granada).
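The mechanism can be illustrated with a minimal NumPy sketch (a toy, not the published model): a Hopfield network whose outgoing synaptic efficacies deplete while the pre-synaptic neuron is active, in the spirit of Tsodyks-Markram short-term depression. All parameter values here are illustrative assumptions. In this deterministic toy the retrieved memory remains stable, but its depression-weighted strength decays toward a much lower plateau; with noise or stronger depression, this weakening is what turns memories into metastable states.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100

# Store two random +/-1 patterns with the Hebb rule (Hopfield 1982).
patterns = rng.choice([-1, 1], size=(2, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

# Short-term depression (simplified): each neuron's outgoing efficacy
# x_i is used up while the neuron is active and recovers with time
# constant tau. Parameters are illustrative, not fitted.
tau, U, dt = 30.0, 0.15, 1.0
x = np.ones(N)
s = patterns[0].astype(float).copy()     # start in memory 0

m_trace, eff_trace = [], []
for t in range(2000):
    h = W @ (x * s)                      # synaptic input, scaled by efficacy
    s = np.where(h >= 0, 1.0, -1.0)      # deterministic parallel update
    active = (s > 0).astype(float)
    x += dt * ((1.0 - x) / tau - U * x * active)
    m_trace.append(patterns[0] @ s / N)          # overlap with memory 0
    eff_trace.append(patterns[0] @ (x * s) / N)  # depression-weighted overlap
```

The raw overlap `m_trace` stays high while `eff_trace`, the overlap weighted by the depressed efficacies, decays: persistent retrieval activity erodes the very synapses that sustain the attractor.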

Neural control

Control theory is the obvious theoretical framework to describe the dynamics of animal behavior, whether this is the movement of limbs or more abstract cognitive planning. However, computing the controls requires an accurate model of the 'plant' (the system that must be controlled) and a substrate to store the state-dependent optimal control. We investigate how neural networks can be used for both of these tasks. For instance, it is known that echo state networks can accurately learn complex dynamical systems. We train these networks to learn the dynamics of the plant. In addition, we show that the optimal control solution can also be represented effectively in these networks.
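The first task, learning the plant, can be sketched as follows (a minimal illustration, not the group's actual code): an echo state network with fixed random recurrent weights, scaled to spectral radius below one for the echo state property, is driven by the plant's state and control, and a ridge-regression readout is trained to predict the next plant state. The damped-pendulum plant and all parameters are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
N_res = 200

# Toy 'plant': a damped pendulum, state (theta, omega), Euler-integrated.
def plant_step(state, u, dt=0.01):
    theta, omega = state
    domega = -np.sin(theta) - 0.2 * omega + u
    return np.array([theta + dt * omega, omega + dt * domega])

# Generate a trajectory driven by piecewise-constant random torques.
T = 5000
controls = np.repeat(rng.uniform(-1, 1, T // 50), 50)
states = np.zeros((T, 2))
s = np.array([0.1, 0.0])
for t in range(T):
    states[t] = s
    s = plant_step(s, controls[t])

# Echo state network: fixed random recurrent weights with spectral
# radius < 1 (echo state property), random input weights.
W = rng.normal(0, 1, (N_res, N_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (N_res, 3))    # inputs: theta, omega, u

# Drive the reservoir and collect its states.
X = np.zeros((T, N_res))
r = np.zeros(N_res)
for t in range(T):
    inp = np.concatenate([states[t], [controls[t]]])
    r = np.tanh(W @ r + W_in @ inp)
    X[t] = r

# Ridge-regression readout predicts the next plant state; only the
# readout is trained, the reservoir stays fixed.
washout = 100
A, Y = X[washout:-1], states[washout + 1:]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N_res), A.T @ Y)
pred = A @ W_out
mse = np.mean((pred - Y) ** 2)
```

Training only the linear readout is what makes echo state networks cheap to fit while the fixed recurrent reservoir supplies a rich, fading memory of the input history.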

Related Articles

Satoh S., Kappen H.J., Saeki M.
An iterative method for nonlinear stochastic optimal control based on path integrals.
IEEE Transactions on Automatic Control, vol. 62, no. 3, 2017

Kappen H.J., Ruiz H.C.
Adaptive importance sampling for control and inference.
Journal of Statistical Physics, vol. 162, pp. 1244-1266, 2016

Thalmeier D., Uhlmann M., Memmesheimer R., Kappen H.J.
Learning universal computations with spikes.
PLoS Computational Biology, pp. 1-29, 2016

Thalmeier D., Gómez V., Kappen H.J.
Action selection in growing state spaces: control of network structure growth.
Journal of Physics A, 2016

Thalmeier D., Gómez V., Kappen H.J.
Optimal control of network structure growth.
NIPS workshop on Advances in Approximate Bayesian Inference, 2016

All SNN publications