One of the marked differences between computers and animals is the ability of the latter to learn and to adapt flexibly to changing situations. Whereas computers must be programmed with provisions for all possible future circumstances, the brain adapts its 'program' when needed, striking a remarkable balance between the flexibility to adapt on the one hand and persistence, by re-using previously learned facts and skills, on the other. This is commonly referred to as intelligent behavior. Examples of intelligence include pattern recognition, learning and memory, reasoning, planning and motor control.

The question of how intelligence arises, and how it is computed in the animal brain, is not well understood. Historically, the two dominant computational paradigms have been the digital and the analog view. The digital view reduces computation to the manipulation of individual bits. As Turing showed, in this way one can compute (almost) any function, and it has been proposed that the spiking behavior of neurons implements digital computation in the brain. Although correct in principle, this view has not helped us much to understand how these functions can be built or learned. The analog view, in contrast, computes with continuous currents and voltages. According to this view, computation in the brain can be seen as the time evolution of a complex dynamical system implemented as a neural network. This is currently the dominant view in models of brain function.

One can also try to reproduce intelligence in artificial systems, and the problems of how intelligence is encoded in the brain and how it can be created artificially in computers are clearly related. Here, too, the digital and analog approaches have both made important contributions, many of which have their origin in the engineering sciences. In addition to being an important intellectual challenge in itself, artificial intelligence research has clear practical implications. We currently witness an explosion of applications of machine learning, the formal study of how machines learn, in areas such as robotics and data analysis.

Because noise and uncertainty play essential roles in perception and learning, a useful way to model intelligence is with probability models. In the mid-1990s, the fields of analog and digital computing, as separate approaches to modeling intelligence, began to merge through the idea of Bayesian inference: one can generalize the logic of digital computation to a probabilistic calculus, embodied in a so-called graphical model. Similarly, one can generalize dynamical systems to stochastic dynamical systems that admit a probabilistic description in terms of a Markov process. The Bayesian paradigm has greatly helped to integrate different schools of thought, particularly in artificial intelligence and machine learning, but also in computational neuroscience.
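The Bayes update at the heart of this probabilistic calculus can be sketched in a few lines. The states, probabilities, and function name below are illustrative assumptions for a toy two-state inference problem, not part of any particular SNN model:

```python
# Minimal sketch of Bayesian inference: updating a belief over a
# binary hidden state given one noisy observation. All numbers are
# illustrative.

def bayes_update(prior, likelihood):
    """Return the posterior P(state | observation) via Bayes' rule.

    prior      : dict mapping state -> P(state)
    likelihood : dict mapping state -> P(observation | state)
    """
    unnormalized = {s: prior[s] * likelihood[s] for s in prior}
    evidence = sum(unnormalized.values())  # P(observation)
    return {s: p / evidence for s, p in unnormalized.items()}

# Prior belief: both hidden states equally likely.
prior = {"rain": 0.5, "dry": 0.5}
# Observation model: a wet street is much more likely under rain.
likelihood = {"rain": 0.9, "dry": 0.2}

posterior = bayes_update(prior, likelihood)
# The observation shifts the belief strongly toward "rain".
```

Chaining such updates over time, with a transition model between steps, yields exactly the Markov-process filtering mentioned above.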

Research at SNN is dedicated to the design of novel and efficient computational methods for Bayesian inference and stochastic control theory, and to the application of these ideas in artificial intelligence and computational neuroscience.