Keywords: Bayesian inference, learning and reasoning, stochastic control theory, neural networks, statistical physics
One of the marked differences between computers and animals is the ability of the latter to learn and flexibly adapt to
changing situations. Whereas computers need to be programmed with provisions for all possible future circumstances, the
brain adapts its 'program' when needed, striking a remarkable balance between flexibility to adapt on the one hand and
persistence by re-using pre-learned facts and skills on the other. This is commonly referred to as intelligent behavior.
Particular examples of intelligence are pattern recognition, learning and memory, reasoning, planning and motor control.
How intelligence arises and how it is computed in the animal brain is not well understood. One can try to
reproduce intelligence in artificial systems; the problem of how intelligence is encoded in the brain and the problem
of how it can be created artificially in computers are clearly related. In addition to being an important intellectual
challenge in itself, artificial intelligence research also has clear practical implications. We currently witness an
explosion of applications of machine learning, the formal study of how machines learn, in areas such as robotics and data analysis.
Due to the essential roles that noise and uncertainty play in perception and learning, a useful way to model intelligence
is to use probability models. In the mid-1990s, the fields of analog and digital computing, until then separate approaches
to modeling intelligence, began to merge through the idea of Bayesian inference: one can generalize the logic of digital computation
to a probabilistic calculus, embodied in a so-called graphical model. Similarly, one can generalize dynamical systems to
stochastic dynamical systems that allow for a probabilistic description in terms of a Markov process. The Bayesian paradigm
has greatly helped to integrate different schools of thought, in particular in artificial intelligence and
machine learning, and also provides a computational paradigm for neuroscience.
My research is dedicated to the design of efficient and novel computational methods for Bayesian inference and stochastic
control theory, using ideas and methods from statistical physics. These methods are used by me and by others in
artificial intelligence research and in computational models of brain function.
Bayesian models are probability models and the typical computation, whether in the context of a complex data analysis
problem or in a stochastic neural network, is to compute an expectation value, which is referred to as Bayesian inference.
In general, exact Bayesian inference is intractable: computation time and memory use scale exponentially with the problem
size. However, many methods exist to compute these quantities approximately. Most of them originate in statistical
physics, such as the mean field method, belief propagation and Monte Carlo sampling. Applying these methods to machine
learning problems is challenging, and it is an active field of research to which I have made several contributions.
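As a minimal illustration of these approximation schemes (a sketch, not the production code used in our research), consider estimating the expectations <s_i> in a small Boltzmann machine. For a handful of variables the answer can still be computed exactly by enumeration, which lets one compare the naive mean field fixed point and a Gibbs (Monte Carlo) sampler against the truth; all sizes and parameter values below are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    W = rng.normal(0, 0.3, (n, n))
    W = (W + W.T) / 2                    # symmetric couplings
    np.fill_diagonal(W, 0)
    th = rng.normal(0, 0.1, n)           # biases; p(s) ~ exp(s'Ws/2 + th's), s_i in {-1,+1}

    # Exact answer by enumerating all 2^n states (cost grows exponentially in n).
    states = np.array([[1 if (k >> i) & 1 else -1 for i in range(n)] for k in range(2 ** n)])
    logp = 0.5 * np.einsum('ki,ij,kj->k', states, W, states) + states @ th
    p = np.exp(logp - logp.max())
    p /= p.sum()
    m_exact = p @ states

    # Naive mean field: solve m_i = tanh(sum_j W_ij m_j + th_i) by fixed-point iteration.
    m_mf = np.zeros(n)
    for _ in range(200):
        m_mf = np.tanh(W @ m_mf + th)

    # Gibbs (Monte Carlo) sampling: resample one spin at a time from its conditional.
    s = rng.choice([-1, 1], n).astype(float)
    samples = []
    for t in range(5000):
        for i in range(n):
            h = W[i] @ s + th[i]
            s[i] = 1.0 if rng.random() < 1 / (1 + np.exp(-2 * h)) else -1.0
        if t >= 500:
            samples.append(s.copy())
    m_gibbs = np.mean(samples, axis=0)

    print(np.round(m_exact, 2))
    print(np.round(m_mf, 2))
    print(np.round(m_gibbs, 2))

For n variables the exact sum has 2^n terms, which is precisely the exponential scaling that the approximate methods avoid.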
Current projects focus on the application of these ideas in concrete problems:
Efficient approximate inference methods make it possible to design large artificial reasoning systems. Currently, we are
designing a diagnostic decision support system for internal medicine, covering thousands of diagnoses, that should
assist the doctor during the diagnostic process (in collaboration with the Radboud academic hospital). A toy illustration
of such a diagnostic network is sketched below.
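The sketch below is a toy version of such a diagnostic network: a two-layer noisy-OR model, which is an assumed structure for illustration and not necessarily the design of the actual system. Given observed findings, it computes the posterior probability of each diagnosis by exact enumeration, which is only feasible here because the toy model has three diseases.

    import numpy as np
    from itertools import product

    # Hypothetical toy model: 3 diseases with prior probabilities, 2 observed
    # findings connected via noisy-OR links (assumed numbers, for illustration).
    prior = np.array([0.01, 0.05, 0.02])            # p(d_i = 1)
    lam = np.array([[0.9, 0.0, 0.3],                # lam[f, d]: p(finding f | only disease d)
                    [0.0, 0.8, 0.4]])
    leak = np.array([0.01, 0.02])                   # probability of a finding without any disease
    observed = np.array([1, 1])                     # both findings are present

    post = np.zeros(3)                              # accumulates p(d_i = 1 | findings), unnormalized
    Z = 0.0
    for d in product([0, 1], repeat=3):             # exact enumeration: 2^(#diseases) terms
        d = np.array(d)
        p_d = np.prod(np.where(d, prior, 1 - prior))
        p_f = 1 - (1 - leak) * np.prod((1 - lam) ** d, axis=1)   # noisy-OR likelihood per finding
        like = np.prod(np.where(observed, p_f, 1 - p_f))
        Z += p_d * like
        post += d * p_d * like
    print(np.round(post / Z, 3))                    # ranked posterior over diagnoses

With thousands of diagnoses this enumeration is impossible, which is exactly where the approximate inference methods above come in.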
Design of high-dimensional Bayesian data analysis methods. The motivation is that Bayesian integration over the posterior
distribution improves the statistical power of these methods compared to maximum likelihood approaches. Approximate
inference is used to efficiently compute statistics of the posterior distribution. One example is the use of the mean
field approximation for sparse L0 regression. Another example is Gaussian process regression with Monte Carlo
sampling; we have shown on yeast data that this method significantly outperforms all other methods and is
able to identify novel genetic causes.
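The following sketch illustrates the second example under simplifying assumptions: Gaussian process regression on synthetic 1-D data (not the yeast data), with a simple Metropolis sampler over the kernel hyperparameters, so that predictions are averaged over the posterior rather than fixed at a single maximum likelihood point.

    import numpy as np

    rng = np.random.default_rng(1)

    def rbf(X1, X2, ell, sf):
        d2 = (X1[:, None] - X2[None, :]) ** 2
        return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

    # Toy 1-D data set (illustrative only).
    X = np.linspace(0, 5, 25)
    y = np.sin(X) + 0.1 * rng.normal(size=25)
    Xs = np.linspace(0, 5, 100)
    sn = 0.1                                   # noise level, assumed known here

    def log_marglik(ell, sf):
        # GP log marginal likelihood: -y'K^{-1}y/2 - log|K|/2 (constant dropped).
        K = rbf(X, X, ell, sf) + sn ** 2 * np.eye(len(X))
        L = np.linalg.cholesky(K)
        a = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return -0.5 * y @ a - np.log(np.diag(L)).sum()

    # Metropolis sampling over (log ell, log sf) with a flat prior; predictions
    # are averaged over the hyperparameter posterior.
    theta = np.zeros(2)
    ll = log_marglik(*np.exp(theta))
    preds = []
    for t in range(3000):
        prop = theta + 0.1 * rng.normal(size=2)
        ll_p = log_marglik(*np.exp(prop))
        if np.log(rng.random()) < ll_p - ll:
            theta, ll = prop, ll_p
        if t > 500 and t % 10 == 0:
            ell, sf = np.exp(theta)
            K = rbf(X, X, ell, sf) + sn ** 2 * np.eye(len(X))
            preds.append(rbf(Xs, X, ell, sf) @ np.linalg.solve(K, y))
    print(np.mean(preds, axis=0)[:5])          # posterior-averaged predictive mean

The averaging step is what distinguishes the Bayesian treatment from plugging in a single optimized hyperparameter value.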
Control theory is an engineering framework that gives a formal description of how a system, such as a robot or animal, can
move from a current state to a future state at minimal cost, where cost can mean time spent, energy spent, or any other
quantity. Control theory is traditionally used to control industrial plants, airplanes or missiles, but it is also the natural
framework for modeling intelligent behavior in animals or robots. The mathematical formulation of deterministic control theory
is very similar to that of classical mechanics. In fact, classical mechanics can be viewed as a special case of control theory.
Stochastic control theory uses the language of stochastic differential equations. For a certain class of stochastic
control problems, the solution is described by a linear partial differential equation that can be solved formally as a path
integral. This so-called path integral control method provides a deep link between control, inference and statistical
physics. This statistical physics view of control theory shows that qualitatively different control solutions exist at
different noise levels, separated by phase transitions. The control solutions can be computed using efficient approximate
inference methods such as Monte Carlo sampling or deterministic approximation methods. Path integral control theory is
successfully being used by leading research groups in robotics worldwide. For more information see the
path integral control theory page.
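A minimal sketch of the Monte Carlo route, under simplifying assumptions (1-D state, no drift, no path cost, quadratic control cost): the optimal control at (x, t) is obtained by sampling uncontrolled trajectories, weighting them by exp(-S/lambda), and taking the weighted average of the first noise increment. All parameter values are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(2)

    # 1-D dynamics dx = u dt + dW, noise variance nu per unit time, control cost
    # (1/2) R u^2 with R = 1 and end cost phi(x_T); then lambda = nu * R.
    nu, dt, T, K = 0.5, 0.01, 1.0, 3000
    lam = nu
    steps = int(T / dt)

    def phi(x):
        return (x ** 2 - 1.0) ** 2                     # double-well end cost: optima at x = +/-1

    def u_optimal(x, t_idx):
        # Sample K uncontrolled rollouts from (x, t) to T, weight each by
        # exp(-S/lambda); the optimal control is the weighted average of the
        # first noise increment, divided by dt.
        n = steps - t_idx
        xi = rng.normal(0.0, np.sqrt(nu * dt), (K, n))
        xT = x + xi.sum(axis=1)
        w = np.exp(-(phi(xT) - phi(xT).min()) / lam)   # shift for numerical stability
        return (w @ xi[:, 0]) / (w.sum() * dt)

    x = 0.0                                            # start exactly between the two wells
    for t_idx in range(steps):
        x += u_optimal(x, t_idx) * dt + rng.normal(0.0, np.sqrt(nu * dt))
    print("final state:", round(x, 3))

Varying nu in this toy problem changes the character of the solution near x = 0, a simple instance of the noise-dependent behavior and phase transitions mentioned above.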
Another line of research, started in 2000 and still ongoing,
concerns the effect of short-term synaptic plasticity on memory
storage. The common understanding of long-term memory is that it is stored
in the synaptic connections between neurons in such a way that memory
retrieval occurs as the relaxation of the neural activity to a constant
spiking pattern that represents the memory. This idea was put forward
by Hopfield (1982) and others as the attractor neural network. Synaptic
dynamics challenges this mechanism, since persistent pre-synaptic
activity typically weakens the synaptic strength. The inclusion
of short-term synaptic plasticity in an attractor neural network
makes memories metastable states that rapidly switch from one
to the next, depending on the sensory context. This work provides
some insight into the puzzle of how the brain, viewed as a dynamical
system, is able to build stable representations of the world and at
the same time is capable of effortlessly switching between them. (With J. Torres, University of Granada.)
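A minimal sketch of the mechanism, assuming a standard Hopfield network augmented with Tsodyks-Markram style synaptic depression; the parameters are arbitrary illustrative choices, and whether the activity stays locked in one memory or becomes metastable and hops between states depends on them.

    import numpy as np

    rng = np.random.default_rng(3)

    N, P = 200, 2
    xi = rng.choice([-1, 1], (P, N))             # two stored random patterns
    J = (xi.T @ xi) / N                          # Hebbian couplings
    np.fill_diagonal(J, 0)

    s = xi[0].astype(float)                      # start in memory 0
    x = np.ones(N)                               # fraction of available synaptic resources
    tau_rec, U, beta = 30.0, 0.5, 10.0           # recovery time, release fraction, inverse temperature

    for t in range(600):
        h = J @ (x * s)                          # depression scales each synapse by x_j
        s = np.tanh(beta * h)                    # graded activity update
        rate = (s + 1.0) / 2.0                   # activity mapped to a [0, 1] rate
        x += (1.0 - x) / tau_rec - U * x * rate  # resources deplete with use, recover slowly
        if t % 100 == 0:
            print(t, np.round(xi @ s / N, 2))    # overlap with each stored memory

Without the resource variable x the network would simply relax into pattern 0 and stay there; with depression, sustained retrieval erodes the very couplings that support it, which is the destabilizing effect described above.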
Modern experimental neuroscience has been revolutionized by
sophisticated measurement equipment, such as fMRI, MEG and others. In addition,
advances in EEG measurement systems have accelerated research
on Brain Computer Interfaces. Thirdly, research tools in genetics have
led to an explosion of DNA and expression data. These massive data sets require
advanced data analysis tools. Machine learning methods (kernel methods,
sparse dimension reduction methods, ICA, Bayesian approaches) provide
the most promising approach to analyze these data.
Brain Computer Interface
In 2009, I started research on
the design of an adaptive BCI system, based on the idea that subjects will
be surprised when the BCI output differs from their expectation. This
surprise is measurable as a so-called error potential. The detection of the
error potential can be used to adapt the BCI device, using the perceptron
learning rule (V. Gomez and A. Llera with O. Jensen, Donders Neuro-imaging Center).
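The sketch below shows the adaptation loop in its simplest form; the features and decoder are synthetic stand-ins, not the actual BCI pipeline. The key observation is that for a binary BCI the error potential, which only signals that the last output was wrong, is exactly the feedback the perceptron rule needs, since the correct label is then the opposite of the emitted one.

    import numpy as np

    rng = np.random.default_rng(4)

    d = 16                                   # number of EEG features (hypothetical)
    w_true = rng.normal(size=d)              # unknown "ideal" decoder, used only to simulate the subject
    w = np.zeros(d)                          # adaptive decoder, starts blank

    for trial in range(500):
        feat = rng.normal(size=d)            # simulated EEG features for this trial
        y_intended = np.sign(w_true @ feat)  # the command the subject intended
        y_out = np.sign(w @ feat)
        if y_out == 0:
            y_out = 1.0                      # break ties
        # An error potential signals only THAT the output was wrong; for a binary
        # BCI this implies the intended label was -y_out, so the perceptron rule applies.
        error_potential_detected = (y_out != y_intended)
        if error_potential_detected:
            w += -y_out * feat               # perceptron update toward the intended command

    Xtest = rng.normal(size=(200, d))
    agreement = np.mean(np.sign(Xtest @ w) == np.sign(Xtest @ w_true))
    print("agreement with ideal decoder:", agreement)

In the real system the detection of the error potential is itself a classification problem on the EEG signal; here it is simulated as perfectly reliable for clarity.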
Bayesian methods have great potential for
immediate application in areas outside science. There is a long-standing tradition
in the SNN group of building such applications together with its spin-off companies Smart Research and Promedas.
Here are a few examples:
We have applied an advanced approximate
inference method (the Cluster Variation Method) to construct haplotypes in complex pedigrees. The
method was shown to outperform the state-of-the-art Monte Carlo sampling
approach on a subset of problems. The software is
publicly available. Contact Kees Albers for details: caa at sanger dot ac dot uk.
Aladin is a software tool for performing efficient linkage analysis of a small number of distantly related individuals. It estimates multipoint IBD probabilities and parametric LOD scores. Contact Kees Albers for details: caa at sanger dot ac dot uk.
For Shell, we built a petrophysical expert system.
It estimates the type of soil and the probability that it contains oil, gas
or other valuable minerals, based on drilling measurements. The system is
based on a Bayesian network where the probability computation is done using
a Monte Carlo sampling method. See Smart Research for further details and other products.
For the Netherlands Forensic Institute, we are
building a victim identification system that matches victims' DNA profiles
against pedigrees of relatives of missing persons in large databases,
using a Bayesian network. See
Bonaparte for further details.
We have built the world's largest and most up-to-date medical
expert system for diagnostic advice in internal medicine. The system is
being commercialized by Promedas bv and has been operational
at the Utrecht academic hospital since the end of 2008. See
Promedas for further details.
Wine and food
We have built a system that selects the most appropriate wines
to combine with your food. See Wine wine wine.
Kees Albers was a PhD student and postdoc working on approximate
inference methods for genetic linkage analysis. Kees spent four years at the Sanger
Institute, Cambridge, UK, and has been at Human Genetics in Nijmegen since 2012.
Bram Kasteel was a Bachelor student on the topic of multi-agent control
Stijn Tonk was a Master student on the topic of multi-agent control
Ender Akay was a programmer for Smart Research bv and Promedas bv
Gulliver de Boer was a Bachelor student on the topic of multi-agent control applied to poker
Max Bakker was a Bachelor student on the topic of multi-agent systems
Ben Ruijl was a Bachelor student on the topic of multi-agent systems
Henk Griffioen was a Master student on the topic of genetic association studies
Bart van den Broek was a PhD student on the topic of stochastic optimal control theory
Gheshlaghi Azar M., Munos R., Ghavamzadeh M., Kappen H.J. Speedy Q-learning: a computationally efficient reinforcement learning algorithm with a near optimal rate of convergence. Journal of Machine Learning Research, 2013.
Tramper J.J., Broek J.L. van den, Wiegerinck W.A.J.J., Kappen H.J., Gielen C.C.A.M. Stochastic optimal control predicts human motor behavior in time-constrained sensorimotor tasks. Biological Cybernetics, pp. xx, 2011.
Gheshlaghi Azar M., Munos R., Ghavamzadeh M., Kappen H.J. Speedy Q-learning. NIPS 2011, Advances in Neural Information Processing Systems 24, vol. 25, pp. 2411-2419, 2011.
Albers C.A., Kappen H.J. Modeling linkage disequilibrium in exact linkage. BMC Proceedings of the Genetic Analysis Workshop 15: Gene Expression Analysis and Approaches to Detecting Multiple Functional, vol. 1, pp. S159, 2007.
Torres J.J., Marro J., Cortes J.M., Kappen H.J. Attractor neural networks with activity-dependent synapses: the role of synaptic facilitation. Neurocomputing, vol. 70, no. 10-12, pp. 2022-2025, 2007.
Cemgil A.T., Kappen H.J. Tempo tracking and rhythm quantization by sequential Monte Carlo. Advances in Neural Information Processing Systems 14, Part VIII Applications, vol. 14-2, pp. 1361-1368, 2002.
Kappen H.J. A novel iteration scheme for the cluster variation method. Neural Information Processing Systems, vol. 13, pp. 415-422, 2001.
Kappen H.J., Gielen C.C.A.M., Wiegerinck W.A.J.J., Cemgil A.T., Nijman M.J. Approximate reasoning: real world applications of graphical models. Foundations of Real-World Intelligence, pp. 73-121, 2001.
Cemgil A.T., Kappen H.J. A dynamic belief network implementation for realtime music transcription. Proceedings of the 13th Belgian-Dutch Conference on Artificial Intelligence, pp. 473-474, 2001.
Wiegerinck W.A.J.J., Burg W.J.P.P. ter, Dam P.S. van, Ying-Lie O., Neijt J.P., Kappen H.J. An advisory system for anaemia based on Boltzmann machines. 5th European Congress on Intelligent Techniques and Soft Computing, vol. 1, pp. 364-368, 1997.
Kappen H.J. A polynomial time algorithm for Boltzmann machine learning. Workshop Cambridge, 1997.
Kappen H.J., Mempen, Dijken A. van, Otten H.A.F.M. Voorspelling van frisdrankverkoop [Prediction of soft drink sales], 1997.
Kappen H.J., Gielen C.C.A.M. Neural networks: best practice in Europe. Neural Networks: Best Practice in Europe, pp. 209, 1997.
Kappen H.J., Rodriquez F.B. Accelerated learning in Boltzmann machines using mean field theory. Artificial Neural Networks 7, pp. 301-306, 1997.
Wiegerinck W.A.J.J., Burg W.J.P.P. ter, Dam P.S. van, Ying-Lie O., Neijt J.P., Kappen H.J. Lab-test selection in diagnosis of anaemia. Neural Networks: Best Practice in Europe, pp. 179-181, 1997.
Wiegerinck W.A.J.J., Kappen H.J. Practical confidence and prediction intervals for prediction tasks. Neural Networks: Best Practice in Europe, pp. 128-135, 1997.
Kappen H.J., Wiegerinck W.A.J.J., Morgan T., Harris T.J., Paillet G., Kopecz K. Stimulation initiative for European neural applications (SIENA). Neural Networks: Best Practice in Europe, pp. 1-8, 1997.