Graphical models are probability models, typically consisting of a large number of variables. The term graphical model derives from the fact that the relations between the variables can be represented as a graph, i.e. a network in which the nodes represent the variables and the links represent the probabilistic relations between them. Well-known examples of graphical models are the Boltzmann Machine, an abstract model of interaction between neurons in the brain, and Promedas, a probabilistic model for the diagnosis of diseases in internal medicine. More information about graphical models, and Bayesian networks in particular, can be found on the website of the Association for Uncertainty in Artificial Intelligence. You may also want to check out Kevin Murphy's introduction to Bayesian networks and graphical models, or our own (English and Dutch).
The main computation performed in graphical models is called inference: the computation of the probability of a subset of the variables, typically given observed values of some of the others. Inference is the basic computation needed by any system that learns, such as a neural network or a robot, or reasons, such as an expert system. Inference is intractable in general: the computation time and memory requirements scale exponentially with the number of variables. For this reason, exact inference methods often cannot be applied to very large applications.
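To make the exponential cost concrete, here is a minimal sketch of exact inference by brute-force enumeration on a toy Bayesian network A → B → C. All probability tables are made up for illustration and do not come from any real application.

```python
import itertools

# Illustrative conditional probability tables for a chain A -> B -> C
# of binary variables (all numbers are made up).
p_a = {0: 0.6, 1: 0.4}                       # P(A)
p_b_given_a = {0: {0: 0.7, 1: 0.3},          # P(B | A)
               1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.9, 1: 0.1},          # P(C | B)
               1: {0: 0.4, 1: 0.6}}

def joint(a, b, c):
    """Joint probability P(A=a, B=b, C=c) from the chain factorization."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# Marginal P(C = 1): sum the joint over all states of the other variables.
# This loop visits 2^2 configurations; with n unobserved binary variables
# it would visit 2^n, which is the exponential scaling mentioned above.
p_c1 = sum(joint(a, b, 1) for a, b in itertools.product([0, 1], repeat=2))
```

Exact algorithms such as the junction tree method exploit the graph structure to do much better than this full enumeration, but their cost still grows exponentially in the worst case, which motivates the approximate methods discussed below.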
The aim of our research on graphical models is to develop approximate inference methods that are both fast and accurate. Our approach is to use methods derived from statistical physics, including the mean-field and TAP approximations and variational methods. More recently, a class of message-passing methods known as belief propagation has become popular; these methods have been shown to implement another statistical physics method, the Cluster Variation Method. They are currently the state of the art for approximate inference, in terms of both accuracy and efficiency. Our group has applied this work to large-scale applications, such as the Promedas medical diagnostic expert system, and more recently to stochastic optimal control. It is also relevant for computational neuroscience, because brains must solve very similar problems: the insights from approximate inference methods provide guiding principles that can constrain model design (for instance, in low-level vision).
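The message-passing idea behind belief propagation can be sketched on a small chain of three binary variables with pairwise potentials. The potentials below are invented for illustration; on a tree-structured graph such as this chain, sum-product belief propagation is exact.

```python
import numpy as np

# Pairwise potentials psi(x_i, x_j) and unary potentials phi_i(x_i) for a
# chain x1 - x2 - x3 of binary variables (all numbers are made up).
psi12 = np.array([[1.0, 0.5], [0.5, 1.0]])
psi23 = np.array([[1.0, 2.0], [2.0, 1.0]])
phi = {1: np.array([0.7, 0.3]),
       2: np.array([0.5, 0.5]),
       3: np.array([0.2, 0.8])}

# Messages along the chain, e.g. m_{1->2}(x2) = sum_{x1} phi_1(x1) psi(x1, x2).
m12 = psi12.T @ phi[1]             # forward message from x1 to x2
m23 = psi23.T @ (phi[2] * m12)     # forward message from x2 to x3
m32 = psi23 @ phi[3]               # backward message from x3 to x2
m21 = psi12 @ (phi[2] * m32)       # backward message from x2 to x1

def belief(phi_i, *msgs):
    """Belief: local potential times all incoming messages, normalized."""
    b = phi_i.copy()
    for m in msgs:
        b = b * m
    return b / b.sum()

b2 = belief(phi[2], m12, m32)      # marginal P(x2); exact on this chain
```

Each message is a small local sum, so the total cost is linear in the number of variables rather than exponential; on graphs with loops the same updates are iterated and yield approximate marginals.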
We have developed two generic software packages for inference in graphical models. BayesBuilder is a generic tool for developing graphical model applications. LibDAI is a library of software routines for approximate inference.