This course presents various models of neural networks and of learning in the brain. Principles of optimal representation of information and of learning are introduced using concepts from information theory and statistics, both for feed-forward and for recurrent neural networks. The behavior of these networks is discussed with implications for the understanding of biological neural networks, as well as applications of the basic principles of neuronal information processing to pattern recognition.
This course is mandatory for NW students with a biophysics specialization. It is open to all physics and NW students and is part of the minor Neuroscience.
The course Introduction Biophysics is recommended as preparation for this course.
The course uses the book Theoretical Neuroscience by Peter Dayan and L.F. Abbott.
The lectures are on Monday from 13:30-15:30 in HG01.028.
The practicals (werkcollege) are on Wednesday from 13:30-15:30 in HG01.057 or HG01.139.
The practicals instructor is Patrick Lessman.
The exercise material can be found here.
Most of the exercises require the use of Matlab. If you need help with Matlab, you can consult the Matlab help pages.
To help you get started, here are the m-files that contain the solutions to exercises 1 and 6 of chapter 1:
This document lists the material and exercises for each week. It will probably be updated weekly throughout the course.
I made PowerPoint slides for all chapters. They can be downloaded here:
Note that these PowerPoint files are still being improved as the course progresses.