Advanced Computational Neuroscience (NWI-NM085C) - 2021/2022

Course information

The aim of this course is to provide an overview of some key theoretical concepts commonly used in computational neuroscience. The course consists of two parts, given by Bert Kappen and Paul Tiesinga. This page describes the part of the course taught by Bert Kappen.

Course material:

The schedule below lists, for each week, the topic, the course material, the weekly exercises and the take-home exercises.
Week 5 (Lecture 1): Stochastic neural processes
Topics: Poisson processes; first passage time model
Material: DA chapter 1, chapter 5.4; handouts chapter 2; Gerstein and Mandelbrot, 1964
Weekly exercises: DA 1 Ex. 1 (here is the answer of 1.1); handouts ch. 2 Ex. 1, 2, 3a; DA 5 Ex. 3
Take-home exercise: the drift-diffusion model for reaction times (driftdiffusion.pdf)
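As a starting point for this take-home exercise, here is a minimal sketch of simulating reaction times as first-passage times of a drift-diffusion process, in the spirit of Gerstein and Mandelbrot; the drift, noise level, threshold and time step are illustrative choices, not values prescribed by the exercise.

    % First-passage times of a drift-diffusion process: a particle drifts
    % towards an absorbing threshold, and the crossing time is the reaction time.
    % All parameter values are illustrative.
    mu = 0.5;  sigma = 1.0;        % drift and noise strength
    a  = 2.0;                      % absorbing threshold
    dt = 1e-3; Tmax = 20;          % time step and maximum simulated time
    ntrials = 5000;

    rt = nan(ntrials, 1);          % first-passage (reaction) times
    for k = 1:ntrials
        x = 0; t = 0;
        while x < a && t < Tmax
            x = x + mu*dt + sigma*sqrt(dt)*randn;   % Euler-Maruyama step
            t = t + dt;
        end
        if x >= a
            rt(k) = t;             % record only trials that reached the threshold
        end
    end

    rt = rt(~isnan(rt));
    histogram(rt, 50);             % skewed distribution typical of reaction times
    xlabel('first-passage time'); ylabel('count');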
Week 0: McCulloch-Pitts neurons, IF neuron, Perceptron
Topics: gradient descent rules, logistic regression; multi-layer perceptrons; deep neural networks
Material: slides CDS ML 66-90; slides CDS ML 98-106, 112-116; slides CDS ML 117-127; handouts chapter 6 (based on HKP chapters 5 and 6)
Weekly exercises: handouts ch. 6 Ex. 2, 3, 4

Take-home exercise: choose one of these two exercises:

  • Perceptron learning rule exercise.
    • Write a computer program that implements the perceptron learning rule. Take as data p random input vectors of dimension n with binary components, and assign each a random output of -1 or +1. Take n=50 and test empirically that for p < 2n the rule converges almost always and for p > 2n it converges almost never. Here is the template of a Matlab program (an independent minimal sketch also follows this list).
    • Reconstruct the curve C(p,n) as a function of p for n=50 in the following way. For each p construct a number of learning problems randomly and compute the fraction of these problems for which the perceptron learning rule converges. Plot this fraction versus p.
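A minimal sketch of the perceptron learning rule described in the first option, assuming +/-1 inputs and outputs; this is an independent illustration, not the course's Matlab template.

    % Perceptron learning rule on p random patterns of dimension n.
    % Inputs and targets are +/-1; convergence depends on p relative to 2n.
    n = 50;  p = 80;                    % try p well below and well above 2n
    xi  = sign(rand(p, n) - 0.5);       % random +/-1 input patterns
    tgt = sign(rand(p, 1) - 0.5);       % random +/-1 target outputs
    w = zeros(n, 1); b = 0;
    eta = 0.1; maxepochs = 1000;

    for epoch = 1:maxepochs
        nerrors = 0;
        for m = 1:p
            y = sign(xi(m, :)*w + b);
            if y ~= tgt(m)              % update only misclassified patterns
                w = w + eta*tgt(m)*xi(m, :)';
                b = b + eta*tgt(m);
                nerrors = nerrors + 1;
            end
        end
        if nerrors == 0
            break;                      % all patterns classified correctly
        end
    end
    if nerrors == 0
        fprintf('converged after %d epochs\n', epoch);
    else
        fprintf('no convergence within %d epochs\n', maxepochs);
    end

Repeating this over many randomly generated problems for each p and recording the fraction that converges gives the curve C(p,n) asked for in the second option.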

Or:

Week 6a (Lecture 2): Sparse visual coding
Topics: PCA, sparse coding, Lasso
Material: Olshausen and Field 1996, Emergence of simple-cell receptive field properties by learning a sparse code for natural images
Take-home exercise: sparse coding exercise (program template; natural image)
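The sparse coding exercise itself uses the program template and natural image linked above. Purely for orientation, here is a sketch of the sparse inference step (the Lasso, solved by iterative soft-thresholding) for a fixed random dictionary; the sizes and regularisation strength are illustrative, and in the exercise the dictionary is learned from image patches rather than drawn at random.

    % Lasso / sparse coding inference by iterative soft-thresholding (ISTA):
    % minimise 0.5*||x - Phi*a||^2 + lambda*||a||_1 for a fixed dictionary Phi.
    rng(0);
    m = 64; K = 128;                          % signal dimension, number of basis functions
    Phi = randn(m, K); Phi = Phi ./ vecnorm(Phi);                 % unit-norm columns
    a_true = zeros(K, 1); a_true(randperm(K, 5)) = randn(5, 1);   % sparse coefficients
    x = Phi*a_true + 0.01*randn(m, 1);        % noisy observation

    lambda = 0.05;
    L = norm(Phi)^2;                          % Lipschitz constant of the gradient
    a = zeros(K, 1);
    for it = 1:500
        g = Phi'*(Phi*a - x);                 % gradient of the quadratic term
        a = a - g/L;                          % gradient step
        a = sign(a) .* max(abs(a) - lambda/L, 0);   % soft-thresholding (L1 prox)
    end
    fprintf('nonzero coefficients recovered: %d (true: 5)\n', nnz(abs(a) > 1e-3));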
Week 6b (Lecture 3): Stochastic neural networks
Topics: stochastic binary neurons and networks; Markov processes; mean field and linear response approximation
Material: handouts cns chapter 3
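To make this material concrete, here is a sketch (with random, illustrative couplings) of Glauber dynamics for stochastic binary neurons, compared against the naive mean-field equations.

    % Glauber dynamics for N stochastic binary (+/-1) neurons with symmetric
    % couplings J and thresholds theta, compared with the naive mean-field
    % solution m_i = tanh(sum_j J_ij m_j + theta_i). Couplings are illustrative.
    rng(1);
    N = 20;
    J = randn(N)/sqrt(N); J = (J + J')/2; J(1:N+1:end) = 0;   % symmetric, no self-coupling
    theta = 0.1*randn(N, 1);

    s = sign(randn(N, 1));                    % random initial state
    nsweeps = 5000; burnin = 1000; msum = zeros(N, 1);
    for t = 1:nsweeps
        for i = randperm(N)
            h = J(i, :)*s + theta(i);                    % local field
            s(i) = 2*(rand < 1/(1 + exp(-2*h))) - 1;     % p(s_i = +1) = sigmoid(2h)
        end
        if t > burnin
            msum = msum + s;
        end
    end
    m_sampled = msum/(nsweeps - burnin);

    m = zeros(N, 1);                          % naive mean-field, solved by iteration
    for it = 1:200
        m = tanh(J*m + theta);
    end
    disp([m_sampled m]);                      % compare sampled and mean-field magnetisations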
Week 7 (Lecture 4): Boltzmann machines
Topics: Boltzmann machine learning; application to salamander retina data; restricted BM, contrastive divergence and collaborative filtering
Material: HKP chapter 7.1; DA 7.6; handouts cns chapter 4; Schneidman et al., Nature 2006
Weekly exercises: handouts cns chapter 4, exercise 1a
Take-home exercise: Boltzmann machine exercise (salamander retina data)
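For orientation on the Boltzmann machine learning rule, here is a sketch of a fully visible Boltzmann machine (the pairwise maximum-entropy model used by Schneidman et al.), trained by gradient ascent with exact expectations obtained by enumerating all states; the data below are synthetic stand-ins, not the salamander retina recordings.

    % Fully visible Boltzmann machine: p(s) ~ exp(0.5*s'*w*s + theta'*s), s in {-1,+1}^N.
    % Learning rule: dw_ij ~ <s_i s_j>_data - <s_i s_j>_model (likewise for theta).
    % Exact model expectations by enumeration, feasible only for small N.
    rng(2);
    N = 5; P = 2000;
    data = sign(randn(P, N));                  % toy +/-1 patterns (placeholder data)

    states = 2*(dec2bin(0:2^N-1) - '0') - 1;   % all 2^N states as rows of +/-1
    m_data = mean(data, 1)';                   % <s_i>_data
    C_data = (data'*data)/P;                   % <s_i s_j>_data

    w = zeros(N); theta = zeros(N, 1); eta = 0.1;
    for it = 1:500
        E = -0.5*sum((states*w).*states, 2) - states*theta;   % energies of all states
        p = exp(-E); p = p/sum(p);                            % Boltzmann distribution
        m_model = states'*p;                                  % <s_i>_model
        C_model = states'*(states.*p);                        % <s_i s_j>_model
        w = w + eta*(C_data - C_model); w(1:N+1:end) = 0;     % Boltzmann machine learning rule
        theta = theta + eta*(m_data - m_model);
    end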
Week 8 (Lecture 5): Attractor neural networks: the Hopfield model
Topics: delayed neural activity; evidence for attractor dynamics; the Hopfield model; capacity of the Hopfield network; attractor dynamics in hippocampus
Material: HKP chapter 2; DA pp. 322-324; handouts cns chapter 5; Trappenberg 8.2 and 8.3
Further reading: Wills et al., Science 2005 (attractors in the hippocampus)
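A minimal sketch of the Hopfield model with Hebbian storage and asynchronous recall; the network size, pattern count and cue noise are illustrative, with the number of patterns kept well below the capacity of roughly 0.138N.

    % Hopfield network: store P random patterns with the Hebb rule and recall
    % one of them from a corrupted cue by asynchronous deterministic updates.
    rng(3);
    N = 100; P = 10;                        % P well below the capacity ~0.138*N
    xi = sign(randn(P, N));                 % random +/-1 patterns
    W = (xi'*xi)/N; W(1:N+1:end) = 0;       % Hebbian couplings, no self-connections

    s = xi(1, :)';                          % start from pattern 1 ...
    flip = randperm(N, 15); s(flip) = -s(flip);   % ... with 15 bits flipped
    for sweep = 1:20
        for i = randperm(N)
            h = W(i, :)*s;
            s(i) = sign(h) + (h == 0);      % asynchronous update; ties broken to +1
        end
    end
    fprintf('overlap with stored pattern 1: %.2f\n', (xi(1, :)*s)/N);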
Week 9: RL 1
Topics: trial-level RL (DA 9.1, 9.3); real-time RL (DA 9.2); introduction to the theory of RL; introduction to control theory
Material: Sutton and Barto; Intro to the theory of RL, Kaelbling; Intro to control theory, Kappen; slides animal learning; slides RL and control theory
Further reading: Montague et al.; Waelti 2001
Weekly exercises: reproduce fig. 17 in Sutton and Barto; DA 9.5
Take-home exercise: construct the Bayesian solution of the two-armed bandit problem using dynamic programming
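As an illustration of real-time RL (DA 9.2), here is a sketch of tabular TD(0) learning of the value of time within a trial in which a reward arrives at a fixed time step; the parameter values are illustrative, and this is not one of the set exercises.

    % Tabular TD(0) over time steps within a trial: a reward arrives at t = 20,
    % and after learning the value function anticipates it. Values are illustrative.
    T = 30; trew = 20;
    r = zeros(T, 1); r(trew) = 1;          % reward at time step 20 of each trial
    v = zeros(T, 1);                       % value of each time step
    lrate = 0.1;
    for trial = 1:500
        for t = 1:T
            vnext = 0;
            if t < T, vnext = v(t+1); end
            delta = r(t) + vnext - v(t);   % TD prediction error
            v(t)  = v(t) + lrate*delta;
        end
    end
    plot(1:T, v); xlabel('time step within trial'); ylabel('learned value v(t)');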
Week 10: RL 2
Topics: exploration and exploitation in the bandit problem; value iteration, policy iteration; model-free and model-based approaches; actor-critic, TD learning; illustration of policy iteration and the direct actor for animal learning (DA 9.4); Q-learning and Dyna; linear function approximation; Foster water maze
Material: Intro to the theory of RL, Kaelbling; Foster et al.; slides RL and control theory
Weekly exercises: DA 9.6, DA 9.8 and DA 9.9
Take-home exercise: reproduce the reinforcement learning result presented in fig. 3 of Foster et al. The more complex coordinate-based navigation model also discussed in the paper is not required.
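To make the value iteration topic concrete, here is a sketch of value iteration on a small deterministic grid world; the grid, reward and discount factor are illustrative and unrelated to the Foster et al. water-maze exercise.

    % Value iteration on a 4x4 deterministic grid world with a single rewarded
    % goal state: V(s) <- max_a [ r(s,a) + gamma * V(s') ].
    nrows = 4; ncols = 4; gamma = 0.9;
    goal = [1, ncols];                      % reward of +1 on entering the goal
    V = zeros(nrows, ncols);
    moves = [0 1; 0 -1; 1 0; -1 0];         % right, left, down, up

    for it = 1:100
        Vnew = V;
        for r = 1:nrows
            for c = 1:ncols
                if isequal([r c], goal), Vnew(r, c) = 0; continue; end
                best = -inf;
                for a = 1:4
                    rn = min(max(r + moves(a, 1), 1), nrows);   % stay inside the grid
                    cn = min(max(c + moves(a, 2), 1), ncols);
                    rew = double(isequal([rn cn], goal));
                    best = max(best, rew + gamma*V(rn, cn));
                end
                Vnew(r, c) = best;
            end
        end
        V = Vnew;
    end
    disp(V)                                 % values decay with distance to the goal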


Other topics, not covered


Grading, final assignments and examination

Exercises

Exercises must be handed in one week after they have been assigned. Exercises handed in on time contribute to an overall average, which is scaled to 0-1 and added as a bonus to the final grade.

Examination

There will be no final examination. The grade will be based on the take-home exercises, plus one bonus point for the weekly exercises. All students should work individually and hand in their own results. For each take-home exercise, hand in code that can be run stand-alone together with a written report.

Hand in the final results of your take-home exercises for weeks 5, 6 and 7 before March 15, 2022, and the remaining exercises before April 15, 2022.

This course overlaps with CDS Machine Learning and with Advanced Machine Learning. The few students who have not passed or followed CDS Machine Learning are advised to study the material and exercises on the perceptron (week 0) on their own; this will not add to their grade.
Students who have followed, or are currently following, Advanced ML will not do the exercise for the Boltzmann machine (week 7), but will instead do the exercise for the Hopfield network (week 8, to be provided).
Students who do not follow Advanced ML will do the exercise for the Boltzmann machine (week 7) and not the exercise for the Hopfield network.