Computational Neuroscience (NWI-NM047C) - autumn 2018


Course information


Overview of lectures and assignments

Lectures: Mon, 10:30-12:15 in HG00.308
Practice hours: Wed 10:30-12:15 in HG00.206 (wk 36-43); HG00.029 (wk 45-51)
The planning and locations may change. Check the online timetable!

Schedule
Remark: this is the general planning; always check Brightspace for changes.
Tips for handing in your assignments.

Each entry below lists the lecture date, the lecture code and topic, followed by the practical date, the weekly exercises and the take-home exercises.

Sept 3 - Paul 1 - Non-linear systems 1: Numerical integration of ODEs, Root finding, Bifurcations in 1D
  Practical: Sept 5
  Weekly exercises: Assignments Paul 1 - non-linear dynamics 1
  Remark: Exercise 3 is shifted to the Paul 2 practical, but try to finish 3a already this week.

Sept 10 - Paul 2 - Non-linear systems 2: Bifurcations in 2D
  Practical: Sept 12
  Weekly exercises: Assignments 1-2 of Paul 2 - non-linear dynamics 2, plus Assignment 3 of Paul 1 - non-linear dynamics 1

Sept 17 - Paul 3 - Non-linear systems 3: Bifurcations in neuronal models
  Practical: Sept 19
  Weekly exercises: Assignment 3 of Paul 2 - non-linear dynamics 2, plus Assignment 1 of Paul 3 - bifurcations in neuronal models

Sept 24 - Paul 4 - Neural coding 1: Population coding, Maximum Likelihood, MAP
  Practical: Sept 26
  Weekly exercises: Assignment 2 of Paul 3 - bifurcations in neuronal models, plus Assignment Paul 4 - Kuramoto

Oct 1 - Paul 5 - Neural coding 2: Fisher information
  Practical: Oct 3
  Weekly exercises: Assignments Paul 5 - Coding of information

Oct 8 - Paul 6 - Information theory: Mutual information, methods
  Practical: Oct 10
  Weekly exercises: Assignments Paul 6 - information theory

Oct 15 - Paul 7 - Topographic maps: Overview of the visual system, Kohonen map
  Practical: Oct 24
  Weekly exercises: Assignments Paul 7 - Mutual information and Kohonen map

Nov 5 - Bert 1 - Poisson processes; first passage time model
  Practical: Nov 7
  Weekly exercises: DA 1 Ex. 1* (here is the answer of 1.1); Handouts ch. 2 Ex. 1, 2, 3a; DA 5 Ex. 3*
  Material: The drift diffusion model for reaction times (driftdiffusion.pdf)

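The first-passage-time picture behind the drift diffusion model can be illustrated with a few lines of simulation. The sketch below is only an illustration, not the course material (that is in driftdiffusion.pdf, and the course templates are in Matlab); the parameter values mu, sigma, a and the time step are arbitrary choices of mine.

```python
# Minimal sketch (not course code): reaction times as first passage times of a
# drift diffusion process dx = mu*dt + sigma*dW, starting at x = 0, with
# absorbing boundaries at +a and -a.
import numpy as np

def ddm_first_passage(mu=0.1, sigma=1.0, a=1.0, dt=1e-3, t_max=10.0, n_trials=1000, seed=0):
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        # Integrate until the process hits +a or -a (or the time limit).
        while abs(x) < a and t < t_max:
            x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        choices.append(1 if x >= a else -1)
    return np.array(rts), np.array(choices)

rts, choices = ddm_first_passage()
print("mean RT:", rts.mean(), "fraction hitting the upper bound:", (choices == 1).mean())
```
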
Nov 12 - Bert 2 - McCulloch-Pitts neurons, IF neuron, perceptron; gradient descent rules, logistic regression
  Practical: Nov 14
  Weekly exercises: Handouts ch. 6 Ex. 2*

  Choose one of these two exercises:

  • Perceptron learning rule exercise (a minimal sketch is given below, after this choice of exercises).
    • Write a computer program that implements the perceptron learning rule. Take as data p random input vectors of dimension n with binary components. Take as outputs random assignments to -1 and +1. Take n=50 and test empirically that for p < 2n the rule converges almost always and for p > 2n it converges almost never. Here is the template of a Matlab program.
    • Reconstruct the curve C(p,n) as a function of p for n=50 in the following way: for each p, construct a number of learning problems at random and compute the fraction of these problems for which the perceptron learning rule converges. Plot this fraction versus p.

Or:
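
For the first option, the sketch below shows one way the perceptron learning rule and the convergence experiment could look. It is written in Python for illustration only (the course provides a Matlab template for this exercise); the epoch cap max_epochs and the 20 random problems per value of p are my own choices.

```python
# Minimal sketch (not the course's Matlab template): perceptron learning rule
# on p random binary patterns with random +/-1 labels, n = 50 inputs.
import numpy as np

def perceptron_converges(p, n=50, max_epochs=500, eta=1.0, seed=None):
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(p, n)).astype(float)   # binary input vectors
    y = rng.choice([-1.0, 1.0], size=p)                  # random target labels
    w = np.zeros(n)
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:                   # misclassified pattern
                w += eta * yi * xi                       # perceptron update
                b += eta * yi
                errors += 1
        if errors == 0:
            return True                                  # converged: all patterns correct
    return False                                         # no convergence within max_epochs

# Fraction of random problems solved as a function of p (the curve C(p, n)).
for p in [25, 50, 75, 100, 125, 150]:
    frac = np.mean([perceptron_converges(p, seed=s) for s in range(20)])
    print(f"p = {p:4d}: fraction converged = {frac:.2f}")
```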

Nov 19 - Bert 3 - Sparse visual coding; multi-layered perceptrons; deep neural networks
  Practical: Nov 21
  Take-home exercise: Sparse coding exercise (program template, natural image)

Nov 26 - Bert 4 - Stochastic neural networks, Markov processes; Boltzmann machines
  Practical: Nov 28
  Weekly exercises: Handouts CNS chapter 4, exercise 1a

  Two exercises:

  • Write a computer program to implement the Boltzmann machine learning rule as given on pg. 44 of chapter 4. Use N=10 neurons and generate random binary patterns. Use these data to compute the clamped statistics (x_i x_j)_c and (x_i)_c. Use K=200 learning steps. In each learning step, use T=500 steps of sequential stochastic dynamics to compute the free statistics (x_i x_j) and (x_i). Test the convergence by plotting the size of the change in weights versus iteration. (A minimal sketch is given below, after this list of exercises.)

And:
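
For the first of the two exercises, the sketch below shows one possible implementation of the Boltzmann machine learning rule with N=10, K=200 and T=500 as specified. It is written in Python for illustration only and is not the handout's reference code; the choice of +/-1 units, the learning rate eta and the number of training patterns P are assumptions on my part.

```python
# Minimal sketch (not the handout's code): fully visible Boltzmann machine with
# the learning rule dw_ij ~ (x_i x_j)_clamped - (x_i x_j)_free.
# Assumed here (not fixed by the exercise text): +/-1 units, sequential Glauber
# dynamics, eta = 0.05, P = 20 training patterns.
import numpy as np

rng = np.random.default_rng(0)
N, K, T, eta, P = 10, 200, 500, 0.05, 20

# Random binary (+/-1) training patterns and their clamped statistics.
data = rng.choice([-1.0, 1.0], size=(P, N))
xx_c = data.T @ data / P                 # clamped (x_i x_j)
x_c = data.mean(axis=0)                  # clamped (x_i)

w = np.zeros((N, N))                     # symmetric weights, zero diagonal
th = np.zeros(N)                         # thresholds
dw_norm = []                             # size of the weight change per iteration

x = rng.choice([-1.0, 1.0], size=N)      # state used in the free-running phase
for k in range(K):
    # Free statistics from T sweeps of sequential stochastic dynamics.
    xx_f = np.zeros((N, N))
    x_f = np.zeros(N)
    for t in range(T):
        for i in range(N):
            h = w[i] @ x + th[i]
            p_up = 1.0 / (1.0 + np.exp(-2.0 * h))   # P(x_i = +1 | rest)
            x[i] = 1.0 if rng.random() < p_up else -1.0
        xx_f += np.outer(x, x)
        x_f += x
    xx_f /= T
    x_f /= T

    dw = eta * (xx_c - xx_f)
    np.fill_diagonal(dw, 0.0)            # no self-couplings
    w += dw
    th += eta * (x_c - x_f)
    dw_norm.append(np.abs(dw).sum())

print("first/last weight-change size:", dw_norm[0], dw_norm[-1])
# To test convergence as asked, plot dw_norm versus iteration (e.g. with matplotlib).
```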

Dec 3 - Bert 5 - Introduction to reinforcement learning
  Material: Article Sutton Barto 1990; DA 9.1 and 9.2; Sutton Barto 1990 and DA Ch9 sheets 1-29
  Practical: Dec 5
  Exercises: Reproduce fig. 17 in Sutton Barto 1990; DA 9.5*

Dec 10 - Bert 6 - Control theory and theory of reinforcement learning
  Material: Article Kaelbling 1996, RL Survey; RL Survey and control sheets
  Practical: Dec 12
  Take-home exercise: Construct the Bayesian solution of the two-armed bandit problem using dynamic programming (a minimal sketch is given below).

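One way to set this exercise up (an assumption on my part, not necessarily the intended formulation) is a Bernoulli two-armed bandit with Beta(1,1) priors and a finite horizon, solved by backward induction over the posterior success/failure counts. The Python sketch below illustrates that formulation; the horizon H=10 is an arbitrary choice.

```python
# Minimal sketch (one possible formulation, not a reference solution): Bayesian
# two-armed Bernoulli bandit, Beta(1,1) priors, solved by dynamic programming
# over the posterior counts. State = (s1, f1, s2, f2): successes/failures per arm.
from functools import lru_cache

H = 10  # total number of pulls (assumed horizon)

@lru_cache(maxsize=None)
def value(s1, f1, s2, f2):
    """Expected total future reward under the Bayes-optimal policy."""
    if s1 + f1 + s2 + f2 == H:
        return 0.0
    # Posterior mean success probability of each arm under a Beta(1+s, 1+f) posterior.
    p1 = (s1 + 1) / (s1 + f1 + 2)
    p2 = (s2 + 1) / (s2 + f2 + 2)
    # Expected return of pulling arm 1 or arm 2 now, then continuing optimally.
    q1 = p1 * (1 + value(s1 + 1, f1, s2, f2)) + (1 - p1) * value(s1, f1 + 1, s2, f2)
    q2 = p2 * (1 + value(s1, f1, s2 + 1, f2)) + (1 - p2) * value(s1, f1, s2, f2 + 1)
    return max(q1, q2)

print("expected total reward over", H, "pulls:", value(0, 0, 0, 0))
```
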
Dec 17 - Bert 7 - Reinforcement learning
  Material: Dayan and Abbott 9.3 (except The Direct Actor); Dayan and Abbott 9.4 (for the sections Policy Evaluation, Policy Improvement and Generalization, use Kaelbling instead); RL Survey and control sheets 50, 51; Sutton Barto 1990 and DA Ch9 sheets 19 to 35
  Practical: Dec 19
  Exercises: DA 9.6, DA 9.8 and DA 9.9*; reproduce the reinforcement learning result presented in fig. 3 of Foster et al. (the more complex coordinate-based navigation model also discussed in the paper is not required).

*: Only the exercises that are indicated above, not all exercises displayed when clicking the link.

Course material - Paul Tiesinga part

Check Brightspace for changes

Background information - C. Gielen

Non-linear Dynamics

Neural coding

Information theory

Topographic maps

Models


Course material - Bert Kappen part

The aim of this course is to provide an overview of some key theoretical concepts commonly used in computational neuroscience. Topics and course material:

If time permits (it probably won't):


Grading, final assignments and examination

Exercises

Exercises are to be handed in one week after the assignment has been given. Exercises handed in on time contribute to a total average on a scale of 0-1, which is added to the final grade.

Click here for tips for handing in your assignments.

Examination of the Paul Tiesinga part

This part has a written exam, counting for a maximum of 5 points.

Examination of the Bert Kappen part

There is no written exam. Instead, you are asked to write computer programs and reports for the computer exercises listed in the column "Take home exercises" of the Overview of lectures and assignments. These final assignments count for a maximum of 5 points.

For each exercise, hand in code that can be run stand-alone. In addition, write a report for each exercise:

Hand in the final result of your examination before the end of January 2019.