Illing, Gerstner & Brea (2019) - Biologically Plausible Deep Learning - but how far can we go with shallow networks?

You can find the paper here.

Why am I reading this paper?

Stumbled upon this paper while researching how the delta rule is actually implemented in SNNs: e.g., which error signal do we consider? How do we summarize the activity of the output layer over time into a single value to compute the loss?


Motivation


Main Contribution

Interesting comparison study contrasting a simple yet promising model trained with a biologically plausible local learning rule against deep learning models trained with backprop.

The main question the authors try to address is the following:

Given a single-hidden-layer network and a biologically plausible, spike-based, local learning rule, how well can we perform on standard classification tasks compared to rate-based networks trained with backprop?


Methods

[Figure: simulated networks]

Local Supervised Learning Rule

The supervised learning rule used for the SNN is a supervised delta rule implemented via STDP.
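To make the idea concrete, here is a minimal rate-based sketch of a supervised delta-rule update on the readout weights of a single-hidden-layer network. All names (`eta`, `W_out`, `delta_rule_step`, the layer sizes) are my own illustration, not the paper's notation, and the hidden activity stands in for time-averaged spike counts; the actual paper implements this update through STDP on spikes.

```python
import numpy as np

rng = np.random.default_rng(0)

n_hidden, n_classes = 100, 10
W_out = rng.normal(0.0, 0.1, size=(n_classes, n_hidden))
eta = 0.01  # learning rate (illustrative value)

def delta_rule_step(W_out, h, target):
    """One delta-rule update: dW = eta * (target - output) * h^T.

    h      -- hidden-layer activity (rate proxy for spike counts)
    target -- one-hot desired output
    """
    output = W_out @ h        # linear readout activity
    error = target - output   # per-class error signal
    return W_out + eta * np.outer(error, h)

# Toy example: random hidden activity and a one-hot target for class 3.
h = rng.random(n_hidden)
target = np.zeros(n_classes)
target[3] = 1.0

W_new = delta_rule_step(W_out, h, target)
```

Because the update moves each readout weight in proportion to the presynaptic activity and the postsynaptic error, a single step shrinks the output error on that pattern, which is the property the spike-based version aims to reproduce locally.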


Results

A summary of network performance on MNIST for biologically plausible DL models can be found in the table below:

[Table: MNIST performance summary]