Supervised learning based on temporal coding in spiking neural networks
Introduction
Artificial neural networks (ANNs), however, are fundamentally different from
spiking networks. Unlike ANN neurons, whose outputs are analog-valued, spiking
neurons communicate using all-or-nothing discrete spikes. A spike triggers a
trace of synaptic current in the target neuron.
While backpropagation is a well-developed general technique for training
feedforward ANNs, there is no general technique for training feedforward
spiking neural networks.
In a stochastic formulation, the goal is to maximize the likelihood of an entire
output spike pattern. The stochastic formulation is needed to 'smear out' the
discrete nature of the spike and to work instead with spike generation
probabilities that depend smoothly on network parameters and are thus more
suitable for gradient descent learning.
In this paper, we develop a direct training approach that does not try to reduce
spiking networks to conventional ANNs. Instead, we relate the time of any spike
differentiably to the times of all spikes that had a causal influence on its
generation. We can then impose any differentiable cost function on the spike
times of the network and minimize this cost function directly through gradient
descent.
Network Model
Membrane Dynamics: The membrane potential V_j of neuron j is described by a
differential equation whose right-hand side is the synaptic current (which is
determined by the weights):

dV_j/dt = Σ_i w_i · Σ_r κ(t − t_i^r), with kernel κ(x) = Θ(x) · exp(−x/τ_syn)

where t_i^r is the time of the r-th spike from input neuron i and Θ is the
Heaviside step function. Synaptic current thus jumps instantaneously on the
arrival of an input spike, then decays exponentially with time constant τ_syn.
Spiking Behaviour: A neuron spikes when its membrane potential crosses a
firing threshold (set to 1 here). After spiking, the membrane potential is
reset to 0. The model allows the membrane potential to go below zero if the
integral of the synaptic current is negative.
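As a minimal simulation sketch of these dynamics (Euler integration; τ_syn,
the input spike times, and the weights below are illustrative values, not
taken from the paper):

```python
import numpy as np

TAU_SYN = 1.0    # synaptic current decay time constant (illustrative)
THRESHOLD = 1.0  # firing threshold, as in the text
DT = 1e-3        # Euler integration step

def simulate(input_times, weights, t_max=5.0):
    """Integrate dV/dt = synaptic current; spike and reset at threshold."""
    v = 0.0
    out_spikes = []
    for t in np.arange(0.0, t_max, DT):
        # each input spike contributes a current that jumps at t_i and
        # then decays exponentially with time constant TAU_SYN
        current = sum(w * np.exp(-(t - ti) / TAU_SYN)
                      for ti, w in zip(input_times, weights) if t >= ti)
        v += current * DT
        if v >= THRESHOLD:
            out_spikes.append(t)
            v = 0.0  # reset; v can later go below 0 if net current < 0
    return out_spikes

print(simulate(input_times=[0.1, 0.4], weights=[1.2, 0.9]))
```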
Initial Equation: Integrating these dynamics gives the membrane potential for
a neuron receiving N spikes at times t_i with weights w_i (with τ_syn = 1 as
in the paper):

V_mem(t) = Σ_{i=1..N} Θ(t − t_i) · w_i · (1 − exp(−(t − t_i)))
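The same closed-form expression in code (a sketch; the function name and
arguments are my own, and τ_syn is kept general with the paper's choice
τ_syn = 1 as the default):

```python
import numpy as np

def v_mem(t, input_times, weights, tau_syn=1.0):
    """Closed-form membrane potential at time t: each input spike i
    contributes w_i * tau_syn * (1 - exp(-(t - t_i)/tau_syn)) once
    t >= t_i; before its arrival it contributes nothing."""
    t_i = np.asarray(input_times, dtype=float)
    w = np.asarray(weights, dtype=float)
    active = t >= t_i  # Heaviside step: only past spikes contribute
    return float(np.sum(w[active] * tau_syn
                        * (1.0 - np.exp(-(t - t_i[active]) / tau_syn))))
```

For large t, each input's contribution saturates at w_i · τ_syn, so whether
the threshold can ever be reached is governed by the total weight; this is
where the condition below that the weights sum to more than 1 comes from.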
In a feedforward spiking network that uses a temporal coding scheme where
information is encoded in spike times instead of spike rates, the network input-
output relation is differentiable almost everywhere.
The neuron spikes at time t_out when its membrane potential reaches the firing
threshold (set to 1):

Σ_{i∈C} w_i · (1 − exp(−(t_out − t_i))) = 1

where C is the causal set of input spikes that arrive before t_out.
Taking exponentials simplifies the calculation: with the substitution
z = exp(t), the threshold condition becomes linear and can be solved for the
output spike time:

z_out = (Σ_{i∈C} w_i · z_i) / (Σ_{i∈C} w_i − 1)

The sum of the causal weights needs to be greater than 1, which ensures that
z_out = exp(t_out) is always positive and that the membrane potential actually
reaches the threshold.
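A sketch of how this spike time can be computed (assuming each input neuron
spikes exactly once; the causal set C is found by scanning inputs in temporal
order and checking consistency, and the function and variable names are my
own, not from the paper):

```python
import numpy as np

def spike_time(input_times, weights):
    """Output spike time via the linear z-domain relation, z = exp(t)."""
    order = np.argsort(input_times)
    t = np.asarray(input_times, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    z = np.exp(t)
    for k in range(1, len(t) + 1):  # grow the candidate causal set
        w_sum = w[:k].sum()
        if w_sum <= 1.0:
            continue  # causal weights must sum to more than 1
        z_out = np.dot(w[:k], z[:k]) / (w_sum - 1.0)
        t_out = np.log(z_out)
        # consistency check: all k inputs precede the spike, and the
        # next input (if any) arrives only after it
        if t_out >= t[k - 1] and (k == len(t) or t_out <= t[k]):
            return t_out
    return np.inf  # threshold is never reached
```

Differentiating the ratio gives dz_out/dw_i = (z_i − z_out) / (Σ_{j∈C} w_j − 1),
so the output spike time depends differentiably on the weights and on the
input spike times, which is exactly what direct gradient descent on spike
times requires.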