PySNN Documentation

pysnn.learning

class pysnn.learning.LearningRule(layers, defaults)

Base class for correlation based learning rules in spiking neural networks.

Parameters
  • layers – An iterable or dict of dicts, where each inner dict contains a pysnn.Connection state dict, a pre-synaptic pysnn.Neuron state dict, and a post-synaptic pysnn.Neuron state dict that together form a single layer. The states of these objects are used for optimizing the weights. During initialization, a learning rule that inherits from this class is expected to select only the parameters it needs from these objects. The outer iterable or dict contains groups that use the same parameters during training, analogous to PyTorch optimizer parameter groups; see the sketch after this parameter list.

  • defaults – A dict containing default hyperparameters. This is a placeholder for possible future extensions; these groups would work exactly the same as those of PyTorch optimizers.
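
A minimal sketch of how such a layers argument might be assembled. The key names ("connection", "pre_neuron", "post_neuron") and the placeholder state dicts below are assumptions for illustration; in practice each state dict would come from the corresponding pysnn.Connection or pysnn.Neuron object.

    from collections import OrderedDict

    import torch

    # Placeholder state dicts; key names and contents are illustrative
    # assumptions, not the exact format required by pysnn.
    connection_state = {"weight": torch.rand(10, 5)}
    pre_neuron_state = {"spikes": torch.zeros(1, 5, dtype=torch.bool)}
    post_neuron_state = {"spikes": torch.zeros(1, 10, dtype=torch.bool)}

    layers = OrderedDict(
        layer1={
            "connection": connection_state,
            "pre_neuron": pre_neuron_state,
            "post_neuron": post_neuron_state,
        }
    )

    # defaults holds shared hyperparameters, analogous to PyTorch optimizer defaults.
    defaults = {"lr": 1e-4}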

update_state()

Update state parameters of LearningRule based on the latest network forward pass.

step()

Performs a single learning step.

reset_state()

Reset state parameters of LearningRule.

check_layers(layers)

Check if layers provided to constructor are of the right format.

Parameters

layers – OrderedDict containing state dicts for each layer.

pre_mult_post(pre, post, con_type)

Multiply a presynaptic term with a postsynaptic term, in the following order: pre x post.

The outcome of this operation preserves the batch dimension and is directly broadcastable with the weight of the connection.

This operation differs between Linear and Convolutional connections; an illustration follows the return value below.

Parameters
  • pre – Presynaptic term

  • post – Postsynaptic term

  • con_type – Connection type, supports Linear and Conv2d

Returns

Tensor broadcastable with the weight of the connection
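
For the Linear case this amounts to a batched outer product. A rough illustration in plain torch, assuming a weight of shape (out_features, in_features) as in torch.nn.Linear; the exact axis order used internally may differ:

    import torch

    batch, n_in, n_out = 4, 5, 10
    pre = torch.rand(batch, n_in)    # presynaptic term, e.g. spikes or traces
    post = torch.rand(batch, n_out)  # postsynaptic term

    # Per-sample outer product: (batch, n_out, n_in), directly broadcastable
    # with a weight of shape (n_out, n_in).
    correlation = post.unsqueeze(2) * pre.unsqueeze(1)
    assert correlation.shape == (batch, n_out, n_in)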

reduce_connections(tensor, con_type, red_method=torch.mean)

Reduces the tensor along the dimensions that represent separate connections belonging to a single element of the weight Tensor.

The function used for reducing has to be a callable that can be applied to single axes of a tensor.

This operation differs between Linear and Convolutional connections. For Linear, only the batch dimension (dim 0) is reduced. For Conv2d, the batch dimension (dim 0) and the kernel-multiplication dimension (dim 3) are reduced; an illustration follows the return value below.

Parameters
  • tensor – Tensor that will be reduced

  • con_type – Connection type, supports Linear and Conv2d

  • red_method – Method used to reduce each dimension

Returns

Reduced Tensor
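
An illustration of the reductions described above, written with plain torch operations; the tensor shapes, in particular the Conv2d layout, are assumptions for the sake of the example:

    import torch

    # Linear: only the batch dimension (dim 0) is reduced.
    linear_corr = torch.rand(4, 10, 5)        # (batch, out_features, in_features)
    linear_update = linear_corr.mean(dim=0)   # -> (10, 5), same shape as the weight

    # Conv2d: the batch dimension (dim 0) and the kernel-multiplication
    # dimension (dim 3) are reduced, one axis at a time.
    conv_corr = torch.rand(4, 16, 27, 100)    # (batch, out_channels, kernel_elements, multiplications)
    conv_update = conv_corr.mean(dim=3).mean(dim=0)   # -> (16, 27)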

class pysnn.learning.MSTDPET(layers, a_pre, a_post, lr, e_trace_decay)

Apply MSTDPET (Florian, 2007) to the provided connections.

Uses a single, scalar reward value. The update rule can be applied at any desired time step; a usage sketch follows the step() description below.

Parameters
  • layers – OrderedDict containing state dicts for each layer.

  • a_pre – Scaling factor for the influence of presynaptic spikes on the eligibility trace.

  • a_post – Scaling factor for the influence of postsynaptic spikes on the eligibility trace.

  • lr – Learning rate.

  • e_trace_decay – Decay factor for the eligibility trace.

update_state()

Update the eligibility trace based on pre- and postsynaptic spiking activity.

This function has to be called manually at desired times, often after each timestep.

reset_state()

Reset state parameters of LearningRule.

step(reward)

Performs a single learning step.

Parameters

reward – Scalar reward value.
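
A hedged usage sketch tying update_state(), step(), and reset_state() together. The layers argument, hyperparameter values, and reward signal are placeholders (see the LearningRule sketch above for an assumed layers format), and the per-timestep network forward pass is elided:

    from pysnn.learning import MSTDPET

    # `layers` assembled as sketched for LearningRule; hyperparameters are illustrative.
    rule = MSTDPET(layers, a_pre=1.0, a_post=1.0, lr=1e-4, e_trace_decay=0.99)

    for t in range(100):
        # ... run one network forward pass for this timestep here ...
        rule.update_state()   # fold this timestep's spikes into the eligibility trace
        reward = 0.0          # placeholder for the task's scalar reward signal
        rule.step(reward)     # apply the reward-modulated weight update

    rule.reset_state()        # clear traces, e.g. before the next episode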

class pysnn.learning.FedeSTDP(layers, lr, w_init, a)

STDP variant from Paredes-Vallés et al.; performs a mean operation over the batch dimension before the weight update. A usage sketch follows the step() description below.

Defined in “Unsupervised Learning of a Hierarchical Spiking Neural Network for Optical Flow Estimation: From Events to Global Motion Perception” - F. Paredes-Vallés, et al.

Parameters
  • layers – OrderedDict containing state dicts for each layer.

  • lr – Learning rate.

  • w_init – Initialization/reference value for all weights.

  • a – Stability parameter, a < 1.

step()

Performs a single learning step.
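
A similarly hedged sketch for FedeSTDP; the layers argument and hyperparameter values are placeholders:

    from pysnn.learning import FedeSTDP

    # `layers` assembled as sketched for LearningRule; values are illustrative.
    rule = FedeSTDP(layers, lr=1e-4, w_init=0.5, a=0.5)

    for t in range(100):
        # ... run one network forward pass for this timestep here ...
        rule.step()   # unsupervised weight update based on pre/post spike traces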