Parallel Algorithms for Reservoir Computing with Diverse Spiking Neurons

Title: Parallel Algorithms for Reservoir Computing with Diverse Spiking Neurons
Status: ongoing
Master: Technical Artificial Intelligence
Student name: Leszek Ślażyński
Dates
Start: 2012/02/01
End: 2012/07/31
Supervision
Supervisor: Zoltán Szlávik
Second supervisor: Sander Bohte
Second reader: Evert Haasdijk
Company: CWI
Thesis: Media:Thesis.pdf
Poster: Media:Posternaam.pdf

Signature supervisor: ..................................

Abstract

A Spiking Neural Network (SNN) is a new class of artificial neural network modeled closely after real biological neurons. Networks of such neurons are particularly suitable for learning and predicting non-linear dynamical systems, for example in Liquid State Machines and Reservoir Computing models. However, simulating spiking neurons is computationally expensive, which limits current non-supercomputer efforts to relatively small networks.

To run large neural networks in Reservoir Computing, in particular with recently developed fractionally predictive spiking neurons, we are looking for algorithmic implementations that can run (and learn) on fast multi-core GPUs. Unlike many standard applications, the simulation of spiking neurons is inherently parallel and mostly localized. State-of-the-art work on bringing the simulation of spiking neurons to the GPU shows great promise, with speedups of roughly 25x. What is still lacking is the implementation of more diverse spiking neuron models, as well as fast parallel implementations of learning rules. Here, the aim is to develop these more diverse models and learning rules within a Reservoir Computing neural network, using the CUDA-capable Tesla GPU boards at the national supercomputer facility SARA and the Little Green Machine at Leiden University.
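
To make the kind of fine-grained parallelism meant here concrete, below is a minimal CUDA sketch in which one thread advances one neuron per time step. The leaky integrate-and-fire dynamics, the parameter values, and names such as lif_step are illustrative assumptions, not the implementation developed in this project.

// Sketch: one thread updates one neuron's state for a single time step.
// The LIF dynamics and all constants here are illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void lif_step(float *v, const float *i_syn, unsigned char *spiked,
                         int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    const float tau = 20.0f;     // membrane time constant (ms), illustrative
    const float v_thresh = 1.0f; // firing threshold, illustrative
    const float v_reset = 0.0f;  // reset potential

    // Forward-Euler update of the membrane potential (one small step of the
    // underlying differential equation).
    float vi = v[i] + dt * (-v[i] / tau + i_syn[i]);

    spiked[i] = vi >= v_thresh;          // all-or-none spike decision
    v[i] = spiked[i] ? v_reset : vi;     // reset after a spike
}

int main()
{
    const int n = 1024;
    const float dt = 0.1f; // ms

    float *v, *i_syn;
    unsigned char *spiked;
    cudaMallocManaged(&v, n * sizeof(float));
    cudaMallocManaged(&i_syn, n * sizeof(float));
    cudaMallocManaged(&spiked, n * sizeof(unsigned char));

    for (int i = 0; i < n; ++i) { v[i] = 0.0f; i_syn[i] = 0.05f; }

    // All neurons are advanced in parallel, one thread per neuron.
    for (int step = 0; step < 1000; ++step)
        lif_step<<<(n + 255) / 256, 256>>>(v, i_syn, spiked, n, dt);
    cudaDeviceSynchronize();

    printf("v[0] after 100 ms: %f\n", v[0]);
    return 0;
}

Because each neuron's update depends only on its own state and its summed synaptic input, all neurons can be advanced within a single kernel launch.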

Abstract KIM 2

The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation, previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms that fully exploit this particular architecture.

Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given the inputs, the internal state of each neuron can be updated in parallel. Here, we show that for filter-based spiking neurons, such as the Spike Response Model, the additive nature of the membrane potential dynamics allows additional update parallelism. Such a formulation also means that, given the input spikes, the numerical error in the membrane potential does not accumulate over time as it does in models based on differential equations. This makes the simulation of filter-based spiking neurons more suitable for single-precision computation, the much faster native precision of GPUs.
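
As a rough illustration of this additive, filter-based formulation (a sketch under simplifying assumptions, not the project's code), the fragment below recomputes each neuron's membrane potential directly from a buffer of recent weighted input spikes. The exponential kernel, the fixed buffer size, and all names are hypothetical.

// Sketch: filter-based (SRM-style) membrane potential evaluation.
// The potential is recomputed from recent spike times each step, so no
// integration error accumulates; kernel shape and sizes are illustrative.
#include <cstdio>
#include <cuda_runtime.h>

#define MAX_SPIKES 32   // recent input spikes kept per neuron (illustrative)

// Post-synaptic potential kernel: a simple exponential decay, standing in
// for the alpha/double-exponential kernels of the Spike Response Model.
__device__ float psp_kernel(float s, float tau)
{
    return (s >= 0.0f) ? expf(-s / tau) : 0.0f;
}

__global__ void srm_potential(const float *spike_times, const float *weights,
                              const int *num_spikes, float *u,
                              int n, float t_now)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    const float tau = 10.0f; // PSP time constant (ms), illustrative
    float ui = 0.0f;

    // Additive dynamics: the potential is a weighted sum of kernel responses
    // to the recent input spikes of neuron i. Each term is independent, which
    // also exposes parallelism within a single neuron's update.
    for (int k = 0; k < num_spikes[i]; ++k) {
        float s = t_now - spike_times[i * MAX_SPIKES + k];
        ui += weights[i * MAX_SPIKES + k] * psp_kernel(s, tau);
    }
    u[i] = ui;
}

int main()
{
    const int n = 4;
    float *spike_times, *weights, *u;
    int *num_spikes;
    cudaMallocManaged(&spike_times, n * MAX_SPIKES * sizeof(float));
    cudaMallocManaged(&weights, n * MAX_SPIKES * sizeof(float));
    cudaMallocManaged(&num_spikes, n * sizeof(int));
    cudaMallocManaged(&u, n * sizeof(float));

    // One input spike per neuron, 5 ms in the past, weight 1.
    for (int i = 0; i < n; ++i) {
        num_spikes[i] = 1;
        spike_times[i * MAX_SPIKES] = 0.0f;
        weights[i * MAX_SPIKES] = 1.0f;
    }

    srm_potential<<<1, 32>>>(spike_times, weights, num_spikes, u, n, 5.0f);
    cudaDeviceSynchronize();
    printf("u[0] = %f (expected ~0.607 = exp(-0.5))\n", u[0]);
    return 0;
}

Because the potential is re-evaluated from spike times rather than integrated step by step, rounding errors do not compound across time steps, which is what makes single precision attractive here.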

With GPU-specific optimizations, we can simulate, in better than real time, plausible spiking neural networks comprising up to 50,000 neurons, processing 40 million spiking events per second. We also show how performance scales with different network parameters and properties, reaching on the order of 600 million spiking events per second for settings with higher connectivity and activity.

Abstract KIM 1

A Spiking Neural Network (SNN) is a new class of artificial neural network modeled closely after real biological neurons. Spiking neurons differ significantly from traditional neuron models in that they send impulses called spikes at specific times, in an all-or-none fashion. The concept of time is thereby introduced into the computation. It has been shown that such an approach better captures the nature of the computation taking place in a real brain.

Due to the complexity of the neuron models and their operation, and the often complex connectivity patterns, simulating such a network is computationally intensive, effectively bounding the size of the networks that can be simulated. As the simulation is also inherently parallel and localized, it can benefit from massively parallel architectures such as modern Graphics Processing Units (GPUs).

One application of large SNNs is the Liquid State Machine, a type of Reservoir Computing. In this machine learning model, a large recurrent neural network called the liquid or reservoir is used as a dynamical system that effectively maps the input into a higher-dimensional space. The reservoir itself is not trained; a separate, trained readout mechanism maps the reservoir state to the output.
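
As a minimal sketch of the readout side, assuming a simple linear readout applied to a reservoir state vector (for instance, low-pass filtered spike trains), the fragment below maps a state x to outputs y = W x. The weights W would come from a separate, offline training step such as linear regression; dimensions and names are illustrative.

// Sketch: a linear readout mapping the reservoir state to the output.
// The reservoir itself is fixed; only the readout weights W are trained
// (e.g. by linear regression on recorded reservoir states, done offline).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void readout(const float *W, const float *x, float *y,
                        int n_out, int n_res)
{
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= n_out) return;

    // y = W x : each thread computes one output component from the
    // reservoir state x.
    float acc = 0.0f;
    for (int j = 0; j < n_res; ++j)
        acc += W[k * n_res + j] * x[j];
    y[k] = acc;
}

int main()
{
    const int n_res = 8, n_out = 2;
    float *W, *x, *y;
    cudaMallocManaged(&W, n_out * n_res * sizeof(float));
    cudaMallocManaged(&x, n_res * sizeof(float));
    cudaMallocManaged(&y, n_out * sizeof(float));

    // Toy weights and state; in practice W comes from offline training.
    for (int j = 0; j < n_res; ++j) x[j] = 1.0f;
    for (int k = 0; k < n_out; ++k)
        for (int j = 0; j < n_res; ++j)
            W[k * n_res + j] = (k == 0) ? 0.125f : -0.125f;

    readout<<<1, 32>>>(W, x, y, n_out, n_res);
    cudaDeviceSynchronize();
    printf("y = [%f, %f]\n", y[0], y[1]);  // expect [1, -1]
    return 0;
}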

This project aims to develop algorithms for efficient parallel SNN simulation and learning on modern GPUs. Large networks based on the state-of-the-art GLM computational neuron model will also be validated in a Reservoir Computing setting.