Details

Presenter(s)
Tengxiao Wang (Chongqing University)
    Abstract

Error back-propagation (BP) in deep spiking neural networks (SNNs) involves complex operations and incurs high layer-by-layer latency. To overcome these problems, we propose a method to train deep SNNs efficiently and rapidly by extending the well-known single-layer Tempotron learning rule to multiple SNN layers under the Direct Feedback Alignment (DFA) framework, which projects output errors directly onto each hidden layer through a fixed random feedback matrix. We also propose a trace-based optimization of Tempotron learning. With these two techniques, the learning process becomes spatiotemporally local and well suited to neuromorphic hardware implementation. We applied the proposed hardware-friendly method to train multi-layer and deep SNNs and obtained competitive recognition accuracy on the MNIST dataset.
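The DFA idea in the abstract can be stated in a few lines. Below is a minimal sketch of one DFA weight update for a conventional two-hidden-layer network with a squared-error readout. Note this is an illustration of the generic DFA framework only: the paper applies DFA to spiking Tempotron units with trace-based updates, so the tanh activation, layer sizes, and all names here are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

# Illustrative sketch of a Direct Feedback Alignment (DFA) update.
# All sizes, names, and the tanh nonlinearity are assumptions for
# demonstration; the paper uses spiking Tempotron neurons instead.

rng = np.random.default_rng(0)

n_in, n_h1, n_h2, n_out = 784, 256, 128, 10
lr = 0.01

# Forward weights (trained).
W1 = rng.normal(0.0, 0.1, (n_h1, n_in))
W2 = rng.normal(0.0, 0.1, (n_h2, n_h1))
W3 = rng.normal(0.0, 0.1, (n_out, n_h2))
# Fixed random feedback matrices (never updated).
B1 = rng.normal(0.0, 0.1, (n_h1, n_out))
B2 = rng.normal(0.0, 0.1, (n_h2, n_out))

def act(x):
    return np.tanh(x)

def dact(x):
    return 1.0 - np.tanh(x) ** 2

def dfa_step(x, target):
    """One DFA update: the output error is broadcast to every hidden
    layer through a fixed random matrix, not back-propagated layer by
    layer, so all layer updates can be computed in parallel."""
    global W1, W2, W3
    # Forward pass.
    a1 = W1 @ x
    h1 = act(a1)
    a2 = W2 @ h1
    h2 = act(a2)
    y = W3 @ h2          # linear readout
    e = y - target       # output error

    # DFA: each hidden layer receives the output error via its own
    # fixed random feedback matrix.
    d1 = (B1 @ e) * dact(a1)
    d2 = (B2 @ e) * dact(a2)

    # Local outer-product updates using each layer's own input.
    W1 -= lr * np.outer(d1, x)
    W2 -= lr * np.outer(d2, h1)
    W3 -= lr * np.outer(e, h2)
```

Because no hidden layer waits on the error signal of the layer above it, the update for every layer is spatially local, which is the property the abstract highlights as friendly to neuromorphic hardware.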

    Slides
    • DeepTempo: A Hardware-Friendly Direct Feedback Alignment Multi-Layer Tempotron Learning Rule for Deep Spiking Neural Networks (PDF)