Details
![Tengxiao Wang Headshot](https://confcats-catavault.s3.amazonaws.com/CATAVault/ieeecass/master/files/styles/cc_user_photo/s3/user-pictures/11292.jpg?h=ff2033e8&itok=kegsUBhX)
- Affiliation: Chongqing University
Error back-propagation (BP) in deep spiking neural networks (SNNs) involves complex operations and high layer-by-layer latency. To overcome these problems, we propose a method to train deep SNNs efficiently and rapidly by extending the well-known single-layer Tempotron learning rule to multiple SNN layers under the Direct Feedback Alignment (DFA) framework, which directly projects output errors onto each hidden layer through a fixed random feedback matrix. We also propose a trace-based optimization of Tempotron learning. With these two techniques, the learning process becomes spatiotemporally local and well suited to neuromorphic hardware implementation. We applied the proposed hardware-friendly method to train multi-layer and deep SNNs and obtained comparably high recognition accuracies on the MNIST dataset.
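The core idea of Direct Feedback Alignment can be sketched with a minimal non-spiking example: each hidden layer receives the output error through a fixed random matrix instead of the transposed forward weights, so every update uses only local activity plus a globally broadcast error. The layer sizes, learning rate, and ReLU/linear units below are illustrative assumptions, not details from the abstract (the actual method uses Tempotron-style spiking updates).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and learning rate (illustrative assumptions)
n_in, n_hid, n_out = 20, 10, 4
lr = 0.05

# Forward weights (trained) and a fixed random feedback matrix B
# that projects the output error directly onto the hidden layer.
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
B = rng.normal(0.0, 0.1, (n_hid, n_out))  # fixed, never trained

def relu(x):
    return np.maximum(x, 0.0)

def dfa_step(x, target):
    """One DFA training step on a two-layer network.

    Instead of back-propagating the error through W2.T, the output
    error e is sent to the hidden layer via the fixed matrix B, so
    each layer's update depends only on its own inputs/activity and
    the broadcast error signal.
    """
    global W1, W2
    h = relu(W1 @ x)          # hidden activity
    y = W2 @ h                # linear readout
    e = y - target            # output error

    # Output layer: standard delta rule (purely local)
    W2 -= lr * np.outer(e, h)

    # Hidden layer: error projected through fixed random B,
    # gated by the local ReLU derivative
    delta_h = (B @ e) * (h > 0)
    W1 -= lr * np.outer(delta_h, x)
    return float(0.5 * np.dot(e, e))

# Usage: the loss on a fixed toy example shrinks over repeated steps
x = rng.normal(size=n_in)
t = np.zeros(n_out)
t[3] = 1.0
losses = [dfa_step(x, t) for _ in range(50)]
```

Because B is fixed and random, no weight transport between layers is needed, which is what makes the scheme attractive for neuromorphic hardware.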