Details
Presenter(s)
![Christopher Bennett Headshot](https://confcats-catavault.s3.amazonaws.com/CATAVault/ieeecass/master/files/styles/cc_user_photo/s3/user-pictures/25905.jpg?h=a7ffc51c&itok=yp8b9Ls_)
Christopher Bennett
- Affiliation: Sandia National Laboratories
Abstract
Machine learning typically implements backpropagation, which requires abundant gradient storage and training samples. We demonstrate a multi-stage learning system realized by the physics of a promising non-volatile memory device, the domain-wall magnetic tunnel junction (DW-MTJ). The system combines unsupervised (clustering) and supervised sub-systems, and generalizes quickly. We demonstrate interactions between the physical properties of this device and the optimal implementation of neuroscience-inspired plasticity learning rules, and highlight performance on a suite of tasks. Our energy analysis confirms the value of the approach: the learning budget stays below 20 microjoules (µJ) even for large tasks typical of machine learning.
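The multi-stage structure described in the abstract (an unsupervised clustering front end feeding a supervised readout) can be sketched in software. The example below is a hypothetical illustration of that two-stage pattern using plain k-means and a least-squares readout; it does not model DW-MTJ device physics or the plasticity rules used in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(x, k, iters=20):
    """Simple k-means with deterministic farthest-point initialization."""
    centers = x[:1].copy()
    for _ in range(k - 1):
        # Pick the point farthest from all existing centers.
        d = np.linalg.norm(x[:, None] - centers[None], axis=2).min(axis=1)
        centers = np.vstack([centers, x[d.argmax()]])
    for _ in range(iters):
        # Lloyd iteration: assign samples, then recompute cluster means.
        d = np.linalg.norm(x[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return centers

def features(x, centers):
    """Unsupervised stage: one distance-based feature per cluster."""
    return -np.linalg.norm(x[:, None] - centers[None], axis=2)

# Toy two-class data: two well-separated Gaussian blobs.
x0 = rng.normal(-2.0, 0.5, size=(50, 2))
x1 = rng.normal(+2.0, 0.5, size=(50, 2))
x = np.vstack([x0, x1])
y = np.array([0] * 50 + [1] * 50)

# Stage 1 (unsupervised): cluster the inputs without labels.
centers = kmeans(x, k=2)
f = features(x, centers)

# Stage 2 (supervised): least-squares readout on the clustered features.
phi = np.c_[f, np.ones(len(f))]
w, *_ = np.linalg.lstsq(phi, y, rcond=None)
pred = (phi @ w > 0.5).astype(int)
accuracy = (pred == y).mean()
```

Because the unsupervised stage compresses the raw input into a few cluster features, the supervised stage only has to fit a small readout, which is one reason such systems can generalize quickly from few samples.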