Video s3
    Presenter(s)
    Christopher Bennett

    Affiliation
    Sandia National Laboratories

    Abstract

    Conventional machine learning implements backpropagation at the cost of abundant gradient storage and training samples. We demonstrate a multi-stage learning system realized by the physics of a promising non-volatile memory device, the domain-wall magnetic tunnel junction (DW-MTJ). The system combines unsupervised (clustering) and supervised sub-systems and generalizes quickly. We demonstrate interactions between the physical properties of this device and the optimal implementation of neuroscience-inspired plasticity learning rules, and highlight performance on a suite of tasks. Our energy analysis confirms the value of the approach: the learning budget stays below 20 microjoules even for large tasks typical of machine learning.
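    The abstract's multi-stage architecture, an unsupervised clustering front-end feeding a supervised output stage trained with local plasticity-style updates, can be sketched in software. This is a hypothetical illustration, not the authors' DW-MTJ implementation: the prototype count, learning rates, and the winner-take-all and local update rules below are all assumptions chosen for clarity.

    ```python
    # Hypothetical sketch of a two-stage learner: an unsupervised
    # clustering stage (competitive, winner-take-all prototypes) feeds a
    # supervised stage trained with a local, plasticity-style update.
    # Not the authors' DW-MTJ system; all hyperparameters are illustrative.
    import random

    random.seed(0)

    def winner(x, protos):
        # Index of the closest prototype (squared Euclidean distance).
        return min(range(len(protos)),
                   key=lambda j: sum((xi - pi) ** 2
                                     for xi, pi in zip(x, protos[j])))

    def train(data, labels, n_protos=4, n_classes=2,
              lr_u=0.1, lr_s=0.5, epochs=20):
        dim = len(data[0])
        # Randomly initialized prototypes (unsupervised stage).
        protos = [[random.random() for _ in range(dim)]
                  for _ in range(n_protos)]
        # Supervised weights from cluster code to class scores.
        w = [[0.0] * n_protos for _ in range(n_classes)]
        for _ in range(epochs):
            for x, y in zip(data, labels):
                j = winner(x, protos)
                # Unsupervised update: move the winning prototype toward x.
                protos[j] = [p + lr_u * (xi - p)
                             for p, xi in zip(protos[j], x)]
                # Supervised update: local rule on the active cluster only,
                # pulling the correct class weight up and others down.
                for c in range(n_classes):
                    target = 1.0 if c == y else 0.0
                    w[c][j] += lr_s * (target - w[c][j])
        return protos, w

    def predict(x, protos, w):
        j = winner(x, protos)
        return max(range(len(w)), key=lambda c: w[c][j])
    ```

    Both updates are local (they touch only the winning prototype and its outgoing weights), which is the property that makes such rules attractive for in-memory device physics like the DW-MTJ.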
