Video s3
    Details
    Presenter(s): Devon Janke
    Affiliation: Georgia Institute of Technology
    Abstract

    When a neural network is trained and used for inference on digital hardware, its performance is normally expected to transfer exactly to new hardware. When the network is implemented in low-power analog systems, there is no such luxury. This paper explores methods for reducing the impact of inter-device variations on the final performance of a neural network. An analysis of the effect of network shape, including the number of layers and the number of neurons per layer, reveals a general pattern for minimizing the variability in accuracy. Several means of reducing overfitting in neural networks are discussed and compared, and a more thorough demonstration is given of training methods that mitigate device overfitting.
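    As a rough illustration of the inter-device variation problem the abstract describes, the sketch below perturbs a fixed set of trained weights with an independent multiplicative mismatch per simulated analog device and measures the resulting spread in accuracy. The toy dataset, the single-neuron model, and the 20% Gaussian mismatch model are assumptions for illustration, not details taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linearly separable data (a stand-in for a real benchmark).
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    # "Ideal" trained weights for a single linear neuron (hand-set here;
    # in practice these would come from training on digital hardware).
    w_ideal = np.array([1.0, 1.0])

    def accuracy(w):
        preds = (X @ w > 0).astype(int)
        return (preds == y).mean()

    def simulate_devices(w, sigma, n_devices=500):
        """Each simulated device applies an independent multiplicative
        perturbation to every weight, mimicking inter-device variation."""
        accs = []
        for _ in range(n_devices):
            w_dev = w * (1.0 + sigma * rng.normal(size=w.shape))
            accs.append(accuracy(w_dev))
        return np.array(accs)

    accs = simulate_devices(w_ideal, sigma=0.2)
    print(f"ideal accuracy: {accuracy(w_ideal):.3f}")
    print(f"mean across devices: {accs.mean():.3f}, std: {accs.std():.4f}")
    ```

    The standard deviation of accuracy across simulated devices is the quantity the paper's design choices (network shape, noise-aware training) aim to shrink; a training loop that samples such perturbations on each forward pass is one common way to make a network robust to them.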

    Slides
    • Best Practices for Designing and Training Neural Networks Subjected to Random Variations (PDF)