Video s3
    Details
    Poster
    Author(s)
    Affiliation: Polytechnique Montréal
    Display Name: David JeanPierre
    Affiliation: Polytechnique Montréal
    Abstract

    Binarized Neural Networks (BNNs) offer the promise of low power and high throughput, but this is difficult to achieve on regular processors. A considerable amount of research has thus been devoted to mapping BNNs to specialized hardware, especially LUT-based FPGAs, at the cost of losing the flexibility of instruction-set processors. This paper introduces a configurable VLIW processor with a specialized instruction set that computes the inference of an artificial binary neuron in a single clock cycle. On the MNIST dataset, the processor achieves a 2994× throughput increase over inference on a base VLIW processor.
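    The single-cycle binary-neuron operation described in the abstract typically amounts to an XNOR of bit-packed inputs and weights, a population count, and a threshold comparison. The sketch below illustrates that computation in software for a 32-input neuron; the function name, word width, and encoding (+1 as bit 1, -1 as bit 0) are illustrative assumptions, not details taken from the paper.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 32-input binarized neuron. Inputs and weights are
     * bit-packed (+1 -> 1, -1 -> 0). XNOR sets a bit wherever input and
     * weight agree (a +1 product); the neuron fires when the popcount
     * reaches a threshold. A specialized instruction could perform the
     * XNOR, popcount, and compare in one clock cycle. */
    static int binary_neuron(uint32_t x, uint32_t w, int threshold)
    {
        uint32_t agree = ~(x ^ w);           /* XNOR: 1 where signs match */
        int pop = __builtin_popcount(agree); /* number of +1 products */
        return pop >= threshold;             /* binary activation */
    }

    int main(void)
    {
        uint32_t x = 0xFFFFFFFFu; /* all 32 inputs are +1 */
        uint32_t w = 0xFFFF0000u; /* upper 16 weights +1, lower 16 -1 */
        printf("%d\n", binary_neuron(x, w, 16)); /* 16 matches >= 16 -> prints 1 */
        return 0;
    }
    ```

    On a conventional processor this takes several instructions per neuron; fusing them into one instruction is what enables the single-cycle inference claimed above.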

    Slides
    • ASIP Accelerator for LUT-Based Neural Networks Inference (application/pdf)