    Presenter(s)
    Luciano Prono (Politecnico di Torino)

    Author(s)
    Luciano Prono (Politecnico di Torino)
    Mauro Mangia (Università di Bologna)
    Fabio Pareschi (Politecnico di Torino / Università di Bologna)
    Riccardo Rovatti (Università di Bologna)
    Gianluca Setti (Politecnico di Torino / Università di Bologna)
    Abstract

    The growing need for small, low-power Deep Neural Networks (DNNs) in edge-computing applications is driving the search for architectures that perform well on low-resource and mobile devices. Many structures have been proposed in the literature to this end, mainly with the aim of reducing the cost introduced by multiply operations. In this work, a special DNN layer based on the novel Sum and Max (SAM) paradigm is proposed. It entirely avoids both multiplications and complex non-linear operations. Furthermore, it is especially amenable to aggressive pruning, so it requires very few parameters to work. The layer is tested on a simple classification task, and its cost is compared with that of a classic DNN layer of equivalent accuracy based on the Multiply and Accumulate (MAC) operation, in order to assess the resource savings that this new structure could introduce.
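
    The abstract does not give the SAM layer's exact formulation, so the following is only a hedged sketch of a multiplication-free layer in the same spirit: a max-plus map in which each output is the maximum over input-plus-weight sums, using only additions and max operations. The function name `sam_layer` and this specific formulation are illustrative assumptions, not the authors' definition.

    ```python
    import numpy as np

    def sam_layer(x, W):
        """Multiplication-free layer sketch (assumed max-plus form,
        not necessarily the authors' SAM definition).

        x: (n_in,) input vector
        W: (n_in, n_out) additive weights
        Returns an (n_out,) output computed with only sums and max.
        """
        # Broadcast (n_in, 1) + (n_in, n_out) -> (n_in, n_out),
        # then take the max over the input dimension.
        return np.max(x[:, None] + W, axis=0)

    rng = np.random.default_rng(0)
    x = rng.standard_normal(8)
    W = rng.standard_normal((8, 4))
    y = sam_layer(x, W)
    print(y.shape)  # (4,)
    ```

    Note that such a layer is naturally prone to pruning in the sense the abstract mentions: any weight small enough never to win the max for any plausible input can be dropped (e.g. set to a very negative value) without changing the output.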

    Slides
    • A Non-Conventional Sum-and-Max Based Neural Network Layer for Low Power Classification (PDF)