Details
- Affiliation: Politecnico di Torino
Currently, the growing need for small, low-power Deep Neural Networks (DNNs) in edge-computing applications is driving the search for architectures that perform well on low-resource and mobile devices. Many structures have been proposed in the literature, mainly aimed at reducing the cost introduced by multiply operations. In this work, a special DNN layer based on the novel Sum and Max (SAM) paradigm is proposed. It avoids multiplications entirely and requires no complex non-linear operations. Furthermore, it is especially amenable to aggressive pruning, so it needs very few parameters to work. The layer is tested on a simple classification task, and its cost is compared with that of a classic DNN layer of equivalent accuracy based on the Multiply and Accumulate (MAC) operation, in order to assess the resource savings that this new structure could introduce.
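To illustrate the idea of a multiplication-free layer, a minimal sketch follows. It assumes (the abstract does not give the exact formulation) that each SAM output replaces the usual multiply-accumulate dot product with a maximum over element-wise sums of input and weight; the function name `sam_layer` and the toy data are hypothetical.

```python
import numpy as np

def sam_layer(x, W):
    """Hypothetical Sum-and-Max (SAM) layer sketch.

    Instead of the multiply-accumulate y_j = sum_i x_i * W[i, j],
    each output is assumed to be a maximum of element-wise sums:
        y_j = max_i (x_i + W[i, j])
    so the layer uses no multiplications and no extra non-linearity.
    """
    # x has shape (n_in,), W has shape (n_in, n_out);
    # broadcasting forms all pairwise sums, then max reduces over inputs.
    return np.max(x[:, None] + W, axis=0)

# Toy usage: 3 inputs, 2 outputs.
x = np.array([0.5, -1.0, 2.0])
W = np.array([[0.1, -0.2],
              [0.3,  0.0],
              [-0.5, 0.4]])
print(sam_layer(x, W))  # → [1.5 2.4]
```

Because the max operation zeroes out the gradient contribution of all but one weight per output, many weights can be pruned with little effect, which is consistent with the abstract's claim that the layer tolerates aggressive pruning.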