    Details

    Presenter(s)
        Yuma Tashiro (Osaka University)

    Author(s)
        Yuma Tashiro (Osaka University)
        Hiromitsu Awano (Kyoto University)
    Abstract

    Modern deep learning algorithms consist of highly complex artificial neural networks, making it extremely difficult for humans to track their inference process. As deep learning is increasingly deployed in society, the human and economic losses caused by inference errors are becoming increasingly problematic, creating a need for methods that explain the basis of a deep learning algorithm's decisions. In the context of automated driving, a method has been proposed that uses an attention mechanism to visualize the regions contributing to steering angle prediction, but its explanatory capability remains low. In this paper, we focus on the difference in the importance of each bit of an activation (the LSBs carry the lowest weight while the MSBs carry the highest) and propose a method that applies attention only to the sign bits to further enhance the explanation. Our numerical experiments on the Udacity dataset show that the proposed method achieves a 33% higher area under the curve (AUC) on the deletion metric.
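
    To make the core idea concrete, here is a minimal PyTorch sketch of attention applied to sign-binarized activations. This is an illustration of the general technique named in the abstract, not the authors' implementation: the class names (SignSTE, SignBitAttention), the sigmoid 1x1-convolution attention map, and the straight-through estimator for the binarization gradient are all assumptions.

    import torch
    import torch.nn as nn


    class SignSTE(torch.autograd.Function):
        # Binarize to the sign bit in the forward pass; pass gradients
        # straight through in the backward pass (straight-through estimator).
        @staticmethod
        def forward(ctx, x):
            return torch.sign(x)

        @staticmethod
        def backward(ctx, grad_output):
            return grad_output


    class SignBitAttention(nn.Module):
        # Compute a spatial attention map and apply it to the binarized
        # (sign-bit) activation rather than the full-precision activation.
        def __init__(self, channels):
            super().__init__()
            self.attn = nn.Conv2d(channels, 1, kernel_size=1)

        def forward(self, x):
            a = torch.sigmoid(self.attn(x))  # attention map in (0, 1)
            s = SignSTE.apply(x)             # -1/+1 sign bits of the activation
            return a * s                     # attention acts only on the sign bits


    feat = torch.randn(1, 16, 8, 8)          # toy feature map
    out = SignBitAttention(16)(feat)
    print(out.shape)                         # torch.Size([1, 16, 8, 8])

    Restricting the attention to the sign bit follows the bit-importance argument in the abstract: the sign bit is the most significant bit of the activation, so weighting it should concentrate the explanation on the information that matters most.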
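    The deletion metric mentioned in the abstract comes from the saliency-evaluation literature: pixels are removed from the input in order of decreasing saliency, and the resulting change in the model's output is integrated into an AUC. Below is a hedged sketch of how such a score could be computed for a scalar steering-angle regressor; the function name deletion_auc, the step count, the zero-fill deletion, and the use of absolute prediction change are assumptions rather than the paper's exact protocol.

    import numpy as np
    import torch


    def deletion_auc(model, image, saliency, steps=50, fill=0.0):
        # image: (C, H, W) tensor; saliency: (H, W) array of importance scores.
        # Delete the most salient pixels first and integrate the change in the
        # model's scalar output over the fraction of pixels removed.
        model.eval()
        _, h, w = image.shape
        order = np.argsort(-saliency.flatten())          # most salient first
        with torch.no_grad():
            base = model(image.unsqueeze(0)).item()      # prediction, full image
        masked = image.clone()
        per_step = int(np.ceil(h * w / steps))
        scores = []
        for i in range(steps + 1):
            with torch.no_grad():
                pred = model(masked.unsqueeze(0)).item()
            scores.append(abs(pred - base))              # deviation from original
            idx = order[i * per_step:(i + 1) * per_step]
            if idx.size:
                ys, xs = np.unravel_index(idx, (h, w))
                masked[:, ys, xs] = fill                 # erase next pixel batch
        # Rectangle-rule approximation of the AUC over the deleted fraction [0, 1].
        return float(np.mean(scores))

    Under this reading, an explanation is better when deleting its highest-ranked pixels perturbs the prediction more strongly, which is consistent with the abstract's claim of a 33% higher deletion AUC for the proposed method.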

    Slides
    • Pay Attention via Binarization: Enhancing Explainability of Neural Networks via Binarization of Activation (PDF)