Video s3
    Details
    Presenter(s)
    Display Name
    Ziwei Wang
    Affiliation
    Imperial College London
    Author(s)
    Display Name
    Ziwei Wang
    Affiliation
    Imperial College London
    Display Name
    Zhiqiang Que
    Affiliation
    Imperial College London
    Display Name
    Wayne Luk
    Affiliation
    Imperial College
    Display Name
    Hongxiang Fan
    Affiliation
    Imperial College London
    Abstract

    Graph Convolutional Networks (GCNs) have become a promising approach to graph data processing and analysis. Nevertheless, the massive number of input graph nodes and the associated computation place a heavy burden on hardware performance, limiting their deployment in real-life applications. To address this performance issue, we propose a customizable architecture for Binarized GCNs (BiGCNs), which can be reconfigured to meet different user needs while maintaining high hardware performance. We overlap binarization and computation to ease the bandwidth requirement of BiGCNs. To further improve hardware performance, we also adopt COO (Coordinate) format storage for the adjacency matrix to skip redundant computation. For the Cora and Flickr datasets, our FPGA design runs 77 and 16 times faster than the corresponding CPU and GPU platforms, respectively.
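
    Illustrative sketch (not part of the talk materials): the minimal NumPy example below, with hypothetical function names, shows the two ideas named in the abstract -- binarizing node features on the fly and iterating only over the non-zero adjacency entries stored in COO (row, column, value) form -- as a software analogue of the dataflow, not the authors' FPGA implementation.

    import numpy as np

    def binarize(x):
        """Sign binarization to {-1, +1}; zeros map to +1 (assumed convention)."""
        return np.where(x >= 0, 1.0, -1.0)

    def coo_aggregate(rows, cols, vals, features):
        """Aggregate A @ X, visiting only the non-zero COO entries of A."""
        num_nodes, feat_dim = features.shape
        out = np.zeros((num_nodes, feat_dim), dtype=features.dtype)
        for r, c, v in zip(rows, cols, vals):
            # Binarize the source node's features just before use, mimicking
            # the overlap of binarization with computation.
            out[r] += v * binarize(features[c])
        return out

    # Toy 4-node graph with edges 0->1, 1->2, 2->0, 3->3 and unit weights.
    rows = np.array([1, 2, 0, 3])
    cols = np.array([0, 1, 2, 3])
    vals = np.ones(4, dtype=np.float32)
    X = np.random.randn(4, 8).astype(np.float32)

    agg = coo_aggregate(rows, cols, vals, X)
    print(agg.shape)  # (4, 8)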

    Slides
    • Customizable FPGA-Based Accelerator for Binarized Graph Neural Networks (application/pdf)