    Author(s)
    Haoyang Wang, Shengbing Zhang, Kaijie Feng, Miao Wang, Zhao Yang (Northwestern Polytechnical University)
    Abstract

    Graph neural network (GNN) operations involve a large number of irregular data accesses and sparse matrix multiplications, leaving computing resources under-utilized. The problem becomes even more complex and challenging for large-graph training. Scaling GNN training is an effective solution; however, current GNN accelerators do not support the mini-batch structure. We analyze the operational characteristics of GNNs from multiple aspects, take into account the acceleration requirements of both GNN training and inference, and propose the SaGNN system architecture. SaGNN offers multiple working modes to provide acceleration for different GNN frameworks while remaining configurable and scalable. Compared to related works, SaGNN delivers a 5.0x improvement in system performance.
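
    The workload pattern the abstract refers to can be illustrated with a minimal sketch (illustrative only, not SaGNN's actual kernels): a single GNN layer interleaves a dense, regular feature transform with a sparse, irregular neighbor aggregation, and it is the gather/scatter over the edge list that under-utilizes dense compute units.

    ```python
    import numpy as np

    # Hypothetical toy graph: sizes chosen for illustration.
    rng = np.random.default_rng(0)
    num_nodes, num_edges, in_dim, out_dim = 1000, 5000, 64, 32

    src = rng.integers(0, num_nodes, num_edges)   # edge sources
    dst = rng.integers(0, num_nodes, num_edges)   # edge destinations
    X = rng.standard_normal((num_nodes, in_dim))  # node features
    W = rng.standard_normal((in_dim, out_dim))    # layer weights

    # Dense transform: regular, compute-bound matrix multiply.
    Z = X @ W

    # Sparse aggregation: irregular gather (Z[src]) and scatter-add (into H),
    # the memory-bound step that dominates GNN execution on sparse graphs.
    H = np.zeros((num_nodes, out_dim))
    np.add.at(H, dst, Z[src])
    print(H.shape)   # (1000, 32)
    ```

    In mini-batch training, the same aggregation runs on a sampled subgraph per batch, so the sparsity pattern changes every iteration, which is the structure the abstract notes existing accelerators do not support.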