    Details

    Author(s)
        Jan van Lunteren (IBM Research Europe)
        Andreea Anghel (IBM Research Europe)
        Thomas Parnell (IBM Research Europe)
        Martin Petermann (IBM Research Europe)
        Cedric Lichtenau (IBM Deutschland R&D GmbH)
        Andrew Sica (IBM Systems)
        Dominic Röhm (IBM Deutschland R&D GmbH)
        Elpida Tzortzatos (IBM Systems; IBM Research Europe)
    Abstract

    This paper presents a tensor-based algorithm that leverages a hardware accelerator for the inference of decision-tree-based machine learning models. The algorithm has been integrated into a public software library and is demonstrated on an IBM z16 server using the Telum processor with the Integrated Accelerator for AI. We describe the architecture and implementation of the algorithm and present experimental results demonstrating its superior runtime performance compared with popular CPU-based machine learning inference implementations.
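
    The paper itself details the tensor formulation; the sketch below is only an illustration of the general idea behind tensor-based tree inference, not the authors' algorithm or the library's API. It encodes a small, hypothetical two-level decision tree as dense matrices (A, B, C, D, E are illustrative names) and evaluates a batch of inputs purely with matrix products and element-wise comparisons, which is the kind of workload a matrix-multiplication accelerator such as the Telum Integrated Accelerator for AI can execute efficiently.

        import numpy as np

        # Hypothetical two-level tree (illustrative, not from the paper):
        #   node 0: x[0] < 0.5 ? go left (leaf 0) : go right (node 1)
        #   node 1: x[1] < 0.3 ? go left (leaf 1) : go right (leaf 2)

        # A[f, n] = 1 if internal node n tests feature f
        A = np.array([[1.0, 0.0],
                      [0.0, 1.0]])
        # B[n] = threshold of internal node n
        B = np.array([0.5, 0.3])
        # C[n, l] = +1 if leaf l lies in the left subtree of node n,
        #           -1 if it lies in the right subtree, 0 if n is not an ancestor of l
        C = np.array([[ 1.0, -1.0, -1.0],
                      [ 0.0,  1.0, -1.0]])
        # D[l] = number of ancestors of leaf l for which l is in the left subtree
        D = np.array([1.0, 1.0, 0.0])
        # E[l] = value predicted by leaf l
        E = np.array([[0.1], [0.2], [0.3]])

        def predict(X):
            # Evaluate every split test for every sample: 1.0 means "go left".
            T = (X @ A < B).astype(X.dtype)
            # A leaf is reached exactly when the outcomes of all its ancestors'
            # tests match the path leading to it, i.e. when T @ C equals D.
            leaf_one_hot = (T @ C == D).astype(X.dtype)
            # Gather the reached leaf's value for each sample.
            return leaf_one_hot @ E

        X = np.array([[0.7, 0.1],   # goes right, then left -> leaf 1 -> 0.2
                      [0.2, 0.9]])  # goes left             -> leaf 0 -> 0.1
        print(predict(X))           # [[0.2], [0.1]]

    For an ensemble, the per-tree matrices can be stacked into larger tensors so that the whole model reduces to a few large matrix multiplications, which is what makes offloading the computation to a matrix-multiply accelerator attractive.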