    Details
    Author(s)
    Gianluca Zoppo (Politecnico di Torino)
    Anil Korkmaz (Texas A&M University)
    Francesco Marrone (Politecnico di Torino)
    Suin Yi (Texas A&M University)
    Samuel Palermo (Texas A&M University)
    Fernando Corinto (Politecnico di Torino)
    Richard Williams (Texas A&M University)
    Abstract

    Over the last decade, Gaussian processes (GPs) have become popular in machine learning and data analysis for their flexibility and robustness. Despite their attractive formulation, practical use in large-scale problems remains out of reach due to computational complexity: direct methods for manipulating the n×n covariance matrices involved require O(n^3) operations. In this work, we present the design and evaluation of a simulated computing platform for exact GP inference that achieves true model parallelism using memristive crossbars. To obtain a one-shot solution, a linear-equation-solver crossbar configuration and a vector-matrix-multiplication crossbar configuration are used together. Transistor-level op-amps, ADC models for quantization, circuit and interconnect parasitics, and the finite precision of the memristors are all incorporated into the system model. The analog system achieved a 1.51% mean error and a 2.93% average variance error on a nonlinear regression problem. The proposed method delivered 9× to 144× better energy efficiency than a TPU and 7× better than a custom analog linear regression solver.
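
    To make the O(n^3) bottleneck concrete, the following is a minimal sketch of standard exact GP regression (Cholesky-based, with an assumed RBF kernel and illustrative hyperparameters; not the paper's analog implementation). The cubic cost comes from factorizing the n×n covariance matrix, which is the step the crossbar-based linear-equation solver is meant to replace.

    ```python
    import numpy as np

    def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
        # Squared-exponential (RBF) covariance between two sets of points
        d2 = (np.sum(X1**2, 1)[:, None]
              + np.sum(X2**2, 1)[None, :]
              - 2.0 * X1 @ X2.T)
        return variance * np.exp(-0.5 * d2 / lengthscale**2)

    def gp_posterior(X, y, Xs, noise=1e-2):
        # Exact GP posterior mean and variance at test inputs Xs
        K = rbf_kernel(X, X) + noise * np.eye(len(X))  # n x n covariance
        L = np.linalg.cholesky(K)                      # O(n^3) factorization
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        Ks = rbf_kernel(X, Xs)
        mean = Ks.T @ alpha                            # predictive mean
        v = np.linalg.solve(L, Ks)
        var = np.diag(rbf_kernel(Xs, Xs)) - np.sum(v**2, axis=0)
        return mean, var

    # Toy nonlinear regression problem (illustrative data, not from the paper)
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, (50, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
    Xs = np.linspace(-3, 3, 100)[:, None]
    mean, var = gp_posterior(X, y, Xs)
    ```

    In the analog platform described above, the solve for `alpha` maps to the linear-equation-solver crossbar, while the products with `Ks` map to the vector-matrix-multiplication crossbar, which is what allows a one-shot (rather than iterative) evaluation of the posterior.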