    Details
    Presenter(s)
    Daeyong Shim (SK hynix Inc.)
    Author(s)
    Daeyong Shim, Chunseok Jeong, Euncheol Lee, Junmo Kang, Seokcheol Yoon, Yongkee Kwon, Il Park, Hyun Ahn, Seonyong Cha, and Jinkook Kim (SK hynix Inc.)
    Abstract

    As DNNs improve state-of-the-art accuracy on many artificial intelligence (AI) applications, such as computer vision for autonomous driving, the bandwidth and power consumed moving data between the neural-network accelerator and off-chip memory have become major obstacles to improving the compute performance metric TOPS/W. To overcome the limited compute and energy resources of the automotive environment, inference with PIM (Processing in Memory) or AiM (Accelerator in Memory), which places MAC (multiply-and-accumulate) units and activation functions inside the DRAM, is a key solution because it exploits multi-bank parallelism and the memory cell architecture. As memory technologies with embedded analog logic mature in the near future, ultra-low-power neuromorphic computing architectures built on analog accelerators will lead future autonomous-driving solutions.
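    To make the bank-parallel MAC idea concrete, the sketch below models a matrix-vector product for inference split across DRAM banks, with each bank's MAC unit accumulating over the weight rows it holds and an activation applied in place. This is a minimal illustrative model, not the AiM design described in the talk: the bank count (16), the row-striped weight layout, the ReLU activation, and the function name aim_matvec are all assumptions.

```python
import numpy as np

NUM_BANKS = 16  # assumed bank count, for illustration only

def aim_matvec(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Model y = ReLU(W @ x) with output rows striped across DRAM banks."""
    out = np.zeros(weights.shape[0], dtype=np.float32)
    for bank in range(NUM_BANKS):
        # Bank `bank` owns rows bank, bank + NUM_BANKS, ... of the weight
        # matrix, so its MAC unit accumulates those dot products next to the
        # stored data instead of shipping the rows to an off-chip accelerator.
        for row in range(bank, weights.shape[0], NUM_BANKS):
            out[row] = np.dot(weights[row], x)
    return np.maximum(out, 0.0)  # in-memory activation (ReLU assumed)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 128)).astype(np.float32)
    x = rng.standard_normal(128).astype(np.float32)
    y = aim_matvec(W, x)
    # Sanity check against a plain GEMV followed by ReLU.
    assert np.allclose(y, np.maximum(W @ x, 0.0), atol=1e-4)
    print("first outputs:", y[:4])
```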

    Slides
    • Holistic Approaches to Memory Solutions for the Autonomous Driving Era (PDF)