Video s3
    Details
    Presenter(s)
    Khalid Al-Hawaj
    Affiliation
    Cornell University
    Abstract

    Vector accelerators can efficiently execute regular data-parallel workloads, but they require expensive multi-ported register files to feed large vector ALUs. Recent work on in-situ processing-in-SRAM shows promise in enabling area-efficient vector acceleration. This work explores two different approaches to leveraging in-situ processing-in-SRAM: BS-VRAM, which uses bit-serial execution, and BP-VRAM, which uses bit-parallel execution. The two approaches have very different latency vs. throughput trade-offs: BS-VRAM requires more cycles per operation but can execute thousands of operations in parallel, while BP-VRAM requires fewer cycles per operation but can only execute hundreds of operations in parallel. This paper is the first to rigorously evaluate bit-serial vs. bit-parallel in-situ processing-in-SRAM. The detailed circuit-level implementations of BS-VRAM and BP-VRAM are similar, and we believe a reconfigurable approach could target either high throughput or low latency at run time based on application requirements.
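
    As a rough illustration of the latency vs. throughput trade-off described above, the following back-of-the-envelope model compares the cycles a bit-serial array and a bit-parallel array would spend on an element-wise vector operation. It is not from the paper: the element width, lane counts, and per-operation cycle costs below are all hypothetical assumptions chosen only to make the trade-off visible.

# Illustrative model of the bit-serial vs. bit-parallel trade-off.
# All parameters are hypothetical assumptions, not figures from the paper.

ELEM_BITS = 32                 # assumed element width in bits

# Bit-serial (BS-VRAM-style): ~one cycle per operand bit, very wide parallelism.
BS_LANES = 8192                # hypothetical: thousands of elements in flight
BS_CYCLES_PER_OP = ELEM_BITS

# Bit-parallel (BP-VRAM-style): few cycles per operation, narrower parallelism.
BP_LANES = 256                 # hypothetical: hundreds of elements in flight
BP_CYCLES_PER_OP = 2           # hypothetical small constant latency


def total_cycles(num_elems: int, lanes: int, cycles_per_op: int) -> int:
    """Cycles to apply one element-wise operation to num_elems elements."""
    batches = -(-num_elems // lanes)  # ceiling division
    return batches * cycles_per_op


if __name__ == "__main__":
    for n in (256, 8192, 1 << 20):
        bs = total_cycles(n, BS_LANES, BS_CYCLES_PER_OP)
        bp = total_cycles(n, BP_LANES, BP_CYCLES_PER_OP)
        print(f"{n:>8} elements: bit-serial {bs:>6} cycles, bit-parallel {bp:>6} cycles")

    With these assumed parameters, short vectors finish sooner on the bit-parallel organization (lower latency per operation), while very long vectors finish sooner on the bit-serial organization (higher aggregate throughput), which is the behavior that motivates the run-time reconfigurable approach suggested in the abstract.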

    Slides