Reconfigurable Processing-in-Memory Architecture for Data Intensive Applications

Sathwika Bavikadi, Purab Ranjan Sutradhar, Amlan Ganguly, Sai Manoj Pudukotai Dinakarrao

Research output: Chapter in Book/Report/Conference proceeding › Chapter

4 Scopus citations

Abstract

Emerging applications reliant on deep neural networks (DNNs) and convolutional neural networks (CNNs) demand substantial data for computation and analysis. Deploying DNNs and CNNs often runs into resource constraints and data-movement overheads between memory and compute units. Architectural paradigms such as Processing-in-Memory (PIM) have emerged to mitigate these challenges. However, existing PIM architectures necessitate trade-offs among power, performance, area, energy efficiency, and programmability. Our proposed solution achieves higher energy efficiency while preserving programmability and flexibility. We introduce a novel multi-core reconfigurable architecture with fine-grained integration within DRAM sub-arrays, resulting in superior performance and energy efficiency compared to conventional PIM architectures. Each core in our design comprises multiple processing elements (PEs): standalone processors equipped with programmable functional units constructed from high-speed reconfigurable multi-functional look-up tables (M-LUTs). These M-LUTs generate multiple functional outputs, such as convolution, pooling, and activation functions, in a time-multiplexed manner, eliminating the need for a separate LUT per function. Special-function LUTs provide simultaneous outputs, enabling ultra-low-latency parallel processing for tasks such as multiplication and accumulation, along with the activation, pooling, and batch-normalization functions required for CNN acceleration. This comprehensive approach enhances efficiency and performance, rendering our reconfigurable architecture suitable for demanding Big Data and AI acceleration applications.
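The abstract's central idea is that one physical look-up table can serve several CNN operations when its output is selected per cycle by a function-select signal, rather than dedicating a separate LUT to each operation. The following sketch is purely illustrative and not taken from the chapter: the function names, table layout, and operand width (`bits=4`) are assumptions chosen to mimic the behavioral idea of a time-multiplexed multi-functional LUT in software.

```python
# Illustrative sketch (not the chapter's implementation): a multi-functional
# look-up table (M-LUT) precomputes outputs of several operations over small
# operands, and a function-select input picks which table is read each cycle,
# mimicking time-multiplexed reuse of one physical LUT instead of one LUT per
# function.

def build_mlut(bits: int = 4) -> dict:
    """Precompute outputs of each supported function for all operand pairs."""
    n = 1 << bits  # number of representable operand values
    return {
        "mul":  [[a * b for b in range(n)] for a in range(n)],        # MAC partial product
        "max":  [[max(a, b) for b in range(n)] for a in range(n)],    # max-pooling step
        "relu": [[max(a - b, 0) for b in range(n)] for a in range(n)],# ReLU of a difference
    }

def mlut_lookup(mlut: dict, func: str, a: int, b: int) -> int:
    """One 'cycle': the select signal `func` chooses which precomputed table to read."""
    return mlut[func][a][b]

if __name__ == "__main__":
    mlut = build_mlut()
    print(mlut_lookup(mlut, "mul", 3, 5))   # 15
    print(mlut_lookup(mlut, "max", 3, 5))   # 5
    print(mlut_lookup(mlut, "relu", 7, 2))  # 5
```

The point of the table-based formulation is that a "compute" step becomes a memory read, which is what makes LUT-based processing elements a natural fit inside DRAM sub-arrays.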
Original language: American English
Title of host publication: 2024 37th International Conference on VLSI Design and 2024 23rd International Conference on Embedded Systems (VLSID)
DOIs
State: Published - 2024
Externally published: Yes

Keywords

  • convolutional neural networks
  • look-up-tables
  • processing-in-memory
  • reconfigurable

EGS Disciplines

  • Electrical and Computer Engineering
