A review of in-memory computing architectures for machine learning applications

Sathwika Bavikadi, Purab Ranjan Sutradhar, Khaled N. Khasawneh, Amlan Ganguly, Sai Manoj Pudukotai Dinakarrao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

57 Scopus citations

Abstract

State-of-the-art traditional computing hardware is struggling to meet the extensive computational load presented by rapidly growing Machine Learning (ML) and Artificial Intelligence (AI) algorithms such as Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs). To obtain hardware solutions that meet the low-latency and high-throughput computational demands of these algorithms, non-Von Neumann computing architectures such as In-memory Computing (IMC)/Processing-in-memory (PIM) are being extensively researched and experimented with. In this survey paper, we analyze and review pioneering IMC/PIM works designed to accelerate ML algorithms such as DNNs and CNNs. We investigate different architectural aspects and dimensions of these works and provide our comparative evaluations. Furthermore, we discuss challenges and limitations in IMC research and present feasible research directions based on our observations and insights.

Original language: English
Title of host publication: GLSVLSI 2020 - Proceedings of the 2020 Great Lakes Symposium on VLSI
Pages: 89-94
Number of pages: 6
ISBN (Electronic): 9781450379441
DOIs
State: Published - 7 Sep 2020
Event: 30th Great Lakes Symposium on VLSI, GLSVLSI 2020 - Virtual, Online, China
Duration: 7 Sep 2020 - 9 Sep 2020

Publication series

Name: Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI

Conference

Conference: 30th Great Lakes Symposium on VLSI, GLSVLSI 2020
Country/Territory: China
City: Virtual, Online
Period: 7/09/20 - 9/09/20

Keywords

  • Artificial Intelligence
  • CNN
  • DNN
  • In-memory Computing
  • Machine learning
  • Non Von-Neumann Architectures
  • Processing-in-memory
