CELL @ DGIST

Brain-Inspired Hyperdimensional Computing

“Design cognitive machine learning based on a human memory model”

Hyperdimensional (HD) computing is an alternative computing paradigm, grounded in theoretical neuroscience, that processes cognitive tasks in a lightweight and error-tolerant way. The mathematical properties of high-dimensional spaces show remarkable agreement with observed brain behaviors. HD computing therefore emulates human cognition by computing with high-dimensional vectors, called hypervectors, instead of traditional data types such as integers and booleans. With concrete hypervector arithmetic, we enable various pattern-based computations, such as memorizing and reasoning, similar to what humans do. Our group develops diverse learning and cognitive computing techniques based on HD computing, focusing on typical ML tasks, neuro-symbolic AI, and acceleration in next-generation computing environments.
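The core hypervector operations can be illustrated with a minimal sketch (our own naming, not a specific framework's API): random bipolar hypervectors are near-orthogonal, binding (elementwise multiplication) associates two concepts, and bundling (elementwise majority) superimposes several associations into one memory vector that can later be queried.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10000  # hypervector dimensionality

def hv():
    # Random bipolar hypervectors are nearly orthogonal in high dimensions.
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    # Elementwise multiply: associates two concepts (self-inverse).
    return a * b

def bundle(*vs):
    # Elementwise majority vote: superimposes concepts.
    # (Ties become 0 here, which is fine for this sketch.)
    return np.sign(np.sum(vs, axis=0))

def similarity(a, b):
    # Normalized dot product; ~0 for unrelated hypervectors.
    return a @ b / D

# Memorize the record {color: red, shape: round} as one hypervector.
color, red, shape, round_ = hv(), hv(), hv(), hv()
record = bundle(bind(color, red), bind(shape, round_))

# Reasoning by unbinding: multiplying by `color` again recovers
# a vector noticeably similar to `red`, and unrelated to fresh vectors.
query = bind(record, color)
print(similarity(query, red) > 0.3)        # similar to `red`
print(abs(similarity(query, hv())) < 0.1)  # ~orthogonal to a random vector
```

Because binding with a hypervector is its own inverse, a single memory vector built this way supports approximate key-value recall with just elementwise arithmetic, which is what makes the approach lightweight and error-tolerant.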

Selected Publications

  1. Yeseong Kim, Jiseung Kim, and Mohsen Imani, “CascadeHD: Efficient Many-Class Learning Framework Using Hyperdimensional Computing,” IEEE/ACM Design Automation Conference (DAC), 2021
  2. Jiseung Kim, Hyunsei Lee, Mohsen Imani, and Yeseong Kim, “Efficient Brain-Inspired Hyperdimensional Learning with Spatiotemporal Structured Data,” IEEE International Symposium on the Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), 2021
  3. Yeseong Kim, Mohsen Imani, Niema Moshiri, and Tajana Rosing, “GenieHD: Efficient DNA Pattern Matching Accelerator Using Hyperdimensional Computing,” IEEE/ACM Design Automation and Test in Europe Conference (DATE), 2020 (Best paper candidate)

Learning with Alternative Computing

“Systems for ML and ML for Systems”

Machine Learning (ML) has gained popularity as an autonomous solution that extracts useful information and learns patterns from collected data. We rethink the role of ML for systems in various ways to design alternative system solutions. Near-data processing (NDP) and processing in-memory (PIM) architectures are two such technologies our group focuses on. We utilize computation-enabled memory to achieve high power and performance efficiency for deep learning and large-scale AI. We are developing software-level frameworks that orchestrate NDP/PIM architectures with various next-generation technologies, e.g., CXL and 6G. We also explore ML-driven system software for efficient processing environments; this endeavor includes cross-platform power/performance prediction and ML-based resource usage characterization for edge devices.
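One recurring sub-problem in multi-chip or NDP/PIM deployment is deciding where each model layer runs. A toy sketch of the idea (our own simplification, not the QuiltNet algorithm itself) is to cut a sequential model into contiguous segments with roughly balanced per-chip compute:

```python
def partition_layers(costs, n_chips):
    """Greedily cut per-layer costs into n_chips contiguous segments
    with roughly equal total cost (a simple pipeline partitioning)."""
    target = sum(costs) / n_chips
    parts, current, acc = [], [], 0.0
    for i, c in enumerate(costs):
        current.append(i)
        acc += c
        remaining = len(costs) - i - 1
        # Cut once this chip has its fair share, but keep at least one
        # layer available for every remaining chip.
        if (acc >= target and len(parts) < n_chips - 1
                and remaining >= n_chips - len(parts) - 1):
            parts.append(current)
            current, acc = [], 0.0
    parts.append(current)
    return parts

# e.g. FLOP-like costs for 6 layers split across 3 chips
print(partition_layers([4, 1, 1, 4, 2, 2], 3))  # → [[0, 1], [2, 3], [4, 5]]
```

Real partitioners must also account for inter-chip communication and memory capacity; this sketch only conveys why balancing contiguous segments is the natural formulation for pipelined multi-chip inference.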

Selected Publications

  1. Jongho Park, Hyukjun Kwon, Seowoo Kim, Junyoung Lee, Minho Ha, Euicheol Lim, Mohsen Imani, and Yeseong Kim, “QuiltNet: Efficient Deep Learning Inference on Multi-Chip Accelerators Using Model Partitioning,” IEEE/ACM Design Automation Conference (DAC), 2022
  2. Yeseong Kim, Mohsen Imani, Saransh Gupta, Minxuan Zhou, and Tajana S. Rosing, “Massively Parallel Big Data Classification on a Programmable Processing In-Memory Architecture,” IEEE/ACM International Conference On Computer Aided Design (ICCAD), 2021
  3. M. Imani, S. Pampana, S. Gupta, M. Zhou, Y. Kim*, and T. Rosing, “DUAL: Acceleration of Clustering Algorithms Using Digital-based Processing In-Memory,” 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2020 (*Co-corresponding author)
  4. Mohsen Imani, Saransh Gupta, Yeseong Kim, and Tajana S. Rosing, “FloatPIM: In-Memory Acceleration of Deep Neural Network Training with High Precision,” International Symposium on Computer Architecture (ISCA), 2019

Learning on Edge Devices

“Developing lightweight learning for low-power edge devices”

We build state-of-the-art learning software and hardware for low-power devices. This includes the design of self-learning systems capable of autonomous sensing, learning, and actuating on diverse IoT platforms. We are also exploring ideas beyond traditional learning settings, such as brain signal-based learning and virtual neural connectomes. To realize accelerated learning on resource-limited devices, we utilize various low-power parallel computing platforms; current candidates include NVIDIA CUDA for high performance and ARM Helium, a recently developed vector extension for the low-power M-Profile architecture. We focus on both software and hardware, including efficient reinforcement learning and intelligent uses of new computing paradigms. The promise of very low-cost realizations of cognitive functions is a prime enabler for application domains such as self-controlling appliances and self-driving cars.
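Why HD-style learning suits edge devices can be seen in a toy sketch (our own construction, not the exact pipeline of the papers below): training reduces to bundling encoded samples into one prototype per class, and inference to a single dot product per class, with no gradient computation.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4096  # hypervector dimensionality

n_features, n_classes = 16, 3
# Fixed random projection: a cheap, constant-cost encoder for edge devices.
proj = rng.standard_normal((n_features, D))

def encode(x):
    # Project raw features into a bipolar hypervector.
    return np.sign(x @ proj)

# Synthetic sensor-like data: each class clustered around its own mean.
means = rng.standard_normal((n_classes, n_features)) * 3
X = np.vstack([means[c] + rng.standard_normal((50, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), 50)

# "Training": bundle each class's encodings into one prototype hypervector.
H = encode(X)
prototypes = np.stack([H[y == c].sum(axis=0) for c in range(n_classes)])

# Inference: pick the most similar prototype (one dot product per class).
pred = (H @ prototypes.T).argmax(axis=1)
accuracy = (pred == y).mean()
print(accuracy > 0.9)
```

The entire model is one small matrix of prototypes, and both training and prediction are additions and multiplications, which maps naturally onto vector extensions such as ARM Helium or onto CUDA cores.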

Selected Publications

  1. Y. Ni, Y. Kim*, T. Rosing, and M. Imani, “Algorithm-Hardware Co-Design for Efficient Brain-Inspired Hyperdimensional Learning on Edge,” IEEE Design, Automation & Test in Europe Conference & Exhibition (DATE), 2022 (*Co-corresponding author)
  2. Yeseong Kim, Mohsen Imani, and Tajana S. Rosing, “Efficient Human Activity Recognition Using Hyperdimensional Computing,” IEEE Conference on Internet of Things (IoT), Oct 2018

Copyright © 2022