Research @ CELL


Hyperdimensional (HD) computing is an alternative computing paradigm, inspired by theoretical neuroscience, that processes cognitive tasks in a lightweight and error-tolerant way. The mathematical properties of high-dimensional spaces agree remarkably well with observed brain behaviors. HD computing therefore emulates human cognition by computing with high-dimensional vectors, called hypervectors, instead of traditional numerical types such as integers and Booleans. With well-defined hypervector arithmetic, we enable various pattern-based computations, such as memorizing and reasoning, similar to how humans process information. Our group develops diverse learning tasks, including classification, regression, clustering, and reinforcement learning, on top of HD computing. These learning algorithms have several notable properties: (i) fast training, (ii) extreme robustness against most failure mechanisms and noise, and (iii) highly parallelizable operations that are well suited to hardware such as GPGPUs, FPGAs, and in-memory computing platforms.
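
To make the hypervector arithmetic concrete, below is a minimal sketch of a single-pass HD classifier in Python/NumPy. The dimensionality, the bipolar encoding, and the feature count are illustrative assumptions rather than the exact algorithms used in our work: features are bound to random base hypervectors, bundled by addition, and queries are classified by cosine similarity to per-class hypervectors.

    import numpy as np

    D = 10_000                                   # hypervector dimensionality (typically thousands)
    rng = np.random.default_rng(0)

    def random_hv():
        # Random bipolar base hypervector; near-orthogonal to others in high dimensions.
        return rng.choice([-1, 1], size=D)

    feature_hvs = [random_hv() for _ in range(64)]   # one base hypervector per input feature (assumed)

    def encode(sample):
        # Bind each feature's base hypervector with its value, bundle by addition, re-quantize.
        acc = np.zeros(D)
        for hv, value in zip(feature_hvs, sample):
            acc += value * hv
        return np.sign(acc)

    def train(samples, labels, num_classes):
        # Class hypervectors are simply the bundled encodings of their training samples.
        classes = np.zeros((num_classes, D))
        for x, y in zip(samples, labels):
            classes[y] += encode(x)
        return classes

    def classify(classes, sample):
        # Predict the class whose hypervector is most similar (cosine) to the query encoding.
        q = encode(sample)
        sims = classes @ q / (np.linalg.norm(classes, axis=1) * np.linalg.norm(q) + 1e-9)
        return int(np.argmax(sims))

Because training reduces to accumulating encodings, a model can be built in a single pass over the data, which is the source of the fast-training property noted above.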

Selected Publications

  1. Yeseong Kim, Mohsen Imani, Niema Moshiri and Tajana S. Rosing, "GenieHD: Efficient DNA Pattern Matching Accelerator Using Hyperdimensional Computing," IEEE/ACM Design, Automation and Test in Europe Conference (DATE), Mar 2020 (Best paper candidate)
  2. Mohsen Imani, Yeseong Kim, Sadegh Riazi, John Messerly, Patrick Liu, Farinaz Koushanfar and Tajana S. Rosing, "A Framework for Collaborative Learning in Secure High-Dimensional Space," IEEE International Conference on Cloud Computing (CLOUD), Jul 2019 (M. Imani and Y. Kim contributed equally, acceptance rate 14.3%)

We build state-of-the-art learning software and hardware for low-power devices, including self-learning systems capable of autonomous sensing, learning, and actuation on diverse IoT platforms. To realize accelerated learning on resource-limited devices, we exploit various low-power parallel computing platforms; current candidates include NVIDIA CUDA for high performance and ARM Helium, the recently introduced vector extension for the low-power M-Profile architecture. Our goal is to design system solutions for efficient learning on edge devices that operate without being explicitly programmed. We also work across software and hardware, including efficient reinforcement learning and intelligent use of new computing paradigms. The promise of very low-cost realization of cognitive functions is a prime enabler for application domains such as self-controlling appliances and self-driving cars.
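
As a rough illustration of why these learning workloads map well onto parallel hardware, the short NumPy sketch below (with assumed array shapes) shows that batched HD inference collapses to a single dense matrix multiply plus an argmax, exactly the kind of kernel that CUDA GPUs or vector extensions such as Helium execute efficiently.

    import numpy as np

    def classify_batch(class_hvs, query_hvs):
        # class_hvs: (C, D) class hypervectors; query_hvs: (N, D) encoded queries.
        sims = query_hvs @ class_hvs.T        # (N, C) similarity scores in one dense kernel
        return np.argmax(sims, axis=1)        # predicted class index per query

The same structure applies to training, since bundling encodings is an accumulation that parallelizes across both samples and vector dimensions.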

Selected Publications

  1. Yeseong Kim, Mohsen Imani, and Tajana S. Rosing, "Efficient Human Activity Recognition Using Hyperdimensional Computing," IEEE Conference on Internet of Things (IoT), Oct 2018
  2. Anthony Thomas, Yunhui Guo, Yeseong Kim, Baris Aksanli, Arun Kumar, and Tajana S. Rosing, "Hierarchical and Distributed Machine Learning Inference Beyond the Edge," IEEE International Conference on Networking, Sensing and Control (ICNSC), May 2019

Machine learning (ML) has gained popularity as an autonomous solution that extracts useful information and learns patterns from collected data. We rethink the role of ML in systems in various ways to design alternative system solutions. Processing-in-memory architectures and near-data computing are among the technologies our group focuses on. We utilize computation-enabled memory to achieve high power and performance efficiency for ML applications. We have demonstrated that state-of-the-art ML algorithms, including adaptive boosting, random forests, and deep neural network training, can be mapped to massively parallel in-memory computations. We also explore ML-driven system software for efficient processing environments; this effort includes cross-platform power/performance prediction and resource-usage characterization for edge devices based on ML.
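
As one concrete example of ML-driven system software, the sketch below outlines phase-based power prediction in the spirit of our P4 work. It is only an illustration under assumptions, not the published model: a small scikit-learn neural network is fit on synthetic per-phase performance-counter features (hypothetical names and sizes) to predict a power metric for unseen phases.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical training data: per-phase performance counters collected on one platform
    # (e.g. IPC, cache miss rates, memory bandwidth) and the measured power on a target
    # platform. Feature names, counts, and values are assumptions for illustration only.
    rng = np.random.default_rng(0)
    counters = rng.random((500, 8))                  # 500 program phases x 8 counters
    target_power = counters @ rng.random(8) + 0.1 * rng.standard_normal(500)

    # A small neural network learns the counter-to-power mapping; at runtime the same model
    # predicts the target metric for new phases without re-profiling them on the target.
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    model.fit(counters[:400], target_power[:400])
    print("held-out R^2:", model.score(counters[400:], target_power[400:]))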

Selected Publications

  1. Yeseong Kim, Mohsen Imani, and Tajana S. Rosing, "Image Recognition Accelerator Design Using In-Memory Processing," IEEE Micro, IEEE Computer Society, Jan/Feb 2019
  2. Yeseong Kim, Pietro Mercati, Ankit More, Emily Shriver, and Tajana S. Rosing, "P4: Phase-Based Power/Performance Prediction of Heterogeneous Systems via Neural Networks," IEEE/ACM International Conference on Computer-Aided Design (ICCAD), Nov 2017
  3. Mohsen Imani, Saransh Gupta, Yeseong Kim, and Tajana S. Rosing, "FloatPIM: In-Memory Acceleration of Deep Neural Network Training with High Precision," International Symposium on Computer Architecture (ISCA), Jun 2019