CELL @ DGIST

On-Device Generative AI

“Optimizing On-Device Generative AI for advanced applications.”

Generative AI leverages knowledge learned from large-scale datasets to create novel data and content. Beyond its well-known application in Text-to-Image generation, this field extends its impact across diverse domains such as music, video, 3D reconstruction, and system configuration, driving significant technological advancements.
Our research focuses on the development of state-of-the-art generative AI models, including Large Language Models (LLMs) and diffusion models, as well as the exploration of foundational theoretical methods. We apply these advanced techniques to emerging fields (e.g., system optimization and performance evaluation), aiming to transcend the limitations of traditional approaches and to propose innovative designs for contemporary systems.
Additionally, we explore the potential of LLMs to address a variety of tasks. Specifically, we develop acceleration techniques, such as knowledge compression and tailored learning strategies, to make LLM structures lighter and more efficient to deploy.
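As a concrete illustration of the compression side of this direction, the sketch below applies simple magnitude-based weight pruning to a toy feed-forward block. The layer sizes, the 50% sparsity target, and the use of PyTorch's pruning utilities are illustrative assumptions for this example, not the group's specific method.

```python
# Minimal sketch: magnitude-based weight pruning of a toy transformer
# feed-forward block (illustrative sizes and sparsity level only).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for a single feed-forward block inside an LLM layer.
ffn = nn.Sequential(
    nn.Linear(512, 2048),
    nn.GELU(),
    nn.Linear(2048, 512),
)

# Zero out the 50% smallest-magnitude weights in each linear layer.
for module in ffn:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

total = sum(p.numel() for p in ffn.parameters())
zeros = sum((p == 0).sum().item() for p in ffn.parameters())
print(f"overall sparsity: {zeros / total:.2%}")
```

In practice, such pruning is typically combined with fine-tuning or distillation so that the compressed model recovers most of the original accuracy.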

Selected Publications

  1. Junyoung Lee, Seohyun Kim, Shinhyoung Jang, Jongho Park, and Yeseong Kim, "Diffusion-Based Generative System Surrogates for Scalable Learning-Driven Optimization in Virtual Playgrounds," Proceedings of the ACM on Measurement and Analysis of Computing Systems, vol. 9, no. 2, 2025.
  2. Junyoung Lee, Shinhyoung Jang, Seohyun Kim, Jongho Park, Il Hong Suh, Hoon Sung Chwa, and Yeseong Kim, "Dynamically Scalable Pruning for Transformer-Based Large Language Models," in 2025 Design, Automation & Test in Europe Conference & Exhibition (DATE), IEEE, 2025.
  3. Seohyun Kim, Junyoung Lee, Jongho Park, Jinhyung Koo, Sungjin Lee, and Yeseong Kim, "A Diffusion-Based Framework for Configurable and Realistic Multi-Storage Trace Generation," in 2025 Design Automation Conference (DAC), 2025.

Brain-Inspired Hyperdimensional Computing

“Design cognitive machine learning based on a human memory model”

Hyperdimensional (HD) computing is an alternative computing method, rooted in theoretical neuroscience, that processes cognitive tasks in a lightweight and error-tolerant way. The mathematical properties of high-dimensional spaces show remarkable agreement with observed brain behavior. HD computing therefore emulates human cognition by computing with high-dimensional vectors, called hypervectors, instead of traditional numeric types such as integers and Booleans. With well-defined hypervector arithmetic, we enable pattern-based computations such as memorization and reasoning, much as humans do. Our group develops diverse learning and cognitive computing techniques based on HD computing, focusing on typical ML tasks, neuro-symbolic AI, and acceleration in next-generation computing environments.
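To make the hypervector arithmetic above concrete, here is a minimal sketch of binding, bundling, and similarity over random bipolar hypervectors. The dimensionality, the toy record, and the symbol names are illustrative choices for this example, not a specific system from our papers.

```python
# Minimal sketch of hypervector arithmetic with bipolar vectors.
import numpy as np

D = 10_000
rng = np.random.default_rng(0)

def hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (element-wise multiply): associates two hypervectors."""
    return a * b

def bundle(*vs):
    """Bundling (element-wise majority): superimposes hypervectors."""
    return np.sign(np.sum(vs, axis=0))

def similarity(a, b):
    return a @ b / D  # normalized dot product, ~0 for unrelated vectors

# Memorize a simple record: {color: red, shape: circle}.
color, red = hv(), hv()
shape, circle = hv(), hv()
record = bundle(bind(color, red), bind(shape, circle))

# Query the record: "what value is bound to 'color'?"
query = bind(record, color)       # binding is its own inverse for bipolar HVs
print(similarity(query, red))     # noticeably positive: 'red' is recovered
print(similarity(query, circle))  # near zero: unrelated
```

Because bipolar binding is its own inverse, multiplying the record by the "color" key approximately recovers "red"; this is the kind of pattern-based memorization and reasoning described above, carried out purely with vector arithmetic.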

Selected Publications

  1. Hyukjun Kwon, Kangwon Kim, Junyoung Lee, Hyunsei Lee, Jiseung Kim, J. Kim, T. Kim, Y. Kim, Y. Ni, M. Imani, I. Suh, and Yeseong Kim, "Brain-Inspired Hyperdimensional Computing in the Wild: Lightweight Symbolic Learning for Sensorimotor Controls of Wheeled Robots," in 2024 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2024.
  2. H. Lee, J. Kim, H. Chen, A. Zeria, N. Srinivasa, M. Imani, and Y. Kim, “Comprehensive integration of hyperdimensional computing with deep learning towards neuro-symbolic AI,” in 2023 60th ACM/IEEE Design Automation Conference (DAC), IEEE, 2023
  3. Jiseung Kim, Hyunsei Lee, Mohsen Imani, and Yeseong Kim, "Efficient hyperdimensional learning with trainable, quantizable, and holistic data representation," in 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), IEEE, 2023

Learning with Alternative Computing

“Systems for ML and ML for Systems”

Machine Learning (ML) is increasingly recognized as a pivotal technology for autonomous data analysis and pattern recognition. Our research group is at the forefront of redefining the role of ML in system design, focusing on innovative solutions such as Near-Data Processing (NDP) and Processing In-Memory (PIM) architectures. These technologies are integrated at both the main-memory and cache levels, utilizing DRAM and SRAM to address the critical bottleneck of data movement. By performing computations directly within memory, NDP and PIM architectures substantially reduce redundant data transfers and enhance computational efficiency.
We are developing software-level frameworks to orchestrate NDP/PIM architectures with various next-generation technologies, e.g., CXL and 6G. Our research also extends to ML-driven system software that optimizes processing environments, enabling robust cross-platform power and performance prediction as well as resource-usage characterization for edge devices. This approach not only augments computational efficiency but also harnesses the transformative potential of ML in evolving computing architectures.
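As one illustration of the ML-driven prediction direction, the sketch below fits a regression model that maps performance-counter-style features to a target platform's runtime. The synthetic data, the chosen features, and the use of gradient boosting are assumptions made only for this example.

```python
# Minimal sketch of cross-platform performance prediction: a regressor
# learns to map profiling features from a "source" platform to runtimes
# observed on a "target" platform (all data here is synthetic).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Fake source-platform profiles: instruction count, cache misses, memory traffic.
X = rng.uniform(size=(n, 3)) * [1e9, 1e7, 1e8]

# Fake target-platform runtime, loosely tied to the same counters plus noise.
y = 2e-9 * X[:, 0] + 5e-8 * X[:, 1] + 1e-8 * X[:, 2] + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(f"R^2 on held-out workloads: {model.score(X_te, y_te):.3f}")
```

In a real deployment, the features would come from hardware counters or simulator traces, and the model would be validated against measurements on the actual target device.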

Selected Publications

  1. S. Lee, J. Park, H. Minho, K. Byungil, P. Kyoung, and Y. Kim, "Sidekick: Near Data Processing for Clustering Enhanced by Automatic Memory Disaggregation," in 2023 60th ACM/IEEE Design Automation Conference (DAC), IEEE, 2023.
  2. Jongho Park, Hyukjun Kwon, Seowoo Kim, Junyoung Lee, Minho Ha, Euicheol Lim, Mohsen Imani, and Yeseong Kim, "QuiltNet: Efficient Deep Learning Inference on Multi-Chip Accelerators Using Model Partitioning," in IEEE/ACM Design Automation Conference (DAC), 2022.
  3. Yeseong Kim, Mohsen Imani, Saransh Gupta, Minxuan Zhou, and Tajana S. Rosing, "Massively Parallel Big Data Classification on a Programmable Processing In-Memory Architecture," in IEEE/ACM International Conference on Computer Aided Design (ICCAD), 2021.
  4. M. Imani, S. Pampana, S. Gupta, M. Zhou, Y. Kim*, and T. Rosing, "DUAL: Acceleration of Clustering Algorithms Using Digital-Based Processing In-Memory," in 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2020. (*Co-corresponding author)