“Design cognitive machine learning based on a human memory model”
Hyperdimensional (HD) computing is an alternative computing method that processes cognitive tasks in a lightweight and error-tolerant way, grounded in theoretical neuroscience. The mathematical properties of high-dimensional spaces show remarkable agreement with brain behaviors. HD computing therefore emulates human cognition by computing with high-dimensional vectors, called hypervectors, instead of traditional numeric types such as integers and booleans. With concrete hypervector arithmetic, we enable various pattern-based computations such as memorizing and reasoning, similar to how humans think. Our group develops diverse learning and cognitive computing techniques based on HD computing, focusing on typical ML tasks, neuro-symbolic AI, and acceleration in next-generation computing environments.
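The hypervector arithmetic mentioned above can be illustrated with a minimal sketch. The code below (an illustrative example, not the group's actual implementation) uses bipolar hypervectors with the standard HD operations of binding, bundling, and cosine-style similarity to memorize a key-value record and query it back:

```python
import numpy as np

D = 10000  # hypervector dimensionality
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector (+1/-1 entries)."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (element-wise multiply): associates two hypervectors."""
    return a * b

def bundle(*hvs):
    """Bundling (element-wise majority vote): superposes hypervectors."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Normalized dot product; near 0 for unrelated random hypervectors."""
    return np.dot(a, b) / D

# Memorize the record {color: red, shape: circle} as one hypervector.
color, red = random_hv(), random_hv()
shape, circle = random_hv(), random_hv()
record = bundle(bind(color, red), bind(shape, circle))

# Query: binding the record with `color` again (self-inverse) recovers
# a vector much closer to `red` than to any unrelated hypervector.
recovered = bind(record, color)
```

Because random hypervectors are quasi-orthogonal in such high dimensions, `similarity(recovered, red)` is large while `similarity(recovered, circle)` stays near zero, which is what makes this style of memorization robust to noise.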
“Systems for ML and ML for Systems”
Machine Learning (ML) has gained popularity as an autonomous solution that extracts useful information and learns patterns from data collections. We rethink the role of ML for systems in various ways to design alternative system solutions. Near-data processing (NDP) and processing-in-memory (PIM) architectures are among the technologies our group focuses on. We utilize computation-enabled memory to achieve high power and performance efficiency for deep learning and large-scale AI. We are developing software-level frameworks to orchestrate NDP/PIM architectures with various next-generation technologies, e.g., CXL and 6G. We also explore ML-driven system software for efficient processing environments. Our efforts include cross-platform power/performance prediction and resource usage characterization for edge devices based on ML.
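As a toy illustration of ML-driven power prediction, the sketch below fits a linear model that maps hardware-counter features to power draw. The counter names, the synthetic data, and the linear model are all assumptions for illustration; real cross-platform predictors would use measured counters and may use richer models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: per-workload hardware counters (e.g.,
# instructions retired, cache misses, memory traffic, all normalized)
# paired with power measured on a target edge device.
n = 200
counters = rng.uniform(0.0, 1.0, size=(n, 3))
true_w = np.array([2.0, 0.5, 1.5])          # synthetic ground truth
power = counters @ true_w + 0.3 + rng.normal(0.0, 0.01, size=n)

# Fit a linear power model with ordinary least squares.
X = np.hstack([counters, np.ones((n, 1))])  # append a bias column
w, *_ = np.linalg.lstsq(X, power, rcond=None)

# Predict power for a new workload's counter profile (with bias term).
new_profile = np.array([0.8, 0.2, 0.5, 1.0])
predicted_power = new_profile @ w
```

The appeal of such models for systems work is that they are cheap to evaluate online, so a scheduler or DVFS governor can consult them per decision without noticeable overhead.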
“Developing lightweight learning for low-power edge devices”
We build state-of-the-art learning software and hardware for low-power devices. This includes the design of self-learning systems capable of autonomous sensing, learning, and actuation on diverse IoT platforms. We are exploring ideas beyond traditional learning, such as brain-signal-based learning and virtual neural connectomes. To realize accelerated learning on resource-limited devices, we utilize various low-power parallel computing platforms. Current platform candidates include NVIDIA CUDA for high performance and ARM Helium, a recently introduced vector extension for the low-power M-Profile architecture. We focus on both software and hardware, including efficient reinforcement learning and intelligent uses of new computing paradigms. The promise of very-low-cost realization of cognitive functions is a prime enabler for application domains such as self-controlling appliances and self-driving cars.
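To make the "efficient reinforcement learning on resource-limited devices" direction concrete, here is a minimal tabular Q-learning sketch on a toy 1-D corridor task (the environment, reward, and hyperparameters are illustrative assumptions, not the group's actual workloads). Tabular methods like this need only a small array of values, which is why they fit on microcontroller-class hardware:

```python
import numpy as np

# Toy corridor: the agent starts at cell 0 and is rewarded for
# reaching the last cell. Actions: 0 = step left, 1 = step right.
n_states, n_actions = 5, 2
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))  # tiny table: fits in a few KB

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection.
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Standard Q-learning update.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# The learned greedy policy steps right from every non-terminal state.
policy = Q.argmax(axis=1)
```

On an actual edge deployment, the inner update (a handful of multiply-adds per step) is exactly the kind of kernel that vector extensions such as ARM Helium can accelerate.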