Research

Postdoc at KAIST AMILab. Designing brain-inspired cognitive architectures for systematic generalization and continual learning.

My work sits at the intersection of cognitive neuroscience and AI. The central question driving everything: what computational principles does biological intelligence use to generalize so gracefully, and can we build AI systems around those same principles?

I approach this through a bidirectional loop:

  1. AI for Neuroscience – using computer vision and generative models as "practical microscopes" to decode neural and behavioral data that were previously too complex to quantify.
  2. Neuroscience for AI – drawing on canonical neural computations (grid cells, cortical columns, complementary learning systems) to architect AI that generalizes more robustly.

Core Principles

Three mechanistic ideas from neuroscience anchor everything I build:

  1. Universal Reference Frames – Abstract knowledge anchored to stable spatial representations (inspired by hippocampal grid cells and the Thousand Brains theory).
  2. Predictive Modeling in Canonical Circuits – World models learned through local prediction, mirroring cortical column computations.
  3. Structure / Content Factorization – Separating reusable structure (the "grammar") from variable content (the "words") to enable compositional generalization and lifelong learning without catastrophic forgetting.
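The third principle can be illustrated with a toy sketch. Everything below is hypothetical (the encoder, roles, and features are made up for illustration): structure and content are kept as separate factors, and an outer product binds a relational role to a filler, so a role from one item can be recombined with a filler from another item the system never saw paired.

```python
import numpy as np

# Toy factorized representation: each item is encoded as a pair
# (structure, content). Structure captures the reusable relational
# "grammar"; content carries the item-specific "words".
def encode(item):
    # Illustrative encoder: structure = one-hot relational role,
    # content = raw feature vector.
    role, features = item
    structure = np.eye(3)[role]          # 3 possible relational roles
    content = np.asarray(features, float)
    return structure, content

def recombine(structure, content):
    # Outer product binds a role to a filler without entangling them,
    # so either factor can be swapped independently.
    return np.outer(structure, content)

s_a, c_a = encode((0, [1.0, 2.0]))
s_b, c_b = encode((2, [5.0, 7.0]))

# Novel combinations never observed together:
novel_ab = recombine(s_a, c_b)   # role of A bound to filler of B
novel_ba = recombine(s_b, c_a)   # role of B bound to filler of A
```

The outer-product binding here is a deliberately minimal stand-in for whatever factorization mechanism the architecture actually learns; the point is that compositional generalization falls out of keeping the two factors separable.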

4-Stage Cognitive Architecture

These principles map onto a modular hierarchy:

| Stage | Name | Function |
| --- | --- | --- |
| I | Object-Centric Perception | Grounded object representations via Slot Attention + grid-cell reference frames |
| II | Predictive Abstraction | JEPA (Joint Embedding Predictive Architecture)-style prediction → discrete symbol conversion |
| III | Semantic Consolidation | Episodic → semantic knowledge integration (CLS theory) |
| IV | Metacognitive Control | MoE-PRM (Mixture-of-Experts Process Reward Model): dynamic routing between System 1 intuition and System 2 reasoning |
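The staged hierarchy above can be sketched as a modular pipeline. This is only an interface sketch, not the actual implementation: every class body below is a placeholder stand-in (split-based "slots", argmax "symbols", a count-based memory, a familiarity threshold for routing), chosen so each stage consumes the previous stage's output.

```python
import numpy as np

class PerceptionStage:                    # Stage I: object-centric perception
    def __call__(self, image):
        # Stand-in for Slot Attention: split the input into K "slots".
        K = 4
        return np.array_split(image.ravel(), K)

class AbstractionStage:                   # Stage II: predictive abstraction
    def __call__(self, slots):
        # Stand-in for JEPA-style latents -> discrete symbols (argmax code).
        return [int(np.argmax(s)) for s in slots]

class ConsolidationStage:                 # Stage III: semantic consolidation
    def __init__(self):
        self.semantic_memory = {}
    def __call__(self, symbols):
        # Integrate episodic symbols into cumulative semantic statistics.
        for s in symbols:
            self.semantic_memory[s] = self.semantic_memory.get(s, 0) + 1
        return self.semantic_memory

class ControlStage:                       # Stage IV: metacognitive control
    def __call__(self, memory):
        # Stand-in for MoE-PRM routing: familiar input -> fast System 1,
        # unfamiliar input -> deliberate System 2.
        return "system1" if max(memory.values()) > 1 else "system2"

pipeline = [PerceptionStage(), AbstractionStage(),
            ConsolidationStage(), ControlStage()]

x = np.arange(16.0).reshape(4, 4)         # toy "image"
out = x
for stage in pipeline:
    out = stage(out)
```

The design point the sketch makes is modularity: each stage is swappable behind a fixed interface, so a real Slot Attention or JEPA module could replace its placeholder without touching the rest of the stack.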

Research Themes

Three pillars connect the core principles above to my research trajectory.

Structured Representation & Memory Consolidation

Compositional generalization through structure / content factorization; context-sensitive coordination of working and long-term memory; episodic-to-semantic integration via complementary learning systems (CLS); representational hierarchies inspired by cortical columns; sparse and disentangled coding for continual learning.
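The episodic-to-semantic integration mentioned above can be sketched in miniature. Assumptions are loud here: a toy fast "hippocampal" buffer stores raw (x, y) episodes, a slow "neocortical" linear model takes small gradient steps on interleaved replay of old and new episodes (the core CLS idea), and the task, dimensions, and learning rate are all illustrative.

```python
import random
import numpy as np

random.seed(0)

class EpisodicBuffer:
    """Fast store: memorizes episodes verbatim after a single exposure."""
    def __init__(self):
        self.episodes = []
    def store(self, x, y):
        self.episodes.append((x, y))
    def replay(self, n):
        # Interleaved replay: a random mix of old and new episodes.
        return random.sample(self.episodes, min(n, len(self.episodes)))

class SemanticModel:
    """Slow learner: a linear model updated by small gradient steps."""
    def __init__(self, dim):
        self.w = np.zeros(dim)
    def update(self, batch, lr=0.1):
        for x, y in batch:
            x = np.asarray(x, float)
            err = y - self.w @ x
            self.w += lr * err * x        # one small step per replayed episode

buffer = EpisodicBuffer()
model = SemanticModel(dim=2)

# Two "tasks" arrive sequentially; replay keeps both alive in training.
for x, y in [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0)]:
    buffer.store(x, y)
for _ in range(200):
    model.update(buffer.replay(2))
```

Because replay interleaves both tasks during slow learning, the model converges to weights that satisfy both, rather than overwriting the first task with the second, which is the failure mode CLS-style consolidation is meant to avoid.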

Multi-modal Grounding via Reference Frames

Spatial anchoring through grid-cell-inspired coding; cross-modal binding of vision, language, and other sensory streams; world models learned through local prediction in canonical circuits; reference-frame-based generalization.
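A minimal sketch of what grid-cell-inspired spatial coding can look like, assuming an idealized setting: a 2-D position is encoded by banks of periodic responses at several spatial scales and orientations, loosely mirroring grid-cell modules. The scale ratios and orientation count below are illustrative choices, not fitted values.

```python
import numpy as np

def grid_code(pos, scales=(1.0, 1.6, 2.56), n_orient=3):
    """Encode a 2-D position as multi-scale periodic responses.

    One "module" per scale; within a module, grid axes sit 60 degrees
    apart, as in idealized grid-cell firing patterns.
    """
    pos = np.asarray(pos, float)
    code = []
    for s in scales:
        for k in range(n_orient):
            theta = k * np.pi / n_orient               # 0, 60, 120 degrees
            u = np.array([np.cos(theta), np.sin(theta)])
            phase = 2 * np.pi * (pos @ u) / s
            # cos/sin pair makes the code smooth and phase-complete.
            code.extend([np.cos(phase), np.sin(phase)])
    return np.array(code)

a = grid_code([0.3, 0.7])
b = grid_code([0.3, 0.7])   # same position -> identical code
c = grid_code([1.3, -0.2])  # different position -> different code
```

The combination of scales gives the code a large unambiguous range while each module stays locally precise, which is the property that makes such codes attractive as stable reference frames for cross-modal anchoring.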

Social & Context-Adaptive Cognition

Empathy and social inference grounded in perception–action coupling and theory of mind; context-conditioned representations that modulate behavior across situations; multi-agent interaction and active inference. This pillar grew out of rodent affective-empathy neuroscience and now informs human-aligned multi-modal AI.


Publications and CV: Google Scholar · CV