Minsu Kim

Mila - Quebec AI Institute and KAIST.


I am a postdoctoral researcher at the KAIST-Mila Prefrontal AI Research Center.

The center focuses on System 2 Deep Learning and is a collaborative effort between KAIST and Mila. Our research topics include prefrontal AI, safety-guaranteed AGI, and AI for Science.


Background

I received my Ph.D. from KAIST under the guidance of Prof. Jinkyoo Park.

During my Ph.D., I had the privilege of collaborating with several professors and their research groups.

Before pursuing my Ph.D., I completed my master’s degree under the supervision of Prof. Joungho Kim, an expert in designing 3D ICs (e.g., HBM) for signal and power integrity (SI/PI) performance.

Research statement

I am focused on advancing reasoning in deep learning, particularly in large language models and scientific discovery. My short-term research aims to fine-tune large models using Bayesian posterior inference, leveraging GFlowNets’ off-policy amortized inference. Long-term, I’m interested in System 2 deep learning, developing world models that measure uncertainty, represent causal relationships, and support sequential reasoning for planning. These models should also identify risks, enabling the creation of safety-guaranteed pessimistic agents.

I’m also keen on combinatorial optimization (CO) and NP-hard problems, often integrating deep learning with CO techniques like local search and tree search. I see a strong link between these areas and System 2 deep learning, particularly in combinatorial reasoning, and I’m eager to explore this further.
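
To make the posterior-inference point above a bit more concrete, here is a minimal sketch of a trajectory-balance-style GFlowNet objective for amortized sampling from an unnormalized target such as a Bayesian posterior. This is an illustration under my own simplifying assumptions, not code from any of the papers listed below; all function and variable names are placeholders.

```python
# Minimal sketch of a trajectory-balance-style GFlowNet objective (illustrative only;
# names such as trajectory_balance_loss, log_pf_sum, etc. are placeholders).
import torch

log_Z = torch.nn.Parameter(torch.zeros(1))  # learnable estimate of the log partition function

def trajectory_balance_loss(log_pf_sum, log_pb_sum, log_reward):
    """Squared trajectory-balance residual for one sampled trajectory.

    log_pf_sum : sum of log P_F(s_{t+1} | s_t) along the trajectory (forward policy being trained)
    log_pb_sum : sum of log P_B(s_t | s_{t+1}) along the trajectory (backward policy)
    log_reward : log R(x) of the terminal object; for Bayesian posterior inference this
                 would be log prior(x) + log likelihood(data | x), up to a constant
    """
    return (log_Z + log_pf_sum - log_reward - log_pb_sum).pow(2)

# Toy usage: in practice the log-probabilities would come from a (large) policy model.
loss = trajectory_balance_loss(torch.tensor(-3.2), torch.tensor(-2.9), torch.tensor(1.5))
loss.backward()  # gradients flow into log_Z and, in practice, the policy parameters
```

Because the objective only evaluates log-probabilities on whatever trajectory it is handed, training trajectories can come from a replay buffer, local search, or another behavior policy, which is the off-policy property mentioned above.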

Detailed research topics

My research methodology includes:

  • GFlowNets (e.g., better exploration and credit assignment for GFlowNets)
  • Diffusion Models (e.g., discrete diffusion and Boltzmann generators)
  • Deep Reinforcement Learning (e.g., replay training for sample-efficient DRL)

My research applications include:

  • Scientific discovery (e.g., de novo discovery of small molecular graphs)
  • Hardware design optimization (e.g., placement of decoupling capacitors and channel routing)
  • Combinatorial optimization (e.g., vehicle routing, scheduling, and graph covering)
  • Alignment of large multimodal models (e.g., fine-tuning text-to-image models with human feedback)
  • Alignment of large language models (e.g., red-teaming with safety tuning, RLHF, and amortizing chain-of-thought)

Research during my master’s degree

One perhaps surprising fact about my background is that I worked on hardware system design and analysis from 2020 to 2022, during my master’s degree. My focus was on signal integrity and power integrity in 2.5D/3D semiconductor architectures, including high-bandwidth memory (HBM) modules, and I developed deep learning algorithms to automate and optimize hardware layout design and device placement. This experience gave me a deep understanding of computing systems and HBM, both crucial for AI computing, as well as practical experience in applying deep learning methods to hardware optimization problems.

Education

  • Ph.D. at KAIST IE
    • Advisor: Jinkyoo Park
    • 2022.Mar ~ 2025.Feb
  • M.S. at KAIST EE
    • Advisor: Joungho Kim
    • 2020.Mar ~ 2022.Feb
  • B.S. at KAIST, Math and CS (Dual Degree)
    • 2015.Mar ~ 2020.Feb

Awards

  • KAIST Presidential Best Ph.D. Thesis Award
  • Google Conference Scholarship for ICLR 2024 (as first author of the paper “Local Search GFlowNets”)
  • Qualcomm Innovation Fellowship Award 2023 Korea (as first author of the paper “Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization”)
  • NeurIPS 2022 Scholar Award (Travel Grant)
  • DesignCon 2022 Best Paper Award (as second author; paper led by Haeyeon Rachel Kim)
  • DesignCon 2022 Best Paper Award (as second author; paper led by Seonguk Choi)
  • DesignCon 2021 Best Paper Award (as first author)
  • IEEE EDAPS 2020 Best Student Paper Award (as second author; paper led by Kyungjune Son)

Academic activities

  • Reviewer (Conference): NeurIPS, ICML, ICLR, AISTATS, AAAI, IJCAI, Learning on Graphs (LoG)
  • Reviewer (Journal): IEEE Transactions on Neural Networks and Learning Systems (TNNLS), IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
  • Senior Reviewer: Reinforcement Learning Conference (RLC), Reinforcement Learning Journal (RLJ)

news

Feb 14, 2025 I received my Ph.D. degree, together with the KAIST Presidential Best Ph.D. Thesis Award.
Jan 12, 2025 4 papers accepted at ICLR 2025!
Sep 12, 2024 4 main-track papers and 6 workshop papers were accepted at NeurIPS 2024!
May 21, 2024 I received a postdoc offer from Professor Yoshua Bengio at Mila – Quebec AI Institute.
Dec 01, 2023 I received the Qualcomm Innovation Fellowship Award.


selected publications

  1. Thesis
    Off-policy Training Methods for Probabilistic Agents in Combinatorial Space
    Minsu Kim
    Korea Advanced Institute of Science and Technology (KAIST), 2025
  2. AISTATS
    Ant Colony Sampling with GFlowNets for Combinatorial Optimization
    Minsu Kim*, Sanghyeok Choi*, Jiwoo Son, Hyeonah Kim, Jinkyoo Park, and Yoshua Bengio
    International Conference on Artificial Intelligence and Statistics, 2025
  3. ICLR
    Adaptive Teachers for Amortized Samplers
    Minsu Kim*, Sanghyeok Choi*, Taeyoung Yun, Emmanuel Bengio, Leo Feng, Jarrid Rector-Brooks, Sungsoo Ahn, Jinkyoo Park, Nikolay Malkin, and Yoshua Bengio
    International Conference on Learning Representations, 2025
  4. ICML
    Learning to Scale Logits for Temperature-Conditional GFlowNets
    Minsu Kim*, Joohwan Ko*, Taeyoung Yun*, Dinghuai Zhang, Ling Pan, Woochang Kim, Jinkyoo Park, Emmanuel Bengio, and Yoshua Bengio
    International Conference on Machine Learning, 2024
  5. ICLR
    Local Search GFlowNets
    Minsu Kim, Taeyoung Yun, Emmanuel Bengio, Dinghuai Zhang, Yoshua Bengio, Sungsoo Ahn, and Jinkyoo Park
    International Conference on Learning Representations, 2024
  6. NeurIPS
    Bootstrapped Training of Score-Conditioned Generator for Offline Design of Biological Sequences
    Minsu Kim, Federico Berto, Sungsoo Ahn, and Jinkyoo Park
    Advances in Neural Information Processing Systems, 2023
  7. NeurIPS
    Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization
    Minsu Kim, Junyoung Park, and Jinkyoo Park
    Advances in Neural Information Processing Systems, 2022
  8. NeurIPS
    Learning collaborative policies to solve NP-hard routing problems
    Minsu Kim, Jinkyoo Park, and Joungho Kim
    Advances in Neural Information Processing Systems, 2021