Minsu Kim

I am a postdoctoral researcher at the KAIST-Mila Prefrontal AI Research Center, a collaborative effort between KAIST and Mila focused on System 2 Deep Learning. Our research topics include prefrontal AI, safety-guaranteed AGI, and AI for Science.
Background
I received my Ph.D. from KAIST under the guidance of Prof. Jinkyoo Park.
During my Ph.D., I’ve had the privilege of collaborating with several esteemed professors and their research groups:
- I have been working with Prof. Sungsoo Ahn on generative models for scientific discovery, including our joint efforts on GFlowNets with his student, Hyosoon Jang.
- I have also engaged deeply with researchers at Mila, visiting Prof. Yoshua Bengio’s group in person from December 2023 to May 2024. Throughout this period, I was fortunate to collaborate with many talented researchers, including Emmanuel Bengio, Nikolay Malkin, Seanie Lee, Moksh Jain, Siddarth Venkatraman, and Dinghuai Zhang, among others.
Before pursuing my Ph.D., I completed my master’s degree under the supervision of Prof. Joungho Kim, an expert in designing 3D ICs (e.g., HBM) for signal and power integrity (SI/PI) performance.
Research statement
My research focuses on advancing reasoning in deep learning, particularly for large language models and scientific discovery. In the short term, I aim to fine-tune large models using Bayesian posterior inference, leveraging the off-policy amortized inference of GFlowNets. In the long term, I am interested in System 2 deep learning: building world models that quantify uncertainty, represent causal relationships, and support sequential reasoning for planning. Such models should also identify risks, enabling safety-guaranteed pessimistic agents.
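As a rough sketch of what this looks like in practice (the standard trajectory-balance objective for GFlowNets, written here for sampling from a reward-shaped posterior; the notation is generic and purely illustrative):

$$
\mathcal{L}_{\mathrm{TB}}(\tau; \theta) = \left( \log \frac{Z_\theta \prod_{t=0}^{n-1} P_F(s_{t+1} \mid s_t; \theta)}{R(x) \prod_{t=0}^{n-1} P_B(s_t \mid s_{t+1})} \right)^2
$$

Here $\tau = (s_0 \rightarrow \cdots \rightarrow s_n = x)$ is a generation trajectory, $R(x)$ is an unnormalized posterior (e.g., a prior times a likelihood or reward), and the trajectories used for training can be drawn off-policy, from replay buffers, local search, or exploratory policies, which is what makes GFlowNets' amortized inference attractive for fine-tuning.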
I’m also keen on combinatorial optimization (CO) and NP-hard problems, often integrating deep learning with CO techniques like local search and tree search. I see a strong link between these areas and System 2 deep learning, particularly in combinatorial reasoning, and I’m eager to explore this further.
Detailed research topics
My research methods include:
- GFlowNets (e.g., improved exploration and credit assignment for GFlowNets)
- Diffusion Models (e.g., discrete diffusion and Boltzmann generators)
- Deep Reinforcement Learning (e.g., replay training for sample-efficient DRL)
My research applications include:
- Scientific discovery (e.g., de novo design of small molecular graphs)
- Hardware design optimization (e.g., decoupling capacitor placement and channel routing)
- Combinatorial optimization (e.g., vehicle routing, scheduling, and graph covering)
- Alignment of large multimodal models (e.g., fine-tuning text-to-image models with human feedback)
- Alignment of large language models (e.g., red-teaming with safety tuning, RLHF, and amortizing chain-of-thought reasoning)
Research during my master’s degree
One surprising fact about my background is that I worked in hardware system design and analysis from 2020 to 2022, during my master’s degree. My focus was on signal integrity and power integrity in 2.5D/3D semiconductor architectures, including high-bandwidth memory (HBM) modules. I developed deep learning algorithms to automate and optimize hardware layout design and device placement. These experiences gave me a deep understanding of computing systems and HBM, which are crucial for AI computing, as well as practical experience in applying deep learning methods to hardware optimization problems.
Education
- Ph.D. at KAIST IE
- Advisor: Jinkyoo Park
- 2022.Mar ~ 2025.Feb
- M.S. at KAIST EE
- Advisor: Joungho Kim
- 2020.Mar ~ 2022.Feb
- B.S. at KAIST, Math and CS (Dual Degree)
- 2015.Mar ~ 2020.Feb
Awards
- KAIST Presidential Best Ph.D. Thesis Award
- Google Conference Scholarship for ICLR 2024 (as first author of the paper “Local Search GFlowNets”)
- Qualcomm Innovation Fellowship Award 2023 Korea (as first author of the paper “Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization”)
- NeurIPS 2022 Scholar Award (Travel Grant)
- DesignCon 2022 Best Paper Award (as second author, on a paper led by Haeyeon Rachel Kim)
- DesignCon 2022 Best Paper Award (as second author, on a paper led by Seonguk Choi)
- DesignCon 2021 Best Paper Award (as first author)
- IEEE EDAPS 2020 Best Student Paper Award (as second author, on a paper led by Kyungjune Son)
Academic activities
- Reviewer (Conference): NeurIPS, ICML, ICLR, AISTATS, AAAI, IJCAI, Learning on Graphs (LoG)
- Reviewer (Journal): IEEE Transactions on Neural Networks and Learning Systems (TNNLS), IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
- Senior Reviewer: Reinforcement Learning Conference (RLC), Reinforcement Learning Journal (RLJ)
News
| Date | News |
|---|---|
| Feb 14, 2025 | I received my Ph.D. with the KAIST Presidential Best Ph.D. Thesis Award. |
| Jan 12, 2025 | 4 papers accepted at ICLR 2025! |
| Sep 12, 2024 | 4 main-track papers and 6 workshop papers accepted at NeurIPS 2024! |
| May 21, 2024 | I received a postdoc offer from Prof. Yoshua Bengio at Mila – Quebec AI Institute. |
| Dec 01, 2023 | I received the Qualcomm Innovation Fellowship Award. |
Latest posts
| Date | Post |
|---|---|
| Feb 04, 2025 | Bayesian Machine Learning and GFlowNets in the Era of LLMs |
Selected publications
- Adaptive Teachers for Amortized Samplers. International Conference on Learning Representations (ICLR), 2025.
- Bootstrapped Training of Score-Conditioned Generator for Offline Design of Biological Sequences. Advances in Neural Information Processing Systems (NeurIPS), 2023.