Minsu Kim


Starting in March 2025, I will be joining the KAIST-Mila Prefrontal AI Research Center as a postdoctoral researcher, jointly hosted at KAIST and Mila.

This research center focuses on System 2 Deep Learning, a collaborative effort between KAIST and Mila. Our research topics include prefrontal AI, safety-guaranteed AGI, and AI for Science.


Background

I am a Ph.D. candidate at KAIST, under the guidance of Prof. Jinkyoo Park, and a Collaborating Researcher at Mila, supervised by Prof. Yoshua Bengio.

During my Ph.D., I have had the privilege of collaborating with several esteemed professors and their research groups.

Before pursuing my Ph.D., I completed my master’s degree under the supervision of Prof. Joungho Kim, an expert in designing 3D ICs (e.g., HBM) for signal and power integrity (SI/PI) performance.

Research statement

I am focused on advancing reasoning in deep learning, particularly in large language models and scientific discovery. My short-term research aims to fine-tune large models using Bayesian posterior inference, leveraging GFlowNets’ off-policy amortized inference. Long-term, I’m interested in System 2 deep learning, developing world models that measure uncertainty, represent causal relationships, and support sequential reasoning for planning. These models should also identify risks, enabling the creation of safety-guaranteed pessimistic agents.
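As a concrete illustration, the off-policy amortized inference mentioned above is commonly trained with the trajectory balance objective for GFlowNets. Below is a minimal, self-contained sketch with toy numbers (the function name and values are my own illustration, not code from any specific paper):

```python
import numpy as np

def trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward):
    """Squared trajectory-balance residual for a single trajectory.

    log_Z      : scalar, learned estimate of the log partition function
    log_pf     : log P_F(s_{t+1} | s_t) for each forward step
    log_pb     : log P_B(s_t | s_{t+1}) for each backward step
    log_reward : scalar, log R(x) of the terminal object x
    """
    residual = log_Z + np.sum(log_pf) - log_reward - np.sum(log_pb)
    return residual ** 2

# Toy trajectory: when Z * prod P_F exactly matches R * prod P_B,
# the loss is zero (the balance condition holds).
log_pf = np.log([0.5, 0.5])   # two forward steps, probability 0.25 total
log_pb = np.log([1.0, 1.0])   # deterministic backward policy
loss = trajectory_balance_loss(log_Z=np.log(4.0), log_pf=log_pf,
                               log_pb=log_pb, log_reward=np.log(1.0))
```

In practice the residual is minimized by gradient descent over the forward policy, backward policy, and log Z; off-policy training is possible because the loss is well-defined for trajectories sampled from any distribution with full support.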

I’m also keen on combinatorial optimization (CO) and NP-hard problems, often integrating deep learning with CO techniques like local search and tree search. I see a strong link between these areas and System 2 deep learning, particularly in combinatorial reasoning, and I’m eager to explore this further.

Detailed research topics

My research methods include:

  • GFlowNets (e.g., better exploration and credit assignments for GFlowNets)
  • Diffusion Models (e.g., discrete diffusion and Boltzmann generator)
  • Deep Reinforcement Learning (e.g., replay training for sample efficient DRL)
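For instance, replay training builds on a standard experience replay buffer, which stores past transitions and samples them uniformly to decorrelate updates. A generic sketch (illustrative only, not the specific method from any of my papers):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity transition store; old entries are evicted FIFO."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation of
        # consecutive transitions, stabilizing value-function updates.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(150):
    buf.push(t, 0, 1.0, t + 1, False)   # toy transitions
batch = buf.sample(32)   # only the 100 newest transitions remain
```

Reusing each stored transition in many gradient updates is what makes replay-based methods more sample-efficient than purely on-policy training.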

My research applications include:

  • Scientific discovery (e.g., de novo discovery of small molecular graphs)
  • Hardware design optimization (e.g., placement of decoupling capacitors and channel routing)
  • Combinatorial optimization (e.g., vehicle routing, scheduling, and graph covering)
  • Alignment of large multimodal models (e.g., fine-tuning text-to-image models with human feedback)
  • Alignment of large language models (e.g., red-teaming with safety tuning, RLHF, and amortizing chain-of-thought)

Research during my master’s degree

One surprising fact about my background is that I worked in hardware system design and analysis from 2020 to 2022 during my master’s degree. My focus was on signal integrity and power integrity in 2.5D/3D semiconductor architectures, including high-bandwidth memory (HBM) modules. I developed advanced deep learning algorithms to automate and optimize hardware layout design and device placement. These experiences provided me with a deep understanding of computing systems and HBM, which are crucial for AI computing, as well as practical knowledge in using deep learning methods for hardware optimization challenges.

Education

  • Ph.D. Candidate at KAIST IE
    • Advisor: Jinkyoo Park
    • 2022.Mar ~ 2025.Feb (Expected)
  • M.S. at KAIST EE
    • Advisor: Joungho Kim
    • 2020.Mar ~ 2022.Feb
  • B.S. at KAIST, Math and CS (Dual Degree)
    • 2015.Mar ~ 2020.Feb

Awards

  • Google Conference Scholarship for ICLR 2024 (as first author of the paper “Local Search GFlowNets”)
  • Qualcomm Innovation Fellowship Award 2023 Korea (as first author of the paper “Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization”)
  • NeurIPS 2022 Scholar Award (Travel Grant)
  • DesignCon 2022 Best Paper Award (as second author; paper led by Haeyeon Rachel Kim)
  • DesignCon 2022 Best Paper Award (as second author; paper led by Seonguk Choi)
  • DesignCon 2021 Best Paper Award (as first author)
  • IEEE EDAPS 2020 Best Student Paper Award (as second author; paper led by Kyungjune Son)

Academic activities

  • Conference Reviewer: NeurIPS, ICML, ICLR, AISTATS, AAAI, IJCAI, Learning on Graphs (LoG)
  • Journal Reviewer: IEEE Transactions on Neural Networks and Learning Systems (TNNLS), IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)

News

Oct 11, 2024 I got 4 papers accepted at NeurIPS 2024 and 6 papers accepted at NeurIPS workshops.
May 16, 2024 I got two papers accepted at ICML 2024.
Apr 17, 2024 I received a Google Conference Scholarship (ICLR 2024, “Local Search GFlowNets”).
Jan 16, 2024 I got two papers accepted at ICLR 2024 (one spotlight and one oral).
Dec 9, 2023 I got a paper accepted at AAAI 2024.