Chanwoong Yoon

I am a Master's student at Korea University, advised by Professor Jaewoo Kang. My research focuses on making language models more interpretable, safer, and more reliable.

🚀 Starting October 2025, I will join Georgia Tech as a visiting researcher working with Professor Alan Ritter, focusing primarily on LLM moderation.

Research Areas: Natural Language Processing, AI Safety, Model Interpretability

Email  /  CV  /  Google Scholar  /  LinkedIn  /  X  /  GitHub



Selected Publications


Ask Optimal Questions: Aligning Large Language Models with Retriever's Preference in Conversational Search
Chanwoong Yoon*, Gangwoo Kim*, Byeongguk Jeon, Sungdong Kim, Yohan Jo, Jaewoo Kang
NAACL 2025 Findings.

Paper / Code

We present Retriever's Preference Optimization (RetPO), which optimizes a language model (LM) to reformulate search queries in line with the preferences of the target retrieval systems.


CompAct: Compressing Retrieved Documents Actively for Question Answering
Chanwoong Yoon, Taewhoo Lee, Hyeon Hwang, Minbyul Jeong, Jaewoo Kang
EMNLP 2024 Main.

Paper / Code

We propose CompAct, a novel framework that employs an active strategy for compressing extensive documents, dynamically preserving query-related context while integrating information across documents.


ETHIC: Evaluating Large Language Models on Long-Context Tasks with High Information Coverage
Taewhoo Lee, Chanwoong Yoon, Kyubin Jang, Dongwoo Lee, Minwoo Song, Hyunsouk Kim, Jaewoo Kang
NAACL 2025 Main.

Paper / Code

We introduce ETHIC, a new benchmark for evaluating large language models on long-context tasks that require high information coverage.


Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information
Yein Park, Chanwoong Yoon, Juhee Park, Minbyul Jeong, Jaewoo Kang
ACL 2025 Main.

Paper / Code

We investigate the mechanisms by which language models recall time-specific information and identify Temporal Heads, specific components responsible for temporal reasoning.


ChroKnowledge: Unveiling Chronological Knowledge of Language Models in Multiple Domains
Chanwoong Yoon, Yejin Park, Juhee Park, Jaewoo Kang
ICLR 2025.

Paper / Code

We introduce ChroKnowledge, a comprehensive benchmark designed to evaluate the chronological knowledge of language models across multiple domains.


Small language models learn enhanced reasoning skills from medical textbooks

npj Digital Medicine.

Paper / Model

We released Meerkat-7B, a new medical LM that passed the United States Medical Licensing Examination (USMLE) for the first time among 7B-parameter models. This study demonstrates that small language models can achieve enhanced medical reasoning abilities through targeted training on specialized medical textbooks.


Awards and Honors

  • Research Fund Recipient, Korea Institute for Advancement of Technology (KIAT), May 2025
    - Selected to receive over USD 21,000 in research support
  • Outstanding Research Paper Award, Korea University, Feb. 2025
    - For "CompAct: Compressing Retrieved Documents Actively for Question Answering" (EMNLP 2024)
  • Encouragement Award, KIISE Korea Computer Congress Competition, Jul. 2022
  • Merit-based Scholarships (50% tuition), Hanyang University, Fall 2020 – Spring 2022

Academic Service

  • Reviewer: ICLR 2026
  • Secondary Reviewer: ACL Rolling Review (Dec. 2025), ACL 2025

Template based on Jon Barron's website.