I am a Master's student at Korea University, advised by Professor Jaewoo Kang. My research focuses on making language models more interpretable, safe, and reliable.
🚀 Starting October 2025, I will be joining Georgia Tech as a visiting researcher, working with Professor Alan Ritter. I'll primarily focus on LLM moderation.
Research Areas: Natural Language Processing, AI Safety, Model Interpretability
Email  /  CV  /  Google Scholar  /  LinkedIn  /  X  /  GitHub
We present Retriever's Preference Optimization (RetPO), which optimizes a language model (LM) to reformulate search queries in line with the preferences of target retrieval systems.
We propose CompAct, a novel framework that employs an active strategy for compressing extensive documents, dynamically preserving query-related contexts and integrating information across documents.
We introduce ETHIC, a new benchmark to evaluate the ability of large language models on long-context tasks that require high information coverage.
We introduce Temporal Heads, specific model components responsible for recalling time-specific information, identified through an investigation into the mechanisms by which language models perform temporal reasoning.
We introduce ChroKnowledge, a comprehensive benchmark designed to evaluate the chronological knowledge of language models across diverse domains.
We release Meerkat-7B, a new medical LM and the first 7B-parameter model to pass the United States Medical Licensing Examination (USMLE). This study demonstrates that small language models can achieve enhanced medical reasoning abilities through targeted training on specialized medical textbooks.
Template based on Jon Barron's website.