Seonghyeon Ye

KAIST Graduate School of AI

[Mail] [GitHub] [Google Scholar] [Twitter]

Hello, I am a second-year Ph.D. student at the KAIST Graduate School of AI. I am advised by Professor Minjoon Seo and am a member of LKLab (Language & Knowledge Lab).

I am currently interested in making Large Language Models (LLMs) follow human instructions. Recently, I have been working on using LLMs as policies for general low-level embodied tasks.


Publications


2024

FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets

Seonghyeon Ye*, Doyoung Kim*, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo ( * denotes equal contribution)
ICLR 2024 (Spotlight)
[paper][code]

Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis

Sohee Yang, Jonghyeon Kim, Joel Jang, Seonghyeon Ye, Hyunji Lee, Minjoon Seo
TACL 2024
[paper][code]

Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following

Seonghyeon Ye, Hyeonbin Hwang, Sohee Yang, Hyeongu Yun, Yireun Kim, Minjoon Seo
AAAI 2024
[paper][code]


2023

Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models

Yujin Kim, Jaehong Yoon, Seonghyeon Ye, Sung Ju Hwang, Se-young Yun
SyntheticData4ML Workshop @ NeurIPS 2023 (Oral)
[paper]

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-tuning

Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo
EMNLP 2023
[paper][code]

Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt

Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo
EMNLP 2023 Findings
[paper][code]

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo
ICML 2023
[paper][code]

Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners

Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo
ICLR 2023
[paper][code][demo]

SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation

Seonghyeon Ye*, Yongrae Jo*, Doyoung Kim*, Sungdong Kim, Hyeonbin Hwang, Minjoon Seo ( * denotes equal contribution)
Blog post
[blog][code]


2022

Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts

Joel Jang*, Seonghyeon Ye*, Minjoon Seo ( * denotes equal contribution)
Transfer Learning for NLP Workshop @ NeurIPS 2022
[paper][code]

TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models

Joel Jang*, Seonghyeon Ye*, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo ( * denotes equal contribution)
EMNLP 2022
[paper][code]

Towards Continual Knowledge Learning of Language Models

Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo
ICLR 2022
[paper][code]


2021

Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning

Seonghyeon Ye, Jiseon Kim, Alice Oh
EMNLP 2021 (short)
[paper][code]

Dimensional Emotion Detection from Categorical Emotion

Sungjoon Park, Jiseon Kim, Seonghyeon Ye, Jaeyeol Jeon, Hee Young Park, Alice Oh
EMNLP 2021
[paper][code]



Education


Mar 2022 - Present KAIST Graduate School of AI
Mar 2017 - Aug 2021 KAIST School of Computing (B.S.)

Academic Services


Reviewer: EMNLP 2022, ICLR 2023, ACL 2023, NeurIPS 2023, EMNLP 2023, ICLR 2024


Work Experience


Jul 2022 - Mar 2023 LG AI Research (Research Intern)
Host: Joongbo Shin
Jul 2021 - Feb 2022 LKLab (Research Intern)
Host: Minjoon Seo
Jul 2020 - Jun 2021 U&I Lab (Research Intern)
Host: Alice Oh
Dec 2019 - Feb 2020 ELICE (Frontend Engineer)

Teaching Experience


Spring 2021 CS101 Introduction to Programming