Yongyuan Liang

Yongyuan (Cheryl) Liang's research focuses on building robust, versatile, and efficient intelligent agents with strong generalization capacity.
Her work spans both theoretical frameworks and empirical studies, with specific research interests in:

  • Reinforcement Learning (robust RL, offline RL, IRL, RLHF)
  • Foundation models for policy learning (generative models, representation)
  • Trustworthy LLM agents for planning and reasoning


Email  /  Google Scholar  /  Github /  Twitter

I am a PhD student in Computer Science at UMD, advised by Furong Huang, and I also work closely with Huazhe Xu (IIIS). I received my B.S. degree in Mathematics from Sun Yat-sen University, where I developed my interests in stochastic processes and game theory.

I'm always happy to collaborate with graduate/undergraduate students. Please drop me an email if you'd like to have a (virtual) coffee chat :)
I'm looking for part-time/full-time internship opportunities. Feel free to reach out if you're interested in my research.

June '24  

ACE was selected for a long oral presentation at ICML 2024.

May '24  

Two papers to appear in ICML 2024.

Feb '24  

Awarded a Dean's Fellowship.

Jan '24  

Three papers to appear in ICLR 2024, including two spotlights and one poster.


Selected Publications & Preprints

* denotes equal contribution; § indicates mentoring involvement.

Make-An-Agent: A Generalizable Policy Network Generator with Behavior-Prompted Diffusion
Yongyuan Liang, Tingqiang Xu, Kaizhe Hu, Guangqi Jiang, Furong Huang, Huazhe Xu

arXiv, 2024
Project Page  /  Paper  /  Code /  Models & Dataset /  Twitter

Is poisoning a real threat to LLM alignment? Maybe more so than you think
Pankayaraj Pathmanathan, Souradip Chakraborty, Xiangyu Liu, Yongyuan Liang, Furong Huang

arXiv, 2024
ICML Workshop on Models of Human Feedback for AI Alignment, 2024
Paper  /  Code

ACE: Off-Policy Actor-Critic with Causality-Aware Entropy Regularization
Tianying Ji*, Yongyuan Liang*, Yan Zeng, Yu Luo, Guowei Xu, Jiawei Guo, Ruijie Zheng, Furong Huang, Fuchun Sun, Huazhe Xu

ICML, 2024 (Oral - Top 1.5%)
Project Page  /  Paper  /  Code /  Twitter

PREMIER-TACO is a Few-Shot Policy Learner: Pretraining Multitask Representation via Temporal Action-Driven Contrastive Loss
Ruijie Zheng, Yongyuan Liang, Xiyao Wang, Shuang Ma, Hal Daumé III, Huazhe Xu, John Langford, Praveen Palanisamy, Kalyan Basu, Furong Huang

ICML, 2024
NeurIPS Workshop FMDM, 2023
Project Page  /  Paper  /  Code /  Twitter

DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization
Guowei Xu*, Ruijie Zheng*, Yongyuan Liang*, Xiyao Wang, Zhecheng Yuan, Tianying Ji, Yu Luo, Xiaoyu Liu, Jiaxin Yuan, Pu Hua, Shuzhen Li, Yanjie Ze, Hal Daumé III, Furong Huang, Huazhe Xu

ICLR, 2024 (Spotlight - Top 5%)
CORL Workshop PRL, 2023
Project Page  /  Paper  /  Code /  Twitter

Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations
Yongyuan Liang, Yanchao Sun, Ruijie Zheng, Xiangyu Liu, Benjamin Eysenbach, Tuomas Sandholm, Furong Huang, Stephen Marcus McAleer

ICLR, 2024
ICML Workshop AdvML-Frontiers, 2023
Paper  /  Code /  Twitter

Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning
Yongyuan Liang*, Yanchao Sun*, Ruijie Zheng, Furong Huang

NeurIPS, 2022
NeurIPS Workshop SafeRL, 2021 (Spotlight Talks)
Paper  /  Code /  Slides

Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies
Xiangyu Liu*, Chenghao Deng*, Yanchao Sun, Yongyuan Liang, Furong Huang

ICLR, 2024 (Spotlight - Top 5%)
NeurIPS Workshop MASEC, 2023
Project Page /  Paper /  Code /  Twitter

Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL
Yanchao Sun, Ruijie Zheng, Yongyuan Liang, Furong Huang

ICLR, 2022
NeurIPS Workshop SafeRL, 2021 (Best Paper Award)
Project Page /  Paper /  Code

Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems
Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, Yongyuan Liang, Soheil Feizi, Sumitra Ganesh, Furong Huang

ICLR, 2023
Paper  /  Code


Professional Service

Conference Reviewer: ICML (2022, 2023, 2024), NeurIPS (2021, 2022, 2023, 2024), ICLR (2021, 2022, 2023, 2024), AAAI (2020)

Workshop Program Committee: FMDM 2023 at NeurIPS


Misc

If my name is tricky to pronounce, you're welcome to call me Cheryl [ˈʃerəl].

I've been playing the violin🎻 for over 15 years and served as a principal violinist in the university orchestra.

I've been a fan of Novak Djokovic since 2012.

My Erdős number = 4.





© Yongyuan Liang