Zixuan Ke

I am a research scientist at Salesforce AI Research. I earned my Ph.D. at the University of Illinois Chicago, where I was fortunate to be advised by Bing Liu (we continue to work closely together). Before that, I received my M.Sc. in Computer Science from the University of Texas at Dallas, under the guidance of Vincent Ng. I have also spent summers as a research intern at Google DeepMind, Meta AI, and Amazon Science.

Email  /  Google Scholar  /  Github  /  Twitter  /  LinkedIn  /  Blog

If you'd like to chat about research or anything else, please feel free to reach out via email or schedule a chat here. I'd be happy to connect!

profile photo

📰 News

June 2025: New preprint on multi-agent systems (MAS) and reasoning: MAS-Zero: Designing Multi-Agent Systems with Zero Supervision. Explore the 1,000+ discovered MAS designs in the collection!
May 2025: Tutorial at NAACL 2025: Adaptation of Large Language Models (recording, proposal, slides, and more!).
January 2025: Preprint on domain-adaptive post-training: Demystifying Domain-Adaptive Post-Training for Financial LLMs.

📚 Selected Publications & Preprints

Full list on Google Scholar. (* indicates equal contribution)

Agentic Systems and Reasoning

MAS-Zero: Designing Multi-Agent Systems with Zero Supervision
Zixuan Ke, Yifei Ming, Xuan-Phi Nguyen, Caiming Xiong, Shafiq Joty
arXiv, 2025
Project Page | arXiv | code

A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems
Zixuan Ke, Fangkai Jiao, Yifei Ming, Xuan-Phi Nguyen, Austin Xu, Do Xuan Long, Minzhi Li, Chengwei Qin, Peifeng Wang, Silvio Savarese, Caiming Xiong, Shafiq Joty
TMLR, 2025 🏅 Survey Certification
Project Page | arXiv

Large Language Models

Demystifying Domain-Adaptive Post-Training for Financial LLMs
Zixuan Ke, Yifei Ming, Xuan-Phi Nguyen, Caiming Xiong, Shafiq Joty
arXiv, 2025
arXiv | data (FinEval)

Bridging the Preference Gap between Retrievers and LLMs
Zixuan Ke, Weize Kong, Cheng Li, Mingyang Zhang, Qiaozhu Mei, Michael Bendersky
ACL, 2024
arXiv | talk | poster

Continual Pre-training of Language Models
Zixuan Ke*, Yijia Shao*, Haowei Lin*, Tatsuya Konishi, Gyuhak Kim, Bing Liu
ICLR, 2023
arXiv | poster | model | code

Adapting a Language Model While Preserving its General Knowledge
Zixuan Ke, Yijia Shao, Haowei Lin, Hu Xu, Lei Shu, Bing Liu
EMNLP, 2022a
arXiv | poster | code

Continual Training of Language Models for Few-Shot Learning
Zixuan Ke, Haowei Lin, Yijia Shao, Hu Xu, Lei Shu, Bing Liu
EMNLP, 2022b
arXiv | poster | model | code

Continual Learning

Sub-network Discovery and Soft-masking for Continual Learning of Mixed Tasks
Zixuan Ke, Bing Liu, Wenhan Xiong, Asli Celikyilmaz, Haoran Li
EMNLP, 2023
arXiv | code

A Theoretical Study on Solving Continual Learning
Gyuhak Kim, Changnan Xiao, Zixuan Ke, Bing Liu
NeurIPS, 2022
arXiv

Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning
Zixuan Ke, Bing Liu, Nianzu Ma, Hu Xu, Lei Shu
NeurIPS, 2021
arXiv | talk | poster | code

Continual Learning of A Mixed Sequence of Similar and Dissimilar Tasks
Zixuan Ke, Bing Liu, Xingchang Huang
NeurIPS, 2020
arXiv | talk | poster | code

🎤 Recent Talks & Classes

Adaptation of Large Language Models
Tutorial at NAACL 2025, Albuquerque, New Mexico, May 3, 2025
Recording | Webpage


Domain-specific Post-training
Talk at VISA Research, Remote, March 4, 2025
Slides


Adapting Large Language Models for the Dynamic World
Talks at:
Snowflake, Remote, Feb 1, 2024
Salesforce AI Research, Remote, Jan 11, 2024
Google DeepMind, Remote, Nov 9, 2023
Slides


Continual Pre-training of Language Models
Talk at ContinualAI, Remote, April 27, 2023
Slides | Video


Continual Learning in NLP
Tutorial at DEIM 2023, Remote, March 6, 2023
Slides


Lifelong and Continual Learning
Short PhD Course (8 hours), Aalborg University, June 14–16, 2022
Part 1 | Part 2


Conference Talks
See the Selected Publications section, or find more on Underline.

Research Services

  • Area Chair/Action Editor (2024-):
    • ARR

  • Program Committee/Reviewer (2021-):
    • ICLR, NeurIPS, ICML, ACL, EMNLP, NAACL, IJCAI, ARR, COLING, CoLLAs, NLPCC

  • Journal Reviewer (2021-):
    • TPAMI, TKDE, Neural Networks, Neurocomputing, Artificial Intelligence, TALLIP

Awards

  • Exceptional Research Promise (the highest honor for CoE PhD students at UIC), 2023

Collaborators
I have had the privilege of working with and learning from great mentors and mentees, including:

  • Mentors:
    • Bing Liu, distinguished professor at UIC
    • Hu Xu, research scientist at Facebook AI Research (FAIR)
    • Lei Shu, research scientist at Google Research

  • Mentees:
    (They are achieving great things, and I couldn't be more thrilled and proud of them.)
    • Yijia Shao, BS at Peking University -> PhD at Stanford

Template modified from here.