Comprehensive experiments on domain-adaptive post-training of financial LLMs. We aim to answer the following questions: Given a strong general-purpose LLM (e.g., Llama3-8b-inst), how can we effectively adapt it to a target domain (e.g., finance) through post-training? What criteria are desirable for successful adaptation? What are the most effective training recipes with respect to data and models?
Published: EMNLP 2025 🏅 Oral (Top 50% of accepted papers, ARR best paper nomination)