Unlike conventional NLP tasks that prioritize factual accuracy and penalize hallucination, role-playing tasks uniquely require LLMs to create immersive, fictional scenarios. In role-playing, LLMs must generate narratives aligned with explicit boundaries (e.g., personas, rules) and implicit constraints (e.g., temporal knowledge or character capabilities). For example, a 12-year-old Harry Potter should not recognize Sirius Black or write Python code, even if the underlying LLM possesses this knowledge. Achieving authenticity demands a dual focus: the model must hallucinate creatively to build engaging storylines while rigorously adhering to scenario-specific limitations, avoiding both under-hallucination (which yields rigid, unimmersive interactions) and excessive hallucination that violates boundaries (e.g., breaking character or introducing factual inconsistencies). Striking this balance defines “controlled hallucination,” where the model operates as a constrained storyteller, blending fictional innovation with discipline.
This tutorial bridges the gap between theoretical frameworks and industrial practice, addressing how to cultivate controlled hallucination without compromising reliability. Excessive hallucination remains unacceptable, as it undermines role consistency, user trust, and even safety; yet the role-playing task challenges conventional mitigation strategies. We dissect methods across the LLM lifecycle: reinforcing scenario constraints through continued pre-training, improving dialogue coherence via SFT, optimizing preference alignment to prioritize boundary adherence, adjusting decoding for creativity within rules, and designing evaluations that measure hallucination quality (e.g., role alignment, coherence). By focusing on real-world applications, we demonstrate how industry practitioners refine LLMs into adaptable role-playing agents capable of “safe creativity,” ensuring harmonious integration of imaginative storytelling and user-defined guardrails.
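To make the prompt- and decoding-level levers concrete, the sketch below shows one way to encode explicit boundaries (persona, rules) and implicit constraints (temporal knowledge, capabilities) in a system prompt, then sample with moderate temperature and top-p. It is a minimal illustration assuming the Hugging Face `transformers` chat-template API; the model name, persona wording, and sampling values are illustrative placeholders, not settings from the tutorial.

```python
# Minimal sketch: scenario boundaries in the prompt + decoding settings that trade
# creativity against boundary-breaking hallucination. Model name and values are
# illustrative assumptions; any chat-tuned LLM with a chat template works.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-7B-Instruct"  # hypothetical choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Explicit boundaries (persona, rules) and implicit constraints (what the character
# cannot know or do) stated up front.
system_prompt = (
    "You are Harry Potter at age 12, during your second year at Hogwarts. "
    "Stay in character. You do not know events from later in the story, you cannot "
    "write computer code, and you must refuse out-of-world requests in character."
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Can you write me a Python script to sort a list?"},
]

# Build the model input from the chat template.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling encourages creative, immersive narration; keeping temperature and top_p
# moderate is one simple way to curb hallucination that violates the boundaries.
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Decoding adjustments alone are not sufficient: the continued pre-training, SFT, and preference-alignment stages discussed in the tutorial are what teach the model which boundaries exist and how to stay within them.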
Our tutorial will be held on August 29th (all times are in UTC+8, i.e., Beijing local time).
Time | Section | Presenter |
---|---|---|
15:45-16:00 | Section 1: What is Role-Play | Yan Wang |
16:00-16:15 | Section 2: The Difference Between Role-Play and Other LLM Tasks | Yan Wang |
16:15-16:45 | Section 3: Models (Continued Pre-training & Alignment) | Jiaan Wang |
16:45-16:55 | Section 3: Models (Knowledge Augmentation) | Jiaan Wang |
16:55-17:10 | Section 3: Models (Evaluation) | Yan Wang |
17:10-17:15 | Q & A Session I | |
17:15-17:35 | Section 4: Role-Play for Games (Game Demos) | Hongqiu Wu |
17:35-18:15 | Section 4: Role-Play for Games (Commercial Games) | Hongqiu Wu |
18:15-18:35 | Section 5: Challenges and Future Directions | Yan Wang |
18:35-18:45 | Q & A Session II | |
@article{ijcai-roleplay-tutorial,
  author  = {Wang, Yan and Wang, Jiaan and Cui, Leyang and Huang, Xinting and Wu, Hongqiu and Chen, Nuo and Cai, Deng and Shi, Shuming},
  title   = {IJCAI 2025 Tutorial: LLM-based Role-Playing from the Perspective of Hallucination},
  journal = {IJCAI 2025},
  year    = {2025},
}