Unlike conventional NLP tasks that prioritize factual accuracy and penalize hallucination, role-playing tasks require LLMs to create immersive, fictional scenarios. In role-playing, LLMs must generate narratives aligned with explicit boundaries (e.g., personas, rules) and implicit constraints (e.g., temporal knowledge or character capabilities). For example, a 12-year-old Harry Potter should not recognize Sirius Black or write Python code, even if the underlying LLM possesses this knowledge. Achieving authenticity demands a dual focus: the model must hallucinate creatively to build engaging storylines while rigorously adhering to scenario-specific limitations, avoiding both under-hallucination (which yields rigid, unimmersive interactions) and excessive hallucination that violates boundaries (e.g., breaking character or introducing factual inconsistencies). Striking this balance defines "controlled hallucination," where the model operates as a constrained storyteller, blending fictional innovation with discipline.
This tutorial bridges the gap between theoretical frameworks and industrial practice, addressing how to cultivate controlled hallucination without compromising reliability. Excessive hallucination remains unacceptable, since it undermines role consistency, user trust, and even safety, yet this task challenges conventional mitigation strategies. We dissect methods across the LLM lifecycle: reinforcing scenario constraints through continued pre-training, improving dialogue coherence via supervised fine-tuning (SFT), optimizing preference alignment to prioritize boundary adherence, adjusting decoding for creativity within rules, and designing evaluations that measure hallucination quality (e.g., role alignment, coherence). By focusing on real-world applications, we demonstrate how industry practitioners refine LLMs into adaptable role-playing agents capable of "safe creativity," harmoniously integrating imaginative storytelling with user-defined guardrails.
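As one deliberately simplified illustration of the decoding-time control mentioned above, the sketch below masks tokens a scenario forbids (e.g., out-of-character knowledge) before temperature-scaled sampling, so creativity is preserved over the remaining vocabulary. The toy vocabulary, scores, and function name are our own assumptions for illustration, not methods presented in the tutorial.

```python
import math
import random

def constrained_sample(logits, banned_ids, temperature=0.9, rng=None):
    """Sample one token id, masking tokens that would break character.

    logits: dict mapping token id -> raw score
    banned_ids: ids forbidden by the role-play scenario (hypothetical block-list)
    temperature: >1 flattens the distribution (more creative), <1 sharpens it
    """
    rng = rng or random.Random(0)
    # 1. Hard constraint: drop banned tokens entirely before normalizing.
    scaled = {t: s / temperature for t, s in logits.items() if t not in banned_ids}
    # 2. Numerically stable softmax over the surviving tokens.
    m = max(scaled.values())
    exps = {t: math.exp(s - m) for t, s in scaled.items()}
    z = sum(exps.values())
    probs = {t: e / z for t, e in exps.items()}
    # 3. Inverse-CDF sampling keeps stochasticity (creativity) within the rules.
    r, acc = rng.random(), 0.0
    for t, p in probs.items():
        acc += p
        if r <= acc:
            return t
    return t  # fallback for floating-point rounding

# Toy example: a young Harry Potter persona should never emit "python_code".
logits = {"wand": 2.0, "python_code": 3.0, "owl": 1.0}
token = constrained_sample(logits, banned_ids={"python_code"})
```

Even though "python_code" has the highest raw score, it can never be emitted; the model still samples freely among the in-character alternatives.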
TBD
TBD
@article{ijcai-roleplay-tutorial,
  author  = {Wang, Yan and Chen, Nuo and Huang, Xinting and Wu, Hongqiu and Wang, Jiaan and Cui, Leyang and Cai, Deng and Shi, Shuming},
  title   = {IJCAI 2025 Tutorial: {LLM}-based Role-Playing from the Perspective of Hallucination},
  journal = {IJCAI 2025},
  year    = {2025},
}