TY - JOUR
T1 - How to Enhance Causal Discrimination of Emotional Utterances
T2 - A Case on LLMs
AU - Yang, Xinyu
AU - Zhao, Daiying
AU - Chen, Hang
AU - Du, Keqing
N1 - Publisher Copyright:
© 2010-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Existing methods, including large language models (LLMs), excel at capturing semantic correlations between utterances, but often struggle to accurately distinguish specific causal relationships. This limitation poses a significant challenge for reasoning-intensive tasks in affective computing, where precise identification of emotional triggers and their effects is crucial. Our preliminary work demonstrated the potential of introducing i.i.d. noise terms within Structural Causal Models (SCMs) for the Emotion-Cause Pair Extraction (ECPE) task. However, this approach relied on end-to-end learning of high-dimensional latent representations, which hindered both scalability to LLMs and model interpretability. To address these issues, we conceptualize i.i.d. noise terms as token-level implicit causes—natural language expressions that reflect a speaker's underlying emotions, intentions, or situational context. Building on this insight, we introduce ICE (Implicit-Cause-Enhanced), an instruction-based framework that leverages implicit causes to enhance causal reasoning in LLMs. First, we design prompts that heuristically guide LLMs to generate implicit causes, which are then iteratively refined via an external evaluation mechanism. Second, by incorporating these implicit causes as intermediate reasoning steps, ICE improves the accuracy of emotion-cause pair prediction. Moreover, we distill the rationales produced by ICE into lightweight generative models, demonstrating that even small models can benefit from implicit-cause-driven reasoning. Extensive experiments in both instruction-based and distillation-based settings confirm the effectiveness, robustness, and interpretability of our approach.
AB - Existing methods, including large language models (LLMs), excel at capturing semantic correlations between utterances, but often struggle to accurately distinguish specific causal relationships. This limitation poses a significant challenge for reasoning-intensive tasks in affective computing, where precise identification of emotional triggers and their effects is crucial. Our preliminary work demonstrated the potential of introducing i.i.d. noise terms within Structural Causal Models (SCMs) for the Emotion-Cause Pair Extraction (ECPE) task. However, this approach relied on end-to-end learning of high-dimensional latent representations, which hindered both scalability to LLMs and model interpretability. To address these issues, we conceptualize i.i.d. noise terms as token-level implicit causes—natural language expressions that reflect a speaker's underlying emotions, intentions, or situational context. Building on this insight, we introduce ICE (Implicit-Cause-Enhanced), an instruction-based framework that leverages implicit causes to enhance causal reasoning in LLMs. First, we design prompts that heuristically guide LLMs to generate implicit causes, which are then iteratively refined via an external evaluation mechanism. Second, by incorporating these implicit causes as intermediate reasoning steps, ICE improves the accuracy of emotion-cause pair prediction. Moreover, we distill the rationales produced by ICE into lightweight generative models, demonstrating that even small models can benefit from implicit-cause-driven reasoning. Extensive experiments in both instruction-based and distillation-based settings confirm the effectiveness, robustness, and interpretability of our approach.
KW - Emotion-cause pair extraction
KW - LLMs
KW - causal discrimination
KW - prompt learning
UR - https://www.scopus.com/pages/publications/105008819645
U2 - 10.1109/TAFFC.2025.3580755
DO - 10.1109/TAFFC.2025.3580755
M3 - Article
AN - SCOPUS:105008819645
SN - 1949-3045
VL - 16
SP - 2640
EP - 2652
JO - IEEE Transactions on Affective Computing
JF - IEEE Transactions on Affective Computing
IS - 4
ER -