TY - GEN
T1 - Rethinking Sentiment Analysis under Uncertainty
AU - Wu, Yuefei
AU - Shi, Bin
AU - Chen, Jiarun
AU - Liu, Yuhang
AU - Dong, Bo
AU - Zheng, Qinghua
AU - Wei, Hua
N1 - Publisher Copyright:
© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-0124-5/23/10...$15.00.
PY - 2023/10/21
Y1 - 2023/10/21
N2 - Sentiment Analysis (SA) is a fundamental task in natural language processing and is widely used in public decision-making. Recently, deep learning has demonstrated great potential for this task. However, prior works have mostly treated SA as a deterministic classification problem without quantifying predictive uncertainty. This poses a serious problem in SA: different annotators, owing to differences in beliefs, values, and experiences, may have different perspectives on how to label the sentiment of a text. Such a situation leads to inevitable data uncertainty and leaves deterministic classification models unable to make confident decisions. To address this issue, we propose a new SA paradigm that accounts for uncertainty and conduct an extensive empirical study. Specifically, we treat SA as a regression task and introduce uncertainty quantification to obtain confidence intervals for predictions, which equips the model with risk-assessment ability and can improve the credibility of SA-aided decision-making. Experiments on five datasets show that our proposed paradigm effectively quantifies uncertainty in SA while remaining competitive with point estimation, and is additionally capable of Out-Of-Distribution (OOD) detection.
AB - Sentiment Analysis (SA) is a fundamental task in natural language processing and is widely used in public decision-making. Recently, deep learning has demonstrated great potential for this task. However, prior works have mostly treated SA as a deterministic classification problem without quantifying predictive uncertainty. This poses a serious problem in SA: different annotators, owing to differences in beliefs, values, and experiences, may have different perspectives on how to label the sentiment of a text. Such a situation leads to inevitable data uncertainty and leaves deterministic classification models unable to make confident decisions. To address this issue, we propose a new SA paradigm that accounts for uncertainty and conduct an extensive empirical study. Specifically, we treat SA as a regression task and introduce uncertainty quantification to obtain confidence intervals for predictions, which equips the model with risk-assessment ability and can improve the credibility of SA-aided decision-making. Experiments on five datasets show that our proposed paradigm effectively quantifies uncertainty in SA while remaining competitive with point estimation, and is additionally capable of Out-Of-Distribution (OOD) detection.
KW - Out-Of-Distribution Detection
KW - Quantifying Uncertainty
KW - Sentiment Analysis
UR - https://www.scopus.com/pages/publications/85178109759
U2 - 10.1145/3583780.3615034
DO - 10.1145/3583780.3615034
M3 - Conference contribution
AN - SCOPUS:85178109759
T3 - International Conference on Information and Knowledge Management, Proceedings
SP - 2775
EP - 2784
BT - CIKM 2023 - Proceedings of the 32nd ACM International Conference on Information and Knowledge Management
PB - Association for Computing Machinery
T2 - 32nd ACM International Conference on Information and Knowledge Management, CIKM 2023
Y2 - 21 October 2023 through 25 October 2023
ER -