TY - JOUR
T1 - Mind Reasoning Manners
T2 - Enhancing Type Perception for Generalized Zero-Shot Logical Reasoning Over Text
AU - Xu, Fangzhi
AU - Liu, Jun
AU - Lin, Qika
AU - Zhao, Tianzhe
AU - Zhang, Jian
AU - Zhang, Lingling
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2024
Y1 - 2024
N2 - The logical reasoning task involves diverse types of complex reasoning over text, typically in the form of multiple-choice question answering (MCQA). Given the context, question, and a set of options as input, previous methods achieve superior performance in the full-data setting. However, the current benchmark dataset makes the idealized assumption that the reasoning type distribution of the train split is close to that of the test split, which is inconsistent with many real application scenarios. To address this, two problems remain to be studied: 1) what is the zero-shot capability of models (trained on seen types and tested on unseen types)? and 2) how can models' perception of reasoning types be enhanced? For problem 1, we propose a new benchmark for generalized zero-shot logical reasoning, named ZsLR. It includes six splits based on three type sampling strategies. For problem 2, a type-aware model, TaCo, is proposed. It utilizes heuristic input reconstruction and builds a text graph with a global node. Incorporating graph reasoning and contrastive learning, TaCo improves type perception in the global representation. Extensive experiments in both the zero-shot and full-data settings prove the superiority of TaCo over state-of-the-art (SOTA) methods. We also verify the generalization capability of TaCo on another logical reasoning dataset.
AB - The logical reasoning task involves diverse types of complex reasoning over text, typically in the form of multiple-choice question answering (MCQA). Given the context, question, and a set of options as input, previous methods achieve superior performance in the full-data setting. However, the current benchmark dataset makes the idealized assumption that the reasoning type distribution of the train split is close to that of the test split, which is inconsistent with many real application scenarios. To address this, two problems remain to be studied: 1) what is the zero-shot capability of models (trained on seen types and tested on unseen types)? and 2) how can models' perception of reasoning types be enhanced? For problem 1, we propose a new benchmark for generalized zero-shot logical reasoning, named ZsLR. It includes six splits based on three type sampling strategies. For problem 2, a type-aware model, TaCo, is proposed. It utilizes heuristic input reconstruction and builds a text graph with a global node. Incorporating graph reasoning and contrastive learning, TaCo improves type perception in the global representation. Extensive experiments in both the zero-shot and full-data settings prove the superiority of TaCo over state-of-the-art (SOTA) methods. We also verify the generalization capability of TaCo on another logical reasoning dataset.
KW - Generalized zero-shot
KW - logical reasoning
KW - natural language processing (NLP)
KW - question answering
UR - https://www.scopus.com/pages/publications/85173356269
U2 - 10.1109/TNNLS.2023.3317254
DO - 10.1109/TNNLS.2023.3317254
M3 - Article
C2 - 37773893
AN - SCOPUS:85173356269
SN - 2162-237X
VL - 35
SP - 18499
EP - 18511
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 12
ER -