Answering knowledge-based visual questions via the exploration of Question Purpose

  • Lingyun Song
  • Jianao Li
  • Jun Liu
  • Yang Yang
  • Xuequn Shang
  • Mingxuan Sun
  • Northwestern Polytechnical University, Xi'an
  • University of Electronic Science and Technology of China
  • Louisiana State University

Research output: Contribution to journal › Article › peer-review

23 Citations (Scopus)

Abstract

Visual question answering has been greatly advanced by deep learning technologies, but it remains an open problem subject to two factors. First, previous works estimate the correctness of each candidate answer mainly by its semantic correlations with the visual question, overlooking the fact that some questions and their answers are semantically inconsistent. Second, previous works that require external knowledge mainly use knowledge facts retrieved via keywords or visual objects. However, the retrieved knowledge facts may be related only to the semantics of the question, and may be useless or even misleading for answer prediction. To address these issues, we investigate how to capture the purpose of visual questions and propose a Purpose Guided Visual Question Answering model, called PGVQA. It has two appealing properties: (1) It estimates the correctness of candidate answers based on the Question Purpose (QP), which reveals which aspects of a concept are examined by a visual question. This helps avoid the negative effect of semantic inconsistency between answers and questions. (2) It incorporates knowledge facts accordant with the QP into answer prediction, which improves the probability of answering visual questions correctly. Empirical studies on benchmark datasets show that PGVQA achieves state-of-the-art performance.

Original language: English
Article number: 109015
Journal: Pattern Recognition
Volume: 133
DOI
Publication status: Published - Jan 2023
