Predicting Human Activities Using Stochastic Grammar

  • Siyuan Qi
  • Siyuan Huang
  • Ping Wei
  • Song Chun Zhu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

81 Scopus citations

Abstract

This paper presents a novel method to predict future human activities from partially observed RGB-D videos. Human activity prediction is generally difficult due to its non-Markovian property and the rich context between humans and their environments. We use a stochastic grammar model to capture the compositional structure of events, integrating human actions, objects, and their affordances. We represent an event by a spatial-temporal And-Or graph (ST-AOG). The ST-AOG is composed of a temporal stochastic grammar defined on sub-activities, and of spatial graphs representing sub-activities that consist of human actions, objects, and their affordances. Future sub-activities are predicted using the temporal grammar and the Earley parsing algorithm; the corresponding action, object, and affordance labels are then inferred accordingly. Extensive experiments demonstrate the effectiveness of our model on both semantic event parsing and future activity prediction.
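Concretely, an Earley parser can predict upcoming sub-activities by parsing the observed prefix and reading off which terminal symbols the grammar allows next. Below is a minimal illustrative sketch of that prediction step in Python; the toy grammar, the sub-activity names, and the `earley_predict` function are invented for illustration and are not the paper's ST-AOG implementation, which additionally weights productions with probabilities.

```python
from collections import defaultdict

# Toy event grammar over sub-activities (terminals are lowercase strings).
# Hypothetical example only; the paper's temporal grammar is stochastic,
# while this sketch drops rule probabilities and shows prefix prediction.
GRAMMAR = {
    "Event": [["reach", "Grasp", "Use"]],
    "Grasp": [["grasp_cup"], ["grasp_bottle"]],
    "Use":   [["drink"], ["pour", "drink"]],
}
TERMINALS = {"reach", "grasp_cup", "grasp_bottle", "drink", "pour"}

def earley_predict(tokens, start="Event"):
    """Parse the observed prefix `tokens`, then return the set of
    terminals the grammar allows as the next sub-activity."""
    # Chart column i holds Earley states (lhs, rhs, dot, origin).
    chart = [set() for _ in range(len(tokens) + 1)]
    for rhs in GRAMMAR[start]:
        chart[0].add((start, tuple(rhs), 0, 0))

    for i in range(len(tokens) + 1):
        changed = True
        while changed:  # run predictor/completer to a fixpoint
            changed = False
            for lhs, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs) and rhs[dot] not in TERMINALS:
                    # Predictor: expand the nonterminal after the dot.
                    for prod in GRAMMAR[rhs[dot]]:
                        new = (rhs[dot], tuple(prod), 0, i)
                        if new not in chart[i]:
                            chart[i].add(new); changed = True
                elif dot == len(rhs):
                    # Completer: advance states waiting on this lhs.
                    for plhs, prhs, pdot, porig in list(chart[origin]):
                        if pdot < len(prhs) and prhs[pdot] == lhs:
                            new = (plhs, prhs, pdot + 1, porig)
                            if new not in chart[i]:
                                chart[i].add(new); changed = True
        if i < len(tokens):
            # Scanner: consume the observed sub-activity token.
            for lhs, rhs, dot, origin in chart[i]:
                if dot < len(rhs) and rhs[dot] == tokens[i]:
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))

    # Predicted next sub-activities: terminals right after a dot
    # in the final chart column.
    return {rhs[dot] for _, rhs, dot, _ in chart[-1]
            if dot < len(rhs) and rhs[dot] in TERMINALS}

print(earley_predict(["reach", "grasp_cup"]))  # -> {'pour', 'drink'}
```

In this toy run, after observing `reach` and `grasp_cup` the parser admits `pour` or `drink` as the next sub-activity; the paper's stochastic version would additionally rank these continuations by production probabilities.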

Original language: English
Title of host publication: Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1173-1181
Number of pages: 9
ISBN (Electronic): 9781538610329
DOIs
State: Published - 22 Dec 2017
Event: 16th IEEE International Conference on Computer Vision, ICCV 2017 - Venice, Italy
Duration: 22 Oct 2017 - 29 Oct 2017

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision
Volume: 2017-October
ISSN (Print): 1550-5499

Conference

Conference: 16th IEEE International Conference on Computer Vision, ICCV 2017
Country/Territory: Italy
City: Venice
Period: 22/10/17 - 29/10/17
