Recognizing physical contexts of mobile video learners via smartphone sensors

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Current studies can effectively recognize several human activities within a single semantic context, but they do not recognize the semantics of a single activity across different contexts. The main challenges are conflicting phone usage patterns and the strict constraints on energy consumption. This paper examines a classic learning scenario, mobile video viewing, and validates the proposed recognition method by comprehensively considering recognition accuracy, effectiveness, and energy consumption. Readings from four carefully selected sensors are collected, and a wide range of machine learning algorithms are investigated. The results show that the combination of accelerometer, light, and sound sensors outperforms that of accelerometer, light, and gyroscope sensors; that energy-spectrum features do not improve recognition accuracy; and that the system reaches robustness within a few minutes. The proposed method is simple, effective, and practical for real applications of pervasive learning.
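The pipeline the abstract describes, windowed readings from a few smartphone sensors reduced to simple features and fed to a lightweight classifier, can be sketched as follows. Everything here is illustrative: the synthetic sensor values, the two hypothetical context labels, and the nearest-centroid classifier are assumptions for the sketch, not the paper's actual data or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic sensor windows for two physical contexts
# (e.g. "on the move" vs. "seated"); each 128-sample window yields
# mean/std features from accelerometer, light, and sound readings.
def make_windows(accel_mu, light_mu, sound_mu, n=50):
    feats = []
    for _ in range(n):
        accel = rng.normal(accel_mu, 1.0, 128)   # m/s^2
        light = rng.normal(light_mu, 5.0, 128)   # lux
        sound = rng.normal(sound_mu, 2.0, 128)   # dB
        feats.append([accel.mean(), accel.std(),
                      light.mean(), light.std(),
                      sound.mean(), sound.std()])
    return np.array(feats)

moving = make_windows(accel_mu=9.8, light_mu=300.0, sound_mu=60.0)
seated = make_windows(accel_mu=0.5, light_mu=150.0, sound_mu=40.0)

X = np.vstack([moving, seated])
y = np.array([0] * len(moving) + [1] * len(seated))

# Nearest-centroid classifier: label each window by the closest
# class mean in feature space. Cheap to evaluate, which matters
# under the energy constraints the abstract highlights.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y).mean()
```

On such well-separated synthetic contexts the centroid rule classifies every window correctly; real sensor data would of course require the careful feature and classifier comparison the paper reports.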

Original language: English
Pages (from-to): 75-84
Number of pages: 10
Journal: Knowledge-Based Systems
Volume: 136
DOIs
State: Published - 15 Nov 2017

Keywords

  • Context recognition
  • Mobile video learners
  • Physical context
  • Smartphone sensors
