Recognizing physical contexts of mobile video learners via smartphone sensors

Authors:

Highlights:

Abstract

Current studies can effectively recognize several human activities within a single semantic context, but they do not distinguish the semantics of a single activity performed in different contexts. The main challenges are conflicting phone-usage patterns and the strict constraints on energy consumption. This paper examines a typical learning scenario, mobile video viewing, and validates the proposed recognition method by jointly considering recognition accuracy, effectiveness, and energy consumption. Readings from four carefully selected sensors are collected, and a wide range of machine-learning algorithms is investigated. The results show that the combination of accelerometer, light, and sound sensors outperforms that of accelerometer, light, and gyroscope sensors; that energy-spectral features do not improve recognition accuracy; and that the system becomes robust within a few minutes. The proposed method is simple, effective, and practical for real-world applications of pervasive learning.
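As a rough illustration of the recognition pipeline the abstract describes (not the paper's actual implementation), the Python sketch below trains a classifier on windowed features from the accelerometer, light, and sound sensors. The file name, feature columns, context labels, and the choice of a random forest are all assumptions made for this example; the paper itself only states that a wide range of machine-learning algorithms was compared.

# Minimal sketch (assumptions throughout): classify the physical context of a
# mobile video learner from per-window smartphone sensor features.
# Assumes a hypothetical CSV "sensor_windows.csv" whose rows are fixed-length
# time windows with columns: acc_mean, acc_std, light_mean, sound_rms, context.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load pre-computed time-domain features for accelerometer, light and sound.
data = pd.read_csv("sensor_windows.csv")
X = data[["acc_mean", "acc_std", "light_mean", "sound_rms"]]
y = data["context"]  # e.g. "on_bus", "walking", "lying_in_bed", "at_desk"

# Hold out part of the windows to estimate recognition accuracy.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# A random forest stands in for the classifiers investigated in the paper;
# any lightweight algorithm suited to on-device use could be substituted.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("recognition accuracy:", accuracy_score(y_test, clf.predict(X_test)))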

Keywords: Physical context, Smartphone sensors, Context recognition, Mobile video learners

Article history: Received 22 March 2017, Revised 24 July 2017, Accepted 1 September 2017, Available online 5 September 2017, Version of Record 4 October 2017.

DOI: https://doi.org/10.1016/j.knosys.2017.09.002