TY - GEN
T1 - A hybrid emotion recognition on android smart phones
AU - Zhang, Weishan
AU - Meng, Xin
AU - Lu, Qinghua
AU - Rao, Yuan
AU - Zhou, Jiehan
PY - 2013
Y1 - 2013
N2 - Awareness of the emotional status of people is important for the elderly, those in sub-health conditions, and various patients, in order to keep them in a good mood. Emotion recognition at run time is intrinsically challenging due to its complex nature. On the one hand, awareness of human emotion should be achieved as non-intrusively as possible. On the other hand, Android smartphones, which are increasingly popular, are equipped with various sensors that can be used to assess emotional status. In this paper, we propose an approach based on heart rate and the content of the user's speech, obtained from the built-in camera and microphone of a smartphone. We first classify anger, joy, normal, and sadness based on heart rate; emotion recognition is then further improved using emotional keywords in speech. We evaluated this approach in terms of recognition accuracy and power consumption and found that an accuracy of 84.7% can be achieved.
AB - Awareness of the emotional status of people is important for the elderly, those in sub-health conditions, and various patients, in order to keep them in a good mood. Emotion recognition at run time is intrinsically challenging due to its complex nature. On the one hand, awareness of human emotion should be achieved as non-intrusively as possible. On the other hand, Android smartphones, which are increasingly popular, are equipped with various sensors that can be used to assess emotional status. In this paper, we propose an approach based on heart rate and the content of the user's speech, obtained from the built-in camera and microphone of a smartphone. We first classify anger, joy, normal, and sadness based on heart rate; emotion recognition is then further improved using emotional keywords in speech. We evaluated this approach in terms of recognition accuracy and power consumption and found that an accuracy of 84.7% can be achieved.
KW - Emotion recognition
KW - Physiological signals
KW - Speech recognition
UR - https://www.scopus.com/pages/publications/84893492907
U2 - 10.1109/GreenCom-iThings-CPSCom.2013.228
DO - 10.1109/GreenCom-iThings-CPSCom.2013.228
M3 - Conference contribution
AN - SCOPUS:84893492907
SN - 9780769550466
T3 - Proceedings - 2013 IEEE International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, GreenCom-iThings-CPSCom 2013
SP - 1313
EP - 1318
BT - Proceedings - 2013 IEEE International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, GreenCom-iThings-CPSCom 2013
T2 - 2013 IEEE International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, GreenCom-iThings-CPSCom 2013
Y2 - 20 August 2013 through 23 August 2013
ER -