LI Dan, XIE Lun, LU Ting, HAN Jing, HU Bo, WANG Zhi-liang, REN Fu-ji. Capture of microexpressions based on the entropy of oriented optical flow[J]. Chinese Journal of Engineering, 2017, 39(11): 1727-1734. DOI: 10.13374/j.issn2095-9389.2017.11.016

Capture of microexpressions based on the entropy of oriented optical flow


Abstract: Based on the optical flow method, this paper proposes an algorithm for capturing the key frames of microexpressions using the statistical entropy of oriented optical flow (EOF). First, an improved Horn-Schunck optical flow method is used to extract the microexpression motion features between adjacent frames of the video stream. Then, a threshold analysis selects the optical flow vectors with large projected velocity modulus. Next, image information entropy is used to tally the directions of optical flow change, yielding a direction-entropy vector over the video sequence; analysis of this entropy vector enables the capture of microexpression key frames. Finally, the algorithm was verified on the SMIC microexpression database from the University of Oulu, Finland, and the CASME microexpression database from Fu Xiaolan's group at the Institute of Psychology, Chinese Academy of Sciences. Experiments show that, compared with the traditional frame-difference method, the proposed algorithm better expresses the trend of microexpression change and provides a basis for microexpression recognition.
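The pipeline summarized in the abstract (Horn-Schunck optical flow, magnitude thresholding, then entropy of flow directions) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the quantile-based magnitude threshold, the 8-direction histogram, and the Horn-Schunck parameters (`alpha`, `n_iter`) are illustrative assumptions.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=50):
    """Minimal Horn-Schunck optical flow between two grayscale frames."""
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    Ix = np.gradient(I1, axis=1)   # spatial gradients
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                   # temporal gradient
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        # 4-neighbour averages (periodic borders, for brevity)
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4
        # classic Horn-Schunck update
        d = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * d
        v = v_avg - Iy * d
    return u, v

def eof_entropy(u, v, n_bins=8, mag_quantile=0.5):
    """Entropy of oriented optical flow for one flow field, in bits."""
    mag = np.hypot(u, v)
    keep = mag >= np.quantile(mag, mag_quantile)   # drop small-modulus vectors
    ang = np.arctan2(v[keep], u[keep])             # directions in (-pi, pi]
    hist, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```

Applying `eof_entropy` to the flow of each pair of adjacent frames yields an entropy vector over the sequence; frames where this vector peaks or changes sharply are candidate microexpression key frames.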

     
