
Micro-expression recognition based on local region method

Journal of Computer Applications (计算机应用), 2019, Issue 5 (published 2019-08-01)

张延良 (ZHANG Yanliang), 卢冰 (LU Bing), 洪晓鹏 (HONG Xiaopeng), 赵国英 (ZHAO Guoying), 张伟涛 (ZHANG Weitao)

Abstract: A Micro-Expression (ME) involves only local regions of the face and is characterized by subtle movement intensity and very short duration; however, the face also exhibits some unrelated muscle movements while a micro-expression occurs. Existing global-region methods for micro-expression recognition extract the spatiotemporal patterns of these unrelated changes, which weakens the representation capability of the feature vector and thus degrades recognition performance. To address this problem, a local-region method for micro-expression recognition was proposed. Firstly, according to the regions of the Action Units (AUs) involved when micro-expressions occur, seven local regions related to micro-expressions were partitioned by means of facial landmark coordinates. Then, the spatiotemporal patterns of these local regions were extracted and concatenated to form the feature vector used for micro-expression recognition. Experimental results under leave-one-subject-out cross-validation show that the recognition rate of the local-region method is on average 9.878% higher than that of the global-region method. Analysis of the confusion matrices of the recognition results for individual regions shows that the proposed method makes full use of the structural information of the local facial regions and effectively excludes the influence of regions unrelated to micro-expressions, significantly improving recognition performance compared with the global-region method.

Keywords: micro-expression recognition; feature vector; Action Unit (AU); global region method; local region method

CLC number: TP391.41

Document code: A

0 Introduction

Facial expressions are an important channel through which humans convey their inner feelings, and expression recognition has been one of the major research topics in computer vision over the past few decades. Besides the ordinary expressions commonly seen in daily life, in certain situations people try to conceal the outward display of their inner emotions and thereby produce Micro-Expressions (MEs) that are difficult to notice.

A micro-expression usually lasts less than 0.5 s, and the facial muscle movements involved are subtle and confined to small regions. These tiny facial movements can serve as an important cue for recognizing a person's inner emotions and have wide application value in scenarios such as judicial interrogation [1], negotiation [2], evaluation of teaching effectiveness [3-4] and psychological counseling [5]. Because the success rate of accurately capturing and recognizing micro-expressions with the naked eye is very low, Ekman [6] developed the Micro Expression Training Tool (METT) to improve people's ability to recognize them; even professionally trained observers, however, achieve a recognition rate of only about 47% [7]. Automatic micro-expression recognition based on computer vision has therefore become an important research topic in affective computing.

The general pipeline of automatic micro-expression recognition is as follows: first, a feature representation method is designed to extract feature vectors from micro-expression video sequences; then recognition is performed by pattern classification. In the pioneering work on micro-expression recognition, Pfister et al. used Local Binary Pattern on Three Orthogonal Planes (LBP-TOP) [9], an extension of the Local Binary Pattern (LBP) descriptor [8], to encode the spatiotemporal co-occurrence patterns of local pixels. This spatiotemporal descriptor extracts LBP features on the XY, YT and XT planes of a video, thereby capturing both the local texture of the images and their variation over time. The idea of building feature vectors from local spatiotemporal features has since been widely adopted in micro-expression recognition. Subsequent descriptors such as the spatiotemporal completed local quantized pattern [10], LBP with six intersection points [11] and the centralized binary pattern [12] are all improvements on LBP-TOP. Following the idea of extracting local features other than LBP on the three planes, Local Phase Quantization on Three Orthogonal Planes (LPQ-TOP) [13], Histograms of Oriented Gradients on Three Orthogonal Planes (HOG-TOP) [14] and the improved Histogram of Image Gradient Orientation on Three Orthogonal Planes (HIGO-TOP) [14] were later proposed for micro-expression recognition.
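To make the LBP-TOP idea above concrete, the following sketch (not from the paper; a simplified approximation built on scikit-image's `local_binary_pattern`) computes uniform LBP histograms over the XY, XT and YT slices of a grey-scale video cube and concatenates them into one descriptor:

```python
# Simplified LBP-TOP-style descriptor: uniform LBP histograms on the XY, XT and
# YT slices of a video cube, concatenated into one feature vector.
# Illustrative approximation only, not the authors' implementation.
import numpy as np
from skimage.feature import local_binary_pattern  # scikit-image

def lbp_top_histogram(cube, P=8, R=1):
    """cube: grey-scale video clip as a (T, H, W) array."""
    n_bins = P + 2                         # number of codes produced by 'uniform' LBP
    T, H, W = cube.shape

    def plane_histogram(slices):
        # LBP codes of every 2D slice of one plane, pooled into a single histogram
        codes = [local_binary_pattern(s, P, R, method="uniform").ravel() for s in slices]
        hist, _ = np.histogram(np.concatenate(codes), bins=n_bins, range=(0, n_bins))
        return hist / max(hist.sum(), 1)   # L1-normalise

    h_xy = plane_histogram([cube[t] for t in range(T)])        # appearance (XY)
    h_xt = plane_histogram([cube[:, y, :] for y in range(H)])  # horizontal motion (XT)
    h_yt = plane_histogram([cube[:, :, x] for x in range(W)])  # vertical motion (YT)
    return np.concatenate([h_xy, h_xt, h_yt])   # 3 * (P + 2)-dimensional descriptor
```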

These methods typically divide the whole facial region of a video sequence evenly into a number of cuboid blocks, extract features from each block, and concatenate these features into the feature vector of the sequence. Although this practice accounts for the location of local patterns, it treats all blocks equally and ignores the internal structural information of the facial components (such as the eyes, nose, mouth and chin); it is therefore a global approach. In fact, only part of the facial area is involved when a micro-expression occurs, and the face also exhibits some unrelated changes at the same time. Extracting the local patterns of these unrelated changes with a global method weakens the representation capability of the feature vector for micro-expressions and thus lowers the recognition rate. In this paper, the regions of the Action Units involved in micro-expressions are used as the partitioning criterion: based on facial landmark coordinates, seven regions related to micro-expressions are delineated. The local spatiotemporal patterns of these regions are extracted and concatenated to form the feature vector, and micro-expression recognition is then performed by pattern classification. Experiments show that the local-region method effectively excludes the influence of regions unrelated to micro-expressions and significantly improves recognition performance compared with the global method.
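The sketch below illustrates the local-region idea in the simplest possible form: rectangular patches are cropped around selected facial landmarks, each patch is described with the LBP-TOP-style histogram sketched above, and the per-region histograms are concatenated. The region names and the 68-point landmark indices are hypothetical placeholders, not the paper's exact AU-driven partition.

```python
# Illustrative local-region pipeline; region names and landmark indices are
# placeholders (iBUG 68-point style), not the paper's exact AU-based partition.
import numpy as np

REGIONS = {
    "left_brow": 19, "right_brow": 24,
    "left_eye": 37,  "right_eye": 44,
    "nose": 30,
    "left_mouth_corner": 48, "right_mouth_corner": 54,
}

def crop_region(cube, landmarks, idx, half=16):
    """Crop a (2*half) x (2*half) patch centred on landmark `idx` from every frame."""
    x, y = landmarks[idx].astype(int)
    T, H, W = cube.shape
    y0, y1 = max(y - half, 0), min(y + half, H)
    x0, x1 = max(x - half, 0), min(x + half, W)
    return cube[:, y0:y1, x0:x1]

def local_region_feature(cube, landmarks):
    """landmarks: (68, 2) array of (x, y) points detected on the onset frame."""
    parts = [lbp_top_histogram(crop_region(cube, landmarks, idx))
             for idx in REGIONS.values()]   # descriptor from the previous sketch
    return np.concatenate(parts)            # 7 regions x 3 planes x (P + 2) bins
```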

5 Conclusion

Micro-expressions are facial expressions that an individual produces unconsciously and cannot voluntarily control in certain situations; they are characterized by subtle movement intensity and short duration. Existing micro-expression recognition methods first divide the whole facial region into blocks indiscriminately, then extract the spatiotemporal pattern features of each block and concatenate them into a feature vector, and finally perform recognition by pattern classification. In fact, a micro-expression involves only local facial regions, and the face also exhibits some unrelated muscle movements at the same time; extracting the local patterns of these unrelated changes with a global method weakens the representation capability of the feature vector and degrades recognition performance. In this paper, according to the regions of the Action Units involved when micro-expressions occur, seven local regions related to micro-expressions were partitioned by facial landmark coordinates; the spatiotemporal patterns of these local regions were extracted and concatenated into a feature vector for micro-expression recognition. Experiments show that the local-region method makes full use of the structural information of the local facial regions, effectively excludes the influence of regions unrelated to micro-expressions, and significantly improves micro-expression recognition performance compared with the global method.
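The experiments referred to above are reported in the abstract under leave-one-subject-out cross-validation; a minimal sketch of such an evaluation loop follows, assuming pre-computed features `X`, labels `y` and per-sample subject IDs `groups`, and using a linear SVM purely as a placeholder classifier rather than the paper's actual setup.

```python
# Minimal leave-one-subject-out evaluation loop (a sketch, not the paper's code).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import LinearSVC

def loso_accuracy(X, y, groups):
    """X: feature matrix, y: emotion labels, groups: subject ID of each sample."""
    correct, total = 0, 0
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        clf = LinearSVC(C=1.0).fit(X[train_idx], y[train_idx])  # placeholder classifier
        pred = clf.predict(X[test_idx])
        correct += int(np.sum(pred == y[test_idx]))
        total += len(test_idx)
    return correct / total   # overall recognition rate over all held-out subjects
```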

References

[1] CLANCY M. The Philosophy of Deception[M]. Oxford: Oxford University Press, 2009: 118-133.

[2] SALTER F, GRAMMER K, RIKOWSKI A. Sex differences in negotiating with powerful males[J]. Human Nature, 2005, 16(3):306-321.

[3] WHITEHILL J, SERPELL Z, LIN Y C, et al. The faces of engagement: automatic recognition of student engagement from facial expressions[J]. IEEE Transactions on Affective Computing, 2014, 5(1): 86-98.

[4] POOL L D, QUALTER P. Improving emotional intelligence and emotional self-efficacy through a teaching intervention for university students[J]. Learning & Individual Differences, 2012, 22(3): 306-312.

[5] STEWART P A, WALLER B M, SCHUBERT J N. Presidential speechmaking style: emotional response to micro-expressions of facial affect[J]. Motivation & Emotion, 2009, 33(2): 125-135.

[6] EKMAN P. Micro Expression Training Tool (METT)[M]. San Francisco: University of California, 2002: 1877-1903.

[7] FRANK M G, HERBASZ M, SINUK K, et al. I see how you feel: training laypeople and professionals to recognize fleeting emotions[C]// Proceedings of the 2009 Annual Meeting of the International Communication Association. New York: [s. n.], 2009: 3515-3522.

[8] OJALA T, PIETIKAINEN M, HARWOOD D. A comparative study of texture measures with classification based on featured distributions[J]. Pattern Recognition, 1996, 29(1): 51-59.

[9] ZHAO G, PIETIKAINEN M. Dynamic texture recognition using local binary patterns with an application to facial expressions[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(6): 915-928.

[10] HUANG X, ZHAO G, HONG X, et al. Spontaneous facial micro-expression analysis using spatiotemporal completed local quantized patterns[J]. Neurocomputing, 2016, 175: 564-578.

[11] WANG Y, SEE J, PHAN R C, et al. LBP with six intersection points: reducing redundant information in LBP-TOP for micro-expression recognition[C]// Proceedings of the 12th Asian Conference on Computer Vision. Berlin: Springer, 2014: 525-537.

[12] FU X, WEI W. Centralized binary patterns embedded with image Euclidean distance for facial expression recognition[C]// Proceedings of the 2008 4th International Conference on Natural Computation. Washington, DC: IEEE Computer Society, 2008:115-119.

[13] SUN B, LI L, WU X, et al. Combining feature-level and decision-level fusion in a hierarchical classifier for emotion recognition in the wild[J]. Journal on Multimodal User Interfaces, 2016, 10(2): 125-137.

[14] LI X, HONG X, MOILANEN A, et al. Towards reading hidden emotions: a comparative study of spontaneous micro-expression spotting and recognition methods[J]. IEEE Transactions on Affective Computing, 2018, 9(4): 563-577.

[15] COHN J F, AMBADAR Z, EKMAN P. Observer-based measurement of facial expression with the facial action coding system[J]. Neuroscience Letters, 2007, 394(3): 203-221.

[16] MARTINEZ B, VALSTAR M F, JIANG B, et al. Automatic analysis of facial actions: a survey[J/OL]. IEEE Transactions on Affective Computing, 2017 [2018-06-20].https://ieeexplore.ieee.org/document/7990582.

[17] YAN W J, LI X, WANG S J, et al. CASME II: an improved spontaneous micro-expression database and the baseline evaluation[J]. PLoS One, 2014, 9(1): 1-8.

[18] ASTHANA A, ZAFEIRIOU S, CHENG S, et al. Incremental face alignment in the wild[C]// Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC: IEEE Computer Society, 2014:1859-1866.

[19] COOTES T F, EDWARDS G J, TAYLOR C J. Active appearance models[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(6): 681-685.

[20] 刘丽,谢毓湘,魏迎梅,等.局部二进制模式方法综述[J].中国图象图形学报,2014,19(12):1696-1720. (LIU L, XIE Y X, WEI Y M, et al. Survey of local binary pattern method[J]. Journal of Image and Graphics, 2014, 19(12): 1696-1720.)

[21] OH Y, NGO A C, SEE J, et al. Monogenic Riesz wavelet representation for micro-expression recognition[C]// Proceedings of the 2015 IEEE International Conference on Digital Signal Processing. Piscataway, NJ: IEEE, 2015: 1237-1241.

[22] LIONG S, SEE J, PHAN R C, et al. Subtle expression recognition using optical strain weighted features[C]// Proceedings of the 12th Asian Conference on Computer Vision. Berlin: Springer, 2014: 644-657.

[23] NGO A C, PHAN R C, SEE J, et al. Spontaneous subtle expression recognition: imbalanced databases and solutions[C]// Proceedings of the 12th Asian Conference on Computer Vision. Berlin: Springer, 2014: 33-48.
