Invariant representation of facial expressions for blended expression recognition on unknown subjects
Authors:
Highlights:
Abstract
Facial expression analysis plays an important part in emotion detection. However, building an automatic and non-intrusive system to detect blended facial expressions remains a challenging problem, especially when the subject is unknown to the system. Here, we propose a method that adapts to the morphology of the subject and is based on a new invariant representation of facial expressions. In our system, an expression is defined by its position relative to 8 other expressions. Because this mode of representation is relative, we show that the resulting expression space is person-independent. The 8 reference expressions are synthesized for each unknown subject from plausible distortions. Recognition is performed in this space with a basic algorithm. Experiments were conducted on 22 different blended expressions, for both known and unknown subjects. The recognition results on known subjects demonstrate that the representation is robust to the type of data (shape and/or texture information) and to the dimensionality of the expression space. The recognition results on 22 expressions of unknown subjects show that an expression space of dimension 4 is sufficient to outperform traditional methods based on active appearance models and to describe an expression accurately.
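The core idea of the relative representation can be illustrated with a minimal sketch. The snippet below assumes the expression features are shape and/or appearance parameter vectors (e.g., AAM-style parameters) and encodes an observed expression by its distances to the 8 reference expressions synthesized for the same subject, followed by a simple nearest-neighbour decision; the function names, the distance-based encoding, and the classifier are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def relative_representation(expr, references):
    """Encode an expression by its position relative to a set of
    reference expressions (here: Euclidean distance to each reference).

    expr:       (d,) feature vector of the observed expression
                (e.g. shape and/or texture parameters).
    references: (k, d) matrix of the k reference expressions
                synthesized for the same subject (k = 8 in the paper).
    Returns a (k,) person-adapted descriptor.
    """
    return np.linalg.norm(references - expr, axis=1)

def classify(expr, references, gallery):
    """Nearest-neighbour labelling in the relative expression space.

    gallery: list of (label, descriptor) pairs, where each descriptor
             was built from a training subject with
             relative_representation() and that subject's own references.
    """
    desc = relative_representation(expr, references)
    labels, protos = zip(*gallery)
    dists = np.linalg.norm(np.vstack(protos) - desc, axis=1)
    return labels[int(np.argmin(dists))]
```

Because each descriptor is expressed relative to references synthesized for the same subject, descriptors from different subjects live in a comparable, person-independent space, which is what allows a basic classifier to generalize to unknown subjects.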
Keywords:
Article history: Received 2 July 2012, Accepted 11 July 2013, Available online 25 July 2013.
Paper URL: https://doi.org/10.1016/j.cviu.2013.07.005