Face inpainting based on high-level facial attributes
Abstract
We introduce a novel data-driven approach for face inpainting that makes use of the observable region of an occluded face as well as its inferred high-level facial attributes, namely gender, ethnicity, and expression. Based on the intuition that the realism of an inpainted face depends significantly on its overall consistency with respect to these high-level attributes, our approach selects a guidance face that matches the targeted attributes and combines it with the observable input regions to fill the missing areas. These two sources of information are balanced using an adaptive optimization, and the inpainting is performed on the intrinsic image layers rather than in the RGB color space, which handles illumination differences between the target and guidance faces and further improves the visual quality of the result. Our experiments demonstrate that this approach is effective at inpainting facial components, such as the mouth or the eyes, that are partially or completely occluded in the input face. A perceptual study shows that our approach generates more natural facial appearances by accounting for high-level facial attributes.
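The abstract outlines a pipeline: infer high-level attributes from the visible pixels, retrieve a guidance face with matching attributes, and fill the occluded region on intrinsic image layers so the guidance face's illumination does not leak into the result. The Python sketch below illustrates one way these stages could fit together; the function names, the fixed attribute predictor, the luminance-based intrinsic split, and the mean-shading illumination match are simplifying assumptions made for illustration, not the authors' implementation (which balances the two information sources with an adaptive optimization).

```python
import numpy as np

def infer_attributes(observed, mask):
    """Predict high-level attributes (gender, ethnicity, expression) from the
    visible pixels. A real system would run trained classifiers on the
    unoccluded region; this placeholder returns a fixed prediction."""
    return {"gender": "female", "ethnicity": "asian", "expression": "smile"}

def select_guidance_face(attributes, gallery):
    """Return the gallery face whose annotated attributes agree most with the
    inferred target attributes."""
    def match_score(entry):
        return sum(entry["attributes"][k] == v for k, v in attributes.items())
    return max(gallery, key=match_score)["image"]

def intrinsic_decompose(img, eps=1e-6):
    """Crude reflectance/shading split: shading = per-pixel luminance,
    reflectance = image / shading (a stand-in for a real intrinsic
    image decomposition)."""
    shading = img.mean(axis=2, keepdims=True) + eps
    return img / shading, shading

def inpaint_with_guidance(target, mask, guidance):
    """Fill the masked (occluded) region of `target` with the aligned `guidance`
    face, blending on intrinsic layers so the filled pixels inherit the
    target's illumination rather than the guidance face's."""
    refl_t, shad_t = intrinsic_decompose(target)
    refl_g, shad_g = intrinsic_decompose(guidance)
    visible = ~mask
    # Rescale the guidance shading so its visible region is, on average, as
    # bright as the target's visible region (a very simple illumination match).
    gain = shad_t[visible].mean() / (shad_g[visible].mean() + 1e-6)
    hole = mask[..., None]
    refl_out = np.where(hole, refl_g, refl_t)
    shad_out = np.where(hole, shad_g * gain, shad_t)
    return np.clip(refl_out * shad_out, 0.0, 1.0)

# Toy usage with random arrays standing in for aligned RGB faces in [0, 1].
rng = np.random.default_rng(0)
gallery = [{"image": rng.random((64, 64, 3)),
            "attributes": {"gender": g, "ethnicity": "asian", "expression": e}}
           for g in ("female", "male") for e in ("smile", "neutral")]
target = rng.random((64, 64, 3))
mask = np.zeros((64, 64), dtype=bool)
mask[30:45, 20:44] = True                       # pretend the mouth is occluded
attrs = infer_attributes(target, mask)
result = inpaint_with_guidance(target, mask, select_guidance_face(attrs, gallery))
print(result.shape)                             # (64, 64, 3)
```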
Article history: Received 1 August 2016, Revised 11 May 2017, Accepted 20 May 2017, Available online 25 May 2017, Version of Record 18 August 2017.
DOI: https://doi.org/10.1016/j.cviu.2017.05.008