An end-to-end framework for unconstrained monocular 3D hand pose estimation

Authors:

Highlights:

• Our proposed framework can robustly infer 3D hand pose without requiring a prior.

• Novel keypoint-based hand detector robust to confusing backgrounds and adjacent hands.

• Two anatomy-based constraints that aid the 3D hand pose estimation network's performance (see the illustrative sketch after this list).

• An end-to-end pipeline with state-of-the-art performance on several datasets.
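The highlights do not spell out the two anatomy-based constraints. The sketch below shows one common family of such constraints, a bone-length-ratio consistency penalty, purely as a hypothetical illustration rather than the paper's actual formulation; the 21-keypoint hand layout, `HAND_BONES`, `anatomical_loss`, and `ref_ratios` are all assumptions introduced here.

```python
# Hypothetical anatomy-based constraint: penalise predicted bone-length
# ratios that deviate from reference anatomical ratios. NOT the paper's
# actual loss; an illustrative sketch only.
import torch

# (parent, child) joint index pairs for the 20 hand bones, assuming the
# common 21-keypoint layout (wrist + 4 joints per finger).
HAND_BONES = [
    (0, 1), (1, 2), (2, 3), (3, 4),          # thumb
    (0, 5), (5, 6), (6, 7), (7, 8),          # index
    (0, 9), (9, 10), (10, 11), (11, 12),     # middle
    (0, 13), (13, 14), (14, 15), (15, 16),   # ring
    (0, 17), (17, 18), (18, 19), (19, 20),   # little
]

def bone_lengths(joints: torch.Tensor) -> torch.Tensor:
    """joints: (B, 21, 3) predicted 3D keypoints -> (B, 20) bone lengths."""
    parents = torch.tensor([p for p, _ in HAND_BONES])
    children = torch.tensor([c for _, c in HAND_BONES])
    return torch.norm(joints[:, children] - joints[:, parents], dim=-1)

def anatomical_loss(pred_joints: torch.Tensor,
                    ref_ratios: torch.Tensor) -> torch.Tensor:
    """Mean squared deviation of predicted bone-length ratios from
    reference ratios (e.g., averaged over the training set)."""
    lengths = bone_lengths(pred_joints)                       # (B, 20)
    ratios = lengths / (lengths.sum(dim=-1, keepdim=True) + 1e-8)
    return torch.mean((ratios - ref_ratios) ** 2)

# Usage sketch: add as an auxiliary term to the keypoint regression loss.
# total_loss = keypoint_loss + lambda_anat * anatomical_loss(pred, ref_ratios)
```

Using ratios rather than raw lengths keeps such a penalty invariant to overall hand scale, which is one reason this form is popular as an auxiliary constraint in monocular settings.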

Keywords: Hand detection, Hand tracking, Hand pose estimation, Computer vision, Deep learning

Article history: Received 26 February 2020, Revised 11 January 2021, Accepted 11 February 2021, Available online 16 February 2021, Version of Record 26 February 2021.

Article URL: https://doi.org/10.1016/j.patcog.2021.107892