3D human pose estimation in video with temporal convolutions and semi-supervised training (Facebook AI Research)

https://arxiv.org/abs/1811.11742v1


  1. A fully convolutional model based on dilated temporal convolutions over 2D keypoints.
  2. Back-projection, a simple and effective semi-supervised method that leverages unlabeled video data.
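The first contribution can be sketched in a few lines. Below is a minimal NumPy illustration (not the authors' implementation; channel widths and dilations are illustrative): each layer is a 1D convolution along the time axis, and stacking kernel-3 layers with dilations 1, 3, 9 grows the receptive field to 27 frames while keeping the model fully convolutional.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Valid 1D dilated convolution along time.
    x: (T, C_in), w: (K, C_in, C_out) -> (T - (K-1)*dilation, C_out)."""
    K, C_in, C_out = w.shape
    span = (K - 1) * dilation          # temporal extent covered by the kernel
    out = np.zeros((x.shape[0] - span, C_out))
    for t in range(out.shape[0]):
        for k in range(K):
            out[t] += x[t + k * dilation] @ w[k]
    return out

# Illustrative sizes: 17 joints x 2 coords = 34 input channels.
rng = np.random.default_rng(0)
x = rng.normal(size=(27, 34))          # 27 frames of 2D keypoints
# Dilations 1, 3, 9 with kernel 3: receptive field = 1 + 2*(1+3+9) = 27 frames.
h = dilated_conv1d(x, rng.normal(size=(3, 34, 64)), dilation=1)  # (25, 64)
h = dilated_conv1d(h, rng.normal(size=(3, 64, 64)), dilation=3)  # (19, 64)
h = dilated_conv1d(h, rng.normal(size=(3, 64, 51)), dilation=9)  # (1, 51)
print(h.shape)  # (1, 51): one 3D pose, 17 joints x 3 coords
```

In the real model each conv layer is followed by batch normalization, ReLU, and dropout, with residual connections between blocks; the loop here is only to make the dilation pattern explicit.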

Abstract:

In this work, we demonstrate that 3D poses in video can be effectively estimated with a fully convolutional model based on dilated temporal convolutions over 2D keypoints. We also introduce back-projection, a simple and effective semi-supervised training method that leverages unlabeled video data. We start with predicted 2D keypoints for unlabeled video, then estimate 3D poses and finally back-project to the input 2D keypoints. In the supervised setting, our fully-convolutional model outperforms the previous best result from the literature by 6 mm mean per-joint position error on Human3.6M, corresponding to an error reduction of 11%, and the model also shows significant improvements on HumanEva-I. Moreover, experiments with back-projection show that it comfortably outperforms previous state-of-the-art results in semi-supervised settings where labeled data is scarce. Code and models are available at github.com/facebookrese
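The back-projection idea from the abstract can be sketched as a reprojection loss: predict 3D joints for an unlabeled clip, project them back to 2D with the camera model, and penalize the distance to the 2D keypoints that were the input. A minimal NumPy sketch, assuming a simplified pinhole camera with unit focal length (the paper uses the dataset's full camera parameters):

```python
import numpy as np

def project(pose3d, f=1.0):
    """Pinhole projection of (J, 3) camera-space joints to (J, 2)."""
    return f * pose3d[:, :2] / pose3d[:, 2:3]

def back_projection_loss(pred3d, input2d, f=1.0):
    """Mean per-joint 2D distance between the re-projected 3D
    prediction and the 2D keypoints used as input (unlabeled data)."""
    reproj = project(pred3d, f)
    return np.mean(np.linalg.norm(reproj - input2d, axis=-1))

# Hypothetical unlabeled example: 17 joints placed in front of the camera.
rng = np.random.default_rng(0)
pose3d = rng.normal(size=(17, 3)) + np.array([0.0, 0.0, 5.0])
kp2d = project(pose3d)                     # the 2D keypoints the model saw
loss = back_projection_loss(pose3d, kp2d)  # 0 for a perfectly consistent pose
```

During semi-supervised training this unsupervised loss on unlabeled video is added to the ordinary supervised 3D loss on the small labeled set, so the unlabeled data regularizes the 3D predictions toward 2D consistency.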

Demo GIFs:

Video:

https://www.zhihu.com/video/1051777230560100352
Recommended reading:

TAG: Deep Learning | Facebook |