Paper Title


Self-supervised Human Mesh Recovery with Cross-Representation Alignment

Paper Authors

Xuan Gong, Meng Zheng, Benjamin Planche, Srikrishna Karanam, Terrence Chen, David Doermann, Ziyan Wu

Paper Abstract


Fully supervised human mesh recovery methods are data-hungry and generalize poorly due to the limited availability and diversity of 3D-annotated benchmark datasets. Recent progress in self-supervised human mesh recovery has been made using synthetic-data-driven training paradigms, where the model is trained from synthetic paired 2D representations (e.g., 2D keypoints and segmentation masks) and 3D meshes. However, synthetic dense correspondence maps (i.e., IUV) have rarely been explored, since the domain gap between synthetic training data and real testing data is hard to address for 2D dense representations. To alleviate this domain gap on IUV, we propose cross-representation alignment, which utilizes the complementary information from the robust but sparse representation (2D keypoints). Specifically, the alignment errors between the initial mesh estimate and both 2D representations are forwarded into the regressor and dynamically corrected in the subsequent mesh regression. This adaptive cross-representation alignment explicitly learns from the deviations and captures complementary information: robustness from the sparse representation and richness from the dense representation. We conduct extensive experiments on multiple standard benchmark datasets and demonstrate competitive results, helping take a step towards reducing the annotation effort needed to produce state-of-the-art models in human mesh estimation.
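
To make the error-feedback idea in the abstract concrete, below is a minimal sketch, assuming a PyTorch, HMR-style iterative regressor, of how per-iteration alignment errors from the sparse (2D keypoints) and dense (IUV) representations could be concatenated with image features and fed back into the parameter regressor. All names and dimensions here (`MeshRegressor`, `keypoint_alignment_error`, the 85-D SMPL parameter vector, the 64-D IUV feature placeholder) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of cross-representation alignment-error feedback.
# Keypoint projection and IUV rendering are replaced by random stand-ins
# so the example stays self-contained and runnable.
import torch
import torch.nn as nn


class MeshRegressor(nn.Module):
    """Iteratively refines SMPL parameters from image features plus
    alignment errors of sparse keypoints and dense IUV features."""

    def __init__(self, feat_dim=2048, kpt_err_dim=2 * 24, iuv_err_dim=64, param_dim=85):
        super().__init__()
        in_dim = feat_dim + kpt_err_dim + iuv_err_dim + param_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, param_dim),
        )

    def forward(self, feats, kpt_err, iuv_err, params):
        x = torch.cat([feats, kpt_err, iuv_err, params], dim=-1)
        return params + self.mlp(x)  # residual update of the parameter estimate


def keypoint_alignment_error(pred_kpts2d, obs_kpts2d):
    # Sparse alignment error: per-joint 2D offsets (robust but sparse signal).
    return (obs_kpts2d - pred_kpts2d).flatten(1)


def iuv_alignment_error(pred_iuv_feat, obs_iuv_feat):
    # Dense alignment error: discrepancy between IUV features of the current
    # mesh estimate and the observed IUV (rich but domain-gap-prone signal).
    return obs_iuv_feat - pred_iuv_feat


# Toy forward pass: the alignment errors of the current mesh estimate are
# recomputed and fed back into the regressor at every iteration.
B = 2
regressor = MeshRegressor()
feats = torch.randn(B, 2048)          # backbone image features
params = torch.zeros(B, 85)           # initial SMPL pose/shape/camera estimate
obs_kpts = torch.randn(B, 24, 2)      # observed 2D keypoints
obs_iuv = torch.randn(B, 64)          # observed IUV features (placeholder)

for _ in range(3):                    # iterative error feedback
    pred_kpts = torch.randn(B, 24, 2) # stand-in for projecting the current mesh
    pred_iuv = torch.randn(B, 64)     # stand-in for rendering IUV from the mesh
    kpt_err = keypoint_alignment_error(pred_kpts, obs_kpts)
    iuv_err = iuv_alignment_error(pred_iuv, obs_iuv)
    params = regressor(feats, kpt_err, iuv_err, params)
```

The design point this sketch illustrates is that the regressor conditions on explicit deviations rather than on the raw 2D evidence alone, so the sparse keypoint errors can stabilize the update when the dense IUV signal suffers from the synthetic-to-real domain gap.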
