Paper Title
Cloning Outfits from Real-World Images to 3D Characters for Generalizable Person Re-Identification
Paper Authors
Paper Abstract
Recently, large-scale synthetic datasets have been shown to be very useful for generalizable person re-identification. However, synthesized persons in existing datasets are mostly cartoon-like and wear random dress collocations, which limits their performance. To address this, in this work, an automatic approach is proposed to directly clone whole outfits from real-world person images onto virtual 3D characters, such that any virtual person thus created appears very similar to its real-world counterpart. Specifically, based on UV texture mapping, two cloning methods are designed, namely registered clothes mapping and homogeneous cloth expansion. Given clothes keypoints detected on person images and labeled on regular UV maps with clear clothes structures, registered mapping applies perspective homography to warp real-world clothes to their counterparts on the UV map. As for invisible clothes parts and irregular UV maps, homogeneous expansion segments a homogeneous area on the clothes as a realistic cloth pattern, or cell, and expands the cell to fill the UV map. Furthermore, a similarity-diversity expansion strategy is proposed: person images are clustered, images are sampled per cluster, and outfits are cloned for 3D character generation. This way, virtual persons can be scaled up densely in visual similarity to challenge model learning, and diversely in population to enrich the sample distribution. Finally, by rendering the cloned characters in Unity3D scenes, a more realistic virtual dataset called ClonedPerson is created, with 5,621 identities and 887,766 images. Experimental results show that the model trained on ClonedPerson achieves better generalization performance, superior to models trained on other popular real-world and synthetic person re-identification datasets. The ClonedPerson project is available at https://github.com/Yanan-Wang-cs/ClonedPerson.
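The registered clothes mapping described above warps clothes pixels from the photo onto the UV map using a perspective homography fitted to corresponding keypoints. A minimal numpy-only sketch of that fitting-and-warping step is shown below; the function names (`fit_homography`, `warp_point`) and the direct linear transform (DLT) formulation are illustrative assumptions, not the paper's actual implementation (which would operate on full image regions, e.g. via OpenCV's `cv2.warpPerspective`).

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Estimate a 3x3 perspective homography from >= 4 point pairs via DLT.

    src_pts: clothes keypoints detected on the person image.
    dst_pts: corresponding keypoints labeled on the regular UV map.
    """
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last right-singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def warp_point(H, pt):
    """Map a single 2D point through the homography (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

In practice every pixel inside the clothes region would be warped this way (or, equivalently, the UV map would be filled by inverse warping), registering the real-world clothes texture onto the UV layout.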
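For invisible clothes parts and irregular UV maps, the abstract describes expanding a sampled homogeneous cell to fill the UV map. A simplified sketch of that expansion, assuming the cell is simply tiled (the paper's actual cell selection and blending may differ), could look like:

```python
import numpy as np

def expand_cell(cell, uv_h, uv_w):
    """Tile a small homogeneous cloth cell to fill a (uv_h, uv_w) UV map.

    cell: H x W x C array cropped from a homogeneous clothes region.
    Returns a uv_h x uv_w x C texture filled by repeating the cell.
    """
    ch, cw = cell.shape[:2]
    reps_y = -(-uv_h // ch)  # ceiling division: enough vertical repeats
    reps_x = -(-uv_w // cw)  # ceiling division: enough horizontal repeats
    tiled = np.tile(cell, (reps_y, reps_x, 1))
    return tiled[:uv_h, :uv_w]  # crop to the exact UV map size
```

This fills the unseen texture area with a plausible, real-cloth-derived pattern instead of leaving it blank or cartoon-colored.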