Title
TerraPN: Unstructured Terrain Navigation using Online Self-Supervised Learning
Authors
Abstract
We present TerraPN, a novel method that learns the surface properties (traction, bumpiness, deformability, etc.) of complex outdoor terrains directly from robot-terrain interactions through self-supervised learning, and uses them for autonomous robot navigation. Our method uses RGB images of terrain surfaces and the robot's velocities as inputs, and the IMU vibrations and odometry errors experienced by the robot as labels for self-supervision. Our method computes a surface cost map that differentiates smooth, high-traction surfaces (low navigation costs) from bumpy, slippery, deformable surfaces (high navigation costs). We compute the cost map by non-uniformly sampling patches from the input RGB image based on detected boundaries between surfaces, resulting in low inference times (47.27% lower) compared to uniform sampling and existing segmentation methods. We present a novel navigation algorithm that accounts for a surface's cost, computes cost-based acceleration limits for the robot, and generates dynamically feasible, collision-free trajectories. TerraPN's surface cost prediction can be trained in ~25 minutes for five different surfaces, compared to several hours for previous learning-based segmentation methods. In terms of navigation, our method outperforms previous works in success rate (up to 35.84% higher), vibration cost of the trajectories (up to 21.52% lower), and slowing the robot on bumpy, deformable surfaces (up to 46.76% slower) in different scenarios.