
VILENS - Tightly Fused Multi-Sensor Odometry

VILENS (Visual Inertial Legged/Lidar Navigation System) is a factor-graph-based odometry algorithm that tightly fuses multiple sources of measurements (IMU, vision, lidar and leg odometry) in a single consistent optimisation. The algorithm was developed by David Wisth, Marco Camurri, Lintong Zhang and Maurice Fallon at the Oxford Robotics Institute (ORI). The papers describing this work are listed below.

VILENS is entirely ROS-based, uses GTSAM as its back-end optimiser, and achieves results equivalent to VINS-Mono and OKVIS on the EuRoC datasets, as well as to LOAM on relevant lidar datasets. The front end accepts consumer-grade cameras (RealSense D435i, T265), the SevenSense AlphaSense, a 3D lidar (Velodyne, Hesai or Ouster), or a combination of these.
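
To give a flavour of what "a single consistent optimisation" means, the sketch below stacks relative-motion constraints from several (simulated) sensors into one weighted least-squares problem over a short pose chain. It is a conceptual toy in Python/NumPy, not the VILENS implementation, which builds a nonlinear factor graph in C++ on top of GTSAM; all measurement values and noise levels below are made up.

```python
# Toy illustration of tightly fused odometry: "factors" from several
# sensors constrain the same chain of 1-D poses and are solved jointly
# in one weighted least-squares problem. Conceptual sketch only.
import numpy as np

NUM_POSES = 4  # unknowns x0..x3 (1-D positions for simplicity)
rows, rhs, weights = [], [], []

def add_prior(i, value, sigma):
    """Unary factor: x_i ~ value."""
    r = np.zeros(NUM_POSES)
    r[i] = 1.0
    rows.append(r); rhs.append(value); weights.append(1.0 / sigma)

def add_relative(i, j, delta, sigma):
    """Binary factor from any sensor: x_j - x_i ~ delta."""
    r = np.zeros(NUM_POSES)
    r[i], r[j] = -1.0, 1.0
    rows.append(r); rhs.append(delta); weights.append(1.0 / sigma)

# Anchor the first pose.
add_prior(0, 0.0, sigma=0.01)

# IMU-preintegration-like factors (noisier, every step).
for i, d in enumerate([1.05, 0.95, 1.10]):
    add_relative(i, i + 1, d, sigma=0.10)

# Lidar registration factors (more accurate, every step).
for i, d in enumerate([1.00, 1.01, 0.99]):
    add_relative(i, i + 1, d, sigma=0.02)

# A visual odometry factor spanning several frames.
add_relative(0, 3, 3.02, sigma=0.05)

# Solve all factors jointly (whitened linear least squares).
A = np.array(rows) * np.array(weights)[:, None]
b = np.array(rhs) * np.array(weights)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("fused pose estimates:", x)
```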

Balancing the Budget: Feature Selection and Tracking for Multi-Camera Visual-Inertial Odometry

Abstract: We present a multi-camera visual-inertial odometry system based on factor graph optimization which estimates motion by using all cameras simultaneously while retaining a fixed overall feature budget. We focus on motion tracking in challenging environments, such as narrow corridors, dark spaces with aggressive motions, and abrupt lighting changes. These scenarios cause traditional monocular or stereo odometry to fail. While tracking motion with extra cameras should theoretically prevent failures, it leads to additional complexity and computational burden. To overcome these challenges, we introduce two novel methods to improve multi-camera feature tracking. First, instead of tracking features separately in each camera, we track features continuously as they move from one camera to another. This increases accuracy and achieves a more compact factor graph representation. Second, we select a fixed budget of tracked features across the cameras to reduce back-end optimization time. We have found that using a smaller set of informative features can maintain the same tracking accuracy. Our proposed method was extensively tested using a hardware-synchronized device consisting of an IMU and four cameras (a front stereo pair and two lateral cameras) in scenarios including: an underground mine, large open spaces, and building interiors with narrow stairs and corridors. Compared to stereo-only state-of-the-art visual-inertial odometry methods, our approach reduces the drift rate (relative pose error) by up to 80% in translation and 39% in rotation.
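
As a rough illustration of the fixed feature budget, the hypothetical sketch below pools candidate tracks from all cameras and keeps only the top-K, preferring tracks that cross from one camera to another. The FeatureTrack structure and the scoring rule are illustrative stand-ins, not the information metric used in the paper.

```python
# Sketch of enforcing a fixed feature budget across multiple cameras:
# all tracked features are pooled, scored, and only the top-K survive
# into the back end. The score (cross-camera spread, then track length)
# is a stand-in for the selection criterion described in the paper.
from dataclasses import dataclass, field

@dataclass
class FeatureTrack:
    track_id: int
    cameras_seen: set = field(default_factory=set)  # cameras the track crossed
    num_observations: int = 0                       # total frames observed

def select_feature_budget(tracks, budget):
    """Keep at most `budget` tracks, preferring long, cross-camera tracks."""
    def score(t):
        # Tracks that move from one camera to another are kept preferentially;
        # longer tracks constrain more poses in the factor graph.
        return (len(t.cameras_seen), t.num_observations)
    return sorted(tracks, key=score, reverse=True)[:budget]

# Example: 3 candidate tracks, budget of 2.
tracks = [
    FeatureTrack(0, {"front_left"}, 12),
    FeatureTrack(1, {"front_left", "lateral_left"}, 8),   # crossed cameras
    FeatureTrack(2, {"front_right"}, 4),
]
kept = select_feature_budget(tracks, budget=2)
print([t.track_id for t in kept])  # -> [1, 0]
```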

Publication:

  • Lintong Zhang, David Wisth, Marco Camurri, Maurice Fallon, "Balancing the Budget: Feature Selection and Tracking for Multi-Camera Visual-Inertial Odometry", IEEE Robotics and Automation Letters, 2022. pdf

VILENS: Visual, Inertial, Lidar, and Leg Odometry for All-Terrain Legged Robots

Note: this is the main journal version of our work.

Abstract: We present VILENS (Visual Inertial Lidar Legged Navigation System), an odometry system for legged robots based on factor graphs. The key novelty is the tight fusion of four different sensor modalities to achieve reliable operation when the individual sensors would otherwise produce degenerate estimation. To minimize leg odometry drift, we extend the robot’s state with a linear velocity bias term which is estimated online. This bias is only observable because of the tight fusion of this preintegrated velocity factor with vision, lidar, and IMU factors. Extensive experimental validation on the ANYmal quadruped robots is presented, for a total duration of 2 h and 1.8 km traveled. The experiments involved dynamic locomotion over loose rocks, slopes, and mud; these included perceptual challenges, such as dark and dusty underground caverns or open, feature-deprived areas, as well as mobility challenges such as slipping and terrain deformation. We show an average improvement of 62% in translational and 51% in rotational error compared to a state-of-the-art loosely coupled approach. To demonstrate its robustness, VILENS was also integrated with a perceptive controller and a local path planner.
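
A much-simplified sketch of the preintegrated velocity factor with its bias term is shown below: leg-odometry velocities between two keyframes are integrated into a relative displacement, corrected by the current bias estimate, and compared with the displacement implied by the two poses. The real factor is formulated with rotations and covariance propagation; this toy version keeps everything in a fixed frame, and all numbers are invented.

```python
# Simplified sketch of a preintegrated velocity factor with an online
# bias: leg-odometry velocities between two graph nodes are integrated
# into a relative displacement, corrected by the current bias estimate,
# and compared against the displacement implied by the two poses.
import numpy as np

def preintegrate_velocity(velocities, dts):
    """Sum v_k * dt_k over the interval between two keyframes."""
    return sum(v * dt for v, dt in zip(velocities, dts))

def velocity_factor_residual(p_i, p_j, delta_p, bias, total_dt):
    """Residual: pose-implied displacement minus bias-corrected leg odometry."""
    return (p_j - p_i) - (delta_p - bias * total_dt)

# Leg odometry reports a velocity with a slow-varying offset (e.g. slip).
velocities = [np.array([0.52, 0.0, 0.0])] * 10   # m/s, 10 samples
dts = [0.1] * 10                                  # 1 s between keyframes

delta_p = preintegrate_velocity(velocities, dts)  # 0.52 m integrated
bias = np.array([0.02, 0.0, 0.0])                 # current bias estimate

p_i = np.zeros(3)
p_j = np.array([0.50, 0.0, 0.0])                  # displacement from other sensors
print(velocity_factor_residual(p_i, p_j, delta_p, bias, sum(dts)))  # ~0
```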

Publication:

  • D. Wisth, M. Camurri, and M. Fallon, “VILENS: Visual, Inertial, Lidar, and Leg Odometry for All-Terrain Legged Robots”, 2022. pdf

RAL/ICRA 2021: Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry

Abstract: We present an efficient multi-sensor odometry system for mobile platforms that jointly optimizes visual, lidar, and inertial information within a single integrated factor graph. This runs in real-time at full framerate using fixed lag smoothing. To perform such tight integration, a new method to extract 3D line and planar primitives from lidar point clouds is presented. This approach overcomes the suboptimality of typical frame-to-frame tracking methods by treating the primitives as landmarks and tracking them over multiple scans. True integration of lidar features with standard visual features and IMU is made possible using a subtle passive synchronization of lidar and camera frames. The lightweight formulation of the 3D features allows for real-time execution on a single CPU. Our proposed system has been tested on a variety of platforms and scenarios, including underground exploration with a legged robot and outdoor scanning with a dynamically moving handheld device, for a total duration of 96 min and 2.4 km traveled distance. In these test sequences, using only one exteroceptive sensor leads to failure due to either underconstrained geometry (affecting lidar) or textureless areas caused by aggressive lighting changes (affecting vision). In these conditions, our factor graph naturally uses the best information available from each sensor modality without any hard switches.
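
The sketch below illustrates the planar-primitive idea in its simplest form: a plane is fitted to a segmented lidar patch and treated as a landmark, and points from later scans contribute point-to-plane residuals to that same landmark instead of being matched frame to frame. The SVD plane fit and the example numbers are illustrative; the paper's extraction and tracking pipeline is considerably more involved.

```python
# Sketch of a planar-primitive landmark: a plane is fit to segmented
# lidar points and later scans contribute point-to-plane residuals to
# the same landmark, rather than frame-to-frame matching.
import numpy as np

def fit_plane(points):
    """Least-squares plane (unit normal n, offset d) with n.p + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of smallest variance
    d = -normal @ centroid
    return normal, d

def point_to_plane_residuals(points, normal, d):
    """Signed distances of new scan points to the tracked plane landmark."""
    return points @ normal + d

# Fit a plane to a (noisy) patch segmented from scan k.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(100, 2))
patch_k = np.column_stack([xy, 0.01 * rng.standard_normal(100)])  # z ~ 0 plane
n, d = fit_plane(patch_k)

# Points from scan k+1, after being transformed by the current pose guess,
# should lie on the same plane; their residuals drive the optimisation.
patch_k1 = np.array([[0.2, 0.3, 0.05], [-0.4, 0.1, -0.02]])
print(point_to_plane_residuals(patch_k1, n, d))
```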

Publication:

  • D. Wisth, M. Camurri, S. Das and M. Fallon, “Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry”, in IEEE Robotics and Automation Letters, 2021. pdf (ICRA Best Student Paper finalist)

ICRA 2020: Preintegrated Velocity Bias Estimation to Overcome Contact Nonlinearities in Legged Robot Odometry

Abstract: In this paper, we present a novel factor graph formulation to estimate the pose and velocity of a quadruped robot on slippery and deformable terrains. The factor graph includes a new type of preintegrated velocity factor that incorporates velocity inputs from leg odometry. To account for leg odometry drift, we extend the robot’s state vector with a bias term for this preintegrated velocity factor. This term incorporates all the effects of unmodeled uncertainties at the contact point, such as slippery or deformable ground and leg flexibility. The bias term can be accurately estimated thanks to the tight fusion of the preintegrated velocity factor with stereo vision and IMU factors, without which it would be unobservable. The system has been validated on several scenarios that involve dynamic motions of the ANYmal robot on loose rocks, slopes and muddy ground. We demonstrate a 26% improvement of relative pose error compared to our previous work and 52% compared to a state-of-the-art proprioceptive state estimator.
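
The observability argument can be made concrete with a small linear toy problem, sketched below: from integrated leg-odometry velocities alone, the per-interval displacements and a constant bias cannot be separated (the stacked system is rank deficient), but adding relative-pose constraints from vision or IMU makes it full rank, so the bias can be recovered. All numbers are invented for illustration.

```python
# Toy illustration of why the velocity bias is only observable under
# tight fusion: with leg odometry alone, a constant bias and the true
# displacements are indistinguishable; adding visual/IMU relative
# constraints makes the combined linear system full rank.
import numpy as np

K, T = 5, 1.0                                     # 5 intervals, 1 s each
true_dp = np.array([0.9, 1.0, 1.1, 1.0, 0.95])    # true displacements [m]
true_bias = 0.08                                  # slip-induced bias [m/s]

# Unknowns: [dp_1 .. dp_K, bias]
leg_rows = np.hstack([np.eye(K), T * np.ones((K, 1))])   # dp_k + bias*T
leg_meas = true_dp + true_bias * T

vis_rows = np.hstack([np.eye(K), np.zeros((K, 1))])      # dp_k only
vis_meas = true_dp

print("rank, leg only    :", np.linalg.matrix_rank(leg_rows), "of", K + 1)
A = np.vstack([leg_rows, vis_rows])
b = np.concatenate([leg_meas, vis_meas])
print("rank, leg + vision:", np.linalg.matrix_rank(A), "of", K + 1)

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated bias:", x[-1])                   # recovers ~0.08
```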

Publication:

  • D. Wisth, M. Camurri, and M. Fallon, “Preintegrated Velocity Bias Estimation to Overcome Contact Nonlinearities in Legged Robot Odometry,” in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2020. pdf

RAL/IROS 2019: Robust Legged Robot State Estimation Using Factor Graph Optimization

Abstract: Legged robots, specifically quadrupeds, are becoming increasingly attractive for industrial applications such as inspection. However, to leave the laboratory and to become useful to an end user requires reliability in harsh conditions. From the perspective of state estimation, it is essential to be able to accurately estimate the robot’s state despite challenges such as uneven or slippery terrain, textureless and reflective scenes, as well as dynamic camera occlusions. We are motivated to reduce the dependency on foot contact classifications, which fail when slipping, and to reduce position drift during dynamic motions such as trotting. To this end, we present a factor graph optimization method for state estimation which tightly fuses and smooths inertial navigation, leg odometry and visual odometry. The effectiveness of the approach is demonstrated using the ANYmal quadruped robot navigating in a realistic outdoor industrial environment. This experiment included trotting, walking, crossing obstacles and ascending a staircase. The proposed approach decreased the relative position error by up to 55% and absolute position error by 76% compared to kinematic-inertial odometry.
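
For context on the leg odometry being fused here, the sketch below shows the standard kinematic relation it rests on: if a foot is in firm, stationary contact, the base velocity follows from the leg Jacobian and joint rates. A planar two-link leg with a fixed base orientation keeps the example short; slip violates the stationary-contact assumption, which is exactly why this estimate is fused with vision and IMU rather than trusted on its own. Link lengths and joint values are illustrative, not ANYmal parameters.

```python
# Sketch of the leg-odometry velocity that feeds the factor graph:
# if a foot is stationary in the world, the base velocity follows from
# joint kinematics as v_base = -J(q) * qdot (planar leg, fixed base
# orientation). On slippery ground this estimate is biased.
import numpy as np

L1, L2 = 0.35, 0.33   # thigh and shank lengths [m] (illustrative)

def foot_jacobian(q):
    """Jacobian of the foot position w.r.t. hip/knee angles (planar 2-link leg)."""
    q1, q2 = q
    return np.array([
        [-L1 * np.sin(q1) - L2 * np.sin(q1 + q2), -L2 * np.sin(q1 + q2)],
        [ L1 * np.cos(q1) + L2 * np.cos(q1 + q2),  L2 * np.cos(q1 + q2)],
    ])

def base_velocity_from_contact(q, qdot):
    """Base velocity implied by a stationary contact foot."""
    return -foot_jacobian(q) @ qdot

q = np.array([0.6, -1.1])        # joint angles [rad]
qdot = np.array([0.8, -0.5])     # joint rates [rad/s]
print(base_velocity_from_contact(q, qdot))
```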

Publication:

  • D. Wisth, M. Camurri, and M. Fallon, “Robust Legged Robot State Estimation Using Factor Graph Optimization,” in IEEE Robotics and Automation Letters, 2019. pdf