This paper presents an extension to visual-inertial odometry (VIO) that introduces tightly coupled fusion of magnetometer measurements. A sliding window of keyframes is optimized by minimizing re-projection errors, relative inertial errors, and relative magnetometer orientation errors. The results of IMU orientation propagation are used to efficiently transform magnetometer measurements between frames, producing relative orientation constraints between consecutive frames. Soft- and hard-iron effects are calibrated using an ellipsoid-fitting algorithm. The introduction of magnetometer data significantly reduces the orientation error and also recovers the true yaw orientation with respect to magnetic north. The proposed framework operates in all environments with slowly varying magnetic fields, mainly outdoors and underwater. We have focused our work on the underwater domain, especially underwater caves, where narrow passages and turbulent flow make it difficult to perform loop closures and reset the localization drift. Underwater caves present challenges to VIO due to the absence of ambient light and the confined nature of the environment, while also being a crucial source of fresh water and a valuable historical record. Experimental results from underwater caves demonstrate the improvements in accuracy and robustness introduced by the proposed VIO extension.
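The hard/soft-iron calibration mentioned above can be sketched with a standard ellipsoid fit: raw magnetometer samples lie on an ellipsoid whose center is the hard-iron offset and whose shape encodes the soft-iron distortion. The sketch below is a generic least-squares version of that idea, not the paper's implementation; all names are illustrative.

```python
import numpy as np

def calibrate_magnetometer(raw):
    """Ellipsoid fit: raw (N,3) magnetometer samples -> (offset, correction).

    Solves the quadric  m^T M m + 2 n^T m = 1  in least squares, then
    recovers the hard-iron offset b = -M^{-1} n and a soft-iron
    correction A such that  A @ (m - b)  lies on a unit sphere.
    """
    x, y, z = raw[:, 0], raw[:, 1], raw[:, 2]
    # Design matrix for the 9-parameter quadric (scaled so the RHS is 1).
    D = np.column_stack([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z, 2*x, 2*y, 2*z])
    p, *_ = np.linalg.lstsq(D, np.ones(len(raw)), rcond=None)
    M = np.array([[p[0], p[3], p[4]],
                  [p[3], p[1], p[5]],
                  [p[4], p[5], p[2]]])
    n = p[6:9]
    b = -np.linalg.solve(M, n)        # hard-iron offset (ellipsoid center)
    scale = b @ M @ b + 1.0           # squared radius in the M metric
    # The symmetric square root of M/scale maps the ellipsoid to a sphere.
    w, V = np.linalg.eigh(M / scale)
    A = V @ np.diag(np.sqrt(w)) @ V.T
    return b, A
```

After calibration, each reading is corrected as `A @ (m - b)` before being used as a direction constraint; the corrected readings have unit norm regardless of the original distortion.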
2023
OCEANS
Hybrid Visual Inertial Odometry for Robust Underwater Estimation
Bharat Joshi, Chanaka Bandara, Ioannis Poulakakis, and
2 more authors
In OCEANS 2023 - MTS/IEEE U.S. Gulf Coast, Sep 2023
Vision-based state estimation is challenging in underwater environments due to color attenuation, low visibility, and floating particulates. All visual-inertial estimators are prone to failure when image quality degrades. However, underwater robots are required to keep track of their pose during field deployments. We propose a robust estimator that fuses the robot’s dynamic and kinematic models with proprioceptive sensors to propagate the pose whenever visual-inertial odometry (VIO) fails. Health tracking detects VIO failures and enables switching between pose estimates from VIO and a kinematic estimator. Loop closure is implemented on a weighted pose graph for global trajectory optimization. Experimental results from Aqua2 Autonomous Underwater Vehicle field deployments demonstrate the robustness of our approach across different underwater environments, such as shipwrecks and coral reefs. The proposed hybrid approach is robust to VIO failures, producing consistent trajectories even in harsh conditions.
ICRA
SM/VIO: Robust Underwater State Estimation Switching Between Model-based
and Visual Inertial Odometry
Bharat Joshi, Hunter Damron, Sharmin Rahman, and
1 more author
In IEEE International Conference on Robotics and Automation (ICRA), Sep 2023
This paper addresses the robustness problem of visual-inertial state estimation for underwater
operations. Underwater robots operating in a challenging environment are required to know their
pose at all times. All vision-based localization schemes are prone to failure due to poor visibility
conditions, color loss, and lack of features. The proposed approach utilizes a model of the robot’s
kinematics together with proprioceptive sensors to maintain the pose estimate during visual-inertial
odometry (VIO) failures. Furthermore, the trajectories from successful VIO and the ones from the
model-driven odometry are integrated in a coherent set that maintains a consistent pose at all times.
Health monitoring tracks the VIO process, ensuring timely switches between the two estimators.
Finally, loop closure is implemented on the overall trajectory. The resulting framework is a robust
estimator switching between model-based and visual-inertial odometry (SM/VIO). Experimental results
from numerous deployments of the Aqua2 vehicle demonstrate the robustness of our approach over
coral reefs and a shipwreck.
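The switching idea described above can be illustrated with a minimal sketch: a health signal selects between the two estimators, and each switch re-anchors the newly active source at the last published pose so the fused trajectory stays continuous. This is a toy 2D stand-in for the paper's health monitoring and trajectory integration, not its implementation; the feature-count threshold and all names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Pose2D:
    # Minimal pose for illustration: planar position plus heading.
    x: float
    y: float
    yaw: float

class HybridEstimator:
    """Toy SM/VIO-style switcher: trust VIO while healthy, fall back to
    model-based dead reckoning when it degrades, and re-anchor at every
    switch so the fused pose has no jump (position only, for brevity)."""

    def __init__(self, min_tracked_features=15):
        self.min_feats = min_tracked_features
        self.offset = (0.0, 0.0)   # shift applied to the active source
        self.using_vio = True
        self.last = Pose2D(0.0, 0.0, 0.0)

    def update(self, vio_pose, model_pose, tracked_features):
        healthy = tracked_features >= self.min_feats
        if healthy != self.using_vio:
            # Switch source: anchor the new source at the last output
            # pose so the fused trajectory stays continuous.
            src = vio_pose if healthy else model_pose
            self.offset = (self.last.x - src.x, self.last.y - src.y)
            self.using_vio = healthy
        src = vio_pose if self.using_vio else model_pose
        self.last = Pose2D(src.x + self.offset[0],
                           src.y + self.offset[1], src.yaw)
        return self.last
```

Even when the failed estimator reports wild poses, the fused output only ever moves by the increments of the currently trusted source, which is the consistency property the paper targets.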
ICRA
Real-Time Dense 3D Mapping of Underwater Environments
Weihan Wang, Bharat Joshi, Nathaniel Burgdorfer, and
4 more authors
In IEEE International Conference on Robotics and Automation (ICRA), Sep 2023
This paper addresses real-time dense 3D reconstruction for a resource-constrained Autonomous
Underwater Vehicle (AUV). Underwater vision-guided operations are among the most challenging as they
combine 3D motion in the presence of external forces, limited visibility, and absence of global
positioning. Obstacle avoidance and effective path planning require online dense reconstructions of the
environment. Autonomous operation is central to environmental monitoring, marine archaeology, resource
utilization, and underwater cave exploration. To address this problem, we propose to use SVIn2, a robust
VIO method, together with a real-time 3D reconstruction pipeline. We provide extensive evaluation on four
challenging underwater datasets. Our pipeline produces reconstructions comparable to those of COLMAP,
the state-of-the-art offline 3D reconstruction method, at high frame rates on a single CPU.
ISSR
Towards Mapping of Underwater Structures by a Team of Autonomous Underwater Vehicles
Marios Xanthidis, Bharat Joshi, Monika Roznere, and
6 more authors
In this paper, we discuss how to effectively map an underwater structure with a team of robots
considering the specific challenges posed by the underwater environment. The overarching goal of
this work is to produce high-definition, accurate, photorealistic representations of underwater
structures. Due to the many limitations of vision underwater, operating at a distance from the
structure results in degraded images that lack detail, while operating close to the structure
increases the accumulated uncertainty due to the limited viewing area, which causes drift. We
propose a multi-robot mapping framework that utilizes two types of robots: proximal observers which
map close to the structure and distal observers which provide localization for proximal observers
and bird’s-eye-view situational awareness. The paper presents the fundamental components required
to enable the proposed framework, including robust state estimation, real-time 3D mapping, and
active perception navigation strategies for the two types of robots, together with current results
from real shipwrecks and simulations. The paper then outlines promising research directions and
plans for a fully integrated framework that allows robots to map harsh environments.
2022
AUV
Underwater Exploration and Mapping
Bharat Joshi, Marios Xanthidis, Monika Roznere, and
4 more authors
In IEEE/OES Autonomous Underwater Vehicles Symposium (AUV), Sep 2022
This paper analyzes the open challenges of exploring and mapping in the underwater realm with the
goal of identifying research opportunities that will enable an Autonomous Underwater Vehicle (AUV)
to robustly explore different environments. A taxonomy of environments based on their 3D structure
is presented, together with an analysis of how that structure influences camera placement. The
distinction between exploration and coverage is drawn, along with how each dictates a different motion strategy.
Loop closure, while critical for the accuracy of the resulting map, proves to be particularly
challenging due to the limited field of view and the sensitivity to viewing direction. Experimental
results of enforcing loop closures in underwater caves demonstrate a novel navigation strategy.
Dense 3D mapping, both online and offline, as well as other sensor configurations are discussed
following the presented taxonomy. Experimental results from field trials illustrate the above analysis.
IFAC CAMS
Multi-Robot Exploration of Underwater Structures
Marios Xanthidis, Bharat Joshi, Jason M. O’Kane, and
1 more author
IFAC-PapersOnLine, Sep 2022
14th IFAC Conference on Control Applications in Marine Systems, Robotics, and Vehicles CAMS 2022
This paper discusses a novel approach for the exploration of an underwater structure. A team of
robots splits into two roles: certain robots approach the structure collecting detailed information
(proximal observers) while the rest (distal observers) keep a distance providing an overview of the
mission and assist in the localization of the proximal observers via a Cooperative Localization
framework. Proximal observers utilize a novel robust switching model-based/visual-inertial odometry
to overcome vision-based localization failures. Exploration strategies for the proximal and the distal
observer are discussed.
ICRA
High Definition, Inexpensive, Underwater Mapping
Bharat Joshi, Marios Xanthidis, Sharmin Rahman, and
1 more author
In IEEE International Conference on Robotics and Automation (ICRA), Sep 2022
In this paper, we present a complete framework for underwater SLAM utilizing a single inexpensive
sensor. In recent years, the imaging technology of action cameras has produced stunning results
even under the challenging conditions of the underwater domain. The GoPro 9 camera provides high
definition video in synchronization with an Inertial Measurement Unit (IMU) data stream encoded in
a single mp4 file. The visual inertial SLAM framework is augmented to adjust the map after each loop
closure. Data collected at an artificial wreck off the coast of South Carolina and in caverns and
caves in Florida demonstrate the robustness of the proposed approach in a variety of conditions.
2021
ICRA_Workshop
Towards Multi-Robot Shipwreck Mapping
Marios Xanthidis, Bharat Joshi, Nare Karapetyan, and
8 more authors
In Advanced Marine Robotics Technical Committee Workshop on Active Perception at IEEE International Conference on Robotics and Automation (ICRA), Sep 2021
This paper introduces on-going work about a novel methodology for cooperative mapping of an underwater
structure by a team of robots, focusing on accurate photorealistic mapping of shipwrecks. Submerged
vessels are history capsules and appear all over the world; as such, it is important to capture
their state through visual sensors. Prior work in the literature addresses the problem with a single expensive
robot or robots with similar capabilities that loosely cooperate with each other. The proposed methodology
utilizes vision as the primary sensor. Two types of robots, termed distal and proximal observers, having
distinct roles, operate around the structure. The first type keeps a distance from the wreck providing
a bird’s-eye view of the wreck in sync with the poses of the vehicles of the other type. The second
type operates near the wreck mapping in detail the exterior of the vessel. Preliminary results illustrate
the potential of the proposed strategy.
2020
IROS
DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization
Bharat Joshi, Md Modasshir, Travis Manderson, and
5 more authors
In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sep 2020
In this paper, we propose a real-time deep learning approach for determining the 6D relative
pose of Autonomous Underwater Vehicles (AUV) from a single image. A team of autonomous robots
localizing themselves in a communication-constrained underwater environment is essential for many
applications such as underwater exploration, mapping, multi-robot convoying, and other multi-robot
tasks. Due to the profound difficulty of collecting ground truth images with accurate 6D poses
underwater, this work utilizes rendered images from the Unreal Game Engine simulation for training.
An image-to-image translation network is employed to bridge the gap between the rendered and the
real images producing synthetic images for training. The proposed method predicts the 6D pose of an
AUV from a single image as 2D image keypoints representing 8 corners of the 3D model of the AUV,
and then the 6D pose in the camera coordinates is determined using RANSAC-based PnP. Experimental
results in real-world underwater environments (swimming pool and ocean) with different cameras
demonstrate the robustness and accuracy of the proposed technique in terms of translation error
and orientation error over the state-of-the-art methods. The code is publicly available.
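The keypoints-to-pose step described above (predicted 2D keypoints for the 8 model corners, then pose recovery) can be illustrated with a minimal Direct Linear Transform solver. The paper uses RANSAC-based PnP to reject keypoint outliers (e.g., via OpenCV's solvePnPRansac); this noise-free sketch omits the RANSAC loop, and all names are illustrative.

```python
import numpy as np

def dlt_pose(pts3d, pts2d, K):
    """Minimal DLT pose solver: 3D model corners + 2D detections -> (R, t).

    pts3d: (N,3) corner coordinates in the model frame (N >= 6).
    pts2d: (N,2) pixel detections.  K: 3x3 camera intrinsics.
    """
    # Work in normalized camera coordinates (strip the intrinsics).
    uv1 = np.column_stack([pts2d, np.ones(len(pts2d))]) @ np.linalg.inv(K).T
    A = []
    for (X, Y, Z), (u, v, _) in zip(pts3d, uv1):
        row = [X, Y, Z, 1.0]
        # Two linear equations per correspondence from u = P1.X / P3.X etc.
        A.append(row + [0.0] * 4 + [-u * c for c in row])
        A.append([0.0] * 4 + row + [-v * c for c in row])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)          # projection up to scale and sign
    if np.linalg.det(P[:, :3]) < 0:   # fix sign (points in front of camera)
        P = -P
    U, s, Vt = np.linalg.svd(P[:, :3])
    R = U @ Vt                        # nearest rotation matrix
    t = P[:, 3] / s.mean()            # translation under the same scale
    return R, t
```

In the full pipeline, this solve would sit inside a RANSAC loop over the predicted keypoints so that a few badly localized corners cannot corrupt the recovered 6D pose.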
2019
IROS
Experimental Comparison of Open Source Visual-Inertial-Based State Estimation Algorithms in the Underwater Domain
Bharat Joshi, Brennan Cain, James Johnson, and
8 more authors
In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sep 2019
A plethora of state estimation techniques have appeared in the last decade using visual data,
and more recently with added inertial data. Datasets typically used for evaluation include indoor
and urban environments, where supporting videos have shown impressive performance. However, such
techniques have not been fully evaluated in challenging conditions, such as the marine domain.
In this paper, we compare ten recent open-source packages to provide insights into their performance
and guidelines for addressing current challenges. Specifically, we selected direct and indirect methods
that fuse camera and Inertial Measurement Unit (IMU) data together. Experiments are conducted by
testing all packages on datasets collected over the years with underwater robots in our laboratory.
All the datasets are made available online.
2014
IOE
Modeling, Simulation and Implementation of Brushed DC Motor Speed Control Using Optical Incremental Encoder Feedback
Bharat Joshi, Rakesh Shrestha, and Ramesh Chaudhary