This paper addresses the robustness problem of visual-inertial state estimation for underwater
operations. Underwater robots operating in a challenging environment are required to know their
pose at all times. All vision-based localization schemes are prone to failure due to poor visibility
conditions, color loss, and lack of features. The proposed approach utilizes a model of the robot’s
kinematics together with proprioceptive sensors to maintain the pose estimate during visual-inertial
odometry (VIO) failures. Furthermore, the trajectories from successful VIO and the ones from the
model-driven odometry are integrated into a coherent set that maintains a consistent pose estimate
at all times. Health monitoring tracks the VIO process, ensuring timely switches between the two estimators.
Finally, loop closure is implemented on the overall trajectory. The resulting framework is a robust
estimator switching between model-based and visual-inertial odometry (SM/VIO). Experimental results
from numerous deployments of the Aqua2 vehicle demonstrate the robustness of our approach over
coral reefs and a shipwreck.
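As an illustration of the switching idea described above (not the authors' implementation), the sketch below shows a hypothetical health monitor that keeps using VIO while it reports enough tracked features, falls back to model-based dead reckoning otherwise, and re-anchors a recovered VIO segment so the combined trajectory stays continuous; all class and method names are assumptions.

```python
# Minimal sketch of health-monitored switching between VIO and model-based
# dead reckoning. Poses are 4x4 homogeneous transforms (numpy); all names
# (model_odometry.propagate, thresholds, etc.) are illustrative assumptions,
# not the SM/VIO codebase.
import numpy as np

class SwitchingEstimator:
    def __init__(self, model_odometry, min_features=15):
        self.model = model_odometry      # dead reckoning from kinematics + IMU/depth
        self.min_features = min_features
        self.using_vio = True
        self.T_world = np.eye(4)         # current pose estimate in the world frame
        self.T_align = np.eye(4)         # maps raw VIO output into the world frame

    def vio_healthy(self, n_tracked, diverged):
        # Simple health check: enough tracked features and no reported divergence.
        return (not diverged) and n_tracked >= self.min_features

    def update(self, T_vio, n_tracked, diverged, control, dt):
        if self.vio_healthy(n_tracked, diverged):
            if not self.using_vio:
                # VIO just recovered: anchor its new segment at the last
                # model-propagated pose so the overall trajectory is continuous.
                self.T_align = self.T_world @ np.linalg.inv(T_vio)
                self.using_vio = True
            self.T_world = self.T_align @ T_vio
        else:
            # VIO failure: propagate the last pose with the vehicle model instead.
            self.using_vio = False
            self.T_world = self.model.propagate(self.T_world, control, dt)
        return self.T_world
```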
@inproceedings{joshi_robust_estimator_icra_2023,
  author    = {Joshi, Bharat and Damron, Hunter and Rahman, Sharmin and Rekleitis, Ioannis},
  title     = {SM/VIO: Robust Underwater State Estimation Switching Between Model-based and Visual Inertial Odometry},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2023},
}
ICRA
Real-Time Dense 3D Mapping of Underwater Environments
Weihan Wang, Bharat Joshi, Nathaniel Burgdorfer, and
4 more authors
In IEEE International Conference on Robotics and Automation (ICRA), 2023
This paper addresses real-time dense 3D reconstruction for a resource-constrained Autonomous
Underwater Vehicle (AUV). Underwater vision-guided operations are among the most challenging as they
combine 3D motion in the presence of external forces, limited visibility, and absence of global
positioning. Obstacle avoidance and effective path planning require online dense reconstructions of the
environment. Autonomous operation is central to environmental monitoring, marine archaeology, resource
utilization, and underwater cave exploration. To address this problem, we propose to use SVIn2, a robust
VIO method, together with a real-time 3D reconstruction pipeline. We provide extensive evaluation on four
challenging underwater datasets. Our pipeline produces reconstructions comparable to those of
COLMAP, the state-of-the-art offline 3D reconstruction method, at high frame rates on a single CPU.
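As a generic illustration of the kind of pipeline described above (posed keyframes from a VIO front end fused into a volumetric model), the sketch below uses Open3D's TSDF integration; it is not the paper's CPU pipeline, and the keyframes generator and intrinsics object are assumed inputs.

```python
# Generic sketch of fusing posed RGB-D keyframes into a TSDF volume and
# extracting a mesh. Open3D is used only for illustration; this is not the
# paper's pipeline. `keyframes` is assumed to yield (color, depth, T_world_cam,
# intrinsic) tuples, with `intrinsic` an o3d.camera.PinholeCameraIntrinsic.
import numpy as np
import open3d as o3d

def fuse_keyframes(keyframes, voxel_size=0.02):
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=voxel_size,
        sdf_trunc=5 * voxel_size,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

    for color, depth, T_world_cam, intrinsic in keyframes:
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            o3d.geometry.Image(color), o3d.geometry.Image(depth),
            depth_scale=1000.0, depth_trunc=5.0, convert_rgb_to_intensity=False)
        # Open3D expects the extrinsic as the world-to-camera transform.
        volume.integrate(rgbd, intrinsic, np.linalg.inv(T_world_cam))

    return volume.extract_triangle_mesh()
```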
@inproceedings{wang_underwater_dense_reconstruction_icra_2023,
  author    = {Wang, Weihan and Joshi, Bharat and Burgdorfer, Nathaniel and Batsos, Konstantinos and {Quattrini Li}, Alberto and Mordohai, Philippos and Rekleitis, Ioannis},
  title     = {Real-Time Dense 3D Mapping of Underwater Environments},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2023},
}
ISRR
Towards Mapping of Underwater Structures by a Team of Autonomous Underwater Vehicles
Marios Xanthidis, Bharat Joshi, Monika Roznere, and
6 more authors
In Robotics Research, 2023
In this paper, we discuss how to effectively map an underwater structure with a team of robots
considering the specific challenges posed by the underwater environment. The overarching goal of
this work is to produce high-definition, accurate, photorealistic representations of underwater
structures. Due to the many limitations of vision underwater, operating at a distance from the
structure results in degraded images that lack detail, while operating close to the structure
increases the accumulated uncertainty due to the limited viewing area, which causes drift. We
propose a multi-robot mapping framework that utilizes two types of robots: proximal observers which
map close to the structure and distal observers which provide localization for proximal observers
and bird’s-eye-view situational awareness. The paper presents the fundamental components necessary
to enable the proposed framework, including robust state estimation, real-time 3D mapping, and
active perception navigation strategies for the two types of robots, together with current results
from real shipwrecks and simulations. The paper then outlines promising research directions and
plans toward a fully integrated framework that allows robots to map in harsh environments.
@inproceedings{xanthidis_multirobot_mapping_issr,
  author    = {Xanthidis, Marios and Joshi, Bharat and Roznere, Monika and Wang, Weihan and Burgdorfer, Nathaniel and {Quattrini Li}, Alberto and Mordohai, Philippos and Nelakuditi, Srihari and Rekleitis, Ioannis},
  title     = {Towards Mapping of Underwater Structures by a Team of Autonomous Underwater Vehicles},
  booktitle = {Robotics Research},
  year      = {2023},
  publisher = {Springer Nature Switzerland},
  pages     = {170--185},
  doi       = {10.1007/978-3-031-25555-7_12},
}
2022
AUV
Underwater Exploration and Mapping
Bharat Joshi, Marios Xanthidis, Monika Roznere, and
4 more authors
In IEEE/OES Autonomous Underwater Vehicles Symposium (AUV), 2022
This paper analyzes the open challenges of exploring and mapping in the underwater realm with the
goal of identifying research opportunities that will enable an Autonomous Underwater Vehicle (AUV)
to robustly explore different environments. A taxonomy of environments based on their 3D structure
is presented together with an analysis of how that influences camera placement. The difference
between exploration and coverage is discussed, along with how each dictates different motion strategies.
Loop closure, while critical for the accuracy of the resulting map, proves to be particularly
challenging due to the limited field of view and the sensitivity to viewing direction. Experimental
results of enforcing loop closures in underwater caves demonstrate a novel navigation strategy.
Dense 3D mapping, both online and offline, as well as other sensor configurations are discussed
following the presented taxonomy. Experimental results from field trials illustrate the above analysis.
@inproceedings{joshi_auv_2022,
  author    = {Joshi, Bharat and Xanthidis, Marios and Roznere, Monika and Burgdorfer, Nathaniel J. and Mordohai, Philippos and {Quattrini Li}, Alberto and Rekleitis, Ioannis},
  title     = {Underwater Exploration and Mapping},
  booktitle = {IEEE/OES Autonomous Underwater Vehicles Symposium (AUV)},
  year      = {2022},
  pages     = {1-7},
  doi       = {10.1109/AUV53081.2022.9965805},
}
IFAC CAMS
Multi-Robot Exploration of Underwater Structures
Marios Xanthidis, Bharat Joshi, Jason M. O’Kane, and
1 more author
IFAC-PapersOnLine, 2022
14th IFAC Conference on Control Applications in Marine Systems, Robotics, and Vehicles CAMS 2022
This paper discusses a novel approach for the exploration of an underwater structure. A team of
robots splits into two roles: certain robots approach the structure to collect detailed information
(proximal observers), while the rest (distal observers) keep a distance, providing an overview of
the mission and assisting in the localization of the proximal observers via a Cooperative
Localization framework. Proximal observers utilize a novel robust switching model-based/visual-inertial
odometry to overcome vision-based localization failures. Exploration strategies for the proximal
and the distal observers are discussed.
@article{xanthidis_ifac_2022,
  author  = {Xanthidis, Marios and Joshi, Bharat and O'Kane, Jason M. and Rekleitis, Ioannis},
  title   = {Multi-Robot Exploration of Underwater Structures},
  journal = {IFAC-PapersOnLine},
  volume  = {55},
  number  = {31},
  pages   = {395-400},
  year    = {2022},
  note    = {14th IFAC Conference on Control Applications in Marine Systems, Robotics, and Vehicles CAMS 2022},
  issn    = {2405-8963},
  doi     = {10.1016/j.ifacol.2022.10.460},
  url     = {https://www.sciencedirect.com/science/article/pii/S240589632202506X},
}
ICRA
High Definition, Inexpensive, Underwater Mapping
Bharat Joshi, Marios Xanthidis, Sharmin Rahman, and
1 more author
In IEEE International Conference on Robotics and Automation (ICRA), 2022
In this paper, we present a complete framework for Underwater SLAM utilizing a single inexpensive
sensor. In recent years, the imaging technology of action cameras has been producing stunning
results even under the challenging conditions of the underwater domain. The GoPro 9 camera provides high
definition video in synchronization with an Inertial Measurement Unit (IMU) data stream encoded in
a single mp4 file. The visual inertial SLAM framework is augmented to adjust the map after each loop
closure. Data collected at an artificial wreck off the coast of South Carolina and in caverns and
caves in Florida demonstrate the robustness of the proposed approach in a variety of conditions.
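The GoPro 9 stores its IMU telemetry as a GPMF metadata track inside the same mp4 as the video. As a hedged sketch (assuming ffprobe/ffmpeg are installed; not part of the authors' toolchain), the telemetry track can be located and dumped for a GPMF parser like this:

```python
# Sketch: locate and dump the GoPro GPMF telemetry track (which carries the
# IMU samples) from an mp4. Assumes ffprobe/ffmpeg are on the PATH; this is
# an illustration, not the authors' tooling.
import json
import subprocess

def extract_gpmf(mp4_path, out_path="telemetry.bin"):
    # Find the data stream whose codec tag is 'gpmd' (GoPro metadata).
    probe = json.loads(subprocess.check_output([
        "ffprobe", "-v", "quiet", "-print_format", "json",
        "-show_streams", mp4_path]))
    gpmd_index = next(s["index"] for s in probe["streams"]
                      if s.get("codec_tag_string") == "gpmd")

    # Copy that stream, untouched, into a raw binary file.
    subprocess.check_call([
        "ffmpeg", "-y", "-i", mp4_path,
        "-map", f"0:{gpmd_index}", "-codec", "copy",
        "-f", "rawvideo", out_path])
    return out_path
```

The resulting binary still has to be decoded with a GPMF parser (for example GoPro's open-source gpmf-parser) to recover timestamped accelerometer and gyroscope samples aligned with the video frames.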
@inproceedings{joshi_gopro_icra_2022,
  author    = {Joshi, Bharat and Xanthidis, Marios and Rahman, Sharmin and Rekleitis, Ioannis},
  title     = {High Definition, Inexpensive, Underwater Mapping},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2022},
  pages     = {1113-1121},
  doi       = {10.1109/ICRA46639.2022.9811695},
}
2021
ICRA_Workshop
Towards Multi-Robot Shipwreck Mapping
Marios Xanthidis, Bharat Joshi, Nare Karapetyan, and
8 more authors
In Advanced Marine Robotics Technical Committee Workshop on Active Perception at IEEE International Conference on Robotics and Automation (ICRA), 2021
This paper introduces ongoing work on a novel methodology for cooperative mapping of an underwater
structure by a team of robots, focusing on accurate photorealistic mapping of shipwrecks. Submerged
vessels serve as historical time capsules and appear all over the world; as such, it is important
to capture their state through visual sensors. Work in the literature addresses the problem with a
single expensive robot or with robots of similar capabilities that loosely cooperate with each other.
The proposed methodology utilizes vision as the primary sensor. Two types of robots with distinct
roles, termed distal and proximal observers, operate around the structure. The first type keeps a
distance from the wreck, providing a bird’s-eye view of the wreck in sync with the poses of the
vehicles of the other type. The second type operates near the wreck, mapping the exterior of the
vessel in detail. Preliminary results illustrate the potential of the proposed strategy.
@inproceedings{icra_workshop_2021,
  author    = {Xanthidis, Marios and Joshi, Bharat and Karapetyan, Nare and Roznere, Monika and Wang, Weihan and Johnson, James and {Quattrini Li}, Alberto and Casana, Jesse and Mordohai, Philippos and Nelakuditi, Srihari and Rekleitis, Ioannis},
  title     = {Towards Multi-Robot Shipwreck Mapping},
  booktitle = {Advanced Marine Robotics Technical Committee Workshop on Active Perception at IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2021},
}
2020
IROS
DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization
Bharat Joshi, Md Modasshir, Travis Manderson, and
5 more authors
In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
In this paper, we propose a real-time deep learning approach for determining the 6D relative
pose of Autonomous Underwater Vehicles (AUV) from a single image. A team of autonomous robots
localizing themselves in a communication-constrained underwater environment is essential for many
applications such as underwater exploration, mapping, multi-robot convoying, and other multi-robot
tasks. Due to the profound difficulty of collecting ground truth images with accurate 6D poses
underwater, this work utilizes rendered images from the Unreal Game Engine simulation for training.
An image-to-image translation network is employed to bridge the gap between the rendered and the
real images, producing synthetic images for training. The proposed method predicts the 6D pose of an
AUV from a single image as 2D image keypoints representing 8 corners of the 3D model of the AUV,
and then the 6D pose in the camera coordinates is determined using RANSAC-based PnP. Experimental
results in real-world underwater environments (swimming pool and ocean) with different cameras
demonstrate the robustness and accuracy of the proposed technique in terms of translation error
and orientation error over the state-of-the-art methods. The code is publicly available.
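The final pose-recovery step described above (predicted 2D corner keypoints plus the known 3D model corners, solved with RANSAC-based PnP) maps directly to OpenCV; the sketch below illustrates only that step, with the corner arrays and camera matrix as placeholders rather than values from the paper.

```python
# Sketch of the final DeepURL-style step: recover the 6D pose of the AUV from
# the predicted 2D locations of its eight 3D bounding-box corners using
# RANSAC-based PnP. Inputs are placeholders, not values from the paper.
import numpy as np
import cv2

def pose_from_keypoints(corners_2d, corners_3d, K, dist_coeffs=None):
    """corners_2d: (8, 2) predicted pixel keypoints; corners_3d: (8, 3) model corners."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        corners_3d.astype(np.float64), corners_2d.astype(np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_EPNP, reprojectionError=5.0)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # rotation of the AUV in camera coordinates
    return R, tvec               # translation of the AUV in camera coordinates
```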
@inproceedings{joshi_deepurl_iros_2020,
  author    = {Joshi, Bharat and Modasshir, Md and Manderson, Travis and Damron, Hunter and Xanthidis, Marios and {Quattrini Li}, Alberto and Rekleitis, Ioannis and Dudek, Gregory},
  title     = {DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2020},
  pages     = {1777-1784},
  doi       = {10.1109/IROS45743.2020.9341201},
}
2019
IROS
Experimental Comparison of Open Source Visual-Inertial-Based State Estimation Algorithms in the Underwater Domain
Bharat Joshi, Brennan Cain, James Johnson, and
8 more authors
In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019
A plethora of state estimation techniques have appeared in the last decade using visual data,
and more recently with added inertial data. Datasets typically used for evaluation include indoor
and urban environments, where supporting videos have shown impressive performance. However, such
techniques have not been fully evaluated in challenging conditions, such as the marine domain.
In this paper, we compare ten recent open-source packages to provide insights on their performance
and guidelines on addressing current challenges. Specifically, we selected direct and indirect methods
that fuse camera and Inertial Measurement Unit (IMU) data together. Experiments are conducted by
testing all packages on datasets collected over the years with underwater robots in our laboratory.
All the datasets are made available online.
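Comparisons of this kind are usually reported as absolute trajectory error (ATE) after aligning each estimated trajectory to the reference; the snippet below is a minimal numpy sketch of RMSE ATE with Umeyama (Sim(3)) alignment, not the paper's evaluation scripts.

```python
# Minimal sketch of absolute trajectory error (ATE) after Umeyama alignment,
# the kind of metric commonly used to compare VIO packages against a
# reference trajectory. Not the paper's evaluation code.
import numpy as np

def ate_rmse(est, gt):
    """est, gt: (N, 3) arrays of time-associated estimated / reference positions."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    # Umeyama: similarity transform (s, R, t) that best maps est onto gt.
    U, S, Vt = np.linalg.svd(G.T @ E / len(est))
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / E.var(axis=0).sum()
    t = mu_g - s * R @ mu_e
    aligned = (s * (R @ est.T)).T + t
    return float(np.sqrt(((aligned - gt) ** 2).sum(axis=1).mean()))
```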
@inproceedings{joshi_comppaper_iros_2019,
  author    = {Joshi, Bharat and Cain, Brennan and Johnson, James and Kalaitzakis, Michail and Rahman, Sharmin and Xanthidis, Marios and Hernandez, Alan and Karapetyan, Nare and {Quattrini Li}, Alberto and Vitzilaios, Nikolaos and Rekleitis, Ioannis},
  title     = {Experimental Comparison of Open Source Visual-Inertial-Based State Estimation Algorithms in the Underwater Domain},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2019},
  doi       = {10.1109/IROS40897.2019.8968049},
}
2014
IOE
Modeling, Simulation and Implementation of Brushed DC Motor Speed Control Using Optical Incremental Encoder Feedback
Bharat Joshi, Rakesh Shrestha, and Ramesh Chaudhary