Publications
* denotes equal contribution
An up-to-date list is available on Google Scholar
2022
- [AUV] Underwater Exploration and Mapping
  Bharat Joshi, Marios Xanthidis, Monika Roznere, and 4 more authors
  In IEEE/OES Autonomous Underwater Vehicles Symposium (AUV), 2022
This paper analyzes the open challenges of exploring and mapping in the underwater realm with the goal of identifying research opportunities that will enable an Autonomous Underwater Vehicle (AUV) to robustly explore different environments. A taxonomy of environments based on their 3D structure is presented together with an analysis on how that influences the camera placement. The difference between exploration and coverage is presented and how they dictate different motion strategies. Loop closure, while critical for the accuracy of the resulting map, proves to be particularly challenging due to the limited field of view and the sensitivity to viewing direction. Experimental results of enforcing loop closures in underwater caves demonstrate a novel navigation strategy. Dense 3D mapping, both online and offline, as well as other sensor configurations are discussed following the presented taxonomy. Experimental results from field trials illustrate the above analysis.
@inproceedings{auv,
  author    = {Joshi, Bharat and Xanthidis, Marios and Roznere, Monika and Burgdorfer, Nathaniel J. and Mordohai, Philippos and {Quattrini Li}, Alberto and Rekleitis, Ioannis},
  booktitle = {IEEE/OES Autonomous Underwater Vehicles Symposium (AUV)},
  title     = {Underwater Exploration and Mapping},
  year      = {2022},
  pages     = {1-7},
  doi       = {10.1109/AUV53081.2022.9965805},
}
- [IFAC CAMS] Multi-Robot Exploration of Underwater Structures
  Marios Xanthidis, Bharat Joshi, Jason M. O'Kane, and 1 more author
  IFAC-PapersOnLine, 2022
This paper presents a novel approach for the exploration of an underwater structure. A team of robots splits into two roles: some robots approach the structure to collect detailed information (proximal observers), while the rest (distal observers) keep a distance, providing an overview of the mission and assisting in the localization of the proximal observers via a Cooperative Localization framework. Proximal observers utilize a novel robust switching model-based/visual-inertial odometry to overcome vision-based localization failures. Exploration strategies for both the proximal and the distal observers are discussed.
@article{ifac,
  title   = {Multi-Robot Exploration of Underwater Structures},
  journal = {IFAC-PapersOnLine},
  volume  = {55},
  number  = {31},
  pages   = {395-400},
  year    = {2022},
  note    = {14th IFAC Conference on Control Applications in Marine Systems, Robotics, and Vehicles CAMS 2022},
  issn    = {2405-8963},
  doi     = {10.1016/j.ifacol.2022.10.460},
  url     = {https://www.sciencedirect.com/science/article/pii/S240589632202506X},
  author  = {Xanthidis, Marios and Joshi, Bharat and O'Kane, Jason M. and Rekleitis, Ioannis},
}
- [ICRA] High Definition, Inexpensive, Underwater Mapping
  Bharat Joshi, Marios Xanthidis, Sharmin Rahman, and 1 more author
  In IEEE International Conference on Robotics and Automation (ICRA), 2022
In this paper we present a complete framework for Underwater SLAM utilizing a single inexpensive sensor. In recent years, the imaging technology of action cameras has been producing stunning results, even under the challenging conditions of the underwater domain. The GoPro 9 camera provides high-definition video in synchronization with an Inertial Measurement Unit (IMU) data stream, encoded in a single mp4 file. The visual-inertial SLAM framework is augmented to adjust the map after each loop closure. Data collected at an artificial wreck off the coast of South Carolina and in caverns and caves in Florida demonstrate the robustness of the proposed approach in a variety of conditions.
@inproceedings{gopro,
  author    = {Joshi, Bharat and Xanthidis, Marios and Rahman, Sharmin and Rekleitis, Ioannis},
  title     = {High Definition, Inexpensive, Underwater Mapping},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2022},
  pages     = {1113-1121},
  doi       = {10.1109/ICRA46639.2022.9811695},
}
2021
- [ICRA Workshop] Towards Multi-Robot Shipwreck Mapping
  Marios Xanthidis, Bharat Joshi, Nare Karapetyan, and 8 more authors
  In Advanced Marine Robotics Technical Committee Workshop on Active Perception at IEEE International Conference on Robotics and Automation (ICRA), 2021
This paper introduces ongoing work on a novel methodology for cooperative mapping of an underwater structure by a team of robots, focusing on accurate photorealistic mapping of shipwrecks. Submerged vessels are history capsules, and they appear all over the world; as such, it is important to capture their state through visual sensors. Prior work in the literature addresses the problem with a single expensive robot or with robots of similar capabilities that loosely cooperate with each other. The proposed methodology utilizes vision as the primary sensor. Two types of robots, termed distal and proximal observers, with distinct roles, operate around the structure. The first type keeps a distance from the wreck, providing a bird's-eye view of the wreck in sync with the poses of the vehicles of the other type. The second type operates near the wreck, mapping the exterior of the vessel in detail. Preliminary results illustrate the potential of the proposed strategy.
@inproceedings{icra_workshop_2021,
  author    = {Xanthidis, Marios and Joshi, Bharat and Karapetyan, Nare and Roznere, Monika and Wang, Weihan and Johnson, James and {Quattrini Li}, Alberto and Casana, Jesse and Mordohai, Philippos and Nelakuditi, Srihari and Rekleitis, Ioannis},
  title     = {Towards Multi-Robot Shipwreck Mapping},
  booktitle = {Advanced Marine Robotics Technical Committee Workshop on Active Perception at IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2021},
}
2020
- [IROS] DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization
  Bharat Joshi, Md Modasshir, Travis Manderson, and 5 more authors
  In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
In this paper, we propose a real-time deep learning approach for determining the 6D relative pose of Autonomous Underwater Vehicles (AUV) from a single image. A team of autonomous robots localizing themselves in a communication-constrained underwater environment is essential for many applications such as underwater exploration, mapping, multi-robot convoying, and other multi-robot tasks. Due to the profound difficulty of collecting ground truth images with accurate 6D poses underwater, this work utilizes rendered images from the Unreal Game Engine simulation for training. An image-to-image translation network is employed to bridge the gap between the rendered and the real images producing synthetic images for training. The proposed method predicts the 6D pose of an AUV from a single image as 2D image keypoints representing 8 corners of the 3D model of the AUV, and then the 6D pose in the camera coordinates is determined using RANSAC-based PnP. Experimental results in real-world underwater environments (swimming pool and ocean) with different cameras demonstrate the robustness and accuracy of the proposed technique in terms of translation error and orientation error over the state-of-the-art methods. The code is publicly available.
@inproceedings{deepurl,
  author    = {Joshi, Bharat and Modasshir, Md and Manderson, Travis and Damron, Hunter and Xanthidis, Marios and {Quattrini Li}, Alberto and Rekleitis, Ioannis and Dudek, Gregory},
  title     = {DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2020},
  pages     = {1777-1784},
  doi       = {10.1109/IROS45743.2020.9341201},
}
2019
- [IROS] Experimental Comparison of Open Source Visual-Inertial-Based State Estimation Algorithms in the Underwater Domain
  Bharat Joshi, Brennan Cain, James Johnson, and 8 more authors
  In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019
A plethora of state estimation techniques have appeared in the last decade using visual data, and more recently with added inertial data. Datasets typically used for evaluation include indoor and urban environments, where supporting videos have shown impressive performance. However, such techniques have not been fully evaluated in challenging conditions, such as the marine domain. In this paper, we compare ten recent open-source packages to provide insights on their performance and guidelines on addressing current challenges. Specifically, we selected direct and indirect methods that fuse camera and Inertial Measurement Unit (IMU) data together. Experiments are conducted by testing all packages on datasets collected over the years with underwater robots in our laboratory. All the datasets are made available online.
@inproceedings{comppaper,
  author    = {Joshi, Bharat and Cain, Brennan and Johnson, James and Kalaitzakis, Michail and Rahman, Sharmin and Xanthidis, Marios and Hernandez, Alan and Karapetyan, Nare and {Quattrini Li}, Alberto and Vitzilaios, Nikolaos and Rekleitis, Ioannis},
  title     = {Experimental Comparison of Open Source Visual-Inertial-Based State Estimation Algorithms in the Underwater Domain},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2019},
}
2014
- [IOE] Modeling, Simulation and Implementation of Brushed DC Motor Speed Control Using Optical Incremental Encoder Feedback
  Bharat Joshi, Rakesh Shrestha, and Ramesh Chaudhary
  In IOE Graduate Conference, 2014
@inproceedings{motorcontrol,
  author    = {Joshi, Bharat and Shrestha, Rakesh and Chaudhary, Ramesh},
  title     = {Modeling, Simulation and Implementation of Brushed DC Motor Speed Control Using Optical Incremental Encoder Feedback},
  booktitle = {IOE Graduate Conference},
  year      = {2014},
}