Omnidirectional Vision

A Flexible Framework for Virtual Omnidirectional Vision to Improve Operator Situation Awareness

Authors: Martin Oehler and Oskar von Stryk
Affiliation: Technical University of Darmstadt

Abstract

During teleoperation of a mobile robot, providing good operator situation awareness is a major concern, as a single mistake can lead to mission failure. Camera streams are widely used for teleoperation but offer only a limited field of view. In this paper, we present a flexible framework for virtual projections to increase situation awareness, based on a novel method to fuse multiple cameras mounted anywhere on the robot. Moreover, we propose a complementary approach to improve scene understanding by fusing camera images with geometric 3D Lidar data to obtain a colorized point cloud. The implementation on a compact omnidirectional camera reduces system complexity considerably and solves multiple use cases with a much smaller footprint than traditional approaches such as actuated pan-tilt units. Finally, we demonstrate the generality of the approach by applying it to the multi-camera system of the Boston Dynamics Spot.
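
The camera-Lidar fusion described in the abstract boils down to projecting each Lidar point into a calibrated camera image and sampling the pixel color at the projected location. Below is a minimal sketch of that geometry in Python, assuming a simple pinhole camera model; the function name, interfaces, and calibration inputs are illustrative assumptions, not the package's actual API.

# Minimal sketch of camera-Lidar fusion for point cloud colorization.
# Illustrative only: the pinhole model and all names below are assumptions,
# not the omnidirectional_vision API.
import numpy as np

def colorize_point_cloud(points, image, K, T_cam_lidar):
    """Assign an RGB color to each Lidar point visible in the camera image.

    points       -- (N, 3) points in the Lidar frame
    image        -- (H, W, 3) RGB image
    K            -- (3, 3) pinhole intrinsic matrix
    T_cam_lidar  -- (4, 4) homogeneous transform from Lidar to camera frame
    """
    # Transform points into the camera frame.
    points_h = np.hstack([points, np.ones((points.shape[0], 1))])
    points_cam = (T_cam_lidar @ points_h.T).T[:, :3]

    # Keep only points in front of the camera.
    in_front = points_cam[:, 2] > 0.0
    points_cam = points_cam[in_front]

    # Pinhole projection: normalize by depth, then apply the intrinsics.
    uv = (K @ (points_cam / points_cam[:, 2:3]).T).T[:, :2]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    # Discard projections that fall outside the image bounds.
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    colors = image[v[valid], u[valid]]          # (M, 3) RGB per point
    colored_points = points[in_front][valid]    # matching Lidar points
    return colored_points, colors

In practice, the Lidar-to-camera transform would come from the robot's extrinsic calibration, and the same projection extends to multiple cameras by choosing, for each point, a camera that actually observes it.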

Source Code

The source code is hosted as open source on GitHub:
https://github.com/tu-darmstadt-ros-pkg/omnidirectional_vision

License: MIT

Citation

Please cite our paper if you use this software in your scientific publications:

@INPROCEEDINGS{oehler2021flexible,
  author={Oehler, Martin and von Stryk, Oskar},
  booktitle={2021 European Conference on Mobile Robots (ECMR)}, 
  title={A Flexible Framework for Virtual Omnidirectional Vision to Improve Operator Situation Awareness}, 
  year={2021}
}