Visual Behaviors for Docking

Author:

Highlights:

Abstract:

This paper describes vision-based behaviors for docking operations in mobile robotics. Two different situations are presented: in the ego-docking, each robot is equipped with a camera and controls its own motion while docking to a surface, whereas in the eco-docking, the camera and all the necessary computational resources are placed in a single external docking station, which may serve several robots. In both situations, the goal consists of controlling both the orientation, aligning the camera optical axis with the surface normal, and the approach speed, slowing down during the maneuver. These goals are accomplished without any effort to perform 3D reconstruction of the environment or any need to calibrate the setup, in contrast with traditional approaches. Instead, we use image measurements directly to close the control loop of the mobile robot. In the approach we propose, the robot motion is driven directly by the first-order spatiotemporal image derivatives, which can be estimated robustly and fast. The docking system operates in real time, and its performance is robust in both the ego-docking and eco-docking paradigms. Experiments are described.
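To make the central idea concrete, the sketch below shows how first-order spatiotemporal image derivatives can be estimated from two consecutive frames and turned into a simple steering/speed command. This is an illustrative reconstruction of the general principle only, not the paper's actual control law: the function names, the left/right asymmetry cue for orientation, and the gain parameters `k_omega` and `k_v` are all assumptions introduced here for illustration.

```python
import numpy as np

def spatiotemporal_derivatives(prev, curr):
    """First-order space-time derivatives (Ix, Iy, It) from two
    consecutive grayscale frames, via simple finite differences."""
    Ix = np.gradient(curr, axis=1)   # horizontal spatial derivative
    Iy = np.gradient(curr, axis=0)   # vertical spatial derivative
    It = curr - prev                 # temporal derivative
    return Ix, Iy, It

def docking_control(prev, curr, k_omega=1.0, k_v=1.0):
    """Hypothetical control law (not the paper's): steer by the
    left/right asymmetry of temporal image change (an alignment cue)
    and reduce forward speed as overall image change grows (an
    approach cue). Returns (angular_velocity, forward_speed)."""
    Ix, Iy, It = spatiotemporal_derivatives(prev, curr)
    h, w = curr.shape
    left = np.abs(It[:, : w // 2]).mean()
    right = np.abs(It[:, w // 2 :]).mean()
    omega = k_omega * (right - left)       # rotate toward the less-changing side
    v = k_v / (1.0 + np.abs(It).mean())    # slow down as temporal change increases
    return omega, v
```

The key property mirrored here is that no 3D reconstruction or camera calibration is involved: the commands are functions of the image derivatives alone, which is what makes the behavior fast and robust to estimate.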

Keywords:

Review status: Available online 19 April 2002.

Paper link: https://doi.org/10.1006/cviu.1997.0528