Integration of 3D vision-based structure estimation and visual robot control
Enabling robot manipulators to manipulate and/or recognise arbitrarily placed 3D objects under sensory control is one of the key issues in robotics. Such robot sensors should be capable of providing 3D information about objects in order to accomplish these tasks, and should also provide the means for multisensor or multimeasurement integration. Finally, such 3D information should be used efficiently to perform the desired tasks. This work develops a novel computational framework for solving some of these problems. A vision (camera) sensor is used in conjunction with a robot manipulator, in the framework of active vision, to estimate the 3D structure (3D geometrical model) of a class of objects. This information is then used for visual robot control, in the framework of model-based vision.

The first part of this dissertation is devoted to system calibration. Camera and eye/hand calibration procedures are presented. Several contributions are introduced in this part, intended to improve existing calibration procedures, resulting in more efficient and accurate calibrations. Experimental results are presented.

The second part of this work is devoted to methods of image processing and image representation. Methods for extracting and representing the image features that constitute the vision-based measurements are given.

The third part of this dissertation is devoted to 3D geometrical model reconstruction for a class of objects (polyhedral objects). A new technique for 3D model reconstruction from an image sequence is introduced. The algorithm estimates a 3D model of an object in terms of 3D straight-line segments (a wire-frame model) by integrating pertinent information over the image sequence, which is obtained from a moving camera mounted on a robot arm. Experimental results are presented.

The fourth part of this dissertation is devoted to robot visual control. A new visual control strategy is introduced. In particular, the homogeneous transformation matrix that the robot gripper requires in order to grasp an arbitrarily placed 3D object is estimated. This problem is posed as a problem of 3D displacement (motion) estimation between the reference model of an object and the actual model of the object. The basic algorithm is further extended to handle multiple object manipulation and recognition. Experimental results are presented.
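
To illustrate the displacement-estimation step, the following is a minimal sketch assuming that corresponding 3D points (for example, wire-frame vertices) of the reference model and the actual model are already available. It uses the standard closed-form SVD solution for rigid registration; the function name and the synthetic example data are illustrative assumptions, not taken from the dissertation, and the dissertation's own estimation method may differ.

import numpy as np

def estimate_rigid_transform(ref_pts, obs_pts):
    """Estimate the rigid displacement mapping reference-model points onto
    observed-model points (closed-form SVD solution).

    ref_pts, obs_pts: (N, 3) arrays of corresponding 3D points,
    e.g. wire-frame vertices of the reference and actual object models.
    Returns a 4x4 homogeneous transformation matrix.
    """
    ref_c = ref_pts.mean(axis=0)
    obs_c = obs_pts.mean(axis=0)

    # Cross-covariance of the centred point sets.
    H = (ref_pts - ref_c).T @ (obs_pts - obs_c)

    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Guard against a reflection (det(R) = -1).
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = obs_c - R @ ref_c

    # Assemble the homogeneous transformation for the gripper command.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

if __name__ == "__main__":
    # Synthetic check: recover a known rotation/translation from noiseless points.
    rng = np.random.default_rng(0)
    ref = rng.uniform(-1.0, 1.0, size=(8, 3))       # reference wire-frame vertices
    angle = np.deg2rad(30.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([0.2, -0.1, 0.5])
    obs = ref @ R_true.T + t_true                    # actual (displaced) model
    print(estimate_rigid_transform(ref, obs))

In practice the correspondences would come from matching the reconstructed wire-frame model against the reference model, and the estimate would typically be refined against noisy measurements; the closed-form solution above only shows the basic form of the displacement computation.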