Public Summary Month 5/2012

We conducted experiments in which the robot must actively perceive partially occluded objects. The results showed that considering object detection results when planning the next view helps to find more objects.

We integrated the components developed in ActReMa into a bin-picking application, performed by the cognitive service robot Cosero at UBO. The robot navigates to the transport box, aligns to it, acquires 3D scans, recognizes objects, plans grasps, executes the grasp, navigates to a processing station, and places the object there.


Public Summary Month 3/2012

For learning object models, we made initial scan alignment more robust by adapting the point-pair feature object detection and pose estimation method of Papazov et al. (ACCV 2010): in the RANSAC step, we only allow transformations that are close to the expected ones.
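The pose-constraint idea can be illustrated with a minimal sketch: a candidate rigid transformation from the RANSAC loop is accepted only if its translation and rotation lie close to an expected pose. All names and thresholds here are illustrative, not the values used in ActReMa.

```python
import math

def rotation_angle(R):
    """Rotation angle of a 3x3 rotation matrix, from its trace."""
    tr = R[0][0] + R[1][1] + R[2][2]
    c = max(-1.0, min(1.0, (tr - 1.0) / 2.0))  # clamp for numerical safety
    return math.acos(c)

def is_near_expected(t, R_rel, t_exp, max_trans=0.05, max_rot=math.radians(15)):
    """Accept a candidate pose only if it is close to the expected one.

    t, t_exp: candidate and expected translations (3-element lists, meters)
    R_rel:    rotation of the candidate relative to the expected pose
    """
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(t, t_exp)))
    return d <= max_trans and rotation_angle(R_rel) <= max_rot

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(is_near_expected([0.01, 0.0, 0.02], I, [0.0, 0.0, 0.0]))  # True
```

Candidates failing this check are discarded before the costly verification stage, which is what makes the initial alignment more robust against spurious matches.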


For active object perception, we extended the simulation to include the complete experiment setup, adapted the object recognition method to identify regions of interest, and integrated the planning of the next best view (NBV).
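A next-best-view planner of this kind can be sketched as a utility maximization: each candidate view is scored by how many not-yet-observed region-of-interest cells it would reveal, minus a travel-cost penalty. The data layout and the weight are illustrative assumptions, not the experiment's actual formulation.

```python
def next_best_view(candidates, observed):
    """Pick the view that reveals the most unobserved ROI cells.

    candidates: list of (view_id, visible_cells:set, travel_cost:float)
    observed:   set of cells already seen from previous views
    """
    def utility(view):
        _, visible, cost = view
        new_info = len(visible - observed)  # expected information gain
        return new_info - 0.5 * cost        # toy cost weight

    return max(candidates, key=utility)[0]

views = [
    ("left",  {1, 2, 3},    1.0),
    ("right", {3, 4, 5, 6}, 2.0),
]
print(next_best_view(views, observed={1, 2}))  # right
```

In simulation, the `observed` set would be updated after each scan and the planner re-run until the regions of interest are sufficiently covered.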



Public Summary Month 11/2011

The Metronom 3D sensor has been mounted on the mobile robot Dynamaid. Its measurements are more precise and less noisy than those of the Kinect.

The experiment partners started work on the learning of object models from examples and active object recognition.
For the learning of object models, scans from different views are registered and unconstrained detection of geometric primitives is performed (Fig. 1).
For active object recognition, we also registered depth measurements from different views (Fig. 2). This reduces occlusion effects and facilitates recognition.
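Merging registered views reduces occlusion because surfaces hidden from one viewpoint are seen from another. A minimal sketch of the merging step, assuming the view poses are already known from registration (a real pipeline would refine them, e.g. with ICP):

```python
def transform(points, R, t):
    """Apply the rigid transform x' = R x + t to a list of 3D points."""
    return [
        tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))
        for p in points
    ]

def merge_views(views):
    """Bring each view's points into a common frame and concatenate them.

    views: list of (points, R, t), with (R, t) the known pose of each view.
    """
    merged = []
    for points, R, t in views:
        merged.extend(transform(points, R, t))
    return merged

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
cloud = merge_views([
    ([(0.0, 0.0, 0.0)], I, [0.0, 0.0, 0.0]),
    ([(0.0, 0.0, 0.0)], I, [1.0, 0.0, 0.0]),  # second view shifted by 1 m
])
print(len(cloud))  # 2
```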





Public Summary Month 9/2011

The experiment partners continued work on object recognition, grasp selection, and motion planning for picking objects out of a transport box.

The primitive-based object recognition has been accelerated and made more robust. 2D contours are now also considered for recognition. The most visible object is selected.
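Selecting the most visible object can be sketched as ranking detections by their visible fraction, i.e. how many model points were actually observed. Field names here are illustrative assumptions:

```python
def select_most_visible(detections):
    """Among recognized objects, pick the one with the largest visible
    fraction (observed surface points / total model points)."""
    def visibility(d):
        return d["visible_points"] / d["model_points"]

    return max(detections, key=visibility)["name"]

dets = [
    {"name": "part_a", "visible_points": 120, "model_points": 400},
    {"name": "part_b", "visible_points": 300, "model_points": 400},
]
print(select_most_visible(dets))  # part_b
```

Picking the least-occluded object first is a sensible heuristic for bin picking, since its pose estimate and grasp candidates are the most reliable.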

Grasps are sampled, checked for collisions, and ranked offline. In the current scene, we check reachability with our robot arm, check for collisions with the transport box and other objects, and plan reaching motions.
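The online part of this pipeline can be sketched as filtering the precomputed grasp set with scene-specific checks and returning the survivors in rank order. The predicates stand in for the real collision and kinematics checks, and all names are illustrative:

```python
def rank_grasps(grasps, reachable, in_collision):
    """Filter sampled grasps by reachability and collision checks, then
    rank the feasible ones by their precomputed quality score.

    grasps:       list of dicts with a precomputed "score"
    reachable:    predicate standing in for the arm's kinematics check
    in_collision: predicate standing in for the scene collision check
    """
    feasible = [g for g in grasps if reachable(g) and not in_collision(g)]
    return sorted(feasible, key=lambda g: g["score"], reverse=True)

grasps = [
    {"id": 0, "score": 0.9, "x": 0.8},  # too close to the box wall
    {"id": 1, "score": 0.7, "x": 0.2},
    {"id": 2, "score": 0.5, "x": 0.1},
]
ranked = rank_grasps(
    grasps,
    reachable=lambda g: True,
    in_collision=lambda g: g["x"] > 0.5,  # toy box-wall test
)
print([g["id"] for g in ranked])  # [1, 2]
```

Motion planning then only needs to be attempted for the top-ranked grasps, which keeps the online phase fast.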

The components have been integrated in simulation as well as for the real robot Cosero.



Public Summary Month 7/2011

In the ActReMa experiment, a robot delivers parts to a processing station. The robot is equipped with a 3D scanning sensor. It must recognize objects in a box and grasp them.

The experiment partners started their work according to the plan. The sensor placement has been decided for two scenarios: a mobile robot and a stationary robot. Objects are detected by fitting shape primitives. The robot has been modeled for the simulation of grasp and motion planning. Objects can now be grasped flexibly from a table.