Public Summary Month 7/2012

At the end of July we had our first factory run.

It served as a test for the larger run in October, which will be carried out in conjunction with the EU project TAPAS.


The setup was at a slightly smaller scale so as not to disturb production in case something went wrong: we used mock-ups for the feeders and for the magazine.

The experiment was carried out in the factory hall next to the production line.


The experiment extended the previously published video in two ways: 1) while the video shows only the training part, the new experiment also contains the programming part, in which the robot plays back the program; 2) the experiment was done under real conditions.


The experiment went very well on Sunday, but the programming part using gestures essentially failed on Monday.

So far we have observed two sources of error; a simple diagnostic sketch follows the list.

1) It seems that the Kinect camera we are using as the main input gets disturbed by some of the running machines.

2) Illumination from the roof might also have been a problem, in which case we might have to restrict robot programming to special locations; but this is not yet clear.
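
A cheap first check for both hypotheses is to log how many depth pixels the Kinect fails to measure in each frame: interference and strong glare typically show up as dropouts in the depth image. The sketch below is a minimal illustration of this idea, assuming depth frames arrive as numpy arrays in which 0 marks an invalid pixel; the threshold and the frame source are made up for illustration, not part of our actual setup.

    import numpy as np

    DROPOUT_THRESHOLD = 0.30  # illustrative: flag frames with >30% missing depth

    def invalid_depth_fraction(depth_frame):
        """Fraction of pixels the sensor could not measure (Kinect marks them 0)."""
        return float(np.count_nonzero(depth_frame == 0)) / depth_frame.size

    def check_frame(depth_frame, timestamp):
        """Log frames whose depth image has suspiciously many dropouts."""
        frac = invalid_depth_fraction(depth_frame)
        if frac > DROPOUT_THRESHOLD:
            print("t=%.3f: %.0f%% of depth pixels invalid - possible "
                  "interference or glare" % (timestamp, 100.0 * frac))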


We will further investigate this in August.

Tags: public summary  

Public Summary Month 5/2012

In the past reporting period we have mainly consolidated our demo in order to prepare for our large-scale experiment at Grundfos in October. In addition, we have started investigating the use of an iPad for skill and task programming.

Tags: public summary  

Public Summary Month 3/2012

In the last reporting period we have finished our first complete demo.

In this demo, which will be made available on YouTube, the user first specifies that the robot should perform a feeding task.

Then, the human specifies the necessary parameters through gestures: the human operator points at the SLCs (small load carriers) containing the parts for the feeding task; at this point, the robot actually grasps the corresponding SLC. Then the human shows the robot the feeder into which these parts should go. Finally, the robot empties the SLC into the right feeder.

Once a library of different task programs is provided, the robot can be used for different tasks without difficult re-programming.
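
To illustrate this idea, the sketch below shows one possible way such a task library could be organized, where parameters taught by gesture fill in a named task program that can later be played back; the class, field names, and example poses are hypothetical and not taken from the actual system.

    from dataclasses import dataclass

    @dataclass
    class FeedingTask:
        """One feeding task; parameters are filled in by pointing gestures.

        Field names are illustrative, not the system's real representation.
        """
        slc_pose: tuple = None     # pose of the SLC the operator pointed at
        feeder_pose: tuple = None  # pose of the feeder the operator showed

        def is_complete(self):
            return self.slc_pose is not None and self.feeder_pose is not None

    # Library of reusable task programs: after teaching the parameters by
    # gesture once, the same program can be replayed without re-programming.
    TASK_LIBRARY = {}

    def play_back(name):
        task = TASK_LIBRARY[name]
        assert task.is_complete(), "parameters must be taught by gesture first"
        # grasp the SLC, move to the feeder, empty the SLC (robot calls omitted)
        print("executing", name, "with", task)

    TASK_LIBRARY["feeding"] = FeedingTask(slc_pose=(1.0, 0.2, 0.0),
                                          feeder_pose=(0.4, -0.5, 0.1))
    play_back("feeding")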


Even though the present demo has so far been tested only in the lab, most of the system's components have already been tested at Grundfos in the real scenario. The only component not yet tested there is the Kinect-based human-robot interaction. Given that the Kinect is a well-tested Microsoft product, we do not expect any difficulties when finally testing the system on the shop floor.

Tags: public summary  

Public Summary Month 1/2012

In December we finished our demo: a human operator is able to tell the robot which object to take by simply pointing at it from some distance. A video of the demo is available on YouTube: http://youtu.be/H6kX494AsaE. For this demo we included a completely automatic calibration procedure that calibrates the robot arm with the camera. The camera perceives the human pointing gesture, the pointing direction is recognized, and the system identifies the object lying in that direction; the robot arm is then directed to that object in order to pinpoint the object the robot thinks the human pointed to.

In this reporting period we evaluated how well the pointing recognition works. The evaluation was done with 8 untrained individuals who pointed at three different boxes from 2 m, 3 m, and 4 m distance. One would expect the recognition rate to degrade with distance; in our measurements the degradation shows up clearly only at 4 m. The recognition rates were:

Distance   Recognition rate
2 m        0.8442
3 m        0.9535
4 m        0.6923
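
The summary does not detail how the pointed-at object is identified, but a common geometric approach is to cast a ray along the recognized pointing direction and select the known object closest to that ray. The following is a minimal sketch of that idea, assuming object positions and skeleton points are already available in a common frame; all names and numbers are illustrative, not the project's actual implementation.

    import numpy as np

    def select_pointed_object(hand, origin, objects):
        """Pick the object whose center lies closest to the pointing ray.

        hand, origin: 3D points from the skeleton tracker (e.g. the hand
            and the head or elbow), as numpy arrays; the ray runs from
            origin through hand.
        objects: dict mapping object names to 3D center positions.
        """
        direction = hand - origin
        direction = direction / np.linalg.norm(direction)

        def distance_to_ray(p):
            v = p - origin
            along = max(np.dot(v, direction), 0.0)   # only in front of the hand
            return np.linalg.norm(v - along * direction)

        return min(objects, key=lambda name: distance_to_ray(objects[name]))

    # Hypothetical example: three boxes on a table, operator points at box_b.
    boxes = {"box_a": np.array([1.0, -0.5, 0.0]),
             "box_b": np.array([1.2,  0.1, 0.0]),
             "box_c": np.array([0.9,  0.6, 0.0])}
    print(select_pointed_object(np.array([0.4, 0.05, 1.0]),
                                np.array([0.0, 0.0, 1.4]), boxes))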


In this demo, we used the hand and arm for pointing. This works well if the distance between the robot and the human is large enough. In the next reporting period we will report experimental results in which we use only the hand for pointing. This is useful when the robot is very close to the human.

Tags: public summary  

Public Summary Month 11/2011

Ad-hoc pointing detection and parameter extraction methods have been implemented in ROS, running in real time and online. We implemented an ad-hoc version first in order to focus on implementation issues; the ad-hoc pointing recognition will be replaced by a parametric HMM within the next reporting period.
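
The summary does not describe the ad-hoc method itself. A typical heuristic of this kind, sketched below purely as an assumption, declares a pointing gesture once the hand is extended away from the torso and held roughly still for a few frames; all thresholds are made up for illustration.

    import numpy as np

    # Illustrative thresholds - not the project's actual values.
    EXTENSION_MIN = 0.45   # meters: hand must be this far from the torso
    MOTION_MAX    = 0.03   # meters/frame: hand must be roughly still
    HOLD_FRAMES   = 10     # gesture must be held this many frames

    class AdHocPointingDetector:
        """Minimal rule-based pointing detector over skeleton frames."""

        def __init__(self):
            self.prev_hand = None
            self.still_count = 0

        def update(self, torso, hand):
            """Feed one skeleton frame (3D torso and hand positions).

            Returns True once the hand has been extended and still long
            enough to count as a deliberate pointing gesture.
            """
            extended = np.linalg.norm(hand - torso) > EXTENSION_MIN
            still = (self.prev_hand is not None and
                     np.linalg.norm(hand - self.prev_hand) < MOTION_MAX)
            self.prev_hand = hand.copy()
            self.still_count = self.still_count + 1 if (extended and still) else 0
            return self.still_count >= HOLD_FRAMES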

Steps during the next reporting period:
The next upcoming event is the deliverable in M10, where we will demonstrate the first integrated version of our system, able to track and recognize human pointing gestures. For this, it is necessary 1) to use the Kinect camera to track the human (the head and hand in particular), 2) to identify the pointing gesture and where the human is pointing, and 3) to calibrate the robot arm with the Kinect camera so that the robot can move its end-effector to the object at which the human was pointing; a sketch of one standard calibration method follows below. In the demo deliverable, the robot will be able to touch the object at which the human was pointing.
The M10 deliverable will be a first demo running on the Little Helper Plus. It will be built directly on the present setup of our robot.
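
The camera-robot calibration step amounts to finding the rigid transform between the Kinect frame and the robot base frame. The summary does not say how our automatic procedure works; a standard approach, sketched below under that assumption, is to collect corresponding 3D points (e.g. a marker on the end-effector seen by the camera while the robot reports the same positions in its base frame) and solve for the transform in closed form with the SVD-based Kabsch/Horn method.

    import numpy as np

    def fit_rigid_transform(cam_pts, robot_pts):
        """Least-squares rigid transform taking camera points to robot points.

        cam_pts, robot_pts: (N, 3) arrays of corresponding 3D positions.
        Returns (R, t) such that robot_pt ~= R @ cam_pt + t.
        """
        cam_c, rob_c = cam_pts.mean(axis=0), robot_pts.mean(axis=0)
        H = (cam_pts - cam_c).T @ (robot_pts - rob_c)   # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = rob_c - R @ cam_c
        return R, t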
Tags: public summary