Public Summary Month 12/2012

In November and December the focus lay on tasks 4 and 5 for the IPR, in which gesture and action recognition and path planning were to be incorporated into the framework. For task 4, the conceptual design of the gesture-based interaction was determined and two gesture classes were identified. In task 5, in particular, the path planning module was developed and integrated into the experiment framework.
Project partner Reis Robotics reviewed the four defined experiments with regard to safety issues and carried out a risk analysis involving a Reis safety expert. KUKA worked out an alternative scenario for experiment 5, involving gesture-based interaction with their mobile platform.


Public Summary Month 10/2012

In September and October 2012, the work focused on gesture and action recognition. So far, it is based on Description Logics (DLs) and uses a taxonomy of actions, activities and gestures as its knowledge base. Recognition results are inferred directly from assertional knowledge that states information about the human kinematics and the robot state. Moreover, preliminary linking concepts have been included in the knowledge base in order to allow stochastic methods for action and gesture recognition to be incorporated.
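The taxonomy-plus-assertions mechanism described above can be illustrated with a minimal sketch. This is not the project's actual DL knowledge base or reasoner; the concept names ("PointingGesture", "StopGesture") and the kinematic facts are illustrative assumptions, and full DL inference is reduced here to hierarchy lookup and simple rules.

```python
# TBox-like taxonomy: child concept -> parent concept.
# All concept names are illustrative, not from the project's knowledge base.
TAXONOMY = {
    "PointingGesture": "Gesture",
    "StopGesture": "Gesture",
    "PickAction": "Action",
    "Gesture": "Activity",
    "Action": "Activity",
}

def subsumes(general, specific):
    """Return True if `general` equals `specific` or is one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = TAXONOMY.get(specific)
    return False

# ABox-like assertions: observed facts about the human kinematics.
assertions = {"arm_extended": True, "index_finger_extended": True}

def classify(facts):
    """Infer the most specific gesture concept from the asserted facts
    (a stand-in for DL inference, using hand-written rules)."""
    if facts.get("arm_extended") and facts.get("index_finger_extended"):
        return "PointingGesture"
    if facts.get("palm_open"):
        return "StopGesture"
    return None

concept = classify(assertions)
```

The "linking concepts" mentioned in the summary would sit between such crisp taxonomy nodes and the outputs of stochastic recognizers, e.g. by attaching class probabilities to the inferred concepts.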


Public Summary Month 8/2012

In July and August the work focused on the completion of tasks 2 and 3. At the end of August a meeting with our partners KUKA and Reis was held at the IPR[1], where the results of tasks 2 and 3 were presented. Their role in the upcoming tasks 4 and 5 was also discussed and defined. For the completion of tasks 2 and 3, the still missing swept-volume modelling and, based on it, the distance calculation were designed and implemented. A final system setup was also created as a basis for the following tasks.
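One common way to realize swept-volume modelling with a distance calculation on top, sketched here under stated assumptions: the moving robot link is approximated by sample points along its planned motion, each inflated to a sphere, and the separation to an obstacle point is the minimum sample distance minus the inflation radius. The sampling scheme, radius, and coordinates are illustrative; the project's actual swept models are not specified in this summary.

```python
import math

def sweep_samples(start, end, n=10):
    """Linearly interpolate n sample points between two 3D positions,
    approximating the volume swept by a point moving from start to end."""
    return [tuple(s + (e - s) * i / (n - 1) for s, e in zip(start, end))
            for i in range(n)]

def min_distance(swept, point, radius=0.05):
    """Minimum clearance from a point to the inflated swept samples
    (each sample treated as a sphere of the given radius, in metres)."""
    d = min(math.dist(p, point) for p in swept)
    return max(d - radius, 0.0)

# Illustrative motion along the x-axis and a nearby obstacle point.
swept = sweep_samples((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
dist = min_distance(swept, (0.5, 0.3, 0.0))
```

In practice the clearance value would feed the safety monitoring of the experiments, e.g. slowing or stopping the robot below a threshold.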



[1] Institute for Process Control and Robotics


Public Summary Month 6/2012

In May and June the work focused on the training of the classifier for body part detection, and on building up the multi-camera setup, including the implementation of the calibration and registration step. Unfortunately, the training of the classifier did not deliver the expected results, so the existing approach will be used for the ICP tracking. After the installation of the sensors in the experiment setup, and the implementation of the calibration and registration step, the sensor system now consists of three Kinect sensors and one time-of-flight sensor.


Public Summary Month 4/2012

In March and April, the work focused on the important task of human pose recognition, based on an ICP[1] approach, and on the environment modeling, done with the OpenGL framework. To improve the results of the ICP approach, a second method[2] will be incorporated into the estimation process. To this end, a rendering pipeline has been implemented to produce synthetic data, which will be used to train a classifier.
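The iterative structure of ICP can be sketched in a deliberately reduced form: in 2D and with a translation-only update, each model point is matched to its nearest scan point and the model is shifted by the mean offset, repeated until convergence. The project's actual ICP runs on 3D depth data with full rigid-body updates; the point sets below are illustrative.

```python
import math

def nearest(p, cloud):
    """Nearest-neighbour correspondence for one point."""
    return min(cloud, key=lambda q: math.dist(p, q))

def icp_translation(model, scan, iters=20):
    """Translation-only 2D ICP: alternate nearest-neighbour matching
    and a mean-offset update of the estimated translation (tx, ty)."""
    tx, ty = 0.0, 0.0
    for _ in range(iters):
        pairs = [(p, nearest((p[0] + tx, p[1] + ty), scan)) for p in model]
        dx = sum(q[0] - (p[0] + tx) for p, q in pairs) / len(pairs)
        dy = sum(q[1] - (p[1] + ty) for p, q in pairs) / len(pairs)
        tx, ty = tx + dx, ty + dy
    return tx, ty

model = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
scan = [(x + 0.3, y - 0.2) for x, y in model]  # model shifted by (0.3, -0.2)
tx, ty = icp_translation(model, scan)
```

The second method[2] complements such tracking with per-pixel body part classification on depth images, which is why the synthetic rendering pipeline is needed as training data.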



[1] Iterative Closest Point

[2] J. Shotton, A. Fitzgibbon, M. Cook, et al.: Real-time human pose recognition in parts from single depth images. CVPR (2011), 1297–1304