Apr 26, 2013
Public Summary Month 4/2013
Our biggest event in the last two months was the preparation and realization of our exhibition at the Hannover Fair (08.04. - 12.04.2013). We presented our enhanced FlexIRob system as it was evaluated in our user study with workers at Harting. In particular, we presented the fast configuration of the Kuka LWR IV to a confined workspace and our Assisted Gravity Compensation mode, which allows the user to teach a trajectory while the robot assists in avoiding collisions with the environment. Our booth was part of the trade-show presentation of the Spitzencluster "Intelligente Technische Systeme OstWestfalenLippe (it's OWL)", where we represented the activities of Bielefeld University in this area, especially those of the CoR-Lab and the CITEC.
The fair was a big success. Our system ran well through the whole week, and we gathered a lot of positive feedback from the visitors of our booth. We thank Harting for sponsoring the nice obstacles for our presentation. Several members of local government visited our exhibition, including the mayor of Bielefeld, Pit Clausen; Svenja Schulze, the minister for research and education of NRW; state secretary Helmut Dockter; and district president Marianne Thomann-Stahl. The picture below shows Svenja Schulze and Helmut Dockter at our booth. On Friday, we had the opportunity to present our system to three groups of pupils who visited us in the context of an information day for pupils interested in engineering-oriented careers. As a deliverable documenting our dissemination activity at the Hannover Fair, I have attached a report from the local newspaper.
Feb 26, 2013
Public Summary Month 2/2013
In the last two months we have concentrated on the integration of a 3D vision sensor into the FlexIRob scenario and on the detection of dynamic obstacles in the workspace of the robot arm.
First, the sensor (here, a Kinect camera) and the robot arm are calibrated by using the end-effector position as a source of corresponding points between the two coordinate systems; from these correspondences, the transformation from the camera's coordinate system to the robot's coordinate system is computed using Singular Value Decomposition. By manually maneuvering the robot arm so that the end-effector is the point closest to the camera, detecting the end-effector in the 3D sensor data reduces to a simple nearest-point detection. The advantage of this approach is that it is independent of the appearance of the concrete end-effector.
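The calibration step above can be sketched as follows. This is our illustration, not the project's actual code: the function names are hypothetical, and we assume the standard least-squares solution (the Kabsch algorithm), which recovers the rigid camera-to-robot transformation from corresponding end-effector positions via SVD. The nearest-point heuristic for finding the end-effector in the point cloud is also shown.

```python
import numpy as np

def rigid_transform(cam_pts, rob_pts):
    """Estimate rotation R and translation t mapping camera points onto
    robot points (least squares, Kabsch algorithm via SVD).
    cam_pts, rob_pts: (N, 3) arrays of corresponding 3D points."""
    cam_c = cam_pts.mean(axis=0)
    rob_c = rob_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (cam_pts - cam_c).T @ (rob_pts - rob_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = rob_c - R @ cam_c
    return R, t

def detect_end_effector(cloud):
    """Nearest-point heuristic: after maneuvering the arm so the
    end-effector is closest to the camera, it is simply the 3D point
    with the smallest distance to the origin."""
    return cloud[np.argmin(np.linalg.norm(cloud, axis=1))]
```

Given the estimated `R` and `t`, any camera-frame point `p` maps to robot coordinates as `R @ p + t`.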
Second, the FlexIRob system removes the robot arm from the visual data by approximating the arm with four cylinders that closely envelope the segments of the arm and the end-effector, transforming these cylinders into the camera's coordinate system, and excluding all 3D points within these four cylinders from the raw scene.
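This self-filtering step amounts to a point-in-cylinder test per arm segment. A minimal sketch, with hypothetical function names (a cylinder is given by the two endpoints of a segment's axis and a radius):

```python
import numpy as np

def in_cylinder(points, p0, p1, radius):
    """Boolean mask: which points lie inside the cylinder whose axis
    runs from p0 to p1 (arm segment endpoints) with the given radius."""
    axis = p1 - p0
    length2 = axis @ axis
    # Project each point onto the segment axis, clamped to [0, 1]
    s = np.clip(((points - p0) @ axis) / length2, 0.0, 1.0)
    closest = p0 + s[:, None] * axis
    return np.linalg.norm(points - closest, axis=1) <= radius

def remove_robot_points(cloud, segments):
    """Drop all points falling inside any enveloping cylinder.
    segments: list of (p0, p1, radius) tuples, one per arm segment."""
    mask = np.zeros(len(cloud), dtype=bool)
    for p0, p1, r in segments:
        mask |= in_cylinder(cloud, p0, p1, r)
    return cloud[~mask]
```

In the system described above, the cylinder endpoints would come from the robot's forward kinematics, transformed into the camera frame with the calibration from the first step.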
The third component of our perception subsystem separates the scene into static obstacles, which are already encoded in the redundancy resolution of the robot arm, and dynamic obstacles, which appear spontaneously and should be avoided by an appropriate strategy during normal task completion. We realized this separation by integrating the Articulated Scene Model approach.
Last, the 3D points belonging to the dynamic layer are clustered into connected components and represented as 3D boxes.
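A minimal sketch of this last step, assuming simple Euclidean region growing (the actual clustering method is not specified above; function names and parameters are illustrative):

```python
import numpy as np
from collections import deque

def euclidean_clusters(points, radius=0.05, min_size=10):
    """Group 3D points into connected components: points closer than
    `radius` are connected (naive O(n^2) region growing). Returns the
    axis-aligned bounding box (min corner, max corner) per component
    with at least `min_size` points."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    cur = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = cur
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            near = np.where((labels == -1) &
                            (np.linalg.norm(points - points[i], axis=1) < radius))[0]
            labels[near] = cur
            queue.extend(near)
        cur += 1
    boxes = []
    for c in range(cur):
        cluster = points[labels == c]
        if len(cluster) >= min_size:
            boxes.append((cluster.min(axis=0), cluster.max(axis=0)))
    return boxes
```

For real point clouds a spatial index (e.g., a k-d tree) would replace the brute-force neighbor search, but the connectivity logic is the same.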
The intermediate results of all steps described above are depicted in the picture below.
Further, we are pleased to announce that our paper "Assisted Gravity Compensation to Cope with the Complexity of Kinesthetic Teaching on Redundant Robots" was accepted for publication at the International Conference on Robotics and Automation 2013. Sebastian Wrede will give a talk on 5 March at HRI 2013 in Tokyo on our journal publication "A User Study on Kinesthetic Teaching of Redundant Robots in Task and Configuration Space". Interested people can also visit us at our stand at the Hannover Messe (08.04.-12.04.2013).
Dec 21, 2012
Public Summary Month 12/2012
The activities of the last two months were twofold: we further consolidated our results from the user study at Harting and proceeded with the integration of perception into the FlexIRob setup.
Our submission "A User Study on Kinesthetic Teaching and Learning for Efficient Reconfiguration of Redundant Robot" is now finally accepted for publication in the Journal of Human Robot Interaction. With this publication we also get the opportunity to present our results at the HRI 2013 conference in Tokyo. To summarize the achieved results: with this publication we evaluated, with workers from our industrial partner, the existing FlexIRob setup and an enhanced version of it in which we introduced a new controller that supports the user in a teach-in task by keeping track of the task-independent environmental constraints learned in a previous configuration step.
As dissemination activities, we presented the results of the user study to the workers who participated in it. Further, the idea of the FlexIRob system and the conducted user study were introduced to Harting's customers through an article in the tec.News (issue 23, p. 16).
In addition to the above points, we have implemented in the last two months the perception chain which applies the Articulated Scene Model approach to co-worker scenarios. We have defined middleware-specific data formats (here, Robotics Service Bus and Robotics Systems Types) to provide the captured Kinect data to all components of our system (e.g., the simulation or the robot arm) and implemented the new components within the system framework. Currently, we are working on the integration of our dynamic obstacle avoidance concept on the FlexIRob demonstrator, the Kuka LWR IV, aiming at a first calibration of the robot arm and the camera(s) and, as a first test, the visualization of dynamic obstacles in the simulation of the robot arm and its environment.
Nov 4, 2012
Public Summary Month 11/2012
In the last two months our project activities were threefold:
- We wrote two papers. One targets the Human-Robot Interaction community and presents results from our user study with Harting workers. The other presents our newly developed controller, the Assisted Gravity Compensation mode, to the robotics community. Further details are given below.
- We have established our requirements and specifications for the perceptual components and have started to integrate some existing work.
- We have given quite a number of FlexIRob demos disseminating the FlexIRob ideas.
Dissemination Activities
In the last two months we demonstrated our FlexIRob system several times, mostly in the context of educational workshops. Two companies that held their internal workshops in the rooms of the university took the opportunity to get an impression of future methods in human-robot co-worker scenarios. Further, FlexIRob was part of the professional orientation course for students carried out by the teutolab.
Obstacle Avoidance Requirements and Specifications
In our FlexIRob system we will focus on detecting moving or movable obstacles. We will rely on the Articulated Scene Model approach, which we developed in previous projects. We selected the Articulated Scene Model because it distinguishes, without requiring object models, between static background and dynamic foreground, and because it fits the FlexIRob paradigm best.
Submitted publications
"Assisted Gravity Compensation to Cope with the Complexity of Kinesthetic Teaching on Redundant Robots"
- Current State: submitted to the 2013 IEEE International Conference on Robotics and Automation (ICRA 2013) on 17 September
- Content:
The scope of this paper is to propose a new interaction control concept for teaching redundant robots with kinesthetic teaching. We thereby focus on the technical aspect and explain our new Assisted Gravity Compensation mode in detail.
Efficient reconfiguration of advanced robot systems to new tasks or environments is an ongoing research challenge. While imitation learning methods as well as modeling and simulation tools are continuously improving to reduce the inherent complexity of reconfiguration, novel robot systems are entering the field which pose new challenges. The paper addresses one important challenge in this area, which is the programming of kinematically redundant robots.
An important result of our field study with 48 industrial workers at Harting is that standard programming-by-demonstration methods for teaching task-space trajectories on a redundant robot using physical human-robot interaction are too complex for non-expert human tutors, because of the difficulty of simultaneously considering the redundancy resolution and the actual task specification. We report that most of the participants using standard programming-by-demonstration methods were not able to follow a styrofoam parcours in a confined workspace accurately and without collisions.
We therefore propose a new interaction concept for redundant robot systems, Assisted Gravity Compensation, based on a hierarchical control scheme separating task-space programming from the redundancy resolution. We propose to split the entire task into the two aforementioned aspects of kinesthetic teaching, namely task-space control of the end-effector (world mode) and joint-space control (redundancy resolution), and to assist the user in the resolution of the redundancy so that he can focus on the actual kinesthetic teaching task. The idea is to allow free movement in task space by tracking the movement given by the user through kinesthetic teaching, while simultaneously controlling the joint space according to the constrained environment. For this interaction concept we identify three requirements: a compliant and redundant robot platform enabling close and intuitive human-robot interaction, a redundancy resolution module providing the user with a valid redundancy resolution of the inverse kinematics, and a hierarchical controller fusing the Cartesian task from kinesthetic teaching in task space with the joint configuration given by the redundancy resolution.
This interaction concept is implemented by extending our development platform FlexIRob, which utilizes the KUKA Lightweight Robot IV. The redundancy resolution is provided by a previously trained neural network component (taught kinesthetically) that encodes an inverse kinematics complying with the constraints imposed by the environment, thus avoiding collisions with the static scene. In addition, we use a hierarchical position controller based on ideas by Grupen and Huber (CBF controller).
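The separation of task-space tracking and redundancy resolution can be sketched with the standard task-priority (nullspace projection) scheme; this is a generic illustration of the idea, not the actual CBF controller used in the system, and the names are ours:

```python
import numpy as np

def hierarchical_velocity(J, x_dot_task, q_dot_posture):
    """Task-priority resolved-rate control for a redundant arm.
    The commanded end-effector velocity x_dot_task has priority; the
    posture velocity q_dot_posture (from the learned redundancy
    resolution) acts only in the nullspace of the task Jacobian J,
    so it cannot disturb the end-effector motion.
    J: (m, n) task Jacobian with m < n for a redundant arm."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # nullspace projector
    return J_pinv @ x_dot_task + N @ q_dot_posture
```

In the phase structure described above, `x_dot_task` would come from the user's physical guidance of the end-effector, while `q_dot_posture` would drive the joints toward the configuration suggested by the trained network.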
A short evaluation of the new Assisted Gravity Compensation control mode by means of the results of the field study shows that the complexity of a kinesthetic teaching task is reduced, as revealed by improved task performance and significantly fewer collisions. In addition, analysis of the questionnaire reveals an improved user experience. In particular, a highly significant effect on the simplicity of handling the robot during the teaching task is observable.
The new Assisted Gravity Compensation mode allows non-expert users to perform kinesthetic teaching tasks faster and with higher precision, making kinesthetic teaching an efficient programming-by-demonstration method for redundant robots.
"User Study on Kinesthetic Teaching of Redundant Robots in Task and Configuration Space"
- Current State: initially submitted to the Journal of Human-Robot Interaction - Special Issue: HRI System Studies on 31 August; revised version submitted on 29 October
- Content:
The focus of this paper is the Harting study on pHRI. We give a comprehensive motivation and description of the study on kinesthetic teaching of redundant robots and present a thorough report of the observations, results, and insights we gained.
The recent advent of compliant and kinematically redundant robots poses new research challenges for human-robot interaction. While these robots provide high flexibility for the realization of complex applications, the gained flexibility generates the need for additional modeling steps and the definition of criteria for redundancy resolution constraining the robot's movement generation. The explicit modeling of such criteria usually requires experts to adapt the robot's movement generation subsystem. A typical way of dealing with this configuration challenge is to utilize kinesthetic teaching and guide the robot to implicitly model the specific constraints in task and configuration space.
Current approaches to kinesthetic teaching do not reflect the standard technical approaches to redundancy control, which typically separate task space planning and constraint resolution in configuration space. Furthermore, dedicated interaction support for kinesthetic teaching of constraints in confined spaces has not yet been considered. That is, standard robot control as we also apply it in this contribution distinguishes task and configuration space in the redundancy resolution, but users have to deal with the complexity of simultaneously considering the redundancy resolution in configuration space and the specification of trajectories in task space.
Based on these considerations, we propose to align the interactive teaching process with this logical distinction between configuration and task space, which allows decomposing the overall adaptation process into two separate but consecutive phases. The first phase is the Configuration Phase, where kinesthetic teaching is used to implicitly model obstacles or task-independent constraints which are relevant for movement generation in the joint space of the robot. Here, constraint knowledge is implicitly transferred from the human tutor to the robot system through the application of a learning method, without explicit constraint modeling. The learned inverse kinematics mapping is embedded in a hierarchical controller, allowing for the execution of arbitrary motions in task space while respecting the learned constraints.
The second phase is the Programming Phase, where the human tutor kinesthetically teaches a specific task-space trajectory just by applying forces to the end-effector.
In this phase, the robot's joints are controlled by the hierarchical controller trained in the previous phase. This allows the users to program a specific task without having to consider the environment. We refer to this control mode as Assisted Gravity Compensation.
As the primary goal of this contribution, we evaluated in the Harting study whether kinesthetic teaching of constraints in configuration space is possible and whether a separation of task- and configuration-space interaction provides benefits compared to typical teach-in methods, which do not consider this separation. A secondary goal of the conducted user study was to assess whether this system concept, namely explicitly distinguishing between configuration- and task-space interaction, reduces the complexity of adapting a redundant robot to new environments and tasks.
In order to briefly summarize the results of the analysis of the study here, we report that
- concerning the general experience with our FlexIRob system, the physical interaction is easy and comfortable, not threatening to the users, and self-explanatory, in particular for people with good spatial vision abilities.
- concerning the configuration phase, our expectations in terms of
- the time required for reconfiguration,
- collision avoidance in joint space, and
- the task-space accuracy of the learned hierarchical controller
were not entirely confirmed. Nevertheless, the objective results show that the configuration of the redundancy resolution through kinesthetic teaching and the learning of the inverse kinematics mapping can be done in less than 2 minutes, and that at least half of the users could perform the kinesthetic teaching accurately enough that collisions with the environment were avoided.
Aug 30, 2012
Public Summary Month 8/2012
As part of our dissemination activities we organized the workshop “Technology Monitoring: Robotics today and tomorrow – Technology towards Assistive Automation”. It was part of the OWL MASCHINENBAU Academy. OWL MASCHINENBAU is an innovation network that aims to strengthen the economic and technological power of the regional production-technology industry. It is an association of local industry, SMEs, and research institutions of the East-Westphalia region. It aims to transfer research into industrial application, to initiate cooperative networks, and to organize advanced training.
Our workshop focused on challenges arising from the dramatic changes of technology in automation, and in particular robotics, especially with regard to close collaboration of humans and machines. This collaboration has great potential to improve the manufacturing work flow, strengthening global competitiveness and sustaining the local value chain. The workshop approached the topic “interactive robotics” from two sides: it introduced future technologies (redundancy, force control, innovative controls) enabling human-robot interaction, and it showed methods for the interactive configuration and safe operation of such systems. The goal of the workshop was to give an outlook on technology which might be introduced into the automation and production field in the next years and will influence future production systems. Questions concerning safety, acceptance, and the role of the Spitzencluster it’s OWL (Intelligente Technische Systeme OstWestfalenLippe) for the local area completed the workshop. The main presenters were Dr. P. Pfaff from KUKA Laboratories, Dr. M. Ruskowski from Carl Cloos Schweißtechnik, Dr. F. Röthling from Robert Bosch GmbH, Prof. Dr. A. Schneider from FH Bielefeld, and Dr. A. Swadzba and Prof. Dr. J. Steil from the Research Institute for Cognition and Robotics (CoR-Lab). The workshop ended with hands-on demonstrations in the CoR-Lab.