Learning to Grasp
Grasping is an essential skill for interacting with the environment, for us humans as well as in robotics, and, known as bin picking, an important industrial application. However, today's solutions are usually tailored to a specific object and are therefore not flexible or robust enough for applications in industrial or service robotics. Instead, our robot learns to grasp objects out of a bin on its own and from data. The key scientific question is how general knowledge about physics or previously learned tasks can be used to reduce the required training time. The long-term goal is a grasping controller for arbitrary and, in particular, unknown objects with industrial reliability.
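As a minimal illustration of how knowledge from previously learned tasks can reduce the required training time, the sketch below (with a hypothetical network, checkpoint and data set, not the project's actual pipeline) fine-tunes only the head of a grasp-success classifier whose backbone was pretrained on a related set of objects:

```python
import torch
import torch.nn as nn

# Hypothetical grasp-success predictor: maps a grasp feature vector
# (e.g. grasp pose plus local depth features) to a success logit.
class GraspSuccessNet(nn.Module):
    def __init__(self, feature_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, 1)  # logit of grasp success

    def forward(self, x):
        return self.head(self.backbone(x))

model = GraspSuccessNet()
# Reuse knowledge from a previously learned grasping task: load pretrained
# backbone weights (assumed checkpoint) and fine-tune only the output head
# on a small number of grasp attempts with the new objects.
# model.backbone.load_state_dict(torch.load("pretrained_backbone.pt"))
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Toy stand-in for a small set of labeled real-world grasp attempts.
features = torch.randn(256, 64)
labels = (torch.rand(256, 1) > 0.5).float()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
```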
Self-Adapting Reinforcement Learning Policies for Sim-to-Real Transfer
Reinforcement learning algorithms can be used to automatically teach robots how to solve complex tasks such as bin picking. However, these algorithms come with a high sample complexity, which makes training in the real world prohibitively expensive. Policies are therefore usually trained in simulation first and then transferred to real-world robots. One challenge of this sim-to-real transfer is that real-world physics is hard to simulate accurately. In this work, we explore how to train policies in simulation such that, during their execution on a real robot, they incrementally adapt themselves to the physical parameters of the environment. In this way, we circumvent the need for an accurate physics simulation.
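One possible way to realize such self-adaptation, sketched here under our own simplifying assumptions rather than as the specific method of this work, is to condition the policy on an estimate of the unknown physical parameters and to refine that estimate online from the transitions observed on the real robot:

```python
import numpy as np

# Toy "real-world" dynamics with an unknown friction coefficient c_true.
# In simulation the policy would be trained for many values of c; at
# deployment it is conditioned on an online estimate c_hat.
dt, c_true = 0.05, 0.8
rng = np.random.default_rng(0)

def real_step(pos, vel, acc_cmd):
    vel_next = vel + dt * (acc_cmd - c_true * vel) + 0.001 * rng.standard_normal()
    return pos + dt * vel_next, vel_next

def policy(pos, vel, target, c_hat):
    # Parameter-conditioned policy: PD control plus friction compensation
    # based on the current estimate of the unknown parameter.
    return 4.0 * (target - pos) - 2.0 * vel + c_hat * vel

# Least-squares estimate of c from observed transitions:
# the residual (vel + dt*acc - vel_next) is approximately dt * c * vel.
num, den, c_hat = 0.0, 1e-6, 0.0
pos, vel, target = 0.0, 0.0, 1.0

for t in range(200):
    acc = policy(pos, vel, target, c_hat)
    pos_next, vel_next = real_step(pos, vel, acc)
    residual = vel + dt * acc - vel_next
    num += residual * dt * vel
    den += (dt * vel) ** 2
    c_hat = num / den                 # incrementally adapted parameter estimate
    pos, vel = pos_next, vel_next

print(f"estimated friction {c_hat:.3f} vs. true {c_true:.3f}")
```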
Domain Adaptation and Transfer Learning
The focus of this research area is on accelerating learning with industrial robots through an intelligent combination of simulated and real data. More specifically, reinforcement learning is applied to optimize robot trajectories subject to task-specific and dynamic constraints. This includes learning policies that remain robust despite the inevitable gap between simulated and real data, as well as safe exploration with real robots. The goal is to reduce robot cycle times and to enable skill transfer, which is essential for flexible manufacturing systems.
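A common technique for obtaining policies that stay robust despite the gap between simulated and real data is domain randomization. The sketch below (with invented dynamics and parameter ranges, not our actual setup) resamples the simulator's physical parameters for every episode and optimizes a controller for its average performance over the whole range:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.05

def sample_dynamics():
    # Resample uncertain physical parameters for every training episode
    # (illustrative ranges, not identified from a real robot).
    return {"mass": rng.uniform(0.5, 2.0), "friction": rng.uniform(0.1, 1.0)}

def episode_return(gains, dyn, steps=100):
    kp, kd = gains
    pos, vel, target, ret = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        force = kp * (target - pos) - kd * vel
        acc = (force - dyn["friction"] * vel) / dyn["mass"]
        vel += dt * acc
        pos += dt * vel
        ret -= (target - pos) ** 2 + 0.01 * force ** 2   # tracking + effort cost
    return ret

def robust_return(gains, n_episodes=20):
    # Average performance over randomized dynamics is the robustness objective.
    return np.mean([episode_return(gains, sample_dynamics()) for _ in range(n_episodes)])

# Simple random search over controller gains as a stand-in for an RL algorithm.
best_gains, best_ret = None, -np.inf
for _ in range(200):
    gains = rng.uniform([1.0, 0.1], [20.0, 5.0])
    ret = robust_return(gains)
    if ret > best_ret:
        best_gains, best_ret = gains, ret

print("robust gains:", best_gains, "average return:", best_ret)
```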
Robot Learning
The research area "Robot Learning" investigates different machine-learning problems in the domain of robotics. This includes reinforcement learning
- of motion trajectories
- of object manipulation policies for robots and machine tools
- of complex tasks, sequentially or concurrently composed of motion trajectories
Across all aforementioned applications, special focus is laid on sim-to-real transfer.
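As a toy illustration of the third item, composing complex tasks from motion trajectories, the following sketch (with invented joint configurations and skill names) sequences a few parameterized motion primitives into a pick-and-place trajectory; in practice, each primitive could be the output of a learned policy:

```python
import numpy as np

# Minimal motion primitive: a straight-line joint-space trajectory between
# two configurations, sampled at a fixed rate.
def linear_primitive(q_start, q_goal, duration, dt=0.01):
    steps = int(duration / dt)
    return [q_start + (q_goal - q_start) * (i / steps) for i in range(steps + 1)]

def sequence(*trajectories):
    # Sequential composition: concatenate primitives, dropping the duplicated
    # waypoint at each junction so the composed trajectory stays continuous.
    composed = list(trajectories[0])
    for traj in trajectories[1:]:
        composed.extend(traj[1:])
    return composed

# Invented joint configurations for a 6-DoF arm (radians).
q_home  = np.zeros(6)
q_pre   = np.array([0.3, -0.5, 0.8, 0.0, 0.6, 0.0])
q_grasp = np.array([0.3, -0.7, 1.0, 0.0, 0.8, 0.0])
q_place = np.array([-0.6, -0.4, 0.7, 0.0, 0.5, 1.2])

pick_and_place = sequence(
    linear_primitive(q_home, q_pre, duration=2.0),
    linear_primitive(q_pre, q_grasp, duration=1.0),
    linear_primitive(q_grasp, q_place, duration=3.0),
    linear_primitive(q_place, q_home, duration=2.0),
)
print(f"composed trajectory with {len(pick_and_place)} waypoints")
```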
| Name | Title | Phone | E-mail |
|---|---|---|---|
| Jonas Kiemel | M. Sc. | +49 721 608-44049 | jonas kiemel ∂ kit edu |
| Alexander Cebulla | M. Sc. | +49 721 608-47121 | alexander cebulla ∂ kit edu |