Talks

Renaud Detry

Presentation

Efficient and Uncertainty-Aware Learning for Robot Manipulation: Insights Applicable to Deformable Objects
This presentation explores key advancements in robot manipulation that promise significant impact on the manipulation of deformable objects.

First, we discuss diffusion policy learning with denoising diffusion probabilistic models (DDPMs), a method already proven effective with deformable objects. Our work addresses the high computational cost of DDPMs by presenting a novel imitation learning approach that leverages an asymmetry between robot and image diffusion. This technique reduces the training time for multi-task policies by 95% and memory usage by 93%. This efficiency is crucial: it makes it practical to learn the high-dimensional, complex action spaces typical of deformable object manipulation.
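For readers unfamiliar with diffusion policies, the sketch below illustrates the kind of sampling loop such a policy runs at inference time: an action sequence is drawn from Gaussian noise and iteratively denoised, conditioned on the observation. The noise schedule, shapes, and denoising network are illustrative assumptions, not the talk's implementation.

```python
import numpy as np

def ddpm_sample_actions(denoise_fn, obs, horizon=8, act_dim=2, steps=50, seed=0):
    """Sample an action sequence by iterative denoising (DDPM-style).

    denoise_fn(a, obs, t) predicts the noise present in `a` at step t;
    in practice it would be a trained network conditioned on `obs`.
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)      # illustrative linear schedule
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    a = rng.standard_normal((horizon, act_dim))  # start from pure noise
    for t in reversed(range(steps)):
        eps = denoise_fn(a, obs, t)              # predicted noise
        # Reverse-step mean, then fresh noise on all but the final step.
        a = (a - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            a = a + np.sqrt(betas[t]) * rng.standard_normal(a.shape)
    return a
```

The talk's efficiency gains come from how the image and robot-state streams are treated asymmetrically during training, not from this generic loop.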

Second, we tackle the problem of decision-making under extreme state uncertainty, a central challenge with deformable objects. We model visual uncertainty using a variational autoencoder, propagating its latent variance to dynamically modulate a maximum-entropy reinforcement learning algorithm. This method encourages uncertainty-reducing exploration, which is crucial when visual data is noisy or occluded.
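To make the modulation concrete, here is a minimal sketch of one plausible coupling, in which the mean latent variance predicted by the VAE encoder scales the entropy temperature of a soft-actor-critic-style learner. The specific rule, gain, and base temperature are assumptions; the talk states only that the latent variance dynamically modulates the maximum-entropy objective.

```python
import numpy as np

def entropy_temperature(latent_logvar, alpha_base=0.2, gain=0.5):
    """Scale a max-entropy RL temperature by the VAE's latent uncertainty.

    latent_logvar: per-dimension log-variance from the VAE encoder.
    Higher predicted variance -> higher temperature -> more exploration,
    encouraging the agent to act in ways that reduce its uncertainty.
    """
    uncertainty = np.mean(np.exp(latent_logvar))  # mean latent variance
    return alpha_base * (1.0 + gain * uncertainty)

# A confident observation (low variance) keeps the temperature near base,
# while an occluded, uncertain one raises it.
confident = entropy_temperature(np.full(8, -4.0))
occluded = entropy_temperature(np.full(8, 1.0))
```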

Finally, we introduce methods to enhance robot manipulation with equivariance. Specifically, we present a new grasping model that is equivariant to planar rotations, directly improving the efficiency of tasks like tabletop folding or arranging pliable materials. The model uses a rotation-equivariant tri-plane representation and an equivariant generative grasp planner. This work serves as an inspiration for other equivariances vital for handling deformable objects.
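The defining property here, that rotating the scene rotates the predicted grasp by the same amount, can be checked numerically. The sketch below uses a deliberately trivial equivariant predictor (the centroid of the observed points) as a stand-in for the talk's tri-plane model, purely to show what the property means.

```python
import numpy as np

def rot(theta):
    """2x2 planar rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def toy_grasp_direction(points):
    """A trivially SO(2)-equivariant stand-in 'planner': the centroid of
    the point set. Rotating the input rotates the output identically."""
    return points.mean(axis=0)

pts = np.random.default_rng(0).standard_normal((10, 2))
R = rot(0.7)
# Equivariance check: f(R x) == R f(x).
lhs = toy_grasp_direction(pts @ R.T)
rhs = R @ toy_grasp_direction(pts)
```

An equivariant model gets this property by construction, so it need not see every orientation of a garment during training, which is where the data efficiency comes from.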

Together, these contributions pave the way for a new generation of mobile manipulators capable of reasoning under uncertainty and learning effectively from less data, accelerating the development of robust solutions for deformable object manipulation.

About Renaud Detry

Andrej Gams

Presentation

Learning unfolding, flinging and flattening of textiles as part of a complete textile-handling pipeline
Until recently, research on robot manipulation focused mostly on rigid objects, with great progress in industrial and service robotics; however, approaches for rigid-object manipulation cannot be directly applied to deformable, flexible objects such as textiles. Picking a textile object from a pile and folding it requires a range of perception and manipulation actions. While picking up can still be relatively straightforward (grab and pull up), unfolding a garment in the air is far more complex. In the talk I will first present our approach to unfolding garments in the air by selecting the point where the robot needs to grasp the item, work based on our efforts at the ICRA 2024 Grasping and Manipulation Competition. Once the item is unfolded, the robot(s) need to lay it as flat as possible on the surface in front of them; to learn this task, we employed deep reinforcement learning in simulation. Finally, if the placement is not perfect, we, just like humans, can still flatten the object, for which we employ a vision-to-motion end-to-end approach. In the talk I will present details of these three major task elements in preparing textiles for folding.

About Andrej Gams

Tamim Asfour

Learning Manipulation Constraints from Human Demonstration for Humanoid Robots
The era of functional humanoid robots has arrived, with AI-powered systems demonstrating remarkable locomotion and whole-body control capabilities. However, existing systems still lack the versatile manipulation capabilities required for general-purpose tasks. This gap is critical because robotic manipulation stands as one of the grand challenges in robotics: not only a technical hurdle but a fundamental gateway to intelligence. Manipulation, the ability to change the physical world, is intrinsically intertwined with intelligence, the ability to detect change and adapt to it.

I argue that manipulation is fundamentally about understanding and leveraging constraints: the geometric, kinematic, force, spatial, and temporal relationships between objects and hands. Learning these constraints enables generalization across object variations, environmental changes, and task contexts. The talk addresses key questions: How do we detect, represent, and learn constraints from human demonstrations and environmental interaction? How do we leverage these constraints for task execution? I will present taxonomies as structured frameworks for understanding the space of possible constraints, and approaches for keypoint-based visual imitation learning that extract keypoints, geometric constraints, and movement primitives from human demonstrations. Examples from our ARMAR humanoid robots will illustrate these concepts in complex bimanual manipulation tasks.

The talk concludes with perspectives on extending this constraint-based framework to the manipulation of deformable objects, a frontier challenge requiring richer constraint representations.

About Tamim Asfour

Irene Garcia-Camacho

Presentation

Benchmarking Cloth Manipulation: Enabling comparison through standardization
Manipulation of highly deformable objects such as cloth is an important area of robotics research with many applications. However, while significant progress has been made in the field, the lack of well-defined and widely adopted benchmarks has hindered reproducibility, fair comparison, and cumulative progress, making it difficult to identify the strengths, limitations, and improvements of proposed methods.

In this talk, I will discuss why benchmarking is essential for advancing robotic manipulation research, and why cloth manipulation poses unique challenges for benchmark design. These challenges include the intrinsic variability of textile materials, sensitivity to initial conditions, and the difficulty of defining task success metrics that are both meaningful and generalizable across robotic platforms and algorithms. I will present how my research has addressed these issues by proposing solutions for evaluation and standardization. In particular, I will explain why object standardization is key in benchmarking and how textile objects can be characterized by their properties. This approach makes it possible to compare cloth object sets, study the influence of textile properties on robotic manipulation, and support more informative benchmarking practices.

About Irene Garcia-Camacho

Noémie Jaquier

A differential geometric take on deformable object modeling
Lagrangian and Hamiltonian mechanics provide a robust framework for modeling the dynamics of physical systems by leveraging fundamental energy conservation laws. Integrating such physical consistency as an inductive bias into deep neural architectures has been shown to enhance prediction accuracy, data efficiency, and generalization when modeling mechanical systems. However, scaling these architectures to deformable objects remains challenging due to their high — or even infinite — dimensionality. In this talk, I will explore recent advances that adopt a differential geometric perspective on Lagrangian and Hamiltonian mechanics to extend the scope of physics-inspired neural networks to high-dimensional mechanical systems. I will present geometric architectures that learn physically consistent, interpretable, and low-dimensional dynamics that effectively capture the complex, high-dimensional behavior of deformable objects.
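For readers new to the framework, the core structure these architectures encode is Hamilton's equations, dq/dt = ∂H/∂p and dp/dt = -∂H/∂q. The sketch below integrates them symplectically for a known scalar Hamiltonian (a harmonic oscillator standing in for the learned network), illustrating the energy-preserving behavior the physics-consistent models build on.

```python
import numpy as np

def hamiltonian(q, p):
    # Known test Hamiltonian (harmonic oscillator), standing in for the
    # neural network such architectures would learn: H = p^2/2 + q^2/2.
    return 0.5 * p ** 2 + 0.5 * q ** 2

def symplectic_step(q, p, dt=1e-2, eps=1e-5):
    """One symplectic-Euler step of dq/dt = dH/dp, dp/dt = -dH/dq, with
    gradients taken numerically so any scalar H(q, p) plugs in."""
    dHdq = (hamiltonian(q + eps, p) - hamiltonian(q - eps, p)) / (2 * eps)
    p_new = p - dt * dHdq
    dHdp = (hamiltonian(q, p_new + eps) - hamiltonian(q, p_new - eps)) / (2 * eps)
    q_new = q + dt * dHdp
    return q_new, p_new

# Integrating for many steps keeps the energy (nearly) constant,
# which is the inductive bias these architectures bake in.
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = symplectic_step(q, p)
```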

About Noémie Jaquier

Francis wyffels

Presentation

Beyond Vision: Sensing for Fine Manipulation of Deformable Objects
Deformable objects remain challenging in robot manipulation: self-occlusions, partial observability, and non-linear, history-dependent dynamics expose the limits of vision-only pipelines, especially when reactivity and gentle contact are required. But why limit ourselves when we can look beyond vision? We will cover sensing modalities and sensorized fingers; lessons from cloth folding/unfolding benchmarks and datasets; UnfoldIR and reactive policies; food handling, where slip matters more than force; and the impact of contact-rich sensing on imitation learning and simulation-to-real pipelines. Along the way, participants will see our Halberd ecosystem for robot sensing and instrumentation, which accelerates data collection and learning. We close with open problems and high-impact opportunities for the community. See also https://airo.ugent.be/open-source/.

About Francis wyffels

Gokhan Solak

Presentation

Learning, Control, and Modelling for Deformable Object Manipulation and Safe Interaction
This talk explores two key research directions that are central to deploying robots in everyday human environments: learning to manipulate deformable objects and learning to act safely during physical interaction. The talk is organized into two complementary parts, each addressing one of these challenges. The first part focuses on deformable manipulation, using ropes as a motivating example. We review approaches for simulating, modelling, and tracking deformable objects, and discuss how learning can be used to identify model parameters and design controllers capable of achieving dynamic manipulation behaviours.

The second part focuses on safety in robotic learning and control. We introduce concepts from safe reinforcement learning, variable impedance control, and passivity theory, and discuss how safety constraints can be formulated and enforced to ensure safe and stable physical interactions with the environment.

Throughout the talk, we combine a high-level overview of relevant literature with practical insights and lessons learned from our own research experience, aiming to provide students with both conceptual understanding and intuition for tackling these challenges in real robotic systems.

About Gokhan Solak

Danijel Skočaj

Presentation

Dense regression for object and grasp point detection and anomaly detection
Perception is a fundamental capability in robotics, enabling interaction with complex and unstructured environments, with computer vision and deep learning playing a central role in recent advances. In this talk, we present a computer vision method originally developed for object counting and localisation with point-level supervision, addressing the challenge of reducing the need for detailed and costly annotations. The approach is based on dense regression of direction vectors pointing towards the nearest point of interest, together with the estimation of additional task-relevant quantities. We describe its extension to predicting grasp points and orientations for object manipulation, with a particular focus on textile objects, covering both 3-DoF and 6-DoF grasping scenarios.
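A minimal version of the dense regression target can be constructed directly: for every pixel, a unit vector pointing at the nearest annotated point. The array layout and names below are illustrative; the actual method regresses additional task-relevant quantities (such as grasp orientation) alongside the direction field.

```python
import numpy as np

def direction_field(h, w, points):
    """Per-pixel unit vectors toward the nearest annotated point.

    A minimal dense-regression target under point-level supervision.
    Returns an (h, w, 2) array of (dy, dx) components.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([ys, xs], axis=-1).astype(float)      # (h, w, 2)
    pts = np.asarray(points, dtype=float)                # (n, 2)
    # Offset from every pixel to every point; keep the nearest one.
    diff = pts[None, None, :, :] - pix[:, :, None, :]    # (h, w, n, 2)
    dist = np.linalg.norm(diff, axis=-1)                 # (h, w, n)
    nearest = diff[ys, xs, dist.argmin(axis=-1)]         # (h, w, 2)
    norm = np.linalg.norm(nearest, axis=-1, keepdims=True)
    return nearest / np.maximum(norm, 1e-6)              # unit vectors
```

Because only the annotated points are needed to build this target, the annotation cost stays at the point level rather than requiring dense masks.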

Building on similar perception principles and a shared goal of minimising annotation requirements, the second part of the talk addresses the problem of anomaly detection. We present a range of approaches spanning from fully supervised to unsupervised and zero-shot settings, and discuss their respective strengths and limitations.

About Danijel Skočaj

Jihong Zhu

Presentation

Deformable Object Manipulation: fundamental challenges and promising applications
What makes the manipulation of deformable objects difficult? In this talk, I will discuss the fundamental challenges introduced by object deformation, particularly the computational complexity of tracking and predicting shape changes, and how my research addresses these issues. I will focus on promising applications in the domain of assistive dressing, where handling fabric is a critical necessity. A key highlight of my presentation will be my approach to simplifying the cloth model for dressing tasks. By utilizing the concept of diminishing rigidity, I will demonstrate how we can reduce the complexity of the interaction to make robotic assistance feasible and effective, bridging the gap between theoretical challenges and meaningful real-world care.
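A toy version of the diminishing-rigidity idea: each cloth point follows the gripper with a weight that decays exponentially with its distance from the grasp, replacing a full cloth simulation with a closed-form approximation. The decay form and parameter below are illustrative, not the talk's exact model.

```python
import numpy as np

def diminishing_rigidity_weights(dists, k=1.5):
    """Influence of the gripper on each cloth point: exp(-k * d), where d
    is the point's (e.g. geodesic) distance from the grasp and k is a
    stiffness-like parameter (value here is illustrative)."""
    return np.exp(-k * np.asarray(dists, dtype=float))

def predict_point_motion(dists, gripper_delta, k=1.5):
    """Closed-form per-point displacement for a small gripper motion:
    the grasped point follows exactly, distant points follow less."""
    w = diminishing_rigidity_weights(dists, k)                           # (n,)
    return w[:, None] * np.asarray(gripper_delta, dtype=float)[None, :]  # (n, 3)
```

The appeal for dressing tasks is that this prediction is a single vectorized expression per control step, cheap enough for online use.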

About Jihong Zhu