This work was conducted during internships at Meta Reality Labs Research.
In teleoperation of contact-rich manipulation tasks, selecting robot impedance is critical but difficult. The robot must be compliant enough to avoid damaging the environment, yet stiff enough to remain responsive and to apply force when needed.
In this paper, we present Stiffness Copilot, a vision-based policy for shared-control teleoperation in which the operator commands robot pose and the policy adjusts robot impedance online.
To train Stiffness Copilot, we first infer direction-dependent stiffness matrices in simulation using privileged contact information. We then use these matrices to supervise a lightweight vision policy that predicts robot stiffness from wrist-camera images and transfers zero-shot to real images at runtime.
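As a rough illustration of what a direction-dependent stiffness target can look like (the construction and gain values here are our assumptions, not the paper's actual formulation), one can soften stiffness along an estimated contact normal while staying stiff in the tangential plane:

```python
import numpy as np

def stiffness_from_contact_normal(normal, k_soft=100.0, k_stiff=1000.0):
    """Build a 3x3 translational stiffness matrix that is compliant along
    the contact normal and stiff in the tangential plane.

    Illustrative sketch only: the gains and the rank-1 construction are
    assumptions, not the method used in the paper.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)          # unit contact normal
    P = np.outer(n, n)                 # projector onto the normal direction
    # Soft along the normal, stiff in the orthogonal (tangential) subspace.
    return k_soft * P + k_stiff * (np.eye(3) - P)

# Example: contact normal along z -> compliant in z, stiff in x and y.
K = stiffness_from_contact_normal([0.0, 0.0, 1.0])
```

This matches the qualitative behavior described below for vase wiping, where the robot remains compliant perpendicular to the contact surface.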
In a human-subject study, Stiffness Copilot achieved safety comparable to a constant low-stiffness baseline while matching the efficiency of a constant high-stiffness baseline.
We conducted a within-participants human-subject study on three contact-rich tasks. Each participant teleoperated the robot under three impedance conditions. The visualizations below show synchronized camera views, predicted stiffness (as an ellipsoid), and measured contact force.
Stiffness Copilot adjusts the robot stiffness based on visual context. The ellipsoid below visualizes the predicted stiffness.
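The ellipsoid visualization follows from the eigendecomposition of the (symmetric) predicted stiffness matrix: the eigenvectors give the ellipsoid's principal axes and the eigenvalues give the per-axis stiffness. A minimal sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def stiffness_ellipsoid(K):
    """Return principal axes (columns of `axes`) and radii for rendering a
    symmetric positive-definite stiffness matrix K as an ellipsoid.

    Radii are taken proportional to the eigenvalues, so stiff directions
    appear as long ellipsoid axes.
    """
    eigvals, eigvecs = np.linalg.eigh(K)   # eigenvalues in ascending order
    return eigvecs, eigvals

# Example: compliant along x only -> short axis in x, long axes in y and z.
K = np.diag([100.0, 1000.0, 1000.0])
axes, radii = stiffness_ellipsoid(K)
```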
Stiffness Copilot in Vase Wiping: The robot remained compliant perpendicular to the contact surface.
Third Person 1
Third Person 2
Controls: Press Space to play/pause. Drag the timeline handle to scrub. Drag the 3D view to rotate, scroll to zoom.
@article{wang2026stiffnesscopilot,
title={Stiffness Copilot: An Impedance Policy for Contact-Rich Teleoperation},
author={Yeping Wang and Zhengtong Xu and Pornthep Preechayasomboon and Ben Abbatematteo and Amirhossein H. Memar and Nicholas Colonnese and Sonny Chan},
year={2026}
}