Squeezing the Last Drop of Accuracy: Hand-Eye Calibration via Deep Reinforcement Learning-Guided Pose Tuning

Seunghui Shin, Daeho Kim, Hyoseok Hwang
{jumin1116, kdh2769, hyoseok}@khu.ac.kr
Kyung Hee University AIRLAB
IEEE Robotics and Automation Letters (RA-L) 2025

Abstract

  • Hand-eye calibration is a fundamental task in robotics, requiring high precision to ensure accurate manipulation. This is especially crucial for recent markerless methods, which depend on precise pose estimation for effective end-effector calibration. In this paper, we propose a novel approach that improves calibration performance by adjusting the end-effector's pose to reduce prediction error. Our method utilizes a reward structure derived from trained pose estimation networks, enabling a Soft Actor-Critic-Discrete agent to learn in a simulated environment how to enhance calibration performance through action selection. Our experiments show that calibration results achieved with our method outperform those from initial poses alone in both markerless and marker-based methods. Real-world experiments further validate the efficacy of our approach in actual robotic systems. These results demonstrate that our proposed method effectively enhances the performance of pose estimation-based hand-eye calibration.
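
To make the idea concrete, here is a minimal sketch of the agent's interface (an illustration only, not the paper's implementation): the agent picks from a small discrete set of end-effector perturbations, and the reward is the drop in the trained pose network's estimation error after the move. The action set, step sizes, and the `env` and `estimate_pose_error` placeholders below are all assumptions.

    import numpy as np

    # Hypothetical discrete action set: +/- translation along and
    # +/- rotation about each end-effector axis (12 actions in total).
    ACTIONS = [(axis, sign)
               for axis in ("tx", "ty", "tz", "rx", "ry", "rz")
               for sign in (+1, -1)]
    STEP_T = 0.005            # assumed 5 mm translation step
    STEP_R = np.deg2rad(1.0)  # assumed 1 degree rotation step

    def step(env, pose, action_idx, estimate_pose_error):
        """Apply one discrete pose adjustment and return (new_pose, reward).

        env: placeholder simulator that moves the robot and renders a view.
        estimate_pose_error: placeholder scoring the trained pose network's
        prediction error at a given pose. The reward is the error reduction,
        so the SAC-Discrete agent is driven toward poses that the network
        estimates more accurately.
        """
        axis, sign = ACTIONS[action_idx]
        delta = STEP_T if axis.startswith("t") else STEP_R
        new_pose = dict(pose)  # pose given as {"tx": ..., ..., "rz": ...}
        new_pose[axis] += sign * delta
        reward = estimate_pose_error(env, pose) - estimate_pose_error(env, new_pose)
        return new_pose, reward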

Overview

  • Conventional pose-based hand-eye calibration relies on the initial end-effector pose, often yielding inaccurate calibration (trajectory shown by the red arrow).
  • Our approach leverages deep reinforcement learning (DRL) to train an agent that adjusts the end-effector toward better poses to improve calibration accuracy (trajectory shown by the blue arrow).
Main Architecture

  • Our framework first estimates the end-effector pose using a pose estimation network.
  • Subsequently, a SAC-Discrete agent is trained to adjust the end-effector to poses that yield lower estimation errors.
  • Hand-eye calibration is then performed using the known forward kinematics and the refined end-effector poses; a minimal sketch of this step follows the list.
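
The sketch below illustrates this last step, assuming a static camera (eye-to-hand) and poses given as 4x4 homogeneous matrices; the simple averaging here is a stand-in for the actual calibration solver, not the paper's code.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def calibrate_camera_to_base(T_base_ee_list, T_cam_ee_list):
        """Estimate T_base_cam from paired end-effector poses.

        T_base_ee_list: poses from forward kinematics (base frame).
        T_cam_ee_list:  poses from the pose estimation network (camera frame).
        """
        # Each pair gives one estimate: T_base_cam = T_base_ee @ inv(T_cam_ee).
        estimates = [T_be @ np.linalg.inv(T_ce)
                     for T_be, T_ce in zip(T_base_ee_list, T_cam_ee_list)]

        # Fuse the noisy per-pose estimates: mean translation,
        # chordal mean rotation.
        T_base_cam = np.eye(4)
        T_base_cam[:3, 3] = np.mean([T[:3, 3] for T in estimates], axis=0)
        T_base_cam[:3, :3] = R.from_matrix(
            np.stack([T[:3, :3] for T in estimates])).mean().as_matrix()
        return T_base_cam

In practice one would typically solve the AX = XB formulation rather than average, but the point is the same: the refined poses feed directly into the solver, so lower estimation error translates into lower calibration error.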

Evaluation in Simulation

Markerless Calibration

Our method significantly reduced estimation errors for hand-eye calibration. Trends in end-effector pose estimation matched those in calibration, as expected, since calibration depended directly on pose accuracy. These results confirmed that our approach effectively guided the end-effector toward poses that reduced errors, enhancing both pose estimation and calibration performance. Moreover, our method was modular and could be integrated into various pose estimation networks.

Marker-Based Calibration

The calibration performance was highly sensitive to marker pose accuracy; increased marker pose error led to larger calibration errors. By guiding the end-effector to poses with lower estimation errors, our method effectively reduced errors in both marker pose estimation and hand-eye calibration, demonstrating its utility in marker-based settings as well.
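
For context, the standard marker-based pipeline referred to here can be sketched with OpenCV's calibrateHandEye (a hedged example, not the paper's code; collecting the per-pose detections, e.g. via cv2.solvePnP on marker corners, is assumed). Any error in the marker poses enters the AX = XB system directly, which is why calibration is so sensitive to them.

    import cv2

    def marker_based_calibration(R_base2gripper, t_base2gripper,
                                 R_target2cam, t_target2cam):
        """Eye-to-hand calibration from marker observations via AX = XB.

        R_base2gripper, t_base2gripper: inverted forward-kinematics poses,
            one per robot configuration.
        R_target2cam, t_target2cam: marker poses in the camera frame, e.g.
            from cv2.solvePnP on detected marker corners.
        For a static camera, feeding base-to-gripper motions to OpenCV's
        eye-in-hand solver yields the camera pose in the base frame.
        """
        R_cam2base, t_cam2base = cv2.calibrateHandEye(
            R_base2gripper, t_base2gripper,
            R_target2cam, t_target2cam,
            method=cv2.CALIB_HAND_EYE_TSAI)
        return R_cam2base, t_cam2base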

Evaluation in Real-World Setting

High-Precision Targeting

Our method consistently reduced calibration errors, whereas a random policy yielded only marginal improvements or even increased the error. These results confirmed the effectiveness of our approach in improving real-world hand-eye calibration.

Peg-Insertion

Integrating our method increased the task success rate to 71%, showing that even small improvements in hand-eye calibration can substantially enhance real-world task execution.

BibTeX

    @article{shin2025squeezinghec,
      title={Squeezing the Last Drop of Accuracy: Hand-Eye Calibration via Deep Reinforcement Learning-Guided Pose Tuning},
      author={Shin, Seunghui and Kim, Daeho and Hwang, Hyoseok},
      journal={IEEE Robotics and Automation Letters},
      year={2025},
      publisher={IEEE}
    }