Robot Domino Artist

Contributors: Michael Jenz, Gregory Aiosa, Daniel Augustin, Chenyu Zhu

Skills: ROS 2, MoveIt, Computer Vision

GitHub Repo

Summary

This project uses the Franka Emika Robot (FER) to autonomously manipulate dominoes into predefined patterns and then topple them. The system records the initial positions of the dominoes using a computer vision pipeline, plans collision-aware manipulation motions, and executes placement with force-controlled contact.

To avoid collisions and grasping failures, dominoes are first reoriented into a staging configuration before final placement. Because the table height varied slightly across the workspace, force-based placement was implemented to ensure reliable contact with the surface.

Accurate camera extrinsic calibration was required for reliable performance. The camera was calibrated in-hand using easy_handeye2, and the resulting calibration was used throughout the manipulation pipeline.
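Once the hand-eye calibration is known, the camera pose in the robot base frame at any instant is the composition of the end-effector pose with the fixed end-effector-to-camera transform. A minimal sketch of that composition (the numeric transforms below are illustrative placeholders with identity rotations, not values from the project):

```python
def mat_mul(a, b):
    """Multiply two 4x4 homogeneous transforms (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Hypothetical transforms, identity rotation for brevity:
# end-effector pose in the base frame (from forward kinematics) ...
base_T_ee = [[1, 0, 0, 0.30],
             [0, 1, 0, 0.00],
             [0, 0, 1, 0.50],
             [0, 0, 0, 1]]
# ... and the fixed camera offset from the hand-eye calibration.
ee_T_cam = [[1, 0, 0, 0.05],
            [0, 1, 0, 0.00],
            [0, 0, 1, 0.10],
            [0, 0, 0, 1]]

# Camera pose in the base frame: base_T_cam = base_T_ee @ ee_T_cam.
base_T_cam = mat_mul(base_T_ee, ee_T_cam)
```

In the real system this composition is handled by the TF tree once easy_handeye2 publishes the calibrated transform; the sketch just makes the underlying math explicit.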

Demo

The video above shows the full system performing domino placement followed by a toppling sequence once all dominoes reach their goal poses.

System Architecture

Domino Movement Algorithm

The domino movement algorithm is the core routine responsible for moving dominoes from their initial positions to the final pattern. Each domino follows a three-stage process:

  1. Initial pickup from the table
  2. Staging and reorientation into a standing configuration
  3. Final placement into the goal pose

The staging step is critical due to the small size of the dominoes and the geometry of the gripper. Attempting to place dominoes directly from a lying configuration resulted in collisions with neighboring dominoes. Reorienting them first enabled safe and repeatable placement.
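The three stages above can be sketched as a simple sequencing routine. The `Pose` type, `STAGING_POSE`, and step names here are illustrative placeholders standing in for the project's actual MoveIt planning calls:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Planar pose: x, y in meters and yaw in radians."""
    x: float
    y: float
    yaw: float

# Hypothetical staging location where a domino is stood upright,
# clear of its neighbors, before final placement.
STAGING_POSE = Pose(0.4, 0.0, 0.0)

def move_domino(start: Pose, goal: Pose) -> list:
    """Return the ordered manipulation steps for one domino.

    Mirrors the three-stage process: pick up the lying domino,
    reorient it at the staging pose, then place it at its goal.
    """
    return [
        ("pick", start),          # 1. initial pickup from the table
        ("stage", STAGING_POSE),  # 2. reorient into a standing configuration
        ("place", goal),          # 3. final placement into the goal pose
    ]
```

Keeping the sequence explicit like this makes the staging step easy to reason about: a direct pick-to-place plan would skip the middle entry and, as described above, collide with neighboring dominoes.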

Domino Vision Algorithm

The vision pipeline identifies the pose of each domino on the table and publishes these poses to the TF tree when requested by the manipulation node.

  1. Position Identification: Color filtering is used to detect domino centers in the image, and depth data is combined with camera intrinsics to compute 3D positions.
  2. Orientation Identification: Bounding boxes are used to estimate the domino’s orientation about the vertical axis, which is converted into a quaternion.

This approach assumes the camera is perpendicular to the table and that the table surface is flat. In practice, these assumptions were imperfect and introduced small pose errors that accumulated during placement.
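The two identification steps reduce to standard pinhole back-projection plus a yaw-only rotation. A minimal sketch under the same flat-table assumption (function names are illustrative; in the real pipeline the pixel center would come from the color mask and the yaw from the detected bounding box):

```python
import math

def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (meters) into the camera
    frame using the pinhole model with intrinsics fx, fy, cx, cy."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def yaw_to_quaternion(yaw):
    """Quaternion (x, y, z, w) for a rotation of `yaw` about the
    vertical axis -- sufficient because a lying domino on a flat
    table only rotates about z."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))
```

If the camera is not exactly perpendicular to the table, the depth no longer equals the z-coordinate of a level surface, which is one source of the small pose errors noted above.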

Force-Controlled Placement

To compensate for inaccuracies in table height and vision estimation, force-controlled placement was implemented. During pickup and placement, the robot lowers the gripper until the measured joint effort exceeds a threshold, indicating contact with the table.

This eliminated hard-coded height values and significantly increased the robustness of the system. Implementing this behavior required temporarily disabling collision objects for the table and dominoes to prevent planning failures during forced contact.
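The contact-detection behavior amounts to a guarded descent: step the gripper down until the effort reading crosses a threshold or a safety floor is hit. A minimal sketch, with a callable standing in for the FER's joint-effort feedback (all names, step sizes, and thresholds are illustrative):

```python
def lower_until_contact(read_effort, z_start, z_min,
                        step_down=0.005, effort_threshold=2.0):
    """Descend in small steps until the measured effort exceeds the
    threshold (contact with the table) or the safety floor z_min is
    reached without contact.

    read_effort: callable taking the current height z and returning
    the measured joint effort at that height.
    """
    z = z_start
    while z > z_min:
        if read_effort(z) > effort_threshold:
            return z  # contact detected at this height
        z -= step_down
    return None  # no contact before the safety floor
```

Because the loop terminates on a sensed event rather than a hard-coded height, the same routine works across the table-height variation described above.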

While this required careful management of collision objects and scene state, it ultimately turned discrepancies between simulation and the real world into a tool rather than a limitation.

Reflections

This project highlighted the complexity of real-world robotic manipulation. Small perception errors quickly compound without feedback, and reliable systems require tight integration between sensing, planning, and control.

Implementing force-controlled placement and asynchronous execution monitoring dramatically improved system robustness and shaped how I approach manipulation problems moving forward.