SO-ARM100 Full-Stack: From 3D Printing to Learning-Enabled Manipulation
Preparing for the LeRobot Global Hackathon 2025 meant building a complete SO-ARM100 stack—from 3D-printed parts all the way to learning-enabled manipulation demos. This guide distills what worked for me into a reusable playbook. It leans on the internal project notes in the fullstack-manip repository and should help you bring up your own SO-ARM, extend it to other manipulators, and plan a month-long sprint from research to evaluation.
Content Outline
- 1. SO-ARM100 at a glance
- 2. Full-stack architecture
- 3. Hardware stack
- 4. Digital twins and modeling assets
- 5. ROS2 integration workflow
- 6. Simulation pipelines
- 7. Control and learning tracks
- 8. Experiments, evaluation, and documentation
1. SO-ARM100 at a glance
SO-ARM100 (and its refreshed SO-ARM101 variant) is an open-source 6-DOF serial manipulator from The Robot Studio. Each joint is driven by a FEETECH smart servo, and the entire structure can be printed on an off-the-shelf 3D printer. The platform ships as a leader–follower pair for teleoperation, but you can operate a single follower arm as a standalone manipulator.
Community momentum exploded during the LeRobot Global Hackathon: the arm now anchors an end-to-end learning framework, attracts third-party simulation assets, and keeps receiving hardware upgrades. The goal of this post is to highlight how to:
- Build or source the mechanical stack quickly
- Generate digital twins for ROS2, MuJoCo, and Isaac ecosystems
- Layer model-based control with diffusion policies or RL
- Benchmark and document the whole journey so you can replicate or generalize to other fixed-base manipulators
2. Full-stack architecture
The internal architecture notes (see docs/04-architecture.md) break the system into modular layers so components can be swapped without rewriting everything:
- Hardware: 3D-printed structure, servos, drivers, power distribution, and safety interlocks.
- State estimation: Multi-camera rigs, motion capture, IMU fusion, and synchronized logging.
- Simulation: Shared URDF/XACRO, MuJoCo XML, and USD assets for IsaacSim.
- Control: Joint-level PID with impedance and operational-space extensions.
- Planning: MoveIt OMPL pipelines, trajectory smoothing, collision checking.
- Learning: Diffusion policy (LeRobot), reinforcement learning in MuJoCo/IsaacLab, and VLA research integrations.
- Evaluation: Throughput, accuracy, robustness, latency, and safety metrics.
- Documentation: Living guides, release checklists, demos, and retrospective logs.
Keeping the stack modular is what makes it possible to start with SO-ARM and later retarget the same software to UR-series or custom manipulators.
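To make that swap-ability concrete, here is a minimal Python sketch of a hardware-facing interface; the names (`ManipulatorInterface`, `SoArmBackend`) are hypothetical illustrations, not types from the repo:

```python
from typing import Protocol

import numpy as np


class ManipulatorInterface(Protocol):
    """Hypothetical contract every hardware backend implements."""

    def read_joint_positions(self) -> np.ndarray: ...
    def send_joint_targets(self, q: np.ndarray) -> None: ...
    def emergency_stop(self) -> None: ...


class SoArmBackend:
    """SO-ARM100 backend over the FEETECH serial bus (I/O stubbed out)."""

    def read_joint_positions(self) -> np.ndarray:
        return np.zeros(6)  # replace with real servo reads

    def send_joint_targets(self, q: np.ndarray) -> None:
        pass  # replace with real servo writes

    def emergency_stop(self) -> None:
        pass  # torque off and latch until manual reset


def control_step(robot: ManipulatorInterface, q_goal: np.ndarray) -> None:
    """Planning and learning layers see only the Protocol, never the backend."""
    robot.send_joint_targets(q_goal)


control_step(SoArmBackend(), np.zeros(6))  # a UR backend would slot in the same way
```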
3. Hardware stack
The official build guide is on GitHub, but the project requirements document (docs/02-requirements.md) adds practical details you can act on.
3.1 Mechanical fabrication
- Printing cost: Expect roughly €50 for a follower arm and €105 for a leader–follower pair within the EU when outsourcing prints.
- Materials: PLA+ is the default; switch to PETG or nylon for higher temperature tolerance. Keep infill >35% for load-bearing links.
- Bill of materials: Follow the official STL pack and BOM; check the LeRobot Discord for community-sourced upgrades (metal joint inserts, cable harnesses).
3.2 Actuation and electronics
| Component | Notes |
|---|---|
| FEETECH STS3215 servos | Follower arm uses six identical servos (19.5 kg·cm at 7.4 V or 30 kg·cm at 12 V). Leader arm gears: 3 × 1/147 (C046), 2 × 1/191 (C044), 1 × 1/345 (C001). |
| Servo bus adapter | Waveshare Serial Bus Servo Driver Board with USB-UART bridge. |
| Debug tools | Official FEETECH Windows software, FT_SCServo_Debug_Qt, and the ROS2 driver. |
| Power | 7.4 V–12 V DC supply, sized for peak current (~6 A per arm). Match barrel jack polarity and connector size (5.5/2.1 mm or 5.5/2.5 mm). |
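Before wiring the bus into any framework, a ping smoke test over pyserial is worth five minutes. A minimal sketch, assuming the SCS-style framing FEETECH uses (0xFF 0xFF header, additive checksum, PING = 0x01) and a six-byte status reply; the port name is a placeholder, and you should verify the framing against FEETECH's protocol documentation for your firmware:

```python
import serial  # pip install pyserial

PORT = "/dev/ttyACM0"  # placeholder; adjust for your Waveshare adapter
BAUD = 1_000_000       # STS3215 factory default; some adapters cap lower


def checksum(payload: bytes) -> int:
    """Additive checksum over everything between header and checksum byte."""
    return (~sum(payload)) & 0xFF


def ping(bus: serial.Serial, servo_id: int) -> bool:
    """Send PING (0x01) and check for a clean six-byte status packet."""
    body = bytes([servo_id, 0x02, 0x01])  # id, length, instruction
    bus.write(b"\xff\xff" + body + bytes([checksum(body)]))
    reply = bus.read(6)                   # ff ff id len error chk (assumed)
    return len(reply) == 6 and reply[2] == servo_id and reply[4] == 0


with serial.Serial(PORT, BAUD, timeout=0.1) as bus:
    for servo_id in range(1, 7):          # the six follower-arm servos
        status = "ok" if ping(bus, servo_id) else "missing"
        print(f"servo {servo_id}: {status}")
```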
3.3 Sensing and peripherals
- Cameras: Mix-and-match UVC webcams, Intel RealSense D405/D435, or any RGB-D sensor supported via LeRobot camera adapters.
- Tactile: AnySkin patches add contact awareness on grippers or links.
- Additional I/O: Add an IMU module on the wrist for inertial feedback or MoCap markers if you have an optical tracking system.
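For UVC webcams it pays to sanity-check each feed before registering it with LeRobot's camera adapters; a quick OpenCV capture (device index and resolution are placeholders) is enough:

```python
import cv2

cap = cv2.VideoCapture(0)                 # UVC device index; adjust per rig
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

ok, frame = cap.read()
if not ok:
    raise RuntimeError("camera not readable; check index and permissions")
cv2.imwrite("test_frame.png", frame)      # inspect focus, exposure, framing
cap.release()
```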
3.4 Compute, networking, and safety
- Control workstation: Ubuntu 22.04 LTS (or macOS ≥12 for prototyping), 8+ CPU cores, 16 GB RAM, and an NVIDIA GPU if you plan to train diffusion or RL policies locally.
- Networking: USB-to-UART for servo bus, Ethernet for ROS2 machines, Wi-Fi for peripherals.
- Safety: Hardware emergency stop, safety relay, and over-voltage protection. Always power motors down before swapping attachments.
3.5 Calibration and troubleshooting
- Homing vs. kinematics: The “calibration” routine in the LeRobot hardware integration guide sets a consistent zero pose (see the sketch at the end of this subsection). For accuracy-critical work, run the Figaroh Plus toolbox to identify DH offsets from motion capture or camera data.
- Common pitfalls:
- Serial bus baud rate mismatch—some adapters top out below 1 Mbps, so configure both controller and motors accordingly.
- Mixed firmware revisions—update all servos to the same version (2.9 or 2.10) using the Windows tooling before chaining them.
- Power polarity mistakes—double-check barrel jack wiring to avoid frying adapters and servos.
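The homing step itself reduces to simple arithmetic: capture raw tick counts in a known reference pose, then subtract them at runtime. A minimal sketch, assuming 4096 ticks per servo revolution (verify against your STS3215 datasheet):

```python
import json

import numpy as np

TICKS_PER_REV = 4096  # assumed 12-bit encoder resolution


def ticks_to_rad(raw: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """Convert raw servo ticks into joint angles relative to the homed zero."""
    return (raw - offsets) * (2.0 * np.pi / TICKS_PER_REV)


# Calibration: record raw readings while the arm is held in the reference pose.
offsets = np.array([2048, 2048, 2048, 2048, 2048, 2048])
with open("calibration.json", "w") as f:
    json.dump(offsets.tolist(), f)  # persist so every session shares one zero

# Runtime: a joint 100 ticks past its offset reads as ~0.153 rad.
print(ticks_to_rad(np.array([2148, 2048, 1948, 2048, 2048, 2048]), offsets))
```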
4. Digital twins and modeling assets
You’ll need consistent meshes and joint definitions across tools.
- URDF/XACRO: Start from The Robot Studio URDF package. Convert to XACRO if you want parameterized link lengths or alternative end-effectors.
- MoveIt collision geometry: Simplify meshes and define planning groups. The MoveIt config from JafarAbdi’s repo is a great baseline.
- MuJoCo XML: Use the LeRobot MuJoCo scene or LitchiCheng’s assets for RL-ready environments.
- Isaac/Omniverse USD: Check MuammerBay’s IsaacLab fork or konu-droid’s USD bundle for high-fidelity rendering.
Pipeline tip: keep a single source-of-truth CAD assembly, regenerate STEP → mesh exports when you tweak hardware, and version all generated assets alongside scripts.
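A tiny regeneration script enforces that discipline. This sketch shells out to the standard xacro CLI; the paths are placeholders for whatever layout your repo uses:

```python
import subprocess
from pathlib import Path

SRC = Path("description/so_arm100.urdf.xacro")  # the single source of truth
OUT = Path("generated/so_arm100.urdf")

OUT.parent.mkdir(parents=True, exist_ok=True)
# xacro expands parameters (link lengths, end-effector choice) into plain URDF.
subprocess.run(["xacro", str(SRC), "-o", str(OUT)], check=True)
print(f"regenerated {OUT}; commit it together with the refreshed meshes")
```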
5. ROS2 integration workflow
Bringing the arm into ROS2 is a three-step routine:
- Robot description: Publish the URDF/XACRO via `robot_state_publisher` and verify joint limits in RViz2. Leverage the control-specific configs from ros2_so_arm100.
- Motion planning: Use the MoveIt Setup Assistant to define planning groups (arm, wrist, gripper), tweak OMPL planners, and generate launch files. Plan to run `move_group` locally with RViz2 for interactive validation.
- ROS2 Control bridge: Map FEETECH servos to hardware interfaces. The so_arm_100_hardware package implements custom `ros2_control` hardware handles; you can adapt it if you use different bus adapters.
Add dedicated launch files for simulation-only, teleop, and real-hardware sessions. That lets you flip between modes without editing YAML every time.
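ROS2 launch files are plain Python, so the simulation-only session might look like the sketch below (package layout and URDF path are placeholders); the teleop and real-hardware variants differ mainly in which hardware nodes they start:

```python
# launch/sim.launch.py, a hypothetical simulation-only bringup
from pathlib import Path

from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description() -> LaunchDescription:
    urdf = Path("generated/so_arm100.urdf").read_text()
    return LaunchDescription([
        # Publishes TF frames from the URDF and incoming /joint_states.
        Node(
            package="robot_state_publisher",
            executable="robot_state_publisher",
            parameters=[{"robot_description": urdf}],
        ),
        # GUI sliders stand in for real servos during simulation-only sessions.
        Node(
            package="joint_state_publisher_gui",
            executable="joint_state_publisher_gui",
        ),
    ])
```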
6. Simulation pipelines
- MoveIt + RViz2: Ideal for planning validation, collision checking, and quick trajectory visualization.
- Gazebo: Use the SO-100 packages for physics-based integration tests. Great for verifying controllers before touching hardware.
- MuJoCo: Pair the mujoco-learning assets with Stable Baselines3 for reinforcement learning; a skeletal environment follows this list. Domain randomization scripts live in the `fullstack_manip/simulation` folder.
- IsaacSim / IsaacLab: Follow the Seeed Studio guide and MuammerBay’s IsaacLab tasks to scale to GPU-accelerated RL or photorealistic demonstrations.
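To make that route concrete, here is a skeletal Gymnasium environment around a MuJoCo scene, trained for a handful of steps with Stable Baselines3. The XML path, reward, and horizon are placeholders rather than the repo's actual task definitions:

```python
import gymnasium as gym
import mujoco
import numpy as np
from gymnasium import spaces
from stable_baselines3 import SAC


class SoArmReachEnv(gym.Env):
    """Skeletal reach task: drive the joints toward a fixed target pose."""

    def __init__(self, xml_path: str = "scenes/so_arm100.xml"):
        self.model = mujoco.MjModel.from_xml_path(xml_path)
        self.data = mujoco.MjData(self.model)
        self.action_space = spaces.Box(
            -1.0, 1.0, shape=(self.model.nu,), dtype=np.float32)
        self.observation_space = spaces.Box(
            -np.inf, np.inf,
            shape=(self.model.nq + self.model.nv,), dtype=np.float32)
        self.q_target = np.zeros(self.model.nq)  # placeholder target pose
        self.steps = 0

    def _obs(self) -> np.ndarray:
        return np.concatenate([self.data.qpos, self.data.qvel]).astype(np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        mujoco.mj_resetData(self.model, self.data)
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        self.data.ctrl[:] = action
        mujoco.mj_step(self.model, self.data)
        self.steps += 1
        reward = -float(np.linalg.norm(self.data.qpos - self.q_target))
        truncated = self.steps >= 500            # placeholder episode horizon
        return self._obs(), reward, False, truncated, {}


# A short run just to confirm the plumbing before scaling experiments up.
SAC("MlpPolicy", SoArmReachEnv(), verbose=1).learn(total_timesteps=5_000)
```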
Simulation is the low-risk sandbox to iron out kinematics, controllers, and scene layouts before turning on the real servos.
7. Control and learning tracks
The project splits control into model-based and learning-based streams (docs/09-model-based-stack.md and docs/10-learning-based-stack.md).
7.1 Model-based stack
- Motion planning: MoveIt/OMPL provides sampling-based planners with trajectory smoothing and collision guards already validated in tests (`tests/` folder).
- Controllers: Start with joint-space PID, then layer impedance or operational-space control for compliant behaviors (a minimal PID sketch follows this list). Watchdogs should monitor torque estimates and servo temperatures.
- Visual servoing: Plan for camera calibration (hand–eye) and image-space feedback loops to close the gap between perception and actuation.
- Grasping: Integrate with GraspIt!, Dex-Net, or custom heuristics if you need high-fidelity pick-and-place.
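For the first controller rung, a joint-space PID loop with a temperature watchdog can be this small. Gains, limits, and the `read_state`/`write_cmd` callables are hypothetical placeholders for your servo I/O:

```python
import time

import numpy as np

KP, KI, KD = 4.0, 0.1, 0.2   # placeholder gains; tune per joint
TEMP_LIMIT_C = 65.0          # conservative servo temperature cutoff
DT = 0.02                    # 50 Hz control loop


def pid_step(q, q_goal, integ, prev_err):
    """One joint-space PID update; returns clipped command and new state."""
    err = q_goal - q
    integ = integ + err * DT
    deriv = (err - prev_err) / DT
    u = KP * err + KI * integ + KD * deriv
    return np.clip(u, -1.0, 1.0), integ, err


def safe_loop(read_state, write_cmd, q_goal, n_joints=6):
    """Run PID until the watchdog trips on an over-temperature servo."""
    integ = np.zeros(n_joints)
    prev_err = np.zeros(n_joints)
    while True:
        q, temps = read_state()            # joint angles [rad], temps [°C]
        if np.any(temps > TEMP_LIMIT_C):
            write_cmd(np.zeros(n_joints))  # watchdog: command zero, bail out
            raise RuntimeError("servo over-temperature, stopping")
        u, integ, prev_err = pid_step(q, q_goal, integ, prev_err)
        write_cmd(u)
        time.sleep(DT)
```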
7.2 Learning-based stack
- Imitation/Diffusion: LeRobot’s diffusion policy pipelines accept teleop demonstrations. Record data through the ROS2 bridge or IsaacSim teleoperation tools.
- Reinforcement learning:
- MuJoCo + Stable Baselines3 for lightweight experiments.
- IsaacLab for GPU-accelerated curriculum learning (see the cube-lifting benchmark in MuammerBay’s repo).
- Vision-Language-Action: Track VLA research (e.g., the 2025 survey in `docs/03-literature-survey.md`) for open-vocabulary manipulation.
- Experiment tracking: Log runs with Weights & Biases or another experiment manager so you can reproduce policies and compare baselines; a minimal logging sketch follows this list.
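Whatever tracker you choose, wire it in on day one so every run carries its config. A minimal Weights & Biases sketch (project name and metric fields are placeholders):

```python
import wandb

run = wandb.init(
    project="so-arm100-experiments",  # placeholder project name
    config={"policy": "diffusion", "demos": 50, "seed": 0},
)

for episode in range(100):
    # Replace with real rollout results from simulation or hardware.
    success = episode % 3 != 0
    wandb.log({"episode": episode, "success": int(success), "cycle_time_s": 8.5})

run.finish()
```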
8. Experiments, evaluation, and documentation
- Metrics: Evaluate accuracy (pose error), robustness (success rate over trials), speed (cycle time), latency (control loop frequency), and safety incidents; the sketch after this list shows one way to reduce them from a trial log. These metrics mirror the evaluation doc (`docs/11-evaluation.md`).
- Test harness: Create regression scenarios covering simulation smoke tests, hardware dry-runs, and dataset integrity checks.
- Documentation cadence: Update build logs, wiring diagrams, calibration procedures, and demo scripts. Follow the checklist in `docs/12-documentation.md` so new collaborators can clone and run without guesswork.
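Reducing a trial log into those headline numbers takes only a few lines; the field names below are illustrative, not the repo's actual schema:

```python
import json

import numpy as np

with open("trials.json") as f:     # hypothetical per-trial log
    trials = json.load(f)          # list of dicts, one per trial

success = np.array([t["success"] for t in trials], dtype=float)
pose_err = np.array([t["pose_error_mm"] for t in trials])
cycle = np.array([t["cycle_time_s"] for t in trials])

print(f"success rate: {success.mean():.1%} over {len(trials)} trials")
print(f"pose error  : {pose_err.mean():.2f} mm (p95 {np.percentile(pose_err, 95):.2f} mm)")
print(f"cycle time  : {cycle.mean():.2f} s")
```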
Capture videos and annotated plots for each milestone; they make hackathon demos and retrospectives far more compelling.