Figure AI shows robot that really puts its hip into dishwasher duty

Figure AI Demonstrates Humanoid Robot’s Fluid Dishwasher Loading Capabilities

Figure AI, a prominent developer of general-purpose humanoid robots, has unveiled a compelling video demonstration showcasing its latest model, Figure 02, performing household chores with remarkable dexterity and natural motion. The focal point of the showcase is the robot autonomously unloading a dish rack and loading items into a dishwasher, a task that highlights significant advancements in robotic manipulation and whole-body control.

In the video, which runs just over a minute, Figure 02 repeatedly cycles through the chore without human intervention. The robot approaches a dish rack containing assorted soiled plates, bowls, cups, and utensils. Using its integrated vision system, it scans and identifies each item, grasps it securely with a five-fingered hand, and transfers it to the adjacent dishwasher. What stands out is the robot’s human-like efficiency: it orients dishes optimally for stacking, avoids collisions, and adapts to slight variations in item placement or size.

A particularly impressive aspect is the robot’s lower-body dynamics, especially its hip actuation. Unlike earlier robotic systems that relied on rigid, predefined trajectories, Figure 02 exhibits a subtle hip sway. This motion allows the robot to pivot its torso precisely while maintaining balance, enabling it to reach into the dishwasher’s racks at awkward angles. The sway mimics human biomechanics, where the hips counter-rotate to stabilize the upper body during manipulation tasks. This fluidity stems from Figure AI’s proprietary reinforcement learning framework, which trains the robot end-to-end across perception, planning, and control.
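The counter-rotation idea described above can be sketched as a momentum-cancellation rule: when the upper body twists one way, the hips command a yaw rate that keeps net angular momentum about the stance foot near zero. The inertia values and function below are illustrative assumptions, not Figure AI’s actual controller.

```python
I_UPPER = 6.0   # kg·m², hypothetical yaw inertia of torso and arms
I_LOWER = 12.0  # kg·m², hypothetical yaw inertia of pelvis and legs

def hip_counter_rate(upper_yaw_rate):
    """Hip yaw rate that cancels the upper body's angular momentum,
    so a fast arm reach doesn't twist the robot off balance."""
    return -(I_UPPER / I_LOWER) * upper_yaw_rate

# A 0.8 rad/s torso twist is balanced by a slower counter-rotation,
# since the lower body has twice the inertia in this toy model.
print(hip_counter_rate(0.8))  # -0.4
```

The same zero-net-momentum reasoning explains why the sway looks subtle: the heavier lower body needs only a fraction of the upper body’s rate to compensate.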

The company’s engineering team emphasizes that Figure 02 operates fully autonomously in this scenario. No remote teleoperation or scripted paths are involved. Instead, the robot leverages a unified neural network architecture that processes visual inputs from head-mounted cameras, fuses them with proprioceptive data from joint encoders and force-torque sensors, and outputs direct torque commands to its 41 degrees of freedom. This approach bypasses traditional modular robotics pipelines, which often suffer from error propagation between separate perception, grasping, and locomotion modules.
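As a rough illustration of that unified pipeline (not Figure AI’s actual network), the sketch below fuses a vision embedding with a proprioceptive vector and maps the result through a small MLP to one torque command per joint. All dimensions, layer sizes, and weights are hypothetical; only the 41-DoF output width comes from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS = 41                    # degrees of freedom reported for Figure 02
VISION_DIM = 128                 # hypothetical camera-encoder embedding size
PROPRIO_DIM = N_JOINTS * 2 + 6   # e.g. joint positions, velocities, F/T reading

def relu(x):
    return np.maximum(x, 0.0)

class EndToEndPolicy:
    """Toy fused policy: vision embedding + proprioception -> joint torques."""
    def __init__(self, hidden=256):
        in_dim = VISION_DIM + PROPRIO_DIM
        self.w1 = rng.normal(0.0, 0.05, (in_dim, hidden))
        self.w2 = rng.normal(0.0, 0.05, (hidden, N_JOINTS))

    def __call__(self, vision_embedding, proprio):
        x = np.concatenate([vision_embedding, proprio])  # sensor-fusion step
        return relu(x @ self.w1) @ self.w2               # direct torque command

policy = EndToEndPolicy()
tau = policy(rng.normal(size=VISION_DIM), rng.normal(size=PROPRIO_DIM))
print(tau.shape)  # one torque per actuated joint: (41,)
```

The point of the single forward pass is the one the article makes: there is no hand-off between separate perception, grasping, and locomotion modules where errors could compound.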

Figure 02 stands at 5 feet 6 inches tall and weighs 132 pounds, designed for safe operation alongside humans in unstructured environments like homes or warehouses. Its actuators deliver human-equivalent strength and speed: hands capable of 20 pounds of grip force per finger, and legs producing up to 150 Nm of torque at the hips. The dishwasher demo underscores scalability; the same policies trained for this task generalize to other chores, such as folding laundry or cooking prep, as hinted in prior Figure videos.

This progress builds on Figure AI’s iterative development. Earlier models, like Figure 01, demonstrated coffee-making and basic walking, but lacked the seamless integration seen here. The leap comes from massive simulation training combined with real-world data collection. Figure AI’s reinforcement learning pipeline simulates millions of hours of physical interactions, optimizing for robustness against slips, drops, or environmental clutter. Policies are then fine-tuned on hardware via imitation learning from expert human demonstrations, accelerated by partnerships with NVIDIA for GPU-intensive compute.

Industry observers note the hip-centric control as a breakthrough in bipedal robotics. Traditional bipeds like Boston Dynamics’ Atlas use model-predictive control for balance, resulting in stiff, athletic gaits optimized for parkour over chores. Figure 02 prioritizes utilitarian grace: energy-efficient steps with compliant joints that absorb perturbations. The hip motors, paired with series elastic elements, enable compliant torque modulation, preventing damage during incidental contacts.
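A minimal sketch of why a series elastic element yields compliant torque: the spring’s deflection, not the motor’s position, sets the output torque, and a saturation limit caps force during incidental contact. The stiffness value is an assumption; the 150 N·m ceiling echoes the hip torque figure quoted earlier.

```python
import numpy as np

K_SPRING = 300.0   # N·m/rad, assumed stiffness of the series elastic element
TAU_MAX = 150.0    # N·m, hip torque limit cited for Figure 02

def sea_torque(motor_angle, joint_angle):
    """Series elastic actuator: torque is proportional to spring deflection,
    so unexpected contact deflects the spring instead of spiking force."""
    tau = K_SPRING * (motor_angle - joint_angle)
    return float(np.clip(tau, -TAU_MAX, TAU_MAX))

# Free motion: small deflection, modest torque.
print(sea_torque(0.5, 0.25))   # 75.0
# Blocked joint (incidental contact): torque saturates instead of growing.
print(sea_torque(1.0, 0.0))    # 150.0
```

This is the mechanical intuition behind “absorbing perturbations”: a bump shows up as spring deflection the controller can sense and limit, rather than as an uncontrolled force spike.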

Figure AI positions this capability as a step toward commercial deployment. Backed by investors including Jeff Bezos, Microsoft, and OpenAI, the company aims to produce thousands of units annually by 2025 for factory pilots with BMW. Household applications, however, remain aspirational, contingent on bringing per-unit costs below $50,000 and on further safety validation.

The dishwasher video has sparked discussions on social platforms, with engineers praising the absence of “Hollywood robotics” effects like unnatural accelerations. Critics point to the controlled setup: a fixed dish rack and dishwasher suggest domain-specific tuning, though Figure AI counters that generalization tests exceed 95% success across variations.

Overall, this demonstration signals maturing AI-driven robotics, where humanoid form factors unlock versatility unattainable by specialized arms or wheeled bases. Figure 02’s hip-driven poise exemplifies how embodied intelligence is converging on human parity for dexterous labor.

