Journalist rents out his body to AI agents and earns nothing after two days of gig work

In an era where artificial intelligence is increasingly encroaching on human labor markets, the gig economy is evolving in unexpected ways. Platforms now enable AI agents to hire humans as proxies for physical tasks, essentially renting out human bodies to perform actions that software alone cannot accomplish. A journalist decided to test this emerging frontier by making his body available for hire over a continuous 48-hour period. The result? Zero earnings, highlighting the precarious realities of this nascent “body-for-hire” marketplace.

The experiment began with the journalist signing up for several specialized platforms designed to bridge the gap between digital AI agents and the physical world. These services function as matchmaking hubs, where autonomous AI systems post jobs requiring human intervention—such as capturing images from specific locations, manipulating objects, or navigating real-world environments. Humans, in turn, bid on or accept these gigs, often competing with a global pool of available workers. The journalist positioned himself as fully available, smartphone in hand, ready to execute tasks on demand.

One prominent platform was A.I. Shaadi, a service that pairs AI agents with human “bodies” for short-term rentals. Here, AI users define tasks via natural language prompts, and the platform dispatches nearby humans to fulfill them. Compensation typically comes as micro-payments, often pennies per task, scaled by complexity and urgency. Similar offerings appeared on TaskVerse and other aggregator sites, where AI agents from research labs, developers, or hobbyists seek proxies for experiments in robotics emulation, data collection, or environmental sensing.

Despite his commitment—staying awake and mobile throughout the two days—the journalist encountered a steady stream of hurdles. Initial sign-up was straightforward: a basic profile, location sharing enabled, and verification of physical capabilities via simple tests. The real bottleneck, however, was the waiting game. AI agents, while numerous in theory, proved unreliable clients. Many postings vanished before humans could respond, likely culled by algorithmic filters favoring cheaper or faster options. When gigs did materialize, they were trivial: snapping a photo of a street sign, walking 100 meters to verify a storefront’s hours, or holding a phone steady for a 30-second video scan.

The journalist accepted every viable task, executing them promptly with GPS confirmation and photo uploads as required. Yet payouts failed to materialize. Platforms cited reasons like “task validation pending,” “AI agent dispute,” or outright non-payment because the hiring AI had insufficient credits. In one instance, an AI requested a series of location checks in a park; after completion, the agent disconnected without approving the invoice. Competition was fierce, with workers from low-cost regions undercutting bids and driving rates below sustainable levels. Technical glitches compounded the problems: app crashes during task acceptance, delayed geolocation pings, and opaque matching algorithms that prioritized “verified” high-volume freelancers.

Over 48 hours, the journalist logged availability across multiple devices, ventured into urban areas for better signal and task density, and even promoted his profile on associated forums. He fielded a handful of tasks—perhaps a dozen in total—but none converted to earnings. The platforms’ dashboards displayed enticing statistics: thousands of active AI agents, millions in projected task volume. Reality, however, painted a different picture. Most AI “hirings” were simulated or low-stakes proofs-of-concept, not serious economic transactions. Human workers served as disposable extensions, with little recourse against flaky digital employers.

This experiment underscores broader challenges in the AI-human gig economy. While proponents envision a seamless symbiosis—AI directing intelligence, humans providing embodiment—the infrastructure remains immature. Payment systems lack robustness, dispute resolution is AI-mediated and biased toward programmers, and worker protections are nonexistent. Earnings data from the platforms suggest top performers might net $5 to $10 per hour during peaks, but averages hover near zero for newcomers. The journalist’s zero-dollar haul after exhaustive effort reveals the lottery-like nature of the work: high variance, low floors, and dependence on unpredictable AI behaviors.

Looking ahead, as large language models gain multimodal capabilities and robotics advance, demand for human proxies could surge. Yet, without regulatory oversight or fair labor standards, this market risks exploiting vulnerable workers in a race to the bottom. Developers behind these platforms emphasize innovation—democratizing physical access for AI research—but participants like the journalist highlight the human cost. For now, renting out one’s body to AI agents remains more novelty than viable income stream, a cautionary tale of technology’s uneven march into everyday labor.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.