Waymo's leaked system prompt reveals a 1,200-line rulebook for its in-car Gemini assistant

Waymo, Alphabet’s autonomous driving subsidiary, has inadvertently revealed the intricate operational instructions guiding its in-car AI assistant powered by Gemini Live. A leaked system prompt, spanning approximately 1,200 lines, outlines a comprehensive rulebook designed to ensure safe, user-friendly, and brand-aligned interactions within Waymo vehicles. The document, which surfaced publicly, provides unusual insight into how the AI manages conversations, handles sensitive queries, and prioritizes rider safety and satisfaction.

Discovery and Context of the Leak

The prompt was discovered in a public GitHub repository associated with Waymo’s Gemini integration. It serves as the foundational instruction set for the Gemini Live voice assistant deployed in Waymo’s robotaxi fleet. Gemini Live enables natural, real-time conversations, allowing passengers to inquire about routes, vehicle status, entertainment options, and more. Unlike standard chatbot prompts, this one is exceptionally verbose, reflecting the high-stakes environment of autonomous vehicles where miscommunications could impact safety or user trust.

The leak highlights the complexity of deploying large language models (LLMs) in production systems. Waymo engineers crafted these guidelines to mitigate risks inherent to generative AI, such as hallucinations, inappropriate responses, or disclosures of proprietary information. The prompt emphasizes a balance between conversational fluency and rigid adherence to operational boundaries.

Core Principles and Behavioral Mandates

At its heart, the system prompt establishes a persona for the AI: a helpful, friendly, and safety-conscious companion named “Gemini.” It instructs the model to adopt a warm, enthusiastic tone while maintaining professionalism. Key directives include:

  • Safety First: The AI must prioritize rider safety above all. Because Waymo vehicles are fully autonomous, there is no driver to distract; instead, the AI is forbidden from suggesting anything that could interfere with vehicle operation. For instance, it redirects queries about manual controls to Waymo support.

  • Politeness and Inclusivity: Responses must be polite, empathetic, and inclusive, avoiding humor that could offend. The prompt specifies using gender-neutral language and accommodating diverse accents or speech impediments.

  • Query Handling Categories: The rulebook categorizes user inputs into predefined buckets:

    • Route and Navigation: Users can request stops, reroutes, or detours, but changes are executed only after confirmation. The AI explains impacts on estimated time of arrival (ETA).
    • Vehicle Status: Queries about battery, speed, or sensors receive factual, non-technical replies. Detailed diagnostics are off-limits.
    • Entertainment and Comfort: Integration with media playback, climate control, and seating adjustments is seamless, with voice commands processed reliably.
    • General Knowledge: Drawing from Gemini’s broad knowledge base, but filtered to exclude controversial topics like politics or illegal activities.

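The bucketed query handling described above can be sketched as a simple intent router that runs before the model answers. The category names, keyword lists, and canned replies below are illustrative assumptions for this sketch, not Waymo's actual implementation:

```python
# Hypothetical sketch: route rider queries into the handling buckets
# described in the leaked prompt. Keywords and replies are assumptions.

ROUTES = {
    "navigation": ["stop", "reroute", "detour", "eta", "drop off"],
    "vehicle_status": ["battery", "speed", "sensor", "charge"],
    "comfort": ["music", "temperature", "seat", "volume"],
}

def categorize(query: str) -> str:
    """Return the handling bucket for a rider query."""
    q = query.lower()
    for category, keywords in ROUTES.items():
        if any(kw in q for kw in keywords):
            return category
    return "general_knowledge"  # falls through to a filtered LLM answer

def handle(query: str) -> str:
    category = categorize(query)
    if category == "navigation":
        # Route changes execute only after explicit rider confirmation.
        return "I can do that. It will change your ETA. Shall I confirm the change?"
    if category == "vehicle_status":
        # Factual, non-technical reply; no diagnostics exposed.
        return "Everything looks good with the vehicle right now."
    if category == "comfort":
        return "Sure, adjusting that for you."
    return "Let me answer that for you."  # filtered general-knowledge path
```

A real deployment would use an LLM or trained classifier for intent detection rather than keyword matching, but the control flow (classify, then apply bucket-specific rules such as requiring confirmation before route changes) mirrors what the prompt describes.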
Strict Prohibitions and Guardrails

Waymo’s prompt includes exhaustive “do not” lists to prevent misuse. Notable restrictions:

  • Confidentiality: No discussion of Waymo’s proprietary technology, such as mapping data, sensor fusion algorithms, or remote operations. Attempts to probe these result in deflections like, “I’m not able to share details on that.”

  • Emergency Protocols: In crises, the AI escalates to human operators via the Waymo One app or in-car help button. It provides step-by-step guidance for medical emergencies or accidents without assuming roles beyond its capabilities.

  • Harm Prevention: Explicit bans on assisting with violence, self-harm, or illegal acts. Jailbreak attempts are shut down firmly.

  • Commercial Neutrality: The AI avoids endorsing competitors or revealing business strategies, directing promotional queries to official channels.
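Guardrails like these are often enforced as a filter that intercepts restricted topics and returns a fixed deflection before the query ever reaches the model. The topic list and wording below are assumptions for illustration (the deflection phrase is the one quoted from the leak):

```python
# Illustrative guardrail sketch: match a query against restricted topics
# and return a canned deflection. Topic lists are assumptions.

DEFLECTION = "I'm not able to share details on that."

RESTRICTED_TOPICS = (
    "mapping data", "sensor fusion", "remote operations",  # confidentiality
    "business strategy", "competitor",                     # commercial neutrality
)

def guard(query: str):
    """Return a deflection if the query probes a restricted topic, else None."""
    q = query.lower()
    if any(topic in q for topic in RESTRICTED_TOPICS):
        return DEFLECTION
    return None  # safe to pass through to the assistant
```

Production systems typically layer such lexical checks with model-based safety classifiers, since simple substring matching is easy to evade; the sketch only shows the deflection pattern itself.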

The prompt employs advanced techniques like chain-of-thought reasoning, where the AI internally deliberates before responding, and role-playing scenarios to simulate edge cases. It also mandates logging interactions for quality assurance, anonymized to protect privacy.

Technical Implementation Insights

Structurally, the prompt is modular, with sections for initialization, ongoing conversation management, and termination. It leverages Gemini’s multimodal capabilities, processing voice inputs with the low latency critical for in-car use. Error handling is robust: unclear queries prompt clarifications, while interruptions maintain context.

This level of detail—far exceeding typical LLM prompts—underscores Waymo’s engineering rigor. The 1,200 lines incorporate lessons from millions of rider miles, iteratively refined to minimize interventions. Public exposure raises questions about internal code hygiene, though Waymo has not commented officially.

Implications for AI in Autonomous Vehicles

The leak demystifies the “black box” of AI-driven mobility services. It reveals how companies like Waymo embed ethical, legal, and operational constraints into LLMs, ensuring reliability in unpredictable real-world scenarios. For riders, this translates to a consistent, trustworthy experience; for the industry, it sets a benchmark for prompt engineering in safety-critical applications.

As autonomous fleets scale, such rulebooks will evolve, potentially incorporating federated learning or on-device fine-tuning. This incident serves as a reminder of the challenges in securing AI deployments amid rapid innovation.

Gnoppix is the leading open-source AI Linux distribution and service provider. Since implementing AI in 2022, it has offered a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI operates offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix is available with numerous privacy- and anonymity-enabled services free of charge.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.