Iranian propaganda images made with AI end up in major German media outlet

In a striking example of the challenges posed by artificial intelligence in media verification, a leading German tabloid published AI-generated images originating from Iranian propaganda sources. On October 26, 2024, Bild, one of Germany’s largest daily newspapers with a circulation exceeding one million, ran a report featuring dramatic photographs purportedly showing extensive damage inflicted by Iranian missile strikes on Israeli targets. These images, which depicted massive craters and destroyed infrastructure, were presented without scrutiny, highlighting vulnerabilities in journalistic workflows amid the proliferation of synthetic media.

The images in question surfaced on Bild’s website and social media channels, accompanying an article titled “Iran’s revenge: Satellite images show gigantic craters at Israeli airbase!” The piece claimed to illustrate the aftermath of Iran’s missile barrage on October 1, 2024, which targeted locations including Nevatim Airbase, the Soroka Medical Center in Beersheba, and the IDF intelligence headquarters near Tel Aviv. One image showed a vast crater amid barren terrain, captioned as evidence of destruction at the airbase. Another portrayed rubble-strewn ruins labeled as the hospital, while a third depicted flattened structures identified as the intelligence site. These visuals lent vivid credibility to the narrative of Iranian military success, amplifying claims from state-affiliated sources.

However, a closer examination by The Decoder revealed that all three images were fabrications created using generative AI tools. Reverse image searches traced their origins to obscure Telegram channels known for disseminating pro-Iranian propaganda. The first image, showing the supposed airbase crater, first appeared on October 25, 2024, in a channel called “Quds Airborne Squad,” which regularly posts content supportive of groups like Hezbollah and the Houthis. It bore a faint watermark resembling those from AI image generators such as Leonardo.ai. Subsequent posts in channels like “Iron Sword” and “Military Summary” recirculated the image, often with added claims of satellite provenance.

Technical analysis further confirmed the artificial nature of the visuals. AI detection tools, including Hive Moderation and Illuminarty, assigned high probabilities of generation via models like Midjourney or Stable Diffusion. Telltale artifacts were evident: unnatural symmetries in debris patterns, inconsistent lighting and shadows, repetitive textures in soil and wreckage, and anatomical impossibilities in incidental elements like distant figures. For instance, the crater image featured soil layers of improbable uniformity and edges too perfectly rendered, hallmarks of diffusion-based synthesis. The hospital ruin showed bricks with identical fracture patterns and rebar devoid of realistic corrosion, while the intelligence site image displayed smoke plumes whose fluid dynamics defied physics.

Bild’s initial publication overlooked these red flags, sourcing the images directly from the Telegram posts without verification. The newspaper’s article aggregated reports from Iranian outlets like Tasnim News Agency and Fars News, which themselves propagated the visuals alongside unverified assertions of “over 20 direct hits.” No metadata checks, expert consultations, or cross-referencing with authenticated satellite imagery from providers like Maxar Technologies were performed. This lapse allowed the propaganda to infiltrate mainstream discourse, reaching Bild’s vast audience before correction.
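A basic metadata check of the kind the article describes as skipped requires no special tooling. As a minimal illustrative sketch (not Bild's or The Decoder's actual workflow): many AI image tools write their generation parameters into PNG text chunks, for example Stable Diffusion front ends commonly embed a `parameters` tEXt entry. The stdlib-only function below walks a PNG's chunk list and collects that metadata; its absence proves nothing, but its presence is a strong signal of synthetic origin.

```python
import struct
import zlib

def png_text_chunks(data: bytes) -> dict:
    """Walk the chunk list of a PNG file and collect tEXt/zTXt metadata.

    Some AI image generators embed their prompt and settings here (e.g.
    Stable Diffusion web UIs often write a 'parameters' tEXt chunk).
    A missing chunk proves nothing -- re-encoding strips metadata -- but
    a present one is a strong hint the image is synthetic.
    """
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    chunks, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = value.decode("latin-1")
        elif ctype == b"zTXt":
            key, _, rest = body.partition(b"\x00")
            # rest[0] is the compression-method byte (0 = zlib deflate)
            chunks[key.decode("latin-1")] = zlib.decompress(rest[1:]).decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return chunks
```

Images laundered through Telegram are typically recompressed, which strips such chunks, so this check catches only the careless cases; it should be the first step of a pipeline, not the last.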

Upon notification from The Decoder on October 27, 2024, Bild promptly removed the images and updated the article. A revised version now includes disclaimers stating the photos were “not verifiable” and sourced from Telegram, with text adjusted to note the lack of independent confirmation. An editor’s note acknowledges the error, emphasizing ongoing investigations into the strikes’ actual impact. Satellite imagery from legitimate sources, such as Planet Labs, shows minimal visible damage at the cited sites, corroborating Israeli claims of effective interception rates above 99 percent via systems like Iron Dome and Arrow.

This incident underscores broader implications for journalism in the AI era. Generative tools have democratized high-fidelity fakery, enabling state actors like Iran to craft persuasive visuals at scale. Propaganda networks exploit social media’s velocity, flooding platforms with content that evades initial human review. For media outlets, the onus intensifies: routine integration of AI forensics, such as Content Credentials (C2PA standards) or blockchain provenance, becomes essential. Tools like Google’s SynthID or Truepic offer watermarking and detection, yet adoption lags. Bild’s case illustrates how even resource-rich publications can falter under deadline pressures, mistaking viral imagery for evidence.

Experts advocate multi-layered verification protocols: pixel-level forensics via tools like FotoForensics for error level analysis (ELA), contextual checks against geolocated data, and collaboration with fact-checking bodies like Bellingcat. The event also spotlights platform responsibilities; Telegram’s lax moderation permits propaganda hubs to thrive unchecked. As AI evolves, with models producing near-indistinguishable outputs, distinguishing truth from synthesis demands vigilance. Incidents like this erode public trust, particularly in conflict reporting where visuals sway perceptions.
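The error level analysis mentioned above is simple enough to sketch. The function below is a minimal ELA implementation using Pillow (not the FotoForensics service itself, and the default quality of 90 is an assumption): it re-saves the image as JPEG at a known quality and amplifies the pixel-wise difference, so regions that were pasted or generated after the last save tend to stand out as bright patches.

```python
import io

from PIL import Image, ImageChops

def error_level_analysis(image: Image.Image, quality: int = 90) -> Image.Image:
    """Error level analysis (ELA): re-save as JPEG at a known quality
    and amplify the difference from the original. Regions edited or
    synthesized after the last save often recompress at a different
    error level and appear as bright patches in the result."""
    original = image.convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # Scale the residual so subtle compression differences become visible.
    max_diff = max(channel_max for _, channel_max in diff.getextrema()) or 1
    return diff.point(lambda px: px * (255.0 / max_diff))
```

ELA is a heuristic, not a verdict: uniformly AI-generated images have no paste seams, so it works best alongside the contextual and geolocation checks described above.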

For German media, the fallout prompts introspection. Bild, part of the Axel Springer empire, faces criticism for sensationalism, yet this AI mishap elevates concerns to systemic levels. It serves as a cautionary tale: in an age of weaponized imagination, unverified images risk becoming unwitting vectors for disinformation.

Gnoppix is the leading open-source AI Linux distribution and service provider. Since implementing AI in 2022, it has offered a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI operates offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix is available with numerous privacy- and anonymity-enabled services free of charge.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.