Study Reveals Labeling Ads as AI-Generated Reduces Click-Through Rates by 31%
A recent academic study has uncovered a significant consumer behavior shift in the digital advertising landscape: simply disclosing that an advertisement was generated by artificial intelligence can slash click-through rates by 31 percent. Conducted by researchers from the University of Pennsylvania’s Wharton School and the University of California San Diego’s Rady School of Management, the experiment provides empirical evidence of growing skepticism toward AI-created content among online shoppers.
The study, titled “AI Disclosure Reduces Ad Click-Through Rates,” involved over 10,000 participants across multiple controlled online experiments. Researchers presented participants with a variety of product advertisements, including apparel, electronics, and beauty items, sourced from real-world e-commerce platforms. These ads were divided into groups: some labeled explicitly as “AI-Generated Advertisement,” others unmarked, and a control set attributed to human creators. Critically, the ads themselves were a mix of genuine AI-generated visuals—produced using tools like DALL-E and Midjourney—and human-designed ones, ensuring the disclosure effect was isolated from actual content quality.
Results were striking and consistent. When ads bore the AI label, click-through rates plummeted by an average of 31 percent compared to unlabeled counterparts. This penalty persisted even for ads that were entirely human-made but falsely labeled as AI-generated, highlighting a broad stigma attached to the technology rather than flaws in specific outputs. Conversely, labeling human-created ads as such boosted clicks by about 6 percent, suggesting a premium for perceived authenticity.
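To make the reported effect sizes concrete, here is a small illustrative calculation. The 2.0% baseline click-through rate is an assumed figure for illustration only; the study reports relative changes, not absolute CTRs.

```python
# Hypothetical illustration of the study's reported relative effects.
# The 2.0% baseline CTR is an assumption, not a figure from the paper.
baseline_ctr = 0.020                             # unlabeled ads (assumed baseline)
ai_labeled_ctr = baseline_ctr * (1 - 0.31)       # 31% relative drop for "AI-Generated" label
human_labeled_ctr = baseline_ctr * (1 + 0.06)    # ~6% relative lift for human-made label

print(f"Unlabeled:     {baseline_ctr:.2%}")      # 2.00%
print(f"AI-labeled:    {ai_labeled_ctr:.2%}")    # 1.38%
print(f"Human-labeled: {human_labeled_ctr:.2%}") # 2.12%
```

In other words, under this assumed baseline, an AI label would cost the advertiser roughly 6 clicks per 1,000 impressions, while a human-made label would gain about 1.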
Lead researcher Antonio Moreno, an associate professor of marketing at Wharton, explained the findings in a press release: “Consumers are wary of AI in advertising because they associate it with lower quality, deception, or lack of personalization.” Participants in follow-up surveys echoed this sentiment, citing concerns over “soulless” creativity, potential misinformation, and diminished trustworthiness. Notably, the effect was uniform across demographics, with no significant variations by age, gender, or prior AI exposure, indicating a pervasive bias.
The experiments employed rigorous methodologies to mimic real-world ad encounters. Participants navigated simulated e-commerce environments on platforms resembling Amazon or Shopify, where ads appeared in search results, product recommendations, and banners. Click data was tracked anonymously, and post-exposure questionnaires probed attitudes toward the ads. Statistical analysis, including regression models controlling for ad type, product category, and user variables, confirmed the 31 percent drop with high confidence (p < 0.001).
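A simplified way to see how such a comparison reaches statistical significance is a two-proportion z-test on simulated click data. This is a stand-in sketch, not the paper's actual regression models; the group sizes and click rates below are assumptions chosen to mirror the reported effect.

```python
import math
import random

random.seed(0)

# Hypothetical simulation: two equal-size groups see unlabeled vs. AI-labeled
# ads, with assumed true CTRs of 2.0% and 1.38% (a 31% relative drop).
# Sample size and rates are illustrative, not the study's data.
n = 5000
clicks_unlabeled = sum(random.random() < 0.0200 for _ in range(n))
clicks_labeled   = sum(random.random() < 0.0138 for _ in range(n))

p1, p2 = clicks_unlabeled / n, clicks_labeled / n
pooled = (clicks_unlabeled + clicks_labeled) / (2 * n)

# Two-proportion z-test: a much simpler analysis than the paper's
# regressions, but the same basic question — is the gap real?
se = math.sqrt(pooled * (1 - pooled) * (2 / n))
z = (p1 - p2) / se
print(f"CTR unlabeled={p1:.3%}, AI-labeled={p2:.3%}, z={z:.2f}")
```

The paper's regression approach additionally controls for ad type, product category, and user variables, which a raw proportion test cannot do.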
This disclosure penalty has profound implications for marketers as AI tools proliferate in ad production. Tools like Adobe Firefly, Google’s Imagen, and Stability AI’s offerings enable rapid, cost-effective generation of visuals, copy, and even video. Industry reports estimate that by 2025, over 30 percent of digital ads could incorporate AI elements. However, the study warns that mandatory or voluntary AI labeling—potentially mandated by emerging regulations like the EU AI Act or California’s transparency laws—could erode effectiveness.
Regulators worldwide are increasingly scrutinizing AI in consumer-facing applications. The FTC in the U.S. has signaled interest in disclosure requirements, while platforms like Meta and Google already experiment with watermarking AI content. The researchers advocate for balanced policies: “While transparency builds long-term trust, over-labeling could stifle innovation,” Moreno noted.
From a strategic standpoint, advertisers face dilemmas. Blending AI with human oversight—known as “human-in-the-loop” workflows—might seem a way to mitigate distrust, but the study found that disclosing any AI involvement still triggered the penalty. Alternatively, some brands could lean into AI as a feature, marketing “AI-enhanced” creativity to tech-savvy audiences, though the data suggests resistance is broad.
The study also touches on broader AI adoption trends. Surveys within the experiment revealed that 68 percent of participants viewed AI-generated ads as less creative and 72 percent as less trustworthy than human equivalents. This aligns with parallel research on AI in journalism and art, where disclosures similarly dampen engagement.
Limitations acknowledged by the authors include the lab-like setting, which may not fully capture habitual browsing behaviors, and a U.S.-centric sample. Future work plans to test dynamic ads, video formats, and international cohorts.
As AI reshapes advertising, this research underscores a core tension: technological efficiency versus human preference for authenticity. Marketers must now weigh cost savings against engagement losses, potentially rethinking disclosure strategies or investing in hybrid production models.
What are your thoughts on this? I’d love to hear about your own experiences in the comments below.