Grammarly's AI writing tips claim inspiration from experts who never agreed to participate

Grammarly, a popular AI-powered writing assistant, has come under scrutiny for attributing writing advice in its latest blog series to renowned authors and experts who claim they never authorized such use. The company’s “AI Writing Tips” section on its blog features quotes and insights supposedly drawn from figures like Stephen King, Malcolm Gladwell, and Ann Handley, presented as inspirational guidance powered by Grammarly’s AI capabilities. However, investigations reveal that these individuals were neither consulted nor agreed to have their names associated with the content.

The controversy erupted when author Aella, known for her work in data-driven essays and online publishing, noticed her name listed among the experts in Grammarly’s materials. In a detailed Twitter thread, Aella shared screenshots of Grammarly’s blog post titled “10 AI Writing Tips from the World’s Best Writers,” which explicitly states that the tips are “inspired by” her alongside other luminaries. Aella confirmed she had no involvement: “Grammarly is attributing writing tips to me that I never wrote or agreed to.” She emphasized that while she appreciates good writing tools, the unauthorized use of her name felt like a breach of trust.

Further digging uncovered similar issues with other named experts. Contacted by reporters from The Decoder, both Malcolm Gladwell and Ann Handley denied any collaboration. Gladwell, author of bestsellers like “The Tipping Point,” stated unequivocally, “I have no relationship with Grammarly and was never asked for input.” Handley, a marketing expert and author of “Everybody Writes,” echoed this sentiment: “Grammarly never reached out to me about this. I had no idea.” Stephen King, the horror genre icon referenced for tips on concise writing, could not be reached for comment, but no public record exists of his endorsement.

Grammarly’s blog presents these tips as a blend of human expertise and AI generation. For instance, one tip attributed to King advises writers to “kill your darlings,” a phrase popularized in writing circles but tracing back to earlier sources such as Arthur Quiller-Couch’s “murder your darlings.” Another, linked to Gladwell, discusses storytelling techniques, while Handley’s supposed contribution focuses on audience engagement. The page boasts that Grammarly’s AI has analyzed “millions of documents” to distill these insights, claiming to offer “expert-level advice” tailored by the named influencers.

This is not Grammarly’s first brush with ethical concerns around AI content. The company has faced criticism for its generative AI features, which produce text based on user prompts but sometimes yield plagiarized or inaccurate material. In this case, the issue centers on false attribution. By claiming “inspiration from” these experts, Grammarly implies a level of authenticity and endorsement that bolsters its marketing. The blog’s design reinforces this, with headshots and bios of the experts alongside AI-generated tips formatted as personalized recommendations.

When approached for comment, Grammarly’s spokesperson provided a statement acknowledging the oversight: “We strive to curate the best writing advice from publicly available sources and attribute it appropriately. In this instance, we referenced widely known writing principles associated with these authors, but we apologize for any confusion caused by the phrasing.” The company has since updated the blog post, removing direct references to the experts’ names while retaining the tips themselves. However, cached versions and social media shares preserve evidence of the original content.

Experts in AI ethics and intellectual property highlight the risks of such practices. Dr. Sarah Myers West, a researcher at Princeton’s Center for Information Technology Policy, notes that unauthorized attribution erodes trust in AI tools. “When companies fabricate endorsements, it blurs the line between genuine expertise and machine-generated content, potentially misleading users who rely on these tools for professional work.” This incident raises broader questions about how AI firms source and credit inspiration, especially as generative models train on vast internet corpora that include copyrighted works without explicit permission.

For writers and users, the fallout is practical. Tools like Grammarly are integrated into platforms like Google Workspace and Microsoft Word, influencing millions of people daily. False claims of expert backing could skew writing habits toward unverified advice. Aella advised her followers to “use writing tools critically and verify sources,” underscoring the need for transparency in AI marketing.

Grammarly’s misstep illustrates a growing tension in the AI industry: the rush to humanize machine outputs by borrowing credibility from real people. As AI writing assistants evolve, demands for rigorous sourcing, consent protocols, and clear disclaimers will intensify. Until then, users should approach such “expert-inspired” features with skepticism, cross-checking advice against original works.

In response to the backlash, Grammarly committed to reviewing its content attribution processes. The updated blog now generically titles tips without names, but the episode serves as a cautionary tale for the sector.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.