Grammarly Shuts Down Expert Review Feature Following $5 Million Grammarly AI Lawsuit
On March 11, 2026, investigative journalist Julia Angwin filed a class action against Superhuman Platform Inc., the parent company of the ubiquitous writing assistant Grammarly. Filed in the US District Court for the Southern District of New York, the Grammarly AI lawsuit targets a specific feature launched in August 2025: "Expert Review." Packaged into the $12 monthly Pro subscription, this tool used large language models to generate writing feedback and directly attributed that advice to real, recognizable writers, journalists, and subject matter experts. None of them consented to having their identities used to sell software subscriptions.

Faced with mounting public backlash and undeniable legal pressure, CEO Shishir Mehrotra issued a public apology on LinkedIn. Grammarly has now completely disabled the Expert Review feature.

The core of the issue centers on identity appropriation at scale. Generative AI tools have historically scraped the web for training data, which sparks copyright disputes. The Grammarly AI lawsuit highlights a different legal vulnerability: scraping a person's name and reputation, integrating it into a user interface, and selling it as a premium feature.

Assessing Your Status in the Grammarly AI Lawsuit: Legal Avenues and User Solutions
Because this legal action involves specific software features and distinct groups of affected individuals, finding out where you stand depends entirely on whether you are a subscriber or a creator.

Checking If Your Name Was Triggered in the Grammarly AI Lawsuit

Before Grammarly pulled the plug on the feature, the company required experts to actively opt out by emailing customer support to have their names removed from the system's prompt outputs. This requirement sits at the heart of the legal complaint.

If you are a published author, journalist, or public academic, you may be automatically included in the plaintiff class under Rule 23 of the Federal Rules of Civil Procedure. Once the court certifies a class action, affected individuals do not need to take immediate affirmative action to join; they are typically included by default. If you discover your name was used in the Expert Review feature and want to actively participate or provide evidence of reputational damage, such as instances where the AI generated poor, incomprehensible advice under your name, you can contact Peter Romer-Friedman Law PLLC, the firm handling the suit. Early reports indicate between 40 and 50 writers have already approached the firm. Those who wish to pursue their own legal claims separately from the class must submit a formal opt-out request to the court once the notice period begins.

Why Everyday Subscribers Cannot Claim Compensation in the Grammarly AI Lawsuit

Regular software users searching for a payout will be disappointed. The Grammarly AI lawsuit class definition covers the individuals whose names and identities were misappropriated for commercial gain. It does not cover the millions of everyday users or students who paid $12 a month for the Pro version. Even though users might feel shortchanged by paying for a feature that generated what Angwin described as "very bad" and nearly incomprehensible advice, consumer dissatisfaction regarding software quality is not the basis of this specific right-of-publicity litigation. You cannot join this lawsuit just because you used the app.

Background: What Triggered the Grammarly AI Lawsuit?

Background: What Triggered the Grammarly AI Lawsuit?

Grammarly has spent years building a reputation as a benign, helpful spell-checker and syntax editor, successfully integrating itself into the workflows of enterprise businesses, freelance writers, and major universities across the US and UK. The push to monetize generative AI forced the company to look beyond basic comma placement.

The Rollout of the Grammarly AI Expert Review Feature

In August 2025, Grammarly attempted to differentiate its premium tiers by launching Expert Review. The premise was straightforward. Instead of getting generic AI feedback, users could select a famous author—say, horror icon Stephen King or investigative journalist Julia Angwin—and receive customized feedback styled as if that person had read their draft.

The execution proved disastrous. AI models do not possess the actual editing skills of the writers they mimic. They predict text based on statistical probabilities. When users triggered the Expert Review feature, the LLM hallucinated feedback that frequently missed the mark, mangled the user's original intent, and produced structural nonsense. By explicitly naming real people in the user interface—saying, in effect, "Here is what Julia Angwin thinks of your writing"—Grammarly attached professional reputations to machine-generated garbage.

Writers found out their names were being used as a product feature. The blowback was immediate. The company’s defense that the AI was simply offering advice "inspired by" experts fell flat when critics pointed out that the entire marketing appeal relied on the illusion of real expert involvement. Mehrotra’s eventual decision to sunset the feature acknowledged low user adoption, but the swift filing of the lawsuit made the shutdown inevitable.

Data and Demands: The Core of the Grammarly AI Lawsuit

Technology platforms usually rely on Section 230 of the Communications Decency Act to avoid liability for content generated on their platforms. That defense rarely shields companies from intellectual property and right of publicity violations when the company designs the exact mechanism causing the infringement.

Examining the $5 Million Threshold in the Grammarly AI Lawsuit

The initial court filings estimate commercial damages exceeding $5 million. This specific figure is highly relevant in US federal court. Under the Class Action Fairness Act (CAFA), $5 million is the exact jurisdictional threshold required to move a mass consumer or class action lawsuit out of state courts and into the federal system. It ensures the case will be handled at a higher level with broader implications.

The plaintiffs anchor their financial demands on existing state laws in New York and California. Both states maintain strict commercial protections against the unauthorized use of a person's name, image, or likeness for profit. The laws treat identity as a commodity. A local car dealership cannot put Stephen King's face on a billboard to sell minivans without paying him. The lawsuit argues that a software company cannot embed his name into a premium digital feature to sell $12 monthly subscriptions without securing the same commercial licensing.

Controversy: AI Identity Appropriation at Scale
The mechanics of this case highlight a massive blind spot in software development regarding generative AI. Product managers treated human identities as open-source API calls.

Debating Product Design in the Grammarly AI Lawsuit

Grammarly executives initially attempted to soften the impact by framing the feature as a sophisticated search function. They claimed the AI was just aggregating publicly available stylistic information to generate a specific tone. From a purely technical standpoint, training an LLM on public texts to understand a writing style is a common industry practice.

Legal experts and academics quickly dismantled that defense. The issue was not the background training data; it was the foreground product design. Grammarly made a deliberate, engineered choice to explicitly print the names of living professionals in its software interface to lend unearned credibility to AI-generated text. The system did not just say "Make this scarier." It explicitly promised the expertise of specific individuals.

For writers like Angwin, the damage went beyond commercial exploitation. A journalist’s career relies entirely on their editorial judgment. Having an AI mass-produce bad editorial advice and stamp a journalist's name on it functions as a form of automated reputational damage. Grammarly created a product that essentially counterfeited human expertise and sold it at retail scale.

Outlook: What the Grammarly AI Lawsuit Means for Software Development
The immediate shutdown of the Expert Review feature signals a hard limit on how far tech companies can push AI wrappers. Slapping an LLM into a text box and generating responses is cheap. Making those responses feel valuable is difficult. Grammarly tried to bridge that value gap by borrowing the credibility of human experts who spent decades building their craft.

Shifting Norms After the Grammarly AI Lawsuit

Product teams across the industry will have to audit their tools for similar mechanics. Features that rely on mimicking real-world professionals without explicit, signed licensing agreements represent an active liability. The old playbook of launching a disruptive feature, forcing victims to figure out how to opt out, and apologizing later is no longer viable. Courts are drawing a sharp line between analyzing a person's work and selling their identity.

As long as companies build features that treat living people as prompt modifiers rather than individuals with legal rights, these clashes will multiply. Selling a counterfeit version of a professional's judgment is not a technical innovation. It is an intellectual property violation rendered through code. The permanent removal of Grammarly's Expert Review tool proves that no amount of algorithmic complexity can override basic commercial law.

FAQ

What is the Grammarly AI lawsuit about?

The lawsuit was filed in March 2026 by journalist Julia Angwin against Grammarly's parent company. It alleges the software's AI "Expert Review" feature used the names and identities of real authors and journalists to generate writing advice without their permission. The suit seeks over $5 million in damages for unauthorized commercial exploitation.

Who is eligible to join the Grammarly AI lawsuit?

The lawsuit class comprises the authors, journalists, academics, and public figures whose names were integrated into the Expert Review feature to generate AI feedback. Ordinary software users and subscribers who paid for the tool are not part of the plaintiff class.

Why did Grammarly shut down the Expert Review feature?

Following severe criticism from writers and the imminent threat of litigation, Grammarly disabled the feature completely. CEO Shishir Mehrotra issued a public apology, admitting the tool failed to meet expectations and lacked appropriate consent mechanisms.

How did the AI Expert Review feature actually work?

Available in the $12 Pro tier launched in August 2025, the tool allowed users to select a specific recognizable expert. The software then used a large language model to hallucinate writing feedback and presented it to the user as if that specific expert had reviewed their work.

Do I need to opt out if I am a writer affected by this feature?

Before the feature was disabled, Grammarly required writers to actively email support to opt out of having their names used. Under current federal class action rules, if the court certifies the lawsuit class, affected writers will be included automatically unless they formally opt out of the litigation to pursue individual claims.

Does the Grammarly AI lawsuit affect normal free or Pro users?

No. Standard grammar and spelling checks continue to function normally. The legal action strictly targets the unauthorized use of human identities in the now-defunct Expert Review module, not the broader utility of the software.
