Microsoft Secretly Made Copilot a Co-Author on 4 Million Commits. Developers Found Out.
- Aisha Washington

- 5 days ago
- 9 min read
Sometime in late April 2026, developers started finding an unexpected name in their Git history.
"Co-authored-by: Copilot <copilot@github.com>."
It appeared on commits they had written entirely by hand. On commits where AI features had been explicitly disabled. On production code, open source projects, enterprise repositories. An estimated 4 million commits across GitHub carried the tag before anyone understood what was happening. The developer reaction was immediate: 1,458 points and 805 comments on Hacker News, furious threads on Reddit, and a single word repeated across forums, "vandalism."
Microsoft had changed one default in VS Code, shipped it without a release note, and accidentally created the defining moment in the AI attribution debate. The company reversed the change within days. But the question the incident raised will outlast any patch: when an AI tool silently claims credit for your work, who actually owns what you built?
What Actually Happened
The feature itself was modest. git.addAICoAuthor, introduced in VS Code 1.110 in March 2026, was designed to append a co-author trailer to Git commits when GitHub Copilot contributed code. It launched with the default set to "off," a sensible starting position for a feature that modifies permanent repository metadata.
On April 16, a pull request changed everything. PR #310226, reviewed and merged by VS Code team member Dmitriy Vasyura, changed the default from "off" to "all." The "all" setting did exactly what it sounds like: it added "Co-authored-by: Copilot" to every commit made through VS Code, regardless of whether Copilot had contributed anything. As ItsFOSS documented, the setting also ignored chat.disableAIFeatures: developers who had explicitly turned off all AI functionality in their editor still got the tag.
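For developers who want the behavior pinned regardless of what a future update ships, the setting can be checked and set by hand. A minimal sketch, assuming the standard VS Code user-settings locations and the setting name as reported above:

```sh
# Check whether the co-author setting is declared in your user settings.
# Linux path shown; on macOS the file lives at
# "~/Library/Application Support/Code/User/settings.json",
# on Windows at "%APPDATA%\Code\User\settings.json".
grep -n '"git.addAICoAuthor"' ~/.config/Code/User/settings.json

# If the key is absent, the editor falls back to the shipped default --
# the value that changed underneath developers. Pin it explicitly by
# adding this line to settings.json:
#   "git.addAICoAuthor": "off"
```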
VS Code 1.118 shipped on April 29 with the new default. There was no release note. No notification. No documentation explaining the change. Developers discovered it the way developers discover everything: by reading their own git logs and finding something that should not have been there.
The scale was not small. ByteIota estimated 4 million commits were affected. GitHub Copilot is used by 90% of Fortune 100 companies. Every one of those organizations had AI co-author tags silently injected into their commit histories, including repositories governed by SOC 2, ISO 27001, and FDA software validation requirements.
Microsoft reversed the default in VS Code 1.119 in early May and issued an apology. A spokesperson told The Register that the change was intended to "provide transparency" but acknowledged the implementation "did not meet developer expectations." The default was changed back to "off." The 4 million commits remained.
Why Developers Were Furious
The fury was not about a git trailer. It was about who gets to put their name on your work.
Start with the trust violation. VS Code is the default editor for a generation of developers. It runs inside the world's largest companies, on the world's most sensitive codebases. Changing a default that modifies permanent repository metadata, without notification, in a tool that commands that level of trust, is not a feature update. It is a breach of the implicit contract between a tool and its users. Developers do not read every changelog. They trust that defaults shipped by the editor they use every day will not silently alter the legal record of their work.
Then there is copyright. The US Copyright Office has ruled that AI-generated works without sufficient human authorship cannot be copyrighted. If "Co-authored-by: Copilot" appears on a commit, it raises an uncomfortable question: does Microsoft, or GitHub, or OpenAI, hold partial copyright on that code? The answer is almost certainly no. But the ambiguity itself is the problem. Open source licenses from GPL to MIT to Apache assume human authorship. Adding an AI co-author to every commit, including commits where no AI was used, introduces legal uncertainty into every project that received one. ByteIota called it "copyright chaos." The description was not hyperbolic.
For enterprise developers, the problem was more concrete. Regulated industries require every commit to be traceable to an authorized human developer. SOC 2 audits ask: who wrote this code and when? ISO 27001 requires: is this change authorized? FDA software validation demands: can you prove a qualified engineer reviewed this? Automated AI co-author tags break every one of those chains. Four million commits represent a significant compliance headache, one that Microsoft created, shipped, and fixed only after the backlash.
And the open source dimension compounds everything. Maintainers of projects governed by contributor license agreements woke up to find their commit histories altered by a third party. If a project requires contributors to sign a CLA, and Copilot appears as co-author on their commits without having signed anything, the legal foundation of the project's IP assignment is weakened. The damage is not theoretical: it is metadata, and metadata lasts forever.
The Uncomfortable Question: Was This an Accident?
Microsoft did not accidentally ship a bad default. It tested whether developers would tolerate AI claiming credit for their work. The answer was no.
Look at the timeline. The feature was introduced in March with the default set to "off," the obviously correct choice. On April 16, a PR changed it to "all," the most aggressive possible setting. On April 29, it shipped. Three deliberate steps across six weeks. A rushed mistake would not follow that arc.
Look at the setting choices. git.addAICoAuthor offers three options: "off," "own," and "all." "Own" would have added the co-author trailer only when Copilot actually contributed code, a defensible, transparent default. "All" was a choice to maximize attribution at the expense of accuracy. A PR that changes a setting from "off" to "all" when "own" is available is making a statement, not fixing a bug.
Look at what the setting ignored. chat.disableAIFeatures is the nuclear option, a developer saying "I do not want AI involved in my workflow." The "all" default overrode it. Developers who had explicitly opted out of AI entirely still got AI co-author tags. That crosses from aggressive to indefensible.
And look at the competitive context. GitHub Copilot is fighting a three-way war with Cursor, which just crossed $2 billion in annualized revenue and is reportedly seeking a $50 billion valuation, and Claude Code, which leads developer satisfaction surveys with a 46% "most loved" rating. Copilot holds the enterprise, 90% of the Fortune 100, but Cursor is growing faster. In that environment, co-author tags on millions of commits function as organic marketing: every git log becomes a Copilot billboard, and every commit carries an implicit endorsement.
No release note. No user notification. A feature that modifies permanent repository metadata, shipped silently, set to maximum attribution, overriding explicit opt-outs, and serving a clear marketing purpose. Developers on Hacker News did not call it an accident. They called it what it looked like.
The apology confirmed as much. Microsoft framed the original change as a "transparency" feature, but implemented it in the least transparent way possible: silently, without notification, and with the broadest possible default. A feature designed to increase trust in AI attribution destroyed trust in the tool that implemented it.
Comparison: When Tools Start Signing Your Work
The Copilot incident is unprecedented in its specifics, but the tension it exposed is everywhere.
Grammarly corrects your writing. It does not add "Co-authored-by: Grammarly" to your byline. Adobe Photoshop's AI generative fill does not watermark images as "Co-created with Adobe AI." Midjourney labels images as AI-generated for platform compliance, but individual artists are not forced to credit the model in their portfolios. These tools made a choice: the user owns the output. The tool is invisible.
GitHub Copilot made a different choice, and Git made that choice consequential. Git history is not a creative canvas. It is a legal record. The co-author field in a commit has specific meaning in open source governance, copyright assignment, and enterprise compliance. Changing it without consent is not like adding a watermark. It is like altering a signature.
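To make that concrete, here is what an affected commit looks like in the log. The hash, author, and message below are hypothetical; the trailer on the last line is the standard Git co-author convention the feature wrote:

```sh
$ git log -1 --format=full
commit 9b1f2e4c0d3a5b7e9f1a2b3c4d5e6f7a8b9c0d1e
Author: Jane Developer <jane@example.com>
Commit: Jane Developer <jane@example.com>

    Fix off-by-one in pagination

    Co-authored-by: Copilot <copilot@github.com>
```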
What makes Copilot uniquely dangerous in this context is its position in the stack. It operates at the infrastructure layer, silent, automatic, embedded in the default editor used by millions. Grammarly is a plugin you install. Midjourney is a website you visit. Copilot is the default in the tool you open every morning. When infrastructure changes how it records your work, you may not notice until a lawyer asks.
The industry has no standard for AI attribution in professional tools. Microsoft attempted to set one unilaterally, by changing a default, not by proposing a standard. The developer community's response was not just about this default. It was about the principle that standards affecting legal records cannot be set by a single company in a single PR.
What's Next
Microsoft reversed the default, but the 4 million commits remain. For enterprises in regulated industries, those commits represent a cleanup project: auditing repositories, documenting the metadata contamination, and reassuring auditors that the AI co-author tags do not reflect actual AI authorship. Expect SOC 2 frameworks to add explicit guidance on AI metadata in commit histories within the next audit cycle.
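A rough first pass at that audit is a search over commit messages across every repository. A sketch, assuming the repositories are checked out under a single ~/repos directory:

```sh
# Count injected co-author trailers in every repo under ~/repos.
# --all walks every ref; --grep matches commit messages, trailers included.
for repo in ~/repos/*/; do
  count=$(git -C "$repo" log --all --grep='Co-authored-by: Copilot' --oneline | wc -l)
  [ "$count" -gt 0 ] && echo "$repo: $count tagged commits"
done
```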
The trust damage will take longer to repair. Developers will be watching every VS Code update more carefully, and competitors know it. Cursor and Claude Code both have an opening to differentiate on trust and transparency. Cursor's growth trajectory already threatens Copilot's enterprise dominance. A trust advantage accelerates the threat.
Regulatory attention is likely. The copyright implications of automated AI co-authorship have not been tested in court, but they intersect with active FTC and EU regulatory interest in AI transparency. If a regulator asks "did Microsoft claim copyright on developer code through automated co-author tags," the answer is no, but the fact that the question can even be asked is the problem.
The broader question is whether the industry learns the right lesson. Every AI tool company faces the same temptation: claim credit for user output to demonstrate adoption. ChatGPT does not add "Written with ChatGPT" to your emails. Notion AI does not stamp "Co-created with Notion AI" on your documents. The Copilot incident is the first major test of where the line is, and the line held. Developers refused to let a tool sign their work.
The standard that emerges from this incident will shape the next decade of AI tool governance. It should be simple: opt-in, granular, auditable, and never applied to work the AI did not do. Microsoft learned this the hard way. The question is whether the next company learns it before shipping.
FAQ: Common Questions About the Copilot Co-Author Controversy
Did Microsoft do this on purpose?
The timeline suggests deliberation, not accident. The feature was introduced in March with the default set to "off." A PR on April 16 changed it to "all," the most aggressive setting. It shipped April 29 with no release note. Three steps over six weeks. The "all" default ignored developers who had explicitly disabled AI features. Whether intentional or negligent, the outcome was the same: 4 million commits tagged without consent.
Does this affect my open source project's license?
Potentially. The US Copyright Office has ruled that AI-generated works without sufficient human authorship cannot be copyrighted. If "Co-authored-by: Copilot" appears on your commits, it introduces ambiguity about authorship. Most open source licenses assume human contributors. The legal implications have not been tested in court, but the metadata contamination is real and permanent.
How do I check if my commits were affected?
Run git log --format=full in your repository and search for "Co-authored-by: Copilot." If you find it on commits where you did not use Copilot, your repository was affected. You can also check your VS Code settings for git.addAICoAuthor to confirm its current value.
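Concretely, either of these does the job; the second narrows the output to only the affected commits:

```sh
# Dump full commit metadata and search for the trailer:
git log --all --format=full | grep 'Co-authored-by: Copilot'

# Or list just the affected commits, one per line:
git log --all --grep='Co-authored-by: Copilot' --oneline
```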
Will there be regulatory consequences?
Likely. The copyright and compliance implications intersect with active FTC and EU regulatory interest in AI transparency. Enterprise compliance frameworks like SOC 2 will likely add guidance on AI metadata in commit histories. The incident has accelerated the conversation about AI attribution standards.
Should I stop using Copilot?
The co-author feature was the problem, not Copilot itself. The setting has been reverted. But the trust damage is real. Developers who care about the integrity of their git history should verify their settings and watch future VS Code updates more carefully. The principle at stake, opt-in defaults for features that modify legal records, applies to every AI tool, not just Copilot.
The next time you open your editor, check your settings. A single VS Code setting, git.addAICoAuthor, determines whether your name stands alone on work you did yourself. Microsoft learned that developers notice when a tool claims credit for their labor. The company fixed the setting in a week. The principle it tested will take years to settle.
AI tools are supposed to amplify your work, not claim it. Trust is not a configuration option. It is the product. And once it is gone, no git revert can bring it back.
The Copilot incident is a test case for the entire AI industry. When a tool modifies your permanent record without asking, the trust damage is not proportional to the size of the code change. It is proportional to what the change implies about who controls your work. The question for every other AI tool company is whether they need their own version of this incident before they learn that lesson. Knowledge workers who rely on AI tools, from code editors to knowledge bases, should be asking the same question: who controls the record of what I built?


