AI Copyright Case Studies
The lawsuits that could determine whether creators get compensated when AI companies use their work — and what's happened so far.
Over 70 Lawsuits. Landmark Settlements. Here's What's Happening.
Over 70 active copyright infringement lawsuits have been filed against major AI companies. These cases are setting critical precedents that will determine how AI companies must compensate creators for using their work.
Major Court Decisions in 2025
Several landmark decisions in 2025 are shaping AI copyright law
Thomson Reuters v. ROSS Intelligence
Defendant: ROSS Intelligence
The Issue
Thomson Reuters sued ROSS for using Westlaw headnotes to train a competing AI-powered legal research tool.
The Decision
A Delaware federal court granted Thomson Reuters's partial motion for summary judgment on its direct infringement claim and rejected ROSS's fair use defense. This was the first major U.S. decision to reject an AI company's fair use argument.
On Appeal
The case is now on interlocutory appeal to the Third Circuit, making it the first AI fair use case to reach a federal appeals court. The Third Circuit's ruling will carry significant weight as binding precedent across Delaware, New Jersey, and Pennsylvania.
Significance
This case established that using copyrighted material to train a competing AI product is not automatically protected by fair use. With the Third Circuit now considering the appeal, this case could produce the first appellate-level ruling on AI training and fair use.
Bartz v. Anthropic (and Settlement)
Defendant: Anthropic AI
The Issue
Authors alleged that Anthropic used millions of digitized copyrighted books to train Claude without permission.
Court Decision (June 2025)
Judge William Alsup in San Francisco ruled that Anthropic's use of books for training was transformative, stating "The technology at issue was among the most transformative many of us will see in our lifetimes." However, the court also held that downloading and retaining pirated copies of books was not fair use, leaving Anthropic exposed to liability for its pirated library.
The Settlement (August 2025)
Despite the court's fair use ruling, Anthropic agreed to a landmark $1.5 billion class-action settlement covering approximately 500,000 works, at roughly $3,000 per work—the largest public copyright recovery in U.S. history. The settlement is pending final approval, with a fairness hearing scheduled for April 2026.
Significance
While the court found training transformative, the massive settlement shows AI companies are willing to pay substantial compensation rather than risk further litigation. This case demonstrates that creators can achieve significant recoveries even when fair use arguments have some merit. Notably, Judge Alsup drew a critical distinction: training on lawfully acquired material may qualify as fair use, but training on pirated copies does not.
Kadrey v. Meta Platforms
Defendant: Meta (Facebook)
The Issue
Authors sued Meta for allegedly using pirated copies of their novels to train LLaMA (Meta's large language model).
The Decision
U.S. District Judge Vince Chhabria ruled in favor of Meta, finding the training to be transformative fair use. The court held that the authors failed to present evidence that Meta's use harmed the market for their original works.
Judge's Reasoning
While the court found training LLMs "highly transformative" (favoring fair use), Judge Chhabria acknowledged they could "significantly dilute the market for a plaintiff's works" (favoring infringement). Ultimately, the lack of proven market harm was determinative.
Significance
This case shows that proving market harm is crucial for creators. However, it's a district court decision and not binding on other courts—different judges could reach different conclusions on similar facts.
Ongoing High-Profile Cases
These cases are actively progressing through federal courts and will shape AI copyright law
The New York Times v. OpenAI & Microsoft
Filed: December 2023
Claims: The Times alleges that the companies used "millions" of its copyrighted articles to train their AI models without consent, asserting that Microsoft and OpenAI are building a "market substitute" for its news.
Recent Development: The case is heading to summary judgment on April 2, 2026. In January 2026, the judge ordered OpenAI to produce 20 million ChatGPT logs relevant to the Times's claims, a massive discovery order that could reveal how the model uses copyrighted news content.
Why It Matters: This is one of the highest-profile AI copyright cases, involving a major news organization. The summary judgment ruling could be the most consequential AI copyright decision yet, potentially establishing whether large-scale training on news content constitutes fair use.
Getty Images v. Stability AI
Filed: February 2023 (parallel cases in the U.S. and UK)
Claims: Getty Images accused Stability AI of infringing more than 12 million photographs in building Stable Diffusion, alleging that Stability scraped the images without a license in order to build a product that competes directly with Getty.
UK Ruling (November 2025): The UK High Court ruled that AI model weights are not "copies" under UK copyright law, largely rejecting Getty's copyright claims in the UK proceeding. This was a significant win for AI companies under UK law, though UK and U.S. copyright frameworks differ substantially.
U.S. Case: The U.S. case remains pending and is proceeding on a separate track under U.S. copyright law, where different legal standards apply.
Why It Matters: The UK ruling introduced a major question: are model weights themselves infringing copies? The divergence between UK and U.S. proceedings highlights how different jurisdictions may reach very different conclusions on AI copyright.
Andersen v. Stability AI, Midjourney & DeviantArt
Filed: January 2023
Plaintiffs: Sarah Andersen and several other artists
Claims: In this landmark lawsuit, filed in the Northern District of California, visual artists claim their copyrighted works were used without permission to train AI image generators.
Why It Matters: This is one of the first cases brought by visual artists against AI art generators. It's setting precedents for how copyright law applies to image-generating AI systems.
Dow Jones v. Perplexity AI
Filed: 2024
Claims: Dow Jones alleges that its copyrighted works are accessed and copied into Perplexity's "retrieval-augmented generation" (RAG) database, and that Perplexity's models repackage those original works into verbatim or near-verbatim summaries.
Why It Matters: This case addresses RAG systems specifically, which work differently from pure language models. It could establish different rules for different AI architectures.
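To make the architectural distinction concrete, here is a minimal Python sketch of the difference at issue: a pure language model answers from its trained weights alone, while a RAG pipeline stores source documents and retrieves them at query time. All names and behavior below are illustrative assumptions for explanation only, not Perplexity's actual system.

```python
# Illustrative sketch only -- not any company's real implementation.

def pure_lm_answer(prompt: str) -> str:
    # A pure language model generates text from its trained weights alone.
    # No source document is stored or fetched at query time.
    return f"(generated from model weights for: {prompt})"

# A RAG system maintains a database (index) of source documents.
# The stored copies in this index are what the Dow Jones suit targets.
RAG_INDEX = {
    "copyright": "Full text of a news article about copyright law...",
}

def rag_answer(prompt: str) -> str:
    # Retrieval step: fetch the original source text from the index.
    for keyword, document in RAG_INDEX.items():
        if keyword in prompt:
            # Generation step: the output is conditioned on, and may
            # closely track, the retrieved original text.
            return f"Summary based on retrieved text: {document[:40]}..."
    # No match: fall back to answering from weights alone.
    return pure_lm_answer(prompt)
```

The sketch shows why the legal questions differ: a pure model's weights may or may not be "copies" (the question raised in the Getty UK ruling), but a RAG index indisputably stores the source text itself.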
Concord II: Universal Music, Concord & ABKCO v. Anthropic
Filed: January 2026
Claims: Universal Music, Concord, and ABKCO filed a $3.1 billion lawsuit accusing Anthropic of mass piracy via BitTorrent, alleging the company downloaded copyrighted music catalogs through pirate networks to train its AI models.
Why It Matters: This case directly tests Judge Alsup's distinction from Bartz: if Anthropic used pirated copies obtained through BitTorrent, the fair use defense may be unavailable. The staggering $3.1 billion damages figure reflects the scale of alleged infringement.
Warner Music v. Suno & Udio
Settled: November 2025
Outcome: Warner Music reached settlements with AI music generation companies Suno and Udio, transitioning from litigation to licensing agreements. The settlements established a framework for compensating rights holders when AI companies use copyrighted music for training.
Why It Matters: This settlement signals a shift toward licensing models in the music industry, potentially setting a template for how AI companies and rights holders can reach commercial agreements rather than relying solely on litigation.
Carreyrou et al. v. Anthropic, Google, OpenAI, Meta, xAI & Perplexity
Filed: December 2025
Plaintiffs: John Carreyrou and a group of authors and writers
Claims: Writers are suing six major AI companies simultaneously, alleging each used copyrighted written works to train their respective AI models without authorization or compensation.
Why It Matters: This case is notable for targeting six AI companies in a single action, reflecting the industry-wide nature of the alleged infringement. It could establish whether all major AI companies face the same copyright liability for similar training practices.
Key Takeaways from AI Copyright Litigation
Momentum Favors Creators
While some early decisions favored AI companies on fair use grounds, recent developments—including the Thomson Reuters victory, the massive Anthropic settlement, and Copyright Office guidance—show growing recognition of creators' rights.
Courts Are Split
Different federal judges are reaching different conclusions on similar facts. This lack of uniformity means the issue will likely require appellate court resolution or even Supreme Court intervention.
Settlements Are Significant
The $1.5 billion Anthropic settlement, averaging $3,000 per work, shows that substantial compensation is possible. Even after winning a partially favorable fair use ruling, an AI company may settle to avoid uncertainty and bad publicity.
Source of Training Data Matters
Multiple courts have indicated that using "pirated" or unlawfully obtained copies for training undermines or eliminates fair use defenses. Lawful acquisition appears to be a threshold requirement.
Class Actions Provide Leverage
Individual creators are joining together in class action lawsuits, creating collective bargaining power against well-funded AI companies. This strategy has proven effective in securing settlements.
More Decisions Expected
Numerous cases are pending with major decisions expected throughout 2026, including the NYT v. OpenAI summary judgment ruling in April and the Third Circuit's review of Thomson Reuters v. ROSS. The legal landscape will continue to evolve rapidly as these cases progress through the courts.
This Is a Rapidly Developing Area
These case studies reflect the state of litigation as of March 2026. New decisions are being issued regularly. The Third Circuit is now considering the Thomson Reuters v. ROSS appeal—the first AI fair use case at the appellate level—and the Supreme Court denied certiorari in Thaler v. Perlmutter in March 2026, leaving intact the D.C. Circuit's holding that AI-generated works without human authorship cannot receive copyright protection. The legal precedents established by these cases will shape AI copyright law for years to come.
If you believe your work was used without permission: The existence of these lawsuits and settlements demonstrates that legal options are available. Consult with an attorney who specializes in AI copyright issues to understand your rights and options.
Think Your Work Was in a Training Dataset?
You don't need a lawyer yet. A free evaluation can tell you whether there's something worth pursuing.