AI Copyright Resources for Creators
January 15th, 2026
If AI companies trained on your creative work without permission, you may have legal options. Get comprehensive guidance on protecting your copyright, understanding fair use, and connecting with experienced attorneys who can help.
The law is catching up to AI. Find out where you stand.
Free 2026 AI Copyright Guide →
Tell us what happened. We'll help you figure out if you have options.
Contact Us Now →
Billions of images, books, and songs were scraped from the internet to train AI — without asking the people who made them. If that includes your work, you have real options. We break down what the law says, what other creators have done, and how to find an attorney who actually handles these cases.
AI copyright cases are being filed across the country right now. Find attorneys in your state who are actively working on these cases.
Major AI copyright rulings expected in 2026, including the NYT v. OpenAI summary judgment and the Anthropic settlement fairness hearing. Check back for updates on conferences and legal developments.
Real cases, real decisions, and real questions from creators navigating AI copyright.
Yes, and the tide is turning in favor of creators. In 2025, Thomson Reuters v. Ross Intelligence became the first major U.S. decision to reject an AI company's fair use defense (now on appeal to the Third Circuit). The U.S. Copyright Office stated that AI training to produce competing works goes beyond fair use, and the EU AI Act's copyright transparency obligations took effect in August 2025. In March 2026, the Supreme Court denied certiorari in Thaler v. Perlmutter, confirming that AI cannot hold copyrights. Momentum is building toward recognizing creators' rights, with over 70 lawsuits progressing through the courts.
Tools now exist to help you find out. The website "Have I Been Trained" allows you to search the LAION-5B dataset (5.85 billion images used by Stable Diffusion and Google's Imagen) by your name or by uploading your work. Authors can search the Books3 dataset, which contains 183,000 books used by companies like Meta and Bloomberg. While not all AI companies disclose their training data, these searchable databases cover many major AI systems and give creators a starting point for investigating potential unauthorized use.
Your copyright protections remain strong. You have the exclusive right to control reproduction of your work, and AI training may infringe on that right. Recent developments show promise: Anthropic agreed to a landmark $1.5 billion settlement (pending final court approval, with a fairness hearing set for April 2026), paying authors around $3,000 per work and requiring destruction of improperly acquired content. Warner Music also reached licensing deals with AI music companies Suno and Udio in late 2025. The legal framework is evolving rapidly in creators' favor.
This actually works in creators' favor. In March 2026, the Supreme Court denied certiorari in Thaler v. Perlmutter, definitively settling that AI-generated content without human authorship cannot be copyrighted under U.S. law. This means AI companies can't claim copyright protection over outputs that directly copy or mimic your original work. While content with significant human creative input may receive protection, purely AI-generated content has no copyright standing, which strengthens your position as a human creator.
Your concerns are valid and increasingly recognized by courts and lawmakers. Major creative organizations, publishers, and individual creators have successfully brought cases against AI companies. The New York Times case against OpenAI is heading to summary judgment in April 2026, and Getty Images secured a UK ruling that model weights do not constitute permissible "copies." In January 2026, the bipartisan TRAIN Act was introduced in Congress, requiring AI companies to disclose their training data. Solutions are emerging through litigation, settlements, and legislation at both the U.S. and international level.
Fair use is a limited exception that allows certain uses of copyrighted material, but it's not the blanket protection AI companies claim. Courts are examining this closely, with Thomson Reuters v. Ross (now on appeal to the Third Circuit) rejecting fair use and the NYT v. OpenAI case heading to summary judgment in April 2026. The U.S. Copyright Office stated that AI training to produce competing content is "at best, modestly transformative" and likely not fair use. Meanwhile, the EU AI Act's copyright obligations took effect in August 2025, requiring AI companies to respect opt-outs and disclose training data sources across Europe.
Yes, and more companies are offering opt-out tools, though it's an imperfect system. OpenAI, LinkedIn, Meta (in the EU/UK), and Microsoft 365 now offer opt-out mechanisms. The "Have I Been Trained" website lets you opt out of the LAION dataset. Technical solutions like "NoAI" tags and tools like Nightshade can help protect your work. While opt-out systems have limitations—especially for work already scraped—creators increasingly have tools to protect future use of their content. Importantly, the burden is shifting as more companies recognize they need creator permission.
You have several actionable options. First, document everything—screenshot the evidence and note where you found it. Consider joining existing class action lawsuits (over 70 are currently active against major AI companies). You can send cease and desist notices to the companies involved. Use opt-out tools for future prevention. Consult with legal resources that understand AI copyright issues. Some companies, facing legal pressure, are becoming more responsive to removal requests. The $1.5 billion Anthropic settlement (pending final court approval) shows that taking action can lead to real compensation and accountability.
Yes, and creators are making significant progress. Over 70 active lawsuits target OpenAI, Meta, Google, Microsoft, Anthropic, Stability AI, Midjourney, and others. Notable recent developments: Anthropic's $1.5 billion settlement with authors (pending final court approval); Concord Music suing Anthropic for $3.1 billion over lyrics; the NYT v. OpenAI case heading to summary judgment in April 2026; Getty Images winning a UK ruling against Stability AI; Thomson Reuters winning the first major fair use ruling (now on appeal to the Third Circuit); and Warner Music reaching licensing deals with Suno and Udio. These cases are setting precedents that recognize creators' rights.
Multiple strategies can help protect your work. Register your copyrights for stronger legal standing. Use technical tools like "NoAI" tags, Nightshade, watermarks, and metadata. Opt out through platforms like OpenAI, LinkedIn, and "Have I Been Trained." The EU AI Act now requires AI companies to respect opt-outs and disclose training sources, and the bipartisan TRAIN Act introduced in Congress in January 2026 would bring similar transparency requirements to the U.S. Use monitoring tools like Google Images, TinEye, and Copyscape to track unauthorized use. Most importantly, stay informed—the landscape is rapidly evolving in favor of creators as more protection tools, legal precedents, and legislation emerge.
Our free 2026 AI Copyright Guide covers what your rights actually are right now, which lawsuits are winning, and what steps to take first.
Contact Us Now →