Google vs. publishers: What the EU probe means for SEO, AI answers, and content rights

In one of the most consequential regulatory moves yet for the future of search, the European Commission has launched a formal antitrust investigation into Google. 

At the center of the complaint is Google’s use of publisher content to train and power AI Overviews and other generative AI features – while potentially diverting traffic away from original sources.

For anyone working in SEO, content strategy, or brand visibility, the implications are immediate. 

Is Google crossing a line by repurposing publisher content for AI-generated answers, or is this simply the cost of participating in an open, crawlable web?

With regulators now stepping in, the industry is being compelled to reassess how machine-readable content is used, managed, and valued – and what it may cost brands, publishers, and agencies if regulation fails to keep pace with innovation.

Here’s what’s happening, why it matters, and how the industry is already responding.

What’s actually happening: Core allegations in the complaint

This EU move comes amid a growing wave of lawsuits and policy disputes over AI training data, from high-profile publisher cases against OpenAI and others to Penske Media’s recent antitrust suit targeting Google’s AI products. 

Publishers increasingly describe Google’s approach as a forced choice: accept unlicensed use of their content for AI training and answers, or risk losing critical search traffic.

At the same time, technical controls such as robots.txt directives, Google-Extended, and emerging noai and nopreview meta conventions reflect an industry trying to regain control over a web that was never designed for large language model training. 
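In practice, these controls are declared at the crawler and page level. A minimal sketch of both (Google-Extended is a documented robots.txt user-agent token; noai and nopreview are emerging conventions that are not part of any formal standard and that most crawlers, including Google's, do not formally recognize):

```
# robots.txt — a sketch: block AI/Gemini training crawling
# while leaving ordinary search crawling untouched
User-agent: Google-Extended
Disallow: /

User-agent: Googlebot
Allow: /
```

```html
<!-- Emerging meta conventions; honored only by some crawlers -->
<meta name="robots" content="noai, nopreview">
```

The asymmetry between these two fragments is the point: the first is a recognized directive with documented behavior, while the second is a signal of intent with no enforcement behind it.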

The core dispute is whether AI training and answer generation are extensions of traditional indexing and snippet creation, or a distinct use that requires licensing, attribution, or both.

Dig deeper: New web standards could redefine how AI models use your content

What does the complaint target?

With publishers reporting traffic drops of 20–50% on informational queries, the complaint – led by a coalition of European news and specialist publishers – targets three practices:

  • Google’s scraping of publisher content to train and ground models such as Gemini for AI Overviews and AI Mode.
  • A lack of meaningful opt-out options that preserve search visibility.
  • AI summaries that capture user attention above organic links, reducing clicks to original publishers.

Regulators are being asked to examine three core questions:

  • How Google trains and grounds its models on publisher content.
  • Whether publishers have meaningful ways to opt out without sacrificing search visibility.
  • Whether AI Overviews reinforce Google’s dominance by keeping users inside Google’s own interface.

Zero-click search evolution: Is the market ready?

For the SEO community, this probe marks what may be the start of the post-click era, where the battle for visibility shifts from the SERP to the LLM context window. 

The open question is whether Google is ready for that shift.

The zero-click search experience is often discussed, but for it to work for all parties, three conditions need to be met:

  • Users must be able to get what they need within the SERP, AI Overviews, or AI Mode.
  • Google must seamlessly blend content types – text, images, video, products, services, and even checkout – into a coherent, useful experience.
  • Publishers must be fairly compensated for participating in this ecosystem.

At present, Google appears eager to move toward a fully zero-click experience, but is not yet able to support it end to end:

  • Users still encounter hallucinated or outdated answers.
  • Assistive chats remain fragmented and cannot support full discovery or purchase flows.
  • Publishers remain unclear about how, or whether, they are compensated when their content is quoted.

What is the opt-out mechanism, and how effective is it?

In its defense around content repurposing, Google points to opt-out mechanisms such as Google-Extended in robots.txt. 

While Google-Extended can block Gemini training, it does not prevent AI-generated answers from fetching live data from publishers’ websites.

In practice, blocking LLM training has several limitations:

  • It does not prevent content from appearing in AI Overviews. If Google has indexed a page, it can still summarize or rephrase that content in AI answers, even when Google-Extended is blocked.
  • It is opt-out, not opt-in. Content is used by default, and publishers must be aware of Google-Extended and actively implement it to stop training.
  • It does not provide granular control. Publishers cannot allow traditional snippets while blocking LLM training, or vice versa.
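To illustrate the granularity gap, compare the only two levers publishers actually have today (a sketch; Google's documentation indicates that snippet controls also constrain what AI Overviews can display, but neither lever is an AI-only switch):

```
# robots.txt — blocks use of site content for Gemini training,
# but does not stop indexed pages from being summarized in AI answers
User-agent: Google-Extended
Disallow: /
```

```html
<!-- Snippet controls are a separate, blunter lever: they limit what
     search features can quote, but they shrink classic search snippets
     at the same time -->
<meta name="robots" content="nosnippet">
```

There is no combination of these directives that expresses "show my traditional snippets, but do not train on or summarize my content" — which is precisely the missing granularity the complaint describes.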

Why opting out may be a bad idea

Many publishers want to opt out of crawling or having their content used in AI-generated answers. 

However, if AI answers become the default interface as search moves further toward a zero-click experience, relying solely on direct or organic traffic may become increasingly risky.

In practice, this creates a lose-lose dynamic. 

Blocking usage may protect intellectual property but reduce visibility, while staying open preserves presence at the cost of control. 

Without regulatory protections in place, publishers are largely left to play within the system as it exists today.

Dig deeper: How AI answers are disrupting publisher revenue and advertising

The big debate: ‘Google doesn’t owe you’ vs. ‘it’s not their content’

For as long as websites have existed, we have tended to assume they are ours to control. 

But without search engines, their reach is limited. 

That tension sits at the heart of a debate that has split SEO opinion.

On one side is the “Google doesn’t owe you anything” camp. 

  • Many SEOs argue that the web is open by default, and that allowing search engines to crawl a site implicitly grants permission for content use without any guaranteed return. 
  • Google enables discovery, the argument goes, but no one is promised clicks or backlinks in exchange.

On the other side is the “It’s not their content” perspective. 

  • Publishers argue that:
    • Training large language models is fundamentally different from indexing pages.
    • Generating answers from proprietary content without attribution or compensation breaks the long-standing balance between platforms and publishers. 
  • When visibility is absorbed into AI summaries with no clear recourse or reward, the long-term implications for publishers, brands, and SEO are significant.

This debate plays out daily across social media, Reddit threads, and Quora discussions. 

Some point to generative engine optimization, or GEO, as a potential survival path, where being quoted in AI answers replaces traditional rankings. 

But that approach still leaves publishers dependent on Google’s decisions about linking and on users choosing to click through.

In practice, both sides have valid arguments. 

Still, the broader direction appears clear. 

Even if Google faces penalties from this investigation, search is unlikely to revert to a blue-links-only model. 

The shift toward a zero-click experience is already underway.

The dark future of a web without unique content

Before examining the potential outcomes of the complaint and what they may mean for SEOs, it is worth considering the consequences for information itself.

As creators feel their work is being reused without permission or reward, the incentive to produce original, high-quality content diminishes. 

At the same time, the volume of AI-generated content created with minimal human input continues to grow. This trend is not marginal. 

Entire websites now exist with thousands of pages produced almost entirely by generative systems.

Much of this material is derived from existing text that has been reworked, combined, or lightly altered, often with occasional hallucinations or inaccuracies. 

That content, in turn, feeds new AI answers and additional AI-generated material, creating a cycle of content reuse, error propagation, and declining informational quality due to a lack of genuinely new inputs.

From that perspective, the debate over AI training and content rights is not only about traffic or monetization.

It also raises fundamental questions about how the web sustains original knowledge creation – and why protecting publishers may be necessary to prevent long-term degradation of information quality.

What can happen if Google loses

For years, the contract between Google and publishers was simple: “I let you crawl, you give me clicks.” 

Generative AI has broken that contract. 

If the EU finds Google’s practices anticompetitive, we could see three major shifts in the search landscape:

  • Mandatory opt-out mechanisms: Currently, blocking Google-Extended stops training but does not necessarily protect you from being summarized in real time. A regulatory win could force a granular “opt out of AI summaries without losing search rankings” mechanism.
  • The licensing economy: Much like the music industry, we may see the rise of collective licensing. If Google is forced to pay for the training value of content, organic search may eventually split into free search and premium, licensed AI search.
  • Answer engine optimization (AEO) formalization: If attribution becomes a legal requirement, citing the source may become a ranking factor. SEOs would need to optimize for entity citations rather than only traditional backlinks.

Ads and the shifting economics of visibility

While this is primarily a story about AI, content rights, and SEO, ads remain the elephant in the SERP.

As more organic real estate is consumed by AI-generated summaries and assistive chat, the last predictable lever for visibility remains paid ads. 

Even if the EU forces Google to rein in its AI answers or improve attribution, the total space left for traditional blue links is unlikely to expand significantly. 

The available space will continue to favor Google’s revenue-generating products.

If AI Overviews dominate above the fold and organic links are pushed further down, CPCs will likely rise, whether inside or outside AI answers. 

Advertisers will compete more aggressively for the remaining clickable positions. 

Regardless of how the AI future plays out for Google, the direction is clear: the price of visibility is increasing.

How to adapt your SEO and content strategy

Even before any formal EU decision, leading teams are shifting from “rank for the keyword” to “be the primary entity answer wherever the model looks.” 

That involves:

  • Strengthening entity clarity with schema, consistent NAP, and structured data so AI systems can associate queries, topics, and attributes with your brand.
  • Auditing how your brand appears in AI Overviews, major chatbots, and vertical AI tools, then tracking inclusion, sentiment, and factual accuracy as emerging visibility KPIs.
  • Reviewing robots.txt. Blocking may protect IP but reduce exposure, while remaining open may increase AI visibility while raising licensing and valuation questions.
  • Educating leadership that traffic is no longer the only outcome of visibility. Being cited, summarized, or used as a grounding source in AI outputs has value, but that value must be defined and measured internally.
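As a concrete starting point for the entity-clarity item above, a brand might publish Organization markup in JSON-LD so that name, address, and phone (NAP) details are machine-readable and consistent across properties. A sketch — the brand name, URLs, and address below are placeholders:

```html
<!-- JSON-LD Organization markup: one canonical statement of the brand's
     identity and NAP details for search engines and AI systems -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-brand",
    "https://x.com/examplebrand"
  ],
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Street",
    "addressLocality": "Dublin",
    "addressCountry": "IE"
  },
  "telephone": "+353-1-555-0100"
}
</script>
```

Keeping this markup identical to the NAP details listed on business profiles and directories is what allows AI systems to resolve mentions, citations, and queries to a single entity.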

As legal and technical frameworks evolve, the strategic challenge is to remain machine-readable and rights-aware, asserting control over how content is used while ensuring the brand remains present wherever AI answers are trusted most.

Dig deeper: How to build an effective content strategy for 2026
