Podcast with Eric Seufert on the EU’s GDPR/AI reform package
I joined Eric Seufert again on the Mobile Dev Memo podcast to talk about the European Commission’s newly unveiled “digital omnibus” package and what it means for the GDPR, AI, cookies, and the broader competitiveness debate in Europe. Our conversation ranged from Mario Draghi’s competitiveness report to the politics of GDPR reform, browser-level consent signals, and the divergence between the EU and the UK on Pay‑or‑OK models.
You can listen to the full episode on Mobile Dev Memo, as well as Apple Podcasts and Spotify. Below, you will find my extended notes on that conversation.
1. From the GDPR (2018) to the Draghi report (2024)
To provide some background for the current reform plans, Eric asked me to draw a line from the GDPR’s entry into force in 2018 to Mario Draghi’s 2024 report on EU competitiveness, which criticises the GDPR’s impact on growth.
My framing:
The GDPR was not a complete break with the past. EU data protection has a long pre‑GDPR history.
What did change with the GDPR was its economic weight:
- Fines up to 4% of global annual turnover.
- Application to nearly all digital services processing “personal data.”
- Very broad and expansive interpretations by regulators.
A relatively small "true believer" community of privacy professionals suddenly found itself with a powerful, economy-shaping legal instrument. Many in that community hold a strong, almost absolutist view of data protection, and have little regard for trade‑offs with anything outside privacy (including innovation or competitiveness).
That small community has since developed into a compliance industry—consultants, auditors, trainers—whose business model depends on maximising perceived GDPR risk.
Draghi’s report picks up one visible part of this:
- Gold‑plating and fragmentation: member states adding extra requirements on top of the GDPR.
- Compliance and red‑tape costs: especially damaging for SMEs that cannot amortise costs the way large tech firms can.
But in our discussion I stressed the opportunity cost, which Draghi’s accounting‑style analysis cannot easily quantify:
- Products never launched because founders are told “the GDPR makes this too risky.”
- SMEs that never expand into other member states because of fear of divergent national rules.
- Internal data projects killed pre‑emptively by over‑cautious legal advice.
That is the backdrop for the Commission’s omnibus effort: there is finally political pressure to “do something” about Europe’s economic malaise, and digital regulation is impossible to ignore in that context.
2. What the “digital omnibus” is trying to do
At a high level, the “digital omnibus” package is one chapter in a broader legislative response to Draghi and to mounting political anxiety over Europe’s weak growth and weak tech sector. I already covered this in Europe is not “so back:” why cookie banners are here to stay (despite the reform) and the hard route not taken and Leaked GDPR reform: some good ideas but what about enforcement?.
On data protection specifically, I highlighted three main elements of the proposal:
- AI & the GDPR: clarifying when AI training and use can proceed without consent.
- Cookies & signals: shifting some cookie/ePrivacy issues into the GDPR, and introducing mandatory browser/OS‑level “signals” for user choice.
- Scope of the GDPR: codifying a narrower understanding of what counts as “personal data.”
I cautioned that the proposed legislation, even if adopted, may not achieve much:
- The text of the omnibus is less important than the enforcers (national data protection authorities and their EU-level body, the EDPB).
- Nothing in the proposal reforms the enforcement machinery.
- Even the best textual clarifications can be neutralised later by restrictive interpretation.
That is why I am sceptical that these changes, as currently drafted, will materially reduce the compliance burden or uncertainty.
3. AI under the GDPR: is real‑world AI development illegal?
We then moved to AI, where there is now a clear standoff between two camps:
The pragmatic camp (e.g. France’s data protection authority, the CNIL):
- real‑world AI training and deployment (Mistral, OpenAI, etc.) can be made compatible with the GDPR;
- guidance that at least gestures towards workable compliance paths.
The prohibitionist camp (Max Schrems/NOYB and like‑minded data protection officials):
- real‑world state‑of‑the‑art LLMs are essentially incompatible with the GDPR;
- ambiguity is used strategically: never quite declaring AI “illegal” outright, while keeping maximum enforcement discretion to deem any actual AI deployment illegal.
The activist legal theory against LLMs
I summarised two key lines of argument used by the prohibitionists:
Reasonable expectations
- Europeans did not reasonably expect publicly available data (e.g. website content) to be used for LLM training.
- Hence: training requires prior consent from every affected individual.
- Practical result: impossible to obtain → therefore, large‑scale LLM training is illegal.
Sensitive data (Article 9 GDPR)
- LLM training corpora inevitably contain “special category” data: health, politics, religion, etc. (at least on the current expansive reading of this concept).
- Even if inclusion is incidental and unintentional, activists argue this triggers Article 9, which would likely demand explicit consent from each individual.
- Again, impossible in practice → LLMs are unlawful.
I think this is not the best reading of the GDPR. But it is a very real reading embraced by powerful actors (including some data protection authorities), and the EDPB’s opinion on AI was drafted to preserve that enforcement stance.
What the digital omnibus does for AI
The Commission’s digital omnibus proposal tries, modestly, to push back:
- It would clarify in the GDPR that consent is not required for certain AI training and use cases.
- In practice it is aimed at undermining the “in practice everything needs prior consent” theories pushed by NOYB and others.
I see this as a good move in principle, but again:
- It will be interpreted and applied by the same authorities behind the EDPB’s strategically vague AI opinion.
- They can still declare specific AI practices “non‑compliant” using other GDPR hooks (e.g. their reading of “reasonable expectations”).
Eric asked whether the anti‑AI crowd is just staking out an extreme negotiating position, or whether they genuinely want LLMs effectively banned in Europe. I answered that many of them are sincere:
- They doubt AI’s value.
- They place little weight on long‑run economic consequences.
- They’re effectively promoting a de‑growth, anti‑tech vision of Europe.
4. Cookies, ePrivacy, and why the banners may not go away
The other big “user‑facing” topic we discussed was the Commission’s attempt to reduce cookie consent banners.
The starting point: EDPB on Article 5(3) ePrivacy
We reminded listeners that:
Article 5(3) of the ePrivacy Directive (the “cookie law”) applies to any information stored in or accessed from a user’s terminal equipment, whether or not it is personal data.
The EDPB issued guidance that dramatically expands what needs prior consent, including:
- Email tracking pixels,
- IP‑based tracking,
- Even UTM parameters in URLs.
If that interpretation is accepted, then in practice almost all meaningful analytics and ad‑related communication between browser/app and server becomes subject to prior consent.
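To make concrete how mundane the data in question is: UTM parameters are plain key-value tags appended to URLs for campaign attribution, readable by anyone who sees the URL. A minimal illustration (the URL and function name are hypothetical):

```python
# Hypothetical illustration: UTM parameters are ordinary query-string tags.
# Under the EDPB's expansive reading, even reading them from an incoming
# request could require prior consent, since they are "accessed from" the
# user's terminal equipment.
from urllib.parse import urlparse, parse_qs

def extract_utm_params(url: str) -> dict:
    """Return only the utm_* campaign parameters from a URL."""
    # parse_qs maps each key to a list of values; keep the first value
    query = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in query.items() if k.startswith("utm_")}

params = extract_utm_params(
    "https://example.com/article?utm_source=newsletter&utm_medium=email&id=42"
)
# params == {"utm_source": "newsletter", "utm_medium": "email"}
```

Nothing here identifies a person by itself, which is exactly why treating such tags as consent-gated sweeps nearly all analytics into the prior-consent regime.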
What the omnibus proposes
The Commission’s idea is to:
Split the regime:
Non‑personal data remains fully under ePrivacy Article 5(3) → status quo (consent by default) → no change for cookie banners.
Personal‑data‑related operations are shifted into the GDPR, where:
- consent is still the default, but
- new exemptions would be added.
The two new GDPR exemptions are roughly:
- Aggregated audience measurement by the controller for its own use.
- Security of the service or the terminal equipment.
On paper, that sounds like a step forward. In practice I see three problems:
Non‑personal data is untouched
UTMs and a lot of non‑personal device‑origin data stay under the strict ePrivacy regime. If DPAs continue to treat things like UTMs as requiring consent, banners and other consent requests remain unavoidable in many flows.
Analytics exemption is very narrow
- Only for aggregated information.
- Only when generated by the controller for its own use.
- Real‑world analytics stacks often involve third‑party providers, cross‑site analysis, and user‑level tracking. Those would still trigger consent.
Security exemption won’t cover adtech reality
- I fully expect DPAs to say that ad fraud detection and many ad‑ops activities do not fall under “security of the service” (or under “aggregated” analytics).
- So fraud prevention and billing‑grade measurement would still require consent, unlike in the UK where the ICO is trying to create more realistic exemptions.
Net effect: I do not think this package, as drafted, will significantly reduce cookie consent friction. It is too timid, too complex, and leaves the most expansive EDPB interpretations intact.
5. Browser / OS‑level signals: a one‑way ratchet?
The omnibus also introduces the idea of “automated and machine‑readable indications of user choice”—browser or OS‑level signals that:
Must be implemented by web browsers within a couple of years.
Must be respected by online services when:
- Consent is required, or
- Users exercise their right to object (e.g. to direct marketing based on legitimate interests).
The legal text is high‑level and foresees a later technical standard. But based on existing GDPR practice, I expect the following pattern:
Negative signals (opt‑out)
Will be interpreted broadly: a single “no tracking” browser setting will be treated as a global refusal of:
- All consent‑based tracking, and
- All direct‑marketing processing based on legitimate interests.
Positive signals (opt‑in)
Will likely be declared insufficient for valid consent in virtually all practical cases:
- DPAs will say consent must be “specific” and “informed” at the service level.
- A generic browser‑level “yes to tracking” will be considered too coarse.
So the browser/OS signal risks becoming a one‑way ratchet:
- Easy to decline everything, everywhere, with one toggle.
- Hard or impossible to use that same mechanism to give valid consent.
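The asymmetry can be sketched in a few lines. This is a hedged sketch only: the omnibus defers the technical standard, so the header name `Sec-GPC` below is borrowed from the existing Global Privacy Control convention purely as an assumption about what an EU signal might resemble.

```python
# Hedged sketch of the expected one-way-ratchet logic for a browser-level
# signal. "Sec-GPC" is an assumption (borrowed from Global Privacy Control);
# the actual EU standard is yet to be defined.

def tracking_permitted(headers: dict, service_level_consent: bool = False) -> bool:
    """Decide whether tracking may proceed for a given request.

    - A negative browser signal is treated as a global refusal, overriding
      both consent-based tracking and legitimate-interest direct marketing.
    - A positive or absent browser signal is NOT treated as valid consent on
      its own: tracking still requires a "specific" and "informed"
      service-level consent flow.
    """
    if headers.get("Sec-GPC") == "1":   # negative signal: refuse everything
        return False
    return service_level_consent        # a browser-level "yes" alone never suffices
```

Declining is one toggle; consenting still needs a full per-service flow, which is the ratchet in code form.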
The proposal also mentions an exemption for “media services” (likely reflecting lobbying by large publishers), but that term is undefined. Depending on how it is interpreted, this could become:
- Yet another privileged carve‑out for certain incumbent publishers.
- Another structural blow to the open web, especially smaller services that cannot claim “media” status.
6. “Personal data:” can we tame the “law of everything”?
We then discussed perhaps the most legally important—but politically under‑appreciated—piece: the definition of personal data.
Today, the GDPR is often treated as the “law of everything” in digital markets, because:
Regulators and activists insist on extremely broad readings:
- Any pseudonymous identifier that can single out a user is treated as personal data.
- Even if the controller has no realistic way to discover who the person actually is.
A recent Court of Justice judgment pushed back slightly against this trend, and the Commission is now proposing to codify a narrower, more controller‑centric test.
The core idea:
- Whether data is “personal” for you depends on whether you are reasonably likely to identify the individual.
- If you only hold pseudonymous IDs that cannot, in your setup, be linked back to real people, the dataset might not be “personal data” for you.
- The GDPR would then not apply directly to that dataset.
In our conversation I argued that this could create real incentives for data minimisation and structural separation:
- Many organisations would strive to process only data they cannot link back to individuals.
- A smaller set of specialised entities would handle truly identifying data under full GDPR scrutiny.
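The structural separation described above can be sketched with keyed pseudonymisation: the identity-holding entity keeps a secret key, and the analytics entity receives only irreversible tokens. This is an illustrative assumption about one possible architecture, not a compliance recipe; all names here are hypothetical.

```python
# Hedged sketch of structural separation under a controller-centric test:
# an analytics entity that holds only keyed-hash pseudonyms, and never the
# key, has no reasonable means to re-identify individuals. Illustrative only.
import hashlib
import hmac

def pseudonymise(user_id: str, secret_key: bytes) -> str:
    """Keyed hash: stable per user, irreversible without the key."""
    return hmac.new(secret_key, user_id.encode(), hashlib.sha256).hexdigest()

# The identity-holding entity keeps the key; the analytics entity sees only tokens:
key = b"held-only-by-the-identity-entity"
token = pseudonymise("user@example.com", key)
```

The test is objective: what matters is whether the analytics entity, as actually set up, could obtain the key or otherwise link tokens back to people.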
Activist organisations like NOYB strongly oppose this, portraying it as a “loophole.” But the test is objective, not subjective:
- It is not enough for a controller to say “we promise we can’t re‑identify.”
- Authorities can and will inspect whether the controller really lacks the means to identify individuals.
If applied in good faith, this could remove a very large class of low‑risk activities from the GDPR’s full compliance burden.
However, my concern is that the same actors who pushed “everything is personal data” will work to re‑interpret the new text so that nothing actually changes in practice.
7. The politics: who hates the omnibus, and why that matters
Eric asked about the reaction to the leaked proposal. The answer is: ferocious opposition from the privacy establishment.
Key elements:
Left‑of‑centre political groups in the European Parliament publicly attacked the package.
A coalition of NGOs and trade unions issued an open letter describing the package as:
- “The biggest rollback of digital fundamental rights in EU history” and
- An “attempt to covertly dismantle” Europe’s “gold standard” GDPR regime.
Several aspects stood out to me:
- These groups now feel compelled to argue that the omnibus will harm competitiveness and innovation, a strange posture on issues where they have little credibility.
- One civil‑liberties organisation even pointed to China’s digital regulation as a model Europe should emulate, which I find remarkable given their stated mission.
By contrast, support for reform comes from:
- Some national leaders (e.g. calling to “stop the clock” on the AI Act).
- A small but vocal “EU accelerationist” community of investors and operators.
I noted a paradox:
- The Commission is clearly willing to spend political capital—this package goes further than I would have predicted a few weeks ago.
- Technically and institutionally, it still falls well short of the changes needed to really alter enforcement dynamics.
8. “Pay or consent” and the EU–UK divergence
Although the focus of this episode was the omnibus, Eric also asked for an update on “pay or consent” and the DMA, including the UK angle. In June, Eric and I devoted an entire podcast to this issue. I also recommend my debate with competition law scholars from September.
So what’s happening?
In the EU (the DMA + the GDPR)
In April 2025, the Commission found Meta’s original “pay or consent” model (subscription for no ads vs free with personalised ads) non‑compliant with the DMA’s Article 5(2) “specific choice” requirement.
Meta had already moved away from that binary model in November 2024, introducing:
- A triple choice:
- Paid, no ads.
- Free with personalised ads.
- Free with “less personalised ads” (with non‑skippable ad breaks).
- Lower subscription prices.
The April decision only covers the earlier, binary model; we still do not know whether the Commission will accept Meta’s current triple‑option design.
In the UK (the GDPR only, no DMA)
As I wrote in Why is Meta offering cheaper and simpler “pay or consent” in the UK?:
Meta rolled out a subscription for no ads model in the UK after extensive discussions with the ICO.
The ICO’s public statement is cautious, but clearly more accommodating:
- It does not insist on a third, free “less personalised” option.
- It focuses on whether Meta’s consent flow and information are adequate under the (UK) GDPR.
I see the ICO as charting a pragmatic path that the EU could emulate, but so far the EU shows no real sign of doing so.
9. The “Washington effect” and Brussels’ anti‑fragility
Finally, Eric asked whether the threat of US retaliation (e.g. via tariffs or counter‑measures framed as responses to DMA fines) might soften EU enforcement.
I doubt it. I used the term “Washington effect” for the idea that US pressure might discipline EU regulators. In practice, Brussels is institutionally anti‑fragile:
- Careers in and around EU institutions can still be made by “sticking it to big tech.”
- The staff of the Commission and of data protection authorities can outlast any one US administration.
- They may reasonably believe that US priorities will shift, while their own incentives and internal politics remain constant.