Brief comments on the draft EU Regulation on Artificial Intelligence
The draft EU Regulation on Artificial Intelligence leaked out yesterday. Some of my (tentative) thoughts:
1. The draft’s definition of an ‘AI system’ is so broad that it risks covering most (all?) computer programs. Does the Commission want to regulate all software in this Regulation? My guess is that nothing close to a sufficient impact assessment was done for such a broad scope. (A sketch after this list shows how trivial a program caught by the definition could be.)
2. ‘AI regulatory sandboxes’: what’s the point? In the draft, sandboxes appear to mean (1) more regulatory oversight combined with (2) no relaxation of any applicable legal rules. That looks like a way to discourage, not encourage, innovation in the EU. Articles 3(1)(34) and 44 on “AI regulatory sandboxes” read like something out of Orwell: “foster … innovation” through “control”, “strict oversight” and “ensuring compliance” with rules that would apply anyway.
3. The definitions of prohibited AI practices in Art 4 are vague and overbroad, just like the general definition of an ‘AI system’. E.g. the prohibition on ‘general purpose social scoring’ may hinder innovation in credit risk scoring and thus impede broad access to credit.
4. Art 8 on training data sets looks like a wish list written by someone rather detached from practice (see more here).
5. Art 41 envisages something like “cookie consent notifications”, but for “AI” interacting with humans. Yet because the draft seems to assume something close to “AI = software”, virtually every computer program used by a human may need to greet the user with “Hi, I’m an AI system” (see the sketch below).
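To make points 1 and 5 concrete, here is a minimal, hypothetical sketch (the discount rules and the greeting text are my inventions, not anything the draft prescribes). Two hard-coded rules are enough to look like a “logic- and knowledge-based approach”, and on the “AI = software” reading, Art 41 would seem to demand a disclosure before the program talks to a human:

```python
# A deliberately trivial, hypothetical program: two hand-written rules,
# i.e. arguably a "logic- and knowledge-based approach" under the draft's
# broad definition. Rules and greeting text are invented for illustration.

def tuition_discount(age: int, sibling_enrolled: bool) -> float:
    """Return a tuition discount based on two hard-coded rules."""
    if sibling_enrolled:
        return 0.20  # rule 1: discount if a sibling already attends
    if age < 7:
        return 0.10  # rule 2: discount for early-years pupils
    return 0.0       # default: no discount

if __name__ == "__main__":
    # If "AI system" in effect means "software", an Art 41-style
    # disclosure would apparently be owed before any human interaction:
    print("Hi, I'm an AI system.")
    age = int(input("Child's age: "))
    sibling = input("Sibling already enrolled? (y/n): ").strip().lower() == "y"
    print(f"Discount: {tuition_discount(age, sibling):.0%}")
```

If a dozen lines of if-statements can fall within the Regulation, the definitional problem is not a corner case.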
An example to illustrate the problems with the draft EU AI Regulation:
Imagine a school developing a simple logic/expert system to assist with admissions decisions (e.g. just checking whether a candidate is in the school’s catchment area based on their address). It looks like this would be a “high-risk AI system” under the AI Reg (a sketch of how little such a system might involve follows this list), so the school would need to:
- “put in place a quality management system” under the AI Reg
- prepare detailed “technical documentation”
- create a system for logging and audit trails
- conduct a “conformity assessment” (very technical and onerous for anyone other than a big tech company) and issue an “EU declaration of conformity”
- register the “AI system” in the EU database of high-risk AI systems
And this is not even all the school would have to do.
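For a sense of proportion, here is what the school’s entire “expert system” could amount to: a single membership check against a list of catchment postcodes (a hypothetical sketch; the postcodes are invented):

```python
# A hypothetical sketch of the school's whole "expert system": one lookup
# against a list of catchment postcodes. The postcodes are invented.

CATCHMENT_POSTCODES = {"00-950", "00-951", "00-952"}  # illustrative only

def in_catchment_area(postcode: str) -> bool:
    """Return True if the candidate's address falls within the catchment area."""
    return postcode.strip() in CATCHMENT_POSTCODES

print(in_catchment_area("00-951"))  # True
```

A handful of lines like these would carry the full high-risk compliance burden listed above.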
Previously published as a Twitter thread.