Europe’s AI Act Contains Powers to Order AI Models Destroyed or Retrained, Says Legal Expert

According to a legal expert’s examination of the proposal, the European Union’s planned risk-based framework for regulating artificial intelligence gives oversight bodies the power to order a commercial AI system withdrawn from the market, or an AI model retrained, if it is judged high risk. That suggests the EU’s (yet-to-be-adopted) Artificial Intelligence Act contains significant enforcement firepower, assuming the bloc’s patchwork of Member State-level oversight authorities can direct it effectively at harmful algorithms to force product changes in the interests of fairness and the public good.

The draft Act is still being criticized for a variety of structural flaws, and it may yet fall short of the goal of creating broadly “trustworthy” and “human-centric” AI promised by EU legislators. However, the framework does appear to grant some significant regulatory powers, at least on paper. Just over a year ago, the European Commission proposed the AI Act, laying out a framework that prohibits a small number of AI use cases deemed too dangerous to people’s safety or fundamental rights to be allowed (such as a China-style social credit scoring system), while regulating other uses according to perceived risk, with a subset of “high risk” use cases subject to both ex ante (before market) and ex post (after market) surveillance.

High-risk systems are explicitly defined in the draft Act as those used for: biometric identification and categorization of natural persons; management and operation of critical infrastructure; education and vocational training; employment, worker management, and access to self-employment; access to and enjoyment of essential private and public services and benefits; law enforcement; migration, asylum, and border control management; and administration of justice and democratic processes.

Almost nothing is banned outright under the original proposal, and most use cases for AI won’t face serious regulation under the Act because they’ll be judged to pose “low risk,” so they’ll largely be left to self-regulate, with a voluntary code of standards and a certification scheme to recognize compliant AI systems. A third category of AIs, such as deepfakes and chatbots, is deemed to sit in the middle and is subject to specific transparency rules intended to limit their potential for misuse and harm.

The Commission’s plan has already drawn criticism, including from civil society organizations, which warned last autumn that it falls far short of protecting fundamental rights against AI-driven abuses such as scaled discrimination and black-box prejudice. A number of EU institutions have also called for a stronger restriction on remote biometric identification than the Commission included in the Act (which is limited to law enforcement use and riddled with caveats).

Despite this, substantial changes to the plan look unlikely at this relatively late point in the EU’s co-legislative process. However, because the Council and Parliament are still debating their positions, and no final agreement is expected until 2023, significant details (if not the entire legislative framework) could still change.

An analysis of the Act written for the UK’s Ada Lovelace Institute by Lilian Edwards, a leading internet law academic who holds a chair in law, innovation, and society at Newcastle University, highlights some of the framework’s limitations, which she argues stem from its being tied to existing EU internal market law and, specifically, from the decision to model it on existing EU product regulations.