A company could choose the most obscure, nontransparent systems architecture available, claiming (rightly, under this bad definition) that it was “more AI,” in order to access the prestige, investment, and government support that claim entails. For example, one giant deep neural network could be given the task not only of learning language but also of debiasing that language on several criteria, say, race, gender, and socio-economic class. The company might then also sneak in a little slant so that the system favors preferred advertisers or a political party. This would be called AI under either definition, so it would certainly fall within the remit of the AIA. But would anyone reliably be able to tell what was going on with this system? Under the original AIA definition, a simpler way to get the job done would equally be considered “AI,” so there would be no such incentive to use intentionally complicated systems.
Of course, under the new definition, a company could also switch to using more traditional AI, like rule-based systems or decision trees (or just conventional software). It would then be free to do whatever it wanted: this is no longer AI, so there is no longer any special regulation to check how the system was developed or where it is applied. Programmers could code up bad, corrupt instructions that deliberately or merely negligently harm individuals or populations. Under the new presidency draft, such a system would no longer get the extra oversight and accountability procedures it would have under the original AIA draft. Incidentally, this route also avoids tangling with the extra law-enforcement resources the AIA requires member states to fund in order to enforce its new requirements.
Limiting where the AIA applies by complicating and constraining the definition of AI is presumably an attempt to reduce the costs of its protections for both businesses and governments. Of course, we do want to minimize the costs of any regulation or governance; public and private resources alike are precious. But the AIA already does that, and does it in a better, safer way. As originally proposed, the AIA applies only to systems we really need to worry about, which is as it should be.
In the AIA’s original form, the vast majority of AI (like that in computer games, vacuum cleaners, or standard smartphone apps) is left to ordinary product law and would receive no new regulatory burden at all, or at most basic transparency obligations; for example, a chatbot should identify that it is AI, not an interface to a real human.
The most important part of the AIA is where it describes what sorts of systems are potentially hazardous to automate. It then regulates only these. Both drafts of the AIA say that there are a small number of contexts in which no AI system should ever operate: for example, identifying individuals in public spaces from their biometric data, creating social credit scores for governments, or producing toys that encourage dangerous behavior or self-harm. These are all simply banned, more or less. There are far more application areas for which using AI requires government and other human oversight: situations with life-altering outcomes for people, such as deciding who gets which government services, who gets into which school, or who is awarded what loan. In these contexts, European residents would be provided with certain rights, and their governments with certain obligations, to ensure that the artifacts have been built and are functioning correctly and justly.
Making the AIA not apply to some of the systems we need to worry about, as the “presidency compromise” draft could do, would leave the door open for corruption and negligence. It would also legalize practices the European Commission was trying to protect us from, such as social credit systems and generalized facial recognition in public spaces, as long as a company could claim its system wasn’t “real” AI.