Second, it could instruct any federal agency procuring an AI system that has the potential to “meaningfully impact [our] rights, opportunities, or access to critical resources or services” to require that the system comply with these practices, and that vendors provide evidence of this compliance. This recognizes the power of the federal government as a customer that can shape business practices. After all, it is the biggest employer in the country and could use its buying power to dictate best practices for the algorithms used to, for instance, screen and select candidates for jobs.

Third, the executive order could demand that any entity taking federal dollars (including state and local entities) ensure that the AI systems it uses comply with these best practices. This recognizes the important role of federal investment in states and localities. For example, AI has been implicated in many components of the criminal justice system, including predictive policing, surveillance, pre-trial incarceration, sentencing, and parole. Although most law enforcement practices are local, the Department of Justice gives out federal grants to state and local law enforcement and could attach conditions to these grants for how they ought to use this technology.

Finally, this executive order could direct agencies with regulatory authority to update and expand their rulemaking to cover processes within their jurisdiction that involve AI. There are already some initial efforts underway to regulate the use of AI in medical devices, hiring algorithms, and credit scoring, and these initiatives could be further expanded. Worker surveillance and property valuation systems are just two examples of areas that would benefit from this kind of regulatory action.

Of course, the kind of testing and monitoring regime for AI systems that I’ve outlined here is likely to provoke a mixture of concerns. Some may argue, for example, that other countries will overtake us if we slow down to put in all these guardrails. But other countries are busy passing their own laws that place extensive guardrails and restrictions on AI systems, and any American businesses seeking to operate in these countries will have to comply with their rules anyway. The EU is about to pass an expansive AI Act that includes many of the provisions I described above, and even China is placing limits on commercially deployed AI systems that go far beyond what we are currently willing to countenance.

Others may express concern that this expansive and onerous set of requirements might be hard for a small business to comply with. This could be addressed by linking the requirements to the degree of impact: a piece of software that can affect the livelihoods of millions should be thoroughly vetted, regardless of how big or small the developer is. And on the flip side, an AI system that we use as individuals for recreational purposes shouldn’t be subject to the same strictures and restrictions.

There are also likely to be concerns about whether these requirements are at all practical. Here, one should not underestimate the power of the federal government as a market maker. An executive order that calls for testing and validation frameworks will provide incentives for businesses that want to translate best practices into viable commercial testing regimes. We are already seeing the responsible AI sector fill with firms providing algorithmic auditing and evaluation services, industry consortia issuing detailed guidelines that vendors are expected to comply with, and large consulting firms offering guidance to their clients. And there are nonprofit, independent entities like Data and Society (disclaimer: I sit on their board) that have set up entire new labs to develop tools that assess how AI systems will affect different populations of people.

We’ve done the research, we’ve built the systems, and we’ve identified the harms. There are established ways to make sure that the technology we build and deploy can benefit all of us, while not harming those who are already buffeted by blows from a deeply unequal society. The time for studying is over—it’s time for the White House to issue an executive order, and to act.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here. Submit an op-ed at [email protected].
