With growing concern inside and outside the industry, it is clear that counting on developers to police themselves is not sufficient.
The horse has not merely bolted; it is halfway down the road and picking up speed – and no one is sure where it is heading. The potential benefits of artificial intelligence – such as developing lifesaving drugs – are undeniable. But with the launch of hugely powerful generative text and image models such as GPT-4 and Midjourney, the risks and challenges it poses are clearer than ever: from vast job losses to entrenched discrimination and an explosion of disinformation. The shock is not only how far the technology has progressed, but how fast it has done so. The concern is what happens as companies race to outdo each other.
The alarm is being sounded within the industry itself. This month more than 1,000 experts signed an open letter urging a pause in development – and warning that if researchers do not pull back from this “out-of-control race”, governments should step in. A day later, Italy became the first western country to temporarily ban ChatGPT. Full-scale legislation will take time. But OpenAI, which released GPT-4, is unlikely to agree to voluntary restraints that its competitors spurn.