I started thinking: What if, within a year, we bridge that gap and then it scales up? What’s going to happen?

What did you do once you realized this?

At the end of March, before the first [Future of Life Institute] letter [calling on AI labs to immediately pause giant AI experiments] came out, I reached out to Geoff [Hinton]. I tried to convince him to sign the letter. I was surprised to see that we had independently arrived at the same conclusion.

This reminds me of when Isaac Newton and Gottfried Leibniz independently discovered calculus at the same time. Was the moment ripe for multiple, independent discovery?

Don’t forget, we had realized something that others had already discovered.

Also, Geoff argued that digital computing technologies have fundamental advantages over brains. In other words, even if we only figure out the principles that are sufficient to explain most of our intelligence and put that in machines, the machines would automatically be smarter than us because of technical things like the ability to read huge quantities of text and integrate it much faster than a human could, perhaps tens of thousands or millions of times faster.

If we were to bridge that gap, we would have machines that were smarter than us. What would that mean in practice? Nobody knows. But you could easily imagine that they would be better than us at things like programming, launching cyberattacks, or designing things that biologists or chemists currently design by hand.

I’ve been working for the past three years on machine learning for science, particularly applied to chemistry and biology. The goal was to help design better medicines for fighting pandemics and better materials for fighting climate change. But the same techniques could be used to design something lethal. That realization slowly accumulated, and I signed the letter.

Your reckoning drew a lot of attention, including the BBC article. How did you fare?

The media forced me to articulate all these thoughts. That was a good thing. More recently, in the past few months, I’ve been thinking more about what we should do in terms of policy. How do we mitigate the risks? I’ve also been thinking about countermeasures.

Some might say, “Oh, Yoshua is trying to scare people.” But I’m a positive person. I’m not a doomer, whatever people may call me. There’s a problem, and I’m thinking about solutions. I want to discuss them with others who may have something to bring. The research on improving AI’s capabilities is racing ahead because there’s now a lot, a lot, more money invested in it. That means mitigating the largest risks is urgent.
