If a computer could match the human brain by 2052, then we need better ways to build it
“Progress in AI is something that will take a while to happen, but [that] doesn’t make it science fiction.” So Stuart Russell, the University of California computing professor, told the Guardian at the weekend. The scientist said researchers had been “spooked” by their own success in the field. Prof Russell, co-author of the leading artificial intelligence (AI) textbook, is giving this year’s BBC Reith Lectures – which have just begun – and his doubts appear increasingly relevant.
With little debate about its downsides, AI is becoming embedded in society. Machines now recommend online videos to watch, perform surgery and send people to jail. The science of AI is a human enterprise, and one that ought to operate within socially agreed limits. The risks, however, are not being properly weighed. There are two emerging approaches to AI. The first views it in engineering terms, with algorithms trained to perform specific tasks. The second raises deeper philosophical questions about the nature of human knowledge.