When we think of artificial intelligence, many of us jump to visions of the future from science fiction—hellscapes like The Matrix, Black Mirror, and The Terminator. But that isn’t necessarily the way things will turn out. Two leading experts in the technology think there’s more cause for optimism than pessimism, even though there will be speed bumps along the way.

Kai-Fu Lee is the former head of Microsoft Research Asia and of Google China. He’s now the chairman and CEO of Sinovation Ventures, a venture capital firm with nearly $3 billion in assets; roughly 70 percent of its investments are AI-related. Lee is also the author of the 2018 book AI Superpowers and the 2021 book AI 2041: Ten Visions for Our Future, which he coauthored with science fiction writer Stanley Chan (Chen Qiufan).

Yoky Matsuoka is a cofounder of Google X, the former CTO of Google Nest, and a former executive at Apple, Twitter, and elsewhere. She’s now the founder and CEO of Yohana, an AI-enhanced personal assistant service that she describes as a wellness company aimed at helping families prioritize well-being and stay present. Lee and Matsuoka talked with WIRED global editorial director Gideon Lichfield at the RE:WIRED conference.

Lee thinks AI can be a big help to health care, though he also sees potential stumbling blocks. Consider an AI program that helps 5 percent of patients but hurts 3 percent. AI practitioners will likely see that as a good thing, because it helps more people than it harms. But doctors will view it differently, because that 3 percent might not have been misdiagnosed by human doctors. So the two worlds will need to learn to work together. He doesn’t see that as a downside, necessarily, but as a point of friction that will need to be overcome.

People think of AI as a black box, Lee says, where the computer makes a decision based on thousands of calculations and we don’t know what they are or why it arrived at its conclusions. It’s really hard for us to trust something like that. Lee favors creating an AI that can explain, in human terms, perhaps the top three calculations it made. “As a society I think we need to move away from, ‘Explain the complex black box perfectly otherwise we won’t use you!’” Lee muses. Instead, he suggests asking AI to “explain yourself reasonably and understandably to a level and degree that is no worse than a human making an explanation of how he or she made a decision. If we change that benchmark, then I think it’s feasible.”
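Lee’s idea of surfacing “the top three calculations” echoes feature-attribution techniques in explainable AI. The sketch below is a minimal illustration of that idea, not anything Lee or his companies describe: the feature names, weights, and the simple linear scoring are all hypothetical.

```python
# Minimal sketch: surface the top three contributions behind a prediction.
# The feature names, weights, and patient values are hypothetical.

def explain_top_factors(weights: dict[str, float], features: dict[str, float], k: int = 3):
    """Return the overall score and the k features whose weight * value
    contributed most (by magnitude) to that score."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]
    return score, top

# Hypothetical model weights and one patient's (normalized) readings.
weights = {"blood_pressure": 0.8, "age": 0.3, "cholesterol": 0.5, "exercise_hours": -0.6}
patient = {"blood_pressure": 1.4, "age": 0.9, "cholesterol": 1.1, "exercise_hours": 0.2}

score, top_factors = explain_top_factors(weights, patient)
print(f"risk score: {score:.2f}")
for name, contribution in top_factors:
    print(f"  {name}: {contribution:+.2f}")
```

A human-readable summary of those three contributions is the kind of explanation Lee suggests holding to a human benchmark, rather than demanding a perfect account of the whole model.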

Matsuoka sees great potential for AI in caregiving, too. She cites her parents, who are both aging and in declining health. As an only child, she wants to help care for them while also respecting their privacy and independence. She says both she and her parents would like devices that check that they’re OK every day. When something seems wrong, and with their consent, she could receive some of that data, be alerted if one of them had fallen, and call for a caregiver. She’d like to build a world where sensors and people work together to predict and prevent bad things from happening. For example, sensors could show that one of her parents is moving differently, or that something in the house is broken and could be a tripping hazard.
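A crude version of the check Matsuoka describes is flagging when a daily sensor reading drifts far from a person’s own baseline. The sketch below is only illustrative: the step-count data, baseline window, and threshold are hypothetical, and a real system would use richer signals and consent controls.

```python
# Minimal sketch: flag a day whose reading sits far outside the recent baseline.
# All readings and the threshold are hypothetical, for illustration only.
from statistics import mean, stdev

def is_unusual(history: list[float], today: float, threshold: float = 2.5) -> bool:
    """Return True if today's reading is more than `threshold` standard
    deviations away from the mean of the recent history."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return today != baseline
    return abs(today - baseline) / spread > threshold

# Hypothetical daily step counts for one parent over the past two weeks.
recent_steps = [4200, 3900, 4100, 4400, 4000, 3800, 4300,
                4150, 3950, 4250, 4050, 4350, 3900, 4100]

if is_unusual(recent_steps, today=1200):
    print("Movement looks unusual today; consider checking in or alerting a caregiver.")
```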
