In 2016, Lyft co-founder and president John Zimmer predicted that private car ownership in major U.S. cities would “all but end” by 2025.

In 2021, some experts aren’t sure when, if ever, individuals will be able to purchase steering-wheel-free cars that drive themselves off the lot.

In contrast to investors and CEOs, academics who study artificial intelligence, systems engineering and autonomous technologies have long said that creating a fully self-driving automobile would take many years, perhaps decades. Now some are going further, saying that despite investments already topping $80 billion, we may never get the self-driving cars we were promised. At least not without major breakthroughs in AI, which almost no one is predicting will arrive anytime soon—or a complete redesign of our cities.

Even those who have hyped this technology most are beginning to admit publicly that naysaying experts may have a point. In 2019, Tesla CEO Elon Musk doubled down on his earlier predictions and said autonomous Tesla robotaxis would debut by 2020.

“A major part of real-world AI has to be solved to make unsupervised, generalized full self-driving work,” Mr. Musk himself recently tweeted. Translation: For a car to drive like a human, researchers have to create AI on par with one. Researchers and academics in the field will tell you that’s something we haven’t got a clue how to do. Mr. Musk, on the other hand, seems to believe that’s exactly what Tesla will accomplish. He continually hypes the next generation of the company’s “Full Self Driving” technology—actually a driver-assist system with a misleading name—which is currently in beta testing.

A recently published paper called “Why AI is Harder Than We Think” sums up the situation nicely. In it, Melanie Mitchell, a computer scientist and professor of complexity at the Santa Fe Institute, notes that as deadlines for the arrival of autonomous vehicles have slipped, people within the industry are redefining the term. Since these vehicles require a geographically constrained test area and ideal weather conditions—not to mention safety drivers or at least remote monitors—makers and supporters of these vehicles have incorporated all of those caveats into their definition of autonomy.

Even with all those asterisks, Dr. Mitchell writes, “none of these predictions has come true.”

In vehicles you can actually buy, autonomous driving has failed to manifest as anything more than enhanced cruise control, like GM’s Super Cruise or the optimistically named Tesla Autopilot. In San Francisco, GM subsidiary Cruise is testing autonomous vehicles with no driver behind the wheel but a human monitoring the vehicle’s performance from the back seat. And there’s only one commercial robotaxi service operating in the U.S. with no human drivers at all, a small-scale operation limited to low-density parts of the Phoenix metro area, from Alphabet subsidiary Waymo.

An autonomous car by General Motors subsidiary Cruise on a test drive in San Francisco in 2019.

Even so, Waymo vehicles have been involved in minor accidents in which they were rear-ended, and their confusing (to humans) behavior was cited as a possible cause. Recently, one was confused by traffic cones at a construction site.

“I am not aware we are struck or rear-ended any more than a human driver,” says Nathaniel Fairfield, a software engineer and head of the “behavior” team at Waymo. The company’s self-driving vehicles have been programmed to be cautious—“the opposite of the canonical teenage driver,” he adds.

Chris Urmson is head of autonomous trucking startup Aurora, which recently acquired Uber’s self-driving division. (Uber also invested $400 million in Aurora.) “We’re going to see self-driving vehicles on the road doing useful things in the next couple of years, but for it to become ubiquitous will take time,” he says.

Key to Aurora’s initial rollout, says Mr. Urmson, is that its vehicles will operate only on highways where the company has already created a high-resolution, three-dimensional map. Aurora’s eventual goal is for both trucks and cars using its systems to travel beyond those highways, but Mr. Urmson declined to say when that might happen.
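In software terms, that constraint boils down to a lookup: is the car on a stretch of road with a fresh, high-resolution map? The sketch below is purely illustrative (the road-segment names, the 30-day freshness threshold and the autonomy_allowed function are inventions for this example, not Aurora’s actual code):

```python
from datetime import datetime, timedelta

# Illustrative staleness threshold: treat any map older than this as unusable.
MAX_MAP_AGE = timedelta(days=30)

# Toy "HD map" index: road-segment id -> date that segment was last mapped.
hd_map_index = {
    "I-45_mile_102": datetime(2021, 5, 1),
    "I-45_mile_103": datetime(2021, 5, 1),
}

def autonomy_allowed(segment_id: str, now: datetime) -> bool:
    """Engage self-driving only on segments with a recent high-resolution map."""
    mapped_at = hd_map_index.get(segment_id)
    if mapped_at is None:
        return False  # unmapped road: hand control back to a human
    return now - mapped_at <= MAX_MAP_AGE  # stale map: also fall back

print(autonomy_allowed("I-45_mile_102", datetime(2021, 5, 20)))   # True: mapped 19 days ago
print(autonomy_allowed("FM_1960_mile_4", datetime(2021, 5, 20)))  # False: never mapped
```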

The slow rollout of limited and constantly human-monitored “autonomous” vehicles was predictable, and even predicted, years ago. But some CEOs and engineers argued that new self-driving capabilities would emerge if these systems could just log enough miles on roads. Now, some are taking the position that all the test data in the world can’t make up for AI’s fundamental shortcomings.

Decades of breakthroughs in the part of artificial intelligence known as machine learning have yielded only the most primitive forms of “intelligence,” says Mary Cummings, a professor at Duke University and director of its Humans and Autonomy Lab, who has advised the Department of Defense on AI.

To gauge today’s machine-learning systems, she developed a four-level scale of AI sophistication. The simplest kind of thinking starts with skill-based “bottom-up” reasoning. Today’s AIs are quite good at things like teaching themselves to stay within lines on a highway. The next step up is rule-based learning and reasoning (i.e., what to do at a stop sign). After that, there’s knowledge-based reasoning. (Is it still a stop sign if half of it is covered by a tree branch?) And at the top is expert reasoning: the uniquely human skill of being dropped into a completely novel scenario and applying our knowledge, experience and skills to get out in one piece.
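For readers who think in code, the hierarchy can be written down as a simple ordered type. The sketch below is only a paraphrase of Dr. Cummings’s scale (the level names and comments approximate her descriptions above):

```python
from enum import IntEnum

class ReasoningLevel(IntEnum):
    """Paraphrase of Dr. Cummings's four-level scale, lowest to highest."""
    SKILL_BASED = 1      # bottom-up pattern learning, e.g. staying within lane lines
    RULE_BASED = 2       # applying explicit rules, e.g. what to do at a stop sign
    KNOWLEDGE_BASED = 3  # reasoning about meaning, e.g. a half-covered stop sign
    EXPERT = 4           # improvising safely in a completely novel scenario

# Per Dr. Cummings, today's deep-learning systems are reliable only at the
# bottom of this scale; the driverless-car problems begin at level 3.
for level in ReasoningLevel:
    print(level.value, level.name)
```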

Problems with driverless cars really materialize at that third level. Today’s deep-learning algorithms, the elite of the machine-learning variety, aren’t able to achieve knowledge-based representation of the world, says Dr. Cummings. And human engineers’ attempts to make up for this shortcoming—such as creating ultra-detailed maps to fill in blanks in sensor data—tend not to be updated frequently enough to guide a vehicle in every possible situation, such as encountering an unmapped construction site.

Machine-learning systems, which are excellent at pattern-matching, are terrible at extrapolation—transferring what they have learned from one domain into another. For example, they can identify a snowman on the side of the road as a potential pedestrian, but can’t tell that it’s actually an inanimate object that’s highly unlikely to cross the road.

“When you’re a toddler, you’re taught the hot stove is hot,” says Dr. Cummings. But AI isn’t great at transferring the knowledge of one stove to another stove, she adds. “You have to teach that for every single stove that’s in existence.”
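A toy program makes the limitation concrete. In the hypothetical sketch below (the features and training examples are invented for illustration), a nearest-neighbor classifier labels a snowman a pedestrian because shape similarity dominates the match; nothing in the system knows that snowmen don’t walk:

```python
# Hypothetical illustration: a pattern matcher labels a snowman "pedestrian"
# because its shape matches, with no knowledge that it cannot cross the road.
# Features: (human_shaped, moves_on_its_own), each on a 0-to-1 scale.
training_data = [
    ((1.0, 1.0), "pedestrian"),
    ((1.0, 0.9), "pedestrian"),
    ((0.0, 1.0), "animal"),
    ((0.0, 0.0), "mailbox"),
]

def nearest_neighbor(features):
    """Classify by the closest training example (squared Euclidean distance)."""
    return min(
        training_data,
        key=lambda example: sum((a - b) ** 2 for a, b in zip(features, example[0])),
    )[1]

snowman = (1.0, 0.0)  # human-shaped, but it never moves on its own
print(nearest_neighbor(snowman))  # -> "pedestrian": shape dominates the match
```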

Some researchers at MIT are trying to fill this gap by going back to basics. They have launched a major effort to understand, in engineering terms, how babies learn, in order to translate those insights into future AI systems.

“Billions of dollars have been spent in the self-driving industry and they are not going to get what they thought they were going to get,” says Dr. Cummings. This doesn’t mean we won’t eventually get some form of “self-driving” car, she says. It just “won’t be what everybody promised.”

But, she adds, small, low-speed shuttles working in well-mapped areas, bristling with sensors such as lidar, could allow engineers to get the amount of uncertainty down to a level that regulators and the public would find acceptable. (Picture shuttles to and from the airport, driving along specially constructed lanes, for example.)

Mr. Fairfield of Waymo says his team sees no fundamental technological barriers to making self-driving robotaxi services like his company’s widespread. “If you’re overly conservative and you ignore reality, you say it’s going to take 30 years—but it’s just not,” he adds.

A growing number of experts suggest that the path to full autonomy isn’t primarily AI-based after all. Engineers have solved countless other complicated problems—including landing spacecraft on Mars—by dividing the problem into small chunks, so that clever humans can craft systems to handle each part. Raj Rajkumar, a professor of engineering at Carnegie Mellon University with a long history of working on self-driving cars, is optimistic about this path. “It’s not going to happen overnight, but I can see the light at the end of the tunnel,” he says.
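Concretely, divide-and-conquer means a pipeline of narrow, separately engineered modules rather than one monolithic AI. The stubbed-out sketch below uses the industry’s generic shorthand for the stages (perception, prediction, planning, control); it is an illustration of the approach, not any company’s architecture:

```python
# Generic sketch of a modular self-driving pipeline: each stage is a narrow,
# separately engineered problem, per the divide-and-conquer approach above.
# Every name here is an illustrative stand-in.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str        # e.g. "pedestrian", "vehicle", "traffic cone"
    position: tuple  # (x, y) in meters, in the car's frame of reference

def perceive(sensor_frame) -> list[Detection]:
    """Perception: turn raw camera/lidar data into labeled objects."""
    ...

def predict(objects: list[Detection]) -> list:
    """Prediction: estimate where each object will be over the next few seconds."""
    ...

def plan(predictions, hd_map) -> list:
    """Planning: choose a safe, rule-abiding trajectory given the map."""
    ...

def control(trajectory) -> dict:
    """Control: convert the trajectory into steering/throttle/brake commands."""
    ...

def drive_one_tick(sensor_frame, hd_map):
    # Humans engineer and validate each stage independently; only some stages
    # (chiefly perception) lean heavily on machine learning.
    return control(plan(predict(perceive(sensor_frame)), hd_map))
```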

This is the primary strategy Waymo has pursued to get its autonomous shuttles on the road, and as a result, “we don’t think that you need full AI to solve the driving problem,” says Mr. Fairfield.

Mr. Urmson of Aurora says that his company combines AI with other technologies to come up with systems that can apply general rules to novel situations, as a human would.

Getting to autonomous vehicles the old-fashioned way, with tried-and-true systems engineering, would still mean spending huge sums outfitting our roads with transponders and sensors to guide and correct the robot cars, says Dr. Mitchell. And they would remain limited to certain areas, and certain weather conditions—with human teleoperators on standby should things go wrong, she adds.

This Disney animatronic version of our self-driving future would be a far cry from creating artificial intelligence that could simply be dropped into any vehicle, immediately replacing a human driver. It could mean safer human-driven cars, and fully autonomous vehicles in a handful of carefully monitored areas. But it would not be the end of car ownership—not anytime soon.

Write to Christopher Mims at [email protected]

Copyright ©2020 Dow Jones & Company, Inc. All Rights Reserved.

This post first appeared on wsj.com
