Releasing it — despite potential imperfections — was a critical example of Microsoft’s “frantic pace” to incorporate generative A.I. into its products, he said. Executives at a news briefing on Microsoft’s campus in Redmond, Wash., repeatedly said it was time to get the tool out of the “lab” and into the hands of the public.
“I feel especially in the West, there is a lot more of like, ‘Oh, my God, what will happen because of this A.I.?’” Mr. Nadella said. “And it’s better to sort of really say, ‘Hey, look, is this actually helping you or not?’”
Oren Etzioni, professor emeritus at the University of Washington and founding chief executive of the Allen Institute for AI, a prominent lab in Seattle, said Microsoft “took a calculated risk, trying to control the technology as much as it can be controlled.”
He added that many of the most troubling cases involved pushing the technology beyond ordinary behavior. “It can be very surprising how crafty people are at eliciting inappropriate responses from chatbots,” he said. Referring to Microsoft officials, he continued, “I don’t think they expected how bad some of the responses would be when the chatbot was prompted in this way.”
To hedge against problems, Microsoft gave just a few thousand users access to the new Bing, though it said it planned to expand to millions more by the end of the month. To address concerns over accuracy, it provided hyperlinks and references in its answers so users could fact-check the results.
The caution was informed by the company’s experience nearly seven years ago when it introduced a chatbot named Tay. Users almost immediately found ways to make it spew racist, sexist and other offensive language. The company took Tay down within a day, never to release it again.
Much of the training of the new chatbot focused on protecting against that kind of harmful response, as well as against scenarios involving violence, such as planning an attack on a school.