Bing’s new chat-style interface is a more dramatic departure from the traditional search box. In a demonstration, Microsoft vice president of search and devices Yusef Mehdi asked the chatbot to write a five-day itinerary for a trip to Mexico City, and then to turn what it came up with into an email he could send to his family. The bot credited its sources—a series of links to travel sites—at the bottom of its lengthy response. “We care a bunch about driving content back to content creators,” Mehdi said. “We make it easy for people to click through to get to those sites.”

Microsoft has also incorporated aspects of ChatGPT’s underlying technology into a new sidebar in the company’s Edge browser. Users can prompt the tool to summarize a long and complex financial document, or to compare it to another. It’s possible to prompt the chatbot to turn those insights into an email, a list, or a social post with a particular tone, such as professional or funny. In a demo, Mehdi directed the bot to craft an “enthusiastic” update to post on his profile on the company’s social media service LinkedIn.

ChatGPT has caused a stir since OpenAI launched the chatbot in November, astounding and thrilling users with its fluid, clear responses to written prompts and questions. The bot is based on GPT-3, an OpenAI algorithm trained on reams of text from the web and other sources that uses the patterns it has picked up to generate text of its own. Some investors and entrepreneurs have heralded the technology as a revolution, with the potential to upend just about any industry.

Some AI experts have urged caution, warning that the technology underlying ChatGPT cannot distinguish between truth and fiction, and is prone to “hallucinations”—making up information in detailed and sometimes convincing ways. Text generation technology has also been shown capable of replicating unsavory language found in its training data.

Sarah Bird, Microsoft’s head of responsible AI, said today that early tests showed the tool was able to, for example, help someone plan an attack on a school, but that the tool can now “identify and defend against” the use of the chatbot for that sort of harmful query. She said human testers and OpenAI’s technology would work together to rapidly test, analyze, and improve the service.

Bird also acknowledged that Microsoft has not fully solved the hallucination problem. “We have improved it tremendously since where we started, but there is still more to do there,” she said.

OpenAI began as a nonprofit focused on making AI beneficial, but it has operated as a commercial venture with significant investment from Microsoft since 2019, and it recently secured a new commitment from the tech giant reportedly worth about $10 billion.

Microsoft has already commercialized a version of the text generation technology inside ChatGPT in the form of Copilot, a tool that helps developers by generating programming code. Microsoft says that experiments show Copilot can reduce the amount of time required to complete a coding task by 40 percent.

Additional reporting by Will Knight.
