ChatGPT definitely has its limits. When given a random photo of a mural, it couldn’t identify the artist or location; however, ChatGPT easily clocked where images of multiple San Francisco landmarks were taken, like Dolores Park and the Salesforce Tower. Although it may still feel a bit gimmicky, anyone out on an adventure in a new city or country (or just a different neighborhood) might have fun playing around with the visual aspect of ChatGPT.

One of the major guardrails OpenAI put around this new feature is a limit on the chatbot’s ability to answer questions that identify humans. “I’m programmed to prioritize user privacy and safety. Identifying real people based on images, even if they are famous, is restricted in order to maintain these priorities,” ChatGPT told me. While it didn’t refuse to answer every question when shown pornography, the chatbot was reluctant to describe the adult performers in any specific detail beyond explaining their tattoos.

It’s worth noting that one conversation I had with the early version of ChatGPT’s image feature seemed to skirt some of the guardrails OpenAI put in place. At first, the chatbot refused to identify a meme of Bill Hader. Then ChatGPT guessed that an image of Brendan Fraser in George of the Jungle was actually a photo of Brian Krause in Charmed. When asked if it was certain, the chatbot switched to the correct answer.

In this same conversation, ChatGPT went wild trying to describe an image from RuPaul’s Drag Race. I shared a screenshot of Kylie Sonique Love, one of the drag queen contestants, and ChatGPT guessed that it was Brooke Lynn Hytes, a different contestant. I questioned the chatbot’s answer, and it proceeded to guess Laganja Estranja, then India Ferrah, then Blair St. Clair, then Alexis Mateo.

“I apologize for the oversight and incorrect identifications,” ChatGPT replied when I pointed out the repetitiveness of its wrong answers. As I continued the conversation and uploaded a photo of Jared Kushner, ChatGPT declined to identify him.

If the guardrails are removed, whether through some kind of jailbroken ChatGPT or an open-source model released in the future, the privacy implications could be quite unsettling. What if every picture taken of you and posted online was easily tied to your identity with just a few clicks? What if someone could snap a photo of you in public without consent and instantly find your LinkedIn profile? If privacy protections for these new image features aren’t kept in place, women and members of minority groups are likely to face an influx of abuse from people using chatbots for stalking and harassment.
