AN artificial intelligence program has created incredibly realistic images of people that ‘never existed’ – here’s what they look like.

OpenAI is an AI research lab founded by Elon Musk, Sam Altman, and several others in 2015.

The AI-generated humans look hyper-realistic. Credit: Twitter/Patrick Clair

Twitter user Patrick Clair shared some images of humans generated by the AI program DALL-E 2. Credit: Twitter/Patrick Clair

The company has developed intelligent programs that can write text from descriptions given in ‘natural language’.

In recent weeks, OpenAI introduced DALL-E 2 – a system that can create realistic images and art from those same natural-language descriptions.

“DALL·E 2 has learned the relationship between images and the text used to describe them,” OpenAI officials said on the company’s webpage.

“It uses a process called ‘diffusion,’ which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.”
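In plainer terms, a diffusion model is trained to remove noise from images, and generating a new image runs that process in reverse, starting from pure noise. Below is a minimal illustrative sketch of that loop – not OpenAI’s actual code. The names generate_image, denoise_model and dummy_model are hypothetical, and the text conditioning DALL-E 2 uses to steer the image toward a written prompt is omitted entirely.

```python
import numpy as np

def generate_image(denoise_model, steps=50, size=(64, 64, 3), seed=0):
    """Illustrative reverse-diffusion loop: start from a pattern of random
    dots and repeatedly remove the noise a trained model predicts."""
    rng = np.random.default_rng(seed)
    image = rng.standard_normal(size)              # "a pattern of random dots"
    for t in reversed(range(steps)):               # step backwards through the noise schedule
        predicted_noise = denoise_model(image, t)  # model estimates the noise still present
        image = image - predicted_noise / steps    # peel a little of that noise away
    return image

# Hypothetical stand-in for a trained denoiser (the real one is a large
# neural network that is also conditioned on the text prompt):
dummy_model = lambda img, t: img * 0.1
sample = generate_image(dummy_model)
print(sample.shape)  # (64, 64, 3)
```

In a real system the denoiser is a neural network trained on millions of image-caption pairs, which is what lets the random dots resolve into a face, an astronaut, or anything else the prompt describes.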


Since its inception, some have been using the technology to create hyper-realistic images of humans.

However, OpenAI had banned users of the program from sharing AI-generated images of human faces, per The Daily Star.

As of last week, though, the ban has been lifted, and one Twitter user named Patrick Clair shared images of some digitally created humans.

“OpenAI has decided that it’s safe for us to start sharing images containing realistic faces,” Clair tweeted.


“DALL-E2 is actually excellent at faces. From here on, this feed will largely be machine dreams of people that never existed. Created with [OpenAI].”

Attached to Clair’s tweet were images of five realistic-looking humans of varying ages and backgrounds.

“Maybe it’s just me, but I want to be able to differentiate these fakes from real photos for as long as possible,” a user tweeted back to Clair.

Another Twitter user called the software a “gamechanger”, noting that people who didn’t take the technology seriously before should now.

“This is the birth of something that will have a massive impact on society,” they added.

People were initially banned from sharing images because the AI was generating racist imagery and over-sexualized images of women, according to Geo.tv.

OpenAI also reportedly didn’t want users to manipulate and share realistic images of celebrities and politicians.

In response, new safeguards were put in place that “reject attempts to create the likeness of any public figures, including celebrities,” according to a DALL-E 2 statement shared with Vice.

Currently, DALL-E 2 is in ‘closed testing’, with only select artists and researchers allowed to try the tool.

An image of an astronaut created by OpenAI’s DALL-E 2. Credit: OpenAI

However, interested parties can sign up for the product on the OpenAI website.

In the meantime, people can access a similar, smaller-scale program called DALL-E Mini.

This post first appeared on Thesun.co.uk
