For the last 30 years or so, children have been told not to believe everything they find online, but we may now need to extend this lesson to adults.

That’s because we are in the midst of a so-called ‘deepfake’ phenomenon, where artificial intelligence (AI) technology is being used to manipulate videos and audio in a way that replicates real life.

To help set an example of transparency, the world’s first ‘certified’ deepfake video has been released by AI studio Revel.ai.

This appears to show Nina Schick, a professional AI adviser, delivering a warning about how ‘the lines between real and fiction are becoming blurred’.

Of course, it is not really her, and the video has been cryptographically signed by digital authenticity company Truepic, declaring it contains AI-generated content.


‘Some say that truth is a reflection of our reality,’ the avatar says, slowly and clearly. ‘We are used to defining it with our very own senses.

WHAT ARE DEEPFAKES?

The technology behind deepfakes was developed in 2014 by Ian Goodfellow, who later became the director of machine learning at Apple’s Special Projects Group and a leader in the field.

The word is a combination of the terms ‘deep learning’ and ‘fake’, and refers to a form of artificial intelligence.

The system studies a target person in pictures and videos, allowing it to capture multiple angles and mimic their behavior and speech patterns.
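In code terms, that ‘studying’ is an adversarial training loop. Below is a minimal, illustrative PyTorch sketch of a generative adversarial network (GAN), the architecture Goodfellow introduced; the layer sizes, learning rates and flattened-image representation are arbitrary assumptions, and real deepfake pipelines are far more elaborate.

```python
# Minimal GAN sketch (illustrative only; sizes and rates are arbitrary).
# A generator learns to produce fakes while a discriminator learns to
# tell fakes from real samples; each improves by competing with the other.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # noise size, flattened image size

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),       # fake image in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),            # probability of "real"
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)
    fakes = generator(torch.randn(batch, latent_dim))

    # Discriminator step: score real images high, generated ones low.
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes.detach()), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: adjust weights so fakes get scored as real.
    g_loss = loss_fn(discriminator(fakes), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Trained at scale on images of one person, the same adversarial idea lets a model reproduce that person’s face and mannerisms convincingly.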

The technology gained attention during the election season, as many feared developers would use it to undermine political candidates’ reputations.


‘But what if our reality is changing? What if we can no longer rely on our senses to determine the authenticity of what we see and hear?

‘We’re at the dawn of artificial intelligence, and already the lines between real and fiction are becoming blurred.

‘A world where shadows are mistaken for the real thing, and sometimes we need to radically change our perspective to see things the way they really are.’

The video ends with a message reading: ‘This deepfake was created by Revel.ai with consent from Nina Schick and cryptographically signed by Truepic’.

Deepfakes are forms of AI which use ‘deep learning’ to manipulate audio, images or video, creating hyper-realistic, but fake, media content.

The term was coined in 2017 when a Reddit user posted manipulated porn videos to the forum. 

The videos swapped the faces of celebrities such as Gal Gadot, Taylor Swift and Scarlett Johansson onto porn stars without their consent.

Another notorious example of a deepfake or ‘cheapfake’ was a crude impersonation of Volodymyr Zelensky appearing to surrender to Russia in a video widely circulated on Russian social media last year. 

The clip shows the Ukrainian president speaking from his lectern as he calls on his troops to lay down their weapons and acquiesce to Putin’s invading forces. 

Savvy internet users immediately flagged the discrepancies between the colour of Zelensky’s neck and face, the strange accent, and the pixelation around his head.

Despite the entertainment value of deepfakes, some experts have warned about the dangers they might pose.

Concerns have been raised in the past about how they have been used to generate child sexual abuse videos and revenge porn, as well as political hoaxes.

In November, an amendment to the government’s Online Safety Bill made it illegal to use deepfake technology to create pornographic images and footage of people without their consent.

Dr Tim Stevens, director of the Cyber Security Research Group at King’s College London, said deepfake AI has the potential to undermine democratic institutions and national security.

He said the widespread availability of these tools could be exploited by states like Russia to ‘troll’ target populations in a bid to achieve foreign policy objectives and ‘undermine’ the national security of countries.

Earlier this month, an AI reporter was developed for China’s state-controlled newspaper.

The avatar was only able to answer pre-set questions, and the responses she gives heavily promote the line of the Central Committee of the Chinese Communist Party (CCP).

Dr Stevens added: ‘The potential is there for AIs and deepfakes to affect national security.

‘Not at the high level of defence and interstate warfare but in the general undermining of trust in democratic institutions and the media.

‘They could be exploited by autocracies like Russia to decrease the level of trust in those institutions and organisations.’

With the rise of freely available text-to-image and text-to-video AI tools, like DALL-E and Meta’s ‘Make-A-Video’, manipulated media will only become more widespread.

Indeed, it has been predicted that 90 per cent of online content will be generated or created using AI by 2025.

For example, at the end of last month, a deepfake photo of Pope Francis wearing an enormous white puffer jacket went viral and fooled thousands into believing it was real.

Social media users also debunked a supposedly AI-generated image of a cat with reptilian black and yellow splotches on its body, which had been declared a newly-discovered species.


The new video of Ms Schick is marked with a tamper-proof signature, which declares it is AI-generated, identifies its creator and gives it a timestamp of when it was made.

She told MailOnline: ‘It’s a kind of public service announcement so that people understand that the content you engage with is not necessarily what it seems to be.

‘It’s not about telling people this is true or this is false, it’s about this is how this was made – whether it was made by AI or not – so you make your own choice. 

‘By releasing this we want to show people who might feel absolutely overwhelmed, concerned or frightened by the pace of change and acceleration of AI-generated content the solutions to mitigate some of the risks around information integrity.

‘Our hope is also to try to force the hands of the platforms and the generative AI companies a little bit, who know that you can sign content, who know there’s an open standard for content authenticity, but they haven’t adopted it yet.

‘I think that AI is going to be a core part of the production process of almost all digital information, so if we do not have a way to authenticate that information, whether or not it’s generated by AI, we’re going to have a very difficult time navigating the digital information ecosystem.

‘Although consumers haven’t realised that they have the right to understand where the information they digest is coming from, hopefully this campaign shows that it is possible and that this is a right that they should demand.’


The signature-generating technology is compliant with the new standard developed by the Coalition for Content Provenance and Authenticity (C2PA).

This is an industry body, with members including Adobe, Microsoft and the BBC, which is working towards addressing the prevalence of misleading information online.
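To make the concept concrete, here is a hedged Python sketch of the underlying idea: hash the media file, bind that hash to a claim naming the creator, the AI-generated status and a timestamp, and sign the claim. This is not the actual C2PA manifest format; the field names and key handling are illustrative assumptions only.

```python
# Conceptual sketch of 'certified' media (NOT the real C2PA format;
# field names and key handling here are illustrative assumptions).
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_video(path: str, creator: str, key: Ed25519PrivateKey) -> dict:
    """Bind a hash of the file to the creator's identity and a timestamp."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    claim = {
        "sha256": digest,                 # ties the signature to the pixels
        "creator": creator,
        "ai_generated": True,             # the disclosure itself
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

def verify_video(path: str, manifest: dict, public_key) -> bool:
    """Any edit to the file or to the claim invalidates the signature."""
    with open(path, "rb") as f:
        if hashlib.sha256(f.read()).hexdigest() != manifest["claim"]["sha256"]:
            return False                  # the video itself was altered
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False                      # the claim was tampered with
```

Verification with the matching public key then answers exactly the questions the campaign raises: who made the video, when, and whether it is AI-generated.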

Ms Schick, Truepic and Revel.ai say that their video demonstrates that it is possible for a digital signature to increase transparency with regard to AI-generated content.

They hope it will work to eliminate confusion as to where a video came from, helping to make the internet a safer place.

‘When used well, AI is an amazing tool for storytelling and creative freedom in the entertainment industry,’ said Bob de Jong, Creative Director at Revel.ai.

‘The power of AI and the speed at which it’s developing is something the world has never seen before. 

‘It’s up to all of us, including content creators, to design an ethical, authenticated, and transparent world of content creation so we can continue to use AI where society can embrace and enjoy it, not be harmed by it.’

HOW TO SPOT A DEEPFAKE  

1. Unnatural eye movement. Eye movements that do not look natural — or a lack of eye movement, such as an absence of blinking — are huge red flags. It’s challenging to replicate the act of blinking in a way that looks natural. It’s also challenging to replicate a real person’s eye movements. That’s because someone’s eyes usually follow the person they’re talking to.

2. Unnatural facial expressions. When something doesn’t look right about a face, it could signal facial morphing. This occurs when one image has been stitched over another.

3. Awkward facial-feature positioning. If someone’s face is pointing one way and their nose is pointing another way, you should be skeptical about the video’s authenticity.

4. A lack of emotion. You also can spot what is known as ‘facial morphing’ or image stitches if someone’s face doesn’t seem to exhibit the emotion that should go along with what they’re supposedly saying.

5. Awkward-looking body or posture. Another sign is if a person’s body shape doesn’t look natural, or there is awkward or inconsistent positioning of head and body. This may be one of the easier inconsistencies to spot, because deepfake technology usually focuses on facial features rather than the whole body.

6. Unnatural body movement or body shape. If someone looks distorted or off when they turn to the side or move their head, or their movements are jerky and disjointed from one frame to the next, you should suspect the video is fake.

7. Unnatural colouring. Abnormal skin tone, discoloration, weird lighting, and misplaced shadows are all signs that what you’re seeing is likely fake.

8. Hair that doesn’t look real. You won’t see frizzy or flyaway hair. Why? Fake images won’t be able to generate these individual characteristics.

9. Teeth that don’t look real. Algorithms may not be able to generate individual teeth, so an absence of outlines of individual teeth could be a clue.

10. Blurring or misalignment. If the edges of images are blurry or visuals are misaligned — for example, where someone’s face and neck meet their body — you’ll know that something is amiss.

11. Inconsistent noise or audio. Deepfake creators usually spend more time on the video images rather than the audio. The result can be poor lip-syncing, robotic-sounding voices, strange word pronunciation, digital background noise, or even the absence of audio.

12. Images that look unnatural when slowed down. If you watch a video on a screen that’s larger than your smartphone or have video-editing software that can slow down a video’s playback, you can zoom in and examine images more closely. Zooming in on lips, for example, will help you see if they’re really talking or if it’s bad lip-syncing.

13. Hashtag discrepancies. There’s a cryptographic algorithm that helps video creators show that their videos are authentic. The algorithm is used to insert hashtags at certain places throughout a video. If the hashtags change, then you should suspect video manipulation.

14. Digital fingerprints. Blockchain technology can also create a digital fingerprint for videos. While not foolproof, this blockchain-based verification can help establish a video’s authenticity. Here’s how it works. When a video is created, the content is registered to a ledger that can’t be changed. This technology can help prove the authenticity of a video (a minimal sketch of the idea follows this list).

15. Reverse image searches. A search for an original image, or a reverse image search with the help of a computer, can unearth similar videos online to help determine if an image, audio, or video has been altered in any way. While reverse video search technology is not publicly available yet, investing in a tool like this could be helpful.
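To illustrate the ‘digital fingerprint’ idea in point 14, here is a toy Python sketch of an append-only hash chain, the core structure behind blockchain-style registration. Real systems add distribution, consensus and signatures; the class and field names here are assumptions for illustration only.

```python
# Toy hash-chain ledger (illustrative only; real blockchains also add
# distribution, consensus, and cryptographic signatures).
import hashlib
import json
import time

def fingerprint(video_bytes: bytes) -> str:
    """A video's fingerprint is simply a hash of its content."""
    return hashlib.sha256(video_bytes).hexdigest()

class Ledger:
    """Each entry hashes the previous one, so a past record can't be
    altered without breaking every later link in the chain."""

    def __init__(self) -> None:
        self.entries = [{"prev": "0" * 64, "fingerprint": "genesis",
                         "time": 0.0}]

    def register(self, video_bytes: bytes) -> dict:
        # Link the new entry to a hash of the previous entry.
        prev = hashlib.sha256(
            json.dumps(self.entries[-1], sort_keys=True).encode()
        ).hexdigest()
        entry = {"prev": prev, "fingerprint": fingerprint(video_bytes),
                 "time": time.time()}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every link; any tampering breaks the chain."""
        for i in range(1, len(self.entries)):
            expected = hashlib.sha256(
                json.dumps(self.entries[i - 1], sort_keys=True).encode()
            ).hexdigest()
            if self.entries[i]["prev"] != expected:
                return False
        return True
```

Checking a video then amounts to recomputing its fingerprint and confirming the fingerprint appears in a ledger whose chain still verifies.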


This post first appeared on Dailymail.co.uk
