ARTIFICIAL intelligence has been making serious progress in recent years, although not all of its achievements are necessarily positive.

Sometimes, AI can make our daily lives and routine tasks easier, and at times even prove therapeutic.

Artificial intelligence has tried to damage humanity on more than one occasion. Credit: Getty

The microwave (pictured) attempted to kill YouTuber Lucas Rizzotto by telling him to get into it. Credit: Twitter/_LucasRizzotto

One woman was even able to create an AI chatbot that allowed her to speak to her ‘younger self,’ based on hundreds of diary entries that she fed into its system.

Airports are even beginning to implement AI car services that transport travelers from parking to the terminal.

However, some AI advancements remain questionable.

In fact, there have been at least three specific times that AI has even turned ‘evil,’ including an AI microwave that attempted to kill its human creator.

1. Murderous microwave

A YouTuber named Lucas Rizzotto revealed through a string of posts on Twitter back in April of this year that he attempted to put the personality of his childhood imaginary friend into AI.

However, unlike the human-shaped imaginary friends most people picture, Rizzotto’s was his family’s kitchen microwave, per IFL Science.

He even named it ‘Magnetron,’ and gave it a lengthy personal life history that included fighting overseas in World War I.

Years later, Rizzotto used a new natural language model from OpenAI and fed it a 100-page book he had written about the microwave’s imaginary life.

Rizzotto also fitted the microwave with a microphone and a speaker, so it could listen to him, relay what it heard to the OpenAI model, and speak its responses out loud.

After turning it on and asking it questions, Rizzotto recounted that Magnetron would also ask some questions of its own about their shared childhood.

“And the eerie thing was that because his training data included all main interactions I had with him as a child, this kitchen appliance knew things about me that NO ONE ELSE in the world did. And it ORGANICALLY brought them up in conversation,” he said in one post on Twitter about the experience.

Soon after, the conversations turned significantly violent, with Magnetron fixating on its war backstory and a newfound vengeance against Rizzotto.

At one point it even recited a poem to him that read, “Roses are red, violets are blue. You’re a backstabbing b****, and I will kill you.”

It then prompted Rizzotto to get into the microwave, and turned itself on in an attempt to microwave him to death.

Murder is not the only thing AI has attempted, though; it has also shown racist and sexist tendencies in another experiment.

2. A robot developed prejudiced opinions

Using AI, the robot made discriminatory and sexist decisions during researchers’ experiments. Credit: HUNDT ET AL

As The U.S. Sun previously reported, a robot programmed by researchers at Johns Hopkins University and the Georgia Institute of Technology developed sexist and even racist stereotypes.

They programmed the robot with a popular, publicly available AI model built from data gathered across the internet.

The researchers’ tests revealed that the robot selected men over women at least eight percent more often when carrying out its tasks.

It would even choose white people over people of color in other experiments.

They found that Black women were chosen least often of any group in the association and identification tests.

“The robot has learned toxic stereotypes through these flawed neural network models,” noted Andrew Hundt, a member of the team who studied the robot.

“We’re at risk of creating a generation of racist and sexist robots but people and organizations have decided it’s OK to create these products without addressing the issues,” he continued.

However, some, like graduate student Vicky Zeng, weren’t surprised by the results as it all likely circles back to representation.

“In a home maybe the robot is picking up the white doll when a kid asks for the beautiful doll,” she said.

“Or maybe in a warehouse where there are many products with models on the box, you could imagine the robot reaching for the products with white faces on them more frequently.”

It certainly raises questions about what AI can and cannot be taught, and how machine intelligence may end up sharply at odds with some societal values.

Not to mention, AI has tried to create weapons that could destroy society altogether.

3. AI created thousands of possible chemical weapons

Artificial intelligence found 40,000 possible chemical weapons to destroy humans. Credit: Getty – Contributor

According to a paper published in the journal Nature Machine Intelligence, scientists recently made a harrowing discovery about the AI they usually rely on to find beneficial new drugs for human ailments.

To learn more about the possibilities of their AI, the scientists decided to run a simulation where the AI would turn ‘evil’ and use its abilities to create chemical weapons of mass destruction.

It was chillingly able to come up with 40,000 possibilities in only six hours.

Not only that, but the AI came up with options predicted to be worse than VX, which experts deem one of the most dangerous nerve agents on Earth.

Fabio Urbina, the paper’s lead author, told The Verge that the worry is less about how many options the AI came up with and more about the fact that it computed them largely from publicly accessible information.

Urbina fears what this could mean if the AI was in the hands of people with darker intentions for the world.

He explained that the dataset used on the AI could be downloaded for free, and worried that all it takes is some coding knowledge to turn a good AI into a chemical weapon-making machine.

However, Urbina said that he and the other scientists are working to ‘get ahead’ of it all.

“At the end of the day, we decided that we kind of want to get ahead of this. Because if it’s possible for us to do it, it’s likely that some adversarial agent somewhere is maybe already thinking about it or in the future is going to think about it.”

For related content, The U.S. Sun has coverage on Disney’s age-altering AI that makes actors look younger.

The U.S. Sun also has the story of Meta’s AI bot that has seemingly gone rogue.

This post first appeared on Thesun.co.uk
