THE bots aren’t taking over just yet – but that hasn’t stopped experts from establishing strict guidelines to safeguard our future.
The emergence and success of ChatGPT since its mainstream introduction in November 2022 have surprised many in the AI field, accelerating the development of the technology immeasurably.
AI expert Rishabh Misra, who has worked on machine learning for Twitter for the past four years, insists he’s “never seen any kind of tech move so fast” and believes once they begin to surpass human-level intelligence, super-powered robots could begin to wreak havoc in society “within the decade.”
The threat is so real, talks at a federal level are taking place to ensure regulations and protocols can keep the tech at bay.
“I don’t want to scare anyone,” Misra told The U.S. Sun.
“But we need to kind of look ahead and develop those technologies that can kind of regulate these super-intelligent AI systems.”
For now, the advent of AI technology has helped create bots that are proficient at holding intelligent conversations and generating realistic content.
The critical point right now, according to Misra, is they currently don’t have any self-awareness.
That means they aren’t yet intelligent enough to act independently, though they can function effectively within programs like ChatGPT.
“In the future, as more such capabilities are added, some misconfiguration, irresponsible usage by giving wrong instructions, or involvement of malicious actors could have disastrous consequences, akin to the scenarios where it may seem bots have gone rogue,” said Misra.
“If these bots get hacked or used for harmful purposes, they can spread misinformation or hate speech, launch spam campaigns, manipulate financial markets to crash the economy, or even carry out physical attacks by controlling vehicles or operating weapons. They may create deepfakes that show scenarios that never happened to damage someone’s reputation or cause wars.”
With AI bots potentially able to carry out instructions and demands faster than humans in the future, the scope for disrupting economies or inciting hate as part of a political ploy, for example, is huge.
“The frequent fear that comes up is that bots may become self-aware and decide serving humans is not worthwhile,” adds Misra.
“Maybe they will take harmful actions towards humans in an attempt to reach an ultimate goal, ironically supplied by humans themselves.
“Based solely on the current trends of technology advancements, I think the chances of realization of the latter fear might be much more as compared to the former in the future.”
Another fear, which ChatGPT has already begun to alert people to, is that advancements in AI tech could replace humans in some sectors of the workplace.
While roles that rely significantly on human creativity shouldn’t be affected, jobs involving data analysis or collection, for example, could be at risk.
“The threat of the bots becoming so good at their jobs that they put people out of work is real,” said Misra.
“This could lead to widespread unemployment and social unrest.”
Misra stresses there are no “guarantees this will happen,” yet he says vital research on AI safety and investment in education must begin now to head off the worst outcomes.
“Humans have a role in making this kind of technology better,” he added.