If you’ve been creeping around underground tech forums lately, you might have seen advertisements for a new program called WormGPT.

The program is an AI-powered tool built for cybercriminals, one that automates the creation of personalized phishing emails. Though its name sounds a bit like ChatGPT, WormGPT is not your friendly neighborhood AI.

ChatGPT launched in November 2022 and, since then, generative AI has taken the world by storm. But few consider how its sudden rise will shape the future of cybersecurity.

In 2024, generative AI is poised to facilitate new kinds of transnational—and translingual—cybercrime. Much cybercrime, for instance, is masterminded by underemployed men in countries with underdeveloped tech economies. Because English is not the primary language in those countries, hackers have long struggled to defraud targets in English-speaking economies: most native English speakers can quickly spot a phishing email by its unidiomatic, ungrammatical language.

But generative AI will change that. Cybercriminals from around the world can now use chatbots like WormGPT to pen well-written, personalized phishing emails. By learning from phishermen across the web, chatbots can craft data-driven scams that are especially convincing and effective.

In 2024, generative AI will make biometric hacking easier, too. Until now, biometric authentication methods—fingerprints, facial recognition, voice recognition—have been difficult (and costly) to impersonate; it’s not easy to fake a fingerprint, a face, or a voice.

AI, however, has made deepfaking much less costly. Can’t impersonate your target’s voice? Tell a chatbot to do it for you.

And what will happen when hackers begin targeting chatbots themselves? Generative AI is just that: generative. It creates things that weren't there before, and that same openness gives hackers a chance to inject malware into the objects chatbots generate. In 2024, anyone using AI to write code will need to verify that the output hasn't been created or modified by a hacker.
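If you're going to treat generated code as untrusted input, one modest starting point is auditing its dependencies before anything runs. The sketch below, in Python, assumes a hypothetical project allowlist; the helper name, the allowlist contents, and the typosquatted "requets" module are all illustrative, not a real toolchain.

```python
# A minimal sketch: scan AI-generated Python source for imports that
# fall outside an assumed project allowlist, before anyone runs it.
import ast

TRUSTED_MODULES = {"json", "math", "pathlib"}  # assumed project policy

def untrusted_imports(source: str) -> set[str]:
    """Return imported top-level modules that aren't on the allowlist."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - TRUSTED_MODULES

generated = "import json\nimport requets\n"  # note the typosquatted name
print(untrusted_imports(generated))  # prints {'requets'}
```

A static check like this won't catch every poisoned suggestion, but it captures the right posture: don't run what a chatbot wrote until you've looked at what it actually asks your machine to do.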

Other bad actors will also begin taking control of chatbots in 2024. A central feature of the new wave of generative AI is its "unexplainability." Algorithms trained via machine learning can return surprising and unpredictable answers to our questions. Even the people who designed an algorithm often can't explain how it arrives at a particular answer.

It seems natural, then, that future chatbots will act as oracles, attempting to answer difficult ethical and religious questions. On Jesus-ai.com, for instance, you can pose questions to an artificially intelligent Jesus. Unfortunately, it's not difficult to imagine programs like this being created in bad faith. An app called Krishna, for example, has already advised killing unbelievers and supporting India's ruling party. What's to stop con artists from demanding tithes or promoting criminal acts? Or, as one chatbot has done, telling users to leave their spouses?

All security tools are dual-use—they can be used to attack or to defend—so in 2024, we should expect AI to be used for both offense and defense. Hackers can use AI to fool facial recognition systems, but developers can use AI to make their systems more secure. Indeed, machine learning has been used for over a decade to protect digital systems. Before we get too worried about new AI attacks, we should remember that there will also be new AI defenses to match.
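On the defensive side, the decade of machine learning mentioned above often takes the form of anomaly detection: model what normal activity looks like, then flag what doesn't fit. Here is a minimal sketch using scikit-learn's IsolationForest; the login features and the sample numbers are invented for illustration, not drawn from any real deployment.

```python
# A minimal sketch of ML-based defense: flag anomalous logins with an
# isolation forest. The features (hour of day, bytes transferred) and
# all sample data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated "normal" logins: daytime hours, modest transfer sizes.
normal = np.column_stack([rng.normal(13, 2, 500), rng.normal(5e6, 1e6, 500)])
# A few suspicious events: 3 a.m. logins moving far more data.
suspicious = np.array([[3.0, 9e7], [2.5, 1.2e8]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 marks an outlier, 1 marks normal
```

The dual-use logic applies here, too: an attacker who can probe such a detector can learn what "normal" looks like and hide inside it, which is why defenders pair models like this with other controls.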
