This week, WIRED reported that a group of prolific scammers known as the Yahoo Boys are openly operating on major platforms like Facebook, WhatsApp, TikTok, and Telegram. Evading content moderation systems, the group organizes and engages in criminal activities that range from scams to sextortion schemes.

On Wednesday, researchers published a paper detailing a new AI-based methodology to detect the “shape” of suspected money laundering activity on a blockchain. The researchers—a team from the cryptocurrency tracing firm Elliptic, MIT, and IBM—collected patterns of bitcoin transactions flowing from known scammers to an exchange where dirty crypto could be converted into cash. They used this data to train an AI model to detect similar patterns.
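
The paper's exact model isn't described here, but the general workflow—derive features from transaction patterns with known laundering outcomes, then train a classifier to flag lookalikes—can be sketched. Everything below (the feature names, the data, and the choice of a random forest) is a hypothetical illustration, not the researchers' actual implementation.

```python
# Minimal sketch: classify transaction-pattern "shapes" as laundering-like or not.
# Features, labels, and data are synthetic stand-ins, not the paper's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def subgraph_features(n, shift=0.0):
    # Hypothetical per-pattern features: hop count, fan-out, peeling-chain
    # ratio, total value moved, and elapsed time between first and last hop.
    return rng.random((n, 5)) + shift

# Label 1 = pattern ending at an exchange that matches a known laundering flow;
# label 0 = ordinary activity. Both classes are simulated here.
X = np.vstack([subgraph_features(500, shift=0.3), subgraph_features(500)])
y = np.array([1] * 500 + [0] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```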

Governments and industry experts are sounding the alarm about the potential for major airline disasters due to increasing attacks against GPS systems in the Baltic region since the start of the war in Ukraine. The attacks can jam or spoof GPS signals and can result in serious navigation issues. Officials in Estonia, Latvia, and Lithuania blame Russia for the GPS issues in the Baltics. Meanwhile, WIRED went inside Ukraine’s scrappy and burgeoning drone industry, where about 200 companies are racing to build deadlier and more efficient autonomous weapons.

An Australian firm that provided facial recognition kiosks for bars and clubs appears to have exposed more than 1 million records of patrons’ data. The episode highlights the dangers of giving companies your biometric data. In the United States, the Biden administration is asking tech companies to sign a voluntary pledge to make “good-faith” efforts to implement critical cybersecurity improvements. This week we also reported that the administration is updating its plan for protecting the country’s critical infrastructure from hackers, terrorists, and natural disasters.

And there’s more. Each week, we highlight the news we didn’t cover in depth ourselves. Click on the headlines below to read the full stories. And stay safe out there.

A government procurement document unearthed by The Intercept reveals that two major Israeli weapons manufacturers are required to use Google and Amazon if they need any cloud-based services. The reporting calls into question repeated claims from Google that the technology it sells to Israel is not used for military purposes—including the ongoing bombardment of Gaza that has killed more than 34,000 Palestinians. The document contains a list of Israeli companies and government offices “required to purchase” any cloud services from Amazon and Google. The list includes Israel Aerospace Industries and Rafael Advanced Defense Systems, the latter being the manufacturer of the infamous “Spike” missile, reportedly used in the April drone strike that killed seven World Central Kitchen aid workers.

In 2021, Amazon and Google entered into a contract with the Israeli government in a joint venture known as Project Nimbus. Under the arrangement, the tech giants provide the Israeli government, including the Israel Defense Forces, with cloud services. In April, Google employees protested Project Nimbus by staging sit-ins at offices in Silicon Valley, New York City, and Seattle. The company fired nearly 30 employees in response.

A mass surveillance tool that eavesdrops on wireless signals emitted from smartwatches, earbuds, and cars is currently being deployed at the border to track people’s location in real time, a report from Notus revealed on Monday. According to its manufacturer, the tool, TraffiCatch, associates wireless signals broadcast by commonly used devices with vehicles identified by license plate readers in the area. A captain from the sheriff’s office in Webb County, Texas—whose jurisdiction includes the border city of Laredo—told the publication that the agency uses TraffiCatch to detect devices in areas where they shouldn’t be, for instance, to find trespassers.
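
The Notus report doesn't detail TraffiCatch's internals, but the core idea it describes—correlating ambient wireless detections with license-plate reads captured at the same place and time—can be illustrated with a short sketch. The data structures, field names, and 30-second matching window below are assumptions for illustration, not the product's actual design.

```python
# Illustrative only: associate wireless-device detections with plate reads
# captured at the same roadside site within a short time window.
from datetime import datetime, timedelta

wireless_hits = [
    {"device_id": "aa:bb:cc:dd:ee:01", "seen_at": datetime(2024, 5, 6, 9, 0, 12)},
    {"device_id": "aa:bb:cc:dd:ee:02", "seen_at": datetime(2024, 5, 6, 9, 3, 40)},
]
plate_reads = [
    {"plate": "TX-1234", "seen_at": datetime(2024, 5, 6, 9, 0, 10)},
    {"plate": "TX-9876", "seen_at": datetime(2024, 5, 6, 9, 15, 0)},
]

WINDOW = timedelta(seconds=30)  # assumed: co-occurrence within 30 s implies the same vehicle

def associate(hits, reads, window=WINDOW):
    # Pair each device detection with any plate read seen within the window.
    pairs = []
    for hit in hits:
        for read in reads:
            if abs(hit["seen_at"] - read["seen_at"]) <= window:
                pairs.append((hit["device_id"], read["plate"]))
    return pairs

print(associate(wireless_hits, plate_reads))
# -> [('aa:bb:cc:dd:ee:01', 'TX-1234')]
```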

Several states require law enforcement agencies to obtain warrants before deploying devices that mimic cell towers to obtain data from the phones tricked into connecting to them. But in the case of TraffiCatch, a technology that passively siphons ambient wireless signals out of the air, the courts haven’t yet weighed in. The report highlights how signals intelligence technology, once exclusive to the military, is now available for purchase by both local governments and the general public.

The Washington Post reports that an officer in India’s intelligence service, the Research and Analysis Wing, was allegedly involved in a botched plan to assassinate one of Indian prime minister Narendra Modi’s top critics in the United States. The White House said Monday that it was taking the matter “very, very seriously,” while India’s foreign ministry blasted the Post report as “unwarranted” and “not helpful.” The alleged plot to murder the Sikh separatist, Gurpatwant Singh Pannun, a dual citizen of the United States and Canada, was first disclosed by US authorities in November.

Canadian authorities previously announced having obtained “credible” intel allegedly linking the Indian government to the death of another separatist leader, Hardeep Singh Nijjar, who was shot to death outside a Sikh temple in a Vancouver suburb last summer.

US lawmakers have introduced a bill aimed at establishing a new wing of the National Security Agency dedicated to investigating threats aimed at AI systems—or “counter-AI.” The bipartisan bill, introduced by Mark Warner and Thom Tillis, a Senate Democrat and Republican, respectively, would further require agencies including the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA) to track breaches of AI systems, whether successful or not. (NIST currently maintains the National Vulnerability Database, a repository for vulnerability data, while CISA oversees the Common Vulnerabilities and Exposures Program, which similarly identifies and catalogs publicly disclosed vulnerabilities.)

The Senate bill, known as the Secure Artificial Intelligence Act, aims to expand the government’s threat monitoring to include “adversarial machine learning”—a term essentially synonymous with “counter-AI”—the practice of subverting AI systems or “poisoning” their training data, using techniques very different from traditional modes of cyberattack.
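
To make “data poisoning” concrete: one simple form is label flipping, where an attacker corrupts a fraction of a model's training labels so the trained model performs worse. The toy example below is purely illustrative, uses synthetic data, and isn't drawn from the bill or any specific incident.

```python
# Toy demonstration of training-data poisoning via label flipping.
# Entirely synthetic; compares a model trained on clean vs. poisoned labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips the labels on 30% of the training set before training.
rng = np.random.default_rng(0)
flip_idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```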
