
The Google Threat Intelligence Group (GTIG) has published a report highlighting the increasing use of AI in cybercrime, including the development of a zero-day exploit that bypasses 2FA in a popular open-source web-based system administration tool.
The exploit is a Python script that abuses a logic flaw, and its code bears the hallmarks of AI-assisted development.
According to the GTIG, AI is particularly useful for finding unconsidered corner cases in authorization flows, since it can read source code and compare the developer's intention with what is actually implemented.
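Such gaps between intention and implementation can look innocuous in review. As a hypothetical illustration (not the actual vulnerability from the report), consider a login flow where the developer intended every login to require a one-time code, but the check only runs when the client bothers to send one:

```python
# Hypothetical 2FA logic flaw (invented for illustration; not the real exploit).
# The 2FA branch is conditional on attacker-controlled input, so omitting
# the "totp" field skips the second factor entirely.

def login_flawed(request: dict, users: dict) -> str:
    user = users.get(request.get("username"))
    if user is None or user["password"] != request.get("password"):
        return "denied"
    # BUG: only validates the code if the client chose to send one.
    if "totp" in request:
        if request["totp"] != user["totp_code"]:
            return "denied"
    return "ok"

def login_fixed(request: dict, users: dict) -> str:
    user = users.get(request.get("username"))
    if user is None or user["password"] != request.get("password"):
        return "denied"
    # FIX: the second factor is always required.
    if request.get("totp") != user["totp_code"]:
        return "denied"
    return "ok"
```

A request supplying only a valid username and password sails through `login_flawed` but is rejected by `login_fixed`; this is exactly the kind of intention-versus-implementation mismatch a model scanning source code can surface.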
The report also notes that malicious actors are using AI-powered bots to augment their capabilities, enabling malware to alter its own source code in real time and evade detection.
These bots can also improve obfuscation by adding filler code or multiple layers of indirection, making it harder for security software to detect or contain the malware.
Examples of such malware include CANFAIL and LONGSTREAM, which can modify their own source code and create exploit payloads dynamically.
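The named samples are not public, but the filler-code trick itself is simple to demonstrate. In this minimal sketch (invented for illustration), semantically inert lines are inserted between statements: behavior is unchanged, yet the file hash differs with every generation, defeating signature matching based on file hashes:

```python
import hashlib
import random

def add_filler(source: str, seed: int) -> str:
    """Insert no-op filler lines between statements. The program's behavior
    is unchanged, but each seed yields a different file hash."""
    rng = random.Random(seed)
    out = []
    for line in source.splitlines():
        out.append(line)
        # Semantically inert filler: assignments to unused variables.
        out.append(f"_junk_{rng.randrange(10**6)} = {rng.randrange(10**6)}")
    return "\n".join(out)

payload = "x = 2 + 2\nprint(x)"
a = add_filler(payload, seed=1)
b = add_filler(payload, seed=2)
# Same behavior, different signatures:
print(hashlib.sha256(a.encode()).hexdigest() != hashlib.sha256(b.encode()).hexdigest())
```

Real samples reportedly go further, adding layers of indirection and decoy logic, but the defensive lesson is the same: hash- and pattern-based signatures are brittle against code that is regenerated on every deployment.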
AI-powered Malware and Backdoors
The PROMPTSPY Android backdoor is another example, which leverages Google Gemini to manipulate the user’s phone, including taking screenshots and simulating interactions on their behalf.
This malware can even capture PIN/pattern authentication or intercept Uninstall button clicks.
The GTIG also found instances of malware that can generate decoy code, making it even harder to detect.
Phishing and Network Attacks
These real-time morphing abilities also extend to phishing and network attacks, where attackers use bots to generate custom phishing emails laden with real information harvested from news articles, LinkedIn pages, and press releases.
The more data targets provide in their replies, the more convincing the bot's follow-up messages become, making it easier for attackers to reel in their victims.
The GTIG notes that information collected about financial, internal security, and human resources departments generally makes for the best phishing bait.
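The underlying mechanics are mundane: scraped public fields get slotted into a lure template. A hypothetical sketch (all names, fields, and wording invented) shows why even trivial personalization makes a lure read like legitimate internal mail:

```python
# Hypothetical illustration of template personalization from scraped
# public data. All fields and wording are invented.
scraped = {
    "name": "Jane Doe",
    "company": "ExampleCorp",
    "department": "human resources",
    "recent_event": "the Q3 benefits enrollment announcement",
}

template = (
    "Hi {name}, following up on {recent_event} at {company}: "
    "the {department} team needs you to re-confirm your payroll details."
)

lure = template.format(**scraped)
```

An AI-driven version simply regenerates the template per target and per reply, which is why awareness training built around spotting generic wording and typos no longer suffices.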
AI-powered Disinformation
The GTIG also noticed large-scale operations using AI for political purposes, including generating fake images and videos, as well as believable voiceovers or replacing words and facial expressions in real video.
Interspersing real footage with fake content has become a common theme, making it easier to push forward a particular message.
The GTIG report provides a detailed analysis of these topics and more, offering insights into the evolving landscape of cybercrime.
As the use of AI in cybercrime continues to grow, it’s essential to stay informed about the latest threats and developments in the field.
Key Takeaways
- Malware can modify its own source code and create exploit payloads dynamically.
- AI-powered bots can augment attackers’ capabilities and evade detection.
- PROMPTSPY Android backdoor leverages Google Gemini to manipulate users’ phones.