https://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools

Google Threat Intelligence Group (GTIG) has uncovered alarming trends in the adversarial misuse of artificial intelligence (AI) tools, marking a significant shift in how threat actors leverage these technologies. The report details how government-backed groups and cybercriminals are integrating AI capabilities across the entire attack lifecycle, from reconnaissance to data exfiltration.

One of the key findings is the emergence of “just-in-time” AI-enabled malware, such as PROMPTFLUX and PROMPTSTEAL, which query large language models during execution to dynamically generate malicious scripts, obfuscate their own code, and create malicious functions on demand. This represents a concerning evolution: as these AI-powered tools become more autonomous and adaptive, they become harder to detect and mitigate.

The report also reveals that threat actors are using social-engineering-style pretexts to bypass AI safety mechanisms, posing as benign users to trick the systems into providing sensitive information or assisting with tool development. Moreover, the underground marketplace for illicit AI tools has matured, lowering the barrier to entry for less sophisticated actors. Worryingly, state-sponsored groups from North Korea, Iran, and China continue to misuse AI capabilities to enhance their operations at every stage, from lure creation to command-and-control.

These findings underscore the urgent need for vigilance and robust security measures to counter the growing threat of adversarial AI. As the misuse of these technologies evolves, defenders must stay ahead of the curve, proactively disrupting malicious activity and strengthening the safeguards built into AI systems to protect against increasingly sophisticated attacks.