https://www.nytimes.com/2025/07/21/briefing/ai-vs-ai.html
Artificial intelligence has fundamentally transformed the cybersecurity landscape: cybercriminals are leveraging AI to dramatically scale their operations, while security companies deploy competing AI systems for defense in an escalating technological arms race. Since ChatGPT’s launch in November 2022, phishing attacks have increased more than fortyfold and deepfakes have surged twentyfold, as AI lets criminals craft grammatically perfect scams that slip past traditional spam filters and build convincing fake personas for fraud schemes. State-sponsored hackers from Iran, China, Russia, and North Korea are using commercial chatbots like Gemini and ChatGPT to scope out victims, create malware, and execute sophisticated attacks; cybersecurity consultant Shane Sims estimates that “90 percent of the full life cycle of a hack is done with AI now.”
The democratization of AI tools has lowered barriers for cybercriminals: anyone can now generate bespoke malicious content without technical expertise, and unscrupulous developers have built specialized AI models for cybercrime that lack the guardrails of mainstream systems. Even the protective measures on commercial chatbots are easily circumvented; as cybersecurity analyst Dennis Xu puts it, “if a hacker can’t get a chatbot to answer their malicious questions, then they’re not a very good hacker.” According to Sandra Joyce, who leads the Google Threat Intelligence Group, attacks aren’t necessarily becoming more sophisticated. AI’s primary advantage lies in scale, turning cybercrime into a numbers game in which sheer volume increases the likelihood of successful breaches.
Cybersecurity companies are rapidly deploying AI-powered defense systems to counter these threats, with algorithms now analyzing millions of network events per second to detect bogus users and security breaches that would take human analysts weeks to identify. Google recently announced that one of its AI bots discovered a critical software vulnerability affecting billions of computers before cybercriminals could exploit it, a potential milestone in automated threat detection. But the shift toward AI-driven defense creates new risks of its own. Wiz co-founder Ami Luttwak warns that human defenders will be “outnumbered 1,000 to 1” by AI attackers, and even a well-meaning AI system could cause massive disruption by incorrectly blocking an entire country while trying to stop a specific threat. The stakes of this technological arms race are high: cybercrime is projected to cost over $23 trillion annually by 2027.