In September 2025, the European Union Agency for Cybersecurity (ENISA) released a threat-landscape update warning that attackers are now routinely using large language models (LLMs) to design phishing lures, generate convincing business emails, and even craft malicious prompts aimed at other AI systems (ENISA, 2025). At the same time, Microsoft Security’s mid-year threat-intelligence report described a sharp rise in “AI-assisted compromise,” including cases where attackers used generative models to create highly personalized spear-phishing messages and to synthesize executives’ voices for real-time impersonation (Microsoft Security, 2025).

This development is more than a new tactic in an old playbook. It marks a fundamental shift in the terrain of cybersecurity. For decades, most attacks targeted human users or exploited vulnerabilities in traditional software. Now adversaries are not only using AI to sharpen their campaigns against people but also seeking to manipulate the AI-based systems that organizations themselves deploy. ENISA documented early instances of “prompt-injection” attacks—malicious instructions fed into AI-powered invoice-processing or customer-service systems that caused them to reveal sensitive data or initiate unauthorized actions. Such incidents highlight that models themselves can become attack surfaces.

The dual use of AI in defense and offense means the same properties that make the technology powerful for security—speed, adaptability, and pattern recognition—also make it attractive to adversaries. Deepfake audio is bypassing voice-based authentication. LLM-generated phishing messages evade spam filters because they resemble legitimate internal communications. Automated vulnerability scanners powered by AI identify weak points in cloud configurations at a scale and pace defenders struggle to match. The speed of these developments challenges long-standing assumptions about detection and response.

For enterprises, this evolving threat landscape creates two imperatives.
First, they must continue adopting AI-based defensive tools—anomaly detection, automated triage, adversarial simulation—but deploy them within a documented governance framework. If a tool’s performance drifts, if it is trained on poisoned data, or if its interface is exploitable, it can become a liability rather than a safeguard.
Second, they must treat AI systems as critical infrastructure components requiring the same degree of provenance checking, patch management, and incident logging as other IT assets. In effect, the organization’s attack surface now includes the very models it relies on for protection.

For citizens and society at large, the stakes are rising. Increasingly, core infrastructure—such as banking verification, healthcare scheduling, and transportation logistics—relies on AI-driven components. A failure of these systems due to malicious exploitation or unmanaged drift could cascade into widespread disruption. Public trust will depend not only on the promise of stronger AI-enabled defenses but also on visible evidence that organizations monitor, test, and secure the AI components themselves.

This intersection of AI and cybersecurity demands a mature form of governance. Regulators are beginning to respond: the U.S. Cybersecurity and Infrastructure Security Agency (CISA), ENISA, and Japan’s National Center of Incident Readiness and Strategy for Cybersecurity (NISC) have all signaled that AI model supply-chain integrity should be treated as part of national critical infrastructure (CISA, 2025; ENISA, 2025). That recognition is an important step, but organizations cannot wait for regulation alone. The ability to demonstrate model provenance, document testing and validation, and provide incident evidence on demand is fast becoming the benchmark of credible cyber-resilience.

The paradox of our era is that AI is both shield and target. Meeting that challenge requires viewing AI not just as a tool but as an integral part of the digital environment whose safety, reliability, and accountability are central to the security of the very systems it protects.


References

ENISA. (2025). Threat landscape report: AI in cyber-offense and defense. European Union Agency for Cybersecurity. https://www.enisa.europa.eu/publications

Microsoft Security. (2025). Defending against AI-powered cyber threats: Mid-year threat-intelligence report. Retrieved September 2025 from https://www.microsoft.com/security/blog

U.S. Cybersecurity and Infrastructure Security Agency (CISA). (2025). Advisory on AI supply-chain risks for critical infrastructure. Retrieved September 2025 from https://www.cisa.gov


linkedin facebook pinterest youtube rss twitter instagram facebook-blank rss-blank linkedin-blank pinterest youtube twitter instagram