Wars, as we have always known them, are fought on battlefields. But another war, one that affects everyone, is being waged silently between servers and algorithms.
Artificial intelligence has turned the web into a global conflict zone, where automated attacks strike thousands of targets per second and traditional defenses are constantly being outpaced. According to ENISA’s Threat Landscape 2025, AI-driven threats such as phishing and malware are among the most prominent attack vectors in Europe’s cyber ecosystem.
Today, every connected device is a potential battlefield, and every piece of data can be a weapon or a spoil of war. What makes this even more explosive is artificial intelligence itself: while it is revolutionizing how we work and live, it is also opening unsettling scenarios in the realm of cybercrime.
The AI Arms Race in Cybercrime
Data from 2025 show that cyberattacks powered by AI have surged globally, with a reported 47% increase year-over-year. Projections suggest that by the end of 2025, such incidents could exceed 28 million worldwide, radically reshaping the digital threat landscape.
Italy in the Crosshairs: Alarming Figures and Economic Impact
Italy is not immune to this wave. In the first quarter of 2025, nearly 40% of the roughly 900 serious cyber incidents involved generative AI tools directly.
While comprehensive national economic loss figures for 2025 are still emerging, sectoral analyses emphasize the urgency of strengthening defenses, particularly given AI’s growing role in facilitating phishing, impersonation, and automated attacks. ENISA identifies phishing—often the initial vector in 60% of attacks—as a persistent threat in the European Union.
Cybercriminals Are Made, Not Born
Cybercrime has become a convenient and highly profitable “profession.” Between 2023 and 2025, the availability and use of tools for generating deepfakes and other synthetic content have skyrocketed, lowering barriers for attackers and reshaping threat dynamics.
AI has democratized sophisticated attack capabilities: the costs to produce convincing deepfake content are now low, while the economic damage to targeted organizations is high. Reports estimate deepfake-related fraud and social engineering losses can reach hundreds of thousands of dollars per incident.
The Deepfake Weapon: When Reality Becomes the Trap
Deepfakes are now among the most insidious threats. While some statistics vary by source, analyses show that deepfake incidents have grown exponentially in recent years.
In one high-profile case outside Italy, an employee transferred millions after interacting with AI-generated “avatars” posing as executives—a type of scenario that cybersecurity experts now regard as an operational risk, not just a hypothetical. ENISA’s 2025 report highlights how generative AI tools have been exploited across phishing, malware development, and social engineering operations.
The New Generation of Attacks
AI has made every type of attack more effective and personalized. Phishing—already one of the most common cybercrime vectors—is now almost indistinguishable from legitimate communication, as threat actors use advanced language models to craft highly contextualized and convincing messages.
Voice cloning attacks—another form of AI-enabled social engineering—are also increasing rapidly, making it easier for attackers to impersonate trusted individuals or executives. Independent cybersecurity studies report an upswing in AI-generated phishing and deepfake attacks in 2025.
The Race for Digital Defense
In response to this escalation, integrating AI into cybersecurity strategies is no longer optional. In fact, the vast majority of organizations now view AI-based defenses as fundamental to addressing modern threats. Global cybersecurity outlooks indicate that a substantial share of enterprises are adopting AI tools for intrusion detection, phishing detection, and automated security operations.
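To make the idea of automated phishing detection concrete, here is a deliberately minimal sketch of a rule-based email scorer. This is illustrative only: the keyword lists, weights, and threshold are invented for demonstration, and real AI-based defenses rely on trained models over far richer signals (sender reputation, link analysis, behavioral context), not hand-tuned rules like these.

```python
import re

# Toy red-flag phrases; real systems learn such features from data.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "password expired",
    "click the link below",
]
URL_PATTERN = re.compile(r"https?://\S+")

def phishing_score(message: str) -> float:
    """Return a score in [0, 1]; higher means more phishing-like."""
    text = message.lower()
    # Count urgency/credential-harvesting phrases.
    hits = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # Links pointing at raw IP addresses are a classic phishing tell.
    urls = URL_PATTERN.findall(text)
    ip_links = sum(
        bool(re.match(r"https?://\d+\.\d+\.\d+\.\d+", u)) for u in urls
    )
    return min(0.2 * hits + 0.3 * ip_links, 1.0)

suspect = "URGENT action required: verify your account at http://192.168.1.10/login"
benign = "Hi team, meeting notes attached. See you Thursday."
print(phishing_score(suspect), phishing_score(benign))
```

Even this toy version shows why generative AI shifts the balance: a language model can rewrite a phishing message to avoid every fixed phrase on such a list, which is precisely why defenders are moving to adaptive, model-based detection.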
However, AI-based defensive tools are not a panacea. Even with such defenses in place, many organizations continue to experience breaches, underscoring that attackers evolve just as quickly as defenders. Shadow AI, where employees deploy unsanctioned AI tools within corporate environments outside IT oversight, further expands the attack surface.
The Human Factor Remains Central
Despite growing automation and shifting threat landscapes, the human element remains a decisive factor in cybersecurity. People can make the difference in defense strategies, yet too little investment continues to be made in workforce training and awareness.
ENISA’s Threat Landscape 2025 underscores that phishing remains a dominant way attackers gain initial access, reflecting deep-rooted human vulnerabilities to deception.
This is why investing in continuous training and awareness programs for personnel is critical. Organizations must also invest in advanced defensive technologies and adopt multidisciplinary strategies that recognize AI as a powerful but dual-use tool—one that can both empower defenses and enable attacks.
Conclusion
The future of cybersecurity will not be determined by technology alone but by our ability to combine artificial intelligence, human expertise, and proactive strategies within an integrated defense ecosystem. The stakes are too high to delay, hesitate, or underestimate the evolving threat landscape.