Cybersecurity in the Age of AI: Navigating Risks, Realities, and Building Resilience
As we step into a new year and a new era increasingly defined by artificial intelligence, the double-edged sword that is AI sharpens by the day, growing more effective for those wielding it, and more dangerous to potential cyber victims. On one side, AI is revolutionizing industries, unlocking opportunities for innovation and efficiency. On the other, it is empowering cyber attackers with powerful tools that significantly enhance the sophistication and scale of their operations. As AI-enabled cyber threats evolve, understanding their nature and preparing for their impact is crucial for businesses navigating this new landscape.
Today’s two most pressing concerns are AI-enabled threat actors and third-party AI risks, both of which demand a reevaluation of established cybersecurity strategies. According to the 2024 ConnectWise MSP Threat Report, AI technologies are more responsible than any other factor for the increasing sophistication and frequency of cyberattacks. Specifically, threat actors are leveraging AI to automate and scale their attacks, quickly and efficiently hitting a wide range of targets in a spray-and-pray approach. These AI-driven attacks can be more adaptive, persistent, and effective, posing a significant challenge to existing cybersecurity defenses.
The use of AI in cyberattacks also significantly lowers the barrier to entry for aspiring cybercriminals. AI tools and technologies enable individuals with limited technical expertise to launch complex cyberattacks. This democratization of attack capabilities has led to a more diversified and unpredictable threat landscape than ever before. Automated processes powered by AI can continually probe and evaluate a company's tech stack, identifying vulnerabilities and exploiting them with minimal human intervention. The relentless nature of these AI-driven attacks demands constant vigilance from organizations striving to protect sensitive data and systems.
And on an even simpler level, generative AI expands the attack surface for foreign adversaries, enabling them to more effectively target victims across language barriers. Historically, many phishing attempts have been easily dismissed because hackers struggled to accurately impersonate senders. With generative AI tools, however, hackers can now much more convincingly impersonate known identities and micro-target victims with more realistic messages. According to recent research from Harvard University’s Belfer Center, new AI-generated phishing emails have a 60% success rate, comparable to those crafted by human experts. Even worse, the same researchers have concluded that large language models (LLMs) are enabling hackers to automate the entire phishing process, reducing the costs of such attacks by over 95% while maintaining or improving their effectiveness. As a result, phishing is becoming easier for cybercriminals to execute, harder for victims to detect, and significantly more dangerous in its consequences.
The proliferation of third-party GenAI solutions further increases the already acute risk of data leakage or exfiltration. Organizations must ensure that their AI vendors adhere to stringent data security standards to prevent unintended data exposure. We agree with CrowdStrike CTO Elia Zaitsev, who recently stated that “To secure AI innovation in the cloud, security teams will need specialized technology and services that monitor AI services and LLMs, detect misconfigurations, and identify and address vulnerabilities, unified with protection across the entire cloud estate, from infrastructure and applications to data.” Since GenAI systems often require access to sensitive data for training and operations, vendors must invest in robust protections for this data. Unfortunately, we predict that not all AI vendors will meet these standards, and 2025 is likely to see multiple high-profile hacks involving the theft or misuse of GenAI training data.