
SANS AI Survey: Key Takeaways and Future Strategies for Cybersecurity Teams

Earlier today, SANS launched its 2024 study AI and Its Growing Role in Cybersecurity: Lessons Learned and Path Forward, sponsored by Cobalt. The study describes the labyrinth of challenges and opportunities that security teams face as they look to adopt AI and drive efficiencies in their security operations.

My key takeaways are:

Widespread AI adoption—Approximately 43% of organizations are currently using AI as part of their cybersecurity strategy, while another 38% plan to adopt it.

In comparison, Cobalt’s 6th Annual State of Pentesting Report found that of the 900 security practitioners surveyed across the US and UK, 75% say their team has adopted new AI tools. I think the discrepancy here comes down to personal adoption by security professionals versus adoption of tools within an organization’s security tech stack. This is an important distinction: when you use an LLM to help you draft something, you should always review the output before you use it. In that case, concerns about inaccuracy or hallucinations are mitigated by human control of the system. But once a team begins to rely on AI to automate a portion of its workload, that oversight can disappear, a concern the authors of the SANS study dug into later, along with some interesting takes from the respondents.

High concern for AI-powered threats—Most organizations are concerned about AI’s impact on offensive cybersecurity tactics, such as automated vulnerability exploitation and more advanced phishing campaigns.

Our survey results from State of Pentesting 2024 found that 7 in 10 respondents witnessed more external threat actors using AI to create cybersecurity threats in the past 12 months (that is, during 2023). Personally, I expect attackers’ greatest use of AI to be in improving their impersonation techniques. For example, it’s much harder for an attacker to convincingly impersonate a trusted colleague without AI. If the attacker is not a native speaker of, or fluent in, the company’s official business language, they are likely to make minor grammatical mistakes that are easy to spot. They might also choose words that signal they are unfamiliar with the company’s jargon and culture.

Artificial intelligence changes the game. Because AI systems can process enormous amounts of language data, translation between languages becomes a fairly easy task. In fact, today’s AI systems are capable of much more than basic translation: they are well equipped to apply a noticeable level of nuance and customization when instructed to do so. AI can be used to an attacker’s advantage when it comes to impersonation, whether the attack is conducted by email, text message, voice, or video. Threat actors can and will use significantly improved, AI-assisted social engineering to increase the impact of their impersonation and identity fraud activities.

Impact on workforce and training—Approximately 75% of organizations are preparing their workforce for AI with ongoing training in AI fundamentals and applications.


Improving morale—Approximately 71% of organizations report higher satisfaction due to AI automating tedious tasks, allowing focus on rewarding work.


I’m particularly excited about these two developments because security professionals are naturally curious and love to learn. I’m glad to see that so many organizations are providing their employees with training on using AI and understanding how it works.

Additionally, most security jobs involve some kind of tedious, routine task that can be automated or otherwise improved using AI. We have an opportunity to improve this situation—both for current and future security teams—by leveraging technology to streamline low-value, monotonous processes. By doing so, we free up human talent to focus on the creative, high-impact work that only they can do. This not only enhances job satisfaction but also drives the industry forward by making it more attractive and sustainable for professionals.

Where this falls apart

While some areas of AI adoption are looking bright, it's not all sunshine. Integrating AI into organizational workflows requires a specialized skill set that many organizations are not yet equipped to take on. Approximately 44% of respondents identified a skills gap as a significant challenge in adopting AI technologies, and 42% said they had difficulty trusting AI decisions due to a lack of transparency. These concerns are valid and widespread across survey respondents. They align well with what we heard in our State of Pentesting report, where 57% of respondents said that demand for AI has outpaced their security team’s ability to keep up and that their team is not well equipped to properly test the security of AI tools.

This distrust of AI is why cybersecurity teams must lead with transparency, helping users understand how decisions are made and ensuring that outputs are reliable. Sixty-one percent of respondents indicated that AI decisions need more transparency, along with refinements to AI algorithms to reduce false positives and alert fatigue.
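
To make that concrete, here’s a minimal sketch of what transparency can look like in practice. All of the names and the confidence threshold below are my own hypothetical illustrations, not anything prescribed by the SANS report: the idea is simply that every AI triage verdict carries a rationale and a confidence score, and anything the model can’t confidently explain is routed to a human analyst instead of being auto-closed.

    from dataclasses import dataclass, field

    CONFIDENCE_FLOOR = 0.85  # hypothetical threshold; each team would tune this

    @dataclass
    class AITriageVerdict:
        alert_id: str
        verdict: str          # e.g. "benign" or "malicious"
        confidence: float     # model-reported confidence, 0.0 to 1.0
        rationale: str        # human-readable explanation of the decision
        model_version: str    # which model produced the verdict
        signals: list[str] = field(default_factory=list)  # features behind the score

    def route(v: AITriageVerdict) -> str:
        """Auto-close only when the model is confident and can explain itself;
        everything else goes to a human analyst."""
        if v.verdict == "benign" and v.confidence >= CONFIDENCE_FLOOR and v.rationale:
            return "auto-close"
        return "human-review"

    # A low-confidence "benign" call is escalated rather than silently dropped.
    verdict = AITriageVerdict(
        alert_id="ALRT-1042",
        verdict="benign",
        confidence=0.62,
        rationale="Sender domain matches a known newsletter; no suspicious URL.",
        model_version="triage-model-v3",
    )
    print(route(verdict))  # -> human-review

Recording the model version and rationale alongside each verdict also gives analysts something concrete to audit when they investigate false positives, which is exactly the transparency respondents asked for.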

So what?

We know what the concerns are, so how do cybersecurity teams take action? 

According to the survey, approximately 60% of organizations report changes in training needs for the security team due to AI adoption. Top areas of focus for education and training include:

  • More specialized AI and cybersecurity courses

  • Emphasis on continuous learning with a focus on AI

  • More hands-on experience with AI-based tools through simulations and practice

  • Modules on ethics, privacy, and responsibility

The opportunity of AI is that it can improve job satisfaction and safeguard against burnout for roles like security analysts, who burn out due to the rapid pace of monotonous work. As AI continues to infiltrate the cybersecurity landscape, its impact isn't just about adding new tools to the arsenal: it's about adapting traditional security controls to suit an AI-enabled world.

In an AI-enabled world, it’s more important than ever that these security controls are practiced regularly and consistently.

The top security controls that apply to AI applications (several of which are sketched in code after this list) are:

  • AI inventory management

  • Logging and monitoring

  • Manual technical security testing (such as penetration testing)

  • Human-in-the-loop (HITL) enforcement
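
To show how several of these controls fit together, here is a minimal sketch, entirely my own illustration with hypothetical names rather than code from the report, of a wrapper around an internal AI tool. It enforces inventory registration before a tool can be called, logs every request for monitoring, and requires a human to approve any high-risk action the AI proposes.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("ai-gateway")

    # AI inventory management: tools must be registered before they can be
    # called, so the team always knows which AI systems are actually deployed.
    AI_INVENTORY = {
        "ticket-summarizer": {"model": "internal-llm-v2", "owner": "secops"},
    }

    HIGH_RISK_ACTIONS = {"disable-account", "block-ip"}  # hypothetical action names

    def run_model(tool_name: str, prompt: str) -> str:
        # Stand-in for the real model call.
        return f"[{tool_name}] draft response for: {prompt[:40]}"

    def human_approves(action: str, context: str) -> bool:
        # Stand-in for a real approval workflow (ticket, chat prompt, etc.).
        answer = input(f"Approve '{action}'? Context: {context} [y/N] ")
        return answer.strip().lower() == "y"

    def call_ai_tool(tool_name: str, prompt: str, proposed_action: str = "") -> dict:
        if tool_name not in AI_INVENTORY:
            raise ValueError(f"'{tool_name}' is not in the AI inventory; register it first")

        # Logging and monitoring: record who/what/when for every AI call.
        log.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "tool": tool_name,
            "model": AI_INVENTORY[tool_name]["model"],
            "prompt": prompt,
        }))

        response = run_model(tool_name, prompt)

        # Human-in-the-loop enforcement: the AI may suggest a high-risk action,
        # but a named human must approve it before anything executes.
        if proposed_action in HIGH_RISK_ACTIONS and not human_approves(proposed_action, response):
            log.warning("Action %s rejected by reviewer", proposed_action)
            return {"response": response, "action_taken": False}

        return {"response": response, "action_taken": bool(proposed_action)}

    # Example: call_ai_tool("ticket-summarizer", "Summarize alert ALRT-1042")

The fourth control, manual technical security testing, deliberately stays outside the code path: a wrapper like this is exactly the kind of component a human pentester should probe for prompt injection and authorization gaps.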

Learn for Yourself

For a deeper look at the findings and actionable recommendations, read the full report and SANS survey findings.

About Caroline Wong
Caroline Wong is an infosec community advocate who has authored two cybersecurity books, Security Metrics: A Beginner’s Guide and The PtaaS Book. When she isn’t hosting the Humans of Infosec podcast, speaking at dozens of infosec conferences each year, working on her LinkedIn Learning coursework, evangelizing Pentesting as a Service, or pushing for more women in tech, Caroline focuses on her role as Chief Strategy Officer at Cobalt, a fully remote cybersecurity company with a mission to modernize traditional pentesting via a SaaS platform coupled with an exclusive community of vetted, highly skilled testers.