The SANS Draft Critical AI Security Guidelines v1.1 outlines how enterprises can implement AI securely and effectively using a risk-based approach.
Artificial intelligence (AI) is a cutting-edge tool transforming the way organizations enhance efficiency, decision-making, and cybersecurity. But with every new technological advancement comes new risks. To address these risks, organizations must adopt comprehensive controls to ensure the secure implementation of AI.
And while security controls like access restrictions, data protections, and inference monitoring are vital to safeguarding against unauthorized access, data manipulation, and adversarial attacks, they are not enough. Organizations must also implement governance, compliance, and risk-based decision-making to ensure AI is deployed responsibly.
This blog summarizes the SANS Draft Critical AI Security Guidelines v1.1, which outlines how enterprises can securely and effectively implement AI using a risk-based approach.
As organizations increasingly adopt AI, they face a growing set of risks that extend beyond traditional security threats, and technical controls alone cannot address them. Governance frameworks, compliance strategies, and risk management methodologies must complement AI security controls for organizations to deploy AI solutions successfully.
The SANS report identifies six key control categories organizations must focus on to mitigate risks and ensure secure AI deployment:
Access controls protect AI systems from unauthorized access. AI models, and the interfaces used to query them, must be protected with strict access restrictions.
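To make the idea concrete, here is a minimal Python sketch of token- and role-based gating in front of a model endpoint. The token store, role names, and `query_model` stub are illustrative assumptions, not anything specified in the SANS guidelines.

```python
# Minimal sketch: role-based access control in front of a model endpoint.
# All names (API_TOKENS, roles, query_model) are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Principal:
    name: str
    role: str  # e.g., "ml-engineer", "analyst", "guest"

# In practice tokens would live in a secrets manager, never in source code.
API_TOKENS = {
    "token-abc123": Principal(name="alice", role="ml-engineer"),
    "token-def456": Principal(name="bob", role="analyst"),
}

# Roles allowed to call the inference endpoint at all.
ALLOWED_ROLES = {"ml-engineer", "analyst"}

def authenticate(token: str) -> Principal:
    principal = API_TOKENS.get(token)
    if principal is None:
        raise PermissionError("unknown or revoked token")
    return principal

def query_model(prompt: str) -> str:
    # Stand-in for the real model call.
    return f"model response to: {prompt!r}"

def handle_inference(token: str, prompt: str) -> str:
    principal = authenticate(token)
    if principal.role not in ALLOWED_ROLES:
        raise PermissionError(f"role {principal.role!r} may not query the model")
    # Audit trail: record who asked what (ties into the monitoring controls below).
    print(f"AUDIT user={principal.name} role={principal.role} prompt_len={len(prompt)}")
    return query_model(prompt)

if __name__ == "__main__":
    print(handle_inference("token-abc123", "Summarize today's alerts"))
```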
AI relies on vast amounts of operational and training data, and securing that data must be a top priority for any organization that wants to deploy AI safely.
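One concrete way to protect training-data integrity is to record approved dataset hashes and verify them before training begins. The sketch below assumes a hypothetical hash manifest and local file layout; it is an illustration, not a control mandated by the report.

```python
# Minimal sketch: verify training-data integrity against a signed-off hash
# manifest before the data is used. File names, the manifest, and the
# placeholder digest are illustrative assumptions.
import hashlib
from pathlib import Path

# Expected SHA-256 digests, recorded when the dataset was approved.
APPROVED_MANIFEST = {
    "train.csv": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir: Path) -> None:
    """Refuse to proceed if any approved file is missing or has been altered."""
    for filename, expected in APPROVED_MANIFEST.items():
        path = data_dir / filename
        if not path.exists():
            raise FileNotFoundError(f"approved file missing: {filename}")
        if sha256_of(path) != expected:
            raise ValueError(f"integrity check failed for {filename}")
    print("dataset integrity verified; safe to start training")

if __name__ == "__main__":
    # Will raise unless ./data/train.csv exists and matches the manifest.
    verify_dataset(Path("data"))
```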
Organizations must also weigh the security implications of where and how they deploy AI.
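As a rough illustration of evaluating deployment choices, the sketch below encodes a couple of example rules (such as keeping restricted data away from third-party model APIs) as an automated policy check. The fields and rules are assumptions made for the example, not the report's control list.

```python
# Minimal sketch: encode deployment constraints as a policy check so an AI
# rollout plan is reviewed against them before it ships. Illustrative only.
from dataclasses import dataclass

@dataclass
class DeploymentPlan:
    model_hosting: str     # "self-hosted" or "third-party-api"
    data_sensitivity: str  # "public", "internal", "restricted"
    environment: str       # "dev", "staging", "production"

def policy_violations(plan: DeploymentPlan) -> list[str]:
    issues = []
    # Example rule: restricted data should not flow to an external model API.
    if plan.model_hosting == "third-party-api" and plan.data_sensitivity == "restricted":
        issues.append("restricted data may not be sent to a third-party model API")
    # Example rule: external APIs in production require a documented risk review.
    if plan.environment == "production" and plan.model_hosting == "third-party-api":
        issues.append("production use of external APIs requires a documented risk review")
    return issues

if __name__ == "__main__":
    plan = DeploymentPlan("third-party-api", "restricted", "production")
    for issue in policy_violations(plan):
        print("BLOCKED:", issue)
```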
In an inference attack, adversaries manipulate AI output by injecting deceptive input. To safeguard against these attacks, organizations should control both what a model is asked and what it is allowed to return.
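A very simplified example of input and output screening is shown below. The regex patterns and `query_model` stub are assumptions for illustration, and real inference defenses would go well beyond keyword matching.

```python
# Minimal sketch: screen prompts and model output for obvious injection or
# data-leak patterns before and after inference. Patterns are illustrative.
import re

SUSPICIOUS_INPUT = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]
SUSPICIOUS_OUTPUT = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like pattern in responses
]

def query_model(prompt: str) -> str:
    # Stand-in for the real model call.
    return f"model response to: {prompt!r}"

def guarded_inference(prompt: str) -> str:
    if any(p.search(prompt) for p in SUSPICIOUS_INPUT):
        raise ValueError("prompt rejected: possible injection attempt")
    response = query_model(prompt)
    if any(p.search(response) for p in SUSPICIOUS_OUTPUT):
        return "[response withheld: potential sensitive data in output]"
    return response

if __name__ == "__main__":
    print(guarded_inference("Summarize the incident report"))
```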
Once an organization has deployed AI securely, continuous monitoring and tuning of the model is a must. The report recommends that organizations implement anomaly detection for their AI models.
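As one small example of what such monitoring might look like, the sketch below keeps a rolling baseline of prompt lengths and flags sharp deviations. The metric, window size, and threshold are illustrative assumptions, not recommendations from the report.

```python
# Minimal sketch: flag inference requests whose prompt length deviates sharply
# from a rolling baseline. Production monitoring would track far more signals.
from collections import deque
from statistics import mean, pstdev

class PromptLengthMonitor:
    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.lengths = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, prompt: str) -> bool:
        """Record a prompt; return True if its length is far outside the baseline."""
        length = len(prompt)
        anomalous = False
        if len(self.lengths) >= 10:  # need some history before judging
            mu = mean(self.lengths)
            sigma = max(pstdev(self.lengths), 1.0)  # floor avoids divide-by-zero
            if abs(length - mu) / sigma > self.z_threshold:
                anomalous = True
        self.lengths.append(length)
        return anomalous

if __name__ == "__main__":
    monitor = PromptLengthMonitor()
    baseline = [f"question number {i} about alert triage" for i in range(30)]
    for prompt in baseline + ["x" * 5000]:
        if monitor.observe(prompt):
            print(f"ALERT: anomalous prompt length {len(prompt)}")
```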
Secure AI deployment also requires a structured approach and compliance with data protection and privacy regulations.
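One narrow, concrete piece of that picture is scrubbing personal data before prompts or logs are stored or shared. The sketch below uses a few simple regex patterns as an illustration; they are assumptions for the example and far from exhaustive.

```python
# Minimal sketch: redact obvious personal data before prompts are logged or
# forwarded, as one small piece of meeting data protection obligations.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
]

def redact(text: str) -> str:
    """Replace recognizable personal identifiers with placeholder tokens."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    prompt = "Contact jane.doe@example.com about card 4111 1111 1111 1111"
    print(redact(prompt))  # -> "Contact [EMAIL] about card [CARD]"
```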
To balance security, efficiency, and compliance, organizations should take a risk-based approach and adopt AI gradually. This measured rollout means deploying AI in less critical environments first and confirming that adequate safeguards are in place before expanding its use.
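A simple way to picture this staged rollout is a risk-tier gate on where an AI feature may run, as in the sketch below; the tier names and the current approval level are illustrative assumptions.

```python
# Minimal sketch: gate an AI feature by environment risk tier so it runs in
# low-risk environments first and expands only after controls are proven.
RISK_TIER = {"sandbox": 0, "internal-tools": 1, "customer-facing": 2}

# Highest tier in which the AI feature is currently approved to run.
APPROVED_UP_TO = "internal-tools"

def ai_feature_enabled(environment: str) -> bool:
    """Allow the feature only in environments at or below the approved risk tier."""
    return RISK_TIER[environment] <= RISK_TIER[APPROVED_UP_TO]

if __name__ == "__main__":
    for env in RISK_TIER:
        state = "enabled" if ai_feature_enabled(env) else "disabled (pending risk review)"
        print(f"{env}: {state}")
```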
As AI adoption grows, securing it will only become more complex, and organizations will need to adapt their security strategies continuously. The SANS Draft Critical AI Security Guidelines v1.1 is built on three bedrock principles: robust security controls, governance and compliance, and a risk-based approach.
By taking a gradual, proactive approach to AI implementation, organizations can harness AI’s full potential while minimizing risk. That strategy has to keep pace with both the speed of AI adoption and the evolution of the technology itself, because AI will keep reshaping today’s cyber threat landscape and demands sustained vigilance.
We invite you to review the full SANS Draft Critical AI Security Guidelines v1.1, and stay tuned for how you can submit feedback when public comments open!
Rob Lee is the Chief of Research and Head of Faculty at SANS Institute and runs his own consulting business specializing in information security, incident response, threat hunting, and digital forensics.