Leveraging AI has never been easier, but there are implications regarding its safety and limitations.
Note: This blog post is the third in a series on AI and how to make the most of it in your Security Awareness, Culture and Human Risk efforts. This post covers the concerns, issues and limitations of AI with a focus on Generative AI. You can access the other blog posts from this series below.
AI is an extremely powerful tool; we are only beginning to discover the ways it can dramatically accelerate our cybersecurity efforts. However, like any other tool, AI has its issues and limitations. Before fully utilizing AI, you must be aware of its limitations in several areas: accuracy, bias, data privacy, organizational policy, and intellectual property.
First and foremost, remember that AI is not always correct. AI learns from vast datasets, including data from across the Internet, and if the data it learns from is incorrect, so is its output. This is why you should think of AI as a trusted friend: a resource that gives you ideas, suggestions, and perspectives you had not considered, while you remain responsible for the final result. This can be tricky because AI output often sounds confident even when it is wrong. Also, do not ask AI to verify its own output; instead, ask it to identify its sources or the reasoning it used to reach its conclusions, for example: “List the sources and the reasoning behind your last answer.”
Additionally, prompt engineering is critical. AI’s output is only as good as its input, i.e., the prompt. You need to be sure your prompts are clear and specific; prompts that are ambiguous or confusing can lead to inaccurate or incorrect output, as the sketch below illustrates.
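To make this concrete, here is a minimal Python sketch contrasting a vague prompt with a specific one. It assumes the OpenAI Python SDK and an API key in your environment; the model name, prompt text, and helper function are illustrative assumptions, not recommendations, so adapt them to whatever platform your organization has approved.

```python
# A minimal sketch of prompt specificity, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY environment variable.
# The model name and prompt text below are illustrative only.
from openai import OpenAI

client = OpenAI()

# Ambiguous: the model must guess the audience, scope, and format.
vague_prompt = "Improve our security policy."

# Specific: states the role, audience, constraints, and desired structure.
specific_prompt = (
    "You are a security awareness writer. Rewrite the password policy "
    "below in plain language for a non-technical workforce. Keep it "
    "under 300 words and list the three most important actions first.\n\n"
    "POLICY:\n<paste policy text here>"
)

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice; use your approved model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The specific prompt pins down the audience, length, and structure up front, which is exactly the clarity that separates useful output from guesswork.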
Machine learning (which Generative AI is based on) leverages algorithms developed by people, and those algorithms are only as good as the people who build them, including their biases. Be aware that the algorithm developers may have introduced their own biases without realizing it.
In addition, be careful of your own biases. AI is designed to please you, the user, so if you introduce biases in your prompts, you are likely to get biased results. For example, if you enter a prompt along the lines of, “Explain why cats are better than dogs,” AI will respond in a way that is heavily biased towards cats, perhaps implying that cats truly are better than dogs in all ways. Instead, a less biased prompt would be to ask AI something like, “Compare the advantages and disadvantages of cats and dogs as pets.”
Biases are something we humans cannot simply turn off, and quite often we introduce those biases without realizing it.
Public AI platforms, such as ChatGPT or Google Bard, continuously learn from user inputs. This means that when you use a tool like ChatGPT, it not only reads your inputs but may store, process, and learn from them. For example, let’s say you upload your company’s security policies to ChatGPT for it to review and improve (a fantastic use of AI). Perhaps you are concerned your security policies are far too complicated and difficult to understand, so you ask ChatGPT to make them easier for your workforce to follow. This not only saves you a tremendous amount of time but also helps address a key challenge in many organizations: overly complicated policies.
ChatGPT will happily read and can vastly improve your policies. But your security policies are now stored in ChatGPT and can potentially be shared with others as part of its future output. The same is true for any sensitive information, such as personally identifiable information (PII). For example, you upload a spreadsheet to ChatGPT not realizing it contains the names, phone numbers, and home addresses of thousands of people. Once again, all that data is now stored in ChatGPT. There are two ways to approach this: keep sensitive data out of public AI platforms in the first place (scrubbing it beforehand, as sketched below), or use a private or enterprise AI deployment that does not store or train on your inputs.
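If you do need to send text to a public AI platform, a first line of defense is scrubbing obvious PII before it leaves your hands. Below is a minimal, hypothetical Python sketch using regular expressions; the patterns are illustrative and far from exhaustive, so treat it as a starting point rather than a substitute for a dedicated data-loss-prevention tool or human review.

```python
# A minimal sketch of scrubbing obvious PII from text before sending it
# to a public AI service. These patterns are illustrative assumptions,
# not a complete PII detector -- names, addresses, and many other
# identifiers will slip through without dedicated tooling and review.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matched PII with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

record = "Contact Jane Doe at jane.doe@example.com or (555) 123-4567."
print(scrub(record))
# Contact Jane Doe at [EMAIL REDACTED] or [PHONE REDACTED].
```

Note that “Jane Doe” survives the scrub, which is the point: simple pattern matching catches only the easy cases, so the safest approach is still to never upload sensitive data in the first place.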
Organizations are just now starting to develop policies and guidelines on how their workforce can use AI. Not only are these policies in their infancy, but they will most likely change in the future. Make sure you understand your organization’s policies before using AI at work. For example, what data can you share with AI, what are you allowed to use it for, and which AI solutions are approved? One additional thing to keep in mind: regardless of how AI helped you create content, you are still ultimately responsible for the final output.
Intellectual property rights surrounding AI-generated content remain a complex issue. First and foremost, check with your organization’s legal department. The reason intellectual property (IP) and ownership can be so confusing is how AI works: by analyzing the works and data created by others. If you created the original document, such as a project plan, business case, or security policy, and only asked AI for suggestions on improving it, in most cases you likely own the rights to that document. However, things get more confusing when AI created the resource. For example, when you create an image using AI, who owns the image? The artists who created the millions of images used in the machine learning process, the AI algorithm that created the image, or you, the individual who wrote the prompts that generated it? Perhaps no one owns the resulting work and it is public domain.
Unfortunately, I don’t have a good answer for you, other than to read the AI platform’s documentation and its policies on content generation, and to get guidance from your organization’s AI and legal policies. In addition, expect different countries and regions to begin publishing regulations on the use of AI.
AI is an incredibly powerful tool, one that you will most likely be using more and more. However, as with any tool, be aware of its issues and limitations. In next week’s AI blog, we will begin a deeper dive into Generative AI and advanced prompt engineering.
PS: After I wrote this blog post, I asked ChatGPT to review it and provide suggestions on how to improve it. What amazed me was not only the detailed feedback, but how positive and encouraging ChatGPT was, showing far greater empathy than some people I know. This was the prompt I provided:
“I'm going to give you a blog post I want you to review and provide suggestions on how to improve. Provide feedback on how to improve grammar, structure and content. Do not re-write the article, simply review it and provide feedback with short, concise bullet points.”
Interested in reducing your organization’s human risk? Check out my course LDR433: Managing Human Risk and sign up for a FREE course preview here.
Lance revolutionized cyber defense by founding the Honeynet Project. Over the past 25 years, he has helped 350+ organizations worldwide build resilient security cultures, transforming human risk management into a cornerstone of modern cybersecurity.