Vulnerability Management Gone Wrong
This is Part 1 of a 4-part series. Read the other parts here:
Rekt Casino is kind of a wreck right now and is scrambling to respond to the successful ransomware attack it suffered late last year. In reviewing the report from the outside incident response team, and in responding to the flood of auditors and regulators now keenly interested in how Rekt Casino secures its organization and protects customer data and payment systems, it has become clear that the casino is woefully understaffed and lacks some of the most basic security controls.
This post will focus on one of the core building blocks of any security program that can help prevent or reduce the impact of breaches. Managing our vulnerabilities is an essential security function. It is number 3 in the Center for Internet Security's Critical Security Controls and is only outranked by hardware and software asset inventory, which are also crucial to the success of any vulnerability management program (or really any other security program for that matter).
Vulnerability management is one of the oldest security functions for most organizations, yet we continue to struggle to keep up with the demands as our technology continues to evolve and expand and as we get stuck supporting and maintaining legacy components. There is no silver bullet for vulnerability management, but there are certain actions that we can all take to improve and through consistent, incremental improvement we can eventually get out of this mess we have made for ourselves.
In order for Rekt Casino or any other organization to build or improve their vulnerability management program, we have a few suggestions:
In order to build a successful vulnerability management program, it is best to start by opening the lines of communication with the technology teams and business stakeholders that will be active participants in the program. We need to understand what is driving their decisions on a daily basis and where their priorities are now and in the future. Chances are, for Rekt Casino, that our priorities are now very much aligned. However, that wasn’t the case a few months ago and most likely won’t be the case months or years from now when the breach is a distant memory.
We will need a lot of support from these technologists and stakeholders, and if we don't take the time to understand their current processes, technological capabilities, and business drivers, our solution to the problem may not be in alignment, which inevitably leads to pushback, lack of engagement, and rework.
In the case of Rekt Casino, it may seem like the story or business case writes itself. While in many respects this is true, we need to be careful not to rely too heavily on these events to push forward our agendas, or we may lose support over time, or even sooner if something else happens that shifts the organization's focus from the breach to something else of equal or greater importance or cost. I am not saying we shouldn't leverage the breach to influence change, but we need to build a story that survives and maintains validity even if the breach is quickly forgotten or less costly than originally estimated.
Because of the urgency surrounding the breach, once we build our case and get approval, we may feel like we need to, or want to, dive right in and start making changes. While it is important to move quickly, something many organizations fail to consider is how success will be measured. If we don't think about this early, it may be difficult or impossible to gather the information we need later. Determining metrics up front allows us to collect baselines for comparisons and trends and will heavily influence our process and technology decisions. We should not only think about what metrics we will track within our own processes and technology, but also think about outside metrics and data points that may improve with the increased focus and effort on vulnerability management. For example, if over time more focused effort in vulnerability management leads to faster turnaround times on fixes, decreased failure rates, increased automation, or even decreased costs due to standardization and consolidation, all of these can be used to continue to build and support our story over time.
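To make the baseline idea concrete, here is a minimal sketch of one such metric, mean time to remediate, computed from hypothetical remediation records. The record format and values are assumptions for illustration, not a real data source.

```python
from datetime import date

# Hypothetical remediation records: (vuln_id, severity, opened, closed)
records = [
    ("CVE-2020-0601", "critical", date(2021, 1, 4), date(2021, 1, 11)),
    ("CVE-2020-1472", "critical", date(2021, 1, 6), date(2021, 1, 20)),
    ("CVE-2019-0708", "high",     date(2021, 1, 2), date(2021, 2, 1)),
]

def mean_time_to_remediate(records, severity):
    """Average days from discovery to fix for a given severity."""
    durations = [(closed - opened).days
                 for _, sev, opened, closed in records if sev == severity]
    return sum(durations) / len(durations) if durations else None

# Capture this number before the program changes anything, so later
# improvements can be compared against a real baseline.
baseline = mean_time_to_remediate(records, "critical")
print(f"Baseline MTTR (critical): {baseline:.1f} days")
```

Collecting even a simple number like this before making changes is what lets us show a trend line later instead of a single point.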
First of all, the perfect solution is fantasy. There are no perfect solutions in vulnerability management. Even if there were, the perfect solution for one organization would almost never be the perfect solution for any other organization. Just like people, all organizations are unique and have different cultures, needs, and constraints. Part of taking time to understand is determining what capabilities already exist that we can leverage to get started. Rekt doesn't have much in place right now from a security perspective. It will take time to procure, install, and adequately configure the technologies that many of us would expect to exist within a robust vulnerability management program. Rekt doesn't have a good understanding of all of the systems and software that exist within the organization, let alone technology to automatically identify all of the vulnerabilities on their servers and workstations.
We don't need to wait for these technology acquisitions to get started. Rekt, like most businesses, is using technology to patch and configure its systems. We don't need to wait for a vulnerability scan to tell us there are vulnerabilities when our patch and configuration management tools already highlight many of these issues. Eventually, we will need robust identification tools to find anything we missed and provide validation of our efforts, but we can make significant progress by mining the data sources that already exist within our organization for information.
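As a rough sketch of what mining a patch tool's data might look like, the snippet below derives a missing-patch report from a hypothetical export of installed patches per host. The host names, patch IDs, and export format are all assumptions; real patch management products expose this data differently.

```python
# Hypothetical patch-tool export: host -> set of installed patch IDs
installed = {
    "web01": {"KB4577586"},
    "db01":  {"KB4577586", "KB5000802"},
}

# Patches we expect to be present on every host
required = {"KB4577586", "KB5000802"}

# Hosts still exposed, derived straight from patch data -- no scanner needed
missing = {host: sorted(required - patches)
           for host, patches in installed.items()
           if required - patches}
print(missing)  # -> {'web01': ['KB5000802']}
```

A report like this won't catch everything a scanner would, but it surfaces known gaps immediately, from data we already have.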
We can cobble together some hardware and software inventories by leveraging our virtualization and cloud APIs and both validating and supplementing that information with data from the many other technology management and monitoring capabilities we have deployed in the environment. It won’t be perfect, but it’s a start.
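The merging step might look something like the sketch below, which combines per-source asset records keyed by hostname. The two source dicts stand in for exports from a hypervisor API and a monitoring tool; both are invented for illustration.

```python
def merge_inventories(*sources):
    """Merge per-source asset dicts keyed by hostname; later sources
    fill in fields that earlier sources are missing."""
    merged = {}
    for source in sources:
        for host, attrs in source.items():
            entry = merged.setdefault(host, {})
            for key, value in attrs.items():
                entry.setdefault(key, value)  # first source wins on conflicts
    return merged

# Hypothetical exports from a hypervisor API and a monitoring tool
hypervisor = {"web01": {"os": "Ubuntu 20.04", "ip": "10.0.1.5"}}
monitoring = {"web01": {"owner": "web-team"}, "db01": {"os": "RHEL 7"}}

inventory = merge_inventories(hypervisor, monitoring)
```

Note that db01 only appears in one source: cross-referencing sources like this is also how gaps in any single tool's coverage get exposed.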
We can also start to identify the operating system and software patches and configuration changes that seem to be the most problematic in our environment and dig into the why or the root cause for these issues. Chances are installing a vulnerability identification tool is not going to remove these roadblocks and this will allow us to jumpstart the conversations and efforts to resolve these larger issues which tends to result in the greatest reductions in vulnerabilities.
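Finding the most problematic patches can start as a simple failure count over deployment logs, as in this sketch. The log format and patch IDs are hypothetical; the point is that the top of the list is where root-cause analysis should begin.

```python
from collections import Counter

# Hypothetical patch-deployment log: (patch_id, host, succeeded)
deploy_log = [
    ("KB5000802", "web01", False),
    ("KB5000802", "web02", False),
    ("KB5000802", "db01",  True),
    ("KB4577586", "web01", True),
]

# Count failed installs per patch; repeat offenders are root-cause candidates
failures = Counter(patch for patch, _, ok in deploy_log if not ok)
for patch, count in failures.most_common():
    print(f"{patch}: {count} failed installs")
```

One recurring failure fixed at the root typically removes more vulnerabilities than weeks of one-off remediation tickets.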
While we need to have a long-term strategy for improving vulnerability management, we need to take an agile approach to achieving these strategic objectives. Too many times, I have seen organizations spend months or years planning and executing large projects only to experience major changes mid-project that completely derail their efforts. Consider a huge project to procure and implement vulnerability scanning, patch, and configuration management within our traditional data centers, only to find out halfway through that we are moving everything into the cloud or moving from physical or virtual servers to containers. Either of these changes will require different identification and remediation practices and different technologies to help support those practices.
Being agile is not just about iterating quickly, but also about opening lines of communication. Both of these agile concepts will help us have more immediate impact, but also receive more consistent and timely feedback. Chances are as we are implementing some of the more traditional processes and technologies, if we are consistently communicating and demoing these new processes and technologies with our stakeholders, they may ask how this will work in the cloud or what we might need to do differently for containers if those changes are on the horizon. They will most likely know about these coming changes well before they are communicated to the broader organization.
Vulnerability management has never been harder, but it has also never been easier. We have so much more to assess and manage these days that it is overwhelming. But we also have more technologies and services than ever before to help us succeed. While the cloud can cause the number of systems we have to manage to balloon quickly, it also offers platform, function, and software as a service, which allow us to leverage the shared responsibility model and be less responsible for managing the vulnerabilities on the hosts that support these services. Maybe we are not comfortable with giving up this responsibility for certain workloads, but there must be certain asset classifications where the benefit outweighs the risk.
Both the cloud and containers encourage us to follow a more immutable approach to our assets by forcing us to use images, which can lead to greater standardization and fewer vulnerabilities if these images are managed and vetted properly and if teams are required to upgrade to new images within a certain timeframe.
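The "upgrade within a certain timeframe" requirement can be enforced with a simple age check against the image registry, sketched below. The 90-day window, image names, and build dates are assumed policy values for illustration.

```python
from datetime import date, timedelta

MAX_IMAGE_AGE = timedelta(days=90)  # assumed policy: rebuild quarterly

# Hypothetical image registry entries: image name -> build date
images = {
    "base-ubuntu:2021-01": date(2021, 1, 15),
    "base-ubuntu:2020-07": date(2020, 7, 1),
}

def stale_images(images, today):
    """Return images older than the allowed rebuild window."""
    return [name for name, built in images.items()
            if today - built > MAX_IMAGE_AGE]

# Images on this list should be rebuilt and workloads redeployed from them
print(stale_images(images, date(2021, 2, 1)))
```

Wiring a check like this into the build pipeline turns the image-freshness policy from a document into an enforced control.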
On the custom application side, we are adding more and more apps and features, but developers are writing less and less custom code and leveraging more and more third-party and open-source libraries. Many of these libraries have built-in security capabilities and features that help reduce common vulnerabilities in our code. We need to train our developers on how to use these libraries correctly and make sure we are vetting them and monitoring them for vulnerabilities, but this tends to take less time and effort than auditing and assessing code written by our developers.
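Monitoring third-party libraries for vulnerabilities reduces, at its core, to matching pinned dependencies against an advisory feed, as in this toy sketch. The advisory data and package versions are invented; in practice this matching is done by dedicated software composition analysis tooling.

```python
# Hypothetical advisory feed: package -> versions known to be vulnerable
advisories = {
    "lodash": {"4.17.15", "4.17.19"},
    "log4j-core": {"2.14.1"},
}

# Dependencies pinned by our build: (name, version)
dependencies = [("lodash", "4.17.19"), ("express", "4.17.1")]

def vulnerable_deps(dependencies, advisories):
    """Flag pinned dependencies that appear in the advisory feed."""
    return [(name, ver) for name, ver in dependencies
            if ver in advisories.get(name, set())]

print(vulnerable_deps(dependencies, advisories))
```

Because the dependency list is machine-readable, this check is cheap to automate on every build, which is exactly why vetting libraries tends to take less effort than auditing custom code.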
By following these principles as we develop or improve our vulnerability management programs, we can have continuous, incremental success which will lead to big gains over time. It won't happen overnight and some of our efforts will fail, but with the small, incremental approach, failure is much easier to tolerate and recovery is possible, especially if all of our stakeholders have been involved, have bought into our story, and we have been able to consistently demonstrate and highlight the improvements we have achieved through relevant and accurate metrics.
During the webcast last week, we received a lot of great questions and were not able to adequately respond to all of them, so we wanted to take the time to answer those in this blog post. Here are the questions and responses:
David is a security consultant based in Salt Lake City, Utah focused on vulnerability management, application security, cloud security, and DevOps. David has 20+ years of broad, deep technical experience gained from a wide variety of IT functions held throughout his career, including: Developer, Server Admin, Network Admin, Domain Admin, Telephony Admin, Database Admin/Developer, Security Engineer, Risk Manager, and AppSec Engineer. David is a co-author and instructor for MGT516: Managing Security Vulnerabilities: Enterprise and Cloud, an instructor for and contributor to SEC540: Cloud Security and DevOps Automation, and has also developed and led technical security training initiatives at many of the companies for which he has worked. Read David's full profile here.