A Guide to Escalation Handling
by @jaypaz



Too Long; Didn't Read

Any time a pentest recipient raises a concern, no matter how small it may seem, we consider that an escalation. For the pentest provider, these concerns should be treated with a high degree of urgency and with a perspective to learn from them, rather than simply to close them out. Most escalations can be avoided or resolved quickly by making sure both teams are well-aligned and in communication from the beginning. Findings are at the root of most escalations: whether there are too few or too many, or they are not critical enough or too critical, each team comes in with different expectations.

Jay Paz, Senior Director, Pentester Advocacy & Research at Cobalt

We have explored a lot of topics that contribute to the quality of a pentest and which, when aligned, should deliver on the expectations of all teams involved. However, from time to time, misalignments do happen, and finding a mutually beneficial solution is key. Any time a pentest recipient raises a concern, no matter how small it may seem, we consider that an escalation.

For the pentest provider, these concerns should be treated with a high degree of urgency and with a perspective to learn from them, rather than to simply close them out. For the receiving team, these should be used to ensure expectations are being considered and met. Ideally, these misalignments are communicated as soon as they are perceived and not after the test has concluded. Staying in sync is absolutely essential for the success of the test and the overall perception of quality.

As mentioned above, there is no reason too small to open an escalation. Here are some of the common reasons we see:

  • Scope not fully covered
  • Lack of findings
  • Findings are out of scope
  • Unvalidated findings are reported
  • Request to remove findings
  • Environment negatively affected by testing activities
  • Production data is altered
  • Artifacts left behind in testing environment
  • Lack of experience or effort from testing team
  • Testers did not adhere to specific client requests (e.g., testing time windows, traffic limits)

The above list represents the most common types of escalations we saw in 2022; however, it is not exhaustive, nor does it represent the only themes that warrant an escalation.

Let's dig into some of the themes we see to better understand them and to prepare for these situations, so they can be resolved immediately or, better yet, never come up in the first place.

Scope not fully covered

This is a very common escalation, and one of the easiest to avoid. From a recipient's point of view, it is important that the scope is very clearly defined in the project brief; what is not in scope is just as important to document. Keeping testers within the boundaries of the intended scope can be tricky, since there are many routes and related features that can be in play. However, properly documenting these requirements and boundaries is a great way to ensure testers are aware of what is and isn't in play. From a tester's perspective, it is important to fully understand these boundaries before testing even begins; it should be one of the first questions asked when aligning expectations. Both understanding and verification of these boundaries are key. It is also important to ask what should be done if a vulnerability appears to extend beyond the scope of work.
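One lightweight way to make documented boundaries enforceable is to encode the in-scope and out-of-scope targets from the project brief and check every candidate host against them before testing it. The hostnames and patterns below are hypothetical placeholders, and this is a minimal sketch of the idea rather than a complete scoping tool:

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

# Hypothetical scope definition; in practice these lists mirror the project brief.
IN_SCOPE = ["app.example.com", "api.example.com", "*.staging.example.com"]
OUT_OF_SCOPE = ["billing.example.com", "*.prod.example.com"]

def is_in_scope(target_url: str) -> bool:
    """Return True only if the host matches an in-scope pattern
    and no out-of-scope pattern (deny rules win)."""
    host = urlparse(target_url).hostname or ""
    if any(fnmatch(host, pattern) for pattern in OUT_OF_SCOPE):
        return False
    return any(fnmatch(host, pattern) for pattern in IN_SCOPE)

print(is_in_scope("https://api.example.com/v1/users"))      # True
print(is_in_scope("https://billing.example.com/invoices"))  # False
```

Making the deny list win over the allow list reflects the point above: documenting what is *not* in scope matters just as much as documenting what is.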

Issue with Findings

Findings are usually at the root of most escalations: whether there are too few or too many, or they are not critical enough or too critical, each team comes in with different expectations. Most of these escalations can be avoided and resolved quickly by making sure both teams are well-aligned and in communication from the beginning.

Lack of findings

When it comes to a lack of findings in a test, it's important to remember that a pentest is a point-in-time assessment of the environment in question, limited to a finite number of hours or days.

Different testers take different approaches, and a specific approach may not yield as many findings. This doesn't mean there was a lack of rigor or quality in the tester's work; it just means that their investigation took a different route. Think of a crime investigator: they work hard to leave no stone unturned, they interview everybody involved, they look for evidence, they bring in forensic specialists, and yet some crimes go unsolved. That doesn't mean due diligence wasn't done or that their work lacked quality; it just means it was a tough investigation. The same is true of some pentests.

Perhaps the pentester did everything they should have: they used the appropriate methodology for the environment in scope, followed a testing checklist, communicated, collaborated, brought the receiving team in, and sought out help from other testers on the engagement, and yet there weren't many findings. Again, this doesn't mean they didn't do a good job; it just means their investigation didn't yield results. It could also mean the environment is very well designed and implemented, with no vulnerabilities present. If you see that no findings are being reported, have a conversation with the testing team while the test is ongoing: provide feedback, give them direction, and help them understand your environment better.

Findings out of scope

This is another situation where communication and collaboration can remedy the issue while the test is ongoing. If you notice the testers reporting findings outside the environment or scope that should be tested, that is a clear indication that expectations and boundaries weren't set as clearly as they needed to be. Take time to realign, and again help the testers understand the reason for the test and its margins. This should be an easy one to resolve, and it is a valid reason to have the findings removed from the result set.

Unvalidated findings 

While every finding reported should be directly validated, every once in a while there may be a false positive in the result set. It is imperative that the testers are made aware as quickly as possible and asked for more information and the necessary proof, so that the receiving team can actually work on remediation. These types of findings can be removed from a report if there is insufficient evidence of risk to the organization. Note that the onus is on the environment owner to show there is no risk and to provide testers with the context needed to reach the same determination.

Finding removal

We often get requests to remove findings from a report even when those findings are validated and within scope. This is a situation where the testing team should not remove the findings. Testers are third-party evaluators of the environment in scope; asking them to remove findings is unethical and goes against the main function of the pentest, which is to discover flaws. A receiving team shouldn't look to manipulate the results of the test; instead, they should look to understand the findings and work on remediation. If a finding is to be removed, there needs to be clear evidence that it is either out of scope, mitigated by other means (observable by the testers), or demonstrates no risk to the environment. These situations can be resolved, again, through collaboration and full disclosure to regain alignment on expectations.

Environment Affected 

A penetration test is intended to exercise the environment in scope to its limits, to see what a true, malicious attacker may be able to do and to evaluate the resistance, resilience, and viability of the application or environment. Unfortunately, there will be circumstances where those environments are affected in ways the receiving team did not anticipate. To penetration testers, this is a valid approach and should be considered a finding. While there may not be a specific flaw or vulnerability, the fact that the environment can be affected to the point it is no longer effective or viable should be an indication to the receiving organization that they need to evaluate the architecture and scalability of their solution.

It is recommended that an environment other than production be used for the pentesting exercise. This ensures that if the environment is negatively affected, the organization's business won't be. There are times, however, when a non-production environment is not available. In those situations, production environments are used, and it is imperative that a backup of the environment is made so that the receiving organization has a plan to recover as quickly as possible. Additionally, the limits of the test should be well documented and communicated to the testers. Should the traffic be throttled? Is there a lockout policy? Is the environment allowed to auto-scale? Can the application handle concurrent requests, and how many? These are all great starting questions to ensure the environment being evaluated can withstand the rigors of a penetration test.

An environment that is fragile, or cannot fully support the rigors of the test, will inevitably slow down the testers, and will likely yield an incomplete test as the testing period will come to an end before the full scope can be investigated.

Issue with testing team

I left this topic for the end because it is extremely subjective. Every organization has a different definition of tenure, experience, and capability, and each organization needs something different to cover its needs. Titles are also all over the place: a Security Architect at one organization may be a Security Engineer at another, though their skills may be comparable. Resumes are often inflated and not a good indicator of experience or capabilities. Furthermore, some of the best pentesters out there are self-taught.

Each testing team should be built with the environment and technologies in mind, and to help meet the expectations of the project. This should be done with due diligence and with as much information as possible about the testers' experience and abilities. This situation can also be easily resolved by raising it with the project managers overseeing the test. Provide them with feedback about why you feel a specific tester lacks the necessary experience or skills. The team can be adjusted, or it may just be a realignment of expectations that is needed.

Similarly, if there are issues with coverage, test hygiene, testing schedules or tester activity, the best course of action is to bring this up while the test is happening. These issues can be easily rectified and usually in a timely manner.

As previously mentioned, most issues that lead to an escalation can be avoided by aligning the teams and stating the expectations very clearly from the start. Scope, coverage, technologies, experience, testing windows, and many other factors should be taken into consideration before the test begins. This will lead to fewer in-test issues and allow for quicker resolution when they do arise.

Les Brown said “practice makes improvement”; from my perspective, each pentest is a new practice, and every escalation is a way to improve. So, practice, practice, practice and build a security program that is continuously improving. That will be the next topic in this series, Building a Pentest Program, taking the annual compliance-driven test to the next level by optimizing continuous testing to drive vulnerabilities down and security knowledge up.