I remember seeing a post on LinkedIn talking about how “Encryption is overrated.” The post was taken down by the author just a day later because people sent him a lot of angry messages. Even though it was not well articulated, his post had a valid point.
So please hear me out; I feel this post has a valid message as well. I will try to articulate it well. If I fail, this blog post might disappear as well.
The idea for this post was stolen from a friend. He has much more experience in this industry than I do. That might be why he put this thought into words before I did.
Do I even like pentesting? Yes, I do. It is my daily job, and there is nothing in the world I would rather do. The complexity amazes me, and I admire my peers, whose deep understanding of technology allows them to uncover insane vulnerabilities. And to be clear: pentesting itself is not easy.
Doing the OSCP, for example, was one of the hardest challenges of my life, and that is just the “beginner” certification. This job requires a lot of knowledge and dedication, but it is one of the most interesting things you can do; the challenge is part of what makes it interesting.
Nevertheless, pentesting cannot exist in isolation; it is only ever one part of an organization's approach to cybersecurity.
If you have no other ways to check for vulnerabilities, such as scanning code and infrastructure, your pentesters will simply drown in findings and never get to the really complex ones.
There need to be policies in place that define your approach to security, and people who catch attacks when they happen. There also have to be people fixing the vulnerabilities, and this is where we should take a closer look.
You can distinguish vulnerabilities in a lot of different ways. They can be classified according to severity, hardware or software, underlying principles, and much more.
For this article, I want to look at them from the perspective of “straightforward way to fix” versus “no straightforward way to fix” (wow, that is what you call a lack of better terms).
Let’s look at an example: when your pentest uncovers an SQL injection, you know how to handle it. Use parameterized queries and validate your user input. There are many ways to get it wrong, but in general, you know exactly what to do.
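To make the “straightforward fix” concrete, here is a minimal sketch using Python's built-in `sqlite3` module (the table and user names are hypothetical, purely for illustration). The unsafe version builds the query by string concatenation; the safe version passes the input as a bound parameter, so the database never interprets it as SQL:

```python
import sqlite3

# In-memory demo database (hypothetical schema for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # VULNERABLE: user input is concatenated straight into the query string.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # SAFE: the ? placeholder keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row: the injection succeeded
print(find_user_safe(payload))    # returns nothing: the payload is just a literal
```

The classic `' OR '1'='1` payload dumps the whole table in the unsafe version and matches nothing in the safe one. This is exactly why such findings are “straightforward”: the remediation pattern is the same everywhere the bug occurs.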
However, your pentesters (maybe during a Red Team engagement) might also report findings like too many permissions granted to user X, a lack of user awareness, users exposing sensitive information (awareness again), or core processes being set up in an insecure way.
Those findings might be harder to fix, and I would argue that there is no standard way to do so. You might be shouting at your screen now: “Zero trust and security by design and awareness campaigns.”
Well, you are right, but those still require a lot of work, and there is no one-size-fits-all approach.
(If you have the solution, don’t tell me in the comments, sell it and enjoy life as a billionaire.) These fixes often require architectural changes, system re-engineering, or even changes in organizational processes and behavior.
Why am I writing this whole thing? Because there are some points and ideas I wanted to put into writing. None of them are groundbreaking, but they still might be important for some people in cybersecurity:
Pentesting does not equal security: If your IT admin keeps talking about the security holes he or she has known about for a long time, don’t spend money on a pentester; spend it on empowering this guy or gal to improve your security.
Scanning can’t replace pentesting, and pentesting can’t replace scanning: “Why should I buy a pentest? I spent thousands of dollars on this shiny security scanner!” Oh boy, that could be a whole other blog post. “We have a pentest every year; why should we implement a security scan?” Sure, if you want pentesters to manually scan thousands of assets, they will happily bill you for it.
That will eat your whole budget and leave them no time to look at the complex vulnerabilities a scanner could never find. Finding the balance between these two and all the other security measures might be pretty hard. (One might even say pentesting is the easy part. wink)
The whole security organization has to work together: I know the feeling of reporting a glaring security vulnerability and not getting the response I hoped for. Sometimes it is the right decision to apply more pressure to the affected organizational unit so they understand the urgency. (This Remote Code Execution is not a bug, it is a feature.)
But you might have just told them about an issue with a process that is central to their revenue generation. The controllers and salespeople will scream at them if their numbers are bad, and the CISO will scream at them about the open security ticket with a high priority. How about working together with them to find a way to protect everyone's interests?
Try to understand the issue. Try to help them with your extensive security knowledge. Get the Incident Response Team involved to see if they can implement a detection while the vulnerability remains unfixed.
Security has to be strategic: There are a lot of operational tasks related to security that keep an organization running. Fixing this, implementing that, and collecting evidence for compliance. But you also have to have a vision, because zero trust and security by design will not be announced today and implemented next week.
It is a whole transformation, and somebody has to think about how to do it. Otherwise, there will just be endless operational work keeping your organization’s security somewhat alive.
None of this might be new, and some of it might not be very well articulated or even make sense, but I hope it still added something to the conversation about pentesting and security.
Writing and talking about it has at least helped me to see the bigger picture, but I am happy to hear your opinion.
If you enjoyed this article, you can read my other stuff over at https://security-by-accident.com/ (add it to your RSS feed), and you can follow me on Twitter @secbyaccident.