
How Security Engineering is Changing the Cybersecurity Industry

by Ross Haleliuk, March 28th, 2023

Too Long; Didn't Read

Mature security teams are looking at security through the lenses of layers and jobs to be done. To succeed in security today, one needs to understand on-premise infrastructure, various components of the cloud infrastructure, as well as infrastructure-as-code and DevOps pipelines. Cybersecurity of the future is going to look like software engineering.

I have previously talked at length about the maturation of cybersecurity and the move from promise-based to evidence-based security. In this piece, I will expand on one of the trends related to this transformation - namely the rise of security engineering.

Move from promise-based to evidence-based security and the evolution of IT

When I talk about the maturation of cybersecurity, I largely mean a change in the way we approach doing it. Until very recently, we thought of security as a feature and assumed that it was a tool problem: if you buy the “right”, “next-gen” security product from a top vendor, you will be “safe”.


Now, after many years of seeing how this approach failed to deliver on its promise, we are starting to understand that security is a process, not a feature. We are developing a systemic belief that we need to go back to basics: collect security data into one place, understand what is happening in our environment, learn what constitutes normal business practices in our organization and what may be a sign of compromise, identify how to detect malicious behavior, and respond appropriately. Instead of getting a widget that when enabled, activates an impenetrable protective shield, mature security teams are looking at security through the lenses of layers and jobs to be done.


“The best way to build a security posture is to build it on top of controls and infrastructure that can be observed, tested, and enhanced. It is not built on promises from vendors that must be taken at face value. This means that the exact set of malicious activity and behavior you’re protected from should be known and you should be able to test and prove this. It also means that if you can describe something you want to detect and prevent, you should be able to apply it unilaterally without vendor intervention. For example, if a security engineer has read about WannaCry, they should have the ability to create their own detection logic without waiting a day or two until their vendor does a new release.”
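
To make the WannaCry example concrete, below is a minimal sketch of what writing your own detection logic can look like when detections are treated as code. It is an illustration only: the event schema is invented for this example, and the indicators (component names such as mssecsvc.exe and tasksche.exe, the .wncry/.wnry extensions, outbound SMB on port 445) are drawn from public WannaCry reporting rather than from any vendor's rule set.

```python
# A minimal sketch of custom detection logic over a simple, invented event
# schema (not any particular vendor's API). Indicator values are illustrative,
# drawn from public WannaCry reporting.

WANNACRY_PROCESS_NAMES = {"mssecsvc.exe", "tasksche.exe", "@wanadecryptor@.exe"}
RANSOM_FILE_EXTENSIONS = (".wncry", ".wnry")


def detect_wannacry_like(event: dict) -> bool:
    """Return True if a single telemetry event looks like WannaCry-style behavior."""
    event_type = event.get("type")

    if event_type == "process_start":
        return event.get("image", "").lower() in WANNACRY_PROCESS_NAMES

    if event_type == "file_write":
        return event.get("path", "").lower().endswith(RANSOM_FILE_EXTENSIONS)

    if event_type == "network_connection":
        # EternalBlue-style propagation: outbound SMB to hosts outside the network.
        return event.get("dst_port") == 445 and not event.get("dst_internal", True)

    return False


if __name__ == "__main__":
    sample = {"type": "file_write", "path": "C:\\Users\\bob\\report.xlsx.WNCRY"}
    print(detect_wannacry_like(sample))  # True
```

The specific indicators matter less than the fact that the logic is readable, version-controlled, and deployable the moment the team writes it - no vendor release required.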


Another factor that is changing the way security is done is the evolution of IT we have been witnessing over the past decade. There was a time when a background in IT was sufficient to get started as a security professional as it would provide a basic understanding of how different elements of the infrastructure worked, interacted with one another, and what was needed to secure them.


The automation of IT and the rise of the cloud are abstracting a lot of what is happening in the background, making it harder to develop mental models around the IT infrastructure and understand how to secure it. To succeed in security today, one needs to understand on-premise infrastructure, various components of cloud infrastructure, as well as infrastructure-as-code and DevOps pipelines, to name a few. As IT becomes more complex, the requirements for the people in charge of maintaining, administering, and making sense of this complexity are increasing.


Other factors accelerating the change in how we do security include:

- Weak correlation between results and security spending,

- Business demanding measurable results,

- Increasing complexity of security and customer environments,

- Security tools proliferation,

- Growing maturity of security professionals,

- Rising insurance premiums,

- Emergence of the new generation of service providers,

- Emergence of the supporting vendor ecosystem,

- Establishment of security frameworks, and

- Investor support of evidence-based security.


Cybersecurity of the future is going to look a lot like software engineering

Seeing software engineering as a model for cybersecurity

Cybersecurity originated in hacking circles - amongst technically inclined people curious about reverse-engineering products and tools, tinkering, and breaking into what was seen as unbreakable. Then, several security vendors came in to promise safety, saying “we will alert you when something bad happens, just have someone check the alerts”. Amazed by the prospect of such simplicity, we started hiring security analysts to monitor alerts. Today, a decade or so later, we are seeing that we have to go back to the basics. This take is, obviously, an oversimplification, but the truth is that we need to get hackers back into the industry, and that the cybersecurity of the future is going to look a lot like software engineering.


Software development offers us a great model: business analysts and product managers operate between business and technology - they talk to customers, understand and evaluate business requirements, translate them into technical requirements, and prioritize those requirements for development. I often hear how security teams are blamed for not talking to customers and not understanding the business enough. While the sentiment is fair, I think people making those statements are missing the point: criticizing technical security professionals for not understanding business is akin to criticizing back-end engineers for not being great at user interviews. The truth is simple: if back-end engineers wanted to do user interviews, they would not have chosen back-end and would have become UX designers instead.


We do need security teams to better understand the business; there is no doubt about that. But we cannot solve the problem by sending everyone from IT and security out to interview employees across the company (though we should encourage building relationships). Instead, we need the equivalent of business analysts and product managers in security to bridge the gap.

Software engineering principles for cybersecurity

Software engineering offers a great set of tools, concepts, principles, and mental models that are shaping the cybersecurity of tomorrow.


First of all, the security of tomorrow will be security-as-code.


Now that we have everything-as-code - policies-as-code, infrastructure-as-code, privacy-as-code, detections-as-code, etc. - we are able to deploy, track, and test changes to the organization’s security posture, and roll them back as needed. This, in turn, means an approach that can be tested and validated, further solidifying the point about evidence-based security. Similar to how it works in software engineering, you are now able to run automated security tests to see how your system will behave, and employ a quality assurance person (think penetration tester) to go beyond automation and look for edge cases not covered by it.
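
As a small illustration of what “tested and validated” can mean in practice, here is a sketch of unit tests for detection content, runnable with pytest. It assumes the illustrative detect_wannacry_like function sketched earlier lives in a hypothetical detections.py module; any detection written as code could be exercised the same way.

```python
# A minimal sketch of validating security-as-code: plain unit tests (runnable
# with pytest) asserting that a detection fires on known-bad telemetry and
# stays quiet on benign telemetry.

from detections import detect_wannacry_like  # hypothetical module name


def test_fires_on_ransom_extension_file_write():
    event = {"type": "file_write", "path": "C:\\Users\\bob\\report.xlsx.WNCRY"}
    assert detect_wannacry_like(event)


def test_stays_quiet_on_benign_file_write():
    event = {"type": "file_write", "path": "C:\\Users\\bob\\report.xlsx"}
    assert not detect_wannacry_like(event)
```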


Second, the cybersecurity of the future will be based on continuous monitoring, continuous deployment, and frequent iterations. An organization’s security posture is not static; it changes every second, and the speed with which it changes has accelerated with the rise of the cloud. Seconds after new detection and response coverage is deployed, an organization’s environment will have changed, with hundreds of virtual machines spun up in the cloud and tens of new SaaS applications installed across endpoints. Security assessment and coverage cannot remain static - they need to evolve as the organization itself evolves. Security, therefore, is a process, not a feature - a process based on continuous assessment and continuous refinement of the organization’s security posture.


Third, the industry of tomorrow will have to do things at scale, in an API-first manner. Gone are the days when security teams had to log in to tens of tools to manually fine-tune configurations, and even more so, manually deploy security solutions. Everything has to be done at machine scale, via APIs.
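
A sketch of what API-first security management might look like in practice follows. The endpoint, payload shape, and token handling are hypothetical - real platforms each define their own APIs - but the point stands: one script can push the same change to every environment instead of someone clicking through consoles. The example uses the third-party requests package.

```python
# A sketch of API-first management: pushing one detection rule to many
# environments programmatically. The endpoint, payload, and token are
# hypothetical; real platforms each define their own APIs.

import os
import requests

API_BASE = "https://security-platform.example.com/api/v1"  # hypothetical endpoint
TOKEN = os.environ["SECURITY_API_TOKEN"]  # assumed to be set in the environment

rule = {
    "name": "ransomware-file-write",
    "logic": "file_write AND path ENDS WITH '.wncry'",  # illustrative pseudo-logic
    "severity": "high",
}

for org_id in ["org-prod", "org-staging", "org-dev"]:
    resp = requests.put(
        f"{API_BASE}/orgs/{org_id}/detections/{rule['name']}",
        json=rule,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    print(f"deployed {rule['name']} to {org_id}")
```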


Fourth, cybersecurity will see the world of commercial tools coexist and tightly integrate with the world of open source. While this has been the case in software engineering for a while, with all commercial software engineering tools leveraging open source libraries and components, in cybersecurity open source is often seen as separate from the vendor market. I have previously looked in depth at the role of open source in cybersecurity, and I see this role growing as the industry matures. Cybersecurity vendors won’t be able to get away with simply creating commercial versions of open source tools; they will need to build on top and add real value beyond making a statement that “we are not open source” and showing their SOC 2 compliance certificate.


Taking an engineering approach to security means focusing on improving processes for the continuous delivery of defense instead of checking compliance boxes. It empowers security teams to deliver technical solutions and build scalable tools before recommending hiring more people, which, in turn, allows them to achieve more with less. All these factors combined make the evidence-based approach to security I have discussed in depth before an inevitability; it is no longer a question of if, it’s a question of when (answer: it is already happening).

Place of security in the product life cycle

When we think about security, one of the first questions that comes to mind is “where in the software development life cycle does it belong?”. While it’s a valid question, zooming in on the software development life cycle alone creates blind spots around what happens with the software out in the wild. To properly discuss the role of the engineering approach to cybersecurity, it is imperative to look at the product lifecycle as a whole, which includes how the software is built as well as what happens after it starts being used in production.


Historically, security was isolated from the development team building the product, and as a result, it was seen as the last step to ensure “compliance” before the release. Granted, security wasn’t the only part of the software development lifecycle (SDLC) that existed as a silo; so were product management (PM), design, engineering (often split into front-end and back-end), and quality assurance (QA), to name a few.


Before the adoption of Agile, which instituted the idea of cross-functional teams, software development looked roughly as follows:


  1. A product manager would write requirements and send them off to designers
  2. Designers would build mockups and send them off to the development team
  3. Back-end engineers would complete their part of the work and send the remaining part to the front-end team
  4. The finished product was then sent to QA for review


As the right people were not involved at the right step, this traditional, so-called waterfall approach resulted in a lot of waste, inefficiencies, rework, and poor decisions. Agile, with its Scrum and Kanban frameworks, led to short iterations and frequent code releases, and most importantly, it created cross-functional product teams where a PM, a designer, software engineers, and QA would work together. In practical terms, it meant that software engineers and QA would provide feedback on requirements and designs before those were deemed ready for development, and PMs/QA would provide their input as the product was being built, reducing the need to later throw out the code and re-do what was already “done”.


Agile didn’t solve all problems. In particular, when DevOps emerged, DevOps engineers would find themselves devoid of any context about what product teams were doing, making their job reactive and hard to do. Eventually, best practices of organizational design caught up, and recently, in 2019, Manuel Pais and Matthew Skelton published what is in my opinion the most impactful guide on designing technology teams - Team Topologies: Organizing Business and Technology Teams for Fast Flow. In their book, Manuel and Matthew review common challenges of organizational design, share best practices around successful team patterns and interactions, and recommend ways to optimize value streams for tech companies.


Until recently, security was behind on becoming a part of the development organization, existing as a siloed checkpoint that, in the best-case scenario, gives a go-ahead on releases and assigns new tickets to development teams, commonly marked as “highest priority”. While that is still a pattern seen in many organizations, the approach to security has started to change in recent years. We are seeing the growth of DevSecOps as security’s response to DevOps, and as security is being built into CI/CD pipelines, its role is changing from compliance to the delivery of defense. As it relates to new product development, security is indeed starting to look like software engineering.


Going into the future, we will likely continue to see security teams operate as stand-alone units inside their organizations. However, more and more application-specific vulnerabilities will be handled by those building the product and having the most context about the code - software engineers. Security teams will act as consultants to the product development teams, similar to the role played today by platform and site reliability engineers.

Engineering mindset for security infrastructure and operations

While it is generally easy to imagine what an engineering mindset could look like when we are talking about software development, it can be much harder to do when looking at security infrastructure and operations - the things that happen day-to-day, after, and outside of software development. This is largely because application security is part of the software development lifecycle, and venture-funded cloud-native startups have been actively looking for ways to secure their code, and to do it in a way that scales.


Comparatively little discourse has emerged around bringing an engineering mindset, approaches, and frameworks to security operations, detection and response, incident handling, digital forensics, and other areas of security. These vital components of a company's cybersecurity processes have been seen as internal parts that can do their jobs well as long as they have “the right products” deployed on the network and a few people to monitor those products. Most importantly, security teams have been consistently under-resourced and consumed by putting out the latest fire, leaving them unable to focus on building defenses proactively.


Needless to say, this approach has proven to be limiting and often detrimental to the organization’s security posture. While I do not have the evidence to back up this statement, I would speculate that most (if not all) companies that have appeared in the news as having suffered a breach in the past few years did have the latest and greatest tools deployed in their environment. The fact that these tools failed to protect businesses from security incidents is by no means a suggestion that they do a bad job (quite the opposite). Instead, I think these events highlight that no product is bulletproof, despite what the vendor's marketing may suggest, and that to defend our businesses and our people, we have to change the way we approach security.


Embracing an engineering mindset to security means several things, including:


  • Accepting that tools are just tools, and looking beyond vendor selection at the fundamentals of cybersecurity as an area of practice. This means asking questions such as “What should I do to secure my organization? What are the risks I am facing? What malicious behaviors can occur in my environment? How am I detecting them? How will I respond when they are detected?”, instead of “What tool should I buy to secure my organization?”
  • Armed with this realization, companies have to put this approach into practice, ensuring they have full visibility into their environment (“single pane of glass”) and into what they are detecting and how they are doing it (knowing what detections work in their environment, what exactly they detect, and how they detect it), along with the ability to test their defenses in a reproducible way.
  • Realizing that many components of security operations can and should be automated, and looking for scalable ways to deliver defense to the organization. In practice, this means embracing detections-as-code, infrastructure-as-code, and other approaches that have proven to work and scale in other areas of technology. When a team detects new vulnerabilities and malicious behaviors, it should have the tools to respond to them in a way that eliminates the need to manually apply the same response to the same vulnerability tomorrow (a minimal sketch of this idea follows this list).
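
Here is that sketch: once a response has been codified, it runs automatically every time the corresponding detection fires. The isolate_host and disable_user functions are stubs standing in for whatever EDR, identity provider, or cloud APIs an organization actually uses, and the detection names are invented for the example.

```python
# A sketch of codified response: once a playbook is written for a detection, it
# runs automatically every time that detection fires. isolate_host and
# disable_user are stubs standing in for whatever EDR, identity provider, or
# cloud APIs an organization actually uses.

from typing import Callable, Dict


def isolate_host(alert: dict) -> None:
    print(f"[response] isolating host {alert['hostname']}")  # would call an EDR API


def disable_user(alert: dict) -> None:
    print(f"[response] disabling account {alert['user']}")  # would call an IdP API


PLAYBOOKS: Dict[str, Callable[[dict], None]] = {
    "ransomware-file-write": isolate_host,
    "impossible-travel-login": disable_user,
}


def handle_alert(alert: dict) -> None:
    """Route an alert to its codified response, or flag it for a human."""
    playbook = PLAYBOOKS.get(alert["detection"])
    if playbook:
        playbook(alert)
    else:
        print(f"[triage] no playbook for {alert['detection']}, routing to an analyst")


handle_alert({"detection": "ransomware-file-write", "hostname": "fin-laptop-042"})
```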


Historically, most security knowledge has resided in the heads of experienced practitioners who have “been there, done that”. As the industry matures, we need to look for ways to codify this knowledge and make it shareable, testable, and accessible for use and improvement by organizations across the globe. Cybersecurity will always be an art, as it deals with a creative, intelligent adversary. However, it also needs to become a science if we want to continue growing our knowledge base and make cyber defenses accessible to organizations that cannot afford to hire “the best of the best” in the field. Taking a science-based, engineering approach to security will enable security teams to build systems and processes to do their work consistently.

The rise of security engineering and how it is changing the cybersecurity of tomorrow

The evolution of detection engineering and the emergence of detection engineering as a service

In the past few years, there has been a lot of emphasis on transparency in threat detection. The problem has attracted attention from security practitioners, startup founders, and industry analysts, to name a few. In 2022, two ex-Gartner analysts, Anton Chuvakin and Oliver Rochford, wrote a mini-paper titled “On Trust and Transparency in Detection” which opens as follows: “When we detect threats we expect to know what we are detecting. Sounds painfully obvious, right? But it is very clear to us that throughout the entire history of the security industry this has not always been the case. Some of us remember the early days of network IDS, when intrusion detection systems were delivered without customers being able to see how the detections worked. The market spoke, and these vendors are all dead and buried by Snort and its descendants, who opened their detection signatures for both review and modification.” The piece is a great read for anyone interested in the topic.


Every organization’s environment is one-of-a-kind, and the uniqueness is only increasing with the growth of SaaS and the emergence of specialized tools for nearly everything. Every department in the organization has tens of tools it uses to manage its work (just imagine the number of applications designed to replace spreadsheets for marketing, human resources, finance, product, and operations teams alone). On top of that, nearly every company that achieves a certain level of growth develops its own internal applications for use cases unique to its operations, business model, or go-to-market strategy.


The uniqueness of every organization’s environment means that both the way attackers would gain entry into that organization and the way defenders would be able to detect malicious behavior in that environment will be different. In practical terms, for security teams to solve the problem of detecting something unique to their organization’s environment, they need to create detection logic for that specific environment.


Security vendors that promise blanket coverage for anyone and everyone are a great basic layer (as is the antivirus), but they are not enough for companies that have a lot to lose.

Detection engineering has greatly evolved in the past 10 years, as people who have been doing it for a decade themselves attest. In an article on the topic, Zack Allen, director of security research at Datadog, describes how “the more, the better” approach to creating detection content has evolved into a mature realization that we need high-quality, comprehensive detection content, not just “more detections”. Detection engineers, who used to be seen as “wizards” who “come down from their spire and preach to the world their latest findings, present at Blackhat and DEFCON”, are now one of many types of security engineers.


Talking about detection engineering, Zack concludes:

“You can't write detections for your network security product if you don't have network security experts. This is the same for endpoint, cloud, application and host-based detections. It’s like having a bunch of data scientists build a machine learning model to detect asthma in patients. However, they forgot to bring in a doctor to show them how pneumonia patients would give the model false positives. You need the subject matter experts. This has not changed in the industry, nor should it. What has changed is that these experts need a solid basis in software engineering principles. You can't scale all of those detections and deploy them in a modern environment, manage sprints (yes this is software engineering :)), or write unit, integration, and regression tests without lots of bodies or lots of automation. I can reliably say my boss would rather hear that I can scale the problem away with software than with hiring more people.”
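
A rough sketch of the regression testing Zack describes might look like the following: replay a labeled corpus of historical telemetry against every detection and fail the pipeline when false positives or false negatives creep in. The corpus format (JSON lines) and the DETECTIONS registry are assumptions made for this illustration.

```python
# A rough sketch of regression testing for detections: replay labeled
# historical telemetry against every registered detection and report mismatches.

import json
from typing import Callable, Dict

# name -> detection function, e.g. {"wannacry-like": detect_wannacry_like}
DETECTIONS: Dict[str, Callable[[dict], bool]] = {}


def run_regression(corpus_path: str) -> bool:
    """Each corpus line: {"event": {...}, "expected": ["detection-name", ...]}."""
    failures = 0
    with open(corpus_path) as fh:
        for line in fh:
            record = json.loads(line)
            event, expected = record["event"], set(record["expected"])
            fired = {name for name, fn in DETECTIONS.items() if fn(event)}
            if fired != expected:
                failures += 1
                print(f"mismatch: expected {sorted(expected)}, got {sorted(fired)}")
    return failures == 0
```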


I see two signs that detection engineering has evolved into a dedicated and well-defined profession: the rising number of events such as DEATHCon (Detection Engineering and Threat Hunting Conference), talks and trainings at Black Hat, BSides, and other gatherings of practitioners, as well as the beginnings of defining the maturity stages organizations go through when implementing it. The Detection Engineering Maturity Matrix by Kyle Bailey is the best attempt to measure the capabilities and maturity of the function across organizations.


As more and more organizations realize that detection logic is not one-size-fits-all, and that security vendors are unlikely to be able to deliver on their promise of “keeping everyone safe”, we are starting to see cybersecurity startups that specialize in building detection content. Instead of simply triggering alerts, these companies make the content of the rule itself visible and editable, which allows security teams to understand what exactly is being detected in their environment, how exactly it is being detected, and what playbooks or alerts will be triggered when it is detected. Two notable examples of startups in this space are SOC Prime and SnapAttack, both of which support Sigma, the de facto standard language for writing detection content. Instead of promising full coverage, these vendors give companies the ability to access security coverage in a fully transparent, pay-for-what-you-use manner.
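
For readers who have not seen Sigma, here is a simplified, illustrative rule in Sigma's YAML format (not taken from any vendor's rule pack), embedded in a short Python snippet to underline that such content is plain text a security team can read, edit, test, and version like any other code. Parsing it requires the PyYAML package.

```python
# A simplified, illustrative rule in Sigma's YAML format, embedded as a string.
# The field values are made up for this example; parsing requires PyYAML.

import yaml

SIGMA_RULE = r"""
title: Suspicious Encoded PowerShell Command
status: experimental
logsource:
  product: windows
  category: process_creation
detection:
  selection:
    Image|endswith: '\powershell.exe'
    CommandLine|contains: '-enc'
  condition: selection
level: medium
"""

rule = yaml.safe_load(SIGMA_RULE)
print(rule["title"], "->", rule["detection"]["condition"])
```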


Not only can organizations buy generic detection coverage from vendors that specialize in detection engineering, they can now also have their security service providers build alert logic custom-tailored to their environment. While this is not something offered by all providers today, it is where I think the industry is going, as security consultancies and managed detection and response firms look to add more value beyond vendor selection and alert monitoring. Soon, detection engineering as a service will become standard for security service providers.


Notably, the change in customer expectations is also changing the way security vendors such as endpoint detection and response (EDR) and extended detection and response (XDR) providers operate. Having often started as “black boxes” that run generic detection logic built in-house across all of their customers, they now also offer customers the ability to write their own detections on top. Newer vendors such as LimaCharlie (where I lead product) and Panther, along with a whole new category of so-called Open XDR, are also offering full visibility into the organization’s security coverage (not just the custom rules, but all detections that run in the organization).

The growing importance of security engineering

I use detection engineering as an example; we are seeing a big push toward engineering in all areas of security. With infrastructure-as-code, infrastructure management is now owned by engineering, not IT. Therefore, software engineering skills are becoming a prerequisite for working in security.


With the rise of the cloud, software engineering principles and practices now underpin how infrastructure is provisioned, how security policies and configurations are applied, how the company’s security posture is tested and interrogated, how changes to security configurations are implemented and tracked, and so on. While DevOps engineers are in charge of provisioning and managing the infrastructure, security engineers, who combine a strong knowledge of engineering with a deep understanding of security, are ideally suited to secure it.


Most cybersecurity professionals today developed their skills in IT - designing network architectures, provisioning firewalls and managing firewall policies, and other tasks critical to keeping enterprise IT going. Unfortunately, many of the people in charge of security have only the most basic understanding of software engineering, preventing them from developing the skills they need in a world where everything - infrastructure, policies, detections, etc. - exists as code. It is only natural that when the underlying infrastructure is in code, security professionals need to learn to code. The same is true for automation and the use of APIs: since the vast majority of technical tasks are now accomplished at scale via APIs (including work that was previously done manually within security teams themselves), we need people who understand how to design, use, and decommission APIs in a secure manner.
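
As a small example of what “learning to code” buys a security professional in an as-code world, here is a sketch of scanning infrastructure definitions for insecure settings before they are deployed. The resources are assumed to be already parsed into Python dictionaries, and the resource types and fields are simplified and invented for this illustration.

```python
# A sketch of scanning infrastructure-as-code for insecure settings before
# deployment. The resource schema is simplified and invented for illustration.

def find_misconfigurations(resources: list[dict]) -> list[str]:
    """Return human-readable findings for obviously insecure resources."""
    findings = []
    for res in resources:
        if res["type"] == "storage_bucket" and res.get("public_access", False):
            findings.append(f"{res['name']}: bucket is publicly accessible")
        if (
            res["type"] == "firewall_rule"
            and res.get("source") == "0.0.0.0/0"
            and 22 in res.get("ports", [])
        ):
            findings.append(f"{res['name']}: SSH open to the internet")
    return findings


resources = [
    {"type": "storage_bucket", "name": "customer-exports", "public_access": True},
    {"type": "firewall_rule", "name": "allow-admin", "source": "0.0.0.0/0", "ports": [22, 443]},
]
for finding in find_misconfigurations(resources):
    print(finding)
```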


Security engineering teams are expected to go beyond operational controls and build engineering solutions to security problems. As more and more organizations realize that off-the-shelf security tools do not address the uniqueness of their needs and environments, those that have the resources and support to take hardening their security posture seriously are starting to go beyond configuring commercial products. While for some use cases a tool purchased from a security vendor can be implemented as is, more often than not it needs some tweaking to accommodate the unique needs of an organization. Moreover, there is an ever-growing understanding, especially among large enterprises handling a lot of data and among cloud-native companies, that commercial tools cannot address the full multitude of security needs and requirements. These companies are starting to build some elements of their security stack in-house, delegating the design and development of these solutions to internal security architects and security engineers.


Tom Tovar of Appdome suggested in a recent podcast that we can put security organizations into three categories:


  1. Traditional security teams, composed of technically strong security professionals with deep knowledge of security and compliance, and best practices for both.
  2. Advanced security teams that often have security researchers and security architects tasked with designing systems, product evaluation, penetration testing, etc.
  3. Cybersecurity and engineering organizations that have “hardcore” engineering talent capable of building and delivering security solutions for modern organizations.


I see these categories not as different stages of maturity, but as an evolution in terms of organizational needs. Cloud-native companies are starting to build security engineering teams designed to work closely with DevOps, software engineering, and product development. With that, many of the elements that would traditionally be owned by IT and security teams, are now owned by DevOps and security engineering teams. This model that relies on builders - engineering talent able to craft security solutions - is not needed for every organization. However, as more and more companies are started cloud-first from day one, as their environments are becoming more unique with the proliferation of SaaS tools, and as more security teams realize the value in security tailored to their needs, we will see a massive shift towards security engineering.


As of the time of writing this piece, best practices for organizational design have not caught up with the rise of security engineering. In practical terms, this means that while security engineering teams have their own tools they are responsible for managing, there seem to be many conflicts around ownership between security engineers and software/DevOps engineers, as well as operational security teams. It is fair to say that, as of today, in every organization fortunate enough to have a security engineering team, the shape of that team looks slightly different. Organizational conflicts and unclear areas of ownership are natural steps in the formation of any new discipline, so what we are seeing is organic and expected.

The changing nature of the security analyst role

As cybersecurity becomes as-code, it will get harder and harder for those who do not code. I am talking about the changing role of security analysts.


Security analysts, traditionally categorized into Tier 1 (triage specialist), Tier 2 (incident responder), and Tier 3 (expert analyst), play an important role in a security operations center (SOC) team. With recent advances in automation, the shape of this role has started to change.


First of all, as security teams look for ways to improve efficiency and productivity, more and more processes and procedures in a SOC that used to be manual are getting automated. Second, the rise of artificial intelligence (AI) promises to eliminate the need to manually triage thousands of alerts - arguably one of the biggest pain points security teams are experiencing. As of today, AI is yet to deliver on its promise of automating security, but it will eventually happen. Probably sooner than we’d like to imagine, we are bound to see a battle of AI against AI, with adversaries training their own models and defenders training theirs. Futuristic pictures aside, SOC analysts will need to adjust to the changing shape of security.


Due to the two factors described above, SOC analysts (especially those traditionally known as “Tier 1”) have to start learning new skills. The most pressing technical skill is coding, and analysts understand that, as illustrated by the Voice of the SOC Analyst survey commissioned by Tines in 2022.


There are enough alarmists in the industry suggesting that the future of analysts is bleak, but I see it differently. The role is not going away, but its shape and scope are going to change. In the past, being an analyst was largely about knowing how to use specific security products. Now security is starting to be seen not as a tool problem but rather as an approach problem. Additionally, security tools are becoming commoditized and standardized so that they all work similarly to one another: if an analyst has used one EDR, it’s likely they won’t need much time to learn another. The exact products an analyst used in the past will become less important than their understanding of security fundamentals. Analysts looking to stay relevant as the industry matures will need to become more technical. While not everyone will or should become an engineer, it will be more and more critical that they understand how threat actors see the world and how attacks are conducted.


I think the future of analysts in security will be similar to the future of QA (quality assurance) professionals in software development. The vast majority of QA jobs are no longer manual and require the use of automation tools, languages, and frameworks. Those that pay the most are what Amazon and many other companies now call “engineers in test” - software engineers with a focus on testing product functionality and APIs. Manual QA still exists, but it is hard to come by, the roles are incredibly competitive, and as the supply of qualified workers greatly outpaces demand, they command the lowest compensation. Amazon’s Mechanical Turk changes the game dramatically, further lowering the cost of QA. Quality assurance did not die, but it greatly transformed (and notably, it did not take advanced AI and ML to change it).

Security stack of the future

As security teams become more technical, they are starting to recognize that no vendor can promise “safety” and “security” as a feature. Traditionally, most commercial security tooling abstracted the foundational layers away from security teams and offered a high-level view in the form of alerts and reports that summarized what had happened. Organizations that wanted visibility into the foundational layers of security infrastructure and event-level data were forced to use open source tools or build the tooling from scratch.


As the DevSecOps approach requires visibility into the foundational layers of security, the security stack of the future is going to look very different from what we see today on so-called cybersecurity market maps. First of all, we will see more and more neutral solutions that practitioners can use to interrogate their systems, gain full visibility into their environment, and build security coverage tailored to their needs. These tools will work transparently, and the work they do will be easy to test and verify. Importantly, we will see a blend of commercial and open source solutions that can be made to work together to address the organization’s security use cases. At the center of security will be processes and security professionals, not “products”; tools are just that - tools, and it is how they are used and what they are used for that matters.


In the past few years, we have started to see more and more security leaders reject the idea of blindly trusting vendors: it’s the approach we promote at LimaCharlie, where I lead product, as well as one embraced by other new-generation solutions such as SOC Prime, Panther, Prelude, and most recently Interpres, to name a few.


The image below is a list of today’s companies and open source tools that take an evidence-based, engineering approach to security. It is not meant to be exhaustive (there are many more great tools that fit the above criteria), and it is not a traditional “market map”.


Closing the talent gap

To hire great talent, business needs to change the way it works

Security teams defending businesses from malicious actors are stressed, understaffed, undervalued, and underpaid. The reality is that hacking groups out there are better at attracting deeply technical talent than any corporation. They pay them tax-free, and much more than any enterprise would. They offer a great work-life balance, the thrill of hacking, and a sense of achievement. Most of them are 100% anonymous. No need to accumulate worthless certifications and pay a few hundred bucks every few years to maintain them as “current”, and no need to deal with recruiters, human resources, basic compliance training, workplace politics, legal, compliance, payroll, and demanding bosses to top it off. If this sounds like I am advertising working for the adversary - that is not at all my intent, and the truth is that none of this is new to experienced security professionals. I am merely demonstrating that to hire great talent, business needs to do better.


This, coupled with the fact that many of the very best security professionals do not feel motivated to deal with the bureaucracy of working for large corporations, paints a bleak picture of the current job market.


It would be unethical and untrue to suggest that I have a list of quick and easy solutions, so we as an industry have to find them together. We can start by dropping the requirement for “5 years of experience” for entry-level jobs, and build on that as we go along, removing biases and improving our hiring process.

Getting software engineers to do security

As everything in security becomes as-code, one of the rarely discussed pipelines for cybersecurity talent could be software engineering. Some argue it may be easier to teach engineers security than to teach security professionals software engineering. While I am not the right person to judge the correctness of this statement, I have seen enough software engineers turn into security practitioners to know that the path is real.

The challenge lies in making it known that cybersecurity is a viable career path for software engineering graduates, providing them with the right training (adding deep-level cybersecurity courses to computer science programs), and designing meaningful career paths for them to find their way in cybersecurity. This raises the question of the compensation that entry-level cybersecurity jobs can offer new graduates: if a person can get their first job in software that pays 20-40% more than anything they are offered in security (if they can even get an interview), the whole idea of getting software engineers to do security falls apart.

Educating the new generation of security engineers

A lot is said about the talent shortage in cybersecurity, and it’s easy to notice a sea of new entrants promising to solve this problem. From 6-week cybersecurity bootcamps to online courses to new college and university degrees, everyone wants a piece of the pie in “educating the future of security”. It is tempting to think that all we need is to get more high school students and career changers excited about security and enrolled in these programs, and 3-5 years later we are all set. I see the problem going much deeper.


If you look at the vast majority of these educational programs, you notice that their curriculums tend to omit the engineering component. I have not spent enough time compiling data, so my observations on this topic are rather anecdotal, but here is what I am seeing:


  • Most bootcamps are so short and so high-level that it would be unreasonable to believe they can provide their graduates with any deep knowledge of the industry. I have met many great security professionals who went through bootcamps. However, they became great not because they got to attend a bootcamp, but because of what they did outside of it. I do not suggest that these short immersive courses offer no value. To illustrate the point I am trying to make, I encourage you to think about why there are so many 4-8 week-long bootcamps teaching front-end development and so few 4-8 week-long trainings teaching back-end development. The answer is simple: back-end development, like security, requires a solid understanding of deeply technical theories and concepts, and the ability to process these concepts and implement them in code, often without any visual feedback. I will go on the record to say that you cannot teach that in the same amount of time it takes to teach people how to create a simple website.
  • Many university programs, especially at the master's level, focus more on writing policies than on developing practical skills. Even those that are deeper and more practical tend to be what university education is as a whole - too old, too theoretical, and too shallow. Security evolves daily, with new exploits, new vulnerabilities, new attack vectors, and entirely new technologies created just as fast. University programs have to go through lengthy approvals, rigorous academic reviews, and evaluations, so much so that by the time a program is approved, it reflects the news of six months ago or more. There are some great teachers and instructors who work hard to stay up to speed with the industry and teach their students useful, practical, and current skills. We should all be deeply grateful for their work, but it is worth saying that these people do not work in line with the educational system itself; rather, they work around its limitations to do good for their students and society.
  • Cybersecurity certifications do not reflect the skills and experiences the market needs. I do not intend to diminish the amount of effort people put into them, and I do not suggest that they offer no value whatsoever. What I mean instead is that, with very, very few exceptions, these certifications offer theoretical concepts and make people feel like they know how security should be done, without giving them any real skills to actually do it. While attackers are getting deep into the code, looking for ways to bypass controls, exploit vulnerabilities, and exfiltrate data, we seem to seriously think that we can stop them by teaching people how to write policies and pass multiple-choice exams about cloud security. Imagine being operated on by a heart surgeon with a “certificate in hearts” who hasn’t done a single surgery in their life (I can’t).


All this is a long way of saying that today’s best security professionals do not come from universities, and, from what it appears, neither will tomorrow’s. People who become leading-edge professionals in security engineering come from hands-on jobs in penetration testing, the military, the NSA, and other governmental institutions with strong offensive components. They come from mature security teams at cloud-native enterprises that treat security seriously. They are self-taught in front of their computers, at CTF (capture the flag) competitions, and at events like Open SOC, Black Hat, DefCon, and the like.


To shape the future of security and close the talent gap, we cannot sit and hope that enough motivated individuals will find a way to teach themselves the skills they need to secure our people, businesses, and countries. Hope is not a strategy, and neither are 6-week bootcamps; we need to build systems and institutions to close the technical cybersecurity gap. Security is hard. Teaching security is hard as well, but it needs to be done. Compliance and policy writing are important, but they alone won’t help us defend our networks from attackers who are incredibly talented, highly skilled, highly motivated, and well paid.


While we need to find ways to get software engineers into cybersecurity, we also need deeply specialized security professionals to acquire engineering skills. While not every incident response, digital forensics, or endpoint security practitioner will become an engineer, almost everyone would benefit from knowing the basics of software development and becoming fluent in languages such as Python. Adopting an engineering approach to security operations will enable us to automate the manual parts of incident response and continuously build scalable ways of securing the organization’s perimeter, while spending more time proactively building defenses. For that, cybersecurity education must start to include courses in software engineering, in the same way that software engineering and computer science degrees should teach the basics of security.
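
To ground this, here is a small example of the kind of manual SOC work that basic Python fluency lets a practitioner automate: deduplicating a batch of alerts and auto-closing those matching an allowlist before a human ever looks at them. The alert schema, detection names, and allowlist entry are all invented for illustration.

```python
# A small example of automating manual SOC work: deduplicate a batch of alerts
# and auto-close those whose file hash is on an allowlist. The alert schema and
# allowlist are invented for illustration.

KNOWN_GOOD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # placeholder allowlist entry


def triage(alerts: list[dict]) -> list[dict]:
    """Deduplicate alerts and auto-close those whose file hash is allowlisted."""
    seen = set()
    queue = []
    for alert in alerts:
        key = (alert["detection"], alert["hostname"], alert.get("file_hash"))
        if key in seen:
            continue  # duplicate of an alert already in the queue
        seen.add(key)
        if alert.get("file_hash") in KNOWN_GOOD_HASHES:
            alert["disposition"] = "auto-closed: allowlisted hash"
        else:
            alert["disposition"] = "needs-review"
        queue.append(alert)
    return queue


alerts = [
    {"detection": "new-binary-executed", "hostname": "hr-laptop-7",
     "file_hash": "d41d8cd98f00b204e9800998ecf8427e"},
    {"detection": "new-binary-executed", "hostname": "hr-laptop-7",
     "file_hash": "d41d8cd98f00b204e9800998ecf8427e"},
    {"detection": "impossible-travel-login", "hostname": "vpn-gw-1"},
]
for item in triage(alerts):
    print(item["detection"], "->", item["disposition"])
```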


Closing thoughts

There are indeed no silver bullets that will solve all of our security problems, and producing more security engineers won’t do that either. However, adopting an engineering approach to security can help us build security into software products from their inception, accelerate the maturation of the industry, and future-proof our security operations.


The talent shortage is most certainly going to be an obstacle. However, just because we don’t have the resources we need does not mean we can or should dismiss a viable solution to a hard problem. It is also clear that we have to change our hiring practices and re-evaluate the criteria used to identify future security leaders. We get what we optimize for. Every day, attackers spend countless hours learning new technologies, reverse-engineering the software we build, and looking for vulnerabilities in code. If we look at security job postings, it’s easy to conclude that we are hoping to stop the adversary by passing multiple-choice tests and earning certifications - skills that are very different from those needed to understand how code works and how to defend it.


I am convinced the engineering approach to cybersecurity is an inevitability. We are starting to see the signs of its growth already, and it will only be accelerating from here. The question is - how quickly will we be able to build the infrastructure for its development? If history has taught us anything, it is that the state of cybersecurity practice as a whole is lagging years behind the latest and greatest talks given at DefCon, Black Hat, and the like. We will have to see what industry-altering events will come next.


The lead image for this article was generated by HackerNoon's AI Image Generator via the prompt "cybersecurity".