
How to get started with Threat Modeling, before you get hacked.

by Alex Wauters, June 26th, 2019

If you want to achieve security by design in your project and mitigate cyber threats before they hit your applications, you will need to discuss these risks with your team and plan ahead. If you don’t know how to get started, this guide is for you.

By the end, you should be able to organize your first threat modeling session, reach a shared understanding of the security threats within your team, and agree with the business stakeholders on what you will do to mitigate them.

Sharing a Security by Design mindset

Not everyone on the team has the same security mindset; some will be more familiar with certain types of threats than others (perhaps having learned the hard way). Even business analysts who are not familiar with secure software development may identify threats that a developer or security expert did not foresee. The best way to uncover as many threats as possible, and to make sure everyone knows about them, is to discuss them together.

Start now by picking a date for this discussion session. Invite at least the developers, someone who knows how the systems are deployed, someone from the security team, and your product owner. Reserve about one hour to an hour and a half. If more time is needed, you can always organize a follow-up session.

An introduction to threat modeling

A threat modeling session typically consists of the following steps:

  • Pick a use case of your application.
  • Draw a Data Flow Diagram of this use case, which shows how data flows through your system and which applications or databases are involved.
  • For each asset passing through your data flow, go through a checklist and discuss potential security risks. Rate each risk (e.g. by likelihood and impact).
  • Discuss and decide what you will do about each risk.

Pick a use case

You can discuss any number of use cases for your application in a threat modeling session, but for your first session it’s probably better to start with one or two use cases at most. I suggest you start with your authentication use case (how do people identify themselves and gain access?) as well as one of the main flows of your application (for Medium it could be a user posting a new story; for Lyft you might pick a user requesting a ride).

Got your use case? Gather your drawing pencils. We’ll create the data flow diagram.

Draw a data flow diagram

In a data flow diagram you draw your applications (processes), databases or other important data assets, data flows and actors. You can do this step during the session, or prepare it beforehand.

The data flows start with a rectangle on the left: the user or actor performing the use case.

Circles denote processes. A process could be a web application, or a collection of applications. Collections of services can be hidden behind a double circle. You might want to encapsulate a service in this way to focus the exercise on other data flows without diving too deep into those services (yet).

I typically draw the hosting of the static front-end files separately from the back-end API, even if they are deployed on the same server. Static files have their own set of risks (such as third-party injection) and may not be behind a trust boundary (no authentication is required to download the front-end files). We’ll get to trust boundaries in a minute.

The rectangles which are missing their vertical sides stand for assets. These could be anything from databases to files, queues, or data contained in logs. Attackers may be especially interested in this data, either because it is useful on its own or because it could be manipulated to become useful (e.g. monetary transactions which have yet to be processed).

Between services, you may encounter a trust boundary (the dotted line). A trust boundary means that data flows crossing this line are not trusted. Typically a flow will need to present authentication credentials, and any session previously associated with that flow will no longer be valid across the boundary.
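If your team prefers to keep the diagram next to the code, the same elements (actors, processes, assets, data flows and trust boundaries) can also be captured as plain data. Below is a minimal Python sketch of that idea; the element names are invented, loosely based on the private podcast feed example used later in this article. Tools such as OWASP pytm take this "threat modeling as code" approach much further.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Element:
    name: str
    kind: str                             # "actor", "process" or "asset"
    trust_boundary: Optional[str] = None  # boundary the element sits behind, if any

@dataclass
class DataFlow:
    source: Element
    target: Element
    description: str
    crosses_boundary: bool = False        # True when the flow passes a trust boundary

# Invented elements, loosely based on the private podcast feed example.
employee = Element("Employee", "actor")
api = Element("Podcast API", "process", trust_boundary="Corporate network")
feed_store = Element("Feed database", "asset", trust_boundary="Corporate network")

flows = [
    DataFlow(employee, api, "Request private podcast feed", crosses_boundary=True),
    DataFlow(api, feed_store, "Read feed entries"),
]

for flow in flows:
    marker = " (crosses trust boundary)" if flow.crosses_boundary else ""
    print(f"{flow.source.name} -> {flow.target.name}: {flow.description}{marker}")
```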

There are several tools online which can help you draw Data Flow Diagrams:


  • draw.io is a general-purpose drawing web app, and the one I use. Michael Henriksen has created a library which you can import into draw.io: go to File -> Open Library -> URL, and refer to the raw file on GitHub.
  • OWASP Threat Dragon is a web app which saves your diagrams on GitHub. Simple, but GitHub-only.
  • Microsoft’s Threat Modeling Tool is a native Windows application where you can draw data flows, annotate them and generate reports. It’s not as simple as the options above, it’s Windows-only, and you need to save the diagrams to your local disk.

The draw.io templates also include some other elements such as security control tables, which allow you to quickly indicate which controls are already in place at certain places in the data flow diagram. This example draws the data flow diagram for an employee-only private podcast feed:

I’ll get back to this use case later in the attack trees segment.

Now that you have a data flow diagram, it’s time to discuss the risks.

Discuss the security risks

If you have prepared the data flow diagram before the session, go through the diagram in the session with the team and modify it if necessary.

Now we’ll go over each of the assets and data flows, and ask ourselves the following STRIDE questions:

  • S: Is there a risk of Spoofing? Can someone execute this data flow while pretending to be someone else? Can I pretend I’m a different employee and view their salary information?
  • T: Is there a risk of Tampering? Can I modify the request parameters to get different behavior (and e.g. inject code or modify my salary), or tamper with the files on the server?
  • R: Is there a risk of Repudiation? Can someone deny they performed an action? Example: you order something and pay for it, later request your money back, and deny you ever made the order. Or a super-user could deny having accessed sensitive information such as employee salaries.
  • I: Is there a risk of Information Disclosure? Can users obtain more information than they are allowed to? What information could get exposed from this asset? What is contained in this database?
  • D: Is there a risk of Denial of Service? Some actions could result in DoS, for instance users spamming actions which require a lot of resources, or quota limits being reached on other APIs. An authentication scheme which temporarily locks users out may be abused to deny users (or system accounts …) the ability to log in.
  • E: Is there a risk of Elevation of Privilege? Can users obtain more permissions than they are allowed to? The typical way to mitigate this is to give processes as few permissions as possible, to minimize the attack surface.

You can use a different checklist or add additional questions as you see fit. Not every STRIDE category applies to every element: quickly evaluate whether it does, and then go through the possible security threats related to that question.
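To keep the discussion moving, you can also generate the checklist up front by walking every element of the diagram past every STRIDE question. Here is one possible way to do that in a few lines of Python; the element names are placeholders for whatever appears in your own diagram.

```python
# STRIDE categories and the question to ask for each one.
STRIDE = {
    "Spoofing": "Can someone execute this flow while pretending to be someone else?",
    "Tampering": "Can requests, parameters or stored files be modified?",
    "Repudiation": "Can someone deny having performed this action?",
    "Information Disclosure": "Could this expose data the user should not see?",
    "Denial of Service": "Can this be abused to exhaust resources or lock users out?",
    "Elevation of Privilege": "Can someone gain permissions they should not have?",
}

# Placeholder element names; use the ones from your own data flow diagram.
elements = ["Login flow", "Podcast API", "Feed database"]

for element in elements:
    for category, question in STRIDE.items():
        print(f"[{element}] {category}: {question}")
```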

For each risk that you find, record it with a reference to the element, a short description, the likelihood of it occurring (Low, Medium, High), the impact on your system (Low, Medium, High), and a proposed mitigation. This is the mitigation you will verify with your product owner and create a story for in your backlog.
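Recording every finding in the same shape makes it easy to paste into your backlog or knowledge base. Here is a minimal sketch of such a record, with an invented example entry for the podcast feed use case:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    element: str        # which element in the data flow diagram
    description: str    # short description of the threat
    likelihood: str     # Low, Medium or High
    impact: str         # Low, Medium or High
    mitigation: str     # proposed mitigation, to be confirmed with the product owner

# Invented example finding for the private podcast feed use case.
risks = [
    Risk(
        element="Podcast feed URL",
        description="Feed link can be shared with people outside the organization",
        likelihood="Medium",
        impact="High",
        mitigation="Generate per-employee links and revoke them when someone leaves",
    ),
]

for r in risks:
    print(f"{r.element}: {r.description} (L={r.likelihood}, I={r.impact}) -> {r.mitigation}")
```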

it’s REDACTED so it must be TOP-SECRET right?

The business may decide not to do anything and instead accept the risk. That is still better than never having discussed it: at least everyone is aware of the risk and what its impact may be. It’s important to document this decision, so it can be revisited later and understood by other team members. Other options are to transfer the risk (e.g. through insurance or through liability shifts in contracts).

Note that humans are terrible at estimating the likelihood of rare events, and even a threat with low likelihood may still happen and put your service at risk. Don’t discount these; focus on the risks with the biggest impact.

Next steps

Well done for making it here! Post the threat model, with the diagram, list of risks and mitigations, in your knowledge base (consider who has access!) and share it with whoever wasn’t present. Next:

Plan the next threat modeling session. Do you have other use cases to cover? Want to go in-depth into a process which you hid behind a double circle? Plan another session in the near future. Expecting a significant change to your architecture, or a sprint where you will be working on security features? Plan it in; otherwise pick a date in three months. If you don’t plan the next session, chances are you will forget and not keep up. You can always cancel the session as the date draws near.

Tie the agreed mitigations to your retrospective actions, and review whether the actions were completed successfully (for example: ‘Write out developer security guidelines’).

Whenever you start a development sprint, decide if you want to revisit the threat model for one of your use cases.

You’ve completed your first threat modeling session! Congratulations 👏. To help you with your next sessions, there’s more info below:

Attack Trees

Not every type of threat maps easily to the STRIDE model. I posted a data flow diagram of a private podcast feed earlier. Attack trees offer a different way of looking at threats, and may surface risks which do not pop up while answering the STRIDE questions.

Consider the case of a private podcast which should only be available to employees of an organization (and only while they are still part of the organization). You could generate a private link for each member separately, but that may not cover all the possible risks. Start with the general risk, and then go deeper, listing more specific cases of that risk:
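As a rough text-only sketch of such a tree, here is the same idea as nested data you can print from a terminal. The goal and branches below are illustrative examples for the podcast case, not an exhaustive list.

```python
# An attack tree as nested data: the root is the attacker's goal,
# children are increasingly specific ways to achieve it.
# The branches below are illustrative, not an exhaustive list.
attack_tree = {
    "Outsider listens to the private podcast": {
        "Obtains the feed URL": {
            "Employee shares the link": {},
            "Podcast app submits the feed to a public, searchable index": {},
        },
        "Keeps access after leaving the organization": {
            "Personal feed link is never revoked": {},
        },
    }
}

def print_tree(node: dict, depth: int = 0) -> None:
    """Print the attack tree with indentation showing each level."""
    for goal, children in node.items():
        print("  " * depth + "- " + goal)
        print_tree(children, depth + 1)

print_tree(attack_tree)
```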

Ah! Apparently podcast apps automatically add feeds to a searchable index, to make it easier for other people to find content. By discussing risks like these together, you uncover more possible threats — more than you would have on your own.

More resources

For Privacy by Design questions, you can use the LINDDUN question framework.

If you would like a more elaborate walkthrough of threat modeling, Microsoft has a free e-book available on the Security Development Lifecycle. Chapters 9 and 22 focus on threat models.

Free ebook: The Security Development Lifecycle

And that’s it. Don’t be afraid to get started with threat modeling. If you need help you can look for security experts to help you with these sessions, but don’t let it stop you from trying it out yourself. It won’t be perfect, but it will be much better than doing nothing. You will get better over time.

Thanks for reading, and do let me know if you have any questions, feedback or encounter any issues on the way.