The most important aspects of our lives, including our finances, identity, and healthcare, now depend on code. Software security is now critical not just for companies, but for individuals as well.
Anyone using modern software products and services needs to understand basic software security concepts for their own well-being. And any developer who designs, creates, or maintains these products and services must have a comprehensive understanding of security vulnerabilities and security best practices to avoid breaches that could cost companies and organizations the trust of hundreds, thousands, maybe even millions of users.
Most of the time, security vulnerabilities seem distant and unfamiliar; they are potential issues that never feel entirely tangible, and are thus easy to neglect. It is hard to imagine an actual attacker entering your systems through the outdated mail server plugin you never assumed needed updates, or through the smart fish tank your company just brought into the office, until you experience it in an in-your-face manner. Strict deadlines and a continuous flow of feature requests don't help us mind software security either.
A popular approach to software security.
Let me state this in advance: I am not a software security professional. I am a backend engineer who wanted to make his apps and servers more secure. So this article is written from a developer's perspective.
What made me do the research and put together this article was a doubt I had years ago. I was about to deploy a service to the cloud, knowing that it would serve highly sensitive data, hopefully only to the users we intended. I had liked reading about hacking and cryptography ever since I started programming, and I knew a system could contain many exploitable vulnerabilities, but I didn't know how to defend against them. So I was afraid I was doing something without knowing the consequences.
I did the basic hardening on my server, I hashed and salted passwords in my database, I used TLS and token-based authentication; I used all the best practices I knew. But what if I had made a mistake, or left an opening? It would have been nice to know how an attacker would enter my system. It would have been nice to have a simple checklist, made for a developer, so I could make sure I hadn't made an obvious mistake.
In this article, my aim is to give you what I wished I had back then. I can't tell you exactly how an attacker will enter your system, but I can show you examples where successful attacks devastated systems. I can't give you the perfect checklist, but I can at least share one that covers most of the vulnerabilities seen in these attacks. I hope it will be useful, and I hope you will stay with me through this long ride. Thank you.
Hint: Through my development experience, I have found knowing about real-life software security vulnerabilities very beneficial, so I would be happy if you read the whole article. But even if you don't have the time because of a task or the imminent threat of alien abduction, you may want to look at the checklist at the end for a quick overview. First things first, no hard feelings.
In this article, I’ll go through real-world examples of known software vulnerabilities and exploits, separated into categories (such as “Application”, “Library”, or “System” vulnerabilities) so that we can classify security vulnerabilities in a simple way, by the layer they reside in. This is not a formal distinction.
These examples and categories are by no means exhaustive. If you dive in, there is much more to discover in vulnerability databases, articles, and discussion forums.
It is a well-known fact that security vulnerabilities are not purely technical problems. They usually arise from the interaction of several components: technical issues, processes, management, and human error. Studying real examples of security vulnerabilities is therefore useful for understanding how these components interact to produce a security disaster.
Fasten your seat belts, because this will be rather long. But please bear with me through the examples if you have the time, and try not to skip directly to the solutions suggested at the end of the article. The methods I’m going to suggest for safer software development are closely related to these real-life security incidents, and I believe there are important lessons to be learned by examining these incidents from both technical and human perspectives.
Your password is being used by /u/rowealdo37189.
There is a nonzero chance that the feature you just added to the product you are working on is giving anyone on the internet the ability to execute arbitrary code on your server as an additional feature. In the heat of development, we introduce many bugs that may be turned into vulnerabilities, and we may never think of them as vulnerabilities until we actually see them in action.
Let's look at some real-world examples of application-layer breaches:
Panera Bread Incident
The Panera Bread incident is about backend access management. But it is actually about more than that, as it also shows how important a role management plays in handling security issues. Let's look at what happened:
As reported, in August 2017, someone sent an e-mail notifying Panera Bread's Director of Information Security of a vulnerability in their delivery API that allowed anybody to obtain the full name, home address, email address, food/dietary preferences, username, phone number, birthday, and last four digits of a saved credit card of any Panera Bread customer who had signed up for an account, just by incrementing an index in a URI.
Access control vulnerabilities are pretty common, and they can be very harmful to a company's reputation. The interesting part about the vulnerability itself ends here, as it can be solved through access management changes. What is more interesting is how Panera Bread reportedly handled the situation:
In the e-mail, Dylan Houlihan only briefly mentions the vulnerability without any details, and suggests that if the director provides a PGP key (a public key that would enable Houlihan to encrypt his report so that only Panera Bread could decrypt it), he could send a report without any other party reading the e-mail and learning about the vulnerability.
Apparently, he was rejected by the Director of Information Security, accused of sending the e-mail as a “sales tactic”, and told that “Demanding a PGP key would not be a good way to start off” and that “any organization with a security practice would never respond to a request like the one he sent”.
PGP is a tool first developed by Phil Zimmermann in 1991. It employs asymmetric encryption, in which one key can be used for encryption but not decryption, hence the name. This allows a person to have a “public key” and a “private key”: the public key can only be used to encrypt data, so you can distribute it freely; you can put it on your web site, or write it on a piece of paper and leave it on your desk, and it is completely safe, as people can only use it to encrypt data, and only the person with the private key can decrypt and read it. In this case, Houlihan suggested that Panera Bread send him a public key (a brand new one can be generated in less than a minute just for that conversation and then deleted) as a security measure, so no one else would learn the details of the vulnerability, and Panera Bread would be safe until the issue was swiftly fixed. Or that’s what he assumed would happen.
Long story short, Panera Bread seemingly did not fix the vulnerability for the eight months that followed. Eventually, some security experts, no longer fond of trying to inform Panera Bread, decided to expose the vulnerability publicly. After it was exposed, Panera Bread declared that the damage was limited to 10,000 users and that they were fixing the situation. And seemingly, they didn’t, until they had to.
This story tells us not to ignore a security issue. If we are in production and responsible for people's private information, such as their addresses, information security issues should be a priority.
Financial platforms are not exempt from vulnerabilities either, and the company Fiserv was also affected by them:
In August 2018, KrebsOnSecurity reported that Kristian Erik Hermansen, a security consultant, had found a vulnerability in the Fiserv banking platform. The vulnerability enabled a user to see transaction-related data of other users, including their e-mail address, phone number, and full bank account number. The data could be obtained simply by incrementing a parameter called “event number”.
According to the report, Fiserv developed a patch within 24 hours of being notified and deployed it shortly after.
This response time shows that Fiserv knows how to handle a security issue and maintain trust. It also shows that even something as simple as serving transaction data by id through a web service becomes a complicated task once we factor in security.
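Both the Panera Bread and Fiserv flaws follow the same pattern, often called an insecure direct object reference: the server trusts the id in the request without checking who is asking. Here is a minimal sketch of the missing ownership check; the record store, user names, and exception type are invented for illustration:

```python
# Toy in-memory "database" of sensitive records, keyed by a guessable integer id.
RECORDS = {
    101: {"owner": "alice", "card_last4": "4242"},
    102: {"owner": "bob", "card_last4": "1111"},
}

class Forbidden(Exception):
    """Raised when a caller requests a record that is not theirs."""

def get_record(record_id, requesting_user):
    # Never trust the id alone: verify the record belongs to the requester.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != requesting_user:
        # Same response for "missing" and "not yours", so nothing leaks
        # about which ids exist.
        raise Forbidden("record not available")
    return record
```

The check is trivial, which is exactly the point: both breaches above happened because an equivalent line was missing, not because it was hard to write.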
In September 2018, a security update was published by Guy Rosen, VP of Product Management at Facebook. It described a vulnerability that enabled a user to generate access tokens for other users, gaining access to their accounts. According to the update, the vulnerability was caused by multiple bugs acting together: a video upload tool showing up on the wrong page, and the “View As” button on that page, which normally lets users see their profile as another user would, generating access tokens that granted permission over other users’ accounts.
This vulnerability is important because it shows that multiple bugs, each relatively harmless on its own, can act together to create a security vulnerability. It also shows that not only vulnerabilities, but known bugs that may cause vulnerabilities in the future, should be a priority if we value security.
According to the report, another vulnerability related to API access control, which enabled any logged-in user to display other users' personal information simply by sending queries to the API, was not fixed until more than a year after its discovery.
And there are many other significant ways an application can be attacked that are not mentioned here, such as cross-site request forgery, cross-site scripting, and remote code execution attacks, as well as attacks made possible by ignoring security best practices such as hashing and salting passwords. Each of these remains very common despite existing, well-known solutions.
Most application vulnerabilities are avoidable with good architectural design around authentication and authorization, the use of known security best practices, and designing to mitigate well-known attack vectors. The remaining ones, like a complex bug resulting from many small bugs interacting, as in Facebook's case, or heisenbugs that disappear when you are looking their way, pose a challenge to detect and fix.
Note: I know that although the topic I am writing on is application layer security, I didn’t mention injection attacks. Yes, I am aware that they still exist. And they are not going away. In fact, they are extremely popular, and getting worse in terms of the number of reports published. This year, I suggest that if your code gets breached through an injection attack, you immediately take a vacation. Don’t consider the current state of the project, or the deadlines. Take a vacation, go to a preferably mountainous land (alone), and reflect deeply into yourself, surviving only on what nature provides. When you return, you will return a new person. The old you will be dead. Or you may consider validating, sanitizing, and escaping your input, and using an ORM solution.
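To make that last suggestion concrete, here is a minimal sketch using Python's built-in sqlite3 module, contrasting the classic mistake with a parameterized query. The table and its contents are invented for illustration:

```python
import sqlite3

# A throwaway in-memory database with one made-up table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # DON'T: attacker-controlled input becomes part of the SQL text itself.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # DO: the driver passes the value separately from the query;
    # the input can never change the query's structure.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

With the unsafe version, the input `' OR '1'='1` returns every row in the table; with the parameterized version, the same input simply matches nothing.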
Our frameworks, libraries and third-party service providers are often not our strongest points.
The Equifax breach was a significant data breach that occurred in 2017, in which the sensitive personal information of about 148 million people was compromised. According to the report, the breach had two significant aspects:
Apache Struts 2 versions 2.3.x before 2.3.32 and 2.5.x before 2.5.10.1 had a vulnerability in their file upload feature that enabled an attacker to execute arbitrary shell commands on the server. The vulnerability was disclosed and documented as CVE-2017-5638 in March 2017, yet the Equifax data breach was announced as late as September 2017. The breach went undetected because the device used to monitor network traffic had been inactive for over 10 months due to an expired security certificate.
CVE-2017-5638 was caused by a header value (Content-Type) being evaluated as an Apache OGNL expression, OGNL being a full-fledged language in itself. Attackers used the flexibility of this expression language to craft and inject an expression that would execute any shell script they provided.
After the breach, the situation worsened rapidly, leading to resignations in management, blame put on employees, and overall non-niceness.
Over the following year, three other remote code execution vulnerabilities surfaced, namely CVE-2017-9791, CVE-2017-9805, and CVE-2018-11776, the latest of which was also tied to OGNL expressions.
According to a Fortune article published in May 2018, Sonatype, the company that maintains the Central Repository, detected that as many as 10,801 organizations, including 57% of the Fortune Global 100, were still downloading old and vulnerable versions of Apache Struts.
Magecart is a substantially nasty hacker group, or rather a collection of smaller, independent groups, each intending to steal your credit card information through an online shopping platform or website you use, and sell it online. And they are very successful at what they do.
They mainly operate through scripts injected into e-commerce websites, either directly or through third-party services those websites use. The scripts they inject are small and effective: they grab sensitive information directly from the frontend code running in your browser and send it to a server they have set up, a very nice server with SSL certificates and all.
The British Airways breach, with a reported 380,000 victims, was an example of their methods, as was the Ticketmaster breach. After these, they seemingly decided to attack about a thousand other platforms. And as they realized they were unstoppable, they started picking fights with each other, modifying each other’s scripts to feed the other group fake credit card numbers.
Today, they are still active, and the industry is struggling to find a solution for Magecart breach detection.
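One partial mitigation on the browser side is Subresource Integrity (SRI): the page pins a cryptographic hash of the third-party script, and the browser refuses to run the script if it no longer matches, for example because someone injected a skimmer into it. Here is a sketch of computing the integrity value in Python; the script URL and content are made up, and note that SRI only helps for scripts whose content you can pin in advance:

```python
import base64
import hashlib

def sri_hash(script_bytes):
    """Compute a sha384 integrity value for a <script integrity="..."> attribute."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Hypothetical usage: pin the vendor script you reviewed.
vendor_js = b"/* the exact vendor code you audited */"
tag = (
    f'<script src="https://cdn.example.com/pay.js" '
    f'integrity="{sri_hash(vendor_js)}" crossorigin="anonymous"></script>'
)
```

If the CDN copy of `pay.js` changes by even one byte, its hash no longer matches and the browser drops it instead of running it.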
Malicious Third-Party Libraries
Our third-party dependencies can really bite us. In past years, David Gilbertson and Jordan Scales published two hilarious and enlightening articles about how dangerous our dependencies can be. David Gilbertson states that “We live in an age where people install npm packages like they’re popping pain killers.” That is, in fact, true; and not necessarily without consequences.
The bottom line is, any third-party dependency we bring in without putting in the effort to investigate it (e.g. “I just needed a loading bar, and searched for it on npm”) can pull in even more dependencies whose code we will never read, or even know about, and each of those can bring in yet more dependencies. And all of that mess executes in the same scope as the credit card payment dialog.
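A minimal sketch of one defensive habit: audit what is actually installed against an explicitly reviewed pin list. The package names and versions below are invented; a real setup would feed in the installed set from your lockfile or package manager instead of a hard-coded dict:

```python
def audit(installed, reviewed):
    """Return (name, version, problem) for every package that was never
    reviewed or has drifted from the version that was reviewed."""
    problems = []
    for name, version in sorted(installed.items()):
        if name not in reviewed:
            problems.append((name, version, "never reviewed"))
        elif reviewed[name] != version:
            problems.append((name, version, f"reviewed at {reviewed[name]}"))
    return problems

# Hypothetical example data.
installed = {"loading-bar": "2.0.1", "left-pad": "1.3.0"}
reviewed = {"loading-bar": "1.9.0"}
```

Running `audit(installed, reviewed)` on the example flags `left-pad` as never reviewed and `loading-bar` as having drifted past the version someone actually looked at.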
Sometimes the application being deployed does not have a secure default configuration, and the person deploying it is actually expected to enable its security features. But this usually does not happen, because who reads all the documentation anyway, except that weird person in the corner who always takes longer to finish a task because they examine every detail of the configuration.
In 2018, Onapsis published a report describing a vulnerability in the default configuration of NetWeaver, a key SAP component that provides a platform for multiple SAP deployments to operate and communicate.
It turns out that you have to configure it for access control by defining an ACL; otherwise, side effects can include anyone on the network gaining access to the main message server that connects the SAP servers and, using that, taking control of the whole SAP platform, which is great if you are fond of viewing or modifying arbitrary trade secrets in an unauthorized manner.
The most interesting aspect is that, according to the report, this configuration requirement had been well documented since 2005, and was still found unapplied in many systems.
MongoDB is a brilliant document-based NoSQL database that was often misused in the belief that it is a silver bullet, especially in the early stages of its popularity. Before version 2.6.0, when deployed without configuration, it listened on 0.0.0.0, that is, on every network interface. And it was deployed on machines all over the world without configuring it or the server, which means without an authentication secret, without firewalls, and with the default server port.
I just deployed our database!
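For contrast, here is a hedged sketch of what an explicitly configured deployment looks like, as a `mongod.conf` fragment in the YAML format modern MongoDB versions use. Treat it as an illustration of the idea, not a complete hardening guide; check the official documentation for your version:

```yaml
# Minimal sketch: state your choices explicitly instead of trusting defaults.
net:
  bindIp: 127.0.0.1        # listen only on localhost, not on every interface
  port: 27017
security:
  authorization: enabled   # require authentication for all clients
```

Two lines of configuration separate "our internal database" from "a public database with a friendly query API".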
Operating system vulnerabilities are usually kept at bay through system updates, network isolation, and prayer. They are potentially very dangerous, hard to find and understand, and require more effort to fix, because of the lower, more fundamental layer they reside in. I will discuss two recent and significant vulnerabilities to show what they look like and how devastating they can be. As most of us are not seasoned system developers, we are mostly interested only in OS and driver versions and security updates, not the exact details of the vulnerability or the operating system.
In 2016, a group called the Shadow Brokers dumped a large number of exploits, which they claimed were NSA hacking tools, onto the unsuspecting internet. The exploits had fun names that seemingly had nothing to do with the actual vulnerability, or with anything at all (EPICBANANA is my favorite). Back to the subject: one exploit (ETERNALBLUE/CVE-2017-0144) used a vulnerability in the Microsoft Windows Server Message Block (SMB) protocol to execute arbitrary code on the target machine. This vulnerability would later enable WannaCry to be developed and released into the wild. After it was unleashed, healthcare systems, government systems, and companies were compromised, ransoms were demanded, and virtual battles were fought all over the world.
Windows TrueType Font Vulnerabilities and Duqu
It turns out that the Windows TrueType font rendering system contains a virtual machine, and this virtual machine runs in kernel space, because why not. This situation caused an array of vulnerabilities that were exploited by an array of unpleasant viruses, including Duqu, a successor of Stuxnet. For more details, please refer to here and here. These vulnerabilities, and other connected vulnerabilities that surfaced later, were patched shortly after their discovery. But it is hard to say what exactly happened in the meantime.
That's it for the examples. I tried to compile some of the more interesting vulnerabilities and breaches, but there is much, much more one can read and learn about, much of which I do not know about either, and which could not possibly be covered in one article.
So let's get to the part where we actually do something about all this. Please keep these vulnerability examples in mind while reading through the solutions.
When we are building a product meant for production, and it is a critical piece of software, we usually ask ourselves the same questions every day. Did I miss something? Did I leave an opening? What can I do to be more confident in my code? Or is that impossible?
As demonstrated above, there are many creative and unexpected ways in which our software can be compromised. This might seem depressing, as if we are fighting a losing battle from the start. A single mistake or act of negligence can easily leave us exposed.
But all is not lost. There is plenty we can do to build safer systems and keep them that way. As these examples show, many systems are vulnerable because they do not apply the lessons the software industry has already learned.
Here is a list of security measures you can employ for your systems and your code to protect them against known security vulnerabilities. As you read, you will find that many of them are directly related to the exploits we discussed above.
1. Keep your systems up to date.
After a vulnerability is documented, operating systems, frameworks, and libraries eventually get fixed. It is mostly the systems left behind, the ones still not updated weeks or months after the vulnerability is documented and a patch is released, that fall prey. These are easy targets for anyone aware of the vulnerability; by that point, a proof-of-concept exploit is probably already around for anyone to read, adapt, and use against you. Keeping your systems up to date is a fairly cheap way to protect yourself, and it’s worth the effort.
2. Reduce the attack surface.
Many easy targets in your system can be eliminated by reducing exposure to the minimum the application requires. Reduce the attack surface by disabling any features you don’t use, closing endpoints that don’t need to be exposed, removing unnecessary plugins and extensions, and reducing permissions according to the principle of least privilege.
3. Mind your network design.
Leaving parts of your network unnecessarily open to the outside world, or using network devices with outdated or vulnerable software, can put your systems at risk. Networks used in production environments should be restricted to limit exposure, so that only necessary endpoints are accessible. Bastion hosts and IP whitelists can be put in place to restrict outside administrative access. Your application can also be distributed across multiple isolated networks, according to the applications and containers running in the environment, their endpoints, and their interactions. A monitoring solution should be deployed to keep track of the network and raise alarms if something goes wrong. The whole system should be designed to be as clean, simple, and automated as possible, so that it is easy to take action in an emergency. Penetration testing can also be employed.
4. Monitor your system.
Deploy monitoring solutions, and configure them to generate effective and actionable alarms for any suspicious activity. Events to be monitored may include database query rate spikes, data flow limit overflows, server malfunction or shutdown, application error rate spikes, and suspicious network activities.
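As a toy illustration of the "rate spike" idea, here is a hedged sketch of a sliding-window alarm. The thresholds, the clock injection, and how you would wire this into a real monitoring stack are all simplifications for illustration:

```python
import time
from collections import deque

class RateAlarm:
    """Fire when more than max_events happen within window_seconds."""

    def __init__(self, max_events, window_seconds, clock=time.monotonic):
        self.max_events = max_events
        self.window = window_seconds
        self.clock = clock            # injectable for testing
        self.events = deque()

    def record(self):
        """Record one event; return True if the rate threshold is exceeded."""
        now = self.clock()
        self.events.append(now)
        # Drop events that have fallen out of the window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_events
```

In practice the `True` branch would page someone or trigger an automated response; the point is that the spike is noticed at all, unlike Equifax's ten months of silence.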
5. Learn the configuration in detail.
The tool, server, or database you are using might not have a secure default configuration. This is a lesson often learned the hard way. Maybe it has a distinct development configuration that lets you evaluate it locally for a while, or maybe it doesn't; maybe it comes configured as a kind of development server that listens for outside connections on your public IP and does crazy things when asked. You cannot know until you read and understand the configuration, and explicitly define your choices in the configuration file.
6. Automate if possible.
Humans make mistakes, so automate. Automate your build, test, deployment and configuration process. Automate everything you feasibly can. All hail the robots. Robots don't skip steps.
7. Keep an eye on your service providers.
Third-party services we use, such as CMS, storage, SaaS and cloud providers, DNS providers, network service providers, and other software services, can also be openings for attack, since a successful attack on them can render the security measures around your own systems useless. These services should only be adopted after evaluating them from a security standpoint. Monitoring solutions can also be deployed to detect suspicious changes in the services they provide.
8. Hand-pick and validate your dependencies.
A library or component you use has the potential to expose your entire application. Therefore it is important to know which code you use, how and where it is released, the entity or organization that releases it, its release cycle, and its visibility in terms of vulnerabilities. Keeping the list of dependencies short and the sources reputable makes security problems related to external dependencies less likely.
9. Take advantage of best practices.
Repeating well-known mistakes with an extremely confident attitude is a popular pastime among developers. On the other hand, there are much better alternatives! Such as:
Note: Yes, I wrote “Disabling that default error page.” twice.
10. Keep your code clean, and maintain effort for good architecture.
In clean, well-maintained code with good architecture, mistakes and vulnerabilities are much easier to notice and fix. It is also much easier to upgrade to a safer version of the tools and frameworks we use. If your code is full of workarounds and hacks, looking like this to an outside developer (it’s an 8-bit computer made of breadboards, jumpers, and logic chips. Isn’t that cool), you will have a hard time even finding the source of a bug or vulnerability you are already aware of, let alone fixing it on short notice, or detecting an unknown one during code review. Additionally, good architecture is designed with security aspects such as access control and validation in mind, so there is a good chance a vulnerability won’t find its way into your code at all during development.
11. Write application code with security in mind.
Writing code to be secure from the beginning is the best way to approach application security. On the other hand, defining all-encompassing rules for writing secure code is hard; but we should have some rules nonetheless. Below are some rules compiled by analyzing popular vulnerability types, so those types can be mitigated from the beginning when writing new applications.
Although there can be exceptions to the following rules, they have proven themselves fundamental. As always, take them with a grain of salt; but consider that not following them has caused many exploits in the past.
Always validate, clean and filter.
Outside input should always be validated against rules that explicitly define what the input may contain. The length of the input, which characters or binary symbols it may contain, the pattern of the data, and restrictions on which values a field can take are some of the things that can be validated. The structure of the input and its fields should also be validated if it is in document form (e.g. JSON, XML, YAML). Input data should be validated as early as possible, before executing any logic that would assume it is already valid. Many libraries exist for this task in many languages, such as commons-validator for Java and voluptuous for Python. If you are developing a web API, you can also consider using an API standard such as OpenAPI, which is supported by many libraries and platforms used for web API development.
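Here is a minimal, dependency-free sketch of that idea; the field names, pattern, and limits are invented for illustration, and a real application would likely use one of the libraries mentioned above instead:

```python
import re

# Explicit whitelist pattern: lowercase start, then 2-31 more word characters.
USERNAME_RE = re.compile(r"[a-z][a-z0-9_]{2,31}")

def validate_signup(payload):
    """Validate structure and fields of a (made-up) signup request.
    Raises ValueError on anything outside the explicit rules."""
    if not isinstance(payload, dict):
        raise ValueError("payload must be an object")
    if set(payload) - {"username", "age"}:
        raise ValueError("unexpected fields")          # reject unknown keys
    username = payload.get("username")
    if not isinstance(username, str) or not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    age = payload.get("age")
    if not isinstance(age, int) or not 13 <= age <= 130:
        raise ValueError("invalid age")
    return {"username": username, "age": age}          # only validated data escapes
```

Note that the rules say what the input *may* contain, not what it may not; blacklists are how surprising characters slip through.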
Don’t use evaluators. It may be tempting to take the query as an input parameter and pass it straight to the database, instead of parsing and validating each query parameter and passing them separately to a carefully implemented repository abstraction. Or it might be tempting to pick which method will be called at runtime by evaluating the input with a scripting engine. Please don’t do these things. Always strive to write explicit, unsophisticated, understandable code that can serve no purpose other than what it is designed for.
The self-operating napkin.
Don’t use scanners. If you think your code should search for what method to call at runtime using a code scanner (e.g. reflection in Java) based on data contained in an input string, I would urge you to reconsider. Wouldn’t a simple switch statement do? If you do not actually need them, leave scanners off the list. Even if you do seem to need them, there might be other options.
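For example, a plain dispatch table restricts the reachable code to an explicit whitelist, so unknown input cannot select arbitrary methods. The command names and handlers here are invented:

```python
def start_job(args):
    return f"started {args}"

def stop_job(args):
    return f"stopped {args}"

# Explicit whitelist: only these two functions are reachable from input.
HANDLERS = {
    "start": start_job,
    "stop": stop_job,
}

def dispatch(command, args):
    """Look the command up in a fixed table; anything else is rejected."""
    handler = HANDLERS.get(command)
    if handler is None:
        raise ValueError(f"unknown command: {command!r}")
    return handler(args)
```

Compare this with `getattr(obj, user_input)()`-style reflection, where every attribute of the object, intended or not, becomes callable from the outside.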
Pay attention to error cases. Think about how errors and exceptions may be thrown in your code, in which combinations, and what the resulting flows are. Try to keep exception handling as simple, consistent, and regular as possible throughout your code. Poking around for error messages is a method attackers use to learn about the inner workings of a program, and exceptions and errors themselves can also be used in exploits.
Mind what you are returning in a response. Make sure a response doesn’t give the outside world any information about the internal state or operation of the software. This also applies to error messages. Restrict the returned data to only what the client needs to continue the process.
Add detailed, meaningful logging. If something goes wrong, it will be just you and your logs. Design your logs so that you can trace a story, transaction, or process from beginning to end. Add too much detail, and your log database will drown in it. Add too little, and you won’t even know what happened, or whether it happened at all. Not fun.
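One hedged sketch of making a story traceable end to end: attach a per-request correlation id to every record. The logger name, format, and handler function are invented for illustration:

```python
import logging
import uuid

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s trace=%(trace)s %(levelname)s %(message)s",
)
logger = logging.getLogger("orders")

def handle_request(payload):
    trace_id = uuid.uuid4().hex          # one id per request/transaction
    extra = {"trace": trace_id}
    logger.info("request received", extra=extra)
    try:
        # ... business logic would run here ...
        logger.info("request completed", extra=extra)
    except Exception:
        # Log the full story internally; return only a generic error outside.
        logger.exception("request failed", extra=extra)
        raise
    return trace_id
```

Grepping the logs for one `trace=` value then reconstructs everything that happened to that request, which is exactly what you want at 3 a.m. during an incident.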
Keep it low-key. Make sure your application doesn't reveal which technologies it uses, which versions, the current error description, or anything else. Make sure it either does its work or stays completely silent.
12. Have a rescue plan.
How long would it take to restore everything from code repositories and cold backups, from scratch, if all of your systems were compromised? What if only part of them were breached? What would your detailed plan of action be, from beginning to end? Laying out these plans early can be very helpful if an incident occurs, and at best can help you discover weaknesses in your system that you hadn't noticed before.
13. Keep an eye on the news.
Keeping an eye on security blogs and articles, and on vulnerability databases such as CVE and NIST's NVD, can help you become aware when your system or software has a known vulnerability, so you can patch or update before it is detected and exploited. Vulnerability monitoring solutions are readily available to automate this task.
14. Solve problems as you see them.
Postponing a fix for a security issue is extremely risky, and its costs can far outweigh any benefit. Yet it is a mistake often made. Once a security vulnerability is detected, it should be fixed immediately, once and for all, before adding new features. Postponing the fix and hoping no one will discover the vulnerability is one of the most common and consequential mistakes made in software security.
Software security vulnerabilities are real threats, and keeping a system secure is a hard task. But on the bright side, it is possible to secure a system in a way that forces an attacker to find an entirely new and unknown way in. We can do that by not repeating mistakes that have been made before.
This is possible by following security best practices at every layer of our system and, importantly, by allocating the necessary resources for maintaining software security in our projects and prioritizing it correctly.
After recognizing these threats, one might think that the next logical step, to be on the safe side, is to stop changing existing infrastructure, systems, and frameworks, and to stop developing new ones, just to avoid security risks.
I think, on the contrary, that new tools can be more secure, simple, and elegant than what came before, and that they should be developed. The important thing is that while we create them, we do not forget the lessons learned from the past, and we make our decisions in an informed, rational way.
Thank you for reading. I hope you enjoyed my article.