Cybersecurity threats aren’t going away anytime soon. That means application security (AppSec) must be a key part of every developer’s job – but fixing code during or after release to production isn’t always a breeze. Remediation gets even trickier when you have to pause your coding work and hunt for resources just to fix a single flaw.
Luckily, with secure coding best practices and training at your fingertips, writing more secure code isn’t a chore. In part one of this two-part series, we examined best practices around parameterizing your queries to avoid SQL injection and encoding your data. In this second part, we’re digging into five additional tips to help you code more securely, from leveraging existing frameworks to protecting data.
You can dramatically improve the protection and resiliency of your applications by building in authorization, or access controls, during the initial stages of development. Note that authorization is not the same as authentication. According to OWASP, authorization is the “process where requests to access a particular feature or resource should be granted or denied.” When appropriate, authorization should include multi-tenancy and horizontal (data-specific) access controls.
Consider checking whether the user has access to a specific feature in code, rather than checking which role the user holds. Below is an example of hard-coding role checks.
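A minimal Java sketch of the contrast follows; the `User`, `Permission`, and method names here are illustrative rather than taken from any particular framework:

```java
import java.util.Set;

public class DeleteUserAction {

    enum Permission { DELETE_USER, EDIT_USER }

    static class User {
        private final Set<Permission> permissions;
        User(Set<Permission> permissions) { this.permissions = permissions; }
        boolean hasPermission(Permission p) { return permissions.contains(p); }
        boolean isAdmin() { /* role lookup elided for brevity */ return false; }
    }

    // Anti-pattern: a hard-coded role check. If the "admin" role is ever
    // renamed or split, every check like this must be found and updated.
    static boolean canDeleteUserByRole(User user) {
        return user.isAdmin();
    }

    // Preferred: check the specific permission the feature requires, and keep
    // the role-to-permission mapping in one central place.
    static boolean canDeleteUser(User user) {
        return user.hasPermission(Permission.DELETE_USER);
    }
}
```

With the permission-based check, granting a new role access to the feature is a data change in one place, not a code change scattered across the application.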
You can waste a lot of time — and unintentionally create security flaws — by developing security controls from scratch for every web application you’re working on. To avoid that, take advantage of established security frameworks and, when necessary, respected third-party libraries that provide tested and proven security controls.
The crucial thing to keep in mind about vulnerable open source libraries is that it’s not enough to know when a library contains a flaw; you also need to know whether the library is used in a way that makes the flaw exploitable. We know that, upon initial scan, 70.5 percent of applications have flaws in an open source library. At the same time, data compiled from customer use of Veracode’s Software Composition Analysis solution shows that at least nine times out of 10, developers aren’t using a vulnerable library in a vulnerable way.
By understanding not just the status of the library but whether or not a vulnerable method is being called, organizations can pinpoint their risk and prioritize fixes based on the riskiest uses of libraries.
Organizations have a duty to protect sensitive data within applications. To that end, you must encrypt critical data while it’s at rest and in transit. This includes financial transactions, web data, browser data, and information residing in mobile apps. Guidelines like the EU General Data Protection Regulation make data protection a serious compliance issue.
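For data at rest, an authenticated mode such as AES-GCM protects both confidentiality and integrity. Below is a minimal Java sketch using the standard `javax.crypto` API; the `FieldEncryptor` class name and the IV-prepended output layout are illustrative choices, not a prescribed design, and real systems should manage keys in a vault or HSM rather than generating them inline:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;

public class FieldEncryptor {
    private static final SecureRandom RNG = new SecureRandom();
    private static final int IV_BYTES = 12;  // 96-bit nonce, recommended for GCM
    private static final int TAG_BITS = 128; // authentication tag length

    public static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            return kg.generateKey();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Encrypts with AES-GCM; the random IV is prepended to the ciphertext
    // so decrypt() can recover it. A fresh IV is required for every call.
    public static byte[] encrypt(SecretKey key, byte[] plaintext) {
        try {
            byte[] iv = new byte[IV_BYTES];
            RNG.nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
            byte[] ct = cipher.doFinal(plaintext);
            byte[] out = new byte[IV_BYTES + ct.length];
            System.arraycopy(iv, 0, out, 0, IV_BYTES);
            System.arraycopy(ct, 0, out, IV_BYTES, ct.length);
            return out;
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Decryption fails with an exception if the ciphertext was tampered with,
    // because GCM verifies the authentication tag.
    public static byte[] decrypt(SecretKey key, byte[] ivAndCiphertext) {
        try {
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(TAG_BITS, ivAndCiphertext, 0, IV_BYTES));
            return cipher.doFinal(ivAndCiphertext, IV_BYTES,
                ivAndCiphertext.length - IV_BYTES);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

For data in transit, the equivalent baseline is enforcing TLS on every connection rather than rolling your own transport encryption.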
The security of basic cryptographic elements largely depends on the underlying random number generator (RNG). An RNG that is suitable for cryptographic use is called a cryptographically secure pseudo-random number generator (CSPRNG). Don’t use Math.random: it generates values deterministically from a predictable seed, and its output is insecure for any security-sensitive purpose.
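In Java, the standard CSPRNG is `java.security.SecureRandom`. The sketch below generates a random session token with it; the `TokenGenerator` class name and the 128-bit token size are illustrative assumptions:

```java
import java.security.SecureRandom;
import java.util.Base64;

public class TokenGenerator {

    // SecureRandom is a CSPRNG seeded from the operating system's entropy
    // source, unlike Math.random(), whose future output can be predicted
    // from a handful of observed values.
    private static final SecureRandom RNG = new SecureRandom();

    // Returns a URL-safe token carrying 128 bits of entropy.
    public static String newToken() {
        byte[] bytes = new byte[16]; // 128 bits
        RNG.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```

The same rule applies to other languages: reach for the platform's CSPRNG (e.g., `secrets` in Python, `crypto.getRandomValues` in browser JavaScript) whenever the value guards a security decision.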
Logging should be used for more than just debugging and troubleshooting. Logging and tracking security events and metrics help to enable what’s known as attack-driven defense, which considers the scenarios for real-world attacks against your system. For example, if a server-side validation catches a change to a non-editable field, throw an alert or take some other action to protect your system. Focus on four key areas: application monitoring; business analytics and insight; activity auditing and compliance monitoring; and system intrusion detection and forensics.
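The non-editable-field scenario above can be sketched in Java; the class, the `security.events` logger name, and the account-ID check are hypothetical stand-ins for whatever your framework provides:

```java
import java.util.Objects;
import java.util.logging.Level;
import java.util.logging.Logger;

public class AccountUpdateValidator {

    private static final Logger SECURITY_LOG = Logger.getLogger("security.events");

    // The account ID is rendered as a read-only field, so a legitimate client
    // can never submit a different value. If one arrives anyway, treat it as a
    // probable tampering attempt rather than a user mistake.
    public static boolean validate(String storedAccountId,
                                   String submittedAccountId,
                                   String username) {
        if (!Objects.equals(storedAccountId, submittedAccountId)) {
            SECURITY_LOG.log(Level.WARNING,
                "Possible tampering: non-editable accountId changed by user {0}",
                username);
            return false; // reject; an alerting pipeline can consume this event
        }
        return true;
    }
}
```

The point is that the event is logged as a security signal with enough context (who, what changed) for monitoring and forensics, not merely swallowed as a validation failure.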
In mobile applications, developers often use logging functionality for debugging, which can leak sensitive information. These console logs are accessible not only through the Xcode IDE (on iOS) or Logcat (on Android) but, on some platform versions, by other applications installed on the same device. For this reason, disable logging functionality in production releases.
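One common pattern is to route all debug logging through a wrapper that is compiled out of release builds. The sketch below uses a hypothetical `DEBUG_BUILD` flag; on Android this role is played by `BuildConfig.DEBUG`, which the build system sets to false for release builds:

```java
public final class DebugLog {

    // Hypothetical build flag; on Android, use BuildConfig.DEBUG instead.
    static final boolean DEBUG_BUILD = false;

    private DebugLog() {}

    // Returns true only if the message was actually written, so release
    // builds provably emit nothing.
    public static boolean d(String tag, String message) {
        if (!DEBUG_BUILD) {
            return false;
        }
        System.out.println(tag + ": " + message); // Log.d(tag, message) on Android
        return true;
    }
}
```

Centralizing the check in one wrapper beats sprinkling `if (debug)` guards around individual log calls, where a single forgotten guard can leak a token into production logs.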
Error and exception handling isn’t exciting, but like input validation, it is a crucial element of defensive coding. Mistakes in error and exception handling can cause leakage of information to attackers, who can use it to better understand your platform or design. Even small mistakes in error handling have been found to cause catastrophic failures.
Returning a stack trace or other internal error details can tell an attacker too much about your environment. Returning different errors in different situations (for example, "invalid user" vs. "invalid password" on authentication errors) can also help attackers find their way in.
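A minimal Java sketch of the authentication case follows; the in-memory user map and plaintext comparison are simplifications for illustration (real code would compare salted password hashes), but the error-handling pattern is the point:

```java
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LoginService {

    private static final Logger LOG = Logger.getLogger(LoginService.class.getName());

    // Simplified in-memory user store for illustration only.
    private final Map<String, String> passwordsByUser;

    public LoginService(Map<String, String> passwordsByUser) {
        this.passwordsByUser = passwordsByUser;
    }

    // Returns the same generic message whether the username or the password
    // is wrong, so the response never confirms which accounts exist. The
    // specific reason is recorded server-side, never sent to the client.
    public String login(String username, String password) {
        String stored = passwordsByUser.get(username);
        if (stored == null || !stored.equals(password)) {
            LOG.log(Level.INFO, "Failed login attempt for user {0}", username);
            return "Invalid username or password.";
        }
        return "OK";
    }
}
```

The same principle applies to unexpected exceptions: log the stack trace server-side, and return only a generic error (with a correlation ID if you need to match reports to logs) to the client.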
Give your secure coding knowledge a boost
There isn’t a one-size-fits-all solution to writing more secure code. It takes patience, practice, consistent best practices, and the know-how that comes from relevant training in the languages you use most. Veracode Security Labs Community Edition – a complimentary version for developers who want to enhance their secure coding skills – relies on hands-on training for greater impact. Practice exploiting real applications in contained environments to learn how threat actors operate, and then patch them to learn how to keep your code secure.
Stay on top of the latest best practices in application security (AppSec) to improve every day. If you missed the first part of this series with more secure coding tips and tricks, read it here.