Hackers might not attack you. Bots will.
Don’t like reading? The content of this blog was presented at /dev/world 2017. Many of the points are still very relevant, but the data is a bit dated now.
Quote: “The content is topical and delivery is quick (not droning) and engaging”. I also play a prank on an audience member. It’s pretty epic.
When I talk to developers about security, the common theme I hear is:
My app is too small, no one is going to attack me.
This is because developers (and non-developers) assume that their attacker looks like a stock-image hooded figure with an unbranded laptop, surrounded by Matrix text.
Which is partially true: there are some excellent security professionals and cyber criminals who look great in a hoodie and who do attack applications and their associated servers. But for the rest of the lower-profile applications out there, the threat is more likely to be an automated bot.
There are three main attack vectors an attacker will focus on: the application binary (your app), the network the app communicates over, and the server holding your app’s data.
Let’s talk about reverse engineering. Now if you’re not familiar with this concept — let me just start with this:
Developers give away their application. Every user has the app and all of its logic. Within the application are the code and its associated values, and it’s pretty easy to get those back into a human-readable form. On Android, decompilation gets you back almost the exact code the developer wrote.
Hopper on the left (iOS). JD-GUI on the right (Android)
There are many developers out there storing secrets and keys in their applications and assuming they’re secret because the app is “compiled”. But as you can see above (and in the many tutorials online), they’re easily recoverable.
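To give a feel for how little effort this takes, here’s a minimal sketch in Python (the binary path and patterns are made up for illustration; in practice you’d reach for strings, Hopper, or JD-GUI) that pulls printable strings out of an app binary and flags anything that looks like a secret:

```python
import re

# Hypothetical path to an extracted app binary (e.g. pulled out of an .ipa or .apk)
BINARY_PATH = "Payload/MyApp.app/MyApp"

# Very rough patterns for "secret-looking" values
PATTERNS = [
    re.compile(rb"api[_-]?key", re.IGNORECASE),
    re.compile(rb"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(rb"https?://[\x21-\x7e]+"),
]

def printable_strings(data: bytes, min_len: int = 6):
    """Yield runs of printable ASCII, much like the `strings` utility."""
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group()

with open(BINARY_PATH, "rb") as f:
    blob = f.read()

for s in printable_strings(blob):
    if any(p.search(s) for p in PATTERNS):
        print(s.decode("ascii", errors="replace"))
```

Nothing clever is happening there, and that’s the point: every string you compile into your app comes out the other end just as easily.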
Reverse engineering is not a new threat, but there have been some pretty amazing advances with Big Data.
There are now databases of application code. Millions of apps’ worth of code.
So, through one of my clients here in Australia, I have the privilege of working with a company called Mi3Security (since acquired by Zimperium).
Special thanks to the team at Mi3Security for letting me publish some of their data publicly.
They have such an amazing database. The threat analysis they are capable of is incredible!
I asked my contact there really nicely if he could tell me how many apps within their database (which covers around 65–70% of all public apps) are currently not implementing App Transport Security correctly because they set NSAllowsArbitraryLoads.
> 100,000 apps are potentially susceptible to downgrade attacks.
The result was instant. I was shocked.
I mean.. I know how database lookups work, but holy crap, in front of me was a portal to source code for millions of apps.
So, what do I ask? At the time, the memory of Uber trying to fingerprint users’ information was pretty fresh in my mind. If you’re not familiar with it, you can view the story here.
So we checked how many other apps also used the line of code that got Uber into trouble:
Banks track more information than I realise…
Looks like Uber took the heat, but we know there are many other apps doing something similar.
This information was gathered quite quickly. How many agencies have a database like this? I can think of only a few companies / countries that might have something like this…
Then I had an epiphany… I can probably do regex matching on this database…
So, I thought it would be a good idea to try a basic “http only, send password” regex.
I’m not even checking TLS security. Zero encryption. HTTP only. Login/signup/signin URLs within apps.
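The query I sent wasn’t much more sophisticated than this rough sketch (the exact pattern differed a little, and the sample URLs below are made up):

```python
import re

# A rough approximation of the query: plain-HTTP URLs that look like
# login / signin / signup endpoints. No TLS involved at all.
LOGIN_OVER_HTTP = re.compile(
    r"""http://            # plain HTTP, not HTTPS
        [^\s"']+           # host and path
        (login|signin|sign-in|signup|sign-up|auth)
    """,
    re.IGNORECASE | re.VERBOSE,
)

samples = [
    "http://api.example-shop.com/v1/login",    # would match
    "https://api.example-shop.com/v1/login",   # HTTPS, no match
    "http://cdn.example.com/images/logo.png",  # no auth keyword, no match
]

for url in samples:
    print(url, "->", bool(LOGIN_OVER_HTTP.search(url)))
```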
I had seen a high-profile example before from Zscaler (research found here) where they found this exact vulnerability in one of India’s top shopping apps. But I figured it didn’t really happen anymore…
I had faith in the development community.
I sent the regex query to the Mi3Security team, and even suffixed it with this statement:
“It’s okay if that query finds zero apps”
What a stupid thing to say…
Over 30,000 apps do not implement HTTPS for a URL that looks like it involves a password. The worst part is:
You would never know if the app is using HTTPS or not
There’s no green lock and no certificate error, and no way to notice the absence of either; there’s just a login page inside an app. It’s impossible to know from an app’s GUI whether it is communicating over a properly configured TLS connection, a poorly configured one, or no TLS at all. Thanks to Mi3Security, we now know that at least 30,000 apps have zero login security.
Now, obviously, there may be false positives; my regex isn’t perfect (and if you have a more precise one, let me know, I’ll get them to run it and credit you).
What else can we check? I asked InfoSec twitter for feedback:
Thanks to those who suggested options. 140 characters meant my question lacked detail. 280 characters here we come!
Two of my favourite suggestions came from Don Lor and the amazing egyp7.
The classic. Static private keys within an app.
BEGIN RSA PRIVATE KEY == 193,329 apps. 🤦🏻‍♂️🤦🏻‍♀️
Now, this number seemed pretty high to me, so I’ll likely write a follow-up to understand why so many developers are storing private keys in their apps.
Don Lor had an excellent suggestion: look for AWS keys in mobile apps. It’s a pretty common mistake to commit them to GitHub, but I wonder how many developers assume a key is “secret” just because it lives in their app’s code.
The regex for an AWS secret access key is:
[^A-Za-z0-9/+=][A-Za-z0-9/+=]{40}[^A-Za-z0-9/+=]
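That pattern just looks for a 40-character run of base64-style characters that isn’t part of a longer run, which is the shape of an AWS secret access key. A quick sketch of running it over strings pulled from a decompiled app (the sample values below are made up):

```python
import re

# The AWS secret-access-key pattern from above: a 40-character base64-style
# run, bounded on both sides by characters outside that alphabet.
AWS_SECRET = re.compile(r"[^A-Za-z0-9/+=][A-Za-z0-9/+=]{40}[^A-Za-z0-9/+=]")

# Made-up lines standing in for strings extracted from a decompiled app
candidates = [
    'secretKey = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY";',  # fake 40-char key
    'baseUrl = "https://api.example.com/v2/";',                 # harmless string
]

for line in candidates:
    if AWS_SECRET.search(line):
        print("possible AWS secret key in:", line)
```

Expect plenty of false positives on real apps; any 40-character base64 blob will trip it, which is why a result like this is a starting point for a human, not proof on its own.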
There’s not a lot you can do to protect against reverse engineering. You’re giving away your app. However, avoid storing secrets within an app. Keep all secrets on your server.
If you ‘must’ store a secret, you can make it harder for an attacker by obfuscating it. This definitely doesn’t make recovery impossible, just more annoying for an attacker.
If you’re on iOS, you can use something like this: https://github.com/pjebs/Obfuscator-iOS
On Android, ProGuard should be used at a minimum. If you have a bit of money, DexGuard or another commercial obfuscator can be quite useful.
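To make the string-obfuscation idea concrete, here’s a language-agnostic sketch of the usual XOR-with-a-key approach (the secret and key are made up; in a real app the runtime half would live in your Swift or Kotlin code rather than Python):

```python
# Minimal sketch of string obfuscation: XOR the secret with a key at build
# time, embed only the obfuscated bytes, and reverse the XOR at runtime.
# This hides the value from a casual `strings` pass; it does NOT stop a
# determined reverse engineer.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

SECRET = b"example-api-key-12345"   # made-up secret for illustration
KEY = b"some-build-time-salt"       # made-up obfuscation key

obfuscated = xor_bytes(SECRET, KEY)
print("bytes to embed in the app:", obfuscated.hex())

# At runtime, the app reverses the XOR to recover the original value
assert xor_bytes(obfuscated, KEY) == SECRET
```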
When it comes to the network, there are two main risks for mobile applications:
• Developers have misconfigured HTTPS, or aren’t using HTTPS at all.
• Users are on free (and potentially hostile) Wi-Fi.
One of the biggest network security concerns with mobile apps is that you cannot validate the security of the HTTPS certificate while using the application.
Where’s the green lock? Where’s the cert button? Is it even using HTTPS? No one knows from this interface.
Developers need to do the right thing with their web servers. Here are some good resources to make sure your web server is configured correctly:
1. I’ve come across a website recently that checks which security headers your site sends back: https://securityheaders.com/. Strict-Transport-Security is the one we care most about for HTTPS security, but the others are good too. (My bank got a D grade based on its security headers. Something for me to look into the consequences of next week.) There’s a quick sketch of checking that header yourself just after this list.
2. To set up HTTPS, the amazing Troy Hunt has an excellent resource that every web dev should read.
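If you’d rather check from a script than a website, here’s a minimal sketch using the requests library (the URL is a placeholder, and this only looks at one header out of the set securityheaders.com grades):

```python
import requests

# Placeholder URL; point this at your own site or API endpoint
URL = "https://api.example.com/"

resp = requests.get(URL, timeout=10)

# Strict-Transport-Security tells clients to refuse plain HTTP for this
# host in the future, which blunts downgrade attacks
hsts = resp.headers.get("Strict-Transport-Security")

if hsts:
    print("HSTS is set:", hsts)
else:
    print("No Strict-Transport-Security header; downgrade attacks are easier")
```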
Once developers have set up proper TLS, they need to assume their users are on hostile Wi-Fi. A user can be tricked into joining a malicious network, and HTTPS is one of the main protections we have against a malicious actor on that network.
Now, in order for HTTPS to be considered secure for an app, two things need to be correct:
1. The server needs to be configured correctly (see the resources above).
2. The client / app needs to be configured correctly.
To achieve point 2, we can use App Transport Security (ATS) on iOS and the Network Security Configuration (NSC) file on Android. These move the burden of certificate verification from the app (where developers and libraries have made mistakes in the past) to the operating system, where we have some level of trust that Apple and Google will do the right thing.
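As an aside, ATS settings are just entries in the app’s Info.plist, so checking whether an app opts out is as simple as reading that file. A minimal sketch (the plist path is hypothetical) using Python’s built-in plistlib:

```python
import plistlib

# Hypothetical path to an iOS app's Info.plist, e.g. extracted from an .ipa
PLIST_PATH = "Payload/MyApp.app/Info.plist"

with open(PLIST_PATH, "rb") as f:
    info = plistlib.load(f)

ats = info.get("NSAppTransportSecurity", {})

# NSAllowsArbitraryLoads = true switches ATS off for every connection
if ats.get("NSAllowsArbitraryLoads"):
    print("ATS is disabled globally; plain HTTP is allowed")
elif ats.get("NSExceptionDomains"):
    print("ATS is on, with exceptions for:", ", ".join(ats["NSExceptionDomains"]))
else:
    print("ATS is enforced for all connections")
```

This is essentially the same check as the earlier “>100,000 apps” query, just done at a much larger scale.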
If you want to take an extra secure step, you can also implement Certificate Pinning, which is an excellent way to ensure your app is almost definitely talking to the right server. For more information on Certificate Pinning,
“Unique example”. I am such a bad person…
There are lots of attacks on web servers and web applications [citation needed]. There are books, blog posts, videos, and countless online resources discussing this. This blog post is more focused on automated attacks, and one of the biggest trends we’ve seen in this space is attackers finding simple vulnerabilities using two sources:
1. The Google Hacking Database, for very quick Google searches that find poorly hidden information. As an example, here’s a query for websites potentially leaking database dumps that contain an orders table:
intext:"Dumping data for table `orders`"
You can find more at the Google Hacking Database.
2. Shodan, a search engine for anything with an open port facing the internet.
Security researchers and malicious actors use Shodan to find unauthenticated webcams, industrial control system pages, and databases. Many databases have weak authentication, or none at all. Shodan researchers have published findings showing a staggering amount of data being leaked from MongoDB and HDFS databases.
MongoDB used to be leaking >600TB a few years ago.
For the full write-up, visit the links below.
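Querying Shodan yourself takes only a few lines. Here’s a minimal sketch using Shodan’s official Python library (you need your own API key; the key and query below are placeholders):

```python
import shodan

# Placeholder API key; you need your own Shodan account for this
api = shodan.Shodan("YOUR_API_KEY")

try:
    # Count hosts Shodan has indexed as running MongoDB
    result = api.count("product:MongoDB")
    print("MongoDB instances visible on Shodan:", result["total"])
except shodan.APIError as e:
    print("Shodan error:", e)
```

Swap the query for any product or port you like; the point is that finding exposed services is a one-liner, not an elite skill.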
These unauthenticated servers came under attack earlier this year. They were encrypted and held for ransom. Not because of some advanced apex attacker, but due to security not being enabled by default.
How to fix this? There are hardening guidelines for your web stack.
Let’s go back to my original point:
My app is too small, no one is going to attack me.
A bot doesn’t care that your app is too small. A search engine doesn’t care. If you are low-hanging fruit, you will be attacked. Web servers and email servers have been experiencing this for years. Mobile apps will be joining their ranks soon (and they already are).
What’s the takeaway? Mobile App Developers need to understand these three things:
1. A mobile app can be reverse engineered to recover any string, URL, or secret within it.
2. Network security needs to be configured correctly at the server AND the client to be effective.
3. Automating attacks on servers has never been easier; servers need to be hardened and patched regularly.
Lots of 👏🏻 and ❤️ to @sammy_lee12 on twitter, who attended my talk and sketched the whole thing on a single page. You rule in so many ways!
Seriously, Sam, you’re the best. First thing I did was get this laminated.
You made it to the end! Nice one! Press the clap as many times as you like, it gives me warm happy feelings ☺️. Think I’m cool enough to follow on twitter? @proxyblue is where you’ll find me. I post and retweet InfoSec stuff.
Still want to hear a bit more?
Here is my 3-minute lightning talk from the same conference. “How to talk like a developer”.
Yak Shaving, Bikeshedding and more!
Here’s a link to the original “How to succeed as a Red Shirt without even dying” talk:
Thanks for reading! Want to read a bit more from me?
10 things InfoSec professionals need to know about networking
https://hackernoon.com/10-things-infosec-professionals-need-to-know-about-networking-d159946efc93
Introducing the InfoSec colour wheel — blending developers with red and blue security teams.
https://hackernoon.com/introducing-the-infosec-colour-wheel-blending-developers-with-red-and-blue-security-teams-6437c1a07700
What devs need to know about Encoding / Encryption / Hashing / Salting / Stretching
https://hackernoon.com/what-devs-need-to-know-about-encoding-encryption-hashing-salting-stretching-76a3da32e0fd
You ‘do’ InfoSec right? What do you read? Who do you listen to?
https://louis.land/you-do-infosec-right-what-do-you-read-who-do-you-listen-to-e8d00b7d8ace