I've spent the last eighteen months talking to engineers at companies you'd recognize—household names in fintech, healthcare, logistics. Same pattern everywhere. They'll walk me through their authentication layers, their OAuth flows, their JWT rotation policies. Beautiful stuff. Then I ask: "So once I'm logged in as User 47, what stops me from just requesting User 48's data?"
Long pause. Someone mentions rate limiting. Someone else brings up their WAF.
Nobody mentions the actual check.
This is the BOLA problem, and it's eating the API economy from the inside out. Broken Object Level Authorization—the vulnerability that sounds boring in conference talks but costs companies their entire customer database on a Tuesday afternoon.
What Makes This Different From Everything Else
Here's the thing that makes BOLA so insidious: it happens after you've done everything right. Your login works. Your session management is textbook. Your tokens are properly signed. The attacker isn't injecting SQL or crafting malicious payloads. They're just... asking for things.
In March 2024, I watched a security researcher demonstrate this on a telehealth platform. Logged in as a test patient, they simply changed the patient ID in the API call from their own assigned number to another patient's. The system returned someone else's full medical history—prescriptions, diagnoses, provider notes. No hacking tools. Just curl and a basic understanding of how RESTful endpoints work.
The platform had passed multiple security audits. They had penetration testers. They had a dedicated AppSec team. What they didn't have was a single line of code checking whether the authenticated user actually owned the requested data.
OWASP has ranked this the number one API security risk in both editions of its API Security Top 10, 2019 and 2023. It's not flashy. It doesn't involve zero-days or sophisticated threat actors. It's just developers making a reasonable-sounding assumption that turns out to be catastrophically wrong.
The Uber Wake-Up Call Everyone Forgot
Back in 2019, researcher Anand Prakash found something unnerving in Uber's rider API. By cycling through user IDs—just incrementing numbers, nothing fancy—he could pull profile data for any account. Ride history. Home addresses. Drop-off patterns. Even the OAuth tokens that would let him fully hijack driver accounts.
Uber confirmed it. Fixed it quickly. Paid the bounty. And somehow the industry learned almost nothing.
Because two years later, in 2021, Peloton did essentially the same thing. Their API had no authentication at all initially—just wide open. After public pressure, they added login requirements. Problem solved, right?
Wrong. They'd fixed authentication but completely ignored authorization. Any logged-in Peloton user could view any other user's workout data, weight, gender, location. The researcher who found it said the company initially insisted this wasn't a "data breach" because no unauthorized third party got in. They missed the point entirely: every authorized user was unauthorized for everyone else's data.
I've seen this pattern dozens of times now. Teams that understand the difference between authentication (proving who you are) and authorization (proving what you're allowed to do) in theory, but somehow never connect it to the code they ship.
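The distinction is easy to state in code. Here's a minimal sketch (all names hypothetical, data in-memory) that keeps the two questions separate: authentication resolves a token to a user; authorization checks that user against the specific object they asked for.

```javascript
// Illustrative sketch only: sessions, documents, and getDocument are invented
// names, not a real API. The point is the two distinct checks.
const sessions = new Map([["token-alice", "alice"], ["token-bob", "bob"]]);
const documents = new Map([
  ["doc-1", { id: "doc-1", ownerId: "alice", body: "Alice's notes" }],
  ["doc-2", { id: "doc-2", ownerId: "bob", body: "Bob's notes" }],
]);

// Authentication: who is making this request?
function authenticate(token) {
  const userId = sessions.get(token);
  if (!userId) throw new Error("401: unknown token");
  return userId;
}

// Authorization: is this user allowed to read this specific object?
function getDocument(token, docId) {
  const userId = authenticate(token);
  const doc = documents.get(docId);
  if (!doc) throw new Error("404: not found");
  if (doc.ownerId !== userId) throw new Error("403: not your document");
  return doc;
}
```

Passing the first check does nothing to satisfy the second: `getDocument("token-alice", "doc-2")` throws a 403 even though Alice is fully, correctly logged in.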
Why Smart Engineers Keep Missing This
The honest answer? Because BOLA lives in that uncomfortable space between infrastructure and business logic.
Your SAST tools won't catch it. They're looking for syntax problems, injection vulnerabilities, cryptographic mistakes. A missing authorization check looks like perfectly valid code. The function calls work. The database queries execute. The JSON serializes correctly.
Your DAST tools won't catch it either. The API returns normal-looking responses. No error codes. No anomalous traffic patterns. From a packet-inspection perspective, everything looks fine.
I spoke with a senior engineer at a logistics company last fall who put it bluntly: "We had tests for authentication. We had tests for input validation. We didn't have tests for 'what if the user asks for someone else's shipment ID?' because we assumed the frontend wouldn't let them."
The frontend. That beautiful, user-facing application that attackers never actually use.
The real attackers are opening their terminal, firing up Burp Suite or just using curl, and calling your API directly. They're not bound by your UI assumptions. They're not following the happy path your product manager envisioned.
And if your API accepts an object ID as input without checking ownership, you've just handed them the keys.
The 2016 Bank Heist That Should Terrify Everyone
Russia's Central Bank learned this the expensive way. In 2016, attackers compromised its payment infrastructure—not through sophisticated malware or social engineering, but by modifying account IDs in API requests.
The mechanics were almost embarrassingly simple. Log in as a legitimate client. Initiate a transfer. Change the destination account parameter to a victim's account. Hit send.
The API authenticated the requester correctly. It just never verified that the requester was authorized to move money to that specific account. Before anyone noticed, 2 billion rubles had vanished.
I bring this up in conversations with fintech CTOs, and they always assure me their systems are different. More mature. Better architected. Then I ask to see their transaction APIs, and sure enough—there's usually at least one endpoint where object-level checks are missing or inconsistent.
It's not incompetence. It's organizational complexity. One team builds the payment rails. Another team adds a reporting endpoint. A third team creates an admin panel. Each assumes someone else handled the authorization logic. Or they handle it in one place but miss it in another.
The Mass Exfiltration Nightmare
Sequential IDs make this exponentially worse. An attacker doesn't need to guess—they can iterate.
In 2018, the U.S. Postal Service exposed an API that let authenticated users search mail tracking information. Reasonable feature, right? Except it had a wildcard search with no ownership validation. One logged-in account could query data for any other account.
Researchers estimated 60 million user profiles were exposed before the USPS shut it down. Names, addresses, email addresses, phone numbers, package details. Not from a sophisticated breach. From a search box.
I've tested commercial APIs where this pattern is still live. Log in with Account A, request /api/users/1/orders. Get Alice's orders. Try /api/users/2/orders. Get Bob's orders. Keep going. Write a simple script and you've scraped the entire user base by lunchtime.
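That scraping script is trivial to write. The sketch below is an attacker's-eye illustration with a stub standing in for the HTTP call (against a real vulnerable API, `fetchOrders` would be `fetch()` or curl); the endpoint shape and data are invented.

```javascript
// Illustrative only: backend and fetchOrders simulate a vulnerable
// /api/users/:id/orders endpoint that authenticates the caller but never
// checks whether the caller owns user :id.
const backend = {
  1: [{ order: "A-100", customer: "Alice" }],
  2: [{ order: "B-200", customer: "Bob" }],
  3: [{ order: "C-300", customer: "Carol" }],
};

function fetchOrders(userId) {
  return backend[userId] ?? null; // null models a 404
}

// The attack: log in once, then iterate the ID space.
function scrapeAll(maxId) {
  const leaked = [];
  for (let id = 1; id <= maxId; id++) {
    const orders = fetchOrders(id);
    if (orders) leaked.push(...orders);
  }
  return leaked;
}
```

With sequential IDs there's nothing to guess: `scrapeAll(1000)` walks the whole range, and every hit is another customer's data.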
The LinkedIn, Venmo, and Clubhouse incidents from 2021 and 2022 all followed variations of this pattern. Different attack surfaces, same root cause: APIs that assume authenticated means authorized.
What Actually Works (And What Doesn't)
I've seen teams try to solve this with randomized UUIDs. Switch from /users/123 to /users/550e8400-e29b-41d4-a716-446655440000 and call it fixed.
Doesn't work. The attacker can still harvest IDs—from search results, from shared links, from error messages, from any endpoint that leaks them. UUIDs buy you time, not security. Security through obscurity never survives contact with reality.
What does work is boring and obvious: check ownership on every single data access.
// Bad: returns whatever order the ID points to, regardless of who asks
async function getOrder(orderId) {
  return db.orders.findById(orderId);
}

// Good: ownership is part of the query itself; no match, no data
async function getOrder(orderId, currentUserId) {
  return db.orders.findOne({
    id: orderId,
    ownerId: currentUserId,
  });
}
That's it. That's the fix. A few extra lines of code that prevent a nine-figure breach.
But it requires discipline. Every endpoint. Every HTTP method. GET, POST, PUT, DELETE. Every GraphQL resolver. Every batch operation. Every time you touch user data, verify ownership.
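One way to make that discipline stick is to centralize the check rather than re-typing it in every handler. The sketch below uses the Express-style `(req, res, next)` middleware shape, but it's plain functions with hypothetical names (`loadOrder`, `requireOwnership`), not a prescribed framework pattern.

```javascript
// Illustrative sketch: ownership enforcement as reusable middleware.
// Data and names are invented; the (req, res, next) shape matches Express.
const orders = new Map([
  ["o-1", { id: "o-1", ownerId: "alice", total: 42 }],
  ["o-2", { id: "o-2", ownerId: "bob", total: 99 }],
]);

function loadOrder(id) {
  return orders.get(id) ?? null;
}

// requireOwnership(loader) returns middleware that rejects the request
// unless the authenticated user owns the object referenced by :id.
function requireOwnership(loader) {
  return (req, res, next) => {
    const obj = loader(req.params.id);
    if (!obj) return res.status(404).end();
    if (obj.ownerId !== req.user.id) return res.status(403).end();
    req.object = obj; // downstream handlers get an already-authorized object
    next();
  };
}
```

The payoff is that a route with a missing check becomes visible in review: any handler not wrapped by `requireOwnership` (or its equivalent) is a red flag, instead of a silent omission buried in query logic.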
The teams that get this right treat authorization checks like they treat input validation—non-negotiable, testable, reviewed in every code change. They write integration tests that deliberately try to break the rules: "Log in as Alice, request Bob's data, expect 403."
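That negative-case test is only a few lines. This sketch assumes a hypothetical `getOrder(userId, orderId)` handler that returns a status code plus optional body; the names and data are illustrative, not a real test framework.

```javascript
// Illustrative sketch of the BOLA negative test: log in as Alice, request
// Bob's order, expect 403. getOrder and the fixture data are invented.
const orders = {
  "o-1": { ownerId: "alice", item: "bike" },
  "o-2": { ownerId: "bob", item: "mat" },
};

function getOrder(userId, orderId) {
  const order = orders[orderId];
  if (!order) return { status: 404 };
  if (order.ownerId !== userId) return { status: 403 };
  return { status: 200, body: order };
}

// The pair of assertions a BOLA-aware suite always includes:
// the happy path succeeds AND the cross-user path fails loudly.
function runBolaTests() {
  const own = getOrder("alice", "o-1");
  const foreign = getOrder("alice", "o-2");
  return own.status === 200 && foreign.status === 403;
}
```

The second assertion is the one most suites are missing: proving that access *fails*, not just that it succeeds.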
They don't trust the client. They don't trust the frontend. They don't assume that because an ID is hard to guess, it's safe to return whatever it points to.
Testing Like You Mean It
I'll be direct: if you're not actively testing for BOLA, you have BOLA vulnerabilities. It's not a question of if.
Set up two test accounts. Log in as Account A. Capture an API request that fetches A's data. Change the object ID to reference Account B's data. Send the request. If you get back B's data instead of an error, you've found it.
Do this for every endpoint that accepts an object identifier. User IDs, order IDs, document IDs, session IDs, any UUID or integer that references something.
Use Burp Suite's Repeater to swap IDs manually. Use Intruder to automate it across a range. Write Postman scripts that iterate through ID lists and flag anything that returns 200 instead of 403 (or 404, if your API deliberately avoids confirming that objects exist).
Better yet, build this into CI/CD. Make it a requirement: no PR merges without authorization tests. Treat missing ownership checks the same way you'd treat SQL injection vulnerabilities—as showstoppers, not nice-to-haves.
OWASP's Web Security Testing Guide lays out authorization testing methodically, but I rarely see teams actually follow through. It's unglamorous work. It doesn't involve machine learning or blockchain or whatever's trending. It's just... checking things.
And it's the difference between a secure API and a data breach waiting for someone to notice.
The Stakes Keep Rising
GDPR fines for unauthorized data access now routinely hit eight figures. Class-action lawsuits from API breaches are becoming standard. Customers expect—rightfully—that companies will protect their data, and "we didn't think to check authorization" doesn't hold up in court.
The Australian telco Optus found this out in 2022 when an unauthenticated API endpoint with enumerable customer identifiers leaked data on 9.7 million customers. The breach wasn't sophisticated. A researcher later noted that even basic access controls would have prevented it. Optus faced federal regulatory action, customer lawsuits, and a mandatory security overhaul.
The financial cost is measurable. The reputational cost isn't. How do you quantify the value of customer trust? What's it worth when journalists start calling your platform "fundamentally insecure"?
Where We Go From Here
The frustrating part is that this is solvable. BOLA isn't like supply chain attacks or sophisticated nation-state threats. You don't need a billion-dollar security budget or bleeding-edge AI detection systems.
You need developers who understand that authenticated doesn't mean authorized. You need code reviews that check for missing ownership filters. You need tests that verify the negative case—that unauthorized access fails loudly.
And you need organizations that stop treating authorization as someone else's problem. Not the authentication team's problem. Not the framework's problem. Not something that frontend validation handles.
Every engineer who touches user data needs to ask: "Am I checking that this user is allowed to access this specific object?" Every time. No exceptions.
Because somewhere, right now, there's an API call in production that trusts the client to only request their own data. And somewhere else, there's someone with Burp Suite open, methodically changing IDs to see what comes back.
It's just a matter of time until they find each other.
