
4 Main Problems with Application-Layer Detection Rules

by Adam Koblentz, January 11th, 2023

Too Long; Didn't Read

Rules as we know them today are a static solution to a dynamic problem, which makes them ineffective. Each application can be highly dynamic. The flood of false positives overwhelms security analysts, wastes their time, increases burnout, and leads to alert fatigue. We need to increase automation with unsupervised machine learning based on how people properly and improperly USE applications.

Enterprises are onboarding new applications, from new vendors, with new requirements at an increasing rate. Criminals are adapting their Tactics, Techniques, and Procedures (TTPs) daily. Rules as we know them today are a static solution to a dynamic problem, which makes them ineffective. Here are the four major reasons why.

1. Each Application Is Different

Detection rules have shown a high level of effectiveness for networks, devices, and user access, where they address a limited set of protocols and operating systems. However, the application layer is a different sort of animal; it is more complex and diverse. Because each application is carefully designed and written for a unique purpose, the vast array of its activities – the files, data, devices, websites, and other assets it accesses and manipulates, as well as its log formats – is far more varied. Rule-creators must become extremely familiar with all the business logic, log formats, usage requirements, etc. before they can write effective rules that will accurately detect breaches and misuse of any single application, let alone the hundreds or more that an organization might use in a given day. Each application can be highly dynamic. Version by version, API by API, cloud instance by cloud instance, changing applications continually expose a litany of new security challenges. Creating detection rules to adequately protect an application is always a game of cat and mouse.
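To make the brittleness concrete, here is a minimal sketch of a static detection rule tied to one application's log format. The log lines, field names, and threshold are entirely illustrative, not from any real product:

```python
import re

# Rule written against version 1 of a hypothetical app's audit log:
# "2023-01-11T09:15:00 user=alice action=export_report rows=50000"
RULE_V1 = re.compile(r"user=(\w+) action=export_report rows=(\d+)")
EXPORT_THRESHOLD = 10_000  # flag unusually large data exports

def check(line: str) -> bool:
    """Return True if the line should raise an alert under the v1 rule."""
    m = RULE_V1.search(line)
    return bool(m) and int(m.group(2)) > EXPORT_THRESHOLD

# The rule fires correctly on the v1 log format...
assert check("2023-01-11T09:15:00 user=alice action=export_report rows=50000")

# ...but a v2 release that renames 'rows' to 'row_count' silently breaks it.
# The same suspicious export no longer matches: a blind spot, not an error.
assert not check("2023-06-01T10:00:00 user=alice action=export_report row_count=50000")
```

A one-character change to a log schema turns a working rule into a silent gap, and nothing in the pipeline flags that the rule stopped matching.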

2. False Positives Galore

Because applications are so diverse and complex, detection rules can never cover every use case for every type of user. Any time an application is used in an unforeseen way, it might violate a rule and trigger an alert. Most of the time those alerts are false positives due to faulty detection logic. It is not possible to foresee every valid usage pattern of every application. Unfortunately, the flood of false positives overwhelms security analysts, wastes their time, increases burnout, and leads to alert fatigue. The false positives make application-layer rules a net negative.

3. Rules Are Never “Done”

Dynamic, complex application flows require careful attention to changing logic and usage patterns. Security teams and experts from the various lines of business need to collaborate on application logic to write good detection rules. This communication rarely happens and requires sufficient resources on both sides, which is also rare. As organizations continue on the path of digital transformation, and as attack surfaces expand due to migration to the cloud, extension of supply chains, and use of embedded third-party code, they cannot possibly scale up the human expertise to maintain a proper security posture for their business-critical applications.

4. Detecting the Unknown

Security teams and cyber criminals have different success criteria. Defenders must be perfect without impacting business operations. Attackers only need to find one hole to breach an application once; it is a numbers game. Security analysts write rules that can detect known attack patterns based on best practices and assumptions. But attackers are constantly hunting for unknowns – undiscovered security gaps that will allow them to use an application to gain access, steal data, and cause destruction.

What’s the Solution for Application-Layer Detection?

The four problems with rule-based application-layer detection are not going away. It’s the wrong paradigm, and it puts defenders at a disadvantage from the start. The investment of time and resources needed to combat the dynamism of the landscape makes rules-based detection unsustainable.

We must change the paradigm and introduce a new approach. We need to increase automation with unsupervised machine learning based on how people properly and improperly USE applications, also known as User Journey Analytics. By “user journey” we refer to the sequence of activities performed by a user across applications, be they SaaS, on-prem, custom-built, or constructed mainly from third-party components. Analysis of user journeys can accurately detect attackers leveraging valid credentials and insiders looking to misuse or abuse any application.

Accurate detection of abusive or malicious behavior via user journey analysis rests on the assumption that an abnormal session follows a journey unlike the user’s typical journeys in a given application. By learning what constitutes typical journeys and automatically building normative journey profiles, this type of machine-learning solution accurately detects abnormal journeys, which are highly correlated with malicious activity, and issues a true positive that warrants investigation.
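The idea above can be sketched in a few lines. This is a deliberately simplified illustration, not the actual product's algorithm: it models a journey as a sequence of in-app actions, builds a profile of a user's historical action-to-action transitions, and scores a new session by how many of its transitions were never seen before. All session data and the scoring heuristic are invented for the example:

```python
from collections import Counter

def transitions(session):
    """Consecutive action pairs within one session."""
    return list(zip(session, session[1:]))

def build_profile(sessions):
    """Count action-to-action transitions across a user's historical sessions."""
    profile = Counter()
    for session in sessions:
        profile.update(transitions(session))
    return profile

def anomaly_score(profile, session):
    """Fraction of a session's transitions never seen in the profile."""
    pairs = transitions(session)
    if not pairs:
        return 0.0
    unseen = sum(1 for pair in pairs if profile[pair] == 0)
    return unseen / len(pairs)

# Hypothetical historical sessions for one user.
history = [
    ["login", "view_report", "export_report", "logout"],
    ["login", "view_report", "logout"],
]
profile = build_profile(history)

# A journey matching past behavior scores low...
print(anomaly_score(profile, ["login", "view_report", "logout"]))          # 0.0
# ...while a journey that jumps straight to repeated bulk exports scores high.
print(anomaly_score(profile, ["login", "export_report", "export_report"]))  # 1.0
```

Note that nothing here is a hand-written rule: the profile is learned from observed behavior, so it adapts as the application and its users change, which is precisely what static rules cannot do.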