
Taking a Systematic Approach to Cyber Deception - Part 2

by Jym Cheong, January 18th, 2022

Too Long; Didn't Read

Part 1 introduced the 3 phases of a cyber-deception campaign and briefly highlighted 4 considerations for industrial networks: (1) Safety, (2) Availability, (3) Realism, and, depending on our strategic goals, (4) Secrecy, which we should be mindful of throughout planning & execution. The more advanced the actors are, the more skilled they are at testing whether they are in a maze, so we need to establish ways to quantify success. Achieving success requires a convincing story together with a well-designed maze that exploits attackers’ mental bias.


Recap

Part 1 introduced the 3 phases of a cyber-deception campaign. We briefly highlighted 4 considerations for industrial networks: (1) Safety, (2) Availability, (3) Realism, and, depending on our strategic goals, (4) Secrecy, which we should be mindful of throughout planning & execution.


We need to consider the Safety aspects of industrial networks carefully at Step 6 of Figure 1 (below), and the Availability of feedback channels is a critical aspect of the entire campaign.


This second part deals with Figure 1, Steps 2 & 3, to achieve Realism for the types of Threat Actors we wish to engage. But first things first...

How to Plan & Measure Success?

Benjamin Franklin said: “If you fail to plan, you are planning to fail!” Even with planning, things may not go well, so **we need to establish ways to quantify success.** From Figure 1, we see that the campaign ultimately leads to indications of Threat Actors in a state of (1) Believed, (2) Suspected or (3) Unbelieved, which we can gather & count to measure success or failure.
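As a sketch of that counting, assuming we log each state indication as a simple event record (the event format and `success_ratio` helper are illustrative, not from the article):

```python
from collections import Counter

# Hypothetical feedback events gathered from deception sensors; the
# "state" labels follow the Believed / Suspected / Unbelieved model.
events = [
    {"actor": "A1", "state": "Believed"},    # opened the fake diagram, moved on
    {"actor": "A1", "state": "Believed"},
    {"actor": "A2", "state": "Suspected"},   # uploaded the file, then stopped
    {"actor": "A3", "state": "Unbelieved"},  # abandoned the decoy outright
]

def success_ratio(events):
    """Fraction of observed state indications matching the lure goal (Believed)."""
    tally = Counter(e["state"] for e in events)
    total = sum(tally.values())
    return tally["Believed"] / total if total else 0.0

print(success_ratio(events))  # → 0.5
```

Tallying per actor instead of per event works the same way; the point is that each state indication is countable, so "success" becomes a number you can trend across the campaign.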


Figure 1


To lure (Figure 1 > Step 1 - strategic goal), it is appropriate to measure the occurrences of attackers showing indications of Believed. Let's say we expect the attacker to read a fake network diagram within a file (Step 2 - should react) & use the information to proceed to the next target (Step 2 - desirable reaction).


Instead, the attackers uploaded the file & stopped advancing, which could be a sign of Suspected, a case we may have forgotten to consider during planning. It is better then to improve quickly, e.g. embed tracing methods within the file & turn that type of event into a desirable reaction!
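One way to embed such a tracing method, sketched here under the assumption that the decoy is an HTML "diagram export" and that we control a listener at the hypothetical `decoy-telemetry.example.com`: a unique beacon URL per planted file makes any later open attributable.

```python
import uuid
from pathlib import Path

# Hypothetical listener we control; any fetch of this URL tells us the
# document was opened, and the token tells us which planted copy it was.
BEACON_HOST = "https://decoy-telemetry.example.com"

def write_traced_diagram(path: str) -> str:
    """Write a fake 'network diagram' HTML export carrying a unique beacon."""
    token = uuid.uuid4().hex  # unique per planted file, so hits are attributable
    html = f"""<html><body>
    <h1>OT Network Diagram (rev 7)</h1>
    <img src="{BEACON_HOST}/diagram/{token}.png" alt="topology">
    </body></html>"""
    Path(path).write_text(html)
    return token  # record token -> plant location in our campaign log

token = write_traced_diagram("network_diagram.html")
```

With this in place, even the "uploaded the file & stopped" behaviour becomes a desirable reaction: if the file is ever opened elsewhere, the beacon fires and we learn where it travelled.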


For early deterrence, it makes more sense to make attackers back off by creating the Unbelieved state as soon as possible; for diversion, to make them believe they hit the "right" target, clean up & exit the network.


Regardless of your strategic goals, at some juncture attackers may reach either the Suspected or Unbelieved state within their offensive OODA loop. A complete success is when attackers follow through the entire story thinking that they have achieved their objectives.


Achieving success requires a convincing story together with a well-designed maze that exploits attackers’ mental bias.


So what makes a “convincing” story? And convincing for whom? That leads us to the next point: how well do you know your adversaries?

Estimation of Threat-Actors

Instead of boring you with various academic taxonomies related to motivations & other marketable personifications, let me suggest an estimation in terms of their ability to adhere to Offensive Operations Security, or OPSEC in short. This can be inferred by monitoring the feedback channels, since we don’t have a Neuralink plugged into the adversaries’ heads.


OPSEC is a term derived from the US Military. From a defensive standpoint, it is a process used to deny a potential adversary or threat any critical intelligence that could jeopardize the confidentiality and/or operational security of a mission.


Threat Actors also observe OPSEC in their campaigns to ensure mission success by working under the radar. The more advanced the actors are, the more skilled they are at testing whether they are in a maze.


A table for comparison:

| Threat Actor “Type” | Estimation of OPSEC Level | Cognitive Level & Bias |
| --- | --- | --- |
| Novice | Low. Easily observable & “noisy”. | Tends to believe system responses & feedback. |
| Intermediate | Mid. Able to recon without being easily observed; still observable if the networks are well-defended with good detection engineering. | The in-betweens. Likely trained to assume they are being watched & have playbooks that observe OPSEC. |
| Advanced | High. Very skilled & well-resourced, capable of disarming sensors. | Able to see through situations quickly. A very careful bunch. |


All this depends on “observability”: how much visibility we have on the endpoints & networks, & how the signals are sent back to us. Poorly designed feedback channels that novice & intermediate actors can tamper with, disable & evade mean that poor estimations will follow.
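A minimal sketch of guarding that observability, assuming each deception sensor reports heartbeats out-of-band (sensor names and the threshold are illustrative): a sensor that goes silent is itself a signal, since tampering or disabling shows up as a missing heartbeat.

```python
import time

# Hypothetical last-heartbeat timestamps (epoch seconds) per deception sensor,
# as collected over an out-of-band channel the attacker cannot easily reach.
last_seen = {
    "honeypot-plc-01": time.time(),        # reporting normally
    "canary-fs-02": time.time() - 600,     # silent for ~10 minutes
}

def silent_sensors(last_seen, now, max_gap=300):
    """Sensors whose heartbeat gap exceeds max_gap seconds: possible tampering."""
    return [name for name, ts in last_seen.items() if now - ts > max_gap]

print(silent_sensors(last_seen, time.time()))  # → ['canary-fs-02']
```

The design choice here is that the feedback channel fails loud: an intermediate actor who disables a sensor does not silently degrade our estimation, but instead generates its own alert.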


We need to consider the actors’ profiles to design a story, planting information &/or system responses that FEED their Confirmation Bias.


Earlier on, I shared an example of attackers uploading a file & not advancing. They could be very careful, or just cybercriminals making a quick buck by selling info. If they come back again, it means they believed the fake network diagram.


Using a quote from Bruce Schneier (a famous cryptographer & security technologist):


Only amateurs attack machines; professionals target people


That leads us to the next question: how do they get in?