
DARPA Readies Program to Safeguard Mixed Reality Users from Cognitive Attacks

by The Sociable, October 23rd, 2023

Too Long; Didn't Read

DARPA is putting together the Intrinsic Cognitive Security (ICS) research program “to build tactical mixed reality systems that protect against cognitive attacks.” The Pentagon sees mixed reality (MR) becoming ubiquitous in future military missions, and bad actors could manipulate MR environments in nefarious ways. The protections researchers call for map to properties such as concealment, unobservability, and undetectability.


DARPA is putting together the Intrinsic Cognitive Security (ICS) research program “to build tactical mixed reality systems that protect against cognitive attacks.”


As the Pentagon sees mixed reality (MR) becoming ubiquitous in future military missions, the US Defense Advanced Research Projects Agency (DARPA) is looking to protect users against cognitive attacks in this hybrid environment.


“The ICS program seeks breakthrough innovation for computational science to build tactical mixed reality systems that protect against cognitive attacks”

DARPA, Intrinsic Cognitive Security (ICS) program, October 2023


According to the DARPA ICS program description, such attacks can include:


  • Information flooding to increase equipment latency and induce physical illness (a detection sketch follows this list).
  • Planting real-world objects to overwhelm displays.
  • Subverting a personal area network to sow confusion.
  • Injecting virtual data to distract personnel.
  • Using objects to overwhelm a user with confusing false alarms.
  • Assessing user status through an eye tracker.
  • And other potential attacks.
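

To make the first item in that list concrete, here is a minimal sketch of how an MR runtime might watch for information flooding; the FloodGuard class, the thresholds, and the telemetry fields are illustrative assumptions, not anything specified by DARPA:

```python
from collections import deque
import time

class FloodGuard:
    """Hypothetical monitor that flags information-flooding attacks by
    tracking frame latency and the rate of incoming virtual objects."""

    def __init__(self, max_latency_ms=20.0, max_objects_per_s=50, window_s=1.0):
        self.max_latency_ms = max_latency_ms    # assumed latency budget before users feel lag
        self.max_objects_per_s = max_objects_per_s
        self.window_s = window_s
        self.object_times = deque()             # timestamps of recently injected objects

    def record_object(self, now=None):
        now = now if now is not None else time.monotonic()
        self.object_times.append(now)
        # Drop events that fell out of the sliding window.
        while self.object_times and now - self.object_times[0] > self.window_s:
            self.object_times.popleft()

    def check(self, frame_latency_ms):
        """Return a list of alerts for the current frame."""
        alerts = []
        if frame_latency_ms > self.max_latency_ms:
            alerts.append(f"latency {frame_latency_ms:.1f} ms exceeds budget")
        rate = len(self.object_times) / self.window_s
        if rate > self.max_objects_per_s:
            alerts.append(f"object injection rate {rate:.0f}/s exceeds limit")
        return alerts

guard = FloodGuard()
for _ in range(120):          # simulate a burst of injected virtual objects
    guard.record_object()
print(guard.check(frame_latency_ms=35.0))   # both alerts fire
```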


Apart from military applications, DARPA’s new ICS program could provide us with a glimpse into the future of the commercial metaverse and how bad actors could manipulate mixed reality environments in nefarious ways.


Criminal groups, governments, and corporations could weaponize the metaverse by manipulating mixed reality systems in real time to see what you see, gauge what you are feeling, and plant deceptive information designed to provoke a desired reaction.


A research paper published in Procedia Computer Science in 2020 lists five types of threats to mixed reality environments:


  • Input Protection: Involves the challenges of ensuring the security and privacy of data gathered and fed into the MR platform.


    • Example: MR eyewear can capture sensitive information on the user’s desktop screen, such as emails and chat logs. The necessary protections map to the properties of concealment, unobservability and undetectability, and content awareness.


  • Data Protection: A large amount of data collected from the sensors is stored in a database or another form of data storage. The main threats to data collection are tampering, denial of service, and unauthorized access, among others.


    • Example: An adversary can tamper with MR targets to elicit a different response from the system or to outright deny service (a minimal tamper-detection sketch follows this list). Aside from security threats, linkability, detectability, and identifiability are among the privacy threats that result from continuous or persistent data collection.


  • Output Protection: After the data is processed, the application sends output to the MR device to be displayed. In MR, applications may have access to other applications’ outputs and can modify them, making those outputs unreliable.


    • Example: Output displays are vulnerable to physical inference threats and visual channel exploits such as shoulder-surfing attacks. These are the same threats user inputs face, especially when the input and output interfaces share the same medium or are integrated.


  • User Interaction Protection: MR draws on additional sensing and display interfaces to allow immersive interactions. A key challenge is how users can share MR experiences with guarantees of security and privacy for their information.


    • Example: An adversarial user can tamper with, spoof, or repudiate malicious actions during these interactions. As a consequence, genuine users may suffer denial of service, and their personal data may be compromised, leaked, and misused.


  • Device Protection: The MR interfaces are exposed to malicious observation and interpretation, which can lead to the discovery of input and output display information.


    • Example: Head-mounted displays (HMDs) render content through their lenses, which other people can see from the outside, leading to leakage through external observation. Visual capture devices, such as cameras, can then record and extract the information leaked from the HMDs.
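

Of the five categories above, data protection lends itself to a quick illustration. The sketch below makes stored sensor records tamper-evident with an HMAC, so the kind of tampering the paper describes becomes detectable; the key handling and record format are simplifying assumptions (a real device would keep the key in hardware-backed storage):

```python
import hashlib
import hmac
import json

SECRET_KEY = b"device-local-secret"  # assumption: provisioned per device, out of app reach

def seal_record(record: dict) -> dict:
    """Attach an HMAC-SHA256 tag so later tampering is detectable."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": record, "tag": tag}

def verify_record(sealed: dict) -> bool:
    """Recompute the tag over the stored payload and compare in constant time."""
    payload = json.dumps(sealed["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["tag"])

sealed = seal_record({"sensor": "eye_tracker", "gaze_x": 0.41, "gaze_y": 0.77})
assert verify_record(sealed)
sealed["payload"]["gaze_x"] = 0.99   # an adversary tampers with the stored value
print(verify_record(sealed))         # False: tampering detected
```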


A report by Kaspersky adds, “It is nearly impossible to anonymize VR and AR tracking data because individuals have unique patterns of movement. Using the behavioral and biological information collected in VR headsets, researchers have identified users with a very high degree of accuracy – presenting a real problem if VR systems are hacked.”
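

That re-identification risk is easy to see in miniature. The toy sketch below matches an “anonymized” telemetry sample to the nearest enrolled movement profile; the feature set and numbers are invented for illustration, and real studies use far richer behavioral features:

```python
import math

# Hypothetical per-user movement profiles:
# (mean head speed, gait cadence, mean gaze saccade length)
enrolled = {
    "user_a": (0.31, 1.8, 0.12),
    "user_b": (0.54, 2.1, 0.09),
    "user_c": (0.47, 1.6, 0.15),
}

def identify(sample, profiles):
    """Match a telemetry sample to the closest enrolled profile (nearest neighbor)."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return min(profiles, key=lambda name: dist(sample, profiles[name]))

# Telemetry from a supposedly anonymous session still points at user_b.
print(identify((0.52, 2.0, 0.10), enrolled))
```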


For DARPA’s ICS program, the core technical hypothesis is that “formal methods can be extended with cognitive guarantees and models to protect mixed reality users from cognitive attacks.”


As “cognitive models represent aspects of human perception, action, memory, and reasoning,” DARPA’s ICS program will “extend formal methods by explicitly creating and analyzing cognitive and other models as part of MR system development to protect the warfighter from adversary attacks.”
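

DARPA has not published how those cognitive models will be encoded, but the flavor of “formal methods with cognitive guarantees” can be sketched as exhaustive state-space checking against a human-facing bound, for example that no reachable display state presents more than a few simultaneous alerts. Everything in the snippet below, from the state space to the bound, is an illustrative assumption:

```python
from itertools import product

MAX_ALERTS = 3   # assumed cognitive guarantee: never show more than 3 alerts at once

# Toy MR display model: each subsystem independently contributes 0-2 alerts.
SUBSYSTEMS = {"navigation": range(3), "threat_overlay": range(3), "comms": range(3)}

def violates_guarantee(state):
    """A state violates the cognitive bound if total alerts exceed MAX_ALERTS."""
    return sum(state.values()) > MAX_ALERTS

# Exhaustively enumerate all states, in the spirit of explicit-state model checking.
violations = []
for counts in product(*SUBSYSTEMS.values()):
    state = dict(zip(SUBSYSTEMS, counts))
    if violates_guarantee(state):
        violations.append(state)

print(f"{len(violations)} of {3**3} states break the cognitive bound")
```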


“Mixed reality integrates virtual and real worlds in real-time and will be ubiquitous in future military missions, including missions involving dismounted soldiers”

DARPA, Intrinsic Cognitive Security (ICS) program, October 2023


For years, the US Department of Defense (DoD) has been funding research to equip its warfighters with technologies to augment their capabilities.


For example, both DARPA and the Defense Innovation Unit (DIU) have been looking to mixed reality and handheld devices to let operators interact with upwards of 250 autonomous vehicles through a military tactic known as swarming.


In 2018, DARPA launched its OFFensive Swarm-Enabled Tactics (OFFSET) program to leverage augmented and virtual reality, along with voice, gesture, and touch-based technologies, to enable users to interact with potentially hundreds of unmanned platforms simultaneously in real time.


And in March 2021, the DIU was looking for commercial solutions that would allow soldiers to operate multiple types of unmanned air and land vehicles using wearable and handheld controllers.


Now, the Pentagon is looking to protect users from cognitive attacks in those mixed reality environments.


Will solutions coming out of DARPA’s ICS program make their way into the commercial sector to protect private citizens in the metaverse?


Could the technology and tactics developed be used for future PSYOPs and influence campaigns in their own right?


The ICS program proposers day will be held on October 20, 2023, in Arlington, Virginia.



This article was originally published by Tim Hinchliffe on The Sociable.