Microarchitectural Security of AWS Firecracker VMM: Threat Models

by Auto Encoder: How to Ignore the Signal Noise
June 13th, 2024
Too Long; Didn't Read

This research paper investigates how secure AWS's Firecracker VMM is against microarchitectural attacks.
Academic Research Paper

Part of HackerNoon's growing list of open-source research papers, promoting free access to academic material.

Authors:

(1) Zane Weissman, Worcester Polytechnic Institute, Worcester, MA, USA {zweissman@wpi.edu};

(2) Thomas Eisenbarth, University of Lübeck, Lübeck, S-H, Germany {thomas.eisenbarth@uni-luebeck.de};

(3) Thore Tiemann, University of Lübeck, Lübeck, S-H, Germany {t.tiemann@uni-luebeck.de};

(4) Berk Sunar, Worcester Polytechnic Institute, Worcester, MA, USA {sunar@wpi.edu}.

3. THREAT MODELS

We propose two threat models applicable to Firecracker-based serverless cloud systems:


(1) The user-to-user model (Figure 3): a malicious user runs arbitrary code sandboxed within a Firecracker VM and attempts to leak data, inject data, or otherwise gain information about or control over another user’s sandboxed application. In this model, we consider


(a) the time-sliced sharing of hardware, where the instances of the two users execute in turns on the CPU core, and


(b) physical co-location, where the two users’ code runs concurrently on hardware that is shared in one way or another (for example, two cores on the same CPU or two threads in the same core if SMT is enabled).


(2) The user-to-host model (Figure 4): a malicious user targets some component of the host system: the Firecracker VMM, KVM, or another part of the host system kernel. For this scenario, we only consider time-sliced sharing of hardware resources, because the host only executes code when the guest user’s VM exits, e.g., due to a page fault that has to be handled by the host kernel or VMM.
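In co-location experiments of the kind described in (1b), attacker and victim processes are typically pinned to sibling hardware threads of one physical core so that they share its microarchitectural state. The sketch below is not from the paper; it is a minimal, Linux-specific illustration of how such co-location could be arranged (the sysfs topology path and CPU numbers are assumptions about the target machine):

```python
import os


def smt_siblings(cpu: int = 0) -> set[int]:
    """Return the logical CPUs that share a physical core with `cpu`.

    Reads the Linux sysfs topology file; falls back to just {cpu}
    if the file is unavailable (e.g. in some containers).
    """
    path = f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list"
    try:
        with open(path) as f:
            text = f.read().strip()
    except OSError:
        return {cpu}
    cpus: set[int] = set()
    # The file holds a comma-separated list of CPUs or ranges, e.g. "0,4" or "0-1".
    for part in text.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cpus.update(range(lo, hi + 1))
        else:
            cpus.add(int(part))
    return cpus


def pin_to_cpu(cpu: int) -> None:
    """Restrict the calling process to one logical CPU (Linux only)."""
    os.sched_setaffinity(0, {cpu})


if __name__ == "__main__":
    sibs = sorted(smt_siblings(0))
    print("CPU 0 shares a physical core with logical CPUs:", sibs)
    # An attacker process would pin itself to one sibling and the
    # victim (or its VM's vCPU thread) would run on the other.
    pin_to_cpu(sibs[0])
    print("now pinned to:", sorted(os.sched_getaffinity(0)))
```

Pinning a Firecracker vCPU thread to a specific sibling works the same way, since each vCPU is an ordinary host thread from the scheduler's point of view; with SMT disabled, `smt_siblings` returns a single CPU and this co-location channel disappears.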


For both models, we assume that a malicious user is able to control the runtime environment of its application. In our models, malicious users do not possess guest kernel privileges. Both models therefore grant the attacker slightly fewer privileges than the model assumed by [1], where the guest kernel is chosen and configured by the VMM but assumed to be compromised at runtime. Rather, the attacker’s capabilities in our models match those granted to users in deployments of Firecracker in AWS Lambda and Fargate.


This paper is available on arXiv under a CC BY-NC-ND 4.0 license.

