Financial services increasingly rely on cloud-native workloads to process highly sensitive data — from credit scoring to fraud detection. Encryption at rest and in transit is already well-established, but data in use — plaintext sitting in memory during processing — remains visible to the host itself and susceptible to host-level or insider attacks.
Confidential Kubernetes combines Google Kubernetes Engine (GKE) with Confidential VMs that use hardware-based Trusted Execution Environments (TEEs). TEEs add two key protections: memory encryption, which keeps data safe while it’s being used, and workload attestation, which makes sure apps only run in trusted environments. This setup lets sensitive apps run securely, meeting strict rules in areas like finance, healthcare, and government.
1. The Runtime Data Security Gap
Most cloud programs secure data at rest (disk/database encryption), data in transit (TLS/mTLS), and access (IAM/RBAC). The blind spot is data in use—the moment plaintext lives in RAM while code executes.
What “data in use” really looks like
- App decrypts a token, API key, or customer PII in memory.
- ML models load sensitive features, weights, or decrypt per-request keys.
- Crypto libraries hold ephemeral keys in RAM during a transaction.
- In-memory caches (Redis, JVM heap, Golang heap) briefly contain raw values.
Why normal protections aren’t enough
- Encryption at rest only protects files and databases stored on disk. It does not protect the data once it’s loaded into memory.
- Encryption in transit only protects info while it’s moving on the network. Once the data loads into the app memory, that protection ends.
- Access controls (RBAC/IAM) decide who can call a service. But they don’t really guarantee the machine itself is safe or hasn’t been messed with.
Real risks when data is in memory
Even after you lock down disks, networks, and access, information in RAM can still leak. In this state, attackers can exploit:
- Crash dumps: If an app crashes, its memory may be written to a core dump or log file, exposing private data.
- Debugging tool misuse: Admin tools that take memory dumps (gcore, /proc/<pid>/mem) can capture passwords, keys, or personal data.
- A compromised host: If the server OS or kernel is compromised, attackers can snoop memory and grab secrets.
- Helper apps leaking: Sidecar or monitoring agents may log or expose secrets by mistake.
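To make the risk concrete, here is a small demonstration (a hedged sketch, assuming 64-bit CPython; the names and values are illustrative, not part of any real system). A secret "decrypted" by an application is just plain bytes sitting in process RAM, which is exactly what a crash dump, gcore, or /proc/<pid>/mem read would capture:

```python
import ctypes
import sys

# A secret decrypted into memory, e.g. a key fetched from a KMS.
secret = b"card=4111-1111-1111-1111"

# CPython stores a bytes object's payload inline after its header.
# sys.getsizeof(b"") equals header size plus one trailing NUL byte,
# so the payload starts one byte before that size. Anything able to
# read this process's memory sees the same plaintext.
payload_offset = sys.getsizeof(b"") - 1
leaked = ctypes.string_at(id(secret) + payload_offset, len(secret))

print(leaked == secret)  # True: the raw plaintext was sitting in RAM
```

The same plaintext would appear in a core dump of this process; encryption at rest and in transit does nothing to prevent it.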
Why this matters for banks and finance firms
For financial companies, leaking data from memory isn’t just a technical issue — it can break laws and trigger fines:
- PCI DSS 4.0: Requires credit card data and keys to stay protected at all times, including when apps are using them in memory.
- GDPR and GLBA: If personal data gets exposed — even briefly — it can be counted as a breach. That means fines and mandatory reporting.
- AI and models: If sensitive inputs like income or credit scores can be read from memory, it weakens trust. Compliance checks and auditing get harder too.
What’s needed to close the gap
To really protect data while it’s being used, systems need:
- Encrypted memory – so even someone with root access can’t see the data.
- Proof of safety (attestation) – keys and secrets should only unlock if the system is running in a trusted state.
- Conditional key release – services like Cloud KMS should only give out keys after the system proves it’s secure.
- Strong guardrails – use signed images, minimal permissions, no memory dumps in production, and careful logging.
2. Confidential Computing
The state of data that’s often ignored is data in use — when apps are actually working with the data in memory (RAM). In this state, attackers with deep access — like a hacked OS or a bad admin — can still peek at sensitive info. On Google Cloud, Confidential Computing protects data in use with Confidential VMs, Confidential GKE, Confidential Dataflow, Confidential Dataproc, and Confidential Space.
What is a TEE (Trusted Execution Environment)?
A TEE is a special secure area inside the CPU that:
- Encrypts memory automatically, so data in RAM can’t be read directly.
- Keeps workloads separate from the operating system and hypervisor, so even system admins or cloud staff can’t see inside.
- Checks itself for tampering before and during execution.
- Proves the system is running in a trusted state using something called remote attestation. This gives outside systems confidence that the workload is safe.
With that, apps can handle very sensitive data while lowering the chance of leaks.
Main TEE technologies
- Intel SGX—Software Guard Extensions: SGX lets developers make enclaves — tiny, locked-off areas of memory inside the CPU. Only the code inside the enclave can touch the data, and once data leaves, it’s encrypted. This gives fine-grained protection. You don’t need to put the whole app inside SGX, just the sensitive bits, like crypto functions, key handling, or parts of a fraud model.
Use cases: secure APIs, protecting algorithms, validating financial transactions, password/key handling.
Limitation: Enclaves are small, and they require code changes or SDKs, which adds complexity.
- AMD SEV / SEV-SNP—Secure Encrypted Virtualization: SEV encrypts all the memory of a virtual machine. Everything running inside the VM gets protection, no code changes needed. SEV-SNP goes further, blocking malicious hypervisors from injecting or tampering with VM memory. This gives coarse-grained protection: instead of one function, you protect the whole VM with multiple apps and services.
Use cases: run unmodified workloads safely, protect databases or containers, keep admins or attackers from reading VM memory.
Limitation: Small performance hit at VM level. You also can’t choose to protect just certain code pieces.
How Google Cloud uses TEEs
Google Cloud provides several options built on these technologies:
- Confidential VMs – Virtual machines with memory encryption turned on by default.
- Confidential GKE Nodes – Kubernetes nodes that run as Confidential VMs, so containers get the same protection.
- Confidential Space – A secure environment where different organizations can analyze sensitive data together without sharing raw data.
3. Why Confidential Kubernetes Matters
Kubernetes is the standard way to run modern apps. It gives portability and scale, but it wasn’t designed to fully protect data in memory.
In a normal Kubernetes setup:
- Nodes are just VMs – if the VM or hypervisor is hacked, everything running on it is at risk.
- Container isolation – stops apps from interfering with each other, but doesn’t stop the host OS from reading container memory.
- Shared hardware risks – sensitive apps may run on the same node as less trusted apps, creating security and compliance gaps.
For industries like banking, healthcare, or government, these are serious risks, not just theory.
How Confidential GKE Helps
Running workloads on Confidential GKE Nodes (which are built on Confidential VMs) adds hardware-level security:
- Encrypted memory – all data in RAM is encrypted by the CPU. Even root-level attackers can’t read it.
- Proof of trust (attestation) – before workloads start, the node shows cryptographic proof it’s secure and unmodified.
- Safe key access – Cloud KMS only releases secrets if the node’s attestation check passes. No attestation = no keys.
- Layered defense – works with Binary Authorization, Workload Identity, and RBAC to create stronger overall security.
Confidential Kubernetes upgrades a normal GKE cluster into a trusted environment for sensitive workloads — letting regulated industries run critical apps safely in the cloud.
4. Architecture Overview
The confidential Kubernetes architecture on GCP introduces multiple layers of security to protect data while in use, without requiring major changes to existing workloads.
The flow can be broken down into five key stages:
Workload Build
- Applications that require a Trusted Execution Environment (TEE) are compiled with Intel SGX support or targeted at AMD SEV-SNP Confidential VMs.
- Developers can use the Intel SGX SDK to build secure enclaves.
- Containers are then constructed with the enclave-enabled binaries, so that sensitive logic resides in the protected enclave.
- All images should be signed and stored in Artifact Registry, so they can be verified during deployment.
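The sign-then-verify step above can be sketched as follows. This is a simplified stand-in, not the real Artifact Registry or cosign flow: production pipelines sign with asymmetric KMS keys, while an HMAC over the image digest stands in here, and all names are illustrative:

```python
import hashlib
import hmac

# Hypothetical CI-side signing key (real pipelines use KMS/cosign keys).
SIGNING_KEY = b"ci-signing-key-demo"

def sign_digest(image_bytes: bytes) -> tuple[str, str]:
    # Compute the image digest, then sign it so deploy-time checks can
    # verify the image is exactly what CI produced.
    digest = "sha256:" + hashlib.sha256(image_bytes).hexdigest()
    sig = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest, sig

def verify_at_deploy(digest: str, sig: str) -> bool:
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

digest, sig = sign_digest(b"container-image-layers")
print(verify_at_deploy(digest, sig))       # True: untampered image
print(verify_at_deploy(digest, "0" * 64))  # False: signature mismatch
```

Binary Authorization performs the verify side of this check automatically at admission time, rejecting images whose attestations don't match.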
As shown in Figure 1, workloads in a Confidential GKE cluster follow a secure flow: the init container handles attestation and key requests, the app processes data only after secrets are in memory, and observability services capture logs.
Deployment on Confidential GKE Nodes
- The workload is deployed to a GKE cluster that uses Confidential VM-enabled nodes.
- Both Autopilot and Standard GKE are supported, and the node pools are marked as confidential.
- These nodes encrypt memory so even the OS, hypervisor, or admins can’t access it.
Remote Attestation
- Before the application can access secrets or data, a remote attestation service validates the node’s integrity.
- This service checks that the node is indeed running in Confidential VM mode, that the image is signed and untampered, and that the workload matches defined policies.
- Only after successful attestation is a token or proof provided that allows the workload to proceed.
- This step effectively ties data access to verified runtime conditions.
The step-by-step handshake is illustrated in Figure 2. The init container proves the node’s state to the verifier, KMS releases keys only if validation succeeds, and the app container runs once secrets are available.
Data Processing
- Once attested, the workload requests decryption keys from Cloud KMS.
- Policies in KMS ensure keys are released only if attestation is valid.
- Encrypted data is then sent into the enclave or TEE-enabled VM.
- Processing — whether it’s model inference, fraud detection, or cryptographic operations — happens entirely inside the TEE.
- At no point is sensitive data exposed in plaintext outside of encrypted memory.
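The key-unwrap-then-decrypt flow above can be sketched with an envelope-encryption toy. This is a hedged illustration under stated assumptions: a SHA-256 counter-mode XOR keystream stands in for a real AEAD cipher, and the KEK/DEK values are placeholders (real workloads would use Cloud KMS and a vetted crypto library):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Derive a deterministic keystream of n bytes from the key (toy cipher).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

KEK = b"kek-released-after-attestation"  # only handed out post-attestation
DEK = b"per-record-data-key"

wrapped_dek = xor(DEK, KEK)                       # stored alongside the data
ciphertext = xor(b"income=85000;score=710", DEK)  # data encrypted at rest

# Inside the TEE: unwrap the DEK with the KEK, then decrypt. The plaintext
# exists only in CPU-encrypted memory.
plaintext = xor(ciphertext, xor(wrapped_dek, KEK))
print(plaintext)  # b'income=85000;score=710'
```

The important property is the ordering: no attestation means no KEK, no KEK means no DEK, and no DEK means the data never becomes plaintext anywhere.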
Results Output
- After processing, only non-sensitive results or aggregated insights are allowed to leave the enclave.
- For example, a fraud detection system might output a simple “approve/deny” decision or a numeric risk score.
- Raw data and decrypted values remain locked inside the enclave.
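The enclave boundary described above can be sketched as a function contract: sensitive fields go in, only a decision and a coarse score come out. The risk model here is a deliberately trivial placeholder, not a real scoring algorithm:

```python
def score_transaction(record: dict) -> dict:
    # Toy risk heuristic over sensitive fields (runs inside the TEE).
    risk = 0.0
    if record["amount"] > 10_000:
        risk += 0.5
    if record["country"] != record["card_country"]:
        risk += 0.4
    # Only non-sensitive outputs cross the enclave boundary; the raw
    # record (including the PAN) never leaves this function.
    return {"decision": "deny" if risk >= 0.5 else "approve",
            "risk_score": round(risk, 2)}

out = score_transaction({"amount": 12_500, "country": "DE",
                         "card_country": "US", "pan": "4111111111111111"})
print(out)  # {'decision': 'deny', 'risk_score': 0.9}
```

Enforcing this contract at the API surface, so that callers can never request raw fields back, is what keeps the TEE's confidentiality guarantee meaningful end to end.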
Supporting Google Cloud Services
- GKE – Provides the orchestration layer to schedule workloads onto Confidential Nodes.
- Confidential VM – Supplies the TEE-based memory encryption and node-level protection.
- Cloud KMS – Manages cryptographic keys with attestation-aware access policies.
- Binary Authorization – Ensures that only signed, trusted images can be deployed to the cluster.
- Workload Identity – Maps pods to service accounts securely without relying on node-level credentials.
- Cloud Logging & Security Command Center (SCC) – Capture audit trails, attestation decisions, and anomalous activity for monitoring and compliance.
High-Level Flow
Figure 1: High-Level Flow of Confidential GKE Workloads
The diagram shows how a transaction flows from the client to a Confidential GKE node. The container checks the environment and requests the keys. Only after secrets are safely loaded into memory does the app container start processing the data. At the same time, logs are sent out for monitoring and visibility.
5. Attestation & Key Release Sequence
Figure 2: Attestation & Key Release Sequence
This sequence shows how the init container talks to the attestation service and Cloud KMS. The app only starts if the node passes validation and KMS releases a key. Secrets are kept in memory, and the process is logged for visibility.
6. Implementation Blueprint
Step 1 – Create a Confidential Node Pool
gcloud container clusters create secure-cluster \
--region=us-central1 \
--workload-pool="$(gcloud config get-value project).svc.id.goog"
gcloud container node-pools create confidential-pool \
--cluster=secure-cluster \
--region=us-central1 \
--machine-type=n2d-standard-4 \
--enable-confidential-nodes \
--num-nodes=3
Note: Confidential node support is machine-type specific (e.g., n2d or c2d with AMD SEV/SEV-SNP). Intel SGX enclaves are not a native GKE feature; Google Cloud's Confidential VM family provides VM-level TEEs instead.
Step 2 – Setup KMS with Conditional Access
gcloud kms keyrings create fin-kr --location=us
gcloud kms keys create fin-key --keyring=fin-kr --location=us --purpose=encryption
Note: In production, attach a conditional IAM policy so cloudkms.cryptoKeyVersions.useToDecrypt is granted only when attestation claims match (e.g., node is Confidential, image digest matches, project/cluster constraints). You can model this with a proxy service that validates the attestation token and calls KMS, or via a broker that enforces conditions server-side.
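The broker pattern mentioned in the note can be sketched as follows. This is a hypothetical illustration: an HMAC-signed JSON blob stands in for the verifier's real signed OIDC attestation token, a fixed XOR stands in for the KMS decrypt call, and all keys and claim names are placeholders:

```python
import hashlib
import hmac
import json

VERIFIER_KEY = b"verifier-signing-key"  # stand-in for the verifier's key

def make_token(claims: dict) -> str:
    # The attestation verifier signs the claims it has validated.
    body = json.dumps(claims, sort_keys=True)
    sig = hmac.new(VERIFIER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def broker_decrypt(token: str, ciphertext: bytes) -> bytes:
    # The broker checks the token's signature and claims before acting.
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(VERIFIER_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        raise PermissionError("invalid attestation token")
    claims = json.loads(body)
    if not claims.get("confidential_vm"):
        raise PermissionError("node is not a Confidential VM")
    # Here a real broker would call Cloud KMS; a toy XOR stands in.
    return bytes(b ^ 0x42 for b in ciphertext)

token = make_token({"confidential_vm": True, "image_digest": "sha256:def456"})
secret = broker_decrypt(token, bytes(b ^ 0x42 for b in b"api-key"))
print(secret)  # b'api-key'
```

The benefit of the broker is that the attestation-matching logic lives in one auditable service, rather than being re-implemented in every workload.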
Step 3 – Deployment
apiVersion: apps/v1
kind: Deployment # This defines a Kubernetes Deployment resource.
metadata:
name: fin-risk # It will manage the pods for your financial risk/fraud detection app under the name fin-risk
spec:
replicas: 3 # Runs 3 replicas (pods) for high availability and load balancing.
selector:
matchLabels: { app: fin-risk } # Pods are labeled app=fin-risk. Ensures the ReplicaSet/controller can match pods to the Deployment.
template:
metadata:
labels: { app: fin-risk }
spec:
serviceAccountName: fin-risk-sa # Uses a Workload Identity service account (fin-risk-sa) so pods can securely call Google Cloud APIs (like KMS). Avoids node-level credentials — each pod has its own identity.
volumes:
- name: secrets-tmp
emptyDir: { medium: "Memory" } # tmpfs
initContainers:
- name: attester # Runs before the main app container starts. Uses a small image (attester) to handle verification and secret retrieval.
image: gcr.io/YOUR-PROJECT/attester:sha256-ABC # signed image
args:
- "--attest-endpoint=https://verifier.example.com"
- "--kms-resource=projects/…/locations/us/keyRings/fin-kr/cryptoKeys/fin-key"
- "--out=/run/secure/creds.env"
volumeMounts:
- { name: secrets-tmp, mountPath: /run/secure } # Mounts the tmpfs volume at /run/secure to store decrypted secrets.
containers: # The fraud/risk detection app container. Will only start once the initContainer finishes successfully.
- name: app
image: gcr.io/YOUR-PROJECT/fin-risk:sha256-DEF # signed image
envFrom: # Pulls in Kubernetes Secrets for non-sensitive configuration
- secretRef:
name: placeholder # (optional) or read /run/secure/creds.env directly
volumeMounts:
- { name: secrets-tmp, mountPath: /run/secure, readOnly: true }
securityContext: # Locks down the container for least privilege. Protects against container escape attacks.
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
nodeSelector:
cloud.google.com/confidential-compute: "true" # Forces the pods to run only on Confidential VM nodes in the GKE cluster. Ensures workloads won’t accidentally land on a non-secure node.
tolerations:
- key: "confidential-compute"
operator: "Exists"
effect: "NoSchedule"
Explanation:
- Deploys a workload named fin-risk with 3 replicas (pods).
- Runs on Confidential GKE nodes only (via nodeSelector + toleration).
- Uses Workload Identity (serviceAccountName) for secure GCP API access.
- Creates a tmpfs volume (emptyDir: medium: Memory) for secrets in RAM only.
- Runs an initContainer (attester) before the main app:
- Talks to an attestation verifier to prove the node is in a trusted state.
- Requests decryption keys from Cloud KMS, tied to attestation results.
- Stores the decrypted secrets into /run/secure/creds.env in memory.
- Starts the main app container (fin-risk) only after the initContainer succeeds:
- Reads the secrets from the tmpfs mount.
- Runs with a hardened security context (read-only filesystem, no privilege escalation).
Ensures both init and app containers use signed images (to be verified by Binary Authorization).
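The attester initContainer's final step, writing secrets into the in-memory volume, could look like this sketch. The attestation and KMS calls are assumed to have already succeeded; the function, file names, and credential values are illustrative, with a temporary directory standing in for the /run/secure tmpfs mount:

```python
import os
import tempfile

def write_creds(secure_dir: str, creds: dict) -> str:
    # Write KEY=VALUE lines to creds.env on the tmpfs volume, so the
    # secrets live only in RAM and vanish when the pod is deleted.
    path = os.path.join(secure_dir, "creds.env")
    # 0600: only the app container's user can read the secrets.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        for key, value in creds.items():
            f.write(f"{key}={value}\n")
    return path

secure_dir = tempfile.mkdtemp()  # stands in for the emptyDir Memory mount
path = write_creds(secure_dir, {"KMS_DEK": "demo-key", "DB_USER": "fin"})
print(open(path).read().splitlines())  # ['KMS_DEK=demo-key', 'DB_USER=fin']
```

Because the volume is `emptyDir` with `medium: Memory`, nothing written here ever reaches the node's disk, which is what keeps the decrypted credentials inside the Confidential VM's encrypted RAM.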
Step 4 - Binary Authorization (policy snippet idea)
- Allow only images with your KMS/Keyless signatures.
- Pin to exact digests used in your CI.
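A minimal policy.yaml matching these ideas might look like the following sketch; the project and attestor names are placeholders for your own CI signing setup:

```yaml
# policy.yaml -- illustrative sketch, not a complete production policy.
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/YOUR-PROJECT/attestors/ci-attestor
```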
gcloud container binauthz policy import policy.yaml
gcloud container clusters update secure-cluster \
--binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE
7. Challenges and limits to know about
Confidential GKE is powerful, but like any technology, it has some limitations:
- Not for every workload – some apps, especially those needing GPUs or special hardware, may not run on confidential nodes yet.
- Harder debugging – since memory is encrypted, tools that inspect memory or debug crashes don’t work the same way.
- Extra setup – you need to manage attestation services and policies to make sure only trusted workloads run.
- Team learning curve – engineers and ops teams may need new skills to work with TEEs and confidential VMs.
Think of it as a higher-security option for sensitive or regulated workloads, not something you’ll use for every single app.
8. What’s next
Confidential computing on Kubernetes is still growing, and more features are on the way:
- Confidential Space – lets different banks or partners analyze data together without exposing their raw data to each other.
- Confidential AI – training and running machine learning models inside secure nodes, so sensitive data and models stay protected.
- Industry adoption – rules like PCI DSS, ISO, and GDPR will likely push more companies to use runtime encryption, not just disk and network encryption.
9. Conclusion
Confidential Kubernetes on Google Cloud closes the last significant security gap: data in use. It combines memory encryption, attestation, and conditional key access so that sensitive data remains secure even while applications are actively processing it. This means banks, healthcare organizations, and other regulated industries can migrate mission-critical workloads to Kubernetes with greater confidence, lower risk, and stronger compliance. In short, it makes Kubernetes a safe environment in which to run your most sensitive applications.