A Zero-Knowledge Anomaly Detection Approach for Robust Federated Learning

Too Long; Didn't Read

This paper introduces an anomaly detection approach for Federated Learning systems that addresses real-world challenges: it proactively detects attacks, eliminates malicious client submissions without harming benign ones, and ensures robust verification of the defense with Zero-Knowledge Proofs, making the method a strong fit for privacy-preserving machine learning.

This paper is available on arXiv under the CC BY-NC-SA 4.0 DEED license.

Authors:

(1) Shanshan Han & Qifan Zhang, UCI;

(2) Wenxuan Wu, Texas A&M University;

(3) Baturalp Buyukates, Yuhang Yao & Weizhao Jin, USC;

(4) Salman Avestimehr, USC & FedML.

Table of Links

Abstract and Introduction

Problem Setting

The Proposed Two-Stage Anomaly Detection

Verifiable Anomaly Detection using ZKP

Evaluations

Related Works

Conclusion & References

ABSTRACT

Federated learning (FL) systems are vulnerable to malicious clients that submit poisoned local models to achieve their adversarial goals, such as preventing the global model from converging or inducing the global model to misclassify certain data. Many existing defense mechanisms are impractical in real-world FL systems: some require prior knowledge of the number of malicious clients, even though adversaries typically do not announce their intentions before attacking, while others rely on re-weighting or modifying submissions, which might change aggregation results even in the absence of attacks. To address these challenges in real FL systems, this paper introduces a cutting-edge anomaly detection approach with the following features: i) detecting the occurrence of attacks and performing defense operations only when attacks happen; ii) upon the occurrence of an attack, further detecting the malicious client models and eliminating them without harming the benign ones; iii) ensuring honest execution of the defense mechanisms at the server by leveraging a zero-knowledge proof mechanism. We validate the superior performance of the proposed approach with extensive experiments.

1 INTRODUCTION

Federated Learning (FL) (McMahan et al., 2017a) enables clients to collaboratively train machine learning models without sharing their local data with other parties, maintaining the privacy and security of that data. Due to its privacy-preserving nature, FL has attracted considerable attention and has been utilized in numerous domains (Hard et al., 2018; Chen et al., 2019; Ramaswamy et al., 2019; Leroy et al., 2019; Byrd & Polychroniadou, 2020; Chowdhury et al., 2022). However, even though FL does not require sharing raw data, its decentralized and collaborative nature inadvertently introduces privacy and security vulnerabilities (Cao & Gong, 2022; Bhagoji et al., 2019; Lam et al., 2021; Jin et al., 2021; Tomsett et al., 2019; Chen et al., 2017; Tolpegin et al., 2020; Kariyappa et al., 2022; Zhang et al., 2022c). Malicious clients in FL systems can harm training by submitting spurious models that prevent the global model from converging (Fang et al., 2020; Chen et al., 2017), or by planting backdoors that induce the global model to misbehave on certain samples (Bagdasaryan et al., 2020b;a; Wang et al., 2020).
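For context, a standard FL round aggregates the clients' local models by (weighted) averaging, as in FedAvg. The minimal sketch below, built on an arbitrary toy setup rather than the paper's experiments, shows how a single scaled, poisoned submission can dominate that average.

```python
import numpy as np

def fedavg(client_models, weights=None):
    """Weighted average of client model vectors (standard FedAvg aggregation)."""
    client_models = np.stack(client_models)
    if weights is None:
        weights = np.full(len(client_models), 1.0 / len(client_models))
    return np.average(client_models, axis=0, weights=weights)

# Illustrative round: three benign clients and one model-poisoning client
# that submits a scaled, sign-flipped update to drag the aggregate away
# from the benign consensus (toy numbers, not the paper's setup).
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.01, size=10) for _ in range(3)]
poisoned = -50.0 * benign[0]

global_clean = fedavg(benign)
global_attacked = fedavg(benign + [poisoned])

print(np.linalg.norm(global_clean))     # small: close to the benign consensus
print(np.linalg.norm(global_attacked))  # large: dominated by the poisoned submission
```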


Existing literature on robust learning and mitigation of adversarial behaviors includes Blanchard et al. (2017); Yang et al. (2019); Fung et al. (2020); Pillutla et al. (2022); He et al. (2022); Cao et al. (2022); Karimireddy et al. (2020); Sun et al. (2019); Fu et al. (2019); Ozdayi et al. (2021); Sun et al. (2021), etc. These approaches exhibit shortcomings that make them less suitable for real FL systems. Some of these strategies require prior knowledge about the number of malicious clients within the FL system (Blanchard et al., 2017), even though in practice an adversary would not notify the system before attacking. Also, some of these methods mitigate the impact of potentially malicious client submissions by re-weighting the local models (Fung et al., 2020), retaining only the few local models that are most likely to be benign while removing the others (Blanchard et al., 2017), or modifying the aggregation function (Pillutla et al., 2022). These methods can unintentionally alter the aggregation results even in the absence of deliberate attacks, and attacks happen infrequently in real-world scenarios. So while such defense mechanisms can mitigate the impact of potential attacks, they can inadvertently compromise the result quality when applied to benign cases.


Figure 1: Overview of the proposed anomaly detection for FL systems.
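To make the concern about altering benign aggregation concrete, here is a small sketch (not from the paper) comparing plain FedAvg with a coordinate-wise trimmed mean, a common robust aggregator, on purely benign updates; the trimming amount and the synthetic data are arbitrary assumptions.

```python
import numpy as np

def trimmed_mean(updates, k=2):
    """Coordinate-wise trimmed mean: drop the k smallest and k largest values
    in every coordinate before averaging (a common robust aggregator)."""
    s = np.sort(updates, axis=0)
    return s[k:-k].mean(axis=0)

rng = np.random.default_rng(1)
# Ten benign client updates drawn from the same distribution -- no attack at all.
benign_updates = rng.normal(0.0, 0.05, size=(10, 8))

fedavg_result = benign_updates.mean(axis=0)
robust_result = trimmed_mean(benign_updates, k=2)

# The robust aggregator discards 4 of the 10 values in every coordinate even
# though every client is honest, so its output drifts from the plain average.
print(np.linalg.norm(fedavg_result - robust_result))
```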


Moreover, existing defense mechanisms are deployed at the FL server without any verification procedure to ensure their correct execution. While most clients are benign and wish to collaboratively train machine learning models, they may still be skeptical about the server's reliability, since the defense mechanisms modify the original aggregation procedure. Thus, a successful anomaly detection approach should simultaneously satisfy the following: i) it should detect the occurrence of attacks and act only in the cases where attacks happen; ii) if an attack is detected, it must further identify the malicious client submissions and mitigate (or eliminate) their adversarial impact without harming the benign client models; iii) there should be a robust mechanism to verify the honest execution of the defense mechanisms.


In this work, we propose a novel anomaly detection mechanism that is specifically tailored to the genuine challenges faced by real-world FL systems. Our approach follows a two-stage scheme at the server to filter out malicious client submissions before aggregation. It starts with a cross-round check against a cache of "reference models" to determine whether any attack has occurred. If an attack is detected, a subsequent cross-client detection is executed to eliminate the malicious client models without harming the benign ones; meanwhile, the reference models in the cache are renewed. We provide an overview in Figure 1, and a simplified sketch of the two-stage filter is given after the contribution list below. Our contributions are summarized as follows:


i) Proactive attack detection. Our strategy is equipped with an initial cross-round check that detects the occurrence of potential attacks, ensuring that defensive operations are activated only when attacks are present and leaving the aggregation untouched in attack-free scenarios.


ii) Enhanced anomaly detection. By coupling the cross-round check with a subsequent cross-client detection, our approach efficiently eliminates malicious client submissions without harming the benign local submissions.


iii) Autonomy from prior knowledge. Our method operates effectively without prerequisites such as knowledge of the data distribution or the number of malicious clients. This autonomy ensures widespread applicability and adaptability of our approach across different FL tasks, regardless of the data distribution and the choice of models.


iv) Rigorous verification protocol. By incorporating Zero-Knowledge Proof (ZKP) (Goldwasser et al., 1989) methodologies, our approach guarantees that the elimination of malicious client models is executed correctly, ensuring that clients can place trust in the defense mechanism of the FL system.
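To picture the two-stage workflow described above, here is a minimal, illustrative sketch of such a filter. The cosine-similarity cross-round check against a single cached reference model, the median-distance cross-client test, and both thresholds are simplifying assumptions made for this sketch; they are not the paper's exact detection criteria.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def two_stage_filter(client_models, reference_model, round_thresh=0.5, z_thresh=2.0):
    """Illustrative two-stage filter run at the server before aggregation."""
    client_models = [np.asarray(m, dtype=float) for m in client_models]

    # Stage 1 (cross-round check): compare each submission against the cached
    # reference model from earlier rounds; if all submissions stay consistent,
    # treat the round as attack-free and keep every model.
    round_scores = [cosine(m, reference_model) for m in client_models]
    if min(round_scores) >= round_thresh:
        kept = client_models
    else:
        # Stage 2 (cross-client detection): score each submission by its distance
        # to the coordinate-wise median of this round and drop clear outliers.
        median = np.median(np.stack(client_models), axis=0)
        dists = np.array([np.linalg.norm(m - median) for m in client_models])
        z = (dists - dists.mean()) / (dists.std() + 1e-12)
        kept = [m for m, zi in zip(client_models, z) if zi < z_thresh]

    aggregated = np.mean(np.stack(kept), axis=0)
    new_reference = aggregated  # renew the cached reference model for the next round
    return aggregated, new_reference, len(kept)

# Usage: with many clients, a single heavily scaled submission fails the
# cross-round check and is then dropped by the cross-client stage.
clients = [np.ones(4) + 0.01 * i for i in range(10)] + [-30.0 * np.ones(4)]
agg, new_ref, n_kept = two_stage_filter(clients, reference_model=np.ones(4))
print(n_kept)  # 10 of the 11 submissions survive the filter
```

In the full system, the server would additionally produce a zero-knowledge proof that this filtering and aggregation step was executed as specified (contribution iv), so that clients can trust the defense was honestly carried out without re-running it themselves.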