Zero-Knowledge-Proof-Based Anomaly Detection: Related Works

Written by quantification | Published 2024/01/02

TL;DR: This section surveys related work in Federated Learning, focusing on attack detection and defense mechanisms. It critiques existing methodologies, such as k-means clustering for attack detection and various defense strategies for robust learning in FL, and highlights the limitations of these approaches, including reliance on historical data and unintended quality degradation in the absence of attacks.

This paper is available on arXiv under the CC BY-NC-SA 4.0 DEED license.

Authors:

(1) Shanshan Han & Qifan Zhang, UCI;

(2) Wenxuan Wu, Texas A&M University;

(3) Baturalp Buyukates, Yuhang Yao & Weizhao Jin, USC;

(4) Salman Avestimehr, USC & FedML.

Table of Links

Abstract and Introduction

Problem Setting

The Proposed Two-Stage Anomaly Detection

Verifiable Anomaly Detection using ZKP

Evaluations

Related Works

Conclusion & References

6 RELATED WORKS

Detection of the occurrence of attacks. Zhang et al. (2022b) employ k-means to partition local models into clusters corresponding to "benign" or "malicious" submissions. While this approach can efficiently detect potentially malicious client models, it relies heavily on historical client models from previous training rounds and becomes less effective when information on past client models is limited. For example, in their implementation (Zhang et al., 2022a), because historical client model information must first be collected, the authors start attack detection at different training rounds depending on the dataset, e.g., round 50 for MNIST and FEMNIST, and round 20 for CIFAR10. This is unsuitable for real FL systems, where attacks may occur in earlier rounds as well. A minimal sketch of this style of detector follows.
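To make the critique concrete, the sketch below clusters flattened client model vectors with k-means and flags the cluster farther from the previous global model as suspect. The function name, inputs, and the distance heuristic are our illustrative assumptions, not the exact procedure of Zhang et al. (2022b).

```python
# A minimal sketch of k-means-based attack detection, assuming flattened
# client model vectors and a simple distance heuristic. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def detect_malicious(client_updates, prev_global_model):
    """client_updates: list of 1-D numpy arrays (flattened local models).
    prev_global_model: 1-D numpy array (flattened previous global model).
    Returns indices of clients flagged as potentially malicious."""
    X = np.stack(client_updates)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    # Heuristic (an assumption of this sketch): the cluster whose centroid
    # lies farther from the previous global model is labeled "malicious".
    dists = [np.linalg.norm(X[labels == c].mean(axis=0) - prev_global_model)
             for c in (0, 1)]
    suspect_cluster = int(np.argmax(dists))
    return [i for i, lab in enumerate(labels) if lab == suspect_cluster]
```

The sketch also makes the limitation visible: in early rounds, when no trustworthy reference model from past training exists, the distance heuristic has little signal for deciding which of the two clusters is benign.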

Defense mechanisms in FL. Robust learning and the mitigation of adversarial behaviors in FL have been extensively explored (Blanchard et al., 2017; Yang et al., 2019; Fung et al., 2020; Pillutla et al., 2022; He et al., 2022; Karimireddy et al., 2020; Sun et al., 2019; Fu et al., 2019; Ozdayi et al., 2021; Sun et al., 2021; Yin et al., 2018; Chen et al., 2017; Guerraoui et al., 2018; Xie et al., 2020; Li et al., 2020; Cao et al., 2020). Some approaches retain only the local models that are most likely to be benign in each FL iteration, e.g., (Blanchard et al., 2017; Guerraoui et al., 2018; Yin et al., 2018; Xie et al., 2020): instead of aggregating all client submissions in each round, they keep a subset of local models to represent the rest. Such approaches are effective, but to guarantee that all Byzantine local models are filtered out, they retain fewer local models than the actual number of benign ones, so some benign local models go unrepresented in aggregation. This wastes the computation of the benign clients that are not selected and changes the aggregation results, since those benign local models do not participate in aggregation. Other approaches re-weight or modify local models to mitigate the impact of potentially malicious submissions (Fung et al., 2020; Karimireddy et al., 2020; Sun et al., 2019; Fu et al., 2019; Ozdayi et al., 2021; Sun et al., 2021), while still others alter the aggregation function or directly modify the aggregation results (Pillutla et al., 2022; Karimireddy et al., 2020; Yin et al., 2018; Chen et al., 2017). While these defense mechanisms can be effective against attacks, they may inadvertently degrade the quality of outcomes by unintentionally altering aggregation results even when no attack is present, as illustrated in the sketches below. This is especially problematic given the infrequency of attacks in real-world FL scenarios.
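Two of the defense families discussed above can be sketched briefly: a Krum-style selection rule in the spirit of Blanchard et al. (2017), which keeps a single submission and discards the rest, and the coordinate-wise median of Yin et al. (2018), which replaces the mean as the aggregation function. Both sketches assume flattened model vectors; the function and parameter names are our own simplifications, not the papers' reference implementations.

```python
# Illustrative sketches of two defense families, assuming flattened
# client model vectors. Simplified for exposition.
import numpy as np

def coordinate_wise_median(client_updates):
    """Robust aggregation (Yin et al., 2018 style): replace the mean
    with a per-coordinate median over all client updates."""
    return np.median(np.stack(client_updates), axis=0)

def krum_select(client_updates, num_byzantine):
    """Krum-style selection (Blanchard et al., 2017 spirit): keep the one
    update whose n - f - 2 nearest neighbors are closest, discarding all
    other submissions, including benign ones."""
    X = np.stack(client_updates)
    n = len(X)
    # Pairwise squared distances between all client updates.
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    k = n - num_byzantine - 2  # neighbors scored per client; assumes k > 0
    scores = [np.sort(d[i][np.arange(n) != i])[:k].sum() for i in range(n)]
    return int(np.argmin(scores))
```

Even with no attacker present, the coordinate-wise median generally differs from the plain average of the updates, and Krum discards every non-selected benign submission; this is precisely the unintended alteration of aggregation results described above.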

