
Zero-knowledge Proof Meets Machine Learning in Verifiability: Abstract & Introduction



This paper is available on arxiv under CC BY 4.0 license.

Authors:

(1) Zhibo Xing, School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing, China, and the School of Computer Science, The University of Auckland, Auckland, New Zealand;

(2) Zijian Zhang, School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing, China, and Southeast Institute of Information Technology, Beijing Institute of Technology, Fujian, China;

(3) Jiamou Liu, School of Computer Science, The University of Auckland, Auckland, New Zealand;

(4) Ziang Zhang, School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing, China;

(5) Meng Li, Key Laboratory of Knowledge Engineering with Big Data (Hefei University of Technology), Ministry of Education; School of Computer Science and Information Engineering, Hefei University of Technology, 230601 Hefei, Anhui, China; Anhui Province Key Laboratory of Industry Safety and Emergency Technology; and Intelligent Interconnected Systems Laboratory of Anhui Province (Hefei University of Technology);

(6) Liehuang Zhu, School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing, 100081, China;

(7) Giovanni Russello, School of Computer Science, The University of Auckland, Auckland, New Zealand.

TABLE OF LINKS

Abstract & Introduction

Background

Zero-Knowledge Proof-Based Verifiable Machine Learning

Existing Scheme & Challenges and Future Research Directions

Conclusion, Acknowledgment and References

Abstract

With the rapid advancement of artificial intelligence technology, the usage of machine learning models is gradually becoming part of our daily lives. High-quality models rely not only on efficient optimization algorithms but also on the training and learning processes built upon vast amounts of data and computational power. However, in practice, due to various challenges such as limited computational resources and data privacy concerns, users in need of models often cannot train machine learning models locally.


This has led them to explore alternative approaches such as outsourced learning and federated learning. While these methods address the feasibility of model training effectively, they introduce concerns about the trustworthiness of the training process since computations are not performed locally.


Similarly, there are trustworthiness issues associated with outsourced model inference. These two problems can be summarized as the trustworthiness problem of model computations: How can one verify that the results computed by other participants are derived according to the specified algorithm, model, and input data? To address this challenge, verifiable machine learning (VML) has emerged. This paper presents a comprehensive survey of zero-knowledge proof-based verifiable machine learning (ZKP-VML) technology. We first analyze the potential verifiability issues that may exist in different machine learning scenarios. Subsequently, we provide a formal definition of ZKP-VML. We then conduct a detailed analysis and classification of existing works based on their technical approaches. Finally, we discuss the key challenges and future directions in the field of ZKP-based VML.


Index Terms—Zero-Knowledge Proof, Machine Learning, Verifiability.


I. INTRODUCTION

Recently, Artificial Intelligence (AI) has been widely used by both academic and industrial communities in applications ranging from healthcare to commercial products. For instance, Bhavsar [1] summarized several methods using machine learning in medical diagnosis; the many works covered there show that machine learning has played a significant role in this area. Xu [2] explored a Machine Learning (ML) model for tracking and predicting student performance in degree programs, through which colleges can help more students graduate on time. Addo [3] trained a deep learning model on a dataset of business financial records to analyze credit risk. However, as models' capabilities improve, the exponentially growing demand for computing power in machine learning has created the need for parallel training on larger clusters [4], [5].


Meanwhile, cloud service providers (e.g., Amazon [6], AliCloud [7]) offer affordable rental computing and storage resources, and several machine learning frameworks on the cloud have been proposed [8], [9], [10]. The combination of the two has resulted in cloud-based learning [11]. Beyond the need for computing power, access to data has further catalyzed the emergence and development of federated learning [12]. Privacy laws and regulations, such as the European Union's General Data Protection Regulation (GDPR) [13], the USA's California Consumer Privacy Act (CCPA) [14], and the Personal Information Protection Law (PIPL) of the People's Republic of China [15], make it more difficult for enterprises to access user data for training purposes, pushing them toward federated training. If users' local data is leaked through security vulnerabilities or collected illegally without permission, enterprises face substantial compensation. For instance, Facebook was sued for scanning faces in users' photo libraries and offering suggestions about who a person might be without user permission, and eventually paid $650 million to 1.6 million users [16]. Equifax paid $575 million for losing the personal and financial information of nearly 150 million people due to an unpatched Apache Struts framework in its database [17]. The challenges above have given rise to a series of machine learning paradigms based on outsourced computation, such as federated learning and cloud-based learning. While outsourced learning can effectively tackle the computational resource constraints and data privacy issues highlighted earlier, it also introduces a spectrum of vulnerabilities and attack vectors within these outsourced machine learning scenarios.


Malicious participants may attempt to evade honest execution of computations to gain additional advantages. This can include reducing the number of computation rounds to lower computational resource consumption, training models with incorrect data to obtain rewards, or using poorly performing models for inference to acquire greater inference rewards. It can even involve training or updating the model with deliberately poisoned data to introduce security backdoors into it, or forging plausible-looking results and intentionally providing incorrect inference outputs to deceive other participants. These issues in the outsourced model computation process call for a verification solution: by providing proofs of the model computation process and verifying them, the aforementioned attacks can be prevented.


Additionally, considering existing attacks such as membership inference and data reconstruction, the verification process should not leak additional information about the original private computation data. A membership inference attack [18] determines, given a machine learning model and a data record, whether that record was part of the model's training dataset. A data leakage and reconstruction attack [19] exploits unintended information leakage from clients' updates or gradients. Fortunately, all these adversaries and attacks can be defended against with zero-knowledge proof. Zero-knowledge proof (ZKP) is a powerful cryptographic technique that holds significant promise in addressing the verifiability challenges associated with outsourced machine learning.


Its fundamental principle allows one party to demonstrate the correctness of a statement to another party without revealing any further information. Within the realm of machine learning, ZKP harmonizes naturally with the need to verify the integrity of locally trained models or inference results. In essence, the correctness of a locally trained model can be framed as a statement, namely: "The local model is indeed computed through the specified training process on a particular dataset and initial model." Zero-knowledge proof empowers other participants to trust this statement without access to supplementary details, such as the specific training dataset employed in the process. The procedure entails transforming the computation into an arithmetic circuit and generating a proof affirming the circuit's satisfiability with respect to the input (the given dataset and model) and the output (the computed result). To vividly describe the application of zero-knowledge proof to machine learning verifiability, we describe the following three scenarios, involving different privacy policies for data and models respectively.
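To make the arithmetic-circuit framing concrete, here is a toy Python sketch of our own (not taken from the paper or any surveyed scheme): the single-neuron computation y = w·x + b is written as rank-1 constraints over a witness vector, and satisfiability is checked directly over the integers. A real proof system would work over a finite field and prove knowledge of a satisfying witness without revealing it; this snippet produces no proof and hides nothing.

```python
# Toy illustration of arithmetic circuit satisfiability (R1CS-style).
# The computation y = w*x + b is encoded as constraints A_i.z * B_i.z = C_i.z
# over a witness vector z = (1, x, w, b, t, y), where t = w*x is an
# intermediate wire. No proof is generated here; we only check the constraints.

def dot(row, z):
    return sum(r * v for r, v in zip(row, z))

def satisfies(A, B, C, z):
    """Check every constraint A_i.z * B_i.z == C_i.z."""
    return all(dot(a, z) * dot(b, z) == dot(c, z) for a, b, c in zip(A, B, C))

# Witness layout: z = (one, x, w, b, t, y)
x, w, b = 3, 2, 5
t = w * x            # intermediate wire
y = t + b            # circuit output
z = (1, x, w, b, t, y)

# Constraint 1: w * x = t
# Constraint 2: (t + b) * 1 = y
A = [(0, 0, 1, 0, 0, 0), (0, 0, 0, 1, 1, 0)]
B = [(0, 1, 0, 0, 0, 0), (1, 0, 0, 0, 0, 0)]
C = [(0, 0, 0, 0, 1, 0), (0, 0, 0, 0, 0, 1)]

print(satisfies(A, B, C, z))                        # True: valid witness
print(satisfies(A, B, C, (1, x, w, b, t, y + 1)))   # False: wrong output
```

In a ZKP-VML scheme, the training or inference computation plays the role of this circuit, the private dataset and model form the witness, and the proof convinces a verifier that a satisfying witness exists without revealing it.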


  1. Money Laundering Monitoring Model: Banks store a wide variety of user information and transaction records, including some records of criminal acts such as money laundering. To combat money laundering more quickly and effectively, the government wants to train a model that monitors money laundering through transaction behavior and records. At the same time, banks do not want to disclose or reveal their unique and private data, so federated learning with zero-knowledge proof can be deployed. First, banks commit to their private transaction and user data and publish the commitments to the government. Then, the government distributes the global model to all banks, and the banks train models based on the committed data. A proof of correctness of this computation is generated using zero-knowledge proof, from which the government can verify whether each bank carried out the training process as claimed. With an aggregation protocol, the banks and the government can then jointly compute a global model for the next round of training. In this process, zero-knowledge proofs ensure that the government can verify that training was indeed executed as claimed without learning anything about the private training data beyond the integrity of the training. The whole process is shown in Figure 1(a), and an illustrative code sketch of one such round is given after the three scenarios.


  2. Artificial Intelligence Diagnosis: With the help of artificial intelligence models, hospitals are able to diagnose patients' diseases more accurately. However, a diagnosis on its own lacks persuasiveness. To address claim disputes between patients and insurance companies, the hospital also needs to prove to the insurance company that the diagnosis is indeed the result of inference by a high-quality model on the patient's medical case. A zero-knowledge proof-based machine learning approach can solve this problem. First, the hospital commits to the model used for diagnosis and to the patient's case. Then, the hospital uses the AI model to produce a diagnosis, after which zero-knowledge proof is used to prove the correctness of the execution, and the results are sent to the patient. The patient can forward this data to the insurance company to convince it of the reliability of the diagnosis, that is, that the diagnosis was indeed obtained by the correct inference process. The whole process is shown in Figure 1(b).


  3. Student Learning Behavior Intelligence Analysis: In order to provide schools and teachers with more effective and accurate guidance for students, machine learning models can be used to help analyze students' learning behaviors and give appropriate recommendations. At the same time, the analytics company needs to prove the reliability of the results to the school without revealing its inference model. In this scenario, the analytics company first selects an appropriate inference model according to the school's requirements and commits to the model to be used. The school then collects student behavior data according to the requirements given by the analytics company and sends it to the company. The analytics company uses its private model to analyze the data and generates a proof of the correctness of the inference process using zero-knowledge proof. Finally, the analytics company sends the results to the school, which verifies their reliability and can then follow the guidance to improve its teaching methods. The whole process is shown in Figure 1(c).
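The Python sketch below illustrates the message flow of one round of scenario 1 (commit, distribute, train, prove, verify, aggregate) under strong simplifying assumptions. The names Bank, local_update, aggregate, and the commit/verify helpers are our own placeholders; the commitment is a plain hash and the "proof" is just a bundle of public values, so nothing here carries the guarantees of an actual zero-knowledge proof system.

```python
# Hedged sketch of one federated round with verifiability hooks (scenario 1).
# Not a real ZKP protocol: the commitment is SHA-256 and the "proof" is a
# placeholder bundle; it only shows where proofs would be produced and checked.

import hashlib
import json
from dataclasses import dataclass, field


def commit(obj) -> str:
    """Toy binding commitment: a SHA-256 hash of the serialized object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()


def local_update(global_model, data):
    """Stand-in for local training on the bank's committed data."""
    return [w + sum(data) / len(data) for w in global_model]


@dataclass
class Bank:
    data: list
    data_commitment: str = field(init=False)

    def __post_init__(self):
        # Step 1: each bank publishes a commitment to its private records.
        self.data_commitment = commit(self.data)

    def train_and_prove(self, global_model):
        # Steps 3-4: train on the committed data, then produce a "proof" that
        # the update was derived from (global_model, committed data).
        update = local_update(global_model, self.data)
        proof = {
            "data_commitment": self.data_commitment,
            "model_in": global_model,
            "update": update,
            "argument": "pi",  # placeholder for the actual zero-knowledge proof
        }
        return update, proof


def government_verifies(proof, data_commitment, global_model, update) -> bool:
    """The government checks the proof against public values only; it never
    sees the banks' transaction data."""
    return (
        proof["data_commitment"] == data_commitment
        and proof["model_in"] == global_model
        and proof["update"] == update
        and proof["argument"] == "pi"   # stands in for verifying the ZKP
    )


def aggregate(updates):
    """Step 5: simple federated averaging over the verified updates."""
    return [sum(ws) / len(ws) for ws in zip(*updates)]


if __name__ == "__main__":
    banks = [Bank(data=[1, 2, 3]), Bank(data=[4, 5, 6])]
    global_model = [0.0, 0.0]           # Step 2: government distributes the model
    verified_updates = []
    for bank in banks:
        update, proof = bank.train_and_prove(global_model)
        if government_verifies(proof, bank.data_commitment, global_model, update):
            verified_updates.append(update)
    print(aggregate(verified_updates))  # global model for the next round
```

Scenarios 2 and 3 follow the same commit-prove-verify pattern, with the private witness being the patient's case or the company's proprietary model instead of the banks' transaction data.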


Fig. 1. Real-life applications of zero-knowledge proof-based verifiable machine learning: (a) federated training of anti-money-laundering models based on private transaction behavior; (b) artificial intelligence diagnosis based on private patient cases; (c) inference of teaching guidance based on private student learning behavior.


A. Related Work

Table I compares the characteristics of existing surveys. Several researchers have reviewed secure machine learning or federated learning, but few have focused on the verifiability of machine learning, especially with zero-knowledge proof. Given the absence of a review on zero-knowledge proof-based verifiable machine learning, we examine the existing survey-type work in three steps:


  1. Existing work on secure machine learning;


  2. Existing work on verifiable machine learning;


  3. Existing work on ZKP-based verifiable machine learning.

Survey on secure machine learning


Li [20] comprehensively analyzes existing security issues in artificial intelligence, covering aspects such as robustness, capability, explainability, reproducibility, fairness, privacy, and accountability. The analysis includes cryptographic tools such as multi-party secure computation and trusted execution environments to ensure privacy and security in AI environments. However, the security and trust concerns discussed in that paper are oriented more towards concerns inherent to AI technologies themselves. Ma [21] examines the security issues of outsourced deep learning and, based on Gennaro's definition of secure outsourcing computation, presents a system model and security requirements for outsourced learning. These requirements encompass privacy, verifiability, efficiency, and non-interactivity. Nevertheless, the authors' analysis of existing work primarily focuses on the privacy aspect of outsourced learning, thereby overlooking the issue of verifiability.


There are numerous papers of this type, and some have not been listed here. These endeavors stem from the perspective of artificial intelligence and machine learning security. However, they have not thoroughly delved into the importance of verifiability. Their discussions on security tend to lean more towards the safety of artificial intelligence technologies, such as interpretability and privacy concerns.


Survey on verifiable machine learning


Zhang [23] focuses on verifiable federated learning. In that study, the authors succinctly define verifiable federated learning as "the ability of one party to prove to other parties in an FL protocol that it has correctly performed the intended task without deviation". Furthermore, they classify the verifiability of diverse computation types within federated learning based on factors such as architecture, participants, and the specific operations executed. The technologies covered encompass trusted execution environments, reputation mechanisms, and contract theory, among others. However, this work places a predominant emphasis on the diverse computation types within federated learning, with less in-depth investigation of verifiable machine learning more broadly, and the contribution of zero-knowledge proof techniques to verifiability is underrepresented in the analysis. Tariq [22] comprehensively analyzes trust issues in federated learning, dissecting them into interpretability-based trust, fairness-based trust, and security- and privacy-based trust, and analyzing these aspects across the entire process of model and data selection, model training, and model aggregation. The paper's treatment of verifiability in federated learning involves consensus mechanisms, homomorphic encryption, secret sharing, multi-party secure computation, and other cryptographic techniques. However, the authors focus on verifying model availability rather than the verifiability of the training process itself, and the analysis overlooks zero-knowledge proof technology.


These endeavors, to varying degrees, take into account the significance of verifiability in the context of machine learning. However, they have devoted less attention to the verifiability of model training or inference processes, instead focusing on other computational aspects or alternative methods for achieving verifiability. Additionally, there has been limited consideration of the application of zero-knowledge proof techniques to enhance verifiability.


Survey on ZKP-based verifiable machine learning


Modulus Labs [24] conducted an in-depth exploration of the verifiability of machine learning inference processes based on zero-knowledge proofs. In this comprehensive review, the authors categorized the existing ZKP-ML schemes based on the different types of zero-knowledge proof systems used and placed particular emphasis on the analysis of the influence of distinct zero-knowledge proof systems on the scheme’s performance. The authors employed extensive experimental assessments to evaluate the operational performance of different schemes across diverse tasks, including runtime and memory consumption. Additionally, they delved into a detailed analysis of the time consumed by each zero-knowledge proof operation within each scheme, offering a deeper insight into how various zero-knowledge proof systems impact the scheme’s efficiency.


This is the survey most closely related to the present paper. It extensively acknowledges the application of zero-knowledge proofs to enhancing the verifiability of machine learning and substantiates its claims through comprehensive experimentation, elucidating the influence of diverse zero-knowledge proof systems on the efficiency of ZKP-ML schemes. However, it gives relatively little consideration to how the design of a ZKP-ML scheme itself contributes to efficiency, instead predominantly showcasing the impact of various zero-knowledge proof systems, supported by experimental evidence. Furthermore, all ZKP-ML schemes discussed in that work are also covered in our survey.


B. Scope of This Survey


The main contributions of this survey are listed as follows:


1) Bring zero-knowledge proof-based verifiable machine learning to the stage: To the best of our knowledge, we are the first to study ZKP-VML methods systematically, and the survey covers almost all achievements related to ZKP-VML up to June 2023.


2) Perform formal modeling of ZKP-VML: We provide a comprehensive overview of ZKP-based verifiable machine learning, including its definition, properties, and challenges.


3) Classify and analyze the existing schemes: We categorize existing work into two major application classes and conduct a more detailed classification based on the technical characteristics of different schemes. For each piece of work, we conduct a thorough analysis, explaining how it achieves a particular feature or characteristic.


4) Present challenges and future directions: We present the key challenges and future directions of ZKP-VML, which can guide follow-up researchers.


C. Structure of the Survey


The structure of this survey is shown in Figure 2. Section II illustrates the background knowledge of machine learning and zero-knowledge proof. Section III presents the definition, workflow, and several properties of zero-knowledge proof-based verifiable machine learning. Section IV analyzes existing schemes and details the key technologies for scheme building. Section V presents challenges and future directions. Finally, Section VI concludes the survey.


A list of key acronyms and abbreviations used throughout the paper is given in Table II.