Introduction

Privacy in machine learning has drawn growing attention in the new decade, driven by the exponential growth of machine learning itself. Public and private sectors adopting artificial intelligence have felt the need to protect data that holds sensitive private information about individuals. Examples include employees' private information such as credit card and bank details, and confidential health information such as patient disease records held by hospitals. Several techniques have become familiar for protecting data privacy: training models in distributed systems using local/global data, or obfuscating trained models.

With AI research in differential privacy and industry adoption by dominant players like Microsoft, Google, and IBM, Google has introduced a differential privacy framework known as Tensorflow Privacy, equipped with a Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm to provide strong privacy guarantees. This open-source framework has the following characteristics:

- Mitigates the risk of exposing sensitive training data (heterogeneous datasets) in machine learning.
- Can be integrated into any TensorFlow system without changes to model architectures, training procedures, or processes.
- Allows comparing two machine learning models with respect to a single user's data.
- Provides a modified Stochastic Gradient Descent that clips each per-example model update, averages the clipped updates, and adds noise to the final average.
- Helps model generalization by encoding generic data.
- Supports federated learning and training where updates are received from several participants.
- Protects against differencing attacks, linkage attacks, and reconstruction attacks.

Objective

Tensorflow Privacy aims to integrate a privacy mechanism with the training procedure by decoupling:

- The training procedure (such as stochastic gradient descent with batch normalization and simultaneous collection of accuracy metrics and training-data statistics).
- The selection and configuration of the privacy mechanisms applied to each of the aggregates collected (model gradients, batch-normalization weight updates, metrics).
- The accounting procedure used to compute a final (ε, δ)-DP guarantee.

This blog primarily discusses Tensorflow Privacy (the DP algorithm, its metrics, and test criteria), which guarantees privacy through differentially private algorithms. In this context, the blog outlines the two most widely used DP algorithms and demonstrates text classification of sensitive product reviews.

Differential Private Algorithms

Differential privacy acts as a regularizer by training machine learning models that behave statistically similarly on two datasets differing in a single individual.

PATE (Private Aggregation of Teacher Ensembles): This framework achieves private learning by partitioning the private dataset into subsets, with no overlap between the data included in any pair of partitions, training an ML model (a "teacher") independently on each partition, and then ensembling the teachers by adding noise to their aggregated predictions to generate a single output. With a large number of labels the total privacy budget grows, which limits the privacy guarantees; the framework is also at risk from an adversary attacking the internal parameters of the published teachers, which restricts their publication. To overcome these limitations, a scalable student model is trained by transferring knowledge acquired by the teacher ensemble in a privacy-preserving manner, keeping the privacy budget at a constant value. The student selects inputs from a set of unlabeled public data and submits these inputs to the teacher ensemble to have them labeled. The noisy aggregation mechanism responds with private labels, restricting attacker access to other parts of the data; a conceptual sketch of this aggregation is given below. For a detailed understanding of PATE, please refer to References 4 and 5 listed in the end section.
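To make the noisy aggregation concrete, here is a minimal NumPy sketch of PATE's label aggregation step. This is a conceptual illustration only, not an API from any PATE library; the function name noisy_aggregate, the noise parameter gamma, and the vote values are illustrative assumptions:

import numpy as np

def noisy_aggregate(teacher_preds, num_classes, gamma, rng):
    # Count each teacher's vote for this single query.
    votes = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    # Add Laplace noise of scale 1/gamma to every vote count;
    # a smaller gamma means more noise and a stronger privacy guarantee.
    votes += rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    # The noisy plurality becomes the private label handed to the student.
    return int(np.argmax(votes))

rng = np.random.default_rng(0)
teacher_preds = np.array([2, 2, 1, 2, 0, 2, 1, 2, 2, 2])  # votes from 10 teachers
print(noisy_aggregate(teacher_preds, num_classes=3, gamma=0.1, rng=rng))

Because the student only ever sees these noisy labels on public data, publishing the student does not expose the teachers' parameters or the private partitions.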
Differentially Private Stochastic Gradient Descent (DP-SGD): Stochastic gradient descent (SGD) adapted with a differentially private algorithm. It differs from PATE by making fewer assumptions about the ML task and by providing provable privacy guarantees expressed in terms of differential privacy. TensorFlow (TF) Privacy wraps existing optimizers (e.g., SGD, Adam, ...) into their differentially private counterparts. In addition, it allows tuning the parameters introduced by differentially private optimization, and it supports comparing different privacy mechanisms and accounting procedures.

In DP-SGD the sensitivity of each gradient needs to be bounded, to limit how much each individual training point sampled into a mini-batch can influence the resulting gradient computation. This is accomplished by clipping the gradient computed on each training point and adding noise, which controls how much any single training point can possibly impact the model parameters. These clipping and noising steps are what turn ordinary stochastic gradient descent into a differentially private one; a conceptual sketch of one DP-SGD step is given at the end of this section.

Tensorflow Privacy further works on the principle of Rényi differential privacy, which expresses a stricter privacy guarantee for privacy-preserving algorithms and supports composition of heterogeneous mechanisms. An algorithm A satisfies ε-differential privacy if for every t in the range of A, and for every pair of neighboring databases D and D′, the Rényi divergence of order ∞ between A(D) and A(D′) is less than ε. That is,

D_α(A(D) ‖ A(D′)) ≤ ε, where α = ∞.

Rényi differential privacy (RDP) allows the possibility of α being finite: an algorithm is (α, ε)-RDP if the Rényi divergence of order α between its outputs on any two adjacent databases is no more than ε. RDP tracks the cumulative privacy loss and even provides a comparison mechanism for objectively determining which of two models is more privacy-preserving than the other.

DP-SGD is equipped with privacy accounting (via, e.g., the moments accountant) to keep an online estimate of the (ε, δ) privacy guarantee. Privacy accounting helps prevent the privacy estimate from being corrupted by bugs introduced through the hyper-parameter selection strategy code. Moreover, it is a dynamic process: the privacy accounting mechanism can be changed and the ledger reprocessed if a tighter bound on the privacy loss is discovered after the data has been processed.

Privacy Ledger: The PrivacyLedger class maintains a record of the sum query events (one for each sampling event), which can then be processed by the accountant.

Privacy Estimate: TF Privacy provides two relevant methods (compute_rdp and get_privacy_spent) to derive the privacy guarantees achieved from three parameters (sampling probability, noise_multiplier, steps), explained in the last section of the blog.

Useful Metrics in Tensorflow Library

As the update procedure of the SGD algorithm operates on a small subset of records, the sampling process has the following characteristics:

Sampling Policies:
- Mini-batches are independent and identically distributed.
- Mini-batches are equally sized and independent.
- Mini-batches are equally sized and disjoint.
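The clip-and-noise step at the heart of DP-SGD, promised above, can be sketched as follows. This is a conceptual NumPy illustration rather than the TF Privacy API; the names dp_sgd_step, per_example_grads, and lr are illustrative assumptions:

import numpy as np

def dp_sgd_step(params, per_example_grads, l2_norm_clip, noise_multiplier, lr, rng):
    # 1. Clip each per-example gradient to a maximum L2 norm.
    #    This bounds the sensitivity of the update to any single training point.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, l2_norm_clip / (norm + 1e-12)))
    # 2. Sum the clipped gradients and add Gaussian noise whose standard
    #    deviation is calibrated to the clipping norm.
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * l2_norm_clip, size=params.shape)
    # 3. Average over the mini-batch and apply an ordinary gradient step.
    return params - lr * noisy_sum / len(per_example_grads)

The key design point is that clipping happens per example, before averaging: if noise were added to unclipped gradients, a single outlier record could still dominate the update and leak information.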
Sample Code Demonstration with Tensorflow Differential Privacy

Multi-class text classification of product reviews and complaints with TF Privacy

The code snippet below creates multiple text categories for product complaints by mapping the text of each product review to a specific category.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import preprocessing
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from nltk.corpus import stopwords

STOPWORDS = set(stopwords.words('english'))  # any English stop-word set works here
max_length = 2000  # padded sequence length (matches the model summary in the Results section)

df = pd.read_csv('data/complaints.csv')  # downloaded from www.kaggle.com/selener/multi-class-text-classification-tfidf/data
print(df.head(2).T)

# Create a new dataframe with two columns
df1 = df[['Product', 'Consumer complaint narrative']].copy()
df1 = df1[pd.notnull(df1['Consumer complaint narrative'])]
# Renaming second column
df1.columns = ['Product', 'Consumer_complaint']

# Percentage of complaints with text
total = df1['Consumer_complaint'].notnull().sum()
round((total / len(df) * 100), 1)

print(pd.DataFrame(df.Product.unique()).values)
df2 = df1.sample(10000, random_state=1).copy()

# Renaming categories
df2.replace({'Product': {
    'Credit reporting, credit repair services, or other personal consumer reports': 'CreditReporting',
    'Credit reporting': 'CreditReporting',
    'Credit card': 'CreditPrepaidCard',
    'Prepaid card': 'CreditPrepaidCard',
    'Credit card or prepaid card': 'CreditPrepaidCard',
    'Payday loan': 'PersonalLoan',
    'Payday loan, title loan, or personal loan': 'PersonalLoan',
    'Money transfer': 'TransferServices',
    'Virtual currency': 'TransferServices',
    'Money transfer, virtual currency, or money service': 'TransferServices',
    'Student loan': 'StudentLoan',
    'Checking or savings account': 'SavingsAccount',
    'Vehicle loan or lease': 'VehicleLoan',
    'Debt collection': 'DebtCollection',
    'Bank account or service': 'BankAccount',
    'Other financial service': 'FinancialServices',
    'Consumer Loan': 'ConsumerLoan',
    'Money transfers': 'MoneyTransfers'}}, inplace=True)
print(pd.DataFrame(df2.Product.unique()))

# Create a new column 'category_id' with label-encoded categories
le = preprocessing.LabelEncoder()
df2['category_id'] = le.fit_transform(df2['Product'])
category_id_df = df2[['Product', 'category_id']].drop_duplicates()
print(df2.head())

fig = plt.figure(figsize=(8, 6))
colors = ['grey'] * 10 + ['darkblue'] * 3
df2.groupby('Product').Consumer_complaint.count().sort_values().plot.barh(
    ylim=0, color=colors, title='NUMBER OF COMPLAINTS IN EACH PRODUCT CATEGORY\n')
plt.xlabel('Number of occurrences', fontsize=10)
plt.show()

product_comments = df2['Consumer_complaint'].values  # Collection of documents
product_type = df2['category_id'].values  # Target labels we want to predict (the 13 product categories)

complains = []
labels = []
for i in range(0, len(product_comments)):
    complain = product_comments[i]
    labels.append(product_type[i])
    complain = complain.replace('XX', '')
    complain = complain.replace('.', '')
    for word in STOPWORDS:
        token = ' ' + word + ' '
        complain = complain.replace(token, ' ')
        complain = complain.replace('  ', ' ')
    complains.append(complain)

tokenizer = Tokenizer()
tokenizer.fit_on_texts(complains)
word_index = tokenizer.word_index
vocab_size = len(word_index)
sequences = tokenizer.texts_to_sequences(product_comments)
padded = pad_sequences(sequences, maxlen=max_length)

train_size = int(len(product_comments) * 0.7)
validation_size = int(len(product_comments) * 0.2)

training_sequences = padded[0:train_size]
train_labels = labels[0:train_size]
validation_sequences = padded[train_size:train_size + validation_size]
validation_labels = labels[train_size:train_size + validation_size]
test_sequences = padded[train_size + validation_size:]
test_labels = labels[train_size + validation_size:]

training_label_seq = np.reshape(np.array(train_labels), (len(train_labels), 1))
validation_label_seq = np.reshape(np.array(validation_labels), (len(validation_labels), 1))
test_label_seq = np.reshape(np.array(test_labels), (len(test_labels), 1))
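The training snippet that follows references a FLAGS object (FLAGS.dpsgd, FLAGS.l2_norm_clip, FLAGS.noise_multiplier, FLAGS.microbatches, FLAGS.learning_rate, FLAGS.batch_size, FLAGS.epochs) and an embedding_dim constant that are assumed to be defined up front. A minimal sketch of that setup, assuming absl-style flags as in the TF Privacy tutorials (the default values here are illustrative, not necessarily the ones behind the reported results):

from absl import flags

FLAGS = flags.FLAGS
flags.DEFINE_boolean('dpsgd', True, 'Train with DP-SGD if True, with vanilla Adam otherwise')
flags.DEFINE_float('learning_rate', 0.001, 'Learning rate for training')
flags.DEFINE_float('noise_multiplier', 1.1, 'Ratio of noise standard deviation to the clipping norm')
flags.DEFINE_float('l2_norm_clip', 1.0, 'Clipping norm for per-example gradients')
flags.DEFINE_integer('batch_size', 32, 'Mini-batch size')
flags.DEFINE_integer('epochs', 10, 'Number of training epochs')
flags.DEFINE_integer('microbatches', 32, 'Number of microbatches (must evenly divide batch_size)')

# With absl, flags are parsed when the script is launched via app.run(main).
embedding_dim = 100  # GloVe 100-d vectors, matching glove.6B.100d.txt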
After splitting the dataset into training, validation and test sets, the model is trained using an LSTM-CNN based neural network. The accuracy is computed, along with the privacy budget, using TF Privacy.

import tensorflow as tf
from sklearn import metrics
from sklearn.metrics import confusion_matrix, accuracy_score
from tensorflow.compat.v1.train import AdamOptimizer
from tensorflow_privacy.privacy.optimizers.dp_optimizer import DPAdamGaussianOptimizer

embeddings_index = {}
with open('embedding/glove.6B/glove.6B.100d.txt') as f:
    for line in f:
        values = line.split()
        word = values[0]
        coefs = np.asarray(values[1:], dtype='float32')
        embeddings_index[word] = coefs

embeddings_matrix = np.zeros((vocab_size + 1, embedding_dim))
for word, i in word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embeddings_matrix[i] = embedding_vector
print(len(embeddings_matrix))

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size + 1, embedding_dim, input_length=max_length,
                              weights=[embeddings_matrix], trainable=False),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Conv1D(64, 5, activation='relu'),
    tf.keras.layers.MaxPooling1D(pool_size=4),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(13, activation='softmax')
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

if FLAGS.dpsgd:
    optimizer = DPAdamGaussianOptimizer(
        l2_norm_clip=FLAGS.l2_norm_clip,
        noise_multiplier=FLAGS.noise_multiplier,
        num_microbatches=FLAGS.microbatches,
        learning_rate=FLAGS.learning_rate)
else:
    optimizer = AdamOptimizer()
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])

num_epochs = 10
history = model.fit(training_sequences, training_label_seq, epochs=num_epochs,
                    validation_data=(validation_sequences, validation_label_seq), verbose=2)

plot_graphs(history, "accuracy")  # plot_graphs is defined further below
plot_graphs(history, "loss")

scores = model.evaluate(test_sequences, test_label_seq, verbose=0)
print("Accuracy: %.2f%%" % (scores[1] * 100))

output_test = model.predict(test_sequences)
print(np.shape(output_test))
final_pred = np.argmax(output_test, axis=1)
print(np.shape(final_pred))
print(np.shape(test_label_seq))
final_pred_list = np.reshape(final_pred, (len(test_sequences), 1))
print(np.shape(final_pred_list))

results = confusion_matrix(test_label_seq, final_pred_list)
print(results)

precisions, recall, f1_score, true_sum = metrics.precision_recall_fscore_support(test_label_seq, final_pred_list)
print("Multi-label Classification LSTM CNN Precision =", precisions)
print("Multi-label Classification LSTM CNN Recall=", recall)
print("Multi-label Classification LSTM CNN F1 Score =", f1_score)
print('Multi-label Classification Accuracy: {}'.format((accuracy_score(test_label_seq, final_pred_list))))

classes = np.array(range(0, 13))
# print('Log loss: {}'.format(log_loss(classes[np.argmax(test_label_seq, axis=1)], output_test)))

# Compute the privacy budget expended (compute_epsilon is defined in the
# 'Useful Metrics in TF Privacy library' section below).
if FLAGS.dpsgd:
    eps = compute_epsilon(FLAGS.epochs * 10000 // FLAGS.batch_size)  # based on total data size
    print('For delta=1e-5, the current epsilon is: %.2f' % eps)
else:
    print('Trained with vanilla non-private SGD optimizer')
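One caveat worth flagging: TF Privacy's DP optimizers clip gradients per microbatch, so the loss must be kept as a vector with one value per example rather than averaged over the mini-batch. The official Classification_Privacy tutorial (Reference 3) uses a per-example vector loss for exactly this reason; adapted to this model, the DP branch of the compile step would look something like:

# Per-example (vector) loss, so the DP optimizer can clip each
# microbatch's gradient before averaging and noising.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.losses.Reduction.NONE)
model.compile(loss=loss, optimizer=optimizer, metrics=['accuracy'])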
Results

The multi-class text classification results for each class (confusion matrix, precision, recall, accuracy, F1-score) are represented below.

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding (Embedding)        (None, 2000, 100)         2078700
_________________________________________________________________
dropout (Dropout)            (None, 2000, 100)         0
_________________________________________________________________
conv1d (Conv1D)              (None, 1996, 64)          32064
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 499, 64)           0
_________________________________________________________________
lstm (LSTM)                  (None, 64)                33024
_________________________________________________________________
dense (Dense)                (None, 13)                845
=================================================================
Total params: 2,144,633
Trainable params: 65,933
Non-trainable params: 2,078,700
_________________________________________________________________
Train on 7000 samples, validate on 2000 samples
Epoch 1/10
7000/7000 - 67s - loss: 1.7944 - accuracy: 0.3863 - val_loss: 1.5671 - val_accuracy: 0.4805
Epoch 2/10
7000/7000 - 61s - loss: 1.3817 - accuracy: 0.5553 - val_loss: 1.2037 - val_accuracy: 0.6305
Epoch 3/10
7000/7000 - 61s - loss: 1.0965 - accuracy: 0.6531 - val_loss: 1.0746 - val_accuracy: 0.6620
Epoch 4/10
7000/7000 - 61s - loss: 0.9596 - accuracy: 0.6901 - val_loss: 0.9218 - val_accuracy: 0.7005
Epoch 5/10
7000/7000 - 59s - loss: 0.8845 - accuracy: 0.7123 - val_loss: 0.9003 - val_accuracy: 0.7040
Epoch 6/10
7000/7000 - 64s - loss: 0.8186 - accuracy: 0.7330 - val_loss: 0.8818 - val_accuracy: 0.7080
Epoch 7/10
7000/7000 - 63s - loss: 0.7804 - accuracy: 0.7459 - val_loss: 0.8699 - val_accuracy: 0.7195
Epoch 8/10
7000/7000 - 65s - loss: 0.7466 - accuracy: 0.7540 - val_loss: 0.8770 - val_accuracy: 0.7135
Epoch 9/10
7000/7000 - 695s - loss: 0.7047 - accuracy: 0.7639 - val_loss: 0.9187 - val_accuracy: 0.7120
Epoch 10/10
7000/7000 - 67s - loss: 0.6657 - accuracy: 0.7799 - val_loss: 0.8833 - val_accuracy: 0.7200
Accuracy: 72.30%
(1000, 13)
(1000,)
(1000, 1)
(1000, 1)
[13x13 confusion matrix over the product categories]
Multi-label Classification LSTM CNN Precision = [0.22413793 0.28571429 0.73958333 0.75806452 0.80295567 0. 0.84821429 0.2 0.47826087 0.6875 0. 0.55555556 ...]
Multi-label Classification LSTM CNN Recall= [0.43333333 0.13333333 0.63392857 0.91135734 0.69361702 0. 0.83333333 0.06666667 0.33333333 0.63461538 0. 0.41666667 ...]
Multi-label Classification LSTM CNN F1 Score = [0.29545455 0.18181818 0.68269231 0.82767296 0.74429224 0. 0.84070796 0.1 0.39285714 0.66 0. 0.47619048 ...]
Multi-label Classification Accuracy: 0.723
For delta=1e-5, the current epsilon is: 8.07
In order to plot the results we use the following source code:

def plot_graphs(history, string):
    plt.plot(history.history[string])
    plt.plot(history.history['val_' + string])
    plt.xlabel("Epochs")
    plt.ylabel(string)
    plt.legend([string, 'val_' + string])
    plt.show()

The output generated from the above code shows the accuracy and loss curves across epochs.

Useful Metrics in TF Privacy library

In order to maintain the privacy guarantee, and to enforce that a single training point does not affect the outcome of learning, Tensorflow Privacy introduces the metrics and hyper-parameters stated below.

noise_multiplier: A parameter used to control how much noise is sampled and added to gradients before they are applied by the optimizer; more noise results in better privacy.

sampling probability: Denoted by q, it is used to compute the probability of an individual training point being included in a mini-batch.

steps: The number of steps/epochs the optimizer takes over the training data.

l2_norm_clip: The maximum Euclidean norm of each individual gradient computed on an individual training example from a mini-batch, giving control to bound the optimizer's sensitivity to individual training points.

delta: Bounds the probability of the privacy guarantee not holding. It is set to be less than the inverse of the training data size (i.e., the population size).

epsilon: Measures the strength of the privacy guarantee, by providing a bound on how much the probability of a particular model output can vary by including (or removing) a single training example. For this specific problem, the epsilon came out as 8.07, which could be further optimized by tuning the hyper-parameters or modifying the neural network.

We computed the epsilon using Rényi differential privacy with the Gaussian mechanism, dividing by 10,000 (the total number of input data samples), as follows. Here orders and rdp are scalars/arrays of the same length:

from tensorflow_privacy.privacy.analysis.rdp_accountant import compute_rdp, get_privacy_spent

def compute_epsilon(steps):
    # Rényi differential privacy with the Gaussian mechanism
    orders = [1 + x / 10. for x in range(1, 100)] + list(range(12, 64))
    sampling_probability = FLAGS.batch_size / 10000
    rdp = compute_rdp(q=sampling_probability,
                      noise_multiplier=FLAGS.noise_multiplier,
                      steps=steps,
                      orders=orders)
    # Delta is set to 1e-5 because product_reviews has 70000 training points.
    return get_privacy_spent(orders, rdp, target_delta=1e-5)[0]

Privacy budget: The difference the participation of a single data point makes is referred to as the privacy budget, where smaller privacy budgets correspond to stronger privacy guarantees.

Optimizers: Wrappers built around the conventional optimizers used in deep learning: DPAdagradGaussianOptimizer (differentially private version of AdagradOptimizer), DPAdamGaussianOptimizer (differentially private version of AdamOptimizer), and DPGradientDescentGaussianOptimizer (differentially private version of GradientDescentOptimizer).

num_microbatches: Specifies the number of microbatches into which the mini-batch is split. Gradients are computed over the several examples of a microbatch before being clipped; all microbatches in a mini-batch are processed, then averaged and noised. Increasing the number of microbatches often improves utility but slows down training.
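Since epsilon depends jointly on the sampling probability, the noise multiplier and the number of steps, a small sweep makes the trade-off concrete. The sketch below uses the two accountant methods named above; the q, steps and noise values are illustrative assumptions matching the 10,000-sample setup, not the exact training configuration:

from tensorflow_privacy.privacy.analysis.rdp_accountant import compute_rdp, get_privacy_spent

orders = [1 + x / 10. for x in range(1, 100)] + list(range(12, 64))
for noise_multiplier in [0.5, 1.0, 1.5, 2.0]:
    rdp = compute_rdp(q=32 / 10000,            # sampling probability
                      noise_multiplier=noise_multiplier,
                      steps=10 * 10000 // 32,  # epochs * samples / batch_size
                      orders=orders)
    eps = get_privacy_spent(orders, rdp, target_delta=1e-5)[0]
    print('noise_multiplier = %.1f  ->  epsilon = %.2f' % (noise_multiplier, eps))

Holding everything else fixed, a larger noise_multiplier yields a smaller epsilon (a stronger guarantee), typically at some cost in model accuracy.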
The complete source code, with more examples for text and image classification, is available at https://github.com/sharmi1206/differential-privacy-tensorflow

References

1. Machine Learning with Differential Privacy in TensorFlow: http://www.cleverhans.io/privacy/2019/03/26/machine-learning-with-differential-privacy-in-tensorflow.html
2. Privacy and Machine Learning: http://www.cleverhans.io/privacy/2018/04/29/privacy-and-machine-learning.html
3. Classification_Privacy tutorial: https://colab.research.google.com/github/tensorflow/privacy/blob/master/tutorials/Classification_Privacy.ipynb#scrollTo=RseeuA7veIHU
4. Scalable Private Learning with PATE: https://arxiv.org/pdf/1802.08908.pdf
5. Scalable Differentially Private Generative Student Model via PATE: https://arxiv.org/pdf/1906.09338.pdf