Privacy in machine learning has drawn growing attention this decade, driven by the exponential growth of machine learning itself. As public and private sectors adopt artificial intelligence, they need to protect data that holds sensitive personal information about individuals: employees' credit card and bank details, confidential health information, and patient disease records held in hospital systems.
As illustrated in the figure below, several techniques have become established for protecting data privacy while training models in distributed systems, either by working with local/global data or by obfuscating the trained models.
With ongoing AI research in differential privacy and industry adoption by dominant players like Microsoft, Google, and IBM, Google has released a differential privacy framework known as TensorFlow Privacy, equipped with a differentially private stochastic gradient descent (DP-SGD) algorithm that provides strong privacy guarantees.
This open-source framework has the following characteristics:
TensorFlow Privacy aims to integrate a privacy mechanism with the training procedure by decoupling:
This blog primarily discusses TensorFlow Privacy (the DP algorithm, its metrics, and test criteria), which guarantees privacy through differentially private algorithms. In this context, the blog outlines the two most widely used DP algorithms and demonstrates text classification of sensitive product reviews.
Differential privacy acts like a regularizer: a differentially private training procedure produces models that behave statistically similarly on two datasets differing in a single individual's record.
PATE (Private Aggregation of Teacher Ensembles) — The PATE framework achieves private learning by training an ensemble of ML models independently and then aggregating their predictions with added noise to generate a single output. The learning process partitions the private dataset into subsets, with no overlap between the data included in any pair of partitions, and trains one teacher model on each partition.
The total privacy budget of this framework grows with the number of labels requested, which limits its privacy guarantees. Moreover, it is at risk from adversarial attacks on the internal parameters of the published teachers, which restricts their publication.
To overcome these limitations, a scalable student model is trained by transferring the knowledge acquired by the teacher ensemble in a privacy-preserving manner, keeping the privacy budget at a constant value. The student selects inputs from a set of unlabeled public data and submits these inputs to the teacher ensemble to have them labeled. The noisy aggregation mechanism responds with private labels, restricting an attacker's access to the rest of the data. A minimal sketch of this noisy aggregation is shown below.
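The core of this aggregation is the Laplace noisy-max vote described in the PATE papers. The sketch below is illustrative, not TF Privacy code; noisy_aggregate, teacher_preds, and gamma are hypothetical names:

import numpy as np

def noisy_aggregate(teacher_preds, num_classes, gamma=0.05):
    # Count the teachers' votes for each class.
    votes = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    # Add Laplace noise with scale 1/gamma; a larger gamma means less noise.
    votes += np.random.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    # The noisy argmax becomes the private label returned to the student.
    return int(np.argmax(votes))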
For a detailed understanding of PATE, please refer to References 4 and 5 listed at the end.
Differentially Private Stochastic Gradient Descent (DP-SGD) — DP-SGD differs from PATE by making fewer assumptions about the ML task and providing provable privacy guarantees expressed in terms of differential privacy. TensorFlow (TF) Privacy wraps existing optimizers (e.g., SGD, Adam, …) into their differentially private counterparts. In addition, it allows tuning the parameters introduced by differentially private optimization, and it supports comparing different privacy mechanisms and accounting procedures.
Stochastic gradient descent (SGD) with a differentially private (DP) algorithm
In DP-SGD, the sensitivity of each gradient needs to be bounded, to limit how much each individual training point sampled into a mini-batch can influence the resulting gradient computation. This is accomplished by clipping the gradient computed on each training point and adding noise, which controls how much any single training point can possibly impact the model parameters.
The stochastic gradient descent algorithm, adapted to be differentially private, performs the following steps (sketched in the snippet after this list):
1. Sample a mini-batch of training points.
2. Compute the gradient of the loss on each individual training point in the mini-batch.
3. Clip each per-example gradient to a maximum Euclidean norm, so no single point dominates.
4. Add Gaussian noise calibrated to the clipping norm.
5. Average the noisy, clipped gradients and apply the descent step.
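As a minimal NumPy sketch of the clip-and-noise step (dp_average_gradient is a hypothetical name; TF Privacy performs this inside its optimizer wrappers):

import numpy as np

def dp_average_gradient(per_example_grads, l2_norm_clip, noise_multiplier):
    # Clip each per-example gradient to an L2 norm of at most l2_norm_clip.
    clipped = [g / max(1.0, np.linalg.norm(g) / l2_norm_clip)
               for g in per_example_grads]
    # Add Gaussian noise scaled to the sensitivity of the clipped sum.
    noisy_sum = np.sum(clipped, axis=0) + np.random.normal(
        0.0, noise_multiplier * l2_norm_clip, size=clipped[0].shape)
    # Average before the optimizer applies the update.
    return noisy_sum / len(per_example_grads)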
TensorFlow Privacy further builds on the principle of Rényi differential privacy, which expresses a tighter privacy guarantee for privacy-preserving algorithms and composes cleanly across heterogeneous mechanisms.
An algorithm A satisfies ε-differential privacy if, for every t in the range of A and for every pair of neighboring databases D and D′,
Pr[A(D) = t] ≤ e^ε · Pr[A(D′) = t].
In other words, the Rényi divergence of order ∞ between A(D) and A(D′) is at most ε. That is,
D_α(A(D) ‖ A(D′)) ≤ ε
where α = ∞. Rényi differential privacy (RDP) relaxes this by allowing α to be finite.
An algorithm is (α, ε)-RDP if the Rényi divergence of order α between any two adjacent databases is no more than ε.
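For reference, the Rényi divergence of order α (for finite α > 1) between distributions P and Q is defined as
D_α(P ‖ Q) = (1 / (α − 1)) · log E_{x∼Q}[(P(x) / Q(x))^α],
which recovers the worst-case (max) divergence above as α → ∞.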
The RDP accountant tracks cumulative privacy loss, and even allows two models to be compared objectively to determine which of the two is more privacy-preserving, as seen in the figure below.
Privacy Ledger: DP-SGD is equipped with privacy accounting (via, e.g., the moments accountant) to keep an online estimate of the (ε, δ) privacy guarantee. Privacy accounting helps prevent bugs in the hyper-parameter selection code from corrupting the privacy estimate. Moreover, accounting is a dynamic process: the accounting mechanism can be changed and the ledger reprocessed if a tighter bound on the privacy loss is discovered after the data has been processed.
The PrivacyLedger class maintains a record of the sum query events (one per sampling event), which can then be processed by the Rényi differential privacy (RDP) accountant; a sketch of typical usage follows.
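As a sketch, assuming the module layout of early TF Privacy (0.x) releases, a ledger is constructed from the population size and the sampling probability:

from tensorflow_privacy.privacy.analysis import privacy_ledger

# Assumed usage pattern from TF Privacy tutorials of this era; verify against
# the installed version, as these APIs have moved between releases.
ledger = privacy_ledger.PrivacyLedger(
    population_size=10000,              # total number of training examples
    selection_probability=250 / 10000)  # batch_size / population_size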
Privacy Estimate: TF Privacy provides two methods (compute_rdp and get_privacy_spent) for deriving the privacy guarantees achieved from three parameters (sampling probability, noise_multiplier, and steps), explained in the last section of the blog (Useful Metrics in Tensorflow Library).
Sampling Policies: Since each SGD update operates on a small subset of records, the sampling process matters to the privacy analysis. The RDP accountant used here assumes Poisson subsampling: each record enters a mini-batch independently with probability q, as sketched below.
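A minimal sketch of Poisson subsampling (poisson_sample is a hypothetical helper, shown only to illustrate the sampling assumption):

import numpy as np

def poisson_sample(num_records, q):
    # Each record enters the mini-batch independently with probability q,
    # so mini-batch sizes vary around q * num_records.
    mask = np.random.rand(num_records) < q
    return np.nonzero(mask)[0]  # indices of the sampled records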
Sample multi-class text classification of product reviews and complaints with TF Privacy
The code snippet below creates the text categories for product complaints by mapping the product labels attached to the review narratives to a smaller set of specific categories.
# Imports assumed by the snippets below (the original post does not show them);
# the TF Privacy module paths follow the 0.x releases.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn import preprocessing, metrics
from sklearn.metrics import confusion_matrix, accuracy_score
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.compat.v1.train import AdamOptimizer
from tensorflow_privacy.privacy.analysis.rdp_accountant import compute_rdp, get_privacy_spent
from tensorflow_privacy.privacy.optimizers.dp_optimizer import DPAdamGaussianOptimizer
from nltk.corpus import stopwords  # assumption: STOPWORDS comes from NLTK's English list

STOPWORDS = set(stopwords.words('english'))

df = pd.read_csv('data/complaints.csv') #downloaded from www.kaggle.com/selener/multi-class-text-classification-tfidf/data
print(df.head(2).T)
# Create a new dataframe with two columns
df1 = df[['Product', 'Consumer complaint narrative']].copy()
df1 = df1[pd.notnull(df1['Consumer complaint narrative'])]
# Renaming second column
df1.columns = ['Product', 'Consumer_complaint']
# Percentage of complaints with text
total = df1['Consumer_complaint'].notnull().sum()
print(round((total / len(df) * 100), 1))
print(pd.DataFrame(df.Product.unique()).values)
df2 = df1.sample(10000, random_state=1).copy()
# Renaming categories
df2.replace({'Product':
    {'Credit reporting, credit repair services, or other personal consumer reports': 'CreditReporting',
     'Credit reporting': 'CreditReporting',
     'Credit card': 'CreditPrepaidCard',
     'Prepaid card': 'CreditPrepaidCard',
     'Credit card or prepaid card': 'CreditPrepaidCard',
     'Payday loan': 'PersonalLoan',
     'Payday loan, title loan, or personal loan': 'PersonalLoan',
     'Money transfer': 'TransferServices',
     'Virtual currency': 'TransferServices',
     'Money transfer, virtual currency, or money service': 'TransferServices',
     'Student loan': 'StudentLoan',
     'Checking or savings account': 'SavingsAccount',
     'Vehicle loan or lease': 'VehicleLoan',
     'Debt collection': 'DebtCollection',
     'Bank account or service': 'BankAccount',
     'Other financial service': 'FinancialServices',
     'Consumer Loan': 'ConsumerLoan',
     'Money transfers': 'MoneyTransfers'}},
    inplace=True)
print(pd.DataFrame(df2.Product.unique()))
# Create a new column 'category_id' with label-encoded categories
le = preprocessing.LabelEncoder()
df2['category_id'] = le.fit_transform(df2['Product'])
category_id_df = df2[['Product', 'category_id']].drop_duplicates()
print(df2.head())
fig = plt.figure(figsize=(8, 6))
colors = ['grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey',
'grey', 'darkblue', 'darkblue', 'darkblue']
df2.groupby('Product').Consumer_complaint.count().sort_values().plot.barh(
ylim=0, color=colors, title='NUMBER OF COMPLAINTS IN EACH PRODUCT CATEGORY\n')
plt.xlabel('Number of occurrences', fontsize=10)
plt.show()
product_comments = df2['Consumer_complaint'].values # Collection of documents
product_type = df2['category_id'].values # Target labels to predict (the 13 product complaint categories)
complains = []
labels = []
for i in range(0, len(product_comments)):
    complain = product_comments[i]
    labels.append(product_type[i])
    complain = complain.replace('XX', '')  # strip the dataset's anonymization masks
    complain = complain.replace('.', '')
    for word in STOPWORDS:
        token = ' ' + word + ' '
        complain = complain.replace(token, ' ')
        complain = complain.replace('  ', ' ')  # collapse double spaces left by stop-word removal
    complains.append(complain)
tokenizer = Tokenizer()
tokenizer.fit_on_texts(complains)
word_index = tokenizer.word_index
vocab_size = len(word_index)
sequences = tokenizer.texts_to_sequences(complains)  # use the cleaned texts the tokenizer was fit on
max_length = 2000  # maximum sequence length; matches the embedding input_length in the model summary
padded = pad_sequences(sequences, maxlen=max_length)
train_size = int(len(product_comments) * 0.7)
validation_size = int(len(product_comments) * 0.2)
training_sequences = padded[0:train_size]
train_labels = labels[0:train_size]
validation_sequences = padded[train_size:train_size+validation_size]
validation_labels = labels[train_size:train_size+validation_size]
test_sequences = padded[train_size + validation_size:]
test_labels = labels[train_size + validation_size:]
training_label_seq = np.reshape(np.array(train_labels), (len(train_labels), 1))
validation_label_seq = np.reshape(np.array(validation_labels), (len(validation_labels), 1))
test_label_seq = np.reshape(np.array(test_labels), (len(test_labels), 1))
After splitting the dataset into training, validation, and test sets, the model (an LSTM-CNN neural network) is trained. The accuracy is computed, along with the privacy budget, using TF Privacy.
embeddings_index = {}
embedding_dim = 100  # dimensionality of the GloVe vectors (glove.6B.100d)
with open('embedding/glove.6B/glove.6B.100d.txt') as f:
    for line in f:
        values = line.split()
        word = values[0]
        coefs = np.asarray(values[1:], dtype='float32')
        embeddings_index[word] = coefs
embeddings_matrix = np.zeros((vocab_size + 1, embedding_dim))
for word, i in word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embeddings_matrix[i] = embedding_vector
print(len(embeddings_matrix))
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size + 1, embedding_dim, input_length=max_length,
                              weights=[embeddings_matrix], trainable=False),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Conv1D(64, 5, activation='relu'),
    tf.keras.layers.MaxPooling1D(pool_size=4),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(13, activation='softmax')
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
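# The FLAGS used below are not defined in the original post; a plausible
# definition via absl.flags (values illustrative, parsed by absl.app.run):
from absl import flags
flags.DEFINE_boolean('dpsgd', True, 'Train with DP-SGD if True, vanilla optimizer otherwise')
flags.DEFINE_float('learning_rate', 0.001, 'Learning rate for training')
flags.DEFINE_float('noise_multiplier', 1.1, 'Ratio of the noise std-dev to the clipping norm')
flags.DEFINE_float('l2_norm_clip', 1.0, 'Maximum L2 norm of per-microbatch gradients')
flags.DEFINE_integer('batch_size', 250, 'Mini-batch size')
flags.DEFINE_integer('epochs', 10, 'Number of training epochs')
flags.DEFINE_integer('microbatches', 250, 'Number of microbatches (must evenly divide batch_size)')
FLAGS = flags.FLAGS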
if FLAGS.dpsgd:
    optimizer = DPAdamGaussianOptimizer(
        l2_norm_clip=FLAGS.l2_norm_clip,
        noise_multiplier=FLAGS.noise_multiplier,
        num_microbatches=FLAGS.microbatches,
        learning_rate=FLAGS.learning_rate)
else:
    optimizer = AdamOptimizer()
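# Note (assumption based on TF Privacy's Keras tutorials): the DP optimizers
# compute per-microbatch gradients, so the loss is usually supplied as a
# vector loss, e.g. tf.keras.losses.SparseCategoricalCrossentropy(
#     reduction=tf.keras.losses.Reduction.NONE).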
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
num_epochs = 10
history = model.fit(training_sequences, training_label_seq, epochs=num_epochs,
                    validation_data=(validation_sequences, validation_label_seq), verbose=2)
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
scores = model.evaluate(test_sequences, test_label_seq, verbose=0)
print("Accuracy: %.2f%%" % (scores[1] * 100))
output_test = model.predict(test_sequences)
print(np.shape(output_test))
final_pred = np.argmax(output_test, axis=1)
print(np.shape(final_pred))
print(np.shape(test_label_seq))
final_pred_list = np.reshape(final_pred, (len(test_sequences), 1))
print(np.shape(final_pred_list))
results = confusion_matrix(test_label_seq, final_pred_list)
print(results)
precisions, recall, f1_score, true_sum = metrics.precision_recall_fscore_support(test_label_seq, final_pred_list)
print("Multi-label Classification LSTM CNN Precision =", precisions)
print("Multi-label Classification LSTM CNN Recall=", recall)
print("Multi-label Classification LSTM CNN F1 Score =", f1_score)
print('Multi-label Classification Accuracy: {}'.format((accuracy_score(test_label_seq, final_pred_list))))
classes = np.array(range(0, 13))
#print('Log loss: {}'.format(log_loss(classes[np.argmax(test_label_seq, axis=1)], output_test)))
# Compute the privacy budget expended.
if FLAGS.dpsgd:
    # steps = epochs * (number of training points // batch size); 10,000 is the sampled dataset size
    eps = compute_epsilon(FLAGS.epochs * 10000 // FLAGS.batch_size)
    print('For delta=1e-5, the current epsilon is: %.2f' % eps)
else:
    print('Trained with vanilla non-private Adam optimizer')
Results: The multi-class text classifier's confusion matrix, along with the precision, recall, and F1-score of each class and the overall accuracy, is shown below.
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding (Embedding) (None, 2000, 100) 2078700
_________________________________________________________________
dropout (Dropout) (None, 2000, 100) 0
_________________________________________________________________
conv1d (Conv1D) (None, 1996, 64) 32064
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 499, 64) 0
_________________________________________________________________
lstm (LSTM) (None, 64) 33024
_________________________________________________________________
dense (Dense) (None, 13) 845
=================================================================
Total params: 2,144,633
Trainable params: 65,933
Non-trainable params: 2,078,700
_________________________________________________________________
Train on 7000 samples, validate on 2000 samples
Epoch 1/10
7000/7000 - 67s - loss: 1.7944 - accuracy: 0.3863 - val_loss: 1.5671 - val_accuracy: 0.4805
Epoch 2/10
7000/7000 - 61s - loss: 1.3817 - accuracy: 0.5553 - val_loss: 1.2037 - val_accuracy: 0.6305
Epoch 3/10
7000/7000 - 61s - loss: 1.0965 - accuracy: 0.6531 - val_loss: 1.0746 - val_accuracy: 0.6620
Epoch 4/10
7000/7000 - 61s - loss: 0.9596 - accuracy: 0.6901 - val_loss: 0.9218 - val_accuracy: 0.7005
Epoch 5/10
7000/7000 - 59s - loss: 0.8845 - accuracy: 0.7123 - val_loss: 0.9003 - val_accuracy: 0.7040
Epoch 6/10
7000/7000 - 64s - loss: 0.8186 - accuracy: 0.7330 - val_loss: 0.8818 - val_accuracy: 0.7080
Epoch 7/10
7000/7000 - 63s - loss: 0.7804 - accuracy: 0.7459 - val_loss: 0.8699 - val_accuracy: 0.7195
Epoch 8/10
7000/7000 - 65s - loss: 0.7466 - accuracy: 0.7540 - val_loss: 0.8770 - val_accuracy: 0.7135
Epoch 9/10
7000/7000 - 695s - loss: 0.7047 - accuracy: 0.7639 - val_loss: 0.9187 - val_accuracy: 0.7120
Epoch 10/10
7000/7000 - 67s - loss: 0.6657 - accuracy: 0.7799 - val_loss: 0.8833 - val_accuracy: 0.7200
Accuracy: 72.30%
(1000, 13)
(1000,)
(1000, 1)
(1000, 1)
[[ 13 0 4 4 0 0 3 0 5 1 0 0]
[ 1 2 1 5 2 0 0 0 1 2 0 1]
[ 13 1 71 14 8 0 0 0 2 0 3 0]
[ 1 1 7 329 14 0 6 0 0 1 0 2]
[ 3 0 5 57 163 0 3 1 0 2 1 0]
[ 4 0 0 0 0 0 0 0 0 0 0 0]
[ 1 0 0 9 5 0 95 1 0 3 0 0]
[ 2 1 1 2 1 0 1 1 0 5 0 1]
[ 13 0 2 2 3 0 1 0 11 0 1 0]
[ 0 1 1 8 6 0 1 2 0 33 0 0]
[ 7 1 3 1 0 0 0 0 4 1 0 0]
[ 0 0 1 3 1 0 2 0 0 0 0 5]]
Multi-label Classification LSTM CNN Precision = [0.22413793 0.28571429 0.73958333 0.75806452 0.80295567 0.
0.84821429 0.2 0.47826087 0.6875 0. 0.55555556]
Multi-label Classification LSTM CNN Recall= [0.43333333 0.13333333 0.63392857 0.91135734 0.69361702 0.
0.83333333 0.06666667 0.33333333 0.63461538 0. 0.41666667]
Multi-label Classification LSTM CNN F1 Score = [0.29545455 0.18181818 0.68269231 0.82767296 0.74429224 0.
0.84070796 0.1 0.39285714 0.66 0. 0.47619048]
Multi-label Classification Accuracy: 0.723
For delta=1e-5, the current epsilon is: 8.07
In order to plot the results we use the following source code:
def plot_graphs(history, string):
    plt.plot(history.history[string])
    plt.plot(history.history['val_' + string])
    plt.xlabel("Epochs")
    plt.ylabel(string)
    plt.legend([string, 'val_' + string])
    plt.show()
and the output generated from the above code is given below.
To maintain the privacy guarantee and enforce that no single training point affects the outcome of learning, TensorFlow Privacy introduces the metrics described below.
noise_multiplier: A parameter that controls how much noise is sampled and added to gradients before they are applied by the optimizer. More noise yields better privacy (typically at some cost in utility).
sampling probability: Denoted by q, the probability of an individual training point being included in a mini-batch.
steps: The number of gradient steps the optimizer takes over the training data (epochs × number of training points / batch size).
l2_norm_clip: The maximum Euclidean norm of each individual gradient computed on a single training example from a mini-batch, giving control over the optimizer's sensitivity to individual training points.
delta: Bounds the probability of the privacy guarantee not holding; it is set to be less than the inverse of the training data size (i.e., the population size).
epsilon: Measures the strength of the privacy guarantee, providing a bound on how much the probability of a particular model output can vary by including (or removing) a single training example. For this specific problem, epsilon came out at 8.07, which could be further reduced by tuning the hyper-parameters or modifying the neural network. Here, orders and rdp are arrays of the same length. We computed epsilon using the Rényi differential privacy accountant for the Gaussian mechanism, with the sampling probability obtained by dividing the batch size by 10,000 (the total number of input data samples), as follows:
def compute_epsilon(steps):
    # Orders at which the RDP accountant evaluates the privacy loss.
    orders = [1 + x / 10. for x in range(1, 100)] + list(range(12, 64))
    sampling_probability = FLAGS.batch_size / 10000  # q = batch size / number of training points
    rdp = compute_rdp(q=sampling_probability,
                      noise_multiplier=FLAGS.noise_multiplier,
                      steps=steps,
                      orders=orders)  # Rényi differential privacy of the sampled Gaussian mechanism
    # Delta is set to 1e-5, below the inverse of the 10,000 sampled training points.
    return get_privacy_spent(orders, rdp, target_delta=1e-5)[0]
Privacy budget: The difference that the participation of a single data point makes is referred to as the privacy budget; smaller privacy budgets correspond to stronger privacy guarantees.
Optimizers: Wrappers built around conventional deep learning optimizers: DPAdagradGaussianOptimizer (differentially private version of AdagradOptimizer), DPAdamGaussianOptimizer (differentially private version of AdamOptimizer), and DPGradientDescentGaussianOptimizer (differentially private version of GradientDescentOptimizer).
num_microbatches: Specifies the number of microbatches into which each mini-batch is split. Gradients are computed and clipped per microbatch, then averaged and noised; for example, a mini-batch of 250 examples with num_microbatches=50 clips the gradient of each group of 5 examples. Increasing the number of microbatches often improves utility but slows down training.
The complete source code is available at https://github.com/sharmi1206/differential-privacy-tensorflow, with more examples for text and image classification.