
Various Optimisation Techniques and their Impact on Generation of Word Embeddings


Too Long; Didn't Read

This tutorial looks at the word2vec model for generating word embeddings, covering several concepts of machine learning along the way: a single hidden-layer neural network, embeddings, and various optimisation techniques. We implement the skip-gram model on the dataset available in nltk.corpus (a sample of Project Gutenberg texts) and compare how different optimisers affect the resulting embeddings.


Shameless plug: We are a machine learning data annotation platform to make it super easy for you to build ML datasets. Just upload data, invite your team and build datasets super quick.

Welcome to the third part of a five-part tutorial series on Machine Learning and its applications. Check out Dataturks, a data annotations tool to make your ML life simpler and smoother.

Word embeddings are vector representations assigned to words, such that words with similar contextual usage get similar vectors. What is the use of word embeddings, you might ask? Well, if I am talking about Messi, you immediately know that the context is football… How does that happen? Our brains have associative memories, and we associate Messi with football…

To achieve the same, that is, to group similar words, we use embeddings. Embeddings initially started off with the one-hot encoding approach, where each word in the text is represented using an array whose length is equal to the number of unique words in the vocabulary.


Ex: Sentence 1: The mangoes are yellow. Sentence 2: The apples are red.

The unique words are {The, mangoes, are, yellow, apples, red}. Hence sentence 1 will be represented as [1,1,1,1,0,0] & sentence 2 will be [1,0,1,0,1,1].
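To make the example concrete, here is a minimal Python sketch (the variable names are illustrative, not from the original article) that builds this vocabulary and produces exactly those two binary vectors:

```python
# Minimal sketch: bag-of-words style binary encoding of the two example sentences.
sentences = [
    "The mangoes are yellow",
    "The apples are red",
]

# Build the vocabulary of unique words, preserving first-seen order.
vocab = []
for sentence in sentences:
    for word in sentence.split():
        if word not in vocab:
            vocab.append(word)

# Represent each sentence as a binary vector over the vocabulary.
vectors = [[1 if word in sentence.split() else 0 for word in vocab]
           for sentence in sentences]

print(vocab)    # ['The', 'mangoes', 'are', 'yellow', 'apples', 'red']
print(vectors)  # [[1, 1, 1, 1, 0, 0], [1, 0, 1, 0, 1, 1]]
```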

This approach works well for small datasets but does not scale efficiently to very large ones. Hence, several n-gram models have been implemented to address this; we shall not explore that area in this tutorial. The topic of interest is the word2vec model for the generation of word embeddings. This covers many concepts of machine learning: we shall learn about a single hidden-layer neural network, embeddings, and various optimisation techniques.

Any machine learning algorithm needs three components working hand in hand: representation of the classifier, evaluation of the hypothesis, and optimisation of the model for higher accuracy.

In the word2vec model, we have a neural network with a single hidden layer of size N, which is used to obtain word embeddings of dimension N. The way to visualise the embeddings is as follows…

Let’s understand the various terminologies…

Continuous Bag of Words Model (CBOW): Introduced by Tomas Mikolov in his paper, this model, in its simplest form, considers only one word per context. Hence the model predicts one target word given one context word. Let the vocabulary size be V.

CBOW model with only one word in context

The weight matrix between the input layer and the hidden layer is a V×N matrix, and each row of this matrix represents the embedding vector of one word. Note that the activation function of the hidden layer in this case is linear. The objective function is the conditional probability of observing the actual output word given the input context word. We need to maximise the objective function, that is, maximise the probability of predicting a word given its context… Simple, right!

CBOW also has a multi-word context variant, where instead of a single context word, the average of the words within a certain window size is taken and sent as the input to the neural net, as in the sketch below.
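As a rough illustration of that computation, here is a small NumPy sketch of the CBOW forward pass under assumed names (W_in is the V×N input-to-hidden matrix, W_out the N×V hidden-to-output matrix); it is not the article's original code:

```python
import numpy as np

V, N = 6, 3                      # vocabulary size and embedding dimension
W_in = np.random.rand(V, N)      # V x N: each row is one word's embedding vector
W_out = np.random.rand(N, V)     # N x V: hidden-to-output weights

def cbow_forward(context_ids):
    """Average the context embeddings (linear hidden layer), then softmax over the vocabulary."""
    h = W_in[context_ids].mean(axis=0)   # hidden layer: mean of the context rows
    scores = h @ W_out                   # one score per candidate target word
    exp = np.exp(scores - scores.max())  # numerically stable softmax
    return exp / exp.sum()

probs = cbow_forward([0, 2])             # e.g. context words with vocabulary ids 0 and 2
print(probs.argmax())                    # id of the most probable target word
```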

Skip-Gram Model

The skip-gram model, introduced in Mikolov et al., is the opposite of the CBOW model: the target word is now at the input layer, and the context words are at the output layer.

The objective function is the probability of observing the actual output (context) words given the input target word, where w_{O,c} is the actual output word in the c-th group of output words.

Objective function
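The equation image does not survive in text form; written out in the standard skip-gram notation consistent with the symbols above, the loss to be minimised is

$$
E = -\log p\bigl(w_{O,1}, w_{O,2}, \dots, w_{O,C} \mid w_I\bigr)
  = -\sum_{c=1}^{C} u_{j^{*}_{c}} + C \cdot \log \sum_{j'=1}^{V} \exp\bigl(u_{j'}\bigr)
$$

where w_I is the input target word, C is the number of context words, j*_c is the vocabulary index of the actual c-th output word w_{O,c}, and u_j is the score of the j-th output unit. Maximising the probability of the observed context words is equivalent to minimising E.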

The word2vec model implements skip-gram, and now… let's have a look at the code. Gensim also offers a faster word2vec implementation… We shall look at the source code for Word2Vec. Let's import all the required libraries and the dataset available in nltk.corpus, which is a sample of texts from Project Gutenberg.
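A minimal sketch of that setup, assuming NLTK's Gutenberg sample (the exact imports in the original code are not preserved in this export):

```python
import collections

import nltk
from nltk.corpus import gutenberg

nltk.download('gutenberg')   # fetch the corpus on first use

# Flatten the Gutenberg sample into one long list of lower-cased word tokens.
words = [w.lower() for w in gutenberg.words() if w.isalpha()]
print(len(words), 'tokens')
```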

Let's preprocess the dataset by getting rid of uncommon words and marking them as UNK tokens.
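A sketch of that preprocessing step (the vocabulary size of 50,000 and the helper name build_dataset are illustrative, following the common word2vec tutorial pattern rather than the original screenshots):

```python
import collections

def build_dataset(words, vocabulary_size=50000):
    """Keep the most frequent words and map everything else to the 'UNK' token."""
    count = [['UNK', -1]]
    count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
    dictionary = {word: i for i, (word, _) in enumerate(count)}

    data, unk_count = [], 0
    for word in words:
        index = dictionary.get(word, 0)   # index 0 is reserved for 'UNK'
        if index == 0:
            unk_count += 1
        data.append(index)

    count[0][1] = unk_count
    reverse_dictionary = {i: word for word, i in dictionary.items()}
    return data, count, dictionary, reverse_dictionary

data, count, dictionary, reverse_dictionary = build_dataset(words)
```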

Implementing the skip-gram model is the next step.
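One simple way to generate the (target, context) training pairs for skip-gram from the integer-encoded corpus is sketched below (skip_window is the half-width of the context window; the names are illustrative):

```python
def skipgram_pairs(data, skip_window=2):
    """Yield (target, context) index pairs: each word predicts its neighbours."""
    for i, target in enumerate(data):
        lo = max(0, i - skip_window)
        hi = min(len(data), i + skip_window + 1)
        for j in range(lo, hi):
            if j != i:
                yield target, data[j]

# Inspect a few training pairs as words.
for k, (t, c) in enumerate(skipgram_pairs(data)):
    print(reverse_dictionary[t], '->', reverse_dictionary[c])
    if k >= 5:
        break
```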

Training the skip-gram model results in the model learning the structure of the language.
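A condensed sketch of a training graph for this, assuming the TensorFlow 1.x API that was current when the article was written; the layer sizes and the use of the NCE loss are illustrative choices, not a copy of the original code:

```python
import math
import tensorflow as tf   # assumes the TensorFlow 1.x API

batch_size = 128
embedding_size = 128        # N, the dimension of the word embeddings
vocabulary_size = 50000     # V
num_sampled = 64            # negative samples drawn for the NCE loss

graph = tf.Graph()
with graph.as_default():
    train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])

    # The V x N matrix whose rows become the word embeddings.
    embeddings = tf.Variable(
        tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
    embed = tf.nn.embedding_lookup(embeddings, train_inputs)

    # Output-side weights, trained with noise-contrastive estimation.
    nce_weights = tf.Variable(
        tf.truncated_normal([vocabulary_size, embedding_size],
                            stddev=1.0 / math.sqrt(embedding_size)))
    nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

    loss = tf.reduce_mean(tf.nn.nce_loss(
        weights=nce_weights, biases=nce_biases,
        labels=train_labels, inputs=embed,
        num_sampled=num_sampled, num_classes=vocabulary_size))

    # The optimiser is the piece we swap out in the comparison below.
    optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
```

After training, each row of `embeddings` is the learned vector for one word in the vocabulary.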

Let’s visualise the embeddings.
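The usual approach is to project the high-dimensional vectors down to two dimensions with t-SNE and label each point with its word; here is a sketch using scikit-learn and matplotlib (illustrative, not the article's original plotting code):

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_embeddings(final_embeddings, reverse_dictionary, num_points=400):
    """Project the first num_points embeddings to 2-D and label each point."""
    tsne = TSNE(n_components=2, init='pca', random_state=0)
    low_dim = tsne.fit_transform(final_embeddings[:num_points])

    plt.figure(figsize=(15, 15))
    for i, (x, y) in enumerate(low_dim):
        plt.scatter(x, y)
        plt.annotate(reverse_dictionary[i], xy=(x, y),
                     xytext=(5, 2), textcoords='offset points')
    plt.show()
```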

Optimisation is used to refine the embeddings obtained. Let's review the various techniques that we know and use; I suggest you go through this, given the limitations of typing math on Medium.
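In the training sketch above, only the optimiser line changes between experiments; the TensorFlow 1.x optimisers being compared would be selected roughly like this (the learning rates shown are common defaults, not the article's tuned values):

```python
# Pick one; everything else in the graph stays the same.
optimizer = tf.train.ProximalAdagradOptimizer(learning_rate=1.0).minimize(loss)
# optimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0).minimize(loss)
# optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
# optimizer = tf.train.RMSPropOptimizer(learning_rate=0.001).minimize(loss)
```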

Results for comparison of various optimisers

Hence, we can conclude that RMSProp and Adam, which are state of the art, do not work well on these models. On the other hand, Proximal Adagrad and SGD work really well. Let's see the results of Proximal Adagrad and SGD.

Proximal Adaptive Gradient Descent Optimizer

Check whether words that often go together are represented close to each other in the images. Also… compare the locations of the numbers… in the two images… and decide which one is better accordingly!

Stochastic Gradient Descent Optimizer

This is the third tutorial in a five-part series… Excited for the next two… Share your thoughts and feedback at [email protected].