I have played with the Keras official image_ocr.py example for a while and want to share my takeaways in this post.
The official example only covers training the model and leaves out the prediction part, so my final source code is available both on my GitHub and as a runnable Google Colab notebook. More technical details of OCR (optical character recognition), including the model structure and the CTC loss, will also be explained briefly in the following sections.
The input is an image containing a single line of text, and the text can be located anywhere in the image. The task for the model is to output the actual text given this image.
For example,
OCR example input & output
The official image_ocr.py example source code is quite long and may look daunting, but it can be broken down into several parts.
The model input is image data. We first feed the data to two convolutional layers to extract the image features, followed by Reshape and Dense layers to reduce the dimensionality of the feature vectors before letting the bidirectional GRU process the sequential data. The sequential data fed to the GRU are the horizontally divided image features. The final Dense output layer transforms the output for a given image into an array with the shape (32, 28), representing (# of horizontal steps, # of character labels).
Base model structure
And here is the part of the code to construct the Keras model.
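The snippet below is a condensed sketch of that construction rather than a line-for-line copy of image_ocr.py; the sizes (a 128×64 input, 16 convolution filters, two 2×2 poolings, a GRU size of 512) follow the official example's defaults and should be treated as assumptions here.

```python
from keras.layers import (Activation, Bidirectional, Conv2D, Dense,
                          GRU, Input, MaxPooling2D, Reshape)
from keras.models import Model

# Assumed sizes, matching the official image_ocr.py defaults.
img_w, img_h = 128, 64   # input image width and height
pool_size = 2            # pooled twice -> 128 / 4 = 32 timesteps
conv_filters = 16
time_dense_size, rnn_size = 32, 512
num_classes = 28         # a~z, space, and the CTC blank

input_data = Input(name='the_input', shape=(img_w, img_h, 1))
inner = Conv2D(conv_filters, (3, 3), padding='same', activation='relu')(input_data)
inner = MaxPooling2D(pool_size=(pool_size, pool_size))(inner)
inner = Conv2D(conv_filters, (3, 3), padding='same', activation='relu')(inner)
inner = MaxPooling2D(pool_size=(pool_size, pool_size))(inner)

# (32, 16, 16) feature map -> 32 horizontal steps of 16 * 16 = 256 features.
inner = Reshape(target_shape=(img_w // (pool_size ** 2),
                              (img_h // (pool_size ** 2)) * conv_filters))(inner)
inner = Dense(time_dense_size, activation='relu')(inner)

# The bidirectional GRU reads the horizontal slices as a sequence.
inner = Bidirectional(GRU(rnn_size, return_sequences=True))(inner)

# Per-timestep softmax over the 28 labels -> output shape (32, 28).
y_pred = Activation('softmax', name='softmax')(Dense(num_classes)(inner))
base_model = Model(inputs=input_data, outputs=y_pred)
```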
As we can see in the example image, the text can be located anywhere. How does the model align the input with the output, locate each character in the image, and turn it into text? That is where CTC comes into play; CTC stands for connectionist temporal classification.
input -> softmax output
Notice that the output of the model has 32 timesteps, but the target text might not have 32 characters. The CTC cost function allows the RNN to generate output like:
Output sequence -“a game” with CTC blanks inserted
CTC introduces a “blank” token that doesn’t itself translate into any character. Its job is to separate individual characters so that we can collapse repeated characters that are not separated by a blank.
So the decoding output for the previous sequence will be “a game”.
Let’s take a look at another example of the text “speed”.
Output sequence -“speed”
Following this decoding principle, we first collapse repeated characters that are not separated by the blank token, and then we remove the blank tokens themselves. Notice that if there were no blank token separating the two “e”s, they would be collapsed into one.
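A minimal sketch of this collapse rule in plain Python (using “-” for the blank token) makes the two steps concrete:

```python
def ctc_collapse(sequence, blank='-'):
    """Decode a raw CTC path: merge consecutive repeats, then drop blanks."""
    text = []
    prev = None
    for token in sequence:
        # Keep a token only if it differs from its predecessor and isn't blank.
        if token != prev and token != blank:
            text.append(token)
        prev = token
    return ''.join(text)

print(ctc_collapse('ss-pp-ee-e-dd'))  # -> 'speed' (blank keeps the two e's apart)
print(ctc_collapse('ss-pp-eee-dd'))   # -> 'sped'  (no blank, so the e's merge)
```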
In Keras, the CTC decoding can be performed in a single function, K.ctc_decode. The out argument is the model output, which consists of 32 timesteps of softmax probability values over the 28 tokens (a~z, space, and the blank token). We set the greedy parameter to perform the greedy search, which means the function will only return the most likely output token sequence.
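Here is roughly what that greedy call looks like, assuming base_model is the network built earlier and X_test is a batch of preprocessed images:

```python
import numpy as np
from keras import backend as K

out = base_model.predict(X_test)  # shape (batch_size, 32, 28)

# Every sample uses all 32 timesteps, so input_length is just 32 repeated.
decoded, log_prob = K.ctc_decode(out,
                                 input_length=np.ones(out.shape[0]) * out.shape[1],
                                 greedy=True)
# `decoded` holds one tensor of label indices per requested path, padded with -1.
labels_out = K.get_value(decoded[0])
```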
Alternatively, if we want the CTC decoder to return the top N possible output sequences, we can ask it to perform a beam search with a given beam width. One thing worth mentioning if you are new to the beam search algorithm: the top_paths parameter can be no greater than the beam_width parameter, since the beam width tells the beam search algorithm exactly how many top results to keep track of while iterating over the timesteps.
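The beam-search variant only changes the decoding arguments; the beam_width of 5 and top_paths of 3 below are arbitrary illustrative choices:

```python
# Keep the 5 best partial paths at each timestep, return the best 3 overall.
decoded, log_probs = K.ctc_decode(out,
                                  input_length=np.ones(out.shape[0]) * out.shape[1],
                                  greedy=False,
                                  beam_width=5,
                                  top_paths=3)
candidates = [K.get_value(seq) for seq in decoded]  # 3 candidate sequences per image
```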
Right now the output of the decoder is a sequence of token indices, so we just need to translate the numerical classes back into characters.
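A small helper does that translation; the class ordering below (indices 0-25 for a~z, 26 for space, 27 for the blank) is an assumption that must match how the training labels were encoded:

```python
alphabet = 'abcdefghijklmnopqrstuvwxyz '  # index 27 (the blank) has no character

def labels_to_text(labels):
    # ctc_decode pads short results with -1 and never emits the blank class,
    # so keep only indices that map into the alphabet.
    return ''.join(alphabet[int(c)] for c in labels if 0 <= c < len(alphabet))

print(labels_to_text(labels_out[0]))
```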
So far we have only talked about the decoding part of CTC. You may wonder how the model is trained with the CTC loss. Computing the CTC loss requires more than the true labels and the model’s predicted outputs: it also needs the output sequence length and the length of each of the true labels.
In Keras, the CTC loss is packaged in one function, K.ctc_batch_cost.
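The official example wires this into training with a Lambda layer that computes the loss inside the graph and then compiles with a pass-through loss; the sketch below follows that pattern, reusing input_data and y_pred from the model above, with the label padding length of 16 assumed:

```python
from keras import backend as K
from keras.layers import Input, Lambda
from keras.models import Model

def ctc_lambda_func(args):
    y_pred, labels, input_length, label_length = args
    return K.ctc_batch_cost(labels, y_pred, input_length, label_length)

# Extra inputs the CTC loss needs besides the softmax output y_pred.
labels = Input(name='the_labels', shape=[16], dtype='float32')       # padded true labels
input_length = Input(name='input_length', shape=[1], dtype='int64')  # 32 for every sample
label_length = Input(name='label_length', shape=[1], dtype='int64')  # real length of each label

loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')(
    [y_pred, labels, input_length, label_length])
model = Model(inputs=[input_data, labels, input_length, label_length],
              outputs=loss_out)

# The Lambda layer already computed the loss, so the compiled loss
# simply passes its value through.
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer='sgd')
```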
Here are the checkpoint results after training the model for 25 epochs.
Model performance check after 25 training epochs
If you have read this far and have been experimenting along on the Google Colab, you should now have a Keras OCR demo running. If you are still eager for more information about CTC and beam search, feel free to check out the following resources.
Sequence Modeling With CTC — an in-depth elaboration of the CTC algorithm and other applications it can be applied to, such as speech recognition and lip reading from video.
Coursera Beam search video lecture. Quick and easy to understand.
Don’t forget to get the source code from my GitHub as well as a runnable Google Colab notebook.
Originally published at www.dlology.com.