
My Journey Into Deep Learning

by Aaron Wong, May 26th, 2017

When I first read about Practical Deep Learning For Coders, it was a little hard for me to believe that I could train a neural network with 7 lines of code. Like most people, I had a very natural reaction: “WTF!?! That’s impossible!”
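For context, here is roughly what those seven lines look like. This is a sketch based on the course's Vgg16 wrapper; the data directory path and batch size are placeholders:

    # A sketch of the lesson 1 workflow using the course's Vgg16 wrapper.
    # 'data/dogscats/' is a placeholder path with train/ and valid/ subfolders.
    from vgg16 import Vgg16

    path = 'data/dogscats/'
    vgg = Vgg16()
    batches = vgg.get_batches(path + 'train', batch_size=64)
    val_batches = vgg.get_batches(path + 'valid', batch_size=64)
    vgg.finetune(batches)                      # swap the 1000-way ImageNet output for cats vs. dogs
    vgg.fit(batches, val_batches, nb_epoch=1)  # one pass over the training data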

Here I am, ten minutes into the first lesson, training VGG-16, a convolutional neural network, to classify images of cats and dogs. That was insanely easy to do. Too easy, in fact.

Okay, maybe we are getting too far ahead of ourselves.

With 98% validation accuracy after the very first epoch, the model was surprisingly good. “Aaron, you are a natural.” Yeah, thanks, but I know that already.

Did I really do anything extraordinary, though? No. After taking a quick peek at the Vgg16 class that comes with the rest of the code for Practical Deep Learning For Coders, it was apparent that vgg = Vgg16() initializes an instance of a class that hides away the not-so-pretty parts of building a neural net model.

For an example of what that looks like:

    def create(self):
        # Build a Keras Sequential model; the first layer preprocesses
        # the 3x224x224 input images (mean subtraction, channel reordering)
        model = self.model = Sequential()
        model.add(Lambda(vgg_preprocess,
                         input_shape=(3, 224, 224),
                         output_shape=(3, 224, 224)))

        # The five convolutional blocks of VGG-16
        self.ConvBlock(2, 64)
        self.ConvBlock(2, 128)
        self.ConvBlock(3, 256)
        self.ConvBlock(3, 512)
        self.ConvBlock(3, 512)

        # Fully connected layers, ending in a 1000-way ImageNet classifier
        model.add(Flatten())
        self.FCBlock()
        self.FCBlock()
        model.add(Dense(1000, activation='softmax'))

        # Load the pre-trained ImageNet weights
        fname = 'vgg16.h5'
        model.load_weights(get_file(fname, self.FILE_PATH + fname,
                                    cache_subdir='models'))

I am not going to pretend I know what 100% of this code means yet, but from what I understand, it initializes a Keras Sequential model, defines that model's input, output, and hidden layers, and loads pre-trained weights into it.
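To demystify the helpers a little: ConvBlock and FCBlock presumably look something like the following. This is a sketch based on the standard VGG-16 architecture and the Keras 1 API the course used at the time, not necessarily the exact course code:

    def ConvBlock(self, layers, filters):
        # A run of 3x3 convolutions followed by 2x2 max pooling:
        # the repeating pattern VGG-16 is built from
        model = self.model
        for _ in range(layers):
            model.add(ZeroPadding2D((1, 1)))
            model.add(Convolution2D(filters, 3, 3, activation='relu'))
        model.add(MaxPooling2D((2, 2), strides=(2, 2)))

    def FCBlock(self):
        # A 4096-unit fully connected layer with dropout for regularization
        model = self.model
        model.add(Dense(4096, activation='relu'))
        model.add(Dropout(0.5))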

Okay, so maybe I lied to myself about my so-called natural talent for deep learning, but I did fine-tune and fit the VGG model with the training and validation data. Okay, so maybe I am lying about that too… None of that takes away from how awesome it feels to teach my computer to tell cats and dogs apart!

Out of curiosity, I decided to run this code on the Dogs vs. Cats Redux data set from Kaggle. Admittedly, my first submission didn't do so well. Had I entered three months earlier, when the competition was still open, my score would have placed me at ~1100 out of the 1314 people on the leaderboard. Seeing that my validation accuracy was 98%, I decided to investigate.
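In case you're curious, the submission step looked roughly like this. A hypothetical sketch: it assumes the wrapper's test() helper returns the test batches plus class probabilities, and that column 1 holds the dog probability (class folders sort alphabetically, cats before dogs):

    # Hypothetical sketch: turn test-set predictions into the
    # id,label CSV that Dogs vs. Cats Redux expects
    import pandas as pd

    batches, preds = vgg.test(path + 'test', batch_size=64)
    ids = [f.split('/')[-1].split('.')[0] for f in batches.filenames]  # '.../1234.jpg' -> '1234'
    labels = preds[:, 1]  # predicted probability of 'dog'

    pd.DataFrame({'id': ids, 'label': labels}).to_csv('submission.csv', index=False)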

After spending two hours ripping my hair out, I came upon this nasty little fellow:

Log loss on Kaggle.

Kaggle uses this function to rank submissions. It turns out that when competing on Kaggle, it is a bad thing for a model to be so confident about a picture being a cat that it labels it exactly 0: every confident prediction that turns out wrong contributes log(0), i.e. negative infinity, to the log loss, which is what led to my abysmal score.
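For reference, this is the standard binary cross-entropy over the n test images, with true labels y_i and predicted probabilities ŷ_i:

$$\mathrm{LogLoss} = -\frac{1}{n}\sum_{i=1}^{n}\left[\,y_i\log(\hat{y}_i) + (1-y_i)\log(1-\hat{y}_i)\,\right]$$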

So how can I make it better? Why not just force the extreme values of 0 and 1 to be 0.02 and 0.98, respectively? BAM. Just like that, I go from ~1100 to ~500, putting me in the top 50% of the leaderboard.
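In code, that fix is a one-liner. A minimal sketch with NumPy, where preds is a hypothetical array of predicted dog probabilities:

    import numpy as np

    # Clip predictions away from 0 and 1 so a single confident
    # mistake can't contribute log(0) to the score
    preds = np.clip(preds, 0.02, 0.98)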

Okay, you caught me. So maybe I had to rely on Jeremy (the guy teaching the course) to realize that my overconfident model was the problem. Details, details.

Overall, training my first CNN was really fun. Can’t wait to begin lecture 2!