
ConvNet from scratch: just lovely Numpy, Forward Pass |Part 1|

by Manik Soni, January 6th, 2019

High-level frameworks and APIs make it much easier for us to implement such a complex architecture, but implementing it from scratch gives us ground-truth intuition of how ConvNets actually work.

Outline of the Article

We’ll be implementing the building blocks of a convolutional neural network! Each function we implement comes with detailed instructions that walk you through the steps needed:

  • Zero-Padding
  • Convolution forward
  • Pooling forward

We’ll use DLS (Deep Learning Studio) Jupyter notebooks to execute our modules. Check out DLS here; it comes with the libraries and frameworks required for Deep Learning pre-installed, so it’s good to go for DL.


A video walkthrough of Deep Cognition: “Hi everyone! In this article I’ll share with you several videos that will walk you through Deep Cognition’s Platform…” (towardsdatascience.com)


Generate stories using RNNs |pure Mathematics with code|: “Hi reader!” (hackernoon.com)

Zero Padding

Zero padding adds zeros around the borders of a given image.

Zero padding Visualization

Importance of zero-padding:

  • It prevents the input from shrinking too quickly as it passes through deeper layers. A special case is ‘same’ padding, which keeps the input the same size even after convolution.
  • It also helps prevent the loss of information at the borders of the image; without padding, information at the borders contributes far less to the output than information in the interior.

Let’s jump into the code:
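Here’s a minimal sketch of zero padding with NumPy’s np.pad; the function name zero_pad and the (m, n_H, n_W, n_C) channels-last batch layout are assumptions following the usual convention:

```python
import numpy as np

def zero_pad(X, pad):
    """Pad the height and width of a batch of images with zeros.

    X   -- batch of images, shape (m, n_H, n_W, n_C)
    pad -- number of zeros added on each side of the height and width
    """
    # Only the height and width axes are padded; batch and channel axes stay untouched.
    return np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                  mode='constant', constant_values=0)

# Quick check: a 4x4 image padded by 2 becomes 8x8.
x = np.random.randn(1, 4, 4, 3)
print(zero_pad(x, 2).shape)   # (1, 8, 8, 3)
```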

Single step of convolution

In this part, we’ll implement a single step of convolution, in which we apply the filter to a single position of the input. This will be used to build a convolutional unit, which:

  • Takes an input volume
  • Applies a filter at every position of the input
  • Outputs another volume (usually of different size)


Figure 2: Convolution operation with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide)
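A minimal sketch of this single step, assuming the input slice and the filter share the shape (f, f, n_C_prev); the name conv_single_step is an assumption:

```python
import numpy as np

def conv_single_step(a_slice_prev, W, b):
    """Apply one filter to a single position (slice) of the input.

    a_slice_prev -- slice of the input, shape (f, f, n_C_prev)
    W            -- weights of one filter, shape (f, f, n_C_prev)
    b            -- bias of that filter, shape (1, 1, 1)

    Returns a single scalar.
    """
    # Element-wise product between the slice and the filter, summed up, plus the bias.
    s = a_slice_prev * W
    return np.sum(s) + float(np.squeeze(b))
```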

Convolutional Neural Networks — Forward pass

In the forward pass, we’ll take many filters and convolve them over the input. Each ‘convolution’ gives you a 2D matrix output. You will then stack these outputs to get a 3D volume:
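A minimal sketch of the convolution forward pass, reusing the zero_pad and conv_single_step helpers sketched above; the output height and width follow the usual formula floor((n_prev - f + 2*pad) / stride) + 1:

```python
import numpy as np

def conv_forward(A_prev, W, b, stride, pad):
    """Forward pass of a convolution layer (naive loops, no vectorization).

    A_prev -- input activations, shape (m, n_H_prev, n_W_prev, n_C_prev)
    W      -- filters, shape (f, f, n_C_prev, n_C)
    b      -- biases, shape (1, 1, 1, n_C)
    """
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    (f, f, n_C_prev, n_C) = W.shape

    # Output size: floor((n_prev - f + 2*pad) / stride) + 1
    n_H = (n_H_prev - f + 2 * pad) // stride + 1
    n_W = (n_W_prev - f + 2 * pad) // stride + 1

    Z = np.zeros((m, n_H, n_W, n_C))
    A_prev_pad = zero_pad(A_prev, pad)          # helper sketched earlier

    for i in range(m):                          # loop over the batch
        for h in range(n_H):                    # loop over output height
            for w in range(n_W):                # loop over output width
                for c in range(n_C):            # loop over the filters
                    vert_start, horiz_start = h * stride, w * stride
                    a_slice = A_prev_pad[i,
                                         vert_start:vert_start + f,
                                         horiz_start:horiz_start + f, :]
                    # Each filter applied at each position gives one scalar of Z.
                    Z[i, h, w, c] = conv_single_step(a_slice, W[..., c], b[..., c])
    return Z
```

The four nested loops keep the logic explicit; a vectorized version would trade this readability for speed.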

Pooling layer

The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation and makes feature detectors more invariant to their position in the input. The two types of pooling layers are:

  • Max-pooling layer: slides an (f,f) window over the input and stores the max value of the window in the output.
  • Average-pooling layer: slides an (f,f) window over the input and stores the average value of the window in the output.
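A minimal sketch of the pooling forward pass covering both modes; pool_forward and its argument names are assumptions, and pooling here uses no padding:

```python
import numpy as np

def pool_forward(A_prev, f, stride, mode="max"):
    """Forward pass of the pooling layer.

    A_prev -- input, shape (m, n_H_prev, n_W_prev, n_C)
    f      -- height/width of the pooling window
    mode   -- "max" or "average"
    """
    (m, n_H_prev, n_W_prev, n_C) = A_prev.shape

    # No padding in pooling, so the spatial size shrinks.
    n_H = (n_H_prev - f) // stride + 1
    n_W = (n_W_prev - f) // stride + 1

    A = np.zeros((m, n_H, n_W, n_C))

    for i in range(m):                          # loop over the batch
        for h in range(n_H):                    # loop over output height
            for w in range(n_W):                # loop over output width
                for c in range(n_C):            # loop over the channels
                    vert_start, horiz_start = h * stride, w * stride
                    window = A_prev[i,
                                    vert_start:vert_start + f,
                                    horiz_start:horiz_start + f, c]
                    A[i, h, w, c] = window.max() if mode == "max" else window.mean()
    return A
```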

Complete Deep Learning Studio’s Jupyter Notebook!


Manik9/ConvNets_from_scratch: Implementation of ConvNets just by using Numpy (github.com)

Open DLS Notebook and Upload your Jupyter Notebook

If you like this article, do 👏 and share 😄. For more articles on Deep Learning, follow me on Medium and LinkedIn.

Thanks for reading 😃

Happy Numpy.