A new kind of pooling layer for faster and sharper convergence

Written by singlasahil14 | Published 2017/09/30
Tech Story Tags: machine-learning


Some prior knowledge of conv nets is assumed

Introduction

In a max-pooling layer with a 2×2 window (used in almost all state-of-the-art vision models and even some NLP models), you throw away roughly 75% of the activations. I wanted to design a new kind of pooling layer that gets rid of some of the problems associated with this.

The problems are:

  1. Loss of spatial information. When you throw away 75% of the activations, the information about where those activations came from is lost.
  2. Max-pooling cannot use information from multiple activations.
  3. Backpropagation only improves the max-pooled activation, even though the other activations might have wrong values (a small illustration follows this list).
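
To make problems 1–3 concrete, here is a tiny NumPy illustration with made-up numbers: a 2×2 max-pool keeps one of the four activations, and in the backward pass only that activation receives any gradient.

```python
# Toy illustration of the problems above (made-up numbers): a 2x2 max-pool
# keeps one of the four activations, and during backprop only that one
# activation receives a gradient.
import numpy as np

window = np.array([[0.3, 0.9],
                   [0.1, 0.5]])               # one 2x2 pooling window
out = window.max()                            # 0.9 -- the other 3 values are discarded
grad_out = 1.0                                # gradient arriving from the layer above
grad_window = np.where(window == out, grad_out, 0.0)
print(grad_window)                            # only the 0.9 entry gets a gradient
```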

I wanted to design a new kind of pooling layer that solves as many of these problems as possible. In the process, I came up with a very simple trick that solves #2 and #3.

Idea and Motivation

Instead of taking the max of the 4 activations, sort the 4 activations in decreasing order. Multiply them by 4 weights [w1,w2,w3,w4] and add up the 4 values.
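
Here is a minimal NumPy sketch of the idea on a single 2×2 window. The weight values are made up for illustration; in the proposed layer they are learned parameters.

```python
import numpy as np

acts = np.array([0.3, 0.9, 0.1, 0.5])    # the 4 activations in one window
sorted_acts = np.sort(acts)[::-1]        # [0.9, 0.5, 0.3, 0.1], decreasing order

w = np.array([1.0, 0.0, 0.0, 0.0])       # these weights recover plain max pooling
print(np.dot(w, sorted_acts))            # 0.9, same as acts.max()

w = np.array([0.7, 0.2, 0.1, 0.0])       # other weights blend in the non-max values
print(np.dot(w, sorted_acts))            # 0.76
```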

The motivation behind the idea was very simple:

  1. This way the network remains capable of learning the good old max pooling, which corresponds to [w1,w2,w3,w4] = [1,0,0,0].
  2. The later layers have access to more information, so if the non-max activations are useful for decreasing the loss, the network can learn to use them.
  3. Gradient flows through all 4 values in the previous layer (compared to only 1 in max pooling).

So my hunch was that, for these reasons, this idea would do much better than max pooling. And this was one of those very rare deep learning experiments where everything worked out exactly as I had expected.

Concrete definition

Let the output of the layer before pooling be a tensor T of size [B, H, W, C]. I define a hyperparameter pool_range, which can be one of [1, 2, 3, 4]. pool_range specifies how many of the activations (in sorted order) are to be kept. That is, given the 4 activations of T that are to be pooled together, I first sort them into [a1, a2, a3, a4] with a1 ≥ a2 ≥ a3 ≥ a4, then keep the first pool_range of them. I call this new vector the activation vector.

I define a weight vector of size pool_range, [w1, ..., w_pool_range]. One caveat here: if any of these weights is negative, the interpretation that we are taking a weighted average of the activations sorted by strength no longer holds. So instead of using the weights directly, I take a softmax over the weight vector and multiply the result with the activation vector. To test the importance of the softmax, I ran a toy experiment on the cluttered-mnist dataset, with and without the softmax, at pool_range=3. The following are the results on the test dataset.

Comparison of accuracy and cross entropy on test data for cluttered-mnist dataset

Clearly, softmax is the winner here.

I could also have used different weights for different channels, but in order to keep this comparable to max pooling, I used the same 4 weights across all channels.
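
Putting the pieces together, here is a rough NumPy sketch of the forward pass just described (2×2 windows, stride 2, weights shared across channels). The function name and the hand-rolled softmax are my own illustration; in the actual TensorFlow layer the weight vector would be a trainable variable.

```python
import numpy as np

def sort_pool2d_forward(x, w, pool_range=4):
    """x: [B, H, W, C] with even H and W; w: raw weight vector of size pool_range."""
    B, H, W, C = x.shape
    # Gather each non-overlapping 2x2 window into a trailing axis of size 4.
    x = x.reshape(B, H // 2, 2, W // 2, 2, C)
    x = x.transpose(0, 1, 3, 5, 2, 4).reshape(B, H // 2, W // 2, C, 4)
    # Sort the 4 activations of every window in decreasing order and
    # keep the top pool_range of them (the "activation vector").
    top = -np.sort(-x, axis=-1)[..., :pool_range]
    # Softmax keeps the weights positive and summing to 1, so the output
    # is a proper weighted average of the sorted activations.
    p = np.exp(w - w.max())
    p = p / p.sum()
    # The same pool_range weights are applied to every channel.
    return np.tensordot(top, p, axes=([-1], [0]))   # [B, H/2, W/2, C]
```

With pool_range=1 this reduces to plain max pooling, since the softmax of a single weight is 1.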

Implementation details

I wrote the code for this layer in TensorFlow. TensorFlow's top_k op was fast on the CPU but terribly slow on the GPU, so instead of using it, I wrote my own routine for sorting the 4 floats. The code for testing sort_pool2d is given in this file. The code for importing and using it as a layer is in this file.
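
For intuition, one way to sort four values without top_k is a small sorting network built from pairwise min/max ops, which run elementwise and are cheap on the GPU. The sketch below illustrates that idea and is not necessarily the exact routine used in the repo; sort4_desc is a hypothetical name, and a, b, c, d stand for tensors holding the four corners of every 2×2 window.

```python
import tensorflow as tf

def sort4_desc(a, b, c, d):
    # Stage 1: order each pair so the larger value comes first.
    a, b = tf.maximum(a, b), tf.minimum(a, b)
    c, d = tf.maximum(c, d), tf.minimum(c, d)
    # Stage 2: the larger pair-winner is the overall max,
    # the smaller pair-loser is the overall min.
    a, c = tf.maximum(a, c), tf.minimum(a, c)
    b, d = tf.maximum(b, d), tf.minimum(b, d)
    # Stage 3: the two middle values may still be out of order.
    b, c = tf.maximum(b, c), tf.minimum(b, c)
    return a, b, c, d   # a >= b >= c >= d
```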

Results

I tried this idea on many different datasets and architectures and it outperformed the baseline max-pooling on all of them. All experiments were performed with all four values of pool_range: 1,2,3,4. pool_range=1 corresponds to max pooling.

Here are the results of my experiments:

Toy experiments on cluttered-mnist and fashion-mnist

cluttered-mnist

Comparison of accuracy and cross entropy on train data for cluttered-mnist dataset

Comparison of accuracy and cross entropy on test data for cluttered-mnist dataset

Values of best accuracy and cross_entropy throughout training on test data

The training loss and accuracy achieved by the network are the same, but the test accuracy for pool_range = 2, 3, 4 is far better than for standard max pooling.

fashion-mnist

Comparison of accuracy and cross entropy on training data for fashion-mnist dataset

Comparison of accuracy and cross entropy on test data for fashion-mnist dataset

Values of best accuracy and cross_entropy throughout training on test data

Again the results with pool_range>1 are far better.

Experiments on state of the art models

cifar-10 on resnet

Comparison of accuracy and cross entropy on training data for cifar-10 dataset

Comparison of accuracy and cross entropy on test data for cifar-10 dataset

Values of best accuracy and cross_entropy throughout training on test data

Again the results with pool_range>1 are better.

cifar-100 on resnet

Comparison of accuracy and cross entropy on training data for cifar-100 dataset

Comparison of accuracy and cross entropy on test data for cifar-100 dataset

Values of best accuracy and cross_entropy throughout training on test data

Again the results with pool_range>1 are better. The gains here are larger than on cifar-10, presumably because cifar-100 has less data per class. This suggests that the idea works particularly well for problems with less data per class.

omniglot on matching networks

I compared results on 1-shot, 20-way classification on the omniglot dataset, using the architecture proposed in the 'Matching Networks for One Shot Learning' paper.

Comparison of accuracy and loss on training data for omniglot dataset

Comparison of accuracy and loss on validation data for omniglot dataset

Values of best accuracy and loss throughout training on validation data

Note that these runs used an already regularized, state-of-the-art implementation of the paper, so the improvements come on top of the many existing tricks.

omniglot on learning to remember rare events paper

I compared results on 1-shot, 5-way classification on the omniglot dataset, using the architecture proposed in the 'Learning to Remember Rare Events' paper.

Comparison of loss on training data for omniglot dataset

Comparison of 1-shot, 2-shot accuracy on validation data for omniglot dataset

Convergence for pool_range=2 and pool_range=4 is much faster than with baseline max pooling.

Again, this speedup is over an already state-of-the-art implementation of the paper, so the improvements come on top of the many existing tricks.

Code and command line arguments for reproducing the results

All these experiments can be reproduced from this repo.

Command line arguments to reproduce the results on cluttered-mnist and fashion-mnist are given here.

Command line arguments to reproduce the results on cifar10 and cifar100 with resnet architecture are given here.

Command line arguments to reproduce the results on omniglot with matching networks architecture are given here.

Command line arguments to reproduce the results on omniglot with learning to remember rare events architecture are given here.

Conclusions

This pooling layer (which I call sort_pool2d) does much better than max_pool2d across all the datasets and architectures I tried. The time difference is also not significant, and the time per iteration could be further reduced by writing highly optimized C and CUDA code.

It does not solve the problem of lost spatial information, but it provides a promising direction for solving that problem.

