*Some prior knowledge of convolutional networks is assumed.*

#### Introduction

A max-pooling layer (used in almost all state-of-the-art vision models and even some NLP models) throws away roughly 75% of the activations. I wanted to design a new kind of pooling layer that gets rid of some of the problems associated with this. The problems are:

1. Loss of spatial information. When you throw away 75% of the activations, the information about where those activations came from is lost.
2. Max-pooling cannot use information from multiple activations.
3. Backpropagation only improves the max-pooled activation, even though the other activations might have wrong values.

I wanted a pooling layer that solves as many of these problems as possible. In the process, I came up with a very simple trick that solves #2 and #3.

#### Idea and Motivation

Instead of taking the max of the 4 activations, sort the 4 activations in decreasing order, multiply them by 4 weights \[w1, w2, w3, w4\], and sum the 4 products. The motivation behind the idea was very simple:

1. The network remains capable of learning good old max pooling, which corresponds to \[w1, w2, w3, w4\] = \[1, 0, 0, 0\].
2. The later layers have access to more information. So if the non-max activations are useful for decreasing the loss function, the network can learn to use them.
3. The gradient flows through all 4 values in the previous layer (compared to only 1 in max pooling).

My hunch was that, for these reasons, this idea would do much better than max pooling. And this was one of the very rare DL experiments where everything worked out exactly as I had expected.

#### Concrete definition

Let the output of the layer before pooling be a tensor T of size \[B, H, W, C\]. I define a hyperparameter pool\_range, which can be one of \[1, 2, 3, 4\] and specifies how many of the sorted activations are kept. That is, given the 4 activations of T that are to be pooled together, I first sort them as \[a1, a2, a3, a4\] where a1 ≥ a2 ≥ a3 ≥ a4, and keep the first pool\_range of them. I call this the **activation vector**.

I define a **weight vector** of size pool\_range, \[w{1}, ..., w{pool\_range}\]. One caveat: if any of these weights is negative, the interpretation that we are taking a weighted average of the activations sorted by strength no longer holds. So instead of using the weights directly, I take a softmax over the weight vector and multiply the result with the activation vector.

To test the importance of the softmax, I ran a toy experiment on the cluttered-mnist dataset, with and without softmax, at pool\_range=3. Following were the results on the test dataset.

*Figure: comparison of accuracy and cross entropy on test data for the cluttered-mnist dataset*

Clearly, softmax is the winner here. I could also have used different weights for different channels, but to keep this comparable to max\_pooling, I used the same 4 weights across channels.

#### Implementation details

I wrote the code for this layer in TensorFlow. TensorFlow's top\_k op was fast on the CPU but terribly slow on the GPU, so instead of using it, I wrote my own routine for sorting the 4 floats.
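To make the idea concrete, here is a minimal sketch of such a layer, not the repo's actual code: the name SortPool2D, the zero initialization of the rank weights, and the strided-slice window extraction are my own illustrative choices. It sorts the four activations of each 2x2 window with a five-comparator min/max sorting network and combines the top pool\_range of them with softmax-normalized weights:

```python
import tensorflow as tf

class SortPool2D(tf.keras.layers.Layer):
    """Sketch of 2x2 sort-pooling: sort each window's 4 activations in
    decreasing order, keep the top `pool_range`, and combine them with
    softmax-normalized learned weights (shared across channels)."""

    def __init__(self, pool_range=3, **kwargs):
        super().__init__(**kwargs)
        self.pool_range = pool_range

    def build(self, input_shape):
        # one learnable weight per rank; zeros -> softmax starts uniform
        self.w = self.add_weight(name="w", shape=(self.pool_range,),
                                 initializer="zeros")

    def call(self, x):  # x: [B, H, W, C], H and W assumed even
        # the four activations of every non-overlapping 2x2 window
        a = x[:, 0::2, 0::2, :]
        b = x[:, 0::2, 1::2, :]
        c = x[:, 1::2, 0::2, :]
        d = x[:, 1::2, 1::2, :]

        # five-comparator sorting network built from min/max
        # (avoids top_k, which was slow on the GPU)
        hi_ab, lo_ab = tf.maximum(a, b), tf.minimum(a, b)
        hi_cd, lo_cd = tf.maximum(c, d), tf.minimum(c, d)
        a1 = tf.maximum(hi_ab, hi_cd)   # largest
        a4 = tf.minimum(lo_ab, lo_cd)   # smallest
        mid_hi = tf.minimum(hi_ab, hi_cd)
        mid_lo = tf.maximum(lo_ab, lo_cd)
        a2 = tf.maximum(mid_hi, mid_lo)
        a3 = tf.minimum(mid_hi, mid_lo)

        kept = [a1, a2, a3, a4][: self.pool_range]
        p = tf.nn.softmax(self.w)  # convex weights over ranks
        return tf.add_n([p[i] * kept[i] for i in range(self.pool_range)])

# e.g. SortPool2D(pool_range=3)(tf.random.normal([8, 32, 32, 16]))
# -> shape [8, 16, 16, 16]; with pool_range=1 the softmax weight is
#    identically 1, recovering plain max pooling
```

With zero-initialized weights the layer starts as a uniform average of the kept activations, and it can move toward max pooling by growing the first rank weight.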
The code for testing sort\_pool2d is given in [this file](https://github.com/singlasahil14/sortpool2d/blob/master/sortpool2d_test.py). The code for importing and using it as a layer is [in this file](https://github.com/singlasahil14/sortpool2d/blob/master/sort_pool2d.py).

#### Results

I tried this idea on many different datasets and architectures, and it outperformed the baseline max-pooling on all of them. All experiments were performed with all four values of pool\_range: 1, 2, 3, 4. pool\_range=1 corresponds to max pooling. Here are the results of my experiments.

#### Toy experiments on cluttered-mnist and fashion-mnist

**cluttered-mnist**

*Figure: comparison of accuracy and cross entropy on train data for the cluttered-mnist dataset*

*Figure: comparison of accuracy and cross entropy on test data for the cluttered-mnist dataset*

*Figure: values of best accuracy and cross\_entropy throughout training on test data*

The train loss and accuracy achieved by the network are the same, but the validation accuracy for pool\_range = 2, 3, 4 is far better than for standard max pooling.

**fashion-mnist**

*Figure: comparison of accuracy and cross entropy on training data for the fashion-mnist dataset*

*Figure: comparison of accuracy and cross entropy on test data for the fashion-mnist dataset*

*Figure: values of best accuracy and cross\_entropy throughout training on test data*

Again, the results with pool\_range>1 are far better.

#### Experiments on state-of-the-art models

**cifar-10 on resnet**

*Figure: comparison of accuracy and cross entropy on training data for the cifar-10 dataset*

*Figure: comparison of accuracy and cross entropy on test data for the cifar-10 dataset*

*Figure: values of best accuracy and cross\_entropy throughout training on test data*

Again, the results with pool\_range>1 are better.

**cifar-100 on resnet**

*Figure: comparison of accuracy and cross entropy on training data for the cifar-100 dataset*

*Figure: comparison of accuracy and cross entropy on test data for the cifar-100 dataset*

*Figure: values of best accuracy and cross\_entropy throughout training on test data*

Again, the results with pool\_range>1 are better. The improvement here is larger than on cifar-10, presumably because cifar-100 has less data per class; this suggests that the idea works particularly well for problems with little data per class.

**omniglot on matching networks**

I compared the results on 1-shot, 20-way classification on the omniglot dataset using the architecture proposed in the '[Matching Networks for One Shot Learning](https://arxiv.org/abs/1606.04080)' paper.

*Figure: comparison of accuracy and loss on training data for the omniglot dataset*

*Figure: comparison of accuracy and loss on validation data for the omniglot dataset*

*Figure: values of best accuracy and loss throughout training on validation data*

Note that this used an already-regularized, state-of-the-art implementation of the paper, so these improvements come on top of the many existing tricks.

**omniglot on learning to remember rare events paper**

I compared the results on 1-shot, 5-way classification on the omniglot dataset using the architecture proposed in the '[Learning to Remember Rare Events](https://arxiv.org/abs/1703.03129)' paper.

*Figure: comparison of loss on training data for the omniglot dataset*

*Figure: comparison of 1-shot and 2-shot accuracy on validation data for the omniglot dataset*

Convergence for pool\_range=2 and pool\_range=4 is much faster than with baseline max-pooling. Again, this speedup is over an already state-of-the-art implementation of the paper, so these improvements come on top of the many existing tricks.

#### **Code and command line arguments for reproducing the results**

All these experiments can be reproduced from [this repo](https://github.com/singlasahil14/sortpool2d).
Command line arguments to reproduce the results are given at the following locations:

- cluttered-mnist and fashion-mnist: [here](https://github.com/singlasahil14/sortpool2d)
- cifar10 and cifar100 with the resnet architecture: [here](https://github.com/singlasahil14/sortpool2d/tree/master/resnet)
- omniglot with the [matching networks](https://arxiv.org/abs/1606.04080) architecture: [here](https://github.com/singlasahil14/sortpool2d/tree/master/matching_networks)
- omniglot with the [learning to remember rare events](https://arxiv.org/abs/1703.03129) architecture: [here](https://github.com/singlasahil14/sortpool2d/tree/master/learning_to_remember_rare_events)

#### **Conclusions**

This pooling layer (which I call sort\_pool2d) does much better than max\_pool2d across all datasets and architectures. The time difference is not significant, and the time per iteration can be reduced further with highly optimized C and CUDA code.

This does not solve the problem of lost spatial information, but it provides a promising direction for solving that problem.