
Softmax Temperature and Prediction Diversity

by Harshit Sharma, July 17th, 2022

Too Long; Didn't Read

Temperature is a hyperparameter that controls the randomness of a model's predictions by scaling the logits before the softmax is applied. Dividing the logits by a temperature T sharpens or flattens the resulting distribution: higher temperatures make the sampled text more diverse and novel, lower temperatures make it more predictable, and like any other hyperparameter the right value has to be tuned.

Temperature is a hyperparameter of LSTMs (and of neural networks more generally) that controls the randomness of predictions by scaling the logits before the softmax is applied. Temperature scaling is widely used to improve performance on NLP tasks that rely on a softmax decision layer.


To see why it is useful, consider natural language generation (NLG), where we generate text by sampling novel sequences from a language model (using the decoder of a seq-to-seq architecture). At each time step of the decoding phase, we predict a token by sampling from a softmax distribution over the vocabulary with one of the available sampling techniques. In short, once the logits are obtained, the quality and diversity of the predictions are controlled by two things: the softmax distribution and the sampling technique applied to it.
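As a rough sketch of a single decoding step (the toy vocabulary and logits below are made up for illustration and do not come from any particular model):

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]       # toy 5-word vocabulary
logits = np.array([2.0, 1.0, 0.5, 0.2, -1.0])    # hypothetical decoder logits for one timestep

z = logits - logits.max()                        # subtract the max for numerical stability
probs = np.exp(z) / np.exp(z).sum()              # softmax over the vocabulary

next_token = np.random.choice(vocab, p=probs)    # sample the next token from the distribution
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```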


This article covers the first of these: tweaking the softmax distribution to control how diverse and novel the predictions are. Sampling techniques will be covered in a future article.


Fig 1 is a snapshot of how the prediction is made at one of the intermediate timesteps in the decoding phase.

Fig 1: Logits transformation by Softmax


But what is the issue here?


The generated sequence will have a predictable, generic structure. The reason is the low entropy, or randomness, of the softmax distribution: the likelihood of one particular word being chosen (the one at index 9 in the example above) is far higher than that of every other word. A predictable sequence is not a problem as long as the aim is to produce realistic sequences. But if the goal is to generate novel text, or an image that has never been seen before, randomness is the holy grail.


The Solution?


Increase the randomness. That is precisely what temperature scaling does: it controls the entropy of the probability distribution used for sampling, in other words, how surprising or predictable the next word will be. The scaling is done by dividing the logit vector by a value T, the temperature, before applying the softmax.

Fig 2: Temperature Scaling
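Here is a minimal sketch of the formula in Fig 2 (the function name and the logits are illustrative only, not taken from any particular library or model):

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    """Temperature-scaled softmax: p_i = exp(z_i / T) / sum_j exp(z_j / T)."""
    z = np.asarray(logits, dtype=float) / T      # divide the logit vector by the temperature T
    z = z - z.max()                              # subtract the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

logits = [2.0, 1.0, 0.5, 0.2, -1.0]              # hypothetical logits for a 5-word vocabulary
print(softmax_with_temperature(logits, T=0.5))   # sharper than the plain softmax (T=1)
print(softmax_with_temperature(logits, T=2.0))   # flatter than the plain softmax (T=1)
```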


The effect of this scaling can be visualized in Fig 3:

Fig 3: Visualizing the Effects of Temperature Scaling. Each word approaches an equal probability as the temperature increases


As the temperature increases, the distribution approaches a uniform distribution, giving each word an equal probability of being sampled and lending a more creative feel to the generated sequence. Too much creativity isn't good either: in the extreme case, the generated text may not make sense at all. Hence, like every other hyperparameter, the temperature needs to be tuned.
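For a concrete toy illustration of this flattening, here is how the same hypothetical logits used earlier behave under a few temperatures:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.5, 0.2, -1.0])    # hypothetical logits, 5-word vocabulary
for T in (0.1, 0.5, 1.0, 2.0, 10.0):
    z = logits / T
    probs = np.exp(z - z.max()) / np.exp(z - z.max()).sum()
    print(f"T={T:<4}", np.round(probs, 3))
# At T=0.1 almost all of the probability mass sits on the largest logit;
# at T=10 every word is close to 0.2, i.e. the uniform distribution.
```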


Conclusion:


The temperature controls the smoothness of the output distribution: a higher temperature increases the sensitivity to low-probability candidates. As T → ∞, the distribution becomes uniform, maximizing the uncertainty. Conversely, as T → 0, the distribution collapses to a point mass on the most likely token.
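A quick numerical check of these limits, reusing the hypothetical logits from the sketches above (Shannon entropy in nats; log(5) ≈ 1.609 is the maximum for a 5-word vocabulary):

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.5, 0.2, -1.0])    # hypothetical logits, 5-word vocabulary
for T in (0.01, 1.0, 100.0):
    z = logits / T
    p = np.exp(z - z.max()) / np.exp(z - z.max()).sum()
    nz = p[p > 0]                                # drop exact zeros to avoid log(0)
    entropy = -(nz * np.log(nz)).sum()           # Shannon entropy in nats
    print(f"T={T:<6} entropy = {entropy:.3f}")
# T=0.01  -> entropy ≈ 0       (point mass on the most likely token)
# T=100   -> entropy ≈ log(5)  (near-uniform distribution)
```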


The scope of temperature scaling is not limited to NLG. It is also used to calibrate deep learning models and in reinforcement learning, and it is a central ingredient of knowledge distillation. Below are links to these topics for further exploration.


References:

  1. Contextual Temperature for Language Modeling
  2. Distilling the Knowledge in a Neural Network
  3. On Calibration of Modern Neural Networks


