Estimate Emotion Probability Vectors Using LLMs: Conclusions


Too Long; Didn't Read

This paper shows how LLMs (Large Language Models) [5, 2] may be used to estimate a summary of the emotional state associated with a piece of text.

This paper is available on arxiv under CC 4.0 license.

Authors:

(1) D. Sinclair, Imense Ltd, email: [email protected];

(2) W. T. Pye, Warwick University, email: [email protected].

5. Conclusions

LLMs are by their nature designed to return text strings in response to a text prompt, but this is not always the most useful format in which to return information. Internally, an LLM maintains probability distributions over tokens. This paper presents an example of how part of an emotion-based synthetic consciousness might be built by deriving a vector of emotion-descriptor probabilities over a dictionary of emotional terms. Such an emotion probability vector has a range of applications, including fine-grained review analysis, predicting responses to marketing messages, and offence detection. The emotion probability vector may also be a step on the road to synthetic consciousness, and could provide a means of making robots more empathetic by allowing them to predict how something they are about to say will make the recipient feel.
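The idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes per-word log-likelihoods for a dictionary of emotion terms have already been extracted from an LLM (the dictionary, the words, and the numeric values below are hypothetical). Softmax normalisation turns those scores into the emotion probability vector, and a cosine similarity between two such vectors gives one possible basis for fine-grained review comparison.

```python
import math

def emotion_probability_vector(logprobs: dict) -> dict:
    """Softmax-normalise per-word log-likelihoods into a probability
    vector over a dictionary of emotion terms."""
    m = max(logprobs.values())  # subtract max for numerical stability
    exps = {w: math.exp(lp - m) for w, lp in logprobs.items()}
    z = sum(exps.values())
    return {w: e / z for w, e in exps.items()}

def cosine_similarity(a: dict, b: dict) -> float:
    """Compare two emotion probability vectors (e.g. two reviews)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Hypothetical log-likelihoods an LLM might assign to emotion words
# describing how the writer of a product review feels.
review_logprobs = {
    "delighted": -1.2,
    "satisfied": -1.5,
    "disappointed": -3.5,
    "angry": -4.0,
}
vec = emotion_probability_vector(review_logprobs)
```

The resulting `vec` sums to one and preserves the model's relative preferences, so dominant emotions stand out while rarer ones retain small but non-zero mass.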


If reasonable responses are desired from an LLM, it may be good policy not to train it on the mad shouting that pervades anti-social media; analogously, it might be a good idea not to train young minds on it either.