Estimate Emotion Probability Vectors Using LLMs: Acknowledgements and References

Too Long; Didn't Read

This paper shows how Large Language Models (LLMs) [5, 2] can be used to estimate a summary of the emotional state associated with a piece of text.
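The method itself is described in the body of the paper rather than in this section, but as a rough illustration of the general idea (deriving a probability vector over emotions from an LLM's token probabilities), a minimal sketch follows. The model name, prompt wording, emotion list, and scoring scheme below are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch (NOT the authors' exact method): score a fixed list of
# emotion words as continuations of a prompt with a causal LLM, then
# normalise the scores into an emotion probability vector.
# Model name, prompt, and emotion list are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumed; any causal LM would do
EMOTIONS = ["happy", "sad", "angry", "afraid", "surprised", "disgusted"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def emotion_probability_vector(text: str) -> dict[str, float]:
    """Return a normalised probability over EMOTIONS for the given text."""
    prompt = f"Review: {text}\nThe emotion expressed in this review is"
    # Assumes the prompt tokenises identically as a prefix of prompt+word,
    # which holds for typical BPE tokenisers such as Llama 2's.
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    scores = []
    for word in EMOTIONS:
        full = tokenizer(prompt + " " + word, return_tensors="pt")
        with torch.no_grad():
            logits = model(**full).logits            # [1, seq_len, vocab]
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        targets = full.input_ids[0, 1:]              # next-token targets
        # Sum log-probabilities of the emotion word's tokens only.
        word_positions = range(prompt_len - 1, targets.shape[0])
        scores.append(sum(log_probs[i, targets[i]].item() for i in word_positions))
    probs = torch.softmax(torch.tensor(scores), dim=0)
    return dict(zip(EMOTIONS, probs.tolist()))

if __name__ == "__main__":
    print(emotion_probability_vector("The food arrived cold and the staff were rude."))
```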

This paper is available on arXiv under a CC 4.0 license.

Authors:

(1) D. Sinclair, Imense Ltd ([email protected]);

(2) W. T. Pye, Warwick University ([email protected]).

6. Acknowledgements

The authors acknowledge the extraordinary generosity of Meta in releasing the model weights for their Llama 2 series of pre-trained Large Language Models on reasonable terms.

7. References

[1] OpenAI. GPT-4 technical report. 2023. URL https://arxiv.org/pdf/2303.08774.pdf.

[2] Meta GenAI, Thomas Scialom, and Hugo Touvron. Llama 2: Open foundation and fine-tuned chat models. 2023. URL https://arxiv.org/pdf/2307.09288.pdf.

[3] Rosalind W. Picard. Affective computing. MIT Press, 1997.

[4] J. Strabismus. The jedi religion: Is love the force? Amazon Kindle, 2013.

[5] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017. URL http://arxiv.org/abs/1706.03762.

[6] Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, and Lidong Bing. Sentiment analysis in the era of large language models: A reality check, 2023.