
Leveraging Two LLMs for Improved Sentiment Analysis Decisions

Too Long; Didn't Read

A multi-LLM negotiation framework for sentiment analysis involves two LLMs, one as a generator and the other as a discriminator, engaging in role-flipped negotiations to reach accurate sentiment decisions. If needed, a third LLM is introduced to resolve conflicting decisions. The approach leverages the complementary abilities of the models to improve sentiment analysis.

Authors:

(1) Xiaofei Sun, Zhejiang University;

(2) Xiaoya Li, Shannon.AI and Bytedance;

(3) Shengyu Zhang, Zhejiang University;

(4) Shuhe Wang, Peking University;

(5) Fei Wu, Zhejiang University;

(6) Jiwei Li, Zhejiang University;

(7) Tianwei Zhang, Nanyang Technological University;

(8) Guoyin Wang, Shannon.AI and Bytedance.

Abstract and Intro

Related Work

LLM Negotiation for Sentiment Analysis

Experiments

Ablation Studies

Conclusion and References

3 LLM Negotiation for Sentiment Analysis

3.1 Overview

In this section, we detail the multi-LLM negotiation framework for sentiment analysis: two LLMs act as the answer generator and the discriminator, respectively. We refer to one round of interaction between the generator and the discriminator as a negotiation. Negotiations repeat until a consensus is reached or the maximum number of negotiation turns is exceeded. Illustrations are shown in Figures 1 and 2.
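The negotiation loop is straightforward to express in code. Below is a minimal sketch in Python, assuming hypothetical generate and discriminate callables that wrap the two LLMs; the function names and return shapes are illustrative, not taken from the paper.

```python
def negotiate(generate, discriminate, test_input, max_turns=3):
    """Alternate generator/discriminator turns until the discriminator
    agrees or the turn budget is exhausted; return the agreed decision
    or None if no consensus is reached."""
    history = []  # transcript of earlier turns, visible to both models
    for _ in range(max_turns):
        decision, reasoning = generate(test_input, history)
        agrees, disc_decision = discriminate(test_input, reasoning, decision, history)
        if agrees:  # consensus reached
            return decision
        history.append((reasoning, decision, disc_decision))
    return None  # turn budget exceeded without consensus
```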

3.2 Reasoning-infused generator

The generator is backboned by a large language model. We query it with prompts that follow the in-context learning (ICL) paradigm, aiming to elicit a step-by-step reasoning chain and a decision on the sentiment polarity of the test input.


Prompts are composed of three elements: a task description, demonstrations, and a test input. The task description is a natural-language description of the task (e.g., "Please determine the overall sentiment of the test input."); the test input is a textual input from the test set (e.g., "The sky is blue."); demonstrations come from the task's train set. Each demonstration consists of three elements: an input, a reasoning chain, and a sentiment decision.
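Assembling such a prompt might look like the following sketch; the field labels and template wording are assumptions for illustration, not the paper's exact templates.

```python
TASK_DESCRIPTION = "Please determine the overall sentiment of the test input."

def build_generator_prompt(demonstrations, test_input):
    """demonstrations: list of (input, reasoning_chain, sentiment) triplets."""
    parts = [TASK_DESCRIPTION, ""]
    for text, reasoning, sentiment in demonstrations:
        parts += [f"Input: {text}",
                  f"Reasoning: {reasoning}",
                  f"Sentiment: {sentiment}",
                  ""]
    # the generator completes the reasoning and decision for the test input
    parts += [f"Input: {test_input}", "Reasoning:"]
    return "\n".join(parts)
```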


For each test input, we first retrieve the K nearest (input, sentiment decision) neighbors from the train set as demonstrations. Then, we transform the demonstrations into (input, reasoning chain, sentiment decision) triplets by prompting the generator to produce a reasoning chain for each. After concatenating the task description, demonstrations, and test input, we feed the prompt to the generator, which responds with a step-by-step reasoning chain and a sentiment decision.
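One plausible implementation of the retrieval and transformation steps is sketched below; the embedding function and the reasoning-elicitation prompt are assumptions, since this section does not specify them.

```python
import numpy as np

def knn_demonstrations(embed, train_set, test_input, k=4):
    """Return the k (input, sentiment) train pairs closest to the test input.
    `embed` maps a string to a 1-D numpy vector (an assumed helper)."""
    q = embed(test_input)
    def sim(text):
        v = embed(text)
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    return sorted(train_set, key=lambda ex: sim(ex[0]), reverse=True)[:k]

def add_reasoning_chains(generate, demos):
    """Expand (input, decision) pairs into (input, reasoning, decision)
    triplets by asking the generator to justify each known label."""
    triplets = []
    for text, sentiment in demos:
        prompt = (f"Input: {text}\nSentiment: {sentiment}\n"
                  "Explain step by step why this sentiment is correct.")
        triplets.append((text, generate(prompt), sentiment))
    return triplets
```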

3.3 Explanation-deriving discriminator

The discriminator is backboned by another LLM. Once the generator has produced an answer, the discriminator judges whether that decision is correct and provides a reasonable explanation.


To accomplish this, we first construct prompts for the answer discriminator. Each prompt is composed of four elements: a task description, demonstrations, a test input, and the response from the answer generator. The task description is a piece of text that describes the task in natural language (e.g., "Please determine whether the decision is correct."). Each demonstration is composed of six elements (input text, reasoning chain, sentiment decision, discriminator attitude, discriminator explanation, discriminator decision) and is constructed by prompting the answer discriminator to explain why the sentiment decision is correct for the input text.
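A discriminator prompt can be assembled along the same lines as the generator prompt; again, the field labels below are invented for illustration.

```python
DISC_TASK = "Please determine whether the decision is correct."

def build_discriminator_prompt(demos, test_input, reasoning, decision):
    """demos: list of 6-tuples (input, reasoning, decision,
    attitude, explanation, discriminator_decision)."""
    parts = [DISC_TASK, ""]
    for d_in, d_rsn, d_dec, d_att, d_exp, d_disc in demos:
        parts += [f"Input: {d_in}",
                  f"Reasoning: {d_rsn}",
                  f"Decision: {d_dec}",
                  f"Attitude: {d_att}",
                  f"Explanation: {d_exp}",
                  f"Discriminator decision: {d_disc}",
                  ""]
    # the discriminator completes the attitude, explanation, and decision
    parts += [f"Input: {test_input}",
              f"Reasoning: {reasoning}",
              f"Decision: {decision}",
              "Attitude:"]
    return "\n".join(parts)
```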


We then query the discriminator with the constructed prompt. The answer discriminator responds with a text string containing an attitude (i.e., yes or no) that denotes whether the discriminator agrees with the generator, an explanation of why it agrees or disagrees, and a discriminator decision that determines the sentiment of the test input.
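Recovering the structured fields from that free-form reply might look like this sketch, which assumes the reply echoes the labelled fields used in the prompt above:

```python
def parse_discriminator_response(text):
    """Extract (attitude, explanation, decision) from the reply."""
    attitude = explanation = decision = None
    for line in text.splitlines():
        low = line.strip().lower()
        if low.startswith("attitude:"):
            attitude = line.split(":", 1)[1].strip().lower()  # "yes" / "no"
        elif low.startswith("explanation:"):
            explanation = line.split(":", 1)[1].strip()
        elif low.startswith("discriminator decision:"):
            decision = line.split(":", 1)[1].strip()
    return attitude, explanation, decision
```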


Why Two LLMs but Not One? There are two reasons for using two different LLMs as the generator and the discriminator rather than having a single LLM play both roles: (1) if an LLM makes a mistake as the generator due to incorrect reasoning, it is likely to make the same mistake as the discriminator, since a generator and a discriminator backed by the same model tend to produce similar rationales; (2) using two separate models lets us take advantage of their complementary abilities.

3.4 Role-flipped Negotiation

After the two LLMs finish a negotiation, we ask them to flip roles and initiate a new negotiation in which the second LLM acts as the generator and the first LLM acts as the discriminator. We refer to the interaction of the two LLMs with flipped roles as a role-flipped negotiation. Likewise, the role-flipped negotiation ends when a consensus is reached or the maximum number of negotiation turns is exceeded.
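Reusing the negotiate helper from the sketch in Section 3.1, the role flip is simply a second call with the two models' roles swapped (the wrapper objects here are hypothetical):

```python
def paired_negotiations(llm_a, llm_b, test_input):
    """Run a normal and a role-flipped negotiation between two LLM
    wrappers, each exposing .generate and .discriminate callables."""
    first = negotiate(llm_a.generate, llm_b.discriminate, test_input)
    flipped = negotiate(llm_b.generate, llm_a.discriminate, test_input)
    return first, flipped
```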


When both negotiations reach an agreement and their decisions match, we can take either decision as the final one. If one negotiation fails to reach a consensus while the other does, we take the decision from the negotiation that reached a consensus as the final decision. However, if both negotiations reach a consensus but their decisions do not align, we require the assistance of an additional LLM, as explained in more detail below.
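These decision rules reduce to a few lines. In this sketch, None marks a negotiation that exhausted its turn budget without consensus; the section does not specify what happens when neither negotiation converges, so that branch is an assumption:

```python
def combine(first, flipped):
    """Merge the outcomes of the normal and role-flipped negotiations."""
    if first is not None and flipped is not None:
        # both converged: agree -> done, disagree -> escalate to a third LLM
        return first if first == flipped else "CONFLICT"
    # at most one converged: take its decision; if neither did, return None
    # (behavior for the no-consensus case is not specified in the paper)
    return first if first is not None else flipped
```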


Introducing a third LLM. If the decisions from the two negotiations do not align, we introduce a third LLM and conduct both a negotiation and a role-flipped negotiation with each of the two aforementioned LLMs. This yields six negotiation results in total, over which we vote: the decision that appears most frequently is taken as the sentiment polarity of the test input.
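The final vote is a simple majority over the six outcomes; this sketch drops failed negotiations before counting, which is an assumption:

```python
from collections import Counter

def vote(results):
    """Majority vote over the six negotiation outcomes."""
    decisions = [r for r in results if r is not None]  # drop failed runs
    return Counter(decisions).most_common(1)[0][0] if decisions else None
```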


This paper is available on arXiv under a CC 4.0 license.