
When Deductive Reasoning Fails: Contextual Ambiguities in AI Models


Too Long; Didn't Read

Despite its effectiveness, the Natural Program-based deductive reasoning process has limitations, particularly in handling contextual ambiguities. One key failure case involves the term "pennies," which ChatGPT misinterprets as a unit of currency rather than a type of coin. Such ambiguities expose a gap in the deductive verification process, which cannot resolve contextual misunderstandings.

Authors:

(1) Zhan Ling, UC San Diego (equal contribution);

(2) Yunhao Fang, UC San Diego (equal contribution);

(3) Xuanlin Li, UC San Diego;

(4) Zhiao Huang, UC San Diego;

(5) Mingu Lee, Qualcomm AI Research;

(6) Roland Memisevic, Qualcomm AI Research;

(7) Hao Su, UC San Diego.

Abstract and Introduction

Related work

Motivation and Problem Formulation

Deductively Verifiable Chain-of-Thought Reasoning

Experiments

Limitations

Conclusion, Acknowledgements and References


A Deductive Verification with Vicuna Models

B More Discussion on Improvements of Deductive Verification Accuracy Versus Improvements on Final Answer Correctness

C More Details on Answer Extraction

D Prompts

E More Deductive Verification Examples

6 Limitations

While we have demonstrated the effectiveness of Natural Program-based deductive reasoning verification in enhancing the trustworthiness and interpretability of reasoning steps and final answers, it is important to acknowledge that our approach has limitations.


Table 7: Ablation of different values of k′ on the verification accuracy of reasoning chains using our Unanimity-Plurality Voting strategy. Experiments are performed on AddSub using GPT-3.5-turbo (ChatGPT).
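For context, the strategy ablated in Table 7 can be summarized in a short sketch. The Python code below is a minimal, illustrative rendering under the following assumptions: each reasoning step is checked k′ times by an LLM-based verifier and the per-step labels are majority-voted, a chain is accepted only if all of its steps pass (unanimity), and the final answer is chosen by plurality among accepted chains. The `verifier` callable and the fallback behavior when no chain survives are hypothetical stand-ins, not the paper's exact procedure.

```python
from collections import Counter
from typing import Callable, List, Tuple

def majority_step_vote(step: str, verifier: Callable[[str], bool], k_prime: int) -> bool:
    """Query the verifier k' times on one reasoning step and
    majority-vote the resulting valid/invalid labels."""
    votes = sum(verifier(step) for _ in range(k_prime))
    return 2 * votes > k_prime

def unanimity_plurality_vote(
    chains: List[Tuple[List[str], str]],   # each chain: (reasoning steps, final answer)
    verifier: Callable[[str], bool],       # hypothetical step-level verifier call
    k_prime: int,
) -> str:
    """Unanimity-Plurality Voting sketch: a chain counts toward the final
    answer only if every one of its steps passes verification (unanimity);
    the answer is then chosen by plurality over the surviving chains."""
    surviving = [
        answer
        for steps, answer in chains
        if all(majority_step_vote(s, verifier, k_prime) for s in steps)
    ]
    if not surviving:
        # Assumed fallback: plurality over all chains when none survive.
        surviving = [answer for _, answer in chains]
    return Counter(surviving).most_common(1)[0][0]
```

Increasing k′ trades additional verifier queries for more stable step-level labels, which is the axis Table 7 ablates.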


Table 8: An example question with ambiguous wording. The term "pennies" in this question can be interpreted as either a type of coin or a unit of currency. In this particular question, "pennies" is meant as a type of coin. However, the initial reasoning step by ChatGPT mistakenly treats "pennies" as a unit of currency, converting all of Melanie's money into "pennies" (highlighted in red). Consequently, all subsequent reasoning steps follow this flawed premise, leading to an incorrect reasoning trace. Our deductive verification is not yet able to detect such errors.


In this section, we analyze a common source of failure cases to gain deeper insight into the behavior of our approach. The failure case, shown in Tab. 8, involves the ambiguous interpretation of the term "pennies," which can be understood as either a type of coin or a unit of currency depending on the context. The ground-truth answer interprets "pennies" as coins, while ChatGPT interprets it as a unit of currency. Our deductive verification process is unable to detect such misinterpretations, since each reasoning step is internally consistent with the mistaken premise. Contextual ambiguities like this are common in real-world scenarios and highlight a current limitation of our approach.
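To make this failure mode concrete, the snippet below contrasts the two readings with made-up numbers (the original AddSub question's values are not reproduced in this section, so the figures here are purely illustrative). Both derivations are internally consistent, which is precisely why step-by-step deductive verification cannot flag the error: every step follows validly from the initial, mistaken premise.

```python
# Hypothetical illustration of the "pennies" ambiguity; the coin counts
# are made up and do not come from the original question.
dimes, nickels, pennies = 8, 3, 4

# Intended reading: "pennies" names a type of coin, so the question asks
# for a count of coins.
coins_total = dimes + nickels + pennies                # 15 coins

# Flawed reading (as in Table 8): "pennies" is a unit of currency, so all
# of the money is first converted into pennies before being combined.
pennies_total = dimes * 10 + nickels * 5 + pennies     # 99 pennies

# Each line above is arithmetically valid given its premise, so a
# step-by-step deductive check finds nothing to reject; only the premise
# about what "pennies" means differs.
```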


This paper is available on arXiv under a CC BY 4.0 DEED license.