Evaluating Systematic Generalization: The Use of ProofWriter and CLUTRR-SG in LLM Reasoning Research

Written by reckoning | Published 2025/10/28
Tech Story Tags: llm | multi-hop-reasoning | logical-reasoning | systematic-generalization | what-is-proofwriter | clutrr-sg | llm-benchmarking | llm-benchmarks

TL;DR: This article provides a detailed description of two multi-hop logical reasoning datasets: ProofWriter and CLUTRR-SG.

Abstract and 1. Introduction

  2. Background

  3. Method

  4. Experiments

    4.1 Multi-hop Reasoning Performance

    4.2 Reasoning with Distractors

    4.3 Generalization to Real-World Knowledge

    4.4 Run-time Analysis

    4.5 Memorizing Knowledge

  5. Related Work

  6. Conclusion, Acknowledgements, and References

A. Dataset

B. In-context Reasoning with Distractors

C. Implementation Details

D. Adaptive Learning Rate

E. Experiments with Large Language Models

A. Dataset

ProofWriter The ProofWriter [73] dataset contains 500k questions, answers, and proofs over natural-language rule bases. Each example in the dataset contains a set of facts, a set of rules, a hypothesis, and a label indicating whether the hypothesis is true, false, or unknown. The dataset comprises five subsets, named D0, D1, D2, D3, and D5, each with 100k examples. The questions in each subset require reasoning up to depth D (D = 0, 1, 2, 3, 5) to determine their answers. In our experiments, we focus only on the subsets that require greater reasoning depths (D2, D3, D5). We show an example from the dataset in Table 7. In these subsets, each set of facts and rules is mapped to 18 questions, where each question can be answered based on a subset of the facts and rules. Thus, some of the facts or rules can be irrelevant to a given question, and we call them distractors in Section 4.2. In the experiment on knowledge encoding with distractors, we encode all the facts in the model parameters and evaluate the model's ability to reproduce and reason over the correct facts. We show an example of the distractor and relevant knowledge for a question in Table 9. For detailed statistics on the two datasets, please see Table 6.
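To make the structure concrete, here is a minimal sketch of how a ProofWriter-style example could be represented and split into relevant knowledge versus distractors. This is illustrative only: the field names, the toy theory, and the helper function are assumptions, not the dataset's actual schema or the authors' code.

```python
# Illustrative sketch of a ProofWriter-style example.
# Field names and the toy facts/rules below are assumptions for illustration,
# not the dataset's actual schema or contents.

example = {
    "facts": [
        "Anne is kind.",
        "Bob is rough.",          # not needed for the question below -> distractor
    ],
    "rules": [
        "If someone is kind then they are nice.",
        "If someone is rough then they are big.",   # distractor rule
    ],
    "hypothesis": "Anne is nice.",
    "label": "true",              # one of: "true", "false", "unknown"
    "depth": 1,                   # reasoning depth D needed (0, 1, 2, 3, or 5)
    # Indices of the facts/rules actually needed to prove the hypothesis
    # (hypothetical bookkeeping fields used only for this sketch).
    "relevant_facts": [0],
    "relevant_rules": [0],
}

def split_knowledge(ex):
    """Separate the knowledge needed to answer the question from distractors."""
    relevant = (
        [ex["facts"][i] for i in ex["relevant_facts"]]
        + [ex["rules"][i] for i in ex["relevant_rules"]]
    )
    distractors = (
        [f for i, f in enumerate(ex["facts"]) if i not in ex["relevant_facts"]]
        + [r for i, r in enumerate(ex["rules"]) if i not in ex["relevant_rules"]]
    )
    return relevant, distractors

relevant, distractors = split_knowledge(example)
print("Relevant:", relevant)
print("Distractors:", distractors)
```

In the distractor experiment described above, the model would be given the full facts list (relevant and distractor statements alike) and still be expected to answer based only on the relevant subset.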

CLUTRR-SG CLUTRR-SG [28] is an evaluation dataset for inductive reasoning over family relations, adapted from the dataset of [71] for measuring systematic generalization. Each example in the dataset contains (i) a set of facts representing a family graph G = (V, E), where the nodes (V) are entities and the edges (E) are relationships; (ii) a question asking for the relationship between two entities (v1, vn ∈ V); and (iii) a target relationship e* ∈ E as the answer to the question. The facts are expressed as a list of (vi, ej, vk) tuples. The two entities in the question are separated by more than one hop in the graph. There are 272 unique entities, 20 relationship types, and nearly 1.5M possible facts in the dataset. Following the authors, we define the difficulty of an example by the number of family-graph edges (i.e., the number of reasoning hops required to determine a relation), where k edges (k-hop) correspond to k facts. We show an example from the dataset in Table 8.
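The sketch below shows how a k-hop CLUTRR-SG-style example could look: a family graph as (entity, relation, entity) tuples, a two-entity question, and the answer obtained by composing relations along the path. The names, relations, and composition table are invented for illustration and are not taken from the dataset.

```python
# Illustrative sketch of a CLUTRR-SG-style example: the family graph is a list
# of (entity, relation, entity) tuples, and difficulty is the number of edges
# (hops) on the path connecting the two queried entities.
# All names, relations, and the composition table are assumptions.

facts = [
    ("Alice", "mother", "Bob"),    # Alice is Bob's mother
    ("Bob",   "father", "Carol"),  # Bob is Carol's father
]

question = ("Alice", "Carol")      # what is Alice's relationship to Carol?
answer = "grandmother"             # target relation e*

# Tiny (assumed) composition table: chaining two relations yields a third.
COMPOSE = {
    ("mother", "father"): "grandmother",
}

def infer_relation(facts, query):
    """Chain relations along the fact path from query[0] to query[1]."""
    head, tail = query
    current, relation = head, None
    for subj, rel, obj in facts:              # facts are ordered along the path
        if subj != current:
            raise ValueError("facts do not form a path from the head entity")
        relation = rel if relation is None else COMPOSE[(relation, rel)]
        current = obj
    assert current == tail, "path does not end at the queried tail entity"
    return relation

print(infer_relation(facts, question))        # -> "grandmother"
print(f"difficulty: {len(facts)}-hop")        # k facts == k reasoning hops
```

Here the example is 2-hop because two edges (and therefore two facts) must be composed to recover the queried relation; harder splits simply use longer paths.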

Authors:

(1) Zeming Chen, EPFL ([email protected]);

(2) Gail Weiss, EPFL ([email protected]);

(3) Eric Mitchell, Stanford University ([email protected]);

(4) Asli Celikyilmaz, Meta AI Research ([email protected]);

(5) Antoine Bosselut, EPFL ([email protected]).


This paper is available on arXiv under a CC BY 4.0 DEED license.

