Notes on Automatic Filtering for Potentially Relevant Dialogs by @feedbackloop


Too Long; Didn't Read

This article describes how Sentence Transformers are used to automatically filter dialog datasets for potentially relevant dialogs, i.e., those containing user responses similar to known error-indicating sentences. It reports that roughly 25% of the data qualifies as potentially relevant and compares the sizes of the filtered subsets with the original datasets.

Authors:

(1) Dominic Petrak, UKP Lab, Department of Computer Science, Technical University of Darmstadt, Germany;

(2) Nafise Sadat Moosavi, Department of Computer Science, The University of Sheffield, United Kingdom;

(3) Ye Tian, Wluper, London, United Kingdom;

(4) Nikolai Rozanov, Wluper, London, United Kingdom;

(5) Iryna Gurevych, UKP Lab, Department of Computer Science, Technical University of Darmstadt, Germany.


Table of Links

Abstract & Introduction

Related Work

Datasets Examined

Manual Error Type Analysis and Taxonomies

Automatic Filtering for Potentially Relevant Dialogs

Statistical Analysis

Evaluation and Experiments

Discussion

Conclusion, Limitation, Acknowledgments, and References

A Integrated Error Taxonomy – Details

B Error-Indicating Sentences And Phrases

C Automatic Filtering – Implementation

D Automatic Filtering – Sentence-Level Analysis

E Task-Oriented Dialogs – Examples

F Effectiveness Of Automatic Filtering – A Detailed Analysis

G Inter-Annotator Agreement – Detailed Analysis

H Annotation Guidelines

I Hyperparameters and Baseline Experiments

J Human-Human Dialogs – Examples

5 Automatic Filtering for Potentially Relevant Dialogs

Since our study in Section 4 indicated that errors in system utterances are rare, we use Sentence Transformers (Reimers and Gurevych, 2019) to facilitate filtering the remaining dialogs of each dataset for potentially relevant ones, i.e., dialogs with user responses similar to the collected error-indicating sentences.


For each dataset, we decompose every dialog into turns (alternating utterances), extract the user responses, and segment them into sentences. Next, we pair these sentences with each of the error-indicating sentences and use a pretrained Sentence Transformer based on MPNet (Song et al., 2020) to calculate their cosine similarity (see Appendix C for implementation details). We consider a dialog to be potentially relevant if at least one of these pairs has a cosine similarity ≥ 0.5. Table 6 presents the sizes of the filtered subsets in comparison to the original datasets.
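The filtering step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation (which is detailed in Appendix C): the `encode` callable stands in for a real Sentence Transformer model (e.g., an MPNet-based one from the `sentence-transformers` library), `split_sentences` is a deliberately naive segmenter, and all helper names and the threshold default are assumptions for this sketch.

```python
import math
import re


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def split_sentences(text):
    """Naive sentence segmentation on terminal punctuation (illustrative only)."""
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]


def is_potentially_relevant(user_responses, error_sentences, encode, threshold=0.5):
    """Return True if any sentence of any user response reaches the cosine
    similarity threshold against any error-indicating sentence.

    `encode` maps a sentence to a vector; in practice this would be a
    pretrained Sentence Transformer's encode method.
    """
    error_vecs = [encode(s) for s in error_sentences]
    for response in user_responses:
        for sentence in split_sentences(response):
            v = encode(sentence)
            if any(cosine(v, e) >= threshold for e in error_vecs):
                return True
    return False
```

A dialog is kept as soon as a single sentence–sentence pair crosses the threshold, which mirrors the "at least one pair" criterion in the text; everything else in the dialog is irrelevant to the decision.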


Table 6: Size comparison between the filtered subsets and the original datasets. The numbers in brackets show the ratio of relevant dialogs to the original dataset sizes.


With 58.5%, MWoZ (Budzianowski et al., 2018) contains the largest share of potentially relevant dialogs. PC (Zhang et al., 2018) and WoW (Dinan et al., 2019) have the smallest shares (8.9% and 7.57%, respectively). Overall, only 25% of the data is potentially relevant, i.e., contains at least one user response similar to one of those observed in Section 4. Hereinafter, we refer to these dialogs as filtered dialogs. We provide a sentence-level analysis in Appendix D.


This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.