Learning Semantic Knowledge from Wikipedia: Summary


Too Long; Didn't Read

In this study, researchers exploit rich, naturally occurring structures on Wikipedia for various NLP tasks.

Author:

(1) Mingda Chen.

4.4 Summary

In this chapter, we described approaches to exploiting various naturally occurring structures on Wikipedia. In Section 4.1, we used hyperlinks as natural supervision for two kinds of entity representations: contextualized entity representations (CER) and descriptive entity representations (DER). For CER, we asked models to predict entity descriptions given the context sentences in which the entities appear. For DER, we asked models to predict the mention texts in the context sentences. We evaluated our proposed approaches on a benchmark for entity representations, where they showed promising results.
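
To make the hyperlink supervision concrete, here is a minimal sketch of how such training pairs could be assembled from a hyperlinked Wikipedia sentence. The data layout, function name, and masking scheme are assumptions made for illustration, not the authors' implementation.

    # Hypothetical sketch: building CER/DER training pairs from hyperlinks.
    def make_entity_pairs(sentence, links, descriptions):
        """sentence: raw sentence text
        links: list of (start, end, entity_id) hyperlink spans
        descriptions: dict from entity_id to the entity's description text"""
        cer_pairs, der_pairs = [], []
        for start, end, entity_id in links:
            mention = sentence[start:end]
            # CER: predict the entity description from the context sentence
            # in which the mention appears.
            cer_pairs.append((sentence, descriptions[entity_id]))
            # DER: predict the mention text itself; here the mention is
            # masked out of the context (one possible reading of the setup).
            masked = sentence[:start] + "[MASK]" + sentence[end:]
            der_pairs.append((masked, mention))
        return cer_pairs, der_pairs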


In Section 4.2, we used article structures to train sentence encoders. We cast the article structures as multi-task learning objectives, encouraging sentence-level models to encode information about the broader context in which a sentence is situated. We evaluated the models on a discourse-related benchmark and found that the losses that reach beyond individual sentences improved performance on the discourse tasks.
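
As a rough illustration of how article structure can be folded into training, the sketch below sums two auxiliary losses over structural labels. The specific heads, labels, and equal weighting are our assumptions, not the exact objectives used in the chapter.

    import torch
    import torch.nn as nn

    class StructuralMultiTaskEncoder(nn.Module):
        """Hypothetical multi-task wrapper around any sentence encoder
        that maps a batch of sentences to (batch, hidden_size) vectors."""
        def __init__(self, encoder, hidden_size, num_positions):
            super().__init__()
            self.encoder = encoder
            # Auxiliary heads driven by article structure (illustrative):
            self.same_section = nn.Linear(2 * hidden_size, 2)
            self.position = nn.Linear(hidden_size, num_positions)
            self.xent = nn.CrossEntropyLoss()

        def forward(self, sent_a, sent_b, same_section_label, position_label):
            h_a = self.encoder(sent_a)
            h_b = self.encoder(sent_b)
            pair = torch.cat([h_a, h_b], dim=-1)
            # Equal-weighted sum of structural losses; per-task weights
            # could also be tuned.
            return (self.xent(self.same_section(pair), same_section_label)
                    + self.xent(self.position(h_a), position_label))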


In Section 4.3, we used Wikipedia category graphs to induce knowledge related to textual entailment. We treated the parent-child relations in the category graphs as entailment relations and used them to build a training dataset for textual entailment. We found that training on the resulting datasets improved model performance on low-resource textual entailment tasks, and we obtained similar improvements when extending the approach to multilingual settings.
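
A minimal sketch of the category-graph idea follows, assuming simple sentence templates; the templates and the example edge are ours for illustration, not the dataset's actual construction procedure.

    # Hypothetical sketch: turning a parent-child category edge into an
    # entailment-style training pair.
    def edge_to_entailment(page_title, child_cat, parent_cat):
        premise = f"{page_title} is in the category '{child_cat}'."
        hypothesis = f"{page_title} is in the category '{parent_cat}'."
        # Child-to-parent follows the entailment direction; the reverse
        # (parent to child) would not be entailed.
        return {"premise": premise, "hypothesis": hypothesis,
                "label": "entailment"}

    # A plausible example edge from the category graph:
    pair = edge_to_entailment("Herbie Hancock",
                              "American jazz pianists",
                              "Jazz musicians")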


This paper is available on arXiv under a CC 4.0 license.