Entity Disambiguation with Freebase
| Published in: | 2012 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology, Vol. 1, pp. 82-89 |
|---|---|
| Format: | Conference Proceeding |
| Language: | English |
| Published: | IEEE, 01.12.2012 |
| ISBN: | 9781467360579, 1467360570 |
| Summary: | Entity disambiguation with a knowledge base has become increasingly popular in the NLP community. In this paper, we employ Freebase as the knowledge base, which contains significantly more entities than Wikipedia and others. While huge in size, Freebase lacks context for most entities, such as the descriptive text and hyperlinks in Wikipedia, which are useful for disambiguation. Instead, we leverage two features of Freebase, namely the naturally disambiguated mention phrases (aka aliases) and the rich taxonomy, to perform disambiguation in an iterative manner. Specifically, we explore both generative and discriminative models for each iteration. Experiments on 2,430,707 English sentences and 33,743 Freebase entities show the effectiveness of the two features, where 90% accuracy can be reached without any labeled data. We also show that discriminative models with the proposed split training strategy are robust against the overfitting problem and consistently outperform the generative ones. |
| DOI: | 10.1109/WI-IAT.2012.26 |
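
The summary describes an iterative disambiguation loop that bootstraps from Freebase's naturally disambiguated aliases and its taxonomy. Below is a minimal, illustrative Python sketch of that general idea under simplifying assumptions: the alias table, type table, entity ids, and the `disambiguate` function are hypothetical toy stand-ins, and a simple type-counting score replaces the paper's generative and discriminative models, which this sketch does not reproduce.

```python
from collections import Counter

# Hypothetical toy data (not from the paper): alias string -> candidate
# Freebase entity ids, and entity id -> Freebase taxonomy types.
ALIAS_TO_ENTITIES = {
    "Jaguar Cars": ["/m/jaguar_cars"],                 # naturally disambiguated alias
    "jaguar": ["/m/jaguar_cars", "/m/jaguar_animal"],  # ambiguous alias
}
ENTITY_TYPES = {
    "/m/jaguar_cars": {"/automotive/company", "/business/brand"},
    "/m/jaguar_animal": {"/biology/organism_classification"},
}

def disambiguate(mentions, n_iters=3):
    """Resolve mentions, given as {doc_id: [alias, ...]}, to entity ids.

    Pass 0 links only the unambiguous aliases; each later pass scores the
    ambiguous ones against taxonomy-type preferences re-estimated from the
    current assignments.
    """
    type_prior = Counter()   # accumulated preference over taxonomy types
    resolved = {}            # (doc_id, alias) -> entity id

    for it in range(n_iters):
        for doc_id, aliases in mentions.items():
            for alias in aliases:
                candidates = ALIAS_TO_ENTITIES.get(alias, [])
                if len(candidates) == 1:
                    resolved[(doc_id, alias)] = candidates[0]
                elif candidates and it > 0:
                    # pick the candidate whose types best match the
                    # preferences learned so far
                    resolved[(doc_id, alias)] = max(
                        candidates,
                        key=lambda e: sum(type_prior[t] for t in ENTITY_TYPES[e]),
                    )
        # re-estimate type preferences from everything resolved so far
        type_prior = Counter(
            t for entity in resolved.values() for t in ENTITY_TYPES[entity]
        )
    return resolved

print(disambiguate({"doc1": ["Jaguar Cars", "jaguar"]}))
# -> both mentions map to /m/jaguar_cars, because the unambiguous alias
#    pulls the learned type preference toward automotive types
```

The key design point mirrored here is the bootstrapping: unambiguous aliases provide labels for free, and the taxonomy lets that evidence generalize to ambiguous mentions across iterations.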

