JPMC published a paper in 2021 highlighting their approach to Entity Linking. This article summarizes the problem statement, the solution, and other key technical components of the paper.

**What is Entity Linking?**

It is the task of assigning a unique identity to ambiguous mentions of named entities in a text. For example, the mention "Paris" in a text can be given a unique identity via a URL (the most common type of URI): wikipedia.org/wiki/Paris. Note that the type of URI used to uniquely identify the mentioned entity depends on the domain. For example, instead of a web address, we could use ISBNs if we were identifying books in a text.

JPMC was interested in mapping mentions of financial institutions in news articles to the entities stored in their internal knowledge base (a Knowledge Graph).

There are two sub-problems to solve:

- **Recognition:** extracting mentions from financial news articles. JPMC used spaCy for this.
- **Linking:** choosing the correct entity from the internal Knowledge Graph to link to each mention extracted in the previous step. The paper focuses on this step.

The following families of approaches can be used for the linking step.

**1. String Matching**

These approaches capture the "morphological" structure of entity names. The team experimented with (a) Jaccard, (b) Levenshtein, (c) Ratcliff-Obershelp (also known as Gestalt Pattern Matching), (d) Jaro-Winkler, and (e) N-gram cosine similarity. The drawback is that they focus only on the "syntactics" of the names and not on the semantics. A failure case: two mentions of "Lumier" are exactly the same string yet refer to two different entities, and string similarity alone cannot tell them apart.

**2. Context Similarity Methods**

These methods use the contexts around mentions and entities to produce a similarity score. The context of a mention is the text to its left and right, whereas the context of an entity is all the data stored in the KG for that entity. Cosine or Jaccard similarity can then be computed on top of the context vectors.

**3. ML Classification**

Naive Bayes, Logistic Regression, and SVM models are trained on (mention, entity) pairs to predict which ones should be linked.

**4. Learning to Rank (LTR)**

These models work in tandem with the ML approaches, which might return multiple candidate (mention, entity) pairs. LTR methods narrow the candidates down to the most probable one.

The idea behind JPMC's model is to capture both the semantic distance (the meaning that the mention or entity stands for) and the syntactic distance (the character composition of the name) between names, and to train the model with a contrastive loss function. We will see below how both of these distances are calculated, step by step.

**Step 1: Obtain embeddings for Entities and Mentions**

To compute both distances, the authors propose using embeddings for the mentions as well as for the entities in the KG.

Entity embeddings are obtained with a Triplet Loss function. For each entity, they used 10 positive and 10 negative samples, making 10 <entity, positive word, negative word> triplets.

Unlike the entity embeddings, which are precomputed, the mention embeddings are trained on the go: the embedding matrix is learned during training itself.

**Step 2: Calculate the Syntactic Distance score**

Before going further, it is worth mentioning the "Wide & Deep" architecture, which was introduced by Google in 2016; you can find their official blog post here. We won't go into the details, but in summary, it is an architecture with two components: the Wide component and the Deep component.

The Syntactic Distance score is calculated by the WIDE part, which consists of a Linear Siamese Network. The Siamese network outputs a vector for the entity and a vector for the mention, which are then compared using Euclidean distance.

**Step 3: Calculate the Semantic Distance score**

The Semantic Distance score is calculated by the DEEP part. Here, eₖ is the pre-trained embedding of the entity ("Apache Corp" in the paper's example) computed in Step 1. To obtain the embedding for the mention, its left and right context words are fed into a Bi-LSTM network that learns the mention representation. The embedding vectors of the mention (Vₘ) and the entity (Vₑ) are then used to compute the Euclidean distance.

**Step 4: Compute the Contrastive Loss**

The syntactic and semantic distances are combined in a weighted fashion, and a contrastive loss is computed on the combined distance, where Y is the ground-truth label: 1 indicates that mention m and entity e are matched, 0 otherwise. A rough code sketch of how Steps 2 to 4 fit together is given below.
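The sketch below is a minimal PyTorch illustration of the two-branch idea: a shared linear ("Siamese") encoder for the syntactic branch, a Bi-LSTM context encoder for the semantic branch, a weighted combination of the two Euclidean distances, and a standard contrastive loss. The layer sizes, the character-level input features, the alpha weighting, and the exact loss form are my own illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class JELScorer(nn.Module):
    """Toy wide-and-deep style scorer; an illustration, not the paper's exact model."""

    def __init__(self, char_feat_dim, word_vocab_size, word_dim=100, hidden=128, alpha=0.5):
        super().__init__()
        self.alpha = alpha  # assumed weighting between syntactic and semantic distances
        # WIDE branch: one shared ("Siamese") linear layer over character-level features
        self.wide = nn.Linear(char_feat_dim, hidden)
        # DEEP branch: Bi-LSTM over the mention's left/right context words
        self.word_emb = nn.Embedding(word_vocab_size, word_dim)
        self.context_lstm = nn.LSTM(word_dim, hidden // 2,
                                    batch_first=True, bidirectional=True)

    def forward(self, mention_chars, entity_chars, context_ids, entity_emb):
        # mention_chars / entity_chars: (B, char_feat_dim) bag-of-character-n-gram vectors
        # context_ids: (B, T) word ids of the mention's context
        # entity_emb:  (B, hidden) pre-trained entity embedding (e_k from Step 1)

        # Syntactic distance: the same linear encoder applied to both name representations
        v_m_syn = self.wide(mention_chars)
        v_e_syn = self.wide(entity_chars)
        d_syn = F.pairwise_distance(v_m_syn, v_e_syn)

        # Semantic distance: mean-pooled Bi-LSTM states (V_m) vs. the entity embedding (V_e)
        ctx, _ = self.context_lstm(self.word_emb(context_ids))
        v_m_sem = ctx.mean(dim=1)
        d_sem = F.pairwise_distance(v_m_sem, entity_emb)

        # Weighted combination of the two distances
        return self.alpha * d_syn + (1.0 - self.alpha) * d_sem


def contrastive_loss(distance, y, margin=1.0):
    """Standard contrastive loss: pull matched pairs (y = 1) together,
    push mismatched pairs (y = 0) at least `margin` apart."""
    return (y * distance.pow(2)
            + (1.0 - y) * F.relu(margin - distance).pow(2)).mean()
```

Training would score labeled (mention, entity) pairs with JELScorer and minimize contrastive_loss against the ground-truth Y from Step 4; the entity_emb input is where the pre-trained eₖ vectors from Step 1 would plug in.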
Combining all the pieces gives the final model framework: the pre-trained entity embeddings, the Wide (syntactic) branch, the Deep (semantic) branch, and the contrastive loss, trained end to end.

At the time the paper was written, JPMC was still in the process of deploying the model, which, once done, will help users across JPMC discover relevant, curated news that matters to their business.

From a cost perspective, not all mentions are fed through the JEL framework, as that would be computationally expensive. JPMC added a blocking layer that filters out mentions sharing fewer than 2 bigrams with the entities in their internal KG.

Once again, here is the paper link if you would like to read the full paper.

Also published here.

Thanks for reading!! Follow Intuitive Shorts (my Substack newsletter) to read quick and intuitive summaries of ML/NLP/DS concepts. You can also follow me on Medium.