A user-friendly interface that allows flexible computation of similarity between Qnodes in Wikidata. At present, the similarity interface supports four algorithms, based on: graph embeddings (TransE, ComplEx), text embeddings (BERT), and class-based similarity.
To support anticipated applications that require efficient similarity computations, like entity linking and recommendation, we also provide a REST API that can compute the most similar neighbors for any Qnode in Wikidata.
We use ElasticSearch to build a text index over Qnode labels and aliases, enabling users to look up Qnodes by name.
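As an illustration, a label/alias lookup against such an index might look like the sketch below. The index name `wikidata`, the field names `labels.en` and `aliases.en`, and the 8.x Python client are all assumptions; the actual schema of the index is not described here.

```python
from elasticsearch import Elasticsearch

# Connect to a local ElasticSearch instance (URL is an assumption).
es = Elasticsearch("http://localhost:9200")

def search_qnodes(text, size=10):
    """Find Qnode candidates whose label or alias matches the query text."""
    # Index name and field names below are hypothetical placeholders.
    response = es.search(
        index="wikidata",
        query={
            "multi_match": {
                "query": text,
                "fields": ["labels.en", "aliases.en"],
            }
        },
        size=size,
    )
    return [hit["_source"] for hit in response["hits"]["hits"]]

print(search_qnodes("motorcycle"))
```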
- Class similarity computes the set of common is-a parents of two nodes. Here, the is-a relations are computed as a transitive closure over both the subclass-of (P279) and the instance-of (P31) relations. Each shared parent is weighted by its inverse document frequency (IDF), computed from the number of instances that transitively belong to that parent class (see the first sketch after this list).
- TransE similarity computes the cosine similarity between the TransE embeddings of two Wikidata nodes.
- ComplEx similarity computes the cosine similarity between the ComplEx embeddings of two Wikidata nodes.
- Text similarity computes the cosine similarity between the BERT embeddings of two Wikidata nodes. We pre-compute these BERT embeddings over a lexicalized version of each Wikidata Qnode, based on its outgoing edges in the graph (see the second sketch after this list for the cosine computation shared by the embedding-based metrics).
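A minimal sketch of the class-based metric, assuming two precomputed maps: `parents[q]`, the set of is-a ancestors of Qnode `q` (transitive closure over P279 and P31), and `instance_count[c]`, the number of Qnodes that transitively belong to class `c`. The Jaccard-style normalization over the union of parents is an assumption; the description above only specifies the IDF weighting of shared parents.

```python
import math

def idf(cls, instance_count, n_total):
    # Rarer classes (fewer transitive instances) get higher weight.
    return math.log(n_total / (1 + instance_count.get(cls, 0)))

def class_similarity(q1, q2, parents, instance_count, n_total):
    """IDF-weighted overlap of the is-a ancestors of two Qnodes."""
    shared = parents[q1] & parents[q2]
    union = parents[q1] | parents[q2]
    # Sum the IDF weights of shared parents; normalizing by the union
    # is assumed here to keep the score in [0, 1].
    num = sum(idf(c, instance_count, n_total) for c in shared)
    den = sum(idf(c, instance_count, n_total) for c in union)
    return num / den if den else 0.0
```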
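The three embedding-based metrics (TransE, ComplEx, BERT) all reduce to a cosine similarity over pre-computed vectors. A minimal sketch, with random placeholder vectors standing in for the real embeddings:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# embeddings: Qnode id -> pre-computed vector (TransE, ComplEx, or
# BERT-over-lexicalized-node, depending on the metric). The vectors
# below are placeholders, not real embeddings.
embeddings = {
    "Q34493": np.random.rand(100),  # motorcycle
    "Q1420":  np.random.rand(100),  # automobile
}
print(cosine_similarity(embeddings["Q34493"], embeddings["Q1420"]))
```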
Nearest Neighbors API: Our REST API returns the K nearest neighbors for a Qnode based on the ComplEx algorithm. We index the ComplEx embeddings in a FAISS [8] index, which facilitates efficient retrieval.
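A sketch of such a nearest-neighbor index, assuming the ComplEx embeddings are stacked into a float32 matrix with one row per Qnode. L2-normalizing the rows and using an inner-product index makes the search equivalent to cosine similarity; the exact FAISS configuration used by the API is an assumption.

```python
import faiss
import numpy as np

d = 100                                    # embedding dimensionality (assumed)
vectors = np.random.rand(10000, d).astype("float32")  # placeholder embeddings
faiss.normalize_L2(vectors)                # normalize so IP == cosine

index = faiss.IndexFlatIP(d)               # exact inner-product index
index.add(vectors)

query = vectors[:1]                        # embedding of the query Qnode
scores, neighbor_ids = index.search(query, 5)  # top-5 nearest neighbors
print(neighbor_ids[0], scores[0])
```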
- We observe that the class-based metric consistently prioritizes semantically similar nodes over merely related or unrelated ones: its three top-scored nodes are all semantically similar to motorcycle. This is intuitive, given that the metric is based purely on the Wikidata taxonomy and thus naturally favors semantically similar terms.
- We conclude that the BERT-based text similarity metric is able to discern related from unrelated nodes, but it is unable to distinguish between similar and related terms. This is expected, considering that the BERT model is trained to capture natural language co-occurrence, and thus favors both semantically similar and related terms over unrelated ones.
- Graph embeddings score pairs somewhat orthogonally to our similarity categorization, assigning higher scores to pairs that are structurally similar in Wikidata. The scores correlate to a lesser extent with our a priori three-way categorization of the Qnodes, though on average semantic similarity is favored over relatedness, which in turn is favored over unrelatedness. This can be explained by the tendency of graph embeddings such as ComplEx to capture structural similarity, i.e., to assign higher similarity to node pairs that connect to similar structures in the Wikidata graph.
FINDINGS: We show that the class-based metric consistently captures semantic similarity and assigns lower scores to terms that are merely related or unrelated. BERT-based similarity behaves differently, giving high scores to both semantically similar and related pairs. The graph embedding-based metrics fall somewhere between class-based similarity and BERT.