
Benchmarking Graph Database Backends — What Works Well with Wikidata? - Article Reading Notes

Kovács, T., Simon, G., & Mezei, G. (2019). Benchmarking Graph Database Backends—What Works Well with Wikidata? Acta Cybernetica, 24(1), 43–60. https://doi.org/10.14232/actacyb.24.1.2019.5 

Abstract: 

Knowledge bases often utilize graphs as their logical model.

For the modeling aspect, we made measurements with different graph encodings previously suggested in the literature, in order to observe the impact of the encoding aspect on the overall performance.

Introduction

Even people without specialized technical or natural-science knowledge often organize concepts, and the relations between them, as nodes connected by edges.

A set of standards and technologies are built around the RDF concept.

The RDF model gives a straightforward encoding for basic knowledge structures; however, there are different encoding models for reification [29], i.e., statements about statements. Reification is extensively used in KBs with reference management, where every statement should be backed up by external sources.

Related Work

Social networking is one of the primary problem spaces for GDBs.

The Linked Data Benchmark Council (LDBC)[9] is an independent authority "responsible for specifying benchmarks [...] for software systems designed to manage graph and RDF data." LDBC is continuously widening its benchmark portfolio: it has a framework for graph analytic tasks (breadth-first search, page rank, etc.)[31], social networking [24] and linked data (RDF)[33]. In [38] the authors run the LDBC social network benchmark against graph databases, triple stores and relational engines. They have found that more mature systems with heavily optimized query execution pipelines have the advantage over the more innovative newcomers—regardless of the database model type.

Meanwhile, the Linked Data community is looking for efficient storage solutions for RDF data. The LDBC's Semantic Publishing Benchmark [33] offers a measurement specification for comparing the performance of RDF engines. Recently, Pan et al.[39] surveyed the contemporary RDF benchmarks and management solutions. Moreover, the authors ran the benchmarks against distributed RDF systems. In the end, they could not announce a clear winner; the performance depended heavily on the type of the query workload.

Experimental setting

Dataset: Wikidata JSON dump - January 2016 ... 67 million statements. Wikidata strongly encourages backing up statements with references; hence reification is used extensively.

Workload: atomic-lookup queries ... This simple query generation technique is based on the atomic parts of a single reified statement: the three parts of the base statement, with the property and the object of the meta statement. ... 32 different query patterns
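The 32 patterns follow directly from the five atomic parts: each pattern concretizes some subset of them and leaves the rest as variables. A minimal sketch of that enumeration (the part names are my own labels, not the paper's):

```python
from itertools import product

# Five atomic parts of a single reified statement: the base triple's
# subject, predicate, and object, plus the property and object of the
# meta statement (qualifier/reference).
PARTS = ["subject", "predicate", "object", "meta_property", "meta_object"]

# Each pattern fixes ("concretizes") a subset of the parts and leaves the
# rest as variables, yielding 2^5 = 32 query patterns in total.
patterns = [
    [name for name, fixed in zip(PARTS, mask) if fixed]
    for mask in product([True, False], repeat=len(PARTS))
]

print(len(patterns))   # 32
print(patterns[-1])    # [] -- the single pattern without any concrete part
```

The empty pattern corresponds to the one query the paper later excludes from the ten-binding scheme, since it has no variable to bind.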

Reification models: three different reification techniques: (i) the property graph representation, which encodes the qualifiers as edge properties, (ii) the standard reification, and (iii) the n-ary relation model, which introduces a new node per statement.
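The two RDF-based encodings, (ii) and (iii), can be sketched with plain (subject, predicate, object) tuples; all entity and property names below are hypothetical illustrations, not the paper's actual vocabulary. Model (i) has no triple form, since qualifiers live directly on the edge.

```python
# Base claim to be reified, with one reference attached to it.
s, p, o = "wd:Q42", "wdt:educatedAt", "wd:StJohnsCollege"
ref = ("pr:referenceURL", "https://example.org/source")

# (ii) Standard RDF reification: a statement node carries
# rdf:subject / rdf:predicate / rdf:object, and the reference
# attaches to that node.
stmt = "_:stmt1"
standard = [
    (s, p, o),                            # the base triple itself
    (stmt, "rdf:type", "rdf:Statement"),
    (stmt, "rdf:subject", s),
    (stmt, "rdf:predicate", p),
    (stmt, "rdf:object", o),
    (stmt, ref[0], ref[1]),
]

# (iii) N-ary relation model: a fresh node per statement links the
# subject to the object through two property variants, avoiding the
# rdf:subject/predicate/object triples entirely.
node = "_:node1"
nary = [
    (s, p + "_stmt", node),               # subject -> statement node
    (node, p + "_value", o),              # statement node -> object
    (node, ref[0], ref[1]),
]

print(len(standard), len(nary))  # 6 3
```

The triple counts already hint at why the conclusion notes a lower storage footprint for the n-ary representation: it needs fewer triples per reified, referenced statement.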


Database implementations: three systems ... (i) Blazegraph ... the RDF model can be viewed and queried as property graph through Blazegraph’s Apache Tinkerpop implementation. (ii) Titan / JanusGraph ... (iii) Neo4j .... It is based upon the property graph model and supports the Tinkerpop stack and its Gremlin query language. Besides Gremlin, it defines its own declarative query language, called Cypher.

Environment: We provisioned separate virtual machine (VM) instances in the Azure public cloud for every GDB implementation. All VMs had a size of Linux E4s v3. They were configured with Intel XEON E5-2673 v4 processor containing 4 virtual CPU cores that support Intel Hyper-Threading Technology and 32 GiB of memory. A 64 GiB SSD was used as storage for the Ubuntu 16.04 LTS operating system and the particular DBMS. As the dataset had to be stored more times simultaneously—for example, during the conversions the source and the result dataset existed together at the same time—we added another SSD with 512 GiB capacity to store the dataset and the temporary files.

Measurement: Every query pattern (except the one without any variable) was run with ten different variable bindings. The nth query pattern (qn) supplied with the mth variable binding for it (bn,m) forms the runnable query qbn,m. The values of the variables were randomly selected from the dataset in such a way that every query would have a non-empty result set. To avoid first-time run transient phenomena, we ran all of the queries two times on every DBMS-encoding pair. ... we set a query time limit to one minute, ... When the execution of all the runnable queries completed, the DBMSs were restarted to remove every memory content that could distort the results for the later runs.

Results

Despite its popularity, Neo4j was the least performant system ... We got these timeouts irrespective of the reification model.

<< they did not even include the times in the paper >>

... the performance of Blazegraph. In contrast with Neo4j, most of the queries terminated before the time limit; ... there is no significant difference between the performance of the standard and the n-ary reification models.

Looking at the figure, it is quite conspicuous that there is a significant break in the middle of the graph. We have found that—in case of using Blazegraph—the most important factor that affects the elapsed time of an atomic-lookup is whether the subject (the starting point of the traversal) is concrete or not. Concretizing the subject means a significant performance boost, which suggests that Blazegraph's pattern matching engine can perform reasonably well only if the graph traversal and the edge point in the same direction.

JanusGraph. One can see that there are fewer patterns in the diagrams, the ones ending with 01 are missing. That is because we encountered some difficulties in translating these queries into Gremlin queries.

Another interesting phenomenon is that the performance is almost constant between the steps on both models. This can be explained by the imperative nature of the Gremlin query language, which gives the optimizer relatively little room to improve the query plans.

Comparing the results of the three investigated DBMSs in case of the n-ary model: Neo4j (dotted), Blazegraph (dashed), JanusGraph (solid). ... one can come to the conclusion that Blazegraph offers the lowest response times if the subject part of the query is specified; otherwise JanusGraph outperforms all its competitors. Furthermore, none of the systems could efficiently answer the queries that contain only qualifier information.

Conclusion

Although the graph models offered by GDBs seem rather suitable for knowledge graphs at first, one can hit quite a few limitations with datasets utilizing reification heavily. The direct, straightforward encoding of reified claims often resulted in subpar performance, as it relied on unoptimized features.

We concluded that the execution times depend heavily on both the query pattern and the system-encoding pair. 

Based on the overall average query times measured, the best performance for this kind of workload can be reached by using Blazegraph with either n-ary or standard encoding. Considering other factors than performance, our choice would be Blazegraph with n-ary representation, as this representation has lower storage footprint.
