
Scholia (Wikidata)

Presentation

http://mirror.netcologne.de/CCC/events/wikidatacon/2017/h264-hd/wikidatacon2017-10010-eng-Scholia_hd.mp4

BlazeGraph RDF GAS API - visualize the graph

Aspects for specific topics, such as software (P2283)

ArXiv-to-QuickStatements .... Build an online LaTeX-to-QuickStatements tool?

Zotero: In the other direction, with the additional QuickStatements translator from zotkat you can export metadata in a format understood by QuickStatements, enabling you to more easily create Wikidata items about the works already in your Zotero library. 

https://www.zotero.org/support/kb/zotero_and_wikipedia

Zotero's Quick Copy feature makes it easy to export Zotero items to Wikipedia. Open the Export pane of Zotero preferences and select “Wikipedia Citation Templates” as the Default Format. Then, you can drag and drop items from your library into Wikipedia to insert properly formatted citations. You can also copy Wikipedia Citation Templates data to your clipboard by pressing Ctrl/Cmd-Shift-C.

Scholia, Scientometrics and Wikidata

Wikidata data can be reified to triples [5,9], and RDF/graph-oriented databases, including SPARQL databases, can represent Wikidata data [10].

The Wikidata Query Service (WDQS) is an extended SPARQL endpoint that exposes the Wikidata data. Apart from offering a SPARQL endpoint, it also features an editor and a variety of frontend result display options. It may render the SPARQL query result as, e.g., bubble charts, line charts, graphs, timelines, lists of images, points on a geographical map, or just provide the result as a table. These results can also be embedded on other Web pages via an HTML iframe element. We note that Wikidata is open data published under the Public Domain Dedication and Waiver (CC0), and that it is available not only through the SPARQL endpoint, but also as Linked Data Fragments and — like any other project of the Wikimedia family — through an API and dump files.
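As a concrete illustration of the SPARQL endpoint, here is a minimal Python sketch that sends a query to WDQS and reads the JSON result; the endpoint URL is the public one, while the use of the requests library and the example count query are assumptions made just for this illustration:

import requests

WDQS_ENDPOINT = "https://query.wikidata.org/sparql"

# Example query: count items that are instances of "scholarly article" (Q13442814)
QUERY = """
SELECT (COUNT(?work) AS ?count) WHERE {
  ?work wdt:P31 wd:Q13442814 .
}
"""

response = requests.get(
    WDQS_ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "scholia-notes-example/0.1"},
)
response.raise_for_status()
for binding in response.json()["results"]["bindings"]:
    print(binding["count"]["value"])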

Scholia provides both a Python package and a Web service for presenting and interacting with scientific information from Wikidata.

Scholia uses the Flask Python Web framework.
 
The frontend consists mostly of HTML iframe elements for embedding the on-the-fly-generated WDQS results and uses many of the different output formats from this service: bubble charts, bar charts, line charts, graphs and image lists.
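A minimal sketch of this embedding pattern, assuming Flask and the public WDQS embed page (https://query.wikidata.org/embed.html followed by a URL-encoded query after the #); this is not Scholia's actual code, only an illustration of serving a page whose iframe renders a live query result:

from urllib.parse import quote

from flask import Flask

app = Flask(__name__)

# Example query: works authored by Tim Berners-Lee (Q80); P50 is "author"
QUERY = """
SELECT ?work ?workLabel WHERE {
  ?work wdt:P50 wd:Q80 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

@app.route("/example-author")
def example_author():
    # WDQS renders the result view for the query placed after the # in embed.html
    embed_url = "https://query.wikidata.org/embed.html#" + quote(QUERY)
    return (
        "<h1>Example author works</h1>"
        f'<iframe src="{embed_url}" width="100%" height="400" frameborder="0"></iframe>'
    )

if __name__ == "__main__":
    app.run(debug=True)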

For the organization aspect, Scholia uses the employer and affiliation Wikidata properties to identify associated authors, and combines this with the author query for works. Scholia formulates SPARQL queries with property paths to identify suborganizations of the queried organization, such that authors affiliated with a suborganization are associated with the queried organization.
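A hedged sketch of what such a property-path query could look like; the property choices (P749 parent organization, P108 employer, P1416 affiliation, P50 author) are assumptions chosen for illustration, not necessarily the exact set Scholia uses:

# Illustrative SPARQL with a property path: works by authors employed by or
# affiliated with the queried organization or any of its suborganizations.
ORGANIZATION_WORKS_QUERY = """
SELECT DISTINCT ?work ?workLabel WHERE {{
  ?suborg wdt:P749* wd:{organization} .    # the organization itself or any suborganization
  ?author wdt:P108|wdt:P1416 ?suborg .     # employer or affiliation
  ?work wdt:P50 ?author .                  # works authored by those researchers
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
LIMIT 200
"""

# Usage: substitute the QID of the organization of interest, e.g.
# print(ORGANIZATION_WORKS_QUERY.format(organization="Q12345"))  # placeholder QID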

In the current version, Scholia even ignores any temporal qualifier for the affiliation and employer properties, meaning that a researcher moving between several organizations gets his/her articles counted under multiple organizations.

The initial idea for Scholia was to create a researcher profile based on Wikidata data, with a list of publications, a picture, and CV-like information.

The collaborative nature of Wikidata means that Wikidata users can create items for authors that do not have an account on Wikidata. In most other systems, the researcher as a user of the system has control over his/her scholarly profile and other researchers/users cannot make amendments or corrections. Likewise, when one user changes an existing item, this change will be reflected in subsequent live queries of that item, and it may still be in future dumps if not reverted or otherwise modified before the dump creation.

Scholia, Scientometrics and Wikidata. Available from: https://www.researchgate.net/publication/320899331_Scholia_Scientometrics_and_Wikidata [accessed May 12, 2021].

The inspiration came from a blog post by Lambert Heller: What will the scholarly profile page of the future look like? Provision of metadata is enabling experimentation.

https://blogs.lse.ac.uk/impactofsocialsciences/2015/07/16/scholarly-profile-of-the-future/


Comments

  1. Lattes2WD will feed Wikidata with data from the Lattes platform, making it possible to "complete" this CV in the Scholia view

  2. At WikidataCon 2021 there was a presentation about Scholia, and the impact of separating scholarly-article data from the rest of Wikidata was also discussed

