
ER2020 - 1st Workshop on Conceptual Modeling for NoSQL Data Stores

Workshop website

https://sites.google.com/view/comonos20/home

Accepted papers

Pablo D. Muñoz-Sánchez, Carlos Javier Fernández Candel, Jesús García-Molina and Diego Sevilla Ruiz. Extracting Physical and Logical Schemas for Document Stores. 

Pavel Čontoš and Martin Svoboda. JSON Schema Inference Approaches. 

Alberto Hernández Chillón, Diego Sevilla Ruiz and Jesus Garcia-Molina. Deimos: A Model-based NoSQL Data Generation Language.

Two of them are about reverse engineering applied to Document Stores.

Invited Talk

Pascal Desmarets (Hackolade): NoSQL Data Modelling in Practice

Video -> https://drive.google.com/file/d/1Sps7qS4yfG-KEXaDdYuopDP-SzMqP1QN/view

Presentation -> https://drive.google.com/file/d/1mOc_Zv_u9i4d84cJHvSGW56V-OQ_rACg/view


Notes:

Agile vs. Data Modeling as a bottleneck.

Low ROI of Big Data projects

The three phases of traditional Data Modeling should be redesigned into two: Domain-Driven Design (technology agnostic) + Physical Schema Design (application-specific) **


Aggregation is the opposite of Normalization and reduces (eliminates!) the impedance mismatch between data (physical schema) and objects (application).

Schemaless is often misinterpreted, and its flexibility is not easy to deal with. How to handle empty, missing or null attributes? How to express relationships: referencing or embedding? How to use polymorphic data types and check their quality?
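
As a rough illustration of the referencing vs. embedding question (hypothetical MongoDB-style documents written as Python dicts, not an example from the talk):

```python
# Embedding: the order lines live inside the order aggregate
# (one read retrieves the whole unit, no join/lookup needed).
order_embedded = {
    "_id": "order-1",
    "customer": "ACME",
    "lines": [
        {"sku": "A100", "qty": 2, "price": 9.90},
        {"sku": "B200", "qty": 1, "price": 4.50},
    ],
}

# Referencing: order lines are separate documents pointing back to the order
# (closer to a normalized design; requires an extra lookup at read time).
order_referenced = {"_id": "order-1", "customer": "ACME"}
order_lines = [
    {"_id": "line-1", "order_id": "order-1", "sku": "A100", "qty": 2, "price": 9.90},
    {"_id": "line-2", "order_id": "order-1", "sku": "B200", "qty": 1, "price": 4.50},
]
```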

Reverse-Engineering without DDL: probabilistic schema inference, required vs. optional, polymorphism, pattern detection, relationships
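
A minimal sketch of what such probabilistic inference can look like, assuming the documents are available as Python dicts (illustrative only, not the approach of any of the accepted papers):

```python
from collections import Counter, defaultdict

def infer_schema(documents):
    """For each attribute, count how often it appears (required vs. optional)
    and which types it takes (polymorphism)."""
    presence = Counter()
    types = defaultdict(Counter)
    total = len(documents)
    for doc in documents:
        for attr, value in doc.items():
            presence[attr] += 1
            types[attr][type(value).__name__] += 1
    return {
        attr: {
            "probability": presence[attr] / total,   # 1.0 => present in every sampled document
            "required": presence[attr] == total,
            "types": dict(types[attr]),              # more than one entry => polymorphic attribute
        }
        for attr in presence
    }

docs = [
    {"name": "Ana", "age": 30},
    {"name": "Bob", "age": "30"},   # same attribute, different type (polymorphism)
    {"name": "Cid"},                # missing attribute => 'age' is optional
]
print(infer_schema(docs))
```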

Schema inference meta model (slide 48)

The term schema-on-read is not accurate, since from the moment you store data there is already a schema.

The Graph category should be split into LPG (Labeled Property Graph) and RDF.
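
To make the distinction concrete, a small sketch of the same fact ("Ana knows Bob since 2019") expressed in the two styles (simplified, hypothetical identifiers):

```python
# LPG-style: a single edge that carries its own properties.
lpg_edge = {
    "source": "Ana",
    "label": "KNOWS",
    "target": "Bob",
    "properties": {"since": 2019},
}

# RDF-style: everything is a (subject, predicate, object) triple, so the
# "since" qualifier hangs off an intermediate statement resource.
rdf_triples = [
    ("ex:Ana", "ex:knows", "ex:Bob"),
    ("ex:stmt1", "rdf:subject", "ex:Ana"),
    ("ex:stmt1", "rdf:predicate", "ex:knows"),
    ("ex:stmt1", "rdf:object", "ex:Bob"),
    ("ex:stmt1", "ex:since", "2019"),
]
```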

Note 1: the speaker has also written another article: Data Modeling Is Dead…Long Live Schema Design!

  • https://medium.com/hackolade/data-modeling-is-dead-long-live-schema-design-4c1aed88cc21
  • https://www.datastax.com/resources/video/datastax-accelerate-2019-data-modeling-dead-long-live-schema-design

Some quotes from these articles


** Logical modeling makes sense when aiming to achieve an application-agnostic database design, which is still best served by relational database technology.

DDD consists of a collection of patterns, principles, and practices that enable teams to focus on what’s core to the success of the business while crafting software that tackles the complexity in both the business and the technical spaces. One such pattern is an aggregate, a cluster of domain objects that can be treated as a single unit, for example an order and its order lines
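
A minimal sketch of that aggregate example in code (hypothetical names, not from the article), with the Order as the aggregate root that owns its OrderLine objects and is loaded, validated and persisted as a single unit:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OrderLine:
    sku: str
    quantity: int
    unit_price: float

@dataclass
class Order:
    order_id: str
    customer: str
    lines: List[OrderLine] = field(default_factory=list)

    def add_line(self, sku: str, quantity: int, unit_price: float) -> None:
        # Invariants are enforced at the aggregate root, not on the lines directly.
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self.lines.append(OrderLine(sku, quantity, unit_price))

    def total(self) -> float:
        return sum(line.quantity * line.unit_price for line in self.lines)
```

This cluster maps naturally onto a single embedded document in a document store, which is the bridge the talk draws between DDD aggregates and physical schema design.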


Note 2: he represents a company that has developed a NoSQL data modeling tool (polymorphic data modeling) and offers training in this area.

  • https://hackolade.com/
  • https://hackolade.com/training.html

