
ER2020 - 1st Workshop on Conceptual Modeling for NoSQL Data Stores

Workshop website

https://sites.google.com/view/comonos20/home

Accepted papers

Pablo D. Muñoz-Sánchez, Carlos Javier Fernández Candel, Jesús García-Molina and Diego Sevilla Ruiz. Extracting Physical and Logical Schemas for Document Stores. 

Pavel Čontoš and Martin Svoboda. JSON Schema Inference Approaches. 

Alberto Hernández Chillón, Diego Sevilla Ruiz and Jesus Garcia-Molina. Deimos: A Model-based NoSQL Data Generation Language.

Two of the papers are about reverse engineering applied to Document Stores.

Invited Talk

Pascal Desmarets (Hackolade): NoSQL Data Modelling in Practice

Video -> https://drive.google.com/file/d/1Sps7qS4yfG-KEXaDdYuopDP-SzMqP1QN/view

Slides -> https://drive.google.com/file/d/1mOc_Zv_u9i4d84cJHvSGW56V-OQ_rACg/view


Remarks:

Agile vs. Data Modeling as a bottleneck.

Low ROI of Big Data projects

The three phases of traditional Data Modeling should be redesigned into two: Domain-Driven Design (technology agnostic) + Physical Schema Design (application-specific) **


Aggregation is the opposite of Normalization and reduces (even eliminates!) the impedance mismatch between data (physical schema) and objects (application).
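To make that contrast concrete, here is a minimal sketch (with hypothetical order data, not from the talk) of the same order represented normalized with references versus aggregated with embedding, MongoDB-style:

```python
# Hypothetical order data: normalized (referencing) vs. aggregated (embedding).

# Normalized: order lines live in a separate collection and point back
# to the order via a foreign key, as in a relational design.
orders = [{"_id": 1, "customer": "Alice"}]
order_lines = [
    {"order_id": 1, "product": "pen", "qty": 2},
    {"order_id": 1, "product": "ink", "qty": 1},
]

# Aggregated: the whole order is a single document whose shape matches
# the application object, so no join/mapping layer is needed.
order = {
    "_id": 1,
    "customer": "Alice",
    "lines": [
        {"product": "pen", "qty": 2},
        {"product": "ink", "qty": 1},
    ],
}

# Reassembling the normalized form requires an application-side join --
# exactly the impedance mismatch the aggregated form avoids:
joined = {
    **orders[0],
    "lines": [
        {k: v for k, v in line.items() if k != "order_id"}
        for line in order_lines
        if line["order_id"] == orders[0]["_id"]
    ],
}
assert joined == order
```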

Schemaless is often misinterpreted, and its flexibility is not easy to deal with. How do you process empty, missing, or null attributes? How do you express relationships: referencing or embedding? How do you use polymorphic data types and check their quality?
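The missing-vs-null distinction can be illustrated with a short sketch (field names here are hypothetical): in a document store, an absent field, an explicit null, and a present value are three genuinely different cases that application code must handle.

```python
# Three distinct cases a schemaless store can produce for an "email" field:
docs = [
    {"name": "Ana", "email": "ana@x.com"},  # value present
    {"name": "Bia", "email": None},         # explicit null
    {"name": "Caio"},                       # field missing entirely
]

def email_status(doc):
    """Distinguish a missing field from a null one from a real value."""
    if "email" not in doc:
        return "missing"
    if doc["email"] is None:
        return "null"
    return "present"

print([email_status(d) for d in docs])  # ['present', 'null', 'missing']
```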

Reverse engineering without DDL: probabilistic schema inference, required vs. optional fields, polymorphism, pattern detection, relationships.
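A toy sketch of what probabilistic schema inference over a document sample might look like (an illustration, not the inference meta model from the talk): presence counts separate required from optional fields, and the set of observed types per field exposes polymorphism.

```python
from collections import Counter

def infer_schema(docs):
    """Infer field presence and observed types from a sample of documents."""
    total = len(docs)
    presence = Counter()
    types = {}
    for doc in docs:
        for field, value in doc.items():
            presence[field] += 1
            types.setdefault(field, set()).add(type(value).__name__)
    return {
        field: {
            "required": presence[field] == total,  # seen in every document?
            "coverage": presence[field] / total,   # fraction of docs with it
            "types": sorted(types[field]),         # >1 entry => polymorphic
        }
        for field in presence
    }

# Hypothetical sample: "price" is optional, "id" is polymorphic (int and str).
sample = [
    {"id": 1, "name": "pen", "price": 2.5},
    {"id": 2, "name": "ink"},
    {"id": "3", "name": "pad", "price": 4},
]
schema = infer_schema(sample)
```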

Schema inference meta model (slide 48)

The term schema-on-read is not accurate, since from the moment you store data there is a schema.

The graph category should be divided into LPG and RDF.

Note 1: the speaker has written another article: Data Modeling Is Dead…Long Live Schema Design!

  • https://medium.com/hackolade/data-modeling-is-dead-long-live-schema-design-4c1aed88cc21
  • https://www.datastax.com/resources/video/datastax-accelerate-2019-data-modeling-dead-long-live-schema-design

Some quotes from these articles:


** Logical modeling makes sense when aiming to achieve an application-agnostic database design, which is still best served by relational database technology.

DDD consists of a collection of patterns, principles, and practices that enable teams to focus on what’s core to the success of the business while crafting software that tackles the complexity in both the business and the technical spaces. One such pattern is an aggregate, a cluster of domain objects that can be treated as a single unit, for example an order and its order lines.
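The aggregate from the quote can be sketched in code (hypothetical `Order`/`OrderLine` names, following the quote's own example): order lines are reachable only through the order, which acts as the aggregate root and enforces invariants for the whole cluster.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OrderLine:
    product: str
    qty: int

@dataclass
class Order:
    """Aggregate root: order lines are created and reached only via the order."""
    order_id: int
    lines: List[OrderLine] = field(default_factory=list)

    def add_line(self, product: str, qty: int) -> None:
        # Invariants are checked at the aggregate boundary, not per object.
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.lines.append(OrderLine(product, qty))

order = Order(order_id=1)
order.add_line("pen", 2)
```

Treating the order and its lines as one unit is also what makes the aggregate a natural fit for a single embedded document in a document store.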


Note 2: he represents a company that developed a NoSQL (polymorphic) data modeling tool and offers training in this area.

  • https://hackolade.com/
  • https://hackolade.com/training.html

