
ER2020 - Other interesting articles about Data Modeling and Databases

 

Diagram depicting the key differences between SQL and NoSQL databases:
https://www.scylladb.com/resources/nosql-vs-sql/

A Workload-driven Document Database Schema Recommender (DBSR)

  • https://youtu.be/APVlxebtmLI

Aggregate-oriented modeling

Input: ER model, read workload (JOINs), configurations

First step: create a normalized document structure and analyze the JOIN steps

Refinement of query plans by removing JOINs and embedding documents, merging document structures based on entity relationships, in order to reduce read-operation costs

Output: document collections, query plans (with indexes), and a utility matrix of recommendations
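The refinement loop above can be sketched as a greedy search: start from normalized (referenced) collections, and embed the relationship that saves the most read operations until nothing improves. This is a toy illustration, not the actual DBSR algorithm; the entity names and the one-read-per-collection cost model are invented for the example.

```python
# Read workload: each query joins a pair of entities (invented example data).
workload = [
    ("Order", "Customer"),
    ("Order", "Product"),
    ("Order", "Customer"),
]

def cost(workload, embedded):
    # Simplified cost model: one read when the join pair is embedded in a
    # single document, two reads (parent + lookup) when it is referenced.
    return sum(1 if frozenset(q) in embedded else 2 for q in workload)

embedded = set()
candidates = {frozenset(q) for q in workload}

# Greedy refinement: keep embedding the relationship that lowers read cost.
while candidates - embedded:
    best = min(candidates - embedded,
               key=lambda r: cost(workload, embedded | {r}))
    if cost(workload, embedded | {best}) >= cost(workload, embedded):
        break
    embedded.add(best)

print(cost(workload, embedded))  # 3: down from 6 reads after both embeddings
```

A real recommender would also weigh write amplification and document size before embedding, which is why DBSR takes configurations as input.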

An Empirical Study on the Design and Evolution of NoSQL Database Schemas

  • https://youtu.be/Mz7P6pp5TvY

Lack of empirical studies on NoSQL

10 selected projects from GitHub: denormalization is common but not a rule; NoSQL schemas take longer to stabilize than SQL schemas (in general); changing the type of an attribute is more frequent than other kinds of schema change


Neo4j Keys

  • https://youtu.be/qQQ9DuBPIrU

 
Neo4j key = label + property attributes
Completeness: every node with the label has the key properties
Uniqueness: no two nodes share the same values for the key properties
Neo4j doesn't support multi-label keys
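The two conditions can be checked directly. Below is a toy sketch over an in-memory node list (the `Person`/`cpf` data is invented) showing what completeness and uniqueness mean for a candidate key:

```python
# Invented example nodes: label set plus a property map, mimicking Neo4j nodes.
nodes = [
    {"labels": {"Person"}, "props": {"cpf": "111", "name": "Ana"}},
    {"labels": {"Person"}, "props": {"cpf": "222", "name": "Ana"}},
]

def is_key(nodes, label, key_props):
    labeled = [n for n in nodes if label in n["labels"]]
    # Completeness: every node with the label carries all key properties.
    if not all(p in n["props"] for n in labeled for p in key_props):
        return False
    # Uniqueness: no two nodes agree on all key-property values.
    values = [tuple(n["props"][p] for p in key_props) for n in labeled]
    return len(values) == len(set(values))

print(is_key(nodes, "Person", ["cpf"]))   # True: complete and unique
print(is_key(nodes, "Person", ["name"]))  # False: both nodes share "Ana"
```

In Neo4j itself this corresponds (if I recall the Cypher correctly, Enterprise Edition only) to a node key constraint declared over one label and a set of properties, which is why multi-label keys are not expressible.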
 

Discovering Data Models from Event Logs

  •  https://youtu.be/J2nxUxE-r_I
 
Process model and data model
Step 1 => Input: event log; Output: A2A diagram = activities × attributes relationship
Four rules to separate the A2A relationships
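Step 1 amounts to scanning the log and recording which attributes co-occur with which activities. A minimal sketch, with an invented event log (the real method then applies its four rules to this relation):

```python
# Invented event log: each event names its activity plus payload attributes.
event_log = [
    {"activity": "CreateOrder", "order_id": 1, "customer": "Ana"},
    {"activity": "Ship",        "order_id": 1, "carrier": "DHL"},
    {"activity": "Ship",        "order_id": 2, "carrier": "UPS"},
]

# A2A relation: activity -> set of attributes observed with it.
a2a = {}
for event in event_log:
    attrs = set(event) - {"activity"}
    a2a.setdefault(event["activity"], set()).update(attrs)

print(a2a)  # {'CreateOrder': {...}, 'Ship': {...}}
```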
 
 
 
 

Comments

  1. The one that used GitHub is quite interesting, because it started from application projects and managed to identify a behavioral pattern for NoSQL


Fonte: https://towardsdatascience.com/introduction-to-knowledge-graph-embedding-with-dgl-ke-77ace6fb60ef Amazon recently launched DGL-KE, a software package that simplifies this process with simple command-line scripts. With DGL-KE , users can generate embeddings for very large graphs 2–5x faster than competing techniques. DGL-KE provides users the flexibility to select models used to generate embeddings and optimize performance by configuring hardware, data sampling parameters, and the loss function. To use this package effectively, however, it is important to understand how embeddings work and the optimizations available to compute them. This two-part blog series is designed to provide this information and get you ready to start taking advantage of DGL-KE . Finally, another class of graphs that is especially important for knowledge graphs are multigraphs . These are graphs that can have multiple (directed) edges between the same pair of nodes and can also contain loops. The