Affolter, K., Stockinger, K. & Bernstein, A. A comparative survey of recent natural language interfaces for databases. The VLDB Journal 28, 793–819 (2019). https://doi.org/10.1007/s00778-019-00567-8
Abstract
We categorize the NLIs into four groups based on the methodology they are using: keyword-, pattern-, parsing-, and grammar-based NLIs. Overall, we learned that keyword-based systems are enough to answer simple questions. To solve more complex questions involving subqueries, the system
needs to apply some sort of parsing to identify structural dependencies. Grammar-based systems are overall the most powerful ones, but are highly dependent on their manually designed rules.
[BGP for keyword-based, CGP for parsing-based. Patterns can help include context: `What is (What are) X', `What was (What were)']
1 Introduction
Even though SQL was initially developed to be used by business people, reality shows that even technically-skilled users often experience problems putting together correct queries..., because the user is required to know the exact schema of the databases, the roles of various entities in the query and the precise join paths to be followed.
[The user knows neither SQL nor the physical database schema, but might know the conceptual model]
Critiques of NLIs often highlight that natural language is claimed to be too verbose and too ambiguous. If it were possible to identify the different types of linguistic problems, then the system could support the user better, for example, with a clarification dialog. This would not only help the NLI to translate the natural language question into a formal query language, but would also assist the users in formulating correct queries.
Additionally, natural language is not only ambiguous on word-level, but there can be also multiple interpretations of the meaning of a sentence.
[Using user feedback for disambiguation is not a problem, but the desirable goal is to minimize interaction. Natural Language Understanding is AI-hard, as professor Altigram said]
2 Foundation: A Sample World
Another possible representation of the data would be as a knowledge graph. The core concepts of a knowledge graph are the entities and their relationships. ... Concepts can be directly included in the knowledge graph. For example, the movie `Inglourious Basterds' has a link to the concept `great movie'.
[A KG does not contain only entities as real-world objects; it can also include abstract concepts]
Table 1 Ten sample input questions based on SQL/SPARQL operators that are answerable on the sample world.
(Join; Filter (string, range, date or negation); Aggregation; Ordering; Union; Subquery; Concept)
[Join alone is still a BGP, but the other operators turn the query into a CGP. `Concept' does not fit this graph-pattern classification because it is not an operator]
3 Background: Natural Language Processing Technologies
3.1 Stop Word
For NLIs, stop words can contain invaluable information about the relationship between different tokens. ... [but not always] For example, in the question `What was the best movie of each genre?' (Q7) the stop words `of each' imply an aggregation on the successive token `genre'.
On the other hand, stop words should not be used for lookups in the inverted indexes. In the question `Who is the director of Inglourious Basterds?' (Q1) the stop word `of' would return a partial match for a lot of movie titles, which are not related to the movie `Inglourious Basterds'.
[Here `of' inverts the edge direction: it was (movie, directed by, person), but the question becomes (person, director of, movie)]
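The two roles of stop words described above can be sketched in a few lines: keep them long enough to detect an aggregation pattern like `of each', but drop them before the index lookup. The helper names and the stop-word list are illustrative assumptions, not the survey's implementation.

```python
# Sketch (hypothetical helper names): stop words are inspected first to
# detect an aggregation pattern like "of each", then dropped for the lookup.
STOP_WORDS = {"what", "was", "the", "of", "each", "who", "is"}

def analyze(question: str):
    tokens = question.lower().rstrip("?").split()
    # Keep stop words long enough to spot "of each" -> aggregation hint
    aggregate_on = None
    for i in range(len(tokens) - 2):
        if tokens[i] == "of" and tokens[i + 1] == "each":
            aggregate_on = tokens[i + 2]
    # Drop stop words only for the index lookup itself
    lookup_terms = [t for t in tokens if t not in STOP_WORDS]
    return lookup_terms, aggregate_on

print(analyze("What was the best movie of each genre?"))
# -> (['best', 'movie', 'genre'], 'genre')
```

On Q7 this yields the lookup terms without stop words while still recording `genre' as the aggregation target.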
3.2 Synonymy
The difficulty of synonymy is that a simple lookup or matching is not enough. ... Usually such a dictionary is based on DBpedia (Lehmann et al, 2012) and/or WordNet (Miller, 1995).
[Query expansion]
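A minimal query-expansion sketch of the lookup idea above, using a hand-made synonym dictionary (real systems derive it from WordNet or DBpedia; the entries here are assumptions):

```python
# Each token is expanded into the set containing itself plus its synonyms,
# so a later lookup can match any variant.
SYNONYMS = {
    "movie": {"film", "picture"},
    "director": {"filmmaker"},
}

def expand(tokens):
    return [{t} | SYNONYMS.get(t, set()) for t in tokens]

print(expand(["director", "movie"]))
```

A lookup step would then try every term in each set against the index instead of only the literal token.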
3.3 Tokenization
Tokenization is used to split an input question into a list of tokens. It can be as simple as a separator on whitespace, but more often it is based on multiple rules (e.g., with regular expressions) or done with ML algorithms.
[Whitespace can separate words, but expressions such as the name of a person or place must be analyzed as n-grams]
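The annotation's point can be illustrated with a rule-based tokenizer: a plain regex split breaks the title "Inglourious Basterds" apart, and a small gazetteer of known entities (an assumption here) re-merges such n-grams into single tokens.

```python
import re

# Known multi-word entities; in practice these come from the base data.
KNOWN_ENTITIES = {("inglourious", "basterds")}

def tokenize(question: str):
    tokens = re.findall(r"\w+", question.lower())
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in KNOWN_ENTITIES:
            merged.append(tokens[i] + " " + tokens[i + 1])  # re-merge bigram
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

print(tokenize("Who is the director of Inglourious Basterds?"))
# -> ['who', 'is', 'the', 'director', 'of', 'inglourious basterds']
```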
3.4 Part of Speech Tagging
A part of speech (PoS) is a category of words with similar grammatical properties. Almost all languages have the PoS tags noun and verb. PoS tagging is the process of annotating each token in a text with the corresponding PoS tag ... More advanced NLP technologies use the information produced by the PoS tagger. For example, both lemmatization and dependency tree parsing of Stanford CoreNLP (Manning et al (2014))
have the requirement for PoS tagging.
[One possible rule is to associate nouns (proper nouns, numerals, etc.) with node attributes and verbs with relationship type names]
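The rule from the annotation can be sketched over already-tagged tokens (a real tagger such as Stanford CoreNLP would produce these; the Penn-style tags below are assumptions): nouns map to node attributes, verbs to relationship types.

```python
# Map PoS-tagged tokens to graph elements: NN* tags -> attributes,
# VB* tags -> relationship type names.
def map_to_graph(tagged):
    attributes = [w for w, tag in tagged if tag.startswith("NN")]
    relations = [w for w, tag in tagged if tag.startswith("VB")]
    return attributes, relations

tagged_q1 = [("Who", "WP"), ("directed", "VBD"),
             ("Inglourious Basterds", "NNP")]
print(map_to_graph(tagged_q1))
# -> (['Inglourious Basterds'], ['directed'])
```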
3.5 Stemming / Lemmatization
The goal of both stemming and lemmatization is to reduce ... related forms of a word to a common base form.
Stemming reduces related words to the same stem (root form of the word) by removing different endings of the words. The disadvantage of stemming is that words reduced to the same stem do not necessarily have a similar meaning.
Lemmatization removes ... endings and returns the lemma, which is either the base form of the word or the dictionary form. To achieve that, lemmatization algorithms usually use a vocabulary and morphological analysis of the words.
[Pay attention to the differences]
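A toy contrast makes the difference concrete (this crude suffix-stripper is not a real Porter stemmer, and the lemma dictionary is a tiny assumption): the stemmer can produce non-words, while lemmatization returns dictionary forms.

```python
# Crude suffix-stripping "stemmer": may output fragments that are not words.
def crude_stem(word: str) -> str:
    for suffix in ("ies", "ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# Lemmatization relies on a vocabulary (morphological dictionary).
LEMMAS = {"movies": "movie", "was": "be", "directed": "direct"}

def lemmatize(word: str) -> str:
    return LEMMAS.get(word, word)

print(crude_stem("movies"), "vs", lemmatize("movies"))  # -> mov vs movie
print(crude_stem("was"), "vs", lemmatize("was"))        # -> was vs be
```

Note how `was' only reaches its base form `be' through the dictionary; no suffix rule can recover it.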
3.6 Parsing
Parsing is the process of analyzing the grammatical structures (syntax) of a sentence. Usually, the parser is based on context-free grammar. ... The information generated through parsing is traditionally stored as a syntax tree. NLIs can use the information on relations found in the syntax tree to generate more accurate queries.
[It generates a tree]
4 Limitations (Survey)
5 Recently Developed NLIs (Classification)
1. Keyword-based systems
The core of these systems is the lookup step, where the systems try to match the given keywords against an inverted index of the base and meta data.
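The lookup step described above can be sketched as an inverted index built over both base data (values) and metadata (table and column names); the sample rows are assumptions for illustration.

```python
from collections import defaultdict

# Tiny sample world: (table, column) -> values.
records = {
    ("movie", "title"): ["Inglourious Basterds", "Pulp Fiction"],
    ("person", "name"): ["Quentin Tarantino"],
}

index = defaultdict(set)
for (table, column), values in records.items():
    index[table].add((table, column, None))   # metadata entries
    index[column].add((table, column, None))
    for v in values:                           # base-data entries
        for word in v.lower().split():
            index[word].add((table, column, v))

print(sorted(index["basterds"]))
# -> [('movie', 'title', 'Inglourious Basterds')]
```

A keyword like `basterds' thus resolves to the column and value it occurs in, which is the raw material for query construction.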
2. Pattern-based systems
These systems extend the keyword-based systems with NLP technologies to handle more than keywords and also add natural language patterns. The
patterns can be domain-independent or domain-dependent.
[They associate the operations (Filter; Aggregation; Ordering; Union) with specific trigger words]
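The trigger-word idea can be sketched as a domain-independent dictionary mapping natural language patterns to query operations (the entries are illustrative assumptions):

```python
# Trigger phrases -> query operations (domain-independent patterns).
TRIGGERS = {
    "how many": "COUNT",
    "of each": "GROUP BY",
    "sorted by": "ORDER BY",
    "not": "NOT",
}

def detect_operations(question: str):
    q = question.lower()
    return [op for phrase, op in TRIGGERS.items() if phrase in q]

print(detect_operations("How many movies of each genre?"))
# -> ['COUNT', 'GROUP BY']
```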
3. Parsing-based systems
These systems parse the input question and use the generated information about the structure of the question to understand the grammatical structure. The parse tree contains a lot of information about single tokens, but also about how tokens can be grouped together to form phrases. The main advantage of this approach is that the semantic meaning can be mapped to certain production rules (query generation).
4. Grammar-based systems
The core of these systems is a set of rules (grammar) that define the questions a user can ask the system. The main advantage of this approach is that the system can give users natural language suggestions while they type their questions. Each question that is formalized this way can be answered by the system.
... Using those rules, the system is able to give the users suggestions on how to complete their questions while typing. This helps the users write understandable questions and gives them insight into the types of questions that can be asked. The disadvantage of these systems is that they are highly domain dependent: the rules need to be written by hand.
[They are restricted in the questions they can answer]
5.1 Keyword-based systems
5.1.1 SODA (Search Over DAta warehouse)
The strengths of SODA are the use of meta data patterns and domain ontologies, which allow one to define concepts and include domain specific knowledge.
[An ontology for concepts and domain knowledge. This could be part of the KG itself]
5.1.2 NLP-Reduce
... uses a few simple NLP technologies to `reduce' the input tokens before the lookup in the Knowledge Base (KB) based on RDF. The system takes the input question, reduces it to keywords, translates it into SPARQL to query the KB and then returns the result to the user.
With the removal of stop words and punctuation marks, NLP-Reduce is able to answer some fragment and full sentence questions.
[It also uses stemming]
5.1.3 Précis
... for relational databases, which supports multiple terms combined through the operators AND, OR and NOT. This is different to SODA, where the inverted index includes the meta data.
[It does not use metadata, only data. It could be IR instead of Data Retrieval]
The strength of Précis is the ability to use brackets, AND, OR and NOT to define the input question. However, the weakness is that this again amounts to a logical query language, although a simpler one. Furthermore, it can only solve boolean questions, and the input question can only consist of terms which are located in the base data and not in the meta data.
5.1.4 QUICK (QUery Intent Constructor for Keywords)
QUICK (Zenz et al, 2009) is an NLI that adds the expressiveness of semantic queries to the convenience of keyword-based search. To achieve this, the users start with a keyword question and then are guided through the process of incremental refinement steps to select the
question's intention. The system provides the user with an interface that shows the semantic queries as graphs as well as textual form.
In a first step, QUICK takes the keywords of the input question and compares them against the KB. Each possible interpretation corresponds to a semantic query.
The strength of QUICK is the user interaction interface with the optimization for minimal user interaction during the semantic query selection. The weakness of QUICK is that it is limited to acyclic conjunctions of triple patterns.
5.1.5 QUEST (QUEry generator for STructured sources)
... translate input questions into SQL. It combines semantic and statistical ML techniques for the translation. The first step is to determine how the keywords in the input question correspond to elements of the database (lookup). In contrast to SODA, QUEST uses two Hidden Markov Models (HMM) to choose the relevant elements (ranking). The parameters of the first HMM are set with heuristic rules; the second HMM is trained with user feedback.
[Does it already use Machine Learning?]
5.1.6 SINA
... transforms natural language input questions into conjunctive SPARQL queries. It uses a Hidden Markov Model to determine the most suitable resource for a given input question from different datasets. In the first step, SINA reduces the input question to keywords (similar to NLP-Reduce), by using tokenization, lemmatization and stop word removal. In the next step, the keywords are grouped into segments, with respect
to the available resources.
The biggest weakness of SINA is that it can only translate into conjunctive SPARQL queries, which reduces the number of answerable questions.
5.1.7 Aqqu
5.2 Pattern-based systems
5.2.1 NLQ/A
... NLI to query a knowledge graph. The system is based on a new approach without NLP technologies like parsers or PoS taggers.
[Look at this one in more detail. It already uses some semantic NLP techniques, but no ML or ontologies]
Some types of words like prepositions are still needed and therefore kept. Next, 1:n-grams are generated. Phrases starting with prepositions are discarded. After stop word removal, the input question Q1 would become `director of Inglourious Basterds'. If n is set to 2, the extracted phrases would be: {`director', `director of', `Inglourious', `Inglourious Basterds', `Basterds'}. Next, the phrases are extended according to a synonym dictionary.
[N-grams of 1 to n words, query expansion with synonyms, stop-word removal, ...]
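The 1:n-gram phrase extraction described above can be sketched directly, reproducing the NLQ/A example with n = 2 and the discarding of phrases that start with a preposition:

```python
# Generate all contiguous phrases of 1 to n tokens.
def ngrams(tokens, n):
    phrases = []
    for size in range(1, n + 1):
        for i in range(len(tokens) - size + 1):
            phrases.append(" ".join(tokens[i:i + size]))
    return phrases

tokens = ["director", "of", "Inglourious", "Basterds"]
phrases = ngrams(tokens, 2)
# Phrases starting with a preposition are then discarded:
phrases = [p for p in phrases if not p.startswith("of")]
print(phrases)
# -> ['director', 'Inglourious', 'Basterds', 'director of',
#     'Inglourious Basterds']
```

The surviving phrases match the set in the text (the ordering differs because unigrams are emitted before bigrams).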
The strengths of this NLI are the simplicity and the efficient user interaction process. The simplicity allows easy adaptation to new knowledge graphs, and together with the user interaction process it overcomes the difficulties of ambiguity. The weakness of this system is that usually more than one user interaction is needed to resolve ambiguities; in the experiments, the average number of interactions was three.
... the errors made by NLP technologies are not worth the gain of information. Instead, the system is highly dependent on the users' input to solve ambiguity problems, and therefore it focuses on the optimization of the user interaction.
[Is three too many? Is being interactive a bad thing?]
5.2.2 QuestIO (QUESTion-based Interface to Ontologies)
QuestIO translates the input question with three steps: In the first step, the key concept identification tool identifies all tokens which refer to mentions of ontology resources such as instances, classes, properties or property values. This is similar to the dependent phrases of
NLQ/A.
In the next step, the context collector identifies patterns (e.g., key phrases like `how many') in the remaining tokens that help the system to understand the query (similar to independent phrases of NLQ/A). The last step identifies relationships between the ontology resources collected during the previous steps and formulates the corresponding formal query. After the query is executed, the result is sent to the result formatter to display it in a user-friendly manner.
The automatic extraction of semantic information out of the ontology is both a strength and a weakness of QuestIO. It is highly dependent on the development of human-understandable labels and descriptions; without them, QuestIO will not be able to match the input questions to the automatically extracted information.
[Also look at this one, since it exploits labels and text-type literals]
5.3 Parsing-based systems
5.3.1 ATHENA
... an ontology-driven NLI for relational databases, which handles full sentences in English as the input question. For ATHENA, ontology driven
means that it is based on the information of a given ontology and needs mapping between an ontology and a relational database. A set of synonyms can be associated with each ontology element.
Furthermore, ATHENA uses the most NLP technologies of all the surveyed systems.
[It uses NLP and an ontology; it is semantic search]
There are five types of possible matches:
Finding a match in the inverted index for the meta data (and the associated set of synonyms).
... enriched with variations for person and company names.
Finding all time ranges ... TIMEX annotator.
Finding all tokens that include numeric quantities with the Stanford Numeric Expressions annotator.
Annotating dependencies
[I did not understand this last one]
The third step uses the ranked list of interpretations to generate an intermediate query in the Ontology Query Language (OQL). OQL was specifically developed for ATHENA to be an intermediate language between the input question and SQL and is able to express queries that include aggregations, unions and single nested subqueries.
The strengths of ATHENA are the ontology as an abstraction of the relational database and the natural language explanation for each interpretation of the input question. The translation index contains not only synonyms but also semantic variants for certain types
of values like persons and company names.
5.3.2 Querix
... allows users to enter questions in natural language to query an ontology. If the system identifies any ambiguities in the input question,
it asks the user for clarification in a dialog. Querix uses a syntax tree to extract the sequence of words from the main word categories: noun (N), verb (V), preposition (P), wh-pronoun (Q, e.g. what, where, when, etc.) and conjunction (C). This sequence is called query skeleton.
The query skeleton is used to enrich nouns and verbs and to identify subject-property-object patterns in the query.
[It identifies the question type and can infer the context of interest]
Querix extracts the query skeleton. For example, the query skeleton `Q-V-N-P-N' is extracted from the input question (Q1) `Who (Q) is (V) the director (N) of (P) `Inglourious Basterds' (N)?'. It then enriches all nouns and verbs with synonyms provided by WordNet.
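The skeleton extraction can be sketched over PoS-tagged tokens; the tag-to-category mapping below is an assumption based on the description above (Penn-style tags on the left, Querix word categories on the right).

```python
# Map PoS tags to Querix's main word categories; tokens whose tag is not
# in the mapping (e.g. determiners) are dropped from the skeleton.
CATEGORY = {"WP": "Q", "VBZ": "V", "NN": "N", "NNP": "N", "IN": "P", "CC": "C"}

def query_skeleton(tagged):
    return "-".join(CATEGORY[tag] for _, tag in tagged if tag in CATEGORY)

tagged_q1 = [("Who", "WP"), ("is", "VBZ"), ("the", "DT"),
             ("director", "NN"), ("of", "IN"),
             ("Inglourious Basterds", "NNP")]
print(query_skeleton(tagged_q1))
# -> Q-V-N-P-N
```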
5.3.3 FREyA (Feedback, Refinement and Extended vocabularY Aggregation)
The strength of this system is the user interaction, which not only supports the users to find the right answer, but also improves FREyA over time. Furthermore, FREyA performs an answer type identification, which leads to more precise answers.
5.3.4 BELA
Similar to Querix, BELA parses the input question and produces a set of query templates, which mirror the semantic structure.
5.3.5 USI Answers
The database consists of multiple databases, for example domain-specific ontologies, different relational databases and SPARQL endpoints.
Furthermore, the users demanded to be able to use not only natural language questions and formal query language constructs but also keyword questions or a mixture of these.
... the first step includes various NLP technologies, such as lemmatization, PoS tagging, named entity recognition and dependency parsing.
[Semantic search with more than one option for formulating the query]
Furthermore, the question focus (i.e., referenced entity object) is identified by applying 12 syntactic rules. Enriched with this information,
the next step aims to identify and resolve the different concepts that may be in the input question (lookup).
5.3.6 NaLIX (Natural Language Interface to XML)
Furthermore, NaLIX provides templates and question history to the users. It preserves the users' prior search efforts and provides the users with a quick starting point when they create new questions.
[User and task context]
5.3.7 NaLIR (Natural Language Interface for Relational databases)
The basic idea remains the same: parse the input question using the Stanford Parser and map the parse tree to SQL (instead of XQuery). The steps are similar with some modifications: In the step of mapping phrases of the input question parse tree to SQL components, the users are asked for clarification if there are ambiguities. The next step is not a validation anymore, but an adjustment of the parse tree in such a way, that it is valid. The candidate interpretations produced by this adjusted and valid parse tree are delivered to the users to select the correct interpretation.
5.3.8 BioSmart
Compared to other NLI, the strength of BioSmart is the possibility to query arbitrary databases. The weakness of BioSmart is the mapping to the three query types: if the system cannot match the input question to those query types, it is not able to answer the question.
5.4 Grammar-based systems
5.4.1 TR Discover
It is either used for relational databases or ontologies, but does not need an ontology to work for relational databases. During the translation steps, TR Discover uses a First Order Logic (FOL) representation as an intermediate language. Furthermore, it provides
auto-suggestions based on the user input. There are two types of suggestions: autocompletion and prediction.
[Suggestions use terms existing in the graph; then terms close to that term in the graph are suggested. How close????]
The suggestions are based upon the relationships and entities in the dataset and use the linguistic constraints encoded in a feature-based context-free grammar (FCFG).
The strengths of TR Discover are the auto-suggestion and the possibility to translate natural language into different query languages such as SQL and SPARQL, because FOL is used as an intermediate language.
The weaknesses of TR Discover are that quantifiers (e.g., Q3: `grossed most') cannot be used, synonyms are not properly handled, and negations only work for SPARQL.
Furthermore, they pointed out the possibility of applying user query logs to improve auto-suggestions.
5.4.2 Ginseng (Guided Input Natural language Search ENGine)
Ginseng (Bernstein et al, 2005) is a guided input NLI for ontologies. The system is based on a grammar that describes both the parse rules of the input questions and the query composition elements for the RDF Data Query Language (RDQL) queries. The grammar is used to guide the users in formulating questions in English. In contrast to TR Discover, Ginseng does not use an intermediate representation and therefore the parsing
process translates directly into RDQL. The grammar rules are divided into two categories: dynamic and static grammar rules. The dynamic grammar rules are generated from the OWL ontologies. They include rules for each class, instance, object property, data type property and synonym. The static grammar rules consist of about 120 mostly empirically constructed domain-independent rules, which provide the basic sentence structures and phrases for input questions.
5.4.3 SQUALL (Semantic Query and Update High-Level Language)
The translation into the logical form is done in three steps. In the first step, the keywords are recognized (lookup step). The second step is a syntactic analysis based on a descending parser, which is fed with the grammar rules. Afterwards, the next step can generate the logical language based on the definition in the grammar. After the translation into the logical language, the translation into the chosen formal language can be done.
5.4.4 MEANS (MEdical question ANSwering)
It is highly domain dependent and focuses on factual questions expressed by wh-pronouns and boolean questions in a medical subfield targeting the seven medical categories: problem, treatment, test, sign/symptom, drug, food and patient.
[The focus on Wh- question types]
To translate the input question into SPARQL, MEANS first classifies the input question into one of ten categories (e.g., factoid, list, definition, etc.). If the question is categorized as a wh-question, the Expected Answer Type (EAT) is identified and replaced with `ANSWER' as a simplified form for the next step. For example, the EAT of the input question Q1 would be `director'. In the next step, MEANS identifies medical entities using a Conditional Random Field (CRF) classifier and rules to map noun phrases to concepts.
The strength of MEANS is that it can handle different types of questions, including questions with more than one expected answer type and more than one focus. As for most of the grammar-based NLIs, MEANS suffers from the restriction based on the handcrafted rules. The inclusion of ML reduces this problem, but ML itself needs a huge training corpus to be usable. Furthermore, comparisons (and also negations) are not taken into account.
5.4.5 AskNow
To translate the input question into SPARQL, AskNow first identifies the sub-structures using a POS tagger and named entity recognition. Then it fits the sub-structures into their corresponding cells within the generic NQS templates. Afterwards the query type (set, boolean, ranking, count or property value) is identified based on desire and wh-type. In the next step, the query desire, query input and their relations will be matched to the KB.
[Again, a focus on the Wh- question type]
5.4.6 SPARKLIS
It guides the users during their query phrasing by giving the possibilities to search through concepts, entities and modifiers in natural language. It relies on the rules of SPARQL to ensure syntactically correct SPARQL queries all the time during the process. The interaction with the system makes the question formulation more constrained, slower and less spontaneous, but it provides guidance and safeness with intermediate answers and suggestions at each step. The translation process for SPARKLIS is reversed: it translates possible SPARQL queries into natural language such that the users can understand their choices.
[The queries are not formulated in natural language; instead, they are translated into natural language by the system itself]
5.4.7 GFMed
The abstract grammar defines the semantic model of the input language and for GFMed this is based on the biomedical domain. The concrete grammars define the syntax of the input language, which is English and SPARQL.
6 Evaluation (Theoretical)
[Ten questions formulated by the authors. The evaluation was not empirical. In some cases it would not even be possible to evaluate the query itself due to domain restrictions.]
6.1 Evaluation of 24 recently developed NLIs
... keyword-based NLIs are the least powerful and can only answer simple questions like string filters (Q1).
... Pattern-based NLIs are an extension of keyword-based systems in such a way that they have a dictionary with trigger words to answer
more complex questions like aggregations (Q7). However, they cannot answer questions of higher difficulties, including subqueries (Q9/Q10).
... Parsing-based NLIs are able to answer these questions by using dependency or constituency trees (e.g., NaLIR). This helps to identify and group the input question in such a way that the different parts of the subqueries can be identified.
... Grammar-based systems offer the possibility to guide the users during the formulation of their questions. Dynamically applying the rules during typing allows the systems to ensure that the input question is always translatable into formal language.
7 Machine Learning Approaches for NLIs
KBQA (Cui et al, 2017) learns templates as a kind of question representation.
A new promising avenue of research is to use deep learning techniques as the foundation for NLIDBs. The basic idea is to formulate the translation of natural language (NL) to SQL as an end-to-end machine translation problem ...
[If natural language can be translated between human languages, it should be possible to translate natural language into an "artificial" language such as SQL. The question is having enough training data: where can we obtain such queries and their natural language equivalents?]
The main advantage of machine learning based approaches over traditional NLIDBs is that they support a richer linguistic variability in query expressions and thus users can formulate queries with greater flexibility.
However, one of the major challenges of supervised machine learning approaches is that they require a large training data set in order to achieve good accuracy on the translation task.
The most commonly used approach for sequence-to-sequence modeling is based on recurrent neural networks (RNNs, Elman (1990)) with an input encoder and an output decoder. The encoder and decoder are implemented as bi-directional LSTMs (Long Short-Term Memory) by Hochreiter and Schmidhuber (1997). However, before an NL question can be encoded, the tokens need to be represented as a vector that in turn can be processed
by the neural network. A widely used approach is to use word embeddings where the vocabulary of the NL is mapped to a high-dimensional vector (Pennington et al (2014)). In order to improve the accuracy of the translation, attention models are often applied (Luong et al (2015)).
One of the currently most advanced neural machine translation systems was introduced by Iyer et al (2017). Their approach uses an encoder-decoder model with global attention similar to Luong et al (2015) and a bi-directional LSTM network to encode the input tokens.
In order to improve the translation accuracy from NL to SQL compared to traditional machine translation approaches, the database schema is taken into account.
Zhong et al (2017) introduce a system called Seq2SQL. Their approach uses a deep neural network architecture with reinforcement learning to translate from NL to SQL. The authors released WikiSQL - a new data set based on Wikipedia consisting of 24,241 tables and 80,654 hand-annotated NL-SQL-pairs.
DBPal by Basik et al (2018) overcomes shortcomings of manually labeling large training data sets by synthetically generating a training set that only requires minimal annotations in the database. Similar to Iyer et al (2017), DBPal uses the database schema and query templates to describe NL/SQL-pairs. Moreover, inspired by Wang et al (2015), DBPal augments an initial set of NL queries using a paraphrasing approach based on a paraphrase dictionary.
Soru et al (2017) use neural machine translation approach similar to Iyer et al (2017). However, the major difference is that they translate natural language to SPARQL rather than to SQL.
In general, these new approaches show promising results, but they have either only been demonstrated to work for single-table data sets or require large amounts of training data. Hence, the practical usage in realistic database settings still needs to be shown.
[Is there not even a benchmark for NLIDBs?]
8 Conclusions
Simple questions will often be asked with keywords, while complex questions are posed in grammatically correct sentences.
[Query log analysis]
Lesson 2 - Identifying subqueries is still a significant challenge for NLIs: The identification of subqueries seems to be one of the most difficult problems for NLIs.
Lesson 3 - Optimize the number of user interactions: Natural language is ambiguous and even in a human to human conversation ambiguities occur, which are then clarified with questions. The same applies to NLIs: when an ambiguity occurs, the system needs to clarify with the user. This interaction should be optimized in such a way that the number of needed user interactions is minimized. ...
The idea is to identify the ambiguity which has the most impact on the other ambiguities and clarify it first.
Lesson 4 - Use grammar for guidance: The biggest advantage of grammar-based NLIs is that they can use their grammar rules to guide the users while they are typing their questions.
Lesson 5 - Use hybrid approach of traditional NLI systems and neural machine translation
New NLIs based on neural machine translation show promising results. However, the practical usage in realistic database settings still needs to be shown. Using a hybrid approach of traditional NLIs that are enhanced by neural machine translation might be a good approach for the future. Traditional approaches would guarantee better accuracy while neural machine translation approaches would increase the robustness to language variability.
[Be more semantic by combining NLP and ontologies with machine learning]
During a research project, we may come across an article whose research problem and proposed solution resonate with us. We then want to know how that research area developed up to that point, or which developments followed from that solution, in order to identify the state of the art on the topic. We can follow two approaches: conduct a systematic review, using the keywords that best characterize the topic in reference digital libraries to find related articles, or perform snowballing anchored on the article we previously identified, exploring the articles it cites (backward) or the articles that cite it (forward). The Connected Papers tool proposes an alternative approach to this search. The initial problem is: given an article of interest, we need to find other articles related "in a certain way". Find different methods and approaches to the same subject Track down the state of the art rese...