
Topic Modeling for Short Text x BEST ANSWER

Use topic modeling (in addition to embedding generation) to enrich the KG. That way, we would have:

1) if the question is about topic X and the KG contains statements on that topic, the guidance would be: We can help, the Oracle knows about the subject!

2) if the question is about topic Y and the KG contains no statements on that topic, the guidance would be: We will try to help, the Oracle may know something about the subject.

3) if the question is about topic Z and the KG explicitly does not cover that topic, the guidance would be: We cannot help, the Oracle does not know everything.

Topics could also be included or excluded (negated) by intervention of the KG's data engineers, but automatic generation would save human effort.
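The three-way guidance above can be sketched as a simple routing function. This is a hypothetical illustration, not part of the referenced article: the topic sets, the `route_question` helper, and the thresholds-free set-overlap test are all assumptions for the sake of the example.

```python
# Hypothetical sketch: route a question into one of three answer modes
# based on the KG's topic coverage. All names and topics are illustrative.

def route_question(question_topics, kg_topics, excluded_topics):
    """Return the Oracle's guidance given topic coverage of the KG."""
    if question_topics & excluded_topics:
        # Topic explicitly negated by the KG (or its data engineers)
        return "We cannot help, the Oracle does not know everything."
    if question_topics & kg_topics:
        # The KG contains statements on this topic
        return "We can help, the Oracle knows about the subject!"
    # Unknown topic: neither covered nor explicitly excluded
    return "We will try to help, the Oracle may know something."

kg_topics = {"knowledge graphs", "topic modeling"}
excluded_topics = {"medical advice"}

print(route_question({"topic modeling"}, kg_topics, excluded_topics))
```

In practice, `question_topics` would come from a short-text topic model applied to the natural-language query, and `kg_topics` from the same model applied to the KG's statement snippets.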

===========================================================

Topic detection in short texts (such as tweets, posts, and comments, as well as the KG's statement snippets and natural-language queries) requires specific approaches, different from topic detection in full documents (text files, PDFs, HTML pages, etc.).

Probabilistic Latent Semantic Analysis (PLSA, 1999) and Latent Dirichlet Allocation (LDA, 2003) are approaches for documents.

Biterm Topic Model (BTM, 2013) and Dirichlet Multinomial Mixture (DMM, 2000), along with extensions such as GPU-DMM (which incorporates word embeddings) and the Multiterm Topic Model, are designed for short texts.
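The core idea behind BTM can be shown in a few lines: instead of relying on per-document word co-occurrence (too sparse in short texts), it extracts all unordered word pairs (biterms) and pools them over the entire corpus. The sketch below only illustrates the biterm extraction step, not the full inference; the toy corpus and the `extract_biterms` helper are assumptions for illustration.

```python
# Illustrative sketch of biterm extraction as used by BTM: pool unordered
# word pairs across the whole corpus to fight short-text sparsity.
from itertools import combinations
from collections import Counter

def extract_biterms(doc_tokens):
    """All unordered word pairs (biterms) from one short text."""
    return [tuple(sorted(pair)) for pair in combinations(doc_tokens, 2)]

corpus = [
    ["knowledge", "graph", "embedding"],
    ["graph", "embedding", "model"],
]

# Corpus-level biterm counts: co-occurrence evidence is aggregated globally,
# so even three-word snippets contribute usable statistics.
biterm_counts = Counter(b for doc in corpus for b in extract_biterms(doc))
print(biterm_counts[("embedding", "graph")])  # -> 2
```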

Neural topic models (DNN): Variational AutoEncoder (VAE, 2014), Neural Variational Document Model (NVDM, 2016), Gaussian Softmax Construction (GSM, 2017), and Topic Memory Network (TMN, 2018) for short-text topic modeling and classification with pre-trained word embeddings.

The proposal of the reference article below is also based on neural networks.

Datasets: Google Search Snippets, Yahoo Answers, TagMyNews Titles, and StackOverflow

Reference article

Xiaobao Wu, Chunping Li, Yan Zhu, and Yishu Miao. 2020. Short Text Topic Modeling with Topic Distribution Quantization and Negative Sampling Decoder. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1772–1782, Online. Association for Computational Linguistics.

Abstract

Topic models have been prevailing for many years on discovering latent semantics while modeling long documents. 

However, for short texts they generally suffer from data sparsity because of extremely limited word co-occurrences; thus tend to yield repetitive or trivial topics with low quality. 

In this paper, to address this issue, we propose a novel neural topic model in the framework of autoencoding with a new topic distribution quantization approach generating peakier distributions that are more appropriate for modeling short texts. Besides the encoding, to tackle this issue in terms of decoding, we further propose a novel negative sampling decoder learning from negative samples to avoid yielding repetitive topics. 

We observe that our model can highly improve short text topic modeling performance. Through extensive experiments on real-world datasets, we demonstrate our model can outperform both strong traditional and neural baselines under extreme data sparsity scenes, producing high-quality topics.
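The "peakier distributions" mentioned in the abstract can be illustrated numerically. Note this is not the paper's actual quantization method, only a hedged illustration of the intuition: a short text covers few topics, so a lower-entropy (peakier) topic distribution, here produced with a softmax temperature, concentrates probability mass on fewer topics.

```python
# Hedged illustration (not the paper's method): a lower softmax temperature
# yields a peakier, lower-entropy topic distribution, matching the intuition
# that a short text is about only a few topics.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

logits = [2.0, 1.0, 0.5, 0.1]          # illustrative topic scores
smooth = softmax(logits, temperature=1.0)  # flatter: mass spread over topics
peaky = softmax(logits, temperature=0.3)   # peakier: mass on the top topic

print(entropy(smooth) > entropy(peaky))  # -> True
```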
