https://knowledge-nlp.github.io/kdd2023/papers/Kumar5.pdf
https://github.com/isunitha98selvan/odqa-tail

ABSTRACT
Pretrained Large Language Models (LLMs) have gained significant attention for addressing open-domain Question Answering (QA). While they exhibit high accuracy in answering questions about common knowledge, LLMs struggle to learn uncommon long-tail knowledge (tail entities). [Entities with little available information, which are not especially popular or common in the general public's interest]

1 INTRODUCTION
However, the impressive achievements of LLMs in QA tasks are primarily observed for common concepts that appear frequently on the internet (referred to as "head entities"), which are therefore more likely to be learned effectively by LLMs during pretraining. Conversely, when it comes to long-tail knowledge, which encompasses rarely occurring entities (referred to as "tail entities"), LLMs struggle to provide accurate answers.
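To make the head/tail contrast concrete, the sketch below probes a pretrained model in a closed-book setting, i.e., answering from its parameters alone with no retrieved evidence. This is a minimal illustration, not the paper's method: the Hugging Face `transformers` pipeline API and the `google/flan-t5-base` checkpoint are assumptions chosen for brevity, and the two questions are hypothetical examples of a head-entity and a tail-entity query.

```python
# Minimal closed-book QA probe (assumed setup: Hugging Face `transformers`
# and the public `google/flan-t5-base` checkpoint; neither comes from the paper).
from transformers import pipeline

# Closed-book: the model answers purely from what it absorbed during
# pretraining, with no external retrieval.
qa = pipeline("text2text-generation", model="google/flan-t5-base")

questions = [
    "Who wrote the play Romeo and Juliet?",     # head entity: frequent on the web
    "Who wrote the play The Spanish Tragedy?",  # tail entity: far rarer in web text
]

for q in questions:
    answer = qa(q, max_new_tokens=20)[0]["generated_text"]
    print(f"Q: {q}\nA: {answer}\n")
```

On probes like this, one would typically expect the head-entity question to be answered correctly far more often than the tail-entity one; that accuracy gap is precisely the failure mode the paper targets.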