In this paper, we study the use of word embeddings to enhance the accuracy of query language models in the ad-hoc retrieval task. To this end, we propose to use word embeddings to incorporate and weight terms that do not occur in the query, but are semantically related to the query terms. Word2vec and GloVe are examples of successful implementations of word embeddings that respectively use neural networks and matrix factorization to learn embedding vectors.

Our motivation is the vocabulary mismatch problem, i.e., the phenomenon of different vocabulary terms expressing the same concept. This is a fundamental IR problem, since users often describe a concept in their queries with different words than those the authors of relevant documents use to describe the same concept. To incorporate and weight these semantically similar terms alongside the terms that appear in the query, we propose two query expansion models with different simplifying assumptions. A well-known and ef...
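To make the general idea concrete, the following is a minimal sketch of embedding-based query expansion, not the specific models proposed here. It assumes a pretrained word2vec model is available in the standard binary format; the file name `pretrained-vectors.bin`, the cutoff `topn`, and the interpolation weight `alpha` are illustrative choices rather than values prescribed by our models.

```python
# A minimal sketch of embedding-based query expansion using gensim.
# The embedding file path, topn, and alpha below are assumptions for
# illustration only.
from collections import defaultdict
from gensim.models import KeyedVectors

# Hypothetical path to a pretrained word2vec model (binary format).
vectors = KeyedVectors.load_word2vec_format(
    "pretrained-vectors.bin", binary=True
)

def expanded_query_model(query_terms, topn=10, alpha=0.7):
    """Estimate a unigram query language model that interpolates the
    maximum-likelihood weights of the observed query terms with
    semantically similar terms drawn from the embedding space."""
    weights = defaultdict(float)
    # Maximum-likelihood weight for the observed query terms.
    for term in query_terms:
        weights[term] += alpha / len(query_terms)
    # Collect terms similar to each query term, with their cosine
    # similarities under the embedding.
    related = []
    for term in query_terms:
        if term in vectors:
            related.extend(vectors.most_similar(term, topn=topn))
    # Normalize similarities so the expansion part sums to (1 - alpha).
    total = sum(sim for _, sim in related) or 1.0
    for word, sim in related:
        weights[word] += (1 - alpha) * sim / total
    return dict(weights)

print(expanded_query_model(["neural", "retrieval"]))
```

The interpolation keeps the original query terms dominant while letting related terms (e.g., "vehicle" for the query term "car") receive nonzero probability, which is one simple way to mitigate vocabulary mismatch.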