Large Language Models, Knowledge Graphs and Search Engines: A Crossroads for Answering Users’ Questions - BACKGROUND
Hogan, A., Dong, X. L., Vrandečić, D., & Weikum, G. (2025). Large Language Models, Knowledge Graphs and Search Engines: A Crossroads for Answering Users' Questions.

A BACKGROUND

A.1 Large Language Models

LLMs capture contextual probabilities of tokens in the parameters of a large neural network, often following the Transformer architecture [44]. The model parameters are computed in two stages of training: unsupervised pre-training and supervised fine-tuning. LLMs can also benefit from inference-time (i.e., post-training) techniques, most notably prompt engineering [26] and in-context learning [9]; a brief illustration of in-context learning is given at the end of this subsection.

The usual training objective is to predict the next token in a text sequence, repeatedly, in an auto-regressive manner. Since the original text is available, the ground truth is known, and this entire training process is unsupervised (or self-supervised, as it is sometimes phrased); a minimal sketch of this objective follows below.

Fine-tuning adopts a pre-trained LLM as a foundation model and adapts it for a sui...
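To make the pre-training objective concrete: the model maximizes Σ_t log P_θ(x_t | x_1, ..., x_{t-1}) over the raw training corpus. The following is a minimal sketch of the resulting loss, assuming Python with PyTorch as an illustrative choice (not code from the paper); `model` stands for any auto-regressive network that maps token ids to per-position vocabulary logits:

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, token_ids):
    # token_ids: (batch, seq_len) integer tensor holding a raw text sequence;
    # the text itself supplies the labels, which is why the process is
    # self-supervised: no human annotation is needed.
    inputs = token_ids[:, :-1]   # positions 1..T-1 condition each prediction
    targets = token_ids[:, 1:]   # positions 2..T: the sequence shifted by one
    logits = model(inputs)       # (batch, seq_len-1, vocab_size)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # flatten batch and positions
        targets.reshape(-1),
    )
```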
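In-context learning, by contrast, updates no parameters at all: demonstrations are placed directly in the prompt at inference time, and the model is expected to continue the established pattern. A hedged sketch, with invented question/answer pairs:

```python
# Hypothetical few-shot prompt for in-context learning; the pairs below are
# invented for illustration, and the model's weights are never updated.
demonstrations = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Japan?", "Tokyo"),
]
question = "What is the capital of Chile?"

prompt = "".join(f"Q: {q}\nA: {a}\n\n" for q, a in demonstrations)
prompt += f"Q: {question}\nA:"
# `prompt` is then passed to the LLM, which continues the Q/A pattern
# and completes the final answer ("Santiago").
```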