Hallucination

Today’s language models are primarily trained to give helpful and easy-to-understand responses. At the same time, a model can generate information that fits the text perfectly and looks factual, but is simply invented. Such errors are commonly called hallucinations.

Hallucinations can be reduced, for example, with a carefully worded prompt. It often helps to state explicitly that it is acceptable for the AI to say that it does not know an answer or does not have a piece of information. Even so, the output should always be checked against reliable sources.
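As an illustration, here is a minimal sketch of such a prompt. It assumes the OpenAI Python SDK; the model name and the exact wording of the system message are only examples and can be adapted to any other chat API.

```python
# Minimal sketch: a system prompt that explicitly allows "I don't know".
# Assumptions: the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name is only an example.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Answer the user's question factually. "
    "If you do not know the answer or lack the necessary information, "
    "say so explicitly instead of guessing."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name, not prescribed here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What is the population of Atlantis?"))
```

With an instruction like this, the model is more likely to admit a gap in its knowledge instead of filling it with invented details, but the answer should still be verified.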

With techniques such as retrieval-augmented generation (RAG), the language model is additionally given information retrieved from your own databases and documents, and it is instructed to base its answer only on that material.
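The following sketch illustrates the idea with a deliberately simple keyword-based retrieval step; the documents, the scoring, and the prompt wording are illustrative assumptions, and a real system would typically use vector embeddings and a document store instead.

```python
# Minimal RAG sketch: retrieve matching snippets, then constrain the answer
# to the retrieved context. Documents and scoring are purely illustrative;
# production systems typically use embeddings and a vector database.

DOCUMENTS = [
    "Our support hotline is available Monday to Friday from 9 am to 5 pm.",
    "The premium plan includes 50 GB of storage and priority support.",
    "Invoices are sent by e-mail at the beginning of each month.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Assemble a prompt that restricts the model to the retrieved context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say that you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# The resulting prompt would then be sent to the language model,
# e.g. with the ask() helper from the previous example.
print(build_prompt("How much storage does the premium plan include?"))
```

Because the prompt explicitly limits the model to the retrieved context, answers can be traced back to the source documents, which makes hallucinations easier to spot.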
