Teach your LLM to answer with facts, not fiction

Large Language Models (LLMs) are advanced AI systems known for their ability to answer a wide range of questions. However, on unfamiliar topics they can produce responses that are fluent and plausible yet factually wrong or unsupported, a phenomenon called hallucination. One effective way to reduce hallucinations is to include relevant facts and context alongside the question, guiding the LLM toward an informed answer. Supplying supporting documents, for example retrieved from search engines or digital libraries, further improves accuracy. Vector SQL, a powerful and easy-to-learn tool, can automate and refine this retrieval step. By combining external knowledge with Vector SQL, LLMs can deliver more accurate and reliable answers to complex questions.
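The idea of prepending retrieved facts to a question can be sketched in a few lines of Python. This is a minimal illustration only: the function name, prompt wording, and sample documents are hypothetical, not part of any specific library or the Vector SQL retrieval step itself.

```python
def build_augmented_prompt(question: str, documents: list[str]) -> str:
    """Prepend retrieved supporting documents to a question so the
    LLM answers from the supplied facts instead of guessing."""
    # Number each document so the model (and the reader) can see its sources.
    context = "\n\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(documents))
    return (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical documents, as might be returned by a vector search query.
docs = [
    "Vector SQL extends SQL with similarity search over embeddings.",
    "It lets you combine structured filters with semantic retrieval.",
]
prompt = build_augmented_prompt("What is Vector SQL used for?", docs)
print(prompt)
```

The resulting string is what you would send to the LLM in place of the bare question, so the model grounds its answer in the retrieved passages rather than its parametric memory.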
