Version: v0.3.0

Frequently asked questions

What is Retrieval-Augmented Generation?

Retrieval-Augmented Generation (RAG) is a technique used to improve the accuracy and reliability of responses from foundation models, specifically Large Language Models (LLMs). It works by supplementing the LLM with external sources of knowledge.

The benefits of using RAG include enhanced response quality and relevance, access to the most current and reliable facts, and the ability to verify the model's responses. It reduces the need for continuous training and updating of the LLM, thereby lowering costs. RAG relies on the use of vectors, which are mathematical representations of data, to enrich prompts with relevant external information.
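The vectors mentioned above are typically compared by cosine similarity: documents whose embedding vectors point in a similar direction to the query's are treated as more relevant. The sketch below shows the metric itself in plain Python; real systems compute it over high-dimensional embeddings produced by a trained model.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|).
    # Ranges from -1 (opposite) through 0 (orthogonal) to 1 (same direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 3.0]))  # 0.0
```

Because cosine similarity ignores vector length, it compares what a text is about rather than how long it is, which is why it is a common default for embedding search.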

RAG enables the retrieval of external data from a variety of sources, including document repositories, databases, and APIs. This data is then converted into a compatible format to facilitate relevancy searches. The retrieved information is added to the original user prompt, empowering the LLM to provide responses based on more relevant or up-to-date knowledge.
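The retrieve-then-augment flow described above can be sketched end to end. This is a minimal, self-contained illustration, not a production pipeline: the `embed` function here is a toy character-frequency vectorizer standing in for a real embedding model, and the function names are illustrative.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity over two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def embed(text):
    # Toy embedding for illustration only: a 26-dim letter-frequency vector.
    # A real RAG system would call a trained embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def retrieve(query, documents, top_k=2):
    # Step 1: rank external documents by similarity to the query embedding.
    q = embed(query)
    ranked = sorted(documents,
                    key=lambda d: cosine_similarity(q, embed(d)),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(query, context_docs):
    # Step 2: add the retrieved information to the original user prompt,
    # so the LLM answers from the supplied context.
    context = "\n".join(context_docs)
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = ["RAG augments prompts with retrieved context.",
        "Bananas are yellow."]
top = retrieve("retrieval augmented generation", docs, top_k=1)
print(build_prompt("What is RAG?", top))
```

In practice the retrieval step is served by a database that has already converted and indexed the external data, so only the query needs to be embedded at request time.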

What is an AI-native database? Is it just a paraphrase of vector database?

An AI-native database is designed specifically to address the challenges of retrieval-augmented generation (RAG), which is currently an industry standard for enhancing the accuracy and relevance of responses generated by foundation models.

In addition to basic vector search, an AI-native database also offers advanced capabilities such as full-text search, multi-vector retrieval, mixed data type queries, refined data analytics, and hybrid search.
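Hybrid search needs a way to merge results from different retrieval paths, for example a full-text (keyword) ranking and a dense vector ranking. One common fusion method is reciprocal rank fusion (RRF), sketched below; this is a generic illustration of the technique, not the database's actual API, and the document IDs are made up.

```python
def reciprocal_rank_fusion(rankings, k=60):
    # rankings: a list of ranked result-ID lists, e.g. one from full-text
    # search and one from vector search. Each document's fused score is
    # the sum of 1 / (k + rank) over every list it appears in; k=60 is a
    # conventional smoothing constant.
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]  # e.g. from BM25 full-text search
vector_hits = ["doc1", "doc5", "doc3"]   # e.g. from dense vector search
print(reciprocal_rank_fusion([keyword_hits, vector_hits]))
# doc1 and doc3 appear in both lists, so they outrank the single-list hits.
```

RRF is attractive for hybrid search because it fuses rankings without needing to normalize the incompatible raw scores (BM25 scores versus cosine similarities) against each other.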

Where can I find a benchmark report of your database?

You can find a benchmark report on Infinity, the AI-native database, here.