Jeremy K.

Reputation: 1792

llama-index RAG: how to display retrieved context?

I am using LlamaIndex to perform retrieval-augmented generation (RAG).

Currently, I can retrieve documents and answer questions using the following minimal five-line example from https://docs.llamaindex.ai/en/stable/getting_started/starter_example/:

from llama_index import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)

This returns an answer, but I would like to display the retrieved context (e.g., the document chunks or sources) before the answer.

The desired output would look something like this:

Here's my retrieved context:
[x]
[y]
[z]

And here's my answer:
[answer]

What is the simplest, reproducible way to modify the five-line example to achieve this?
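
From poking at the Response object, I suspect its source_nodes attribute holds the retrieved chunks, so something like the sketch below might work, but I'm not sure it's the idiomatic approach (the get_text() call on each wrapped node is my assumption):

from llama_index import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")

# Assumption: response.source_nodes is a list of NodeWithScore objects,
# and each wrapped node exposes its chunk text via get_text().
print("Here's my retrieved context:")
for source_node in response.source_nodes:
    print(source_node.node.get_text())

print("And here's my answer:")
print(response)

I have also seen get_formatted_sources() on the Response object, which appears to return truncated source snippets, but I don't know whether that or source_nodes is the preferred route.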

Upvotes: 0

Views: 69

Answers (0)
