Reputation: 161
From reading the official documentation, it looks like the recommended way of doing Azure OpenAI is RAG with vector search in Azure Cognitive Search. Most examples assume loads of documents sitting in Blob Storage, but what about "normal" real-life scenarios where the documents are articles stored in a SQL database and indexed with Azure Cognitive Search?
So, we have an index that we update via the API whenever an article is uploaded or changed (a simplified sketch of that update flow is below). Is this still going to produce good enough results for RAG? Would adding a semantic search layer help return even more relevant results to pass to OpenAI via RAG?
Does anyone have real-life experience with this? How much worse is plain keyword Cognitive Search compared with vectorised search for RAG?
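For context, this is roughly what our index update looks like. It is a minimal sketch using the azure-search-documents Python SDK; the endpoint, key, index name, and field names are placeholders, not our real configuration:

```python
# Simplified sketch of the push-model update run whenever an article
# changes in the SQL database. Endpoint, key, index and field names
# are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="articles-index",              # hypothetical index name
    credential=AzureKeyCredential("<admin-key>"),
)

def upsert_article(article_id: str, title: str, body: str) -> None:
    """Push a new or updated article from the SQL database into the index."""
    search_client.merge_or_upload_documents(documents=[{
        "id": article_id,   # key field of the index
        "title": title,
        "content": body,
    }])
```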
thanks
Upvotes: 0
Views: 392
Reputation: 466
Please refer to this blog post for benchmarks comparing keyword, vector, hybrid, and semantic search, and which of them generally gives the highest relevance depending on the scenario.
If the fields from your SQL database are already indexed in Cognitive Search, are marked as retrievable, and are small enough to fit within the input limits of the GPT models, then you should be able to point the Azure OpenAI Service "on your data" feature at your existing index as the source and test it. That same feature lets you switch between keyword, vector, and semantic search so you can compare the responses for each configuration.
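You can also compare retrieval quality yourself before wiring the index into "on your data". The sketch below is only illustrative and assumes the azure-search-documents Python SDK; the endpoint, key, index name, field names, and semantic configuration name are placeholders. It runs the same question as a plain keyword query and as a semantic query against the existing index:

```python
# Hypothetical comparison of keyword vs. semantic retrieval against an
# existing Cognitive Search index, before passing the top results to the model.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="articles-index",              # placeholder index name
    credential=AzureKeyCredential("<query-key>"),
)

question = "How do I configure feature X?"

# Plain keyword (BM25) search.
keyword_hits = search_client.search(search_text=question, top=3)

# Semantic ranking over the same index (requires a semantic configuration
# to be defined on the index).
semantic_hits = search_client.search(
    search_text=question,
    query_type="semantic",
    semantic_configuration_name="default",   # placeholder configuration name
    top=3,
)

# Concatenate the retrieved passages into the grounding context for the prompt.
context = "\n\n".join(doc["content"] for doc in semantic_hits)
```

Pure vector or hybrid queries would additionally require a vector field and embeddings in the index, which is what the vectorised option in the benchmarks above assumes.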
Upvotes: 0