Reputation: 702
I am building a ReactJS app without a backend. I am using the Azure OpenAI service via LangChainJS/LangGraphJS.
Though I can ground the model and provide system prompts using JavaScript and LangChain, making sure the model only answers questions related to my web app, anyone with a network inspector could find the chat completion endpoint, invoke it directly, and get answers to any generic question.
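To illustrate why client-side grounding is bypassable: the system prompt is just the first element of the `messages` array in the request JSON, so anyone who captures the endpoint can send a body without it. A minimal sketch (the function name, prompt text, and app name are hypothetical, not from LangChain itself):

```javascript
// Hypothetical sketch: what a LangChain-style client effectively puts on the
// wire -- the "grounding" is just part of the request payload.
function buildChatRequest(userQuestion) {
  return {
    messages: [
      // Client-side system prompt: visible (and replaceable) in any
      // network inspector, since it travels with every request.
      { role: "system", content: "Only answer questions about MyWebApp." },
      { role: "user", content: userQuestion },
    ],
  };
}

// An attacker who finds the endpoint can simply send their own body,
// with no system message at all:
const attackerBody = {
  messages: [{ role: "user", content: "Who is Tom Cruise?" }],
};
```

Nothing in the browser can stop the second request, which is why the restriction has to live server-side (or at the edge).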
I am not worried about the Azure OpenAI API key being exposed, as I can use URL Rewrite and Akamai to hide the actual Azure OpenAI endpoint.
I would like to add meta/system prompts to the model at the Azure AI Foundry deployment level, instead of using LangChain-based system prompts. This would ensure that the chat completion API adheres to the system prompts provided in Foundry and only answers based on my web app's specific knowledge, rather than answering generic questions like "Who is Tom Cruise?"
Upvotes: -4
Views: 62
Reputation: 388
There are only a few options I can think of:
1. Write logic in Akamai to add the system prompt (add it, or overwrite whatever system prompt came from the client side). Similar to this example, just add some JS logic to change the request body JSON: https://techdocs.akamai.com/edgeworkers/docs/transform-response-content
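The core of that EdgeWorker would be a body transform like the sketch below. This is just the rewrite logic as a plain function (the EdgeWorkers request/response streaming wiring is omitted, and the prompt text is an illustrative assumption):

```javascript
// Assumed prompt text -- replace with your real grounding instructions.
const ENFORCED_SYSTEM_PROMPT =
  "You only answer questions about MyWebApp. Refuse anything else.";

// Rewrite the chat-completions request body at the edge: strip any
// system messages the client supplied, then prepend the enforced one,
// so a tampered request cannot override the grounding.
function enforceSystemPrompt(requestBody) {
  const body = JSON.parse(requestBody);
  const nonSystem = (body.messages || []).filter((m) => m.role !== "system");
  body.messages = [
    { role: "system", content: ENFORCED_SYSTEM_PROMPT },
    ...nonSystem,
  ];
  return JSON.stringify(body);
}
```

Removing client-sent system messages (rather than only prepending) matters: otherwise a caller could append their own "ignore previous instructions" system message after yours.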
2. If Akamai can't do that, Azure API Management can definitely change the request body, but at some extra cost.
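In APIM this would be a `set-body` policy on the inbound section. A rough sketch under the same assumption (hypothetical prompt text; Newtonsoft `JObject`/`JArray` are available in APIM policy expressions):

```xml
<inbound>
    <base />
    <set-body>@{
        var body = context.Request.Body.As<JObject>(preserveContent: true);
        var messages = body["messages"] as JArray ?? new JArray();
        // Drop client-supplied system messages, then prepend the enforced one.
        var kept = new JArray(messages.Where(m => (string)m["role"] != "system"));
        kept.Insert(0, new JObject {
            ["role"] = "system",
            ["content"] = "You only answer questions about MyWebApp."
        });
        body["messages"] = kept;
        return body.ToString();
    }</set-body>
</inbound>
```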
Upvotes: 0