Reputation: 1314
I'm using Prompt Optimizer on Vertex AI, as outlined in the documentation and demonstrated in this example notebook.
The service optimizes prompt content based on its performance on a given dataset. I’ve successfully used the tool to enhance prompts with the exact_match evaluation metric.
However, now I want to "lock" certain sections of the prompt so that the tool won’t modify these specific parts.
Instead, I'd like the optimizer to adjust only the remaining text outside these fixed sections. I believe the placeholder_to_content argument may be designed for this purpose.
Here’s what the documentation says about this argument:
PLACEHOLDER_TO_CONTENT: the information that replaces any variables in the system instructions. Information included within this flag is not optimized by the Vertex AI prompt optimizer.
Following this, I’ve tried using placeholder_to_content as follows:
import json
from argparse import Namespace

PLACEHOLDER_TO_CONTENT = json.loads("""{
    "part1": "This is the first text that should not be modified by the LLM.",
    "part2": "This is the second text that should not be modified by the LLM."
}""")
SYSTEM_INSTRUCTION_TEMPLATE = """
{{part1}}
{{part2}}
"""
args = Namespace(
    # other arguments
    # ...
    placeholder_to_content=PLACEHOLDER_TO_CONTENT,
    system_instruction=SYSTEM_INSTRUCTION_TEMPLATE,
)
This configuration is then uploaded to a storage bucket and used in the Prompt Optimization Job, as demonstrated in the notebook.
However, when I check the generated template.json in the output, the variables defined with placeholder_to_content don’t seem to be recognized as expected:
{"step": 0, "metrics": {"exact_match/mean": 0.0}, "prompt": "\n {part1}\n {part2}\n "}
I also tried defining SYSTEM_INSTRUCTION_TEMPLATE as follows:
SYSTEM_INSTRUCTION_TEMPLATE = """
{part1}
{part2}
"""
but I still get the same output prompt.
Instead of replacing {part1} and {part2} with the intended text, the output still shows the placeholders. Am I missing something in the configuration? Is placeholder_to_content meant for this type of use case?
Upvotes: 0
Views: 85
Reputation: 178
While Vertex AI Prompt Optimizer doesn't have a direct feature to "lock" specific sections of a prompt, you can achieve the same effect with placeholder_to_content mappings and careful prompt engineering.
Understanding the Approach:
1. Identify the fixed sections: Pinpoint the parts of your prompt that you want to remain constant. These might be essential instructions, specific context, or particular phrasing.
2. Create placeholders: Replace these fixed sections with placeholders. These placeholders will be defined in your placeholder_to_content mapping.
3. Define the placeholders: In your placeholder_to_content mapping, associate each placeholder with its corresponding fixed content.
4. Optimize the variable sections: The optimizer will then focus on refining the remaining parts of the prompt, leaving the locked sections untouched.
Example:
Original Prompt:
Write a creative story about a robot who dreams of becoming a chef. The story should be humorous and include a surprising twist.
Prompt with Placeholders:
{placeholder_1} a creative story about a robot who dreams of becoming a chef. The story should be {placeholder_2} and include a surprising twist.
Placeholder-to-Content Mapping:
{
  "placeholder_1": "Write",
  "placeholder_2": "humorous"
}
In this example, the optimizer is free to rephrase the text outside the placeholders (the robot-chef premise and the surprising-twist requirement), while the content mapped to {placeholder_1} ("Write") and {placeholder_2} ("humorous") is substituted verbatim and left untouched.
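For concreteness, here is a minimal sketch of how such a mapping could be wired into the optimization job, reusing the Namespace-style configuration shown in the question. The placeholder names, the story text, and the remaining arguments are illustrative assumptions; the full set of required arguments comes from the example notebook.

import json
from argparse import Namespace

# Fixed content the optimizer should leave untouched (illustrative values).
PLACEHOLDER_TO_CONTENT = json.loads("""{
    "placeholder_1": "Write",
    "placeholder_2": "humorous"
}""")

# Only the text outside the placeholders is intended for optimization.
SYSTEM_INSTRUCTION_TEMPLATE = (
    "{placeholder_1} a creative story about a robot who dreams of becoming a chef. "
    "The story should be {placeholder_2} and include a surprising twist."
)

args = Namespace(
    # other arguments from the example notebook (project, metric, paths, ...)
    # ...
    placeholder_to_content=PLACEHOLDER_TO_CONTENT,
    system_instruction=SYSTEM_INSTRUCTION_TEMPLATE,
)

This simply mirrors the configuration pattern from the question; it does not by itself guarantee how the service resolves the placeholders in the generated template.json.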
Upvotes: 0