Reputation: 2146
I have this piece of code:
import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.api.GenerationConfig;
import com.google.cloud.vertexai.api.SafetySetting;
import com.google.cloud.vertexai.generativeai.preview.GenerativeModel;
import com.google.cloud.vertexai.generativeai.preview.ResponseHandler;
import java.util.Collections;

public static String simpleQuestion(String projectId, String location, String modelName) throws Exception {
    // Initialize the client that will be used to send requests.
    // This client only needs to be created once and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
        GenerativeModel model = new GenerativeModel(modelName, vertexAI);
        model.setGenerationConfig(GenerationConfig.newBuilder().build());
        model.setSafetySettings(Collections.singletonList(
            SafetySetting.newBuilder()
                .setThreshold(SafetySetting.HarmBlockThreshold.BLOCK_NONE)
                .build()));
        GenerateContentResponse response =
            model.generateContent("What can you tell me about Pythagorean theorem");
        return ResponseHandler.getText(response);
    }
}
and sometimes I get this error:
context/6.1.1/spring-context-6.1.1.jar com.mysticriver.service.GoogleGeminiService
Exception in thread "main" java.lang.IllegalArgumentException: The response is blocked due to safety reason.
at com.google.cloud.vertexai.generativeai.preview.ResponseHandler.getText(ResponseHandler.java:46)
even though I have HarmBlockThreshold.BLOCK_NONE in the settings.
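For now I can avoid the crash by checking the first candidate's finish reason before calling ResponseHandler.getText (a rough sketch using the proto-generated Candidate accessors; the fallback string is just a placeholder), but I still want BLOCK_NONE to actually apply:

import com.google.cloud.vertexai.api.Candidate;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.generativeai.preview.ResponseHandler;

// Return the generated text, or a placeholder when the candidate was
// blocked for safety, instead of letting ResponseHandler.getText throw.
static String textOrBlocked(GenerateContentResponse response) {
    if (response.getCandidatesCount() > 0
            && response.getCandidates(0).getFinishReason() == Candidate.FinishReason.SAFETY) {
        return "<response blocked for safety>";
    }
    return ResponseHandler.getText(response);
}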
Upvotes: 8
Views: 4357
Reputation: 23060
You need to set the harm block threshold for a specific harm category. A SafetySetting built without a category defaults to HARM_CATEGORY_UNSPECIFIED, so your BLOCK_NONE threshold never applies to the category that actually blocked the response.
Change this...
SafetySetting.newBuilder()
    .setThreshold(SafetySetting.HarmBlockThreshold.BLOCK_NONE)
    .build()
...to this.
SafetySetting.newBuilder()
    .setCategory(HarmCategory.HARM_CATEGORY_YOUR_CATEGORY)
    .setThreshold(SafetySetting.HarmBlockThreshold.BLOCK_NONE)
    .build()
List of all harm categories:
HARM_CATEGORY_SEXUALLY_EXPLICIT
HARM_CATEGORY_HATE_SPEECH
HARM_CATEGORY_HARASSMENT
HARM_CATEGORY_DANGEROUS_CONTENT
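If you want to relax every filter, one option is to build one setting per category and pass the whole list, as in this sketch (it reuses the model variable from the question; note that BLOCK_NONE disables filtering for that category entirely, so use it deliberately):

import com.google.cloud.vertexai.api.HarmCategory;
import com.google.cloud.vertexai.api.SafetySetting;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Build a BLOCK_NONE setting for each configurable harm category.
List<SafetySetting> safetySettings = new ArrayList<>();
for (HarmCategory category : Arrays.asList(
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        HarmCategory.HARM_CATEGORY_HARASSMENT,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT)) {
    safetySettings.add(SafetySetting.newBuilder()
        .setCategory(category)
        .setThreshold(SafetySetting.HarmBlockThreshold.BLOCK_NONE)
        .build());
}
model.setSafetySettings(safetySettings);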
Upvotes: 7