Satyam Raj

Reputation: 343

Google Gemini API error - Content generation stopped. Reason: SAFETY

I'm building an Android app using the Google Gemini API.

When I prompt "what is 58+78", I get the correct output. But when I prompt "what is 2+2", the app crashes, and Logcat says: Content generation stopped. Reason: SAFETY.

Complete Logcat:

FATAL EXCEPTION: main
Process: com.example.gemniapi, PID: 22751
com.google.ai.client.generativeai.type.ResponseStoppedException: Content generation stopped. Reason: SAFETY
 at com.google.ai.client.generativeai.GenerativeModel.validate(GenerativeModel.kt:193)
 at com.google.ai.client.generativeai.GenerativeModel.generateContent(GenerativeModel.kt:86)
 at com.google.ai.client.generativeai.GenerativeModel$generateContent$1.invokeSuspend(Unknown Source:15)
 at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
 at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:108)
 at kotlinx.coroutines.EventLoop.processUnconfinedEvent(EventLoop.common.kt:68)
 at io.ktor.util.pipeline.SuspendFunctionGun.resumeRootWith(SuspendFunctionGun.kt:135)
 at io.ktor.util.pipeline.SuspendFunctionGun.loop(SuspendFunctionGun.kt:109)
 at io.ktor.util.pipeline.SuspendFunctionGun.access$loop(SuspendFunctionGun.kt:11)
 at io.ktor.util.pipeline.SuspendFunctionGun$continuation$1.resumeWith(SuspendFunctionGun.kt:59)
 at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
 at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:108)
 at android.os.Handler.handleCallback(Handler.java:983)
 at android.os.Handler.dispatchMessage(Handler.java:99)
 at android.os.Looper.loopOnce(Looper.java:226)
 at android.os.Looper.loop(Looper.java:328)
 at android.app.ActivityThread.main(ActivityThread.java:9155)
 at java.lang.reflect.Method.invoke(Native Method)
 at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:586)
 at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1099)
 Suppressed: kotlinx.coroutines.internal.DiagnosticCoroutineContextException:
  [StandaloneCoroutine{Cancelling}@e794c8a, Dispatchers.Main.immediate]

I can't understand what the SAFETY concern is here.
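
For reference, the crash itself is just this ResponseStoppedException going uncaught; a minimal sketch, assuming the official Kotlin client from the stack trace, of catching it so the app fails gracefully instead:

    import com.google.ai.client.generativeai.type.ResponseStoppedException

    // Sketch: catch the safety stop instead of letting it crash the app.
    // `generativeModel` stands in for whatever GenerativeModel the app already builds.
    suspend fun askSafely(prompt: String): String {
        return try {
            generativeModel.generateContent(prompt).text ?: "(empty response)"
        } catch (e: ResponseStoppedException) {
            "Blocked by safety filters: ${e.message}"
        }
    }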

Upvotes: 5

Views: 2775

Answers (3)

F.Mysir

Reputation: 4176

For Kotlin on Android, as in the question, you can use something like this:

    import com.google.ai.client.generativeai.GenerativeModel
    import com.google.ai.client.generativeai.type.BlockThreshold
    import com.google.ai.client.generativeai.type.HarmCategory
    import com.google.ai.client.generativeai.type.SafetySetting
    import com.google.ai.client.generativeai.type.generationConfig

    // Turn off blocking for each harm category (use with care).
    private val dangerousContent = SafetySetting(HarmCategory.DANGEROUS_CONTENT, BlockThreshold.NONE)
    private val sexuallyExplicit = SafetySetting(HarmCategory.SEXUALLY_EXPLICIT, BlockThreshold.NONE)
    private val hateSpeech = SafetySetting(HarmCategory.HATE_SPEECH, BlockThreshold.NONE)
    private val harassment = SafetySetting(HarmCategory.HARASSMENT, BlockThreshold.NONE)

    private val config = generationConfig {
        temperature = 0.8f
    }

    private val generativeModel = GenerativeModel(
        modelName = "gemini-1.0-pro",
        apiKey = YOUR_API_KEY,  // replace with your own key
        generationConfig = config,
        safetySettings = listOf(dangerousContent, sexuallyExplicit, hateSpeech, harassment)
    )
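
With all four filters set to BlockThreshold.NONE, the prompt from the question should come back normally. A rough usage sketch of the model configured above (generateContent is a suspend function, so call it from a coroutine):

    // Illustrative usage of `generativeModel` defined above.
    suspend fun ask(prompt: String): String? {
        val response = generativeModel.generateContent(prompt)
        return response.text  // null if the model returned no text
    }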

You can also play with the temperature:

Temperature = 0: The model always picks the most likely token at each step, making the output highly deterministic (no variation). Useful for factual or exact outputs.

Temperature = 1: The model generates responses that are more varied and closer to the raw probability distribution, balancing creativity and coherence.

Temperature > 1: The model takes even more risks, often producing more unpredictable, creative, or experimental results. This increases the chances of less relevant or unusual responses.
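
For exact, factual outputs like the arithmetic in the question, a config pinned to temperature 0 is the illustrative extreme:

    // Deterministic output: the model always picks the most likely token.
    private val factualConfig = generationConfig {
        temperature = 0.0f
    }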

Always take Google's AI Principles into consideration.

Upvotes: 1

Dinura Dissanayake

Reputation: 75

The Gemini API has adjustable safety settings. If your prompt triggers this error, check why it was blocked and set the filters appropriately.

import google.generativeai as genai

genai.configure(api_key='YOUR_API_KEY')
model = genai.GenerativeModel('gemini-pro')

# String shorthand for the safety settings; BLOCK_NONE disables each filter.
response = model.generate_content(
    unsafe_prompt,  # the prompt that was being blocked
    safety_settings={
        'HATE': 'BLOCK_NONE',
        'HARASSMENT': 'BLOCK_NONE',
        'SEXUAL': 'BLOCK_NONE',
        'DANGEROUS': 'BLOCK_NONE'
    })
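
Translated to the Kotlin client from the question, inspecting why the prompt was blocked might look like this (a sketch; the response/candidate properties are assumed from com.google.ai.client.generativeai.type):

    // Sketch: log the finish reason and per-category safety ratings
    // before deciding whether to relax any filters.
    // `generativeModel` and `prompt` are the ones from the question.
    try {
        println(generativeModel.generateContent(prompt).text)
    } catch (e: ResponseStoppedException) {
        e.response.candidates.forEach { candidate ->
            println("finishReason: ${candidate.finishReason}")
            candidate.safetyRatings.forEach { rating ->
                println("${rating.category}: ${rating.probability}")
            }
        }
    }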

More details about safety settings can be found in this Colab Notebook

Upvotes: 0

Viswanath Kumar Sandu

Reputation: 2274

When using the Gemini API, we need to take care of the SAFETY settings. These settings are used to keep dangerous/inappropriate content from being delivered. You can add them like below:

import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

model = genai.GenerativeModel(model_name='gemini-pro-vision')

# Block these categories even at low probability; img is a PIL image.
response = model.generate_content(
    ['Do these look store-bought or homemade?', img],
    safety_settings={
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    }
)
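
The same thresholds exist in the Android Kotlin SDK from the question; a text-only equivalent sketch (SafetySetting, HarmCategory, and BlockThreshold come from com.google.ai.client.generativeai.type):

    // Sketch: block HATE_SPEECH and HARASSMENT even at low probability.
    val model = GenerativeModel(
        modelName = "gemini-1.0-pro",
        apiKey = YOUR_API_KEY,
        safetySettings = listOf(
            SafetySetting(HarmCategory.HATE_SPEECH, BlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.HARASSMENT, BlockThreshold.LOW_AND_ABOVE)
        )
    )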

Ref: for all the available safety settings, see the documentation: https://ai.google.dev/docs/safety_setting_gemini

Upvotes: 0
