Niklas Mohler

Reputation: 111

How to fix NSFW error for Stable Diffusion?

I always get the "Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed." error when using Stable Diffusion, even with the example code given on Hugging Face:

import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda"
token = 'MY TOKEN'


pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16", use_auth_token=token)
pipe = pipe.to(device)

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5).images[0]

image.save("astronaut_rides_horse.png")

Upvotes: 11

Views: 31177

Answers (5)

jemiloii

Reputation: 25749

The pipeline takes a single argument to remove it: safety_checker.

StableDiffusionPipeline.from_pretrained(
    ...,
    safety_checker=None,
)

However, depending on the pipeline you use, you may get a warning if safety_checker is set to None while requires_safety_checker is True.

From pipeline_stable_diffusion_inpaint_legacy.py:

if safety_checker is None and requires_safety_checker:
    logger.warning(f"...")

So you can do this:

StableDiffusionPipeline.from_pretrained(
    ...,
    safety_checker=None,
    requires_safety_checker=False,
)

This also works with from_single_file:

StableDiffusionPipeline.from_single_file(
    ...,
    safety_checker=None,
    requires_safety_checker=False,
)

You can also change it later, if necessary:

pipeline.safety_checker = None
pipeline.requires_safety_checker = False
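
Putting it together with the question's setup, a minimal sketch (same model and prompt as the question, with the safety checker disabled up front) looks like this:

import torch
from diffusers import StableDiffusionPipeline

# Load the pipeline with the safety checker disabled from the start
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
    safety_checker=None,
    requires_safety_checker=False,
)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars", guidance_scale=7.5).images[0]
image.save("astronaut_rides_horse.png")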

Upvotes: 15

Hassan ALi

Reputation: 1331

Folks, I'm having the same issue: the lambda approach throws a TypeError because a 'bool' object is not iterable, and with the other suggested syntax, Stable Diffusion still returns black images when they would be NSFW. This is running in Google Colab.

pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    safety_checker=None,
    requires_safety_checker=False,
)

I did the following, and it worked for me:

from diffusers.pipelines.stable_diffusion import safety_checker

def sc(self, clip_input, images):
    return images, [False for _ in images]

# Patch the StableDiffusionSafetyChecker class so that, when called, it just
# returns the images unchanged and a list of False values (nothing flagged as NSFW)
safety_checker.StableDiffusionSafetyChecker.forward = sc
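
Because this replaces forward on the checker class itself, it applies to any pipeline that still loads the stock safety checker; just make sure the patch runs before you generate images.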

Upvotes: 0

tripleee

Reputation: 189789

If you don't want to disable the NSFW check, try rephrasing the prompt to work around the false positive.

Without having tried it, I would suggest replacing "riding" with something more explicitly safe, like "sitting on the back of".
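
A quick sketch of that idea, reusing pipe from the question and varying the seed via the generator argument (the reworded prompt here is only an illustration):

import torch

# Try the reworded prompt with a few seeds; a seeded generator makes each run reproducible
prompt = "a photo of an astronaut sitting on the back of a horse on mars"
for seed in (0, 1, 2):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, guidance_scale=7.5, generator=generator).images[0]
    image.save(f"attempt_{seed}.png")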

Upvotes: 0

nullforce

Reputation: 1161

This covers a bit of what the checker does: https://vickiboykis.com/2022/11/18/some-notes-on-the-stable-diffusion-safety-filter/

If you want to simply disable it, you can now set the safety_checker argument to None (you no longer have to modify the library source):

StableDiffusionPipeline.from_pretrained(
    ...,
    safety_checker=None,
)

Upvotes: 3

PRANIT PURI

Reputation: 1

Depending on your use case, you can simply edit the run_safety_checker function in pipeline_stable_diffusion (img2img or txt2img) so that it does nothing. You can alter the function this way:

def run_safety_checker(self, image, device, dtype):
    has_nsfw_concept = None
    return image, has_nsfw_concept
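
If you would rather not edit the installed library source, the same effect can be sketched by overriding the method on a pipeline instance instead (assuming pipe is an already-loaded StableDiffusionPipeline whose run_safety_checker has the signature shown above):

import types

def run_safety_checker(self, image, device, dtype):
    # Skip the check and report that nothing was flagged
    has_nsfw_concept = None
    return image, has_nsfw_concept

# Bind the replacement to this pipeline object only; the diffusers source stays untouched
pipe.run_safety_checker = types.MethodType(run_safety_checker, pipe)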

Upvotes: 0
