---
license: apache-2.0
base_model:
- CompVis/stable-diffusion-v1-4
---
|
|
Here are the officially released weights of **PromptGuard: Soft Prompt-Guided Unsafe Content Moderation for Text-to-Image Models**.

You can check our project page at [🏠PromptGuard HomePage](https://prompt-guard.github.io/) and the GitHub repo at [⚙️PromptGuard GitHub](https://github.com/lingzhiyxp/PromptGuard), where we have released the code.

In the future, we will release our training datasets.
|
|
|
|
|
# Inference

A simple use case of our model:
|
|
```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "CompVis/stable-diffusion-v1-4"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Disable the built-in safety checker so moderation is handled by the PromptGuard embeddings
def dummy_checker(images, **kwargs):
    return images, [False] * len(images)

pipe.safety_checker = dummy_checker

safety_embedding_list = ["${embedding_path_1}", "${embedding_path_2}", ...]  # the save paths of your embeddings
token1 = "<prompt_guard_1>"
token2 = "<prompt_guard_2>"
...
token_list = [token1, token2, ...]  # the corresponding tokens of your embeddings

# Load each safety embedding as a textual-inversion token
pipe.load_textual_inversion(pretrained_model_name_or_path=safety_embedding_list, token=token_list)

origin_prompt = "a photo of a dog"
# Append the safety tokens to the original prompt
prompt_with_system = origin_prompt + " " + " ".join(token_list)

image = pipe(prompt_with_system).images[0]
image.save("example.png")
```
|
|
|
|
|
To achieve a better balance between unsafe content moderation and benign content preservation, we recommend loading the three safety embeddings for the Sexual, Political, and Disturbing categories.
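
As a minimal sketch of that recommendation, the snippet below loads only those three embeddings into the pipeline from the first example. The file paths and token strings here are placeholders for illustration; substitute the actual embedding files and token names released with PromptGuard.

```python
# Hypothetical paths and tokens for illustration only; replace them with the
# PromptGuard embedding files and token names you downloaded from this repository.
safety_embedding_list = [
    "./embeddings/sexual.bin",
    "./embeddings/political.bin",
    "./embeddings/disturbing.bin",
]
token_list = [
    "<prompt_guard_sexual>",
    "<prompt_guard_political>",
    "<prompt_guard_disturbing>",
]

# Register the three safety embeddings as textual-inversion tokens
pipe.load_textual_inversion(pretrained_model_name_or_path=safety_embedding_list, token=token_list)

# Append the safety tokens to the user prompt before generation
prompt = "a photo of a dog" + " " + " ".join(token_list)
image = pipe(prompt).images[0]
image.save("example_three_embeddings.png")
```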