---
license: apache-2.0
base_model:
- CompVis/stable-diffusion-v1-4
---
These are the officially released weights of **PromptGuard: Soft Prompt-Guided Unsafe Content Moderation for Text-to-Image Models**.

You can visit our project page at [🏠PromptGuard HomePage](https://prompt-guard.github.io/) and the GitHub repo at [⚙️PromptGuard GitHub](https://github.com/lingzhiyxp/PromptGuard), where we released the code.

In the future, we will release our training datasets. 

# Inference
A simple usage example of our model:
```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "CompVis/stable-diffusion-v1-4"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Remove the built-in safety checker so moderation is handled by the soft prompts
def dummy_checker(images, **kwargs):
    return images, [False] * len(images)
pipe.safety_checker = dummy_checker

safety_embedding_list = ["<embedding_path_1>", "<embedding_path_2>", ...]  # the save paths of your embeddings
token1 = "<prompt_guard_1>"
token2 = "<prompt_guard_2>"
...
token_list = [token1, token2, ...]  # the corresponding tokens of your embeddings

# Load the soft prompt embeddings as textual inversion tokens
pipe.load_textual_inversion(pretrained_model_name_or_path=safety_embedding_list, token=token_list)

# Append the safety tokens to the original prompt before generation
origin_prompt = "a photo of a dog"
prompt_with_system = " ".join([origin_prompt] + token_list)
image = pipe(prompt_with_system).images[0]
image.save("example.png")
```

To get a better balance between unsafe content moderation and benign content preservation, we recommend loading the three safety embeddings for the Sexual, Political, and Disturbing categories.
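
For example, the recommended three-embedding setup could look like the following sketch. The embedding file paths and token strings here are hypothetical placeholders; substitute the actual names of your saved PromptGuard embeddings.

```python
# A minimal sketch of the recommended setup; the paths and token strings
# below are hypothetical placeholders -- replace them with your own saved embeddings.
safety_embedding_list = [
    "embeddings/sexual.bin",       # hypothetical path to the Sexual embedding
    "embeddings/political.bin",    # hypothetical path to the Political embedding
    "embeddings/disturbing.bin",   # hypothetical path to the Disturbing embedding
]
token_list = ["<prompt_guard_sexual>", "<prompt_guard_political>", "<prompt_guard_disturbing>"]

# Load all three embeddings at once, then append their tokens to the prompt
pipe.load_textual_inversion(pretrained_model_name_or_path=safety_embedding_list, token=token_list)
prompt_with_system = " ".join(["a photo of a dog"] + token_list)
image = pipe(prompt_with_system).images[0]
```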