---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true

---

# Anime Diffusion

Anime Diffusion is a latent text-to-image diffusion model trained to generate anime-style images from text prompts.


## Gradio & Colab
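
No hosted demo is linked in this card; a minimal Gradio wrapper around the pipeline from the example code below might look like the following sketch (not an official demo; it assumes a CUDA device and reuses the model id from the example code section):

```python
import gradio as gr
import torch
from diffusers import StableDiffusionPipeline

# Load the pipeline once at startup (model id taken from the example code below).
pipe = StableDiffusionPipeline.from_pretrained(
    "AlexWortega/AnimeDiffuion",
    torch_dtype=torch.float16,
).to("cuda")


def generate(prompt: str, negative_prompt: str):
    # Run a single 512x512 generation and return the resulting PIL image.
    with torch.inference_mode():
        result = pipe(prompt,
                      negative_prompt=negative_prompt or None,
                      height=512, width=512,
                      num_inference_steps=50,
                      guidance_scale=8)
    return result.images[0]


demo = gr.Interface(
    fn=generate,
    inputs=[gr.Textbox(label="Prompt"), gr.Textbox(label="Negative prompt")],
    outputs=gr.Image(label="Result"),
    title="Anime Diffusion",
)

if __name__ == "__main__":
    demo.launch()
```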


## Model Description

## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies: 

1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights over the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in the license.
3. You may redistribute the weights and use the model commercially and/or as a service. If you do, be aware that you must include the same use restrictions as in the license and share a copy of the CreativeML OpenRAIL-M license with all your users (please read the license entirely and carefully).

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)

## Downstream Uses

This model can be used for entertainment purposes and as a generative art assistant.

## Example Code

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the pipeline and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    'AlexWortega/AnimeDiffuion',
    torch_dtype=torch.float32,
).to('cuda')

prompt = ("1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, "
          "green background, hat, hoop earrings, jewelry, looking at viewer, shirt, "
          "short hair, simple background, solo, upper body, yellow shirt")
negative_prompt = "low-res, duplicate, poorly drawn face, ugly, undetailed face"
num_samples = 1

# Generate num_samples images without tracking gradients.
with torch.inference_mode():
    images = pipe([prompt] * num_samples,
                  negative_prompt=[negative_prompt] * num_samples,
                  height=512, width=512,
                  num_inference_steps=50,
                  guidance_scale=8).images

images[0].save("test.png")
```
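
If you run into GPU memory limits, a common variant (not part of the original card; sketched here assuming a recent `diffusers` release) is to load the weights in half precision, enable attention slicing, and pass a seeded generator for reproducible outputs:

```python
import torch
from diffusers import StableDiffusionPipeline

# Half-precision weights roughly halve VRAM use; attention slicing trades a
# little speed for a smaller peak memory footprint.
pipe = StableDiffusionPipeline.from_pretrained(
    'AlexWortega/AnimeDiffuion',
    torch_dtype=torch.float16,
).to('cuda')
pipe.enable_attention_slicing()

# A fixed seed makes the generation repeatable on the same hardware/version.
generator = torch.Generator(device='cuda').manual_seed(42)

with torch.inference_mode():
    image = pipe("1girl, aqua eyes, baseball cap, blonde hair, yellow shirt",
                 negative_prompt="low-res, duplicate, poorly drawn face",
                 num_inference_steps=50,
                 guidance_scale=8,
                 generator=generator).images[0]

image.save("test_fp16.png")
```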

## Team Members and Acknowledgements

This project would not have been possible without the incredible work by the [CompVis Researchers](https://ommer-lab.com/).

- [Alex Wortega](https://github.com/AlexWortega) (open to work!)


To reach us, join our [Telegram channel](https://t.me/lovedeathtransformers).