patrickvonplaten committed
Commit 597ad97
1 Parent(s): d4af770

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -13,14 +13,14 @@ inference: false
 Latent Consistency Model (LCM) LoRA was proposed in [LCM-LoRA: A universal Stable-Diffusion Acceleration Module](TODO:)
 by *Simian Luo, Yiqin Tan, Suraj Patil, Daniel Gu et al.*
 
-It is a distilled consistency adapter for [`stable-diffusion-xl-base-1.0`](stabilityai/stable-diffusion-xl-base-1.0) that allows
+It is a distilled consistency adapter for [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) that allows
 to reduce the number of inference steps to only between **2 - 8 steps**.
 
 | Model | Params / M |
 |----------------------------------------------------------------------------|------------|
-| [lcm-lora-sdv1-5](https://huggingface.co/latent-consistency/lcm-lora-sdv1-5) | 67.5 |
+| [**lcm-lora-sdv1-5**](https://huggingface.co/latent-consistency/lcm-lora-sdv1-5) | **67.5** |
 | [lcm-lora-ssd-1b](https://huggingface.co/latent-consistency/lcm-lora-ssd-1b) | 105 |
-| [**lcm-lora-sdxl**](https://huggingface.co/latent-consistency/lcm-lora-sdxl) | **197M** |
+| [lcm-lora-sdxl](https://huggingface.co/latent-consistency/lcm-lora-sdxl) | 197M |
 
 ## Usage
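The changed paragraph describes attaching the distilled consistency adapter to `runwayml/stable-diffusion-v1-5` so that inference needs only 2 - 8 steps. The snippet below is a minimal sketch of that workflow (not part of this commit), assuming a recent `diffusers` release with `LCMScheduler` and PEFT-backed `load_lora_weights`; the prompt, step count, and guidance scale are illustrative only.

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Load the base model named in the README change
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Swap in the LCM scheduler and attach the distilled consistency LoRA adapter
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# With the adapter active, a handful of steps suffices; a low guidance scale
# is commonly used (these values are illustrative, not taken from the commit)
image = pipe(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("astronaut.png")
```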