havenfeng committed Update README.md (commit 8a382af, 1 parent: ea5e346)

Files changed: README.md (+31 -0)
README.md CHANGED
@@ -25,3 +25,34 @@ configs:
   - split: test
     path: data/test-*
 ---
# Dataset Card for Control-CelebA-HQ

## Overview
**Dataset Name**: Control-CelebA-HQ
**Description**: Control-CelebA-HQ is an enhanced version of the CelebA-HQ dataset, designed specifically for evaluating the control ability of controllable generative models. It is featured in the NeurIPS 2023 paper "Controlling Text-to-Image Diffusion by Orthogonal Finetuning (OFT)".
**Dataset Type**: Generative Model, Controllable Generation, PEFT
**Official Page**: https://oft.wyliu.com/

## Dataset Structure
**Data Format**: Images with paired facial landmarks
**Size**: Training set: 29.5k images; test set: 500 images
**Resolution**: High quality (CelebA-HQ standard)
**Attributes**: Facial images with color-coded facial landmarks for controllable generation

## Data Collection and Preparation
**Source**: Derived from the CelebA-HQ dataset
**Collection Method**: Original CelebA-HQ images were processed with a standard face-alignment tracker (https://github.com/1adrianb/face-alignment) to detect facial landmarks
**Data Split**: 29.5k images for training, 500 images for testing
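
A minimal sketch of how color-coded landmark control images of this kind can be rendered from 68-point detections. The grouping follows the common 68-point face-alignment convention, but the palette, dot size, and canvas resolution here are illustrative assumptions, not the dataset's exact scheme; `landmarks` is a synthetic array standing in for a real detector output:

```python
import numpy as np

def render_landmark_map(landmarks, size=512):
    """Render (68, 2) pixel-coordinate landmarks as a color-coded RGB image.

    Each landmark group (jaw, brows, nose, eyes, mouth in the 68-point
    convention) gets its own color. Grouping and colors are illustrative.
    """
    groups = [
        (range(0, 17), (255, 0, 0)),     # jaw
        (range(17, 27), (0, 255, 0)),    # eyebrows
        (range(27, 36), (0, 0, 255)),    # nose
        (range(36, 48), (255, 255, 0)),  # eyes
        (range(48, 68), (255, 0, 255)),  # mouth
    ]
    canvas = np.zeros((size, size, 3), dtype=np.uint8)
    for indices, color in groups:
        for i in indices:
            x, y = np.round(landmarks[i]).astype(int)
            if 1 <= x < size - 1 and 1 <= y < size - 1:
                canvas[y - 1 : y + 2, x - 1 : x + 2] = color  # 3x3 dot
    return canvas

# Synthetic landmarks in place of a real face-alignment detection
rng = np.random.default_rng(0)
landmarks = rng.uniform(50, 460, size=(68, 2))
control = render_landmark_map(landmarks)
```

In practice the landmarks would come from running the face-alignment tracker linked above on each CelebA-HQ image, and the rendered map would serve as the conditioning input.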

## Dataset Use and Access
**Recommended Uses**: Training and testing controllable generative models, particularly for facial image generation with landmark-based control
**User Guidelines**: Train models on the training set using the facial landmarks as control signals. For testing, generate images conditioned on the test-set landmarks and evaluate the control consistency error between the input landmarks and the landmarks detected in the generated images. Please cite the OFT paper when using this dataset and evaluation protocol.
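
The evaluation step can be sketched as follows. The distance used here (mean per-landmark L2 in pixel space) is an assumption about the control consistency metric, and the re-detection of landmarks on generated images (e.g. with the face-alignment tracker above) is stubbed out with toy arrays:

```python
import numpy as np

def control_consistency_error(input_landmarks, generated_landmarks):
    """Mean L2 distance (in pixels) between corresponding landmarks.

    input_landmarks: (N, 2) landmarks used as the control signal.
    generated_landmarks: (N, 2) landmarks re-detected on the generated image.
    """
    a = np.asarray(input_landmarks, dtype=float)
    b = np.asarray(generated_landmarks, dtype=float)
    assert a.shape == b.shape, "landmark sets must correspond point-for-point"
    return float(np.linalg.norm(a - b, axis=1).mean())

# Toy example: generated landmarks offset by 3 pixels horizontally
inp = np.zeros((68, 2))
gen = inp + np.array([3.0, 0.0])
err = control_consistency_error(inp, gen)  # 3.0
```

A lower error indicates that the generated faces follow the conditioning landmarks more faithfully.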
**Citation**:
```
@InProceedings{Qiu2023OFT,
  title={Controlling Text-to-Image Diffusion by Orthogonal Finetuning},
  author={Qiu, Zeju and Liu, Weiyang and Feng, Haiwen and Xue, Yuxuan and Feng, Yao and Liu, Zhen and Zhang, Dan and Weller, Adrian and Schölkopf, Bernhard},
  booktitle={NeurIPS},
  year={2023}
}
```