---
license: mit
dataset_info:
  features:
  - name: image
    dtype: image
  - name: conditioning_image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 1087163166.621
    num_examples: 29487
  - name: test
    num_bytes: 18131154.0
    num_examples: 500
  download_size: 1089858259
  dataset_size: 1105294320.621
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
# Dataset Card for Control-CelebA-HQ
## Overview
**Dataset Name**: Control-CelebA-HQ
**Description**: Control-CelebA-HQ is an enhanced version of the CelebA-HQ dataset, designed specifically for evaluating the control ability of controllable generative models. It was introduced in the NeurIPS 2023 paper "Controlling Text-to-Image Diffusion by Orthogonal Finetuning (OFT)", where it serves as the benchmark for measuring control consistency.
**Dataset Type**: Generative Model, Controllable Generation, PEFT
**Official Page**: https://oft.wyliu.com/
## Dataset Structure
**Data Format**: Face images paired with facial-landmark conditioning images and text captions (`image`, `conditioning_image`, `text`)
**Size**: Training set - 29,487 images; test set - 500 images
**Resolution**: High Quality (CelebA-HQ standard)
**Attributes**: Facial features with color-coded facial landmarks for controllable generation
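
A minimal loading sketch with 🤗 Datasets is shown below. The repository ID `oftverse/control-celeba-hq` is an assumption; substitute the actual ID of this dataset page if it differs.

```python
from datasets import load_dataset

# Repository ID is assumed; replace with this dataset page's actual ID if needed.
ds = load_dataset("oftverse/control-celeba-hq")

print(ds)  # DatasetDict with 'train' (29,487 examples) and 'test' (500 examples) splits

example = ds["train"][0]
example["image"].show()               # original CelebA-HQ face image (PIL.Image)
example["conditioning_image"].show()  # color-coded facial-landmark control image (PIL.Image)
print(example["text"])                # accompanying text caption
```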
## Data Collection and Preparation
**Source**: Derived from the CelebA-HQ dataset
**Collection Method**: Original CelebA-HQ images are processed with a standard face-alignment tracker (https://github.com/1adrianb/face-alignment) to detect facial landmarks, which are rendered as the color-coded conditioning images (see the sketch after this list)
**Data Split**: 29.5k images for training, 500 images for testing
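
For reference, below is a minimal sketch of landmark detection with the face-alignment tracker linked above. The exact `LandmarksType` enum name varies across library versions, so treat the constructor arguments as an assumption to adapt to your install.

```python
import face_alignment
import numpy as np
from PIL import Image

# 2D landmark detector; older face-alignment releases use LandmarksType._2D,
# newer ones LandmarksType.TWO_D -- adjust to your installed version.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType.TWO_D, device="cpu")

img = np.array(Image.open("face.jpg").convert("RGB"))
landmarks = fa.get_landmarks(img)  # list of (68, 2) arrays, one per detected face
if landmarks:
    print(landmarks[0].shape)      # (68, 2)
```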
## Dataset Use and Access
**Recommended Uses**: Training and testing controllable generative models, particularly in the context of facial image generation with landmark-based control
**User Guidelines**: Train models on the training split using the facial-landmark conditioning images as control signals. For evaluation, generate images from the test-split landmarks and measure the control-consistency error between the input landmarks and the landmarks detected in the generated images. Please cite the OFT paper when using this dataset and protocol.
**Note:** An example usage and evaluation script will be released soon in the Hugging Face PEFT and Diffusers examples. Stay tuned :D
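
Until the official example lands, the sketch below illustrates the evaluation idea described above: re-detect landmarks on each generated image and compare them with the input control landmarks. The metric shown (mean L2 distance over the 68 landmark points) is an illustrative assumption, not necessarily the paper's exact implementation.

```python
import numpy as np
import face_alignment

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType.TWO_D, device="cpu")

def control_consistency_error(input_landmarks, generated_image):
    """Mean L2 distance between the input control landmarks (68, 2) and the
    landmarks re-detected on the generated image (illustrative metric only)."""
    detected = fa.get_landmarks(np.asarray(generated_image))
    if not detected:
        return float("nan")  # no face found in the generated image
    return float(np.linalg.norm(detected[0] - input_landmarks, axis=1).mean())
```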
**Citation**:
```
@InProceedings{Qiu2023OFT,
  title={Controlling Text-to-Image Diffusion by Orthogonal Finetuning},
  author={Qiu, Zeju and Liu, Weiyang and Feng, Haiwen and Xue, Yuxuan and Feng, Yao and Liu, Zhen and Zhang, Dan and Weller, Adrian and Schölkopf, Bernhard},
  booktitle={NeurIPS},
  year={2023}
}
```