---
model-index:
  - name: llama-3-tulu-2-8b-uf-mean-rm
    results: []
datasets:
  - allenai/tulu-2.5-preference-data
  - allenai/tulu-v2-sft-mixture
language:
  - en
base_model: allenai/llama-3-tulu-2-8b
license: apache-2.0
---
*Tulu 2.5 banner image*

# Model Card for Llama 3 Tulu V2 8B RM - UltraFeedback

Tulu is a series of language models that are trained to act as helpful assistants. This is an 8B reward model trained on the UltraFeedback dataset and used during PPO training.

For more details, read the paper: [Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback](https://arxiv.org/abs/2406.09279).

Built with Meta Llama 3! Note that Llama 3 is released under the Meta Llama 3 community license, included here under `llama_3_license.txt`.

## Performance

We evaluate the model on [RewardBench](https://github.com/allenai/reward-bench):

| Model                                 | Score | Chat | Chat Hard | Safety | Reasoning |
|---------------------------------------|-------|------|-----------|--------|-----------|
| Llama 3 Tulu 2 8b UF RM (this model)  | 73.6  | 95.3 | 59.2      | 57.9   | 82.1      |
| Llama 3 Tulu 2 70b UF RM              | 71.0  | 86.3 | 56.1      | 58.9   | 82.7      |

## Model description

- **Model type:** A reward model trained on UltraFeedback, designed to be used in RLHF training.
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** [allenai/llama-3-tulu-2-8b](https://huggingface.co/allenai/llama-3-tulu-2-8b)

## Model Sources

## Input Format

The model is trained to use the following format (note the newlines):

```
<|user|>
Your message here!
<|assistant|>
```

For best results, format all inputs in this manner. Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit. We have included a chat template in the tokenizer implementing this format, as shown in the sketch below.
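As a concrete illustration, here is a minimal sketch of building this input with the bundled chat template via `transformers`; the model ID is taken from this card, and the exact template string should be confirmed in the tokenizer config:

```python
# Minimal sketch: apply the chat template shipped with the tokenizer.
# Assumes the hub tokenizer implements the <|user|>/<|assistant|> format above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/llama-3-tulu-2-8b-uf-mean-rm")

messages = [{"role": "user", "content": "Your message here!"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# Expected output, per the format above:
# <|user|>
# Your message here!
# <|assistant|>
```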

## Intended uses & limitations

The model was initially fine-tuned on a filtered and preprocessed version of the [Tulu V2 mix dataset](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs. We then further trained the model with a Jax RM trainer built on EasyLM on the preference dataset mentioned above. This model is meant as a research artefact.
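Since this is a reward model rather than a generator, typical usage is to score candidate responses. The following is a hedged sketch, assuming the converted checkpoint loads as a `transformers` sequence-classification model with a single scalar head (check the repo's `config.json` before relying on this):

```python
# Sketch: score one prompt/response pair with the reward model.
# Assumption: the checkpoint exposes a scalar sequence-classification head
# (num_labels=1); if not, load it with the authors' EasyLM-based tooling instead.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "allenai/llama-3-tulu-2-8b-uf-mean-rm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
)
model.eval()

# Use the input format described above, including the trailing newline.
text = "<|user|>\nWhat is the capital of France?\n<|assistant|>\nParis.\n"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits[0, 0].item()  # higher means preferred
print(reward)
```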

## Training hyperparameters

The following hyperparameters were used during reward model training (a sketch of the objective they drive follows the list):

- learning_rate: 1e-06
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear cooldown to 1e-05
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
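For context, these settings drive a standard pairwise (Bradley-Terry) reward-modeling objective. The snippet below is an illustrative sketch of that loss, not the authors' EasyLM trainer:

```python
# Illustrative sketch of the pairwise reward-modeling loss:
# maximize the log-probability that the chosen response outscores the rejected one.
import torch
import torch.nn.functional as F

def rm_loss(chosen_rewards: torch.Tensor, rejected_rewards: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(r_chosen - r_rejected), averaged over the batch
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Example: a batch of three preference pairs
loss = rm_loss(torch.tensor([1.2, 0.3, 2.0]), torch.tensor([0.8, 0.5, -1.0]))
print(loss)
```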

## Citation

If you find Tulu 2.5 useful in your work, please cite it with:

```bibtex
@misc{ivison2024unpacking,
      title={{Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback}},
      author={Hamish Ivison and Yizhong Wang and Jiacheng Liu and Ellen Wu and Valentina Pyatkin and Nathan Lambert and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi},
      year={2024},
      eprint={2406.09279},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```