Details on the evaluation with lighteval #22
by amaracani - opened

Hello! I have some questions about the evaluation of models.

  • for the siqa dataset, the acc/acc_norm metrics are not available in lighteval, only the ones from helm (exact_match, quasi_exact_match, prefix_exact_match, prefix_quasi_exact_match). Should I add the task myself, or did you use the helm metrics?
  • for mmlu, did you average the accuracies of the leaderboard tasks (the ones here), or do the results come from lighteval's single mmlu task?
  • for arc, are the results the average of arc-easy and arc-challenge?
  • are all the tasks 0-shot or few-shot?

Thank you for any help :)

guipenedo (HuggingFaceFW org)

Hi! I've just added our full list of lighteval tasks to the repo: https://huggingface.co/datasets/HuggingFaceFW/fineweb/blob/main/lighteval_tasks.py; you should be able to reproduce our results with this file.
Everything was 0-shot.

Let me know if you still have other questions :)
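For context, an entry in a custom lighteval tasks file like the one linked above tends to look roughly like the sketch below. This is only an illustration, not the contents of the linked file: field names and the prompt-function convention vary across lighteval versions, the metric names and the SIQA prompt formatting here are assumptions, and you should defer to the actual lighteval_tasks.py in the repo.

```python
# Illustrative sketch of one custom lighteval task (SIQA with acc/acc_norm).
# Field names follow the custom-task convention of lighteval circa 2024 and
# may differ in your installed version; metric names and prompt formatting
# below are assumptions, not taken from the FineWeb tasks file.
from lighteval.tasks.lighteval_task import LightevalTaskConfig
from lighteval.tasks.requests import Doc


def siqa_prompt(line, task_name: str = None):
    # Build a multiple-choice Doc: context + question as the query,
    # the three answer options as choices, gold label as the index.
    query = f"{line['context']}\nQuestion: {line['question']}\nAnswer:"
    choices = [f" {line['answerA']}", f" {line['answerB']}", f" {line['answerC']}"]
    return Doc(
        task_name=task_name,
        query=query,
        choices=choices,
        gold_index=int(line["label"]) - 1,  # SIQA labels are 1-indexed strings
    )


siqa_task = LightevalTaskConfig(
    name="siqa",
    prompt_function=siqa_prompt,  # some versions expect the function *name* as a string
    suite=["custom"],
    hf_repo="social_i_qa",
    hf_subset="default",
    hf_avail_splits=["train", "validation"],
    evaluation_splits=["validation"],
    metric=["loglikelihood_acc", "loglikelihood_acc_norm"],  # acc / acc_norm
)

# lighteval discovers custom tasks through this module-level table.
TASKS_TABLE = [siqa_task]
```

Such a file is then passed to the lighteval entry point via its custom-tasks argument together with the task names to run; the exact invocation depends on the installed lighteval version, so check its documentation.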

I was able to reproduce the results exactly; I will now use the same evaluation setup for my experiments.
Thank you very much, very helpful :)

guipenedo changed discussion status to closed
