
CodeLlama-7B-ProXMath

ArXiv | Data: OpenWebMath-Pro | Code

CodeLlama-7B-ProXMath is a math-adapted language model, continually pre-trained from CodeLlama-7B on OpenWebMath-Pro (a version of OpenWebMath refined by the ProX pipeline) for 10B tokens.

Evaluations

ProX models are evaluated on 9 common math reasoning benchmarks.

| Model | asdiv | gsm8k | mathqa | mawps | minerva_math | mmlu_stem | sat_math | svamp | tabmwp | average |
|---|---|---|---|---|---|---|---|---|---|---|
| CodeLlama-7B | 50.7 | 11.8 | 14.3 | 62.6 | 5.0 | 20.4 | 21.9 | 44.2 | 30.6 | 29.1 |
| CodeLlama-7B-ProXMath | 67.9 | 35.6 | 38.9 | 82.7 | 17.6 | 42.6 | 62.5 | 55.8 | 41.3 | 49.4 |
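As a sanity check, the reported averages can be recomputed directly from the per-benchmark scores in the table above (a minimal sketch; the score lists are transcribed from the table):

```python
# Per-benchmark accuracies, in the table's column order:
# asdiv, gsm8k, mathqa, mawps, minerva_math, mmlu_stem, sat_math, svamp, tabmwp
scores = {
    "CodeLlama-7B": [50.7, 11.8, 14.3, 62.6, 5.0, 20.4, 21.9, 44.2, 30.6],
    "CodeLlama-7B-ProXMath": [67.9, 35.6, 38.9, 82.7, 17.6, 42.6, 62.5, 55.8, 41.3],
}

for model, vals in scores.items():
    avg = sum(vals) / len(vals)  # unweighted mean over the 9 benchmarks
    print(f"{model}: {avg:.1f}")
# CodeLlama-7B: 29.1
# CodeLlama-7B-ProXMath: 49.4
```

Both averages match the table's final column, so the reported gain is roughly +20.3 points on the unweighted 9-benchmark mean.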

Citation

@misc{TBD
}
