
This model is trained on the Guanaco dataset, using only 49,000 chat samples.
It has improved performance in Chinese and Japanese.
QLoRA was used to fine-tune the vanilla LLaMA2-7B model.
You can use test.py to test the model.
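
Below is a minimal sketch of loading the model with the Hugging Face transformers library; the repository id `your-org/your-model` is a placeholder and should be replaced with this repository's actual id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision fits a 7B model on a single modern GPU
    device_map="auto",          # place layers automatically across available devices
)
```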

Recommended generation parameters (applied in the example below):

- temperature: 0.5~0.7
- top_p: 0.65~1.0
- top_k: 30~50
- repetition_penalty: 1.03~1.17
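
A minimal generation sketch using values from the recommended ranges, assuming `model` and `tokenizer` were loaded as in the snippet above; the prompt format shown is an assumption, not a confirmed template for this model.

```python
prompt = "### Human: Please introduce yourself.\n### Assistant:"  # assumed chat format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,         # recommended range: 0.5~0.7
    top_p=0.9,               # recommended range: 0.65~1.0
    top_k=40,                # recommended range: 30~50
    repetition_penalty=1.1,  # recommended range: 1.03~1.17
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```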

Contributed by Mori Lab, Yokohama National University.
