---
library_name: peft
language:
- ru
pipeline_tag: text-generation
---
Use in the same way as IlyaGusev/saiga2_7b_lora.

WARNING! Load the tokenizer with `AutoTokenizer.from_pretrained(model_path, use_fast=True)`.
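
A minimal usage sketch, assuming the adapter is applied with PEFT on top of its base model in the same way as IlyaGusev/saiga2_7b_lora; `MODEL_PATH` is a hypothetical placeholder for this repository's id, and the generation settings are only illustrative:

```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/this/adapter"  # hypothetical placeholder, replace with the repo id

# The base model id is taken from the adapter config.
config = PeftConfig.from_pretrained(MODEL_PATH)

# IMPORTANT: use the fast tokenizer, as noted above.
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, use_fast=True)

base_model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, MODEL_PATH, torch_dtype=torch.float16)
model.eval()

prompt = "Привет! Расскажи коротко о себе."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```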

Up to 60% faster generation and 35% faster training (measured on identical Russian text sequences) with Hugging Face Transformers, thanks to the replaced tokenizer.
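
The speedup comes from the replaced tokenizer producing fewer tokens per Russian sequence, so each generation or training step processes shorter inputs. A rough sketch for checking this on your own text; both model ids below are assumptions used purely for comparison:

```python
from transformers import AutoTokenizer

text = "Москва является столицей России и крупнейшим городом страны."

# hypothetical ids: replace with this repo's id and the original base model
adapted = AutoTokenizer.from_pretrained("path/to/this/adapter", use_fast=True)
original = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", use_fast=True)

print("adapted tokens: ", len(adapted(text)["input_ids"]))
print("original tokens:", len(original(text)["input_ids"]))
# Fewer tokens per sequence means proportionally fewer decoding steps,
# which is where the generation and training speedups come from.
```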

Colab: https://colab.research.google.com/drive/109ZhEB6STy-0jO-Z_4ttkWr1jg_FCTRW?usp=sharing

Paper: Tikhomirov M., Chernyshev D. Impact of Tokenization on LLaMa Russian Adaptation // arXiv preprint arXiv:2312.02598, 2023.

## Model description

Instruction-tuned version (trained on the Saiga datasets) of a Russian adaptation of LLaMa-2-7B obtained by replacing the tokenizer.
Paper: Tikhomirov M., Chernyshev D. Impact of Tokenization on LLaMa Russian Adaptation // arXiv preprint arXiv:2312.02598, 2023.