---
language: vi
tags:
- vi
- vietnamese
- gpt2
- gpt3
- text-generation
- lm
- nlp
datasets:
- wikilinguage
widget:
- text: "Hoa quả và rau thường rẻ hơn khi vào mùa. "

inference:
  parameters:
    max_length: 120
    do_sample: True
    temperature: 1.0
---

# GPT-3 small

A pretrained GPT-Neo model (GPT-3 small) whose architecture intentionally resembles that of GPT-3, trained on a Vietnamese dataset for text generation.

# How to use the model

~~~~
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = GPT2Tokenizer.from_pretrained('minhtoan/gpt3-small-vietnamese')
model = GPTNeoForCausalLM.from_pretrained('minhtoan/gpt3-small-vietnamese')

# Encode the Vietnamese prompt
text = "Hoa quả và rau thường rẻ hơn khi vào mùa"
input_ids = tokenizer.encode(text, return_tensors='pt')
max_length = 100

# Sample a continuation of up to max_length tokens
sample_outputs = model.generate(input_ids, do_sample=True, max_length=max_length)

for i, sample_output in enumerate(sample_outputs):
    print(">> Generated text {}\n\n{}".format(i + 1, tokenizer.decode(sample_output.tolist())))
    print('\n---')
~~~~
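The inference block in the metadata sets `temperature: 1.0`; during sampling, `generate` divides the logits by this value before applying softmax, so lower temperatures concentrate probability on the most likely tokens while higher temperatures flatten the distribution. A minimal, model-free sketch of that rescaling (plain Python for illustration, not part of the transformers API):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, rescaling by temperature first."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token logits: lower temperature sharpens the distribution,
# higher temperature pushes it toward uniform.
logits = [2.0, 1.0, 0.5]
for t in (0.7, 1.0, 1.5):
    print(t, [round(p, 3) for p in softmax(logits, t)])
```

With `temperature=1.0` (the default here) the logits are used as-is, which is why generations from this card's widget are comparatively diverse.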


## Author
Phan Minh Toan