Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Tinystories-gpt-0.1-3m - bnb 4bits
- Model creator: https://huggingface.co/segestic/
- Original model: https://huggingface.co/segestic/Tinystories-gpt-0.1-3m/
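Since this is a bitsandbytes 4-bit quant, it loads through transformers' `BitsAndBytesConfig`. A minimal sketch, assuming `bitsandbytes` and `accelerate` are installed and a CUDA GPU is available; the repository id below is a placeholder for wherever this quant is hosted:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Typical bnb 4-bit settings (NF4, fp16 compute); the exact settings
# used for this quant are an assumption.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

repo_id = "RichardErkhov/Tinystories-gpt-0.1-3m-4bits"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```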
Original model description:
---
datasets:
- roneneldan/TinyStories
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
## We used the Hugging Face transformers library to recreate the TinyStories models on a consumer GPU, using the GPT-2 architecture instead of the GPT-Neo architecture originally used in the paper (https://arxiv.org/abs/2305.07759). The resulting model is 15 MB and has 3 million parameters.
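For a sense of how a 3-million-parameter GPT-2 comes together, a model of roughly this size can be built by shrinking `GPT2Config`. The hyperparameters below are illustrative guesses, not the configuration actually used to train this model:

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Illustrative tiny-GPT-2 settings (assumed; the real training
# configuration is not documented in this card).
config = GPT2Config(
    n_embd=64,        # tiny hidden size
    n_layer=8,        # few transformer blocks
    n_head=8,         # must divide n_embd
    n_positions=512,  # short context is enough for TinyStories-style text
)

model = GPT2LMHeadModel(config)
print(f"{model.num_parameters():,} parameters")  # a few million with these settings
```

Most of the parameters in a model this small sit in the token embedding table, so the transformer blocks themselves stay very cheap.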
# ------ EXAMPLE USAGE 1 ------

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("segestic/Tinystories-gpt-0.1-3m")
model = AutoModelForCausalLM.from_pretrained("segestic/Tinystories-gpt-0.1-3m")

prompt = "Once upon a time there was"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Generate a completion (up to 1000 tokens)
output = model.generate(input_ids, max_length=1000, num_beams=1)

# Decode the completion
output_text = tokenizer.decode(output[0], skip_special_tokens=True)

# Print the generated text
print(output_text)
```
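With `num_beams=1` and the default `do_sample=False`, `generate` performs greedy decoding, so the same prompt always yields the same story; pass `do_sample=True` (optionally with `temperature` or `top_p`) for more varied completions.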
# ------ EXAMPLE USAGE 2 ------

Use a pipeline as a high-level helper:

```python
from transformers import pipeline

# Build a text-generation pipeline around the model
pipe = pipeline("text-generation", model="segestic/Tinystories-gpt-0.1-3m")

# Prompt
prompt = "where is the little girl"

# Generate a completion
output = pipe(prompt, max_length=1000, num_beams=1)

# Extract the generated text from the first returned sequence
generated_text = output[0]['generated_text']

# Print the generated text
print(generated_text)
```
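The pipeline tokenizes, generates, and decodes in one call, returning a list with one dict per generated sequence.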