AIGym committed on
Commit 3d74df9 (1 parent: 5b34f5a)

Update README.md

Files changed (1): README.md (+35, -1)
README.md CHANGED
@@ -75,7 +75,41 @@ Users (both direct and downstream) should be made aware of the risks, biases and
 
 Use the code below to get started with the model.
 
-[More Information Needed]
+Inference Code:
+
+```
+import torch
+from transformers import GPT2LMHeadModel, GPT2Tokenizer
+
+# Load the fine-tuned GPT-2 model and tokenizer
+model = GPT2LMHeadModel.from_pretrained("AIGym/TinyGPT2-81M-colab")  # or swap in a checkpoint name to try one of those instead
+tokenizer = GPT2Tokenizer.from_pretrained("AIGym/TinyGPT2-81M-colab")  # keep the tokenizer matched to the model above
+
+# Example prompts
+prompts = [
+    "Artificial intelligence is",
+    "The future of humanity depends on",
+    "In a galaxy far, far away, there lived",
+    "To be or not to be, that is",
+    "Once upon a time, there was a"
+]
+
+# Generate text for a prompt; do_sample=True is required for temperature to take effect
+def generate_text(prompt, max_length=120, temperature=0.3):
+    input_ids = tokenizer.encode(prompt, return_tensors="pt")
+    attention_mask = torch.ones(input_ids.shape, dtype=torch.long)
+    output = model.generate(input_ids, attention_mask=attention_mask, max_length=max_length, do_sample=True, temperature=temperature, num_return_sequences=1, pad_token_id=tokenizer.eos_token_id)
+    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
+    return generated_text
+
+# Generate and print a completion for each prompt
+for prompt in prompts:
+    completion = generate_text(prompt)
+    print("Prompt:", prompt)
+    print("Completion:", completion)
+    print()
+```
+
 
 ## Training Details
115