How to remove input tokens to get only output tokens?

#41
by ducknificient
outputs = pipe(prompt, max_new_tokens=96, do_sample=True, temperature=0.1, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
<|system|>
You're an assistant. 

<|user|>
How to get in a good university?

<|assistant|>
Sure, here's a step-by-step guide on how to get into a good university:

How do I remove the input tokens so that only the generated text is returned?
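
A crude string-level fix is to slice the prompt text off the front of the decoded output. This is a minimal sketch with hypothetical names (full_text, answer_only), and it assumes the pipeline echoes the prompt verbatim at the start of generated_text:

# Slice the prompt string off the front of the full generated text
# (assumes the prompt is echoed verbatim at the start of the output)
full_text = outputs[0]["generated_text"]
answer_only = full_text[len(prompt):]
print(answer_only)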

At the token level, I can do it like this so far:

import torch

# Tokenize the prompt; the BatchEncoding holds input_ids and attention_mask
inputs = pipe.tokenizer(prompt, return_tensors="pt").to("cuda")
prompt_len = inputs["input_ids"].shape[-1]  # number of prompt tokens

# Generate directly with the underlying model (works for Llama-style models)
outputs = pipe.model.generate(
    **inputs,
    max_new_tokens=50,
    temperature=0.01,
    pad_token_id=pipe.tokenizer.eos_token_id,
)
torch.cuda.empty_cache()

# Full text: prompt plus completion
output_text = pipe.tokenizer.decode(outputs[0], skip_special_tokens=True)

# Slice off the prompt tokens, then decode only the completion
answer_tokens = outputs[0][prompt_len:]
answer_only = pipe.tokenizer.decode(answer_tokens, skip_special_tokens=True)
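
If pipe is the standard transformers text-generation pipeline, there may be a simpler built-in option: return_full_text=False asks the pipeline itself to drop the prompt and return only the completion, so no manual token slicing is needed. A sketch under that assumption:

# return_full_text=False makes the text-generation pipeline return
# only the newly generated text, with the prompt already stripped
outputs = pipe(
    prompt,
    max_new_tokens=96,
    do_sample=True,
    temperature=0.1,
    top_k=50,
    top_p=0.95,
    return_full_text=False,
)
print(outputs[0]["generated_text"])  # completion only, no prompt echo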
