Good model but has the same issues as Lunaris

#4
by xxx31dingdong - opened

I feel like the dataset used to train this lacks stop tokens in some parts; the model REALLY wants to keep extending its output and spits out 1k tokens of inconsequential, unnecessary text.

@xxx31dingdong ...bad quant or settings?
I use the i1_Q5_K_M quant from Mradermacher with around 4k tokens of instructions, and the output stays on point at around 350 tokens.
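For what it's worth, here's a minimal sketch of the kind of settings I mean, assuming a llama-cpp-python backend (the model path, prompt template, and stop strings below are placeholders, not anything confirmed in this thread): capping `max_tokens` and passing explicit stop sequences usually reins in runaway generations even when the quant itself is fine.

```python
# Minimal sketch (assumptions: llama-cpp-python backend, a GGUF quant on disk,
# and an instruct-style template; adjust paths and stop strings to your setup).
from llama_cpp import Llama

llm = Llama(
    model_path="models/model.i1-Q5_K_M.gguf",  # placeholder path to the quant
    n_ctx=8192,                                # room for ~4k tokens of instructions
)

# Placeholder prompt; use whatever template the model card actually specifies.
prompt = "### Instruction:\nWrite a short scene.\n\n### Response:\n"

out = llm(
    prompt,
    max_tokens=400,                      # hard cap so replies stay around 350 tokens
    stop=["### Instruction:", "</s>"],   # explicit stop sequences as a safety net
    temperature=0.8,
)
print(out["choices"][0]["text"])
```

The stop list acts as a client-side guard even if the model's own EOS behavior is inconsistent, which is why settings alone can mask or expose the issue described above.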
