GGUF and Ollama

#7 opened by FishersPlatens

Is there any way to convert this model to the GGUF format so it can be used with Ollama?

Hi @FishersPlatens. I'm new to this topic, but did you manage to get a GGUF file for this model? Thanks

@FishersPlatens @greenishpanda Same for me. Do you have any pointers?

@oschrenk I followed the instructions provided in this discussion: https://github.com/ggerganov/llama.cpp/discussions/2948

I downloaded the file "convert-hf-to-gguf.py" from its repository: https://github.com/ggerganov/llama.cpp/blob/master/convert-hf-to-gguf.py
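
In case it helps, the whole flow from that discussion boils down to roughly this (aguila-7b/ is the directory with the downloaded Hugging Face checkpoint, and q8_0 is just the output type I chose):

git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt
python llama.cpp/convert-hf-to-gguf.py aguila-7b/ --outfile aguila-7b.gguf --outtype q8_0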

@greenishpanda When I try to follow that tutorial I get this error:

python llama.cpp/convert-hf-to-gguf.py aguila-7b/ --outfile aguila-7b.gguf --outtype q8_0
INFO:hf-to-gguf:Loading model: aguila-7b
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:Set model tokenizer
WARNING:hf-to-gguf:

WARNING:hf-to-gguf:**************************************************************************************
WARNING:hf-to-gguf:** WARNING: The BPE pre-tokenizer was not recognized!
WARNING:hf-to-gguf:** There are 2 possible reasons for this:
WARNING:hf-to-gguf:** - the model has not been added to convert-hf-to-gguf-update.py yet
WARNING:hf-to-gguf:** - the pre-tokenization config has changed upstream
WARNING:hf-to-gguf:** Check your model files and convert-hf-to-gguf-update.py and update them accordingly.
WARNING:hf-to-gguf:** ref: https://github.com/ggerganov/llama.cpp/pull/6920
WARNING:hf-to-gguf:**
WARNING:hf-to-gguf:** chkhsh: 68f595cb6b057e0bdb599dc13baf8aa2a2c5271a485c6acf8beded0b4e381e00
WARNING:hf-to-gguf:**************************************************************************************
WARNING:hf-to-gguf:

Traceback (most recent call last):
  File "/home/adria/aguila-7b/llama.cpp/convert-hf-to-gguf.py", line 2881, in <module>
    main()
  File "/home/adria/aguila-7b/llama.cpp/convert-hf-to-gguf.py", line 2866, in main
    model_instance.set_vocab()
  File "/home/adria/aguila-7b/llama.cpp/convert-hf-to-gguf.py", line 117, in set_vocab
    self._set_vocab_gpt2()
  File "/home/adria/aguila-7b/llama.cpp/convert-hf-to-gguf.py", line 509, in _set_vocab_gpt2
    tokens, toktypes, tokpre = self.get_vocab_base()
  File "/home/adria/aguila-7b/llama.cpp/convert-hf-to-gguf.py", line 382, in get_vocab_base
    tokpre = self.get_vocab_base_pre(tokenizer)
  File "/home/adria/aguila-7b/llama.cpp/convert-hf-to-gguf.py", line 500, in get_vocab_base_pre
    raise NotImplementedError("BPE pre-tokenizer was not recognized - update get_vocab_base_pre()")
NotImplementedError: BPE pre-tokenizer was not recognized - update get_vocab_base_pre()

Did you modify the get_vocab_base_pre() function?
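
For reference, from skimming the script, the check seems to boil down to something like this. This is a standalone paraphrase of the mechanism, not the actual llama.cpp code; the function name and the table are mine, and chktxt stands for the fixed multilingual test string hard-coded in the script:

from hashlib import sha256
from transformers import AutoTokenizer

# Digest of the tokenization of a fixed test string -> pre-tokenizer name.
# In llama.cpp this table is regenerated by convert-hf-to-gguf-update.py.
KNOWN_PRE_TOKENIZERS = {
    # "<digest>": "llama-bpe",  # one generated entry per known tokenizer
}

def detect_pre_tokenizer(model_dir, chktxt):
    # Tokenize the fixed test string and hash the resulting token ids;
    # tokenizers with the same pre-tokenization produce the same digest.
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    chkhsh = sha256(str(tokenizer.encode(chktxt)).encode()).hexdigest()
    if chkhsh not in KNOWN_PRE_TOKENIZERS:
        raise NotImplementedError(f"BPE pre-tokenizer was not recognized (chkhsh: {chkhsh})")
    return KNOWN_PRE_TOKENIZERS[chkhsh]

So if I read it right, the chkhsh in the warning above is just the digest for this model's tokenizer, and the error means it isn't in the generated table yet.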

Oh, I think I got it working.
I had to run the convert-hf-to-gguf-update.py script with this repo added to its list of models; after that, the convert-hf-to-gguf.py script worked.
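
For anyone hitting the same error, my understanding of why that fixes it: convert-hf-to-gguf-update.py downloads the tokenizer for each repo in its models list, recomputes the digests, and regenerates get_vocab_base_pre() in convert-hf-to-gguf.py. The entry I added looked roughly like this ("aguila" is just the name I chose; replace the placeholder with this model's actual Hugging Face repo URL):

{"name": "aguila", "tokt": TOKENIZER_TYPE.BPE, "repo": "https://huggingface.co/<this-model>", },

Note that the update script expects a Hugging Face token as its argument, since it downloads the tokenizer files.

And to close the loop on the original question: once you have the .gguf file, Ollama can use it directly. Create a file named Modelfile containing:

FROM ./aguila-7b.gguf

then run (the model name "aguila" is arbitrary):

ollama create aguila -f Modelfile
ollama run aguila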
