Steps to run the model

  1. Install the requirements (typing is part of the Python 3 standard library, so it does not need to be installed separately):
  pip install gguf
  pip install sentencepiece
  pip install torch
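
As a quick sanity check, you can confirm that the installed packages import cleanly:

python3 -c "import gguf, sentencepiece, torch; print('ok')"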
  2. For Linux: use uTorrent to download the model weights from the magnet link below.

Commands

      sudo snap install utorrent
      utorrent

This opens the uTorrent UI; click "Add Torrent from URL" and paste:

magnet:?xt=urn:btih:208b101a0f51514ecf285885a8b0f6fb1a1e4d7d&dn=mistral-7B-v0.1&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=https%3A%2F%2Ftracker1.520.jp%3A443%2Fannounce
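
If you prefer a command-line client to the uTorrent UI, aria2 can fetch the same magnet link; a minimal sketch, assuming apt is available:

sudo apt install aria2
aria2c --seed-time=0 "magnet:?xt=urn:btih:208b101a0f51514ecf285885a8b0f6fb1a1e4d7d&dn=mistral-7B-v0.1&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=https%3A%2F%2Ftracker1.520.jp%3A443%2Fannounce"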

  3. Use the convert.py script from https://github.com/ggerganov/llama.cpp/blob/master/convert.py to convert the downloaded weights to GGUF:

python3 convert.py --outtype f16 "/path/to/mistral-7B-v0.1"
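
The converter writes ggml-model-f16.gguf into the model directory. As a rough sanity check, a 7B model at f16 (2 bytes per parameter) should come to roughly 14 GB:

ls -lh /path/to/mistral-7B-v0.1/ggml-model-f16.gguf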

  4. Install Ollama (https://github.com/jmorganca/ollama):

curl https://ollama.ai/install.sh | sh

(Hint: if the above command fails, uninstall the snap version of curl and reinstall it with apt:)

sudo snap remove curl
sudo apt install curl

Create a Modelfile for Ollama:

echo "FROM /path/to/mistral-7B-v0.1/ggml-model-f16.gguf" > modelfile

Create the model in Ollama:

ollama create mistral -f modelfile

If the create step fails, make the folder that contains the Modelfile and the downloaded Mistral weights traversable with a+x, then create the model again:

sudo chmod -R a+x /path/to/mistral-7B-v0.1
ollama create mistral -f /path/to/mistral-7B-v0.1/modelfile

ollama run mistral
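
Once the model runs, Ollama also serves an HTTP API on localhost:11434, so you can send prompts without the interactive shell (the prompt below is just an example):

curl http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "Why is the sky blue?"}'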

Hardware specs used

Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         39 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  20
  On-line CPU(s) list:   0-19
Vendor ID:               GenuineIntel
  Model name:            12th Gen Intel(R) Core(TM) i7-12700H
    CPU family:          6
    Model:               154
    Thread(s) per core:  2
    Core(s) per socket:  14
    Socket(s):           1
    Stepping:            3
    CPU max MHz:         4700.0000
    CPU min MHz:         400.0000
    BogoMIPS:            5376.00
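
(The listing above is lscpu output; run the same command to compare your own machine:)

lscpu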