Works well generating Python on my 64GB RAM w/ 3090 Ti 24GB VRAM dev box

#2
opened by ubergarm

A quick field report for anyone interested.

Ran it locally and successfully generated a well-documented recursive Python function for processing YouTube channels to extract video data from all the nested playlists. I fed it some JSON out of the YT APIs and prepended the following "system prompt" before my coding request:

You are an experienced software developer with many years experience writing programs and scripts in bash, python, and Linux. Assist the user generating high quality professional well commented code.

./llama-server \
    --model "../models/bartowski/Mistral-Large-Instruct-2407-GGUF/Mistral-Large-Instruct-2407-IQ3_XXS.gguf" \
    --n-gpu-layers 42 \
    --ctx-size 4096 \
    --cache-type-k f16 \
    --cache-type-v f16 \
    --threads 24 \
    --flash-attn \
    --mlock \
    --n-predict -1 \
    --host 127.0.0.1 \
    --port 8080
>>> Timings
{
  "predicted_ms": 233351.373,
  "predicted_n": 539,
  "predicted_per_second": 2.3098214211064447,
  "predicted_per_token_ms": 432.9339016697588,
  "prompt_ms": 9810.211,
  "prompt_n": 1775,
  "prompt_per_second": 180.93392690534384,
  "prompt_per_token_ms": 5.526879436619718
}
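
For anyone curious how the request goes out, here is a minimal sketch of hitting llama-server's OpenAI-compatible /v1/chat/completions endpoint with that system prompt; the payload, sampling settings, and user message below are illustrative, not my exact script:

# Minimal sketch: send the system prompt plus a coding request to the local
# llama-server via its OpenAI-compatible chat endpoint. Payload values are
# illustrative, not the exact request used for the run above.
import requests

SYSTEM_PROMPT = (
    "You are an experienced software developer with many years experience "
    "writing programs and scripts in bash, python, and Linux. Assist the user "
    "generating high quality professional well commented code."
)

def ask_for_code(user_request: str) -> str:
    """Request a chat completion from the llama-server started above."""
    resp = requests.post(
        "http://127.0.0.1:8080/v1/chat/completions",
        json={
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_request},
            ],
            "temperature": 0.2,  # illustrative sampling setting
        },
        timeout=600,  # at ~2.3 t/s a long answer takes a few minutes
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_for_code(
        "Write a recursive Python function that walks a YouTube channel's "
        "nested playlists and extracts video data from the attached JSON."
    ))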

Awesome!!

You may get faster results if you use a non-IQ quant, since those tend to run slower when not fully offloaded, but I'm glad it's working well for your use case!

I too have read that non-IQ quants should be faster for CPU-heavy inference workloads. I ran a few different models for comparison:

MODEL                                     | Size (GB) | bpw  | Offload layers | Prompt eval (t/s) | Generation (t/s)
------------------------------------------|-----------|------|----------------|-------------------|-----------------
Mistral-Large-Instruct-2407-IQ3_XXS.gguf  |    44     | 3.07 |       42       |      180.93       |      2.31
Mistral-Large-Instruct-2407-Q2_K_L.gguf   |    43     | 2.97 |       42       |      158.66       |      2.53
Mistral-Large-Instruct-2407-Q3_K_M*.gguf  |    56     | 3.86 |       34       |      135.82       |      1.83
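
The prompt eval and generation columns come straight from the server's timings block; the arithmetic is just token count divided by elapsed time, e.g. for the IQ3_XXS run:

# Tokens per second from the llama-server timings block (IQ3_XXS run above).
prompt_n, prompt_ms = 1775, 9810.211
predicted_n, predicted_ms = 539, 233351.373

prompt_tps = prompt_n / (prompt_ms / 1000)             # ~180.93 t/s prompt eval
generation_tps = predicted_n / (predicted_ms / 1000)   # ~2.31 t/s generation
print(f"prompt eval: {prompt_tps:.2f} t/s, generation: {generation_tps:.2f} t/s")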

I'm using a 4-month-old, already burned-up Intel i9-14900K CPU (I hope to RMA it or switch to AMD soon, given the current Intel debacle and segfaults that persist despite BIOS/microcode updates), 2x32GB of DDR5-5600 RAM, and a 3090 Ti FE with 24GB of VRAM.

Cheers and thanks for all the quants!

Regarding non-IQ quants being faster when not fully offloaded: I suppose it might depend on CPU capability? I have an AMD Ryzen 9 7950X3D and use 24 high-priority threads for inference (noticeably faster than the usual recommendation of physical cores minus one, which would be only 15 active threads) with DDR5 memory, and my experience is that when offloading the same model, file size is the only factor determining speed. E.g. Q3_K_M is slower than IQ3_M, and Q4_K_S is slower than IQ4_XS (despite the non-IQ quants being only a little bigger than the IQ ones). Memory speed is clearly the bottleneck in my case, as the inference speed more or less corresponds to the time needed to read the part of the model that remains in RAM.
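
A quick back-of-the-envelope sketch of that bandwidth argument, using the IQ3_XXS numbers from the table above; the 88-layer count and the ~60 GB/s effective DDR5 bandwidth are rough assumptions on my part, not measurements:

# Sanity check: if generation is limited by streaming the RAM-resident weights
# once per token, the ceiling is roughly bandwidth / GB-left-in-RAM.
model_gb      = 44.0   # IQ3_XXS file size from the table above
total_layers  = 88     # assumed layer count for Mistral Large 2
gpu_layers    = 42     # --n-gpu-layers used in that run
bandwidth_gbs = 60.0   # assumed effective dual-channel DDR5 bandwidth

ram_gb = model_gb * (total_layers - gpu_layers) / total_layers  # ~23 GB in RAM
est_tps = bandwidth_gbs / ram_gb                                # ~2.6 t/s ceiling
print(f"~{ram_gb:.0f} GB in RAM -> ~{est_tps:.1f} t/s ceiling (observed ~2.3 t/s)")

That rough ceiling lands close to the ~2.3 t/s measured on the 3090 Ti box, which is consistent with memory bandwidth being the limiting factor.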

By the way, IQ2_XXS is still pretty good for chatting/role-play despite being only ~2 bpw, and it can get >3 t/s with 8k context on the above setup. With older 70B models, 2-bit quants were pretty bad even for this use case, but Mistral Large being larger and better probably helps it retain usability.
