build: 3785 (64c6af31) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
llama_model_loader: loaded meta data with 35 key-value pairs and 434 tensors from Qwen2.5-3B-Instruct-IMat-GGUF/Qwen2.5-3B-Instruct.Q8_0.gguf.hardlink.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0: general.architecture str = qwen2
llama_model_loader: - kv   1: general.type str = model
llama_model_loader: - kv   2: general.name str = Qwen2.5 3B Instruct
llama_model_loader: - kv   3: general.finetune str = Instruct
llama_model_loader: - kv   4: general.basename str = Qwen2.5
llama_model_loader: - kv   5: general.size_label str = 3B
llama_model_loader: - kv   6: general.license str = other
llama_model_loader: - kv   7: general.license.name str = qwen-research
llama_model_loader: - kv   8: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-3...
llama_model_loader: - kv   9: general.base_model.count u32 = 1
llama_model_loader: - kv  10: general.base_model.0.name str = Qwen2.5 3B
llama_model_loader: - kv  11: general.base_model.0.organization str = Qwen
llama_model_loader: - kv  12: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-3B
llama_model_loader: - kv  13: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv  14: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv  15: qwen2.block_count u32 = 36
llama_model_loader: - kv  16: qwen2.context_length u32 = 32768
llama_model_loader: - kv  17: qwen2.embedding_length u32 = 2048
llama_model_loader: - kv  18: qwen2.feed_forward_length u32 = 11008
llama_model_loader: - kv  19: qwen2.attention.head_count u32 = 16
llama_model_loader: - kv  20: qwen2.attention.head_count_kv u32 = 2
llama_model_loader: - kv  21: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv  22: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv  23: general.file_type u32 = 7
llama_model_loader: - kv  24: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv  25: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv  26: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  27: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  28: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t", ...
llama_model_loader: - kv  29: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv  30: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv  31: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv  32: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv  33: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
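The metadata dump above can also be read without loading the model, e.g. with the `gguf` Python package that ships with the llama.cpp repository. A minimal sketch, assuming `pip install gguf`; the file path is illustrative:

```python
# Minimal sketch: inspect GGUF metadata with the `gguf` Python package.
# The path below is illustrative, not the exact file from the log.
from gguf import GGUFReader

reader = GGUFReader("Qwen2.5-3B-Instruct.Q8_0.gguf")

# Mirrors "loaded meta data with 35 key-value pairs and 434 tensors".
print(f"{len(reader.fields)} key-value pairs, {len(reader.tensors)} tensors")

# Print the same keys the loader dumps (general.architecture,
# qwen2.block_count, tokenizer.ggml.model, ...).
for name in reader.fields:
    print(name)
```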
llama_model_loader: - kv  34: general.quantization_version u32 = 2
llama_model_loader: - type  f32: 181 tensors
llama_model_loader: - type q8_0: 253 tensors
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 151936
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 2048
llm_load_print_meta: n_layer = 36
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 2
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 256
llm_load_print_meta: n_embd_v_gqa = 256
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q8_0
llm_load_print_meta: model params = 3.09 B
llm_load_print_meta: model size = 3.05 GiB (8.50 BPW)
llm_load_print_meta: general.name = Qwen2.5 3B Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size = 0.38 MiB
llm_load_tensors: offloading 36 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 37/37 layers to GPU
llm_load_tensors: CPU buffer size = 315.30 MiB
llm_load_tensors: CUDA0 buffer size = 3127.61 MiB
....................................................................................
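The derived attention figures printed by `llm_load_print_meta` follow directly from the header values in the metadata dump. A quick sanity check in Python; the grouped-query-attention relations used here are the standard ones, stated as an assumption rather than quoted from the tool:

```python
# Sanity-check the derived values from llm_load_print_meta, using only
# numbers that appear in the dump above. The GQA relations are the
# conventional ones (an assumption here, not tool output).
n_head, n_head_kv = 16, 2
n_embd_head_k = n_embd_head_v = 128

print(n_head // n_head_kv)         # n_gqa        = 8
print(n_head_kv * n_embd_head_k)   # n_embd_k_gqa = 256
print(n_head_kv * n_embd_head_v)   # n_embd_v_gqa = 256

# Bits per weight from the printed (rounded) size and parameter count.
size_bits = 3.05 * 1024**3 * 8     # "model size = 3.05 GiB"
params = 3.09e9                    # "model params = 3.09 B"
print(size_bits / params)          # ~8.5, consistent with "(8.50 BPW)"
```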
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 18.00 MiB
llama_new_context_with_model: KV self size = 18.00 MiB, K (f16): 9.00 MiB, V (f16): 9.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.58 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 300.75 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 5.01 MiB
llama_new_context_with_model: graph nodes = 1266
llama_new_context_with_model: graph splits = 2

system_info: n_threads = 25 (n_threads_batch = 25) / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
compute_imatrix: tokenizing the input ..
compute_imatrix: tokenization took 135.302 ms
compute_imatrix: computing over 128 chunks with batch_size 512
compute_imatrix: 0.53 seconds per pass - ETA 1.12 minutes
[1]5.4725,[2]3.7782,[3]3.7538,[4]4.3429,[5]4.1577,[6]3.8350,[7]4.2738,[8]4.3086,[9]4.7510,[10]4.5923,[11]4.5057,[12]4.9433,[13]5.4815,[14]5.7276,[15]6.2856,[16]6.6246,[17]6.8589,[18]7.3081,[19]7.1124,[20]7.2169,[21]7.2418,[22]7.2679,[23]7.1133,[24]7.3331,[25]7.5405,[26]7.4027,[27]7.5308,[28]7.6387,[29]7.8488,[30]7.8116,[31]7.5495,[32]7.2557,[33]7.0906,[34]6.9792,[35]6.8933,[36]6.8737,[37]6.8806,[38]6.9562,[39]6.9103,[40]7.0934,[41]7.1521,[42]7.4350,[43]7.6915,[44]7.9158,[45]8.0726,[46]8.1978,[47]8.0729,[48]8.1199,[49]8.2099,[50]8.2747,[51]8.1531,[52]8.2316,[53]8.3993,[54]8.5016,[55]8.6094,[56]8.6917,[57]8.7388,[58]8.8038,[59]8.8242,[60]8.8550,[61]8.8206,[62]8.7785,[63]8.8293,[64]8.8878,[65]8.8225,[66]8.8215,[67]8.8153,[68]8.7218,[69]8.6522,[70]8.6330,[71]8.5902,[72]8.5736,[73]8.5788,[74]8.4884,[75]8.4141,[76]8.3351,[77]8.3025,[78]8.2680,[79]8.2208,[80]8.1340,[81]8.1606,[82]8.1431,[83]8.0850,[84]8.1047,[85]8.1147,[86]8.0559,[87]8.0283,[88]8.0199,[89]8.0413,[90]8.0703,[91]8.0663,[92]8.0062,[93]7.9531,[94]7.8856,[95]7.8208,[96]7.7639,[97]7.6987,[98]7.6445,[99]7.6172,[100]7.6257,[101]7.6478,[102]7.7493,[103]7.8420,[104]7.9199,[105]8.0490,[106]8.1359,[107]8.1713,[108]8.1466,[109]8.1379,[110]8.1185,[111]8.0864,[112]8.0307,[113]8.0410,[114]8.0888,[115]8.0976,[116]8.1109,[117]8.1307,[118]8.1667,[119]8.1699,[120]8.1621,[121]8.1729,[122]8.1367,[123]8.1824,[124]8.2372,[125]8.2794,[126]8.3367,[127]8.3891,[128]8.4361,
Final estimate: PPL = 8.4361 +/- 0.12256

llama_perf_context_print: load time = 1601.73 ms
llama_perf_context_print: prompt eval time = 48007.13 ms / 65536 tokens ( 0.73 ms per token, 1365.13 tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 50067.68 ms / 65537 tokens
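The `compute_imatrix` lines come from llama.cpp's imatrix tool; both the 18.00 MiB KV cache and the 1365.13 tokens-per-second figure follow from the context parameters above. A sketch of the arithmetic, assuming the usual f16 KV-cache layout (two bytes per element for K and for V per layer per position); the formula is an assumption on my part, not something the tool prints:

```python
# Verify two figures from the run above using values from the log.
n_layer, n_ctx = 36, 512
n_embd_k_gqa = n_embd_v_gqa = 256
f16_bytes = 2  # assumed f16 cache, per "K (f16) ... V (f16)" in the log

kv_bytes = n_layer * n_ctx * (n_embd_k_gqa + n_embd_v_gqa) * f16_bytes
print(kv_bytes / 2**20)   # 18.0 -> "KV self size = 18.00 MiB"

# Prompt throughput: 128 chunks of 512 tokens = 65536 tokens total.
tokens = 128 * 512
print(tokens / 48.00713)  # ~1365.13 tokens per second
```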