Qwen3-4B-Instruct-2507 GGUF (ShapeLearn Quantized)

This is a GGUF-quantized version of Qwen3-4B-Instruct-2507 produced with ByteShape's ShapeLearn, which learns the optimal datatype per tensor to maintain high quality even at very low bit lengths, the exclusive focus of this release.

To learn more about ShapeLearn and to see detailed benchmarks across GPUs, CPUs, and even the Raspberry Pi, please visit our blog.

If you have questions or want to share feedback, reach us on Reddit.

How to Pick a Model

We provide CPU- and GPU-optimized variants for llama.cpp:

  • CPUs: KQ quantization is preferred due to GGML kernel efficiency.
  • NVIDIA GPUs: IQ quantization delivers higher throughput on modern architectures.

Each hardware target includes a range of models covering different size and quality tradeoffs.

The charts below show quality vs tokens per second for each device, comparing ShapeLearn models with Unsloth baselines.

Selection rule: Choose the model with the highest quality at your target throughput or the fastest model that still meets your required quality.
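To see where your own hardware lands on the throughput axis, you can benchmark any file from this repo with llama.cpp's llama-bench tool. A minimal sketch, using the 3.55 bpw IQ file referenced in the Ollama example further down; substitute whichever variant you are evaluating:

# Download one GGUF file from this repo (requires the huggingface_hub CLI)
huggingface-cli download byteshape/Qwen3-4B-Instruct-2507-GGUF Qwen3-4B-Instruct-2507-IQ4_XS-3.55bpw.gguf --local-dir .

# Report prompt-processing and generation tokens per second on this machine
llama-bench -m Qwen3-4B-Instruct-2507-IQ4_XS-3.55bpw.gguf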

GGUF-KQ Models (Best for CPU)

CPU Benchmark - Intel

Table sorted by inference speed (the numbered points in the chart correspond to the Model IDs below):

| Model ID | Bits/Weight | Model Size | Normalized Quality |
|----------|-------------|------------|---------------------|
| KQ-1 | 2.77 | 1.40 GB | 70.33% |
| KQ-2 | 2.95 | 1.49 GB | 79.81% |
| KQ-3 | 3.19 | 1.61 GB | 87.31% |
| KQ-4 | 3.34 | 1.69 GB | 92.04% |
| KQ-5 | 3.45 | 1.74 GB | 93.01% |
| KQ-6 | 3.66 | 1.84 GB | 94.46% |
| KQ-7 | 3.87 | 1.95 GB | 95.89% |
| KQ-8 | 4.31 | 2.17 GB | 98.44% |
| KQ-9 | 4.74 | 2.39 GB | 98.95% |
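
To run a KQ variant on CPU, the stock llama-cli binary from llama.cpp is enough. An illustrative sketch; the placeholder filename below is not a real file, so substitute the actual KQ filename from this repo's file list and set -t to your physical core count:

# CPU inference with llama.cpp (KQ files use the standard GGML K-quant kernels)
llama-cli -m ./KQ_FILE_NAME.gguf -t 8 -p "Explain GGUF quantization in one paragraph." -n 256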

GGUF-IQ Models (Best for higher-end GPUs)

GPU Benchmark - RTX 5090

Table sorted by inference speed (the numbered points in the chart correspond to the Model IDs below):

| Model ID | Bits/Weight | Model Size | Normalized Score |
|----------|-------------|------------|-------------------|
| IQ-1 | 2.55 | 1.29 GB | 69.87% |
| IQ-2 | 2.76 | 1.39 GB | 83.32% |
| IQ-3 | 2.94 | 1.49 GB | 89.04% |
| IQ-4 | 3.07 | 1.55 GB | 92.32% |
| IQ-5 | 3.31 | 1.67 GB | 92.45% |
| IQ-6 | 3.55 | 1.79 GB | 95.16% |
| IQ-7 | 4.04 | 2.04 GB | 99.25% |
| IQ-8 | 4.54 | 2.29 GB | 99.80% |
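
On an NVIDIA GPU the same llama-cli invocation applies, with all layers offloaded to the device. A sketch using the 3.55 bpw IQ file referenced in the Ollama example below:

# GPU inference with llama.cpp; -ngl 99 offloads every layer to the GPU
llama-cli -m ./Qwen3-4B-Instruct-2507-IQ4_XS-3.55bpw.gguf -ngl 99 -p "Explain GGUF quantization in one paragraph." -n 256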

Notes on quantization labels

The labels you see (for example IQ4_XS) are only there to make Hugging Face show our models in the GGUF table. We do not use the conventional quantization profiles as defined in llama.cpp. In our case these labels simply indicate whether the model uses KQ or IQ quantization and the average bit length, which is why several models can share the same tag.

Running these models with Ollama

All GGUF files in this repo can be used directly with Ollama.

To run a model with Ollama, use:

ollama run hf.co/byteshape/Qwen3-4B-Instruct-2507-GGUF:FILE_NAME.gguf

Replace FILE_NAME.gguf with the GGUF filename you want. For example:

ollama run hf.co/byteshape/Qwen3-4B-Instruct-2507-GGUF:Qwen3-4B-Instruct-2507-IQ4_XS-3.55bpw.gguf
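
Ollama can also be used non-interactively or through its local HTTP API. A minimal sketch, assuming Ollama is running on its default endpoint at port 11434:

# One-off prompt without entering the interactive session
ollama run hf.co/byteshape/Qwen3-4B-Instruct-2507-GGUF:Qwen3-4B-Instruct-2507-IQ4_XS-3.55bpw.gguf "Summarize the GGUF format in two sentences."

# The same model through Ollama's REST API
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/byteshape/Qwen3-4B-Instruct-2507-GGUF:Qwen3-4B-Instruct-2507-IQ4_XS-3.55bpw.gguf",
  "prompt": "Summarize the GGUF format in two sentences."
}'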