01 // Inference benchmarks
Single-stream decode · llama.cpp
Qwen3 32B · Q4_K_M
138 t/s # env llama.cpp b4732 · 4096 ctx · batch=1 · prompt=512 · temp=0.0 · median of 5 runs
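The headline figure is the median of 5 runs. A minimal sketch of that aggregation; the per-run numbers here are illustrative placeholders, since the sheet reports only the median:

```python
from statistics import median

# Hypothetical per-run decode throughputs in tokens/s (illustrative only;
# the spec sheet reports just the median, 138 t/s).
runs_tps = [136.2, 137.5, 138.0, 138.4, 139.1]

# The median is robust to a single slow outlier run (thermal throttle,
# background task), which is why it is preferred over the mean here.
print(f"{median(runs_tps):.1f} t/s")  # → 138.0 t/s
```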
02 // Hardware specs
Architecture · Blackwell
Process node · TSMC 4NP
Memory · 32 GB
Memory bandwidth · 1,792 GB/s
FP16 compute · 105 TFLOPS
INT8 compute · 210 TOPS
TDP · 575 W
PCIe · Gen 5 x16
Form factor · Triple-slot
Cooling · Axial
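From the compute and bandwidth figures above one can derive the card's roofline balance point, the arithmetic intensity below which a kernel is memory-bound rather than compute-bound. A quick sketch:

```python
# Figures from the spec table above.
fp16_flops = 105e12   # 105 TFLOPS
bandwidth  = 1792e9   # 1,792 GB/s

# Roofline ridge point: kernels performing fewer FLOPs per byte moved
# than this are limited by memory bandwidth, not compute.
ridge = fp16_flops / bandwidth
print(f"{ridge:.1f} FLOP/byte")  # → 58.6 FLOP/byte
```

Single-stream decode streams the full weight set per token at very low arithmetic intensity, far below this ridge, which is why memory bandwidth is the dominant spec for the benchmark above.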
03 // Model fit
Approximate VRAM required to load the weights plus a 4096-token KV cache.
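A rough sketch of that estimate for the benchmarked model. The parameter count, layer/head geometry, and the ~4.85 bits-per-weight average for Q4_K_M are assumptions typical of Qwen3-32B-class models, not vendor numbers; check the actual model config before relying on them:

```python
# Assumed model geometry (illustrative, roughly Qwen3-32B-class).
params      = 32.8e9   # total weight count
bits_per_w  = 4.85     # approximate average for Q4_K_M quantization
n_layers    = 64
n_kv_heads  = 8        # grouped-query attention
head_dim    = 128
ctx         = 4096     # context length from the benchmark env
kv_bytes    = 2        # fp16 K/V entries

weights_gb = params * bits_per_w / 8 / 1e9

# K and V each store n_kv_heads * head_dim values per token per layer.
kv_cache_gb = 2 * n_layers * n_kv_heads * head_dim * ctx * kv_bytes / 1e9

print(f"weights ≈ {weights_gb:.1f} GB, KV cache ≈ {kv_cache_gb:.1f} GB")
```

Under these assumptions the total lands near 21 GB, leaving comfortable headroom in 32 GB; a longer context or a higher-bit quant erodes that margin quickly.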
+ STRENGTHS
- ✓ 32 GB of VRAM fits 32B-class models at up to Q6 with room for KV cache
- ✓ 1,792 GB/s memory bandwidth · top tier in its class
- ✓ Strong tooling: FP16, FP8, Q8, and Q4 are all officially supported
− TRADE-OFFS
- − Draws 575 W under load · plan PSU capacity and thermals accordingly
- − Requires a chassis with triple-slot clearance
- − Driver lock-in to the vendor's software stack
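Combining the 575 W TDP with the 138 t/s decode figure above gives a back-of-envelope energy cost per generated token, assuming the worst case of the card sitting at full TDP while decoding:

```python
tdp_watts  = 575.0   # from the spec table
decode_tps = 138.0   # single-stream decode figure above

# Upper-bound energy per token at full TDP; real draw during
# bandwidth-bound decode is typically well below the power limit.
joules_per_token = tdp_watts / decode_tps
print(f"≈ {joules_per_token:.1f} J/token")  # → ≈ 4.2 J/token
```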