Leaderboard
On-device LLM performance rankings powered by Glicko-2
Pixel 9

| Metric | Value |
|---|---|
| AndroidRank | #172 |
| Rating | 1,389 (±18 RD) |
| Win Rate | 39.3% |
| Conservative Rating | 1,354 |
| TG Rating | 1,362 |
| PP Rating | 1,561 |
| Matches | 850 |
| Record | 334W – 516L |
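The summary stats above are related by simple arithmetic: the win rate is wins over total matches, and a Glicko-2 "conservative" rating is commonly taken as the rating minus a multiple of the rating deviation (RD). A minimal sketch, assuming the usual rating − 2×RD convention (the displayed 1,354 suggests the site uses an unrounded RD or a slightly different multiplier, since 1,389 − 2×18 = 1,353):

```python
# Sketch: deriving the leaderboard's summary stats from its raw inputs.
# Assumes the common Glicko-2 convention conservative = rating - 2 * RD;
# the site's exact multiplier/rounding is not stated in this export.

wins, losses = 334, 516
matches = wins + losses            # 850
win_rate = wins / matches * 100    # 39.3%

rating, rd = 1389, 18
conservative = rating - 2 * rd     # 1353 (vs. the displayed 1354)

print(f"Matches: {matches}")
print(f"Win rate: {win_rate:.1f}%")
print(f"Conservative rating: {conservative}")
```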
Models Tested

TG = text generation, PP = prompt processing; all rates are in tokens per second.
| Model | TG Median (tok/s) | PP Median (tok/s) | TG Best | PP Best | Runs |
|---|---|---|---|---|---|
| SmolLM2-135M-Instruct-Q4_K_S | 22.14 | 280.81 | 22.55 | 283.19 | 2 |
| SmolLM2-135M-Instruct-Q4_0 | 21.74 | 406.62 | 28.67 | 421.69 | 4 |
| SmolLM2-135M-Instruct-Q8_0 | 20.57 | 443.30 | 20.57 | 443.30 | 1 |
| Qwen_Qwen3-0.6B-Q8_0 | 11.90 | 130.52 | 11.90 | 130.52 | 1 |
| Qwen_Qwen3-0.6B-Q4_K_M | 11.89 | 129.09 | 11.89 | 129.09 | 1 |
| LFM2-700M-Q8_0 | 10.73 | 119.34 | 10.73 | 119.34 | 1 |
| google_gemma-3-1b-it-Q8_0 | 10.61 | 97.90 | 10.61 | 97.90 | 1 |
| google_gemma-3-1b-it-qat-Q4_0 | 10.37 | 103.90 | 10.37 | 103.90 | 1 |
| LFM2-1.2B-Q5_K_M | 10.30 | 46.30 | 10.30 | 46.30 | 1 |
| LFM2-1.2B-Q8_0 | 10.09 | 74.79 | 10.09 | 74.79 | 1 |
| gemma-3-1b-it.Q8_0 | 9.81 | 93.46 | 10.65 | 96.30 | 2 |
| Dolphin3.0-Llama3.2-1B-Q4_0 | 9.67 | 75.04 | 9.67 | 75.04 | 1 |
| Dolphin3.0-Llama3.2-1B-Q3_K_M | 9.62 | 36.77 | 9.62 | 36.77 | 1 |
| Qwen_Qwen3-1.7B-Q4_K_S | 9.53 | 54.17 | 9.53 | 54.17 | 1 |
| Llama-3.2-1B-Instruct.Q4_K_M | 9.45 | 60.65 | 9.62 | 63.87 | 2 |
| google_gemma-3-1b-it-qat-Q4_K_M | 9.35 | 60.37 | 9.35 | 60.37 | 1 |
| deepseek-ai.DeepSeek-R1-Distill-Qwen-1.5B.Q4_K_M | 8.84 | 40.66 | 8.84 | 40.66 | 1 |
| Qwen_Qwen3-1.7B-Q4_K_M | 8.26 | 50.42 | 8.91 | 52.53 | 2 |
| llama-3.2-1b-instruct-q8_0 | 8.08 | 71.44 | 8.90 | 83.04 | 8 |
| Qwen_Qwen3-1.7B-Q4_0 | 7.78 | 53.05 | 8.49 | 60.44 | 2 |
| SmolLM2-1.7B-Instruct-Q4_K_M | 7.56 | 31.68 | 7.83 | 37.56 | 3 |
| Qwen_Qwen3-1.7B-IQ4_NL | 7.41 | 49.14 | 7.64 | 50.67 | 4 |
| SmolLM2-1.7B-Instruct-Q8_0 | 7.07 | 50.58 | 7.07 | 50.58 | 1 |
| DeepSeek-R1-Distill-Qwen-1.5B-Q8_0 | 7.04 | 52.36 | 7.04 | 52.36 | 1 |
| SmolLM2-1.7B-Instruct-Q4_0 | 6.85 | 52.82 | 6.85 | 52.82 | 1 |
| gemma-3-1b-it-Q4_K_M | 5.68 | 22.46 | 5.68 | 22.46 | 1 |
| gemma-3-1b-it-Q8_0 | 5.68 | 42.10 | 5.68 | 42.10 | 1 |
| SmallThinker-3B-Preview-Q4_0 | 5.11 | 31.86 | 5.11 | 31.86 | 1 |
| Dolphin3.0-Qwen2.5-3b-Q4_0 | 5.05 | 32.50 | 5.05 | 32.50 | 1 |
| Llama-3.2-3B-Instruct-Q4_K_L | 4.91 | 22.00 | 4.91 | 22.00 | 1 |
| Qwen_Qwen3-1.7B-IQ2_M | 4.29 | 12.76 | 4.29 | 12.76 | 1 |
| Qwen3-4B-Instruct-2507-UD-Q5_K_XL | 4.12 | 17.55 | 4.12 | 17.55 | 1 |
| Qwen_Qwen3-1.7B-IQ4_XS | 3.91 | 27.33 | 3.91 | 27.33 | 1 |
| Qwen_Qwen3-1.7B-IQ3_XXS | 3.91 | 14.64 | 3.91 | 14.64 | 1 |
| gemma-2-2b-it-Q6_K | 3.82 | 18.42 | 5.11 | 28.98 | 8 |
| gemma-3-1b-it-f16 | 3.65 | 10.50 | 3.65 | 10.50 | 1 |
| Qwen_Qwen3-1.7B-IQ3_M | 3.41 | 10.54 | 3.41 | 10.54 | 1 |
| Qwen_Qwen3-1.7B-IQ3_XS | 3.27 | 10.44 | 3.27 | 10.44 | 1 |
| Llama-3.2-3B-Instruct-Q6_K | 3.03 | 12.37 | 3.10 | 13.33 | 2 |
| gemma-3-1b-it-abliterated-q8_0 | 2.89 | 31.18 | 2.89 | 31.18 | 1 |
| gemma-3-4b-it.Q4_K_S | 2.67 | 7.85 | 2.67 | 7.85 | 1 |
| Llama-3.1-Argunaut-1-8B-SFT-Q4_K_S | 2.44 | 7.60 | 2.44 | 7.60 | 1 |
| Llama-3.1-Argunaut-1-8B-SFT-Q4_0 | 2.16 | 8.19 | 2.16 | 8.19 | 1 |
| Llama-3-ELYZA-JP-8B-q4_k_m | 1.85 | 5.76 | 1.85 | 5.76 | 1 |
| Gemmasutra-Mini-2B-v1-Q6_K | 1.63 | 10.23 | 1.63 | 10.23 | 1 |
| Visionary-R1.Q6_K | 1.54 | 11.07 | 1.54 | 11.07 | 1 |
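Each row of the table aggregates one or more benchmark runs: the Median columns summarize typical throughput, and the Best columns record the fastest observed run. A minimal sketch of that aggregation, using hypothetical per-run tok/s samples (the real per-run data is not part of this export):

```python
from statistics import median

# Hypothetical TG tok/s samples for a model benchmarked over 4 runs;
# actual per-run values are not included in this export.
tg_runs = [21.74, 20.90, 28.67, 19.88]

print(f"TG Median: {median(tg_runs):.2f}")  # middle of the sorted samples
print(f"TG Best:   {max(tg_runs):.2f}")     # fastest single run
print(f"Runs:      {len(tg_runs)}")
```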
Head-to-Head Record

(210 match rows across 5 pages; the full head-to-head table is not included in this export.)
Performance by App Version

(Chart of performance improved vs. regressed by app version; not included in this export.)