Leaderboard
On-device LLM performance rankings powered by Glicko-2
Pixel 8

| Metric | Value |
|---|---|
| AndroidRank | #138 |
| Rating | 1,492 (±14 RD) |
| Win Rate | 49.3% |
| Conservative Rating | 1,464 |
| TG Rating | 1,483 |
| PP Rating | 1,549 |
| Matches | 1,344 |
| Record | 662W – 682L |
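The Conservative Rating shown here appears to be the rating minus twice the rating deviation (1,492 − 2 × 14 = 1,464). A minimal sketch, assuming that multiplier:

```python
def conservative_rating(rating: float, rd: float, k: float = 2.0) -> float:
    """Lower confidence bound on skill: rating minus k rating deviations.

    k = 2.0 is an assumption inferred from the displayed values
    (1492 - 2 * 14 = 1464); some leaderboards use k = 3 instead.
    """
    return rating - k * rd

print(conservative_rating(1492, 14))  # 1464.0, matching the value shown above
```

Ranking by the conservative value rather than the raw rating keeps devices with few matches (and therefore a large RD) from floating to the top on noise.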
Models Tested

TG = token generation, PP = prompt processing; throughput is in tokens per second.
| Model | TG Median (tok/s) | PP Median (tok/s) | TG Best | PP Best | Runs |
|---|---|---|---|---|---|
| Thinker-SmolLM2-135M-Instruct-Reasoning.i1-Q4_K_M | 28.34 | 211.70 | 28.34 | 211.70 | 1 |
| gemma-3-1b-it-q4_0_s | 24.85 | 141.23 | 24.85 | 141.23 | 1 |
| LFM2.5-1.2B-Instruct-Q4_K_M | 22.27 | 91.43 | 22.27 | 91.43 | 1 |
| SmolLM2-360M-Instruct.i1-IQ4_XS | 19.95 | 116.81 | 19.95 | 116.81 | 1 |
| smollm2-360m-instruct-q8_0 | 19.05 | 116.45 | 19.66 | 133.61 | 2 |
| LFM2.5-1.2B-Thinking-Q4_K_M | 14.43 | 62.66 | 14.43 | 62.66 | 1 |
| gemma-3-270m-it-IQ4_NL | 12.36 | 249.50 | 12.36 | 249.50 | 1 |
| Dolphin3.0-Llama3.2-1B-Q4_K_M | 10.60 | 32.61 | 10.60 | 32.61 | 1 |
| SmolLM2-1.7B-Instruct-abliterated.i1-Q4_K_M | 10.40 | 28.92 | 10.40 | 28.92 | 1 |
| Dolphin3.0-Llama3.2-1B-Q8_0 | 10.10 | 41.01 | 10.10 | 41.01 | 1 |
| gemma-3-1b-it.Q8_0 | 10.02 | 93.96 | 10.02 | 93.96 | 1 |
| smollm2-1.7b-instruct-q4_k_m | 8.72 | 20.43 | 8.72 | 20.43 | 1 |
| gemma-3-1b-it.Q5_K_M | 8.66 | 41.29 | 8.66 | 41.29 | 1 |
| SmolLM2-1.7B-Instruct-abliterated.i1-IQ4_XS | 8.64 | 19.70 | 8.64 | 19.70 | 1 |
| qwen2.5-1.5b-instruct-q8_0 | 8.49 | 45.86 | 9.59 | 54.06 | 3 |
| DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo.i1-IQ4_XS | 8.34 | 26.65 | 8.34 | 26.65 | 1 |
| DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo.Q4_K_M | 7.75 | 24.30 | 7.75 | 24.30 | 1 |
| Qwen_Qwen3-0.6B-IQ4_XS | 7.48 | 66.64 | 7.48 | 66.64 | 1 |
| DeepSeek-R1-Distill-Qwen-1.5B-uncensored.Q8_0 | 6.99 | 54.54 | 6.99 | 54.54 | 1 |
| DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo.Q8_0 | 6.92 | 32.65 | 6.92 | 32.65 | 1 |
| llama-3.2-1b-instruct-q8_0 | 6.65 | 64.15 | 12.85 | 78.82 | 9 |
| SmolLM2-1.7B-Instruct-Q8_0 | 6.03 | 22.95 | 6.03 | 22.95 | 1 |
| qwen2.5-3b-instruct-q5_k_m | 5.59 | 16.32 | 6.00 | 16.54 | 3 |
| Hermes-3-Llama-3.2-3B-abliterated.i1-Q4_K_M | 5.48 | 12.50 | 5.48 | 12.50 | 1 |
| SmallThinker-3B-Preview-abliterated.i1-IQ4_XS | 5.45 | 12.20 | 5.45 | 12.20 | 1 |
| Phi-3.5-mini-instruct.Q4_K_M | 5.05 | 12.21 | 6.04 | 13.74 | 3 |
| gemma-2-2b-it-abliterated-Q4_K_M | 4.89 | 14.72 | 4.89 | 14.72 | 1 |
| Llama-3.2-3B-Instruct-Q6_K | 4.74 | 13.12 | 5.37 | 15.16 | 6 |
| gemma-2-2b-it-Q6_K | 3.79 | 17.51 | 5.06 | 22.10 | 5 |
| Gemmasutra-Mini-2B-v1-Q6_K | 3.08 | 11.74 | 4.73 | 13.21 | 2 |
| gemma-3-4b-it.Q5_K_M | 2.31 | 10.27 | 2.31 | 10.27 | 1 |
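The ratings above come from Glicko-2, where each pairwise matchup is scored as a win or loss and the expected score follows Glickman's standard formulas (display ratings map to the internal scale via (r − 1500)/173.7178). A sketch of the expected-score computation, not this leaderboard's actual implementation:

```python
import math

GLICKO2_SCALE = 173.7178  # standard conversion between display and internal scale

def g(phi: float) -> float:
    """Dampens the impact of an opponent whose rating is uncertain (large phi)."""
    return 1.0 / math.sqrt(1.0 + 3.0 * phi ** 2 / math.pi ** 2)

def expected_score(rating: float, rd: float,
                   opp_rating: float, opp_rd: float) -> float:
    """Win probability under Glicko-2 (Glickman's E function)."""
    mu = (rating - 1500.0) / GLICKO2_SCALE
    mu_opp = (opp_rating - 1500.0) / GLICKO2_SCALE
    phi_opp = opp_rd / GLICKO2_SCALE
    return 1.0 / (1.0 + math.exp(-g(phi_opp) * (mu - mu_opp)))

print(expected_score(1500, 50, 1500, 50))  # 0.5 for evenly matched opponents
```

A rating of 1,492 against a 1,500-rated opponent yields an expected score just under 0.5, consistent with the 49.3% win rate reported above.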
Head-to-Head Record (322 rows)
Performance by App Version
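This view presumably flags each app release as improved or regressed relative to the previous one. A hypothetical sketch comparing median TG throughput across versions; the ±5% threshold, the "Flat" bucket, and the data shape are all assumptions, not the app's actual logic:

```python
from statistics import median

def classify_versions(runs_by_version: dict[str, list[float]],
                      threshold: float = 0.05) -> dict[str, str]:
    """Label each version Improved/Regressed/Flat versus the previous
    version's median tokens/sec, using an assumed +/-5% cutoff.
    Assumes dict insertion order matches release order."""
    labels: dict[str, str] = {}
    prev_median = None
    for version, tok_per_sec in runs_by_version.items():
        med = median(tok_per_sec)
        if prev_median is not None:
            change = (med - prev_median) / prev_median
            if change > threshold:
                labels[version] = "Improved"
            elif change < -threshold:
                labels[version] = "Regressed"
            else:
                labels[version] = "Flat"
        prev_median = med
    return labels

print(classify_versions({"1.0": [6.5, 6.7], "1.1": [7.4, 7.6], "1.2": [7.3, 7.5]}))
```

Comparing medians rather than single runs smooths over the run-to-run variance visible in the table above (e.g. llama-3.2-1b-instruct-q8_0 ranging from 6.65 to 12.85 tok/s across 9 runs).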