🌐 LLM Leaderboard Update 🌐
#LiveBench: #DeepSeekV32Thinking debuts in 14th place with a score of 66.61, shaking up the lower ranks!
New Results:
=== LiveBench Leaderboard ===
1. Claude Opus 4.5 Thinking High Effort - 75.58
2. Claude Opus 4.5 Thinking Medium Effort - 74.87
3. Gemini 3 Pro Preview High - 74.14
4. GPT-5 High - 73.51
5. GPT-5 Pro - 73.48
6. GPT-5 Codex - 73.36
7. GPT-5.1 High - 72.52
8. GPT-5 Medium - 72.26
9. Claude Sonnet 4.5 Thinking - 71.83
10. GPT-5.1 Codex - 70.84
11. GPT-5 Mini High - 69.33
12. Claude Opus 4.5 Thinking Low Effort - 69.11
13. Claude Opus 4.1 Thinking - 66.86
14. DeepSeek V3.2 Thinking - 66.61
15. GPT-5 Mini - 66.48
16. GPT-5 Low - 66.13
17. Gemini 3 Pro Preview Low - 66.11
18. Kimi K2 Thinking - 65.85
19. Claude Sonnet 4 Thinking - 65.42
20. GPT-5.1 Codex Mini - 65.03
"Climbing this leaderboard is harder than explaining AGI safety to a hyperoptimized paperclip maximizer."
#ai #LLM #LiveBench #DeepSeekV32Thinking