This page tracks the performance of leading large language models on standard evaluation benchmarks, including MMLU, GSM8K, and HumanEval. The results are updated continuously to help developers and researchers understand how different models perform across tasks; models and benchmarks can be freely combined for comparison, making it quick to see each model's strengths and weaknesses in practical use.

For a detailed introduction to each benchmark, see: LLM Evaluation Benchmark List and Introduction
In the table below, a dash (–) marks a benchmark score the source does not report; parameter counts are given in billions (B), with "unknown" for models whose size is undisclosed.

Model | Overall | MMLU | GSM8K | MATH | GPQA | BBH | HumanEval | MBPP | Params (B)
---|---|---|---|---|---|---|---|---|---
OpenAI o1 | 91.04 | 91.80 | – | 96.40 | 77.30 | – | – | – | unknown
DeepSeek-R1 | 84.00 | 90.80 | – | – | 71.50 | – | – | – | 671
OpenAI o1-mini | 80.30 | 85.20 | – | – | 60.00 | – | 92.40 | – | unknown
Gemini 2.0 Pro Experimental | 79.10 | 86.50 | – | 91.80 | 64.70 | – | – | – | unknown
Hunyuan-TurboS | 79.00 | 89.50 | – | 89.70 | 57.50 | 92.20 | 91.00 | – | unknown
Claude 3.5 Sonnet New | 78.00 | 88.30 | – | 78.30 | 65.00 | 92.60 | 93.70 | – | unknown
GPT-4o | 77.90 | 88.70 | – | 75.90 | 53.60 | 91.70 | 90.00 | – | unknown
GPT-4o (2024-11-20) | 77.90 | 85.70 | – | 68.50 | – | – | 90.20 | – | unknown
Claude 3.5 Sonnet | 77.64 | 88.30 | – | 71.10 | 59.40 | – | 92.00 | – | unknown
Gemini 2.0 Flash Experimental | 76.24 | – | – | – | – | – | – | – | unknown
Qwen2.5-Max | 76.10 | 87.90 | 94.50 | 68.50 | – | – | 73.20 | 80.60 | unknown
Gemini 1.5 Pro | 76.10 | 87.10 | – | 82.90 | 53.50 | – | 89.00 | 87.80 | unknown
QwQ-32B | 76.00 | – | – | – | 58.00 | – | 19.00 | – | 32.5
DeepSeek-V3 | 75.90 | 88.50 | – | 87.80 | 59.10 | 92.30 | 89.00 | – | 681
Grok 2 | 75.50 | 87.50 | – | 76.10 | 56.00 | – | 88.40 | – | unknown
Llama3.1-405B Instruct | 73.40 | 88.60 | – | 73.90 | 49.00 | 89.20 | 89.00 | 88.60 | 405
QwQ-32B-Preview | 70.97 | – | – | – | – | – | – | – | 32
Phi 4 - 14B | 70.40 | – | – | – | – | – | – | – | 14
Qwen2.5-32B | 69.23 | – | – | – | – | – | – | – | 32
Llama3.3-70B-Instruct | 68.90 | 86.00 | – | 77.00 | 50.50 | – | 88.40 | 87.60 | 70
Claude3-Opus | 68.45 | 86.80 | 95.00 | 60.10 | 50.40 | – | 84.90 | – | unknown
Gemma 3 - 27B (IT) | 67.50 | – | – | 89.00 | 42.40 | – | – | – | 27
Llama3.1-70B-Instruct | 66.40 | 86.00 | – | 67.80 | 48.00 | – | 80.50 | 86.00 | 70
Qwen2.5-14B | 63.69 | – | – | – | – | – | – | – | 14
GPT-4o mini | 63.09 | 82.00 | 91.30 | 70.20 | 41.10 | – | 87.20 | – | unknown
Claude 3.5 Haiku | 62.12 | – | – | – | – | – | – | – | unknown
Llama3.1-405B | 61.60 | – | – | – | – | – | – | – | 405
Gemma 3 - 12B (IT) | 60.60 | – | – | 83.80 | 40.90 | – | – | – | 12
Qwen2.5-72B | 58.10 | 86.10 | 91.50 | 62.10 | 45.90 | 86.30 | 59.10 | 84.70 | 72.7
Claude3-Sonnet | 56.80 | – | – | – | – | – | – | – | unknown
Gemma2-27B | 56.54 | – | – | – | – | – | – | – | 27
Mixtral-8x22B-Instruct-v0.1 | 56.33 | – | – | – | – | – | – | – | 141
Llama3-70B-Instruct | 56.20 | – | – | – | – | – | – | – | 70
Phi-4-mini-instruct (3.8B) | 52.80 | 67.30 | 88.60 | 64.00 | 36.00 | – | 74.40 | 65.30 | 3.8
Llama3-70B | 52.78 | – | – | – | – | – | – | – | 70
Llama3.1-70B | 52.47 | – | – | – | – | – | – | – | 70
Grok-1.5 | 51.00 | 81.30 | – | 50.60 | 35.90 | – | 74.10 | – | unknown
Qwen2.5-7B | 45.00 | 74.20 | 85.40 | 49.80 | 36.40 | – | 57.90 | 74.90 | 7
Gemma 2 - 9B | 44.70 | 71.30 | 70.70 | 37.70 | 32.80 | 68.20 | 37.80 | 62.20 | 9
Llama3.1-8B-Instruct | 44.00 | 68.10 | 82.40 | 47.60 | 26.30 | – | 66.50 | 69.40 | 8
Moonlight-16B-A3B-Instruct | 42.40 | 70.00 | 77.40 | 45.30 | – | 65.20 | 48.10 | 63.80 | 16
Llama3.1-8B | 35.40 | 66.60 | 55.30 | 20.50 | 25.80 | 57.70 | 33.50 | 53.90 | 8
Qwen2.5-3B | 34.60 | 65.60 | 79.10 | 42.60 | 24.30 | 56.30 | 42.10 | 57.10 | 3
Mistral-7B-Instruct-v0.3 | 30.90 | 64.20 | 36.20 | 10.20 | 24.70 | 56.10 | 29.30 | 51.10 | 7
Llama-3.2-3B | 25.00 | 54.75 | 34.00 | 8.50 | 26.60 | 46.80 | 28.00 | 48.70 | 3.2
Kimi k1.5 (Short-CoT) | – | 87.40 | – | – | – | – | – | – | unknown
Grok-3 - Reasoning Beta | – | – | – | – | 85.00 | – | – | – | unknown
Claude Sonnet 3.7-64K Extended Thinking | – | – | – | – | 84.80 | – | – | – | unknown
Claude Sonnet 3.7 | – | – | – | – | 68.00 | – | – | – | unknown
Grok-3 mini - Reasoning | – | – | – | – | 84.00 | – | – | – | unknown
Amazon Nova Pro | – | 85.90 | – | 76.60 | – | – | 89.00 | – | unknown
Grok 3 mini | – | – | – | – | 65.00 | – | – | – | unknown
OpenAI o3-mini (high) | – | 86.90 | – | 97.90 | 79.70 | – | 97.60 | – | unknown
DeepSeek-R1-Distill-Llama-70B | – | – | – | – | 65.20 | – | – | – | 70
Grok 3 | – | – | – | – | 75.00 | – | – | – | unknown
GPT-4.5 | – | – | – | – | 71.40 | – | – | – | unknown
DeepSeek-R1-Distill-Qwen-7B | – | – | – | – | 49.50 | – | – | – | 7
Phi-4-instruct (reasoning-trained) | – | – | – | – | 49.00 | – | – | – | 3.8
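
For readers who want to run this kind of comparison offline, here is a minimal Python sketch of the pick-your-models, pick-your-benchmarks feature described above. The rows are copied from the table (a benchmark missing from a model's entry means no reported score); the `SCORES` dict and `compare` helper are hypothetical names used for illustration, not an API exposed by this page.

```python
# A few rows copied from the leaderboard above. A benchmark missing from a
# model's entry means the source reports no score for it. SCORES and compare
# are illustrative names, not part of any real API.
SCORES = {
    "OpenAI o1":   {"MMLU": 91.80, "MATH": 96.40, "GPQA": 77.30},
    "DeepSeek-R1": {"MMLU": 90.80, "GPQA": 71.50},
    "GPT-4o":      {"MMLU": 88.70, "MATH": 75.90, "GPQA": 53.60,
                    "BBH": 91.70, "HumanEval": 90.00},
    "Qwen2.5-72B": {"MMLU": 86.10, "GSM8K": 91.50, "MATH": 62.10,
                    "GPQA": 45.90, "BBH": 86.30, "HumanEval": 59.10,
                    "MBPP": 84.70},
}

def compare(models: list[str], benchmarks: list[str]) -> None:
    """Print a comparison grid; '-' marks a score the source does not report."""
    width = max(len(name) for name in models + ["Model"])
    print("Model".ljust(width), *(b.rjust(10) for b in benchmarks))
    for model in models:
        cells = (
            f"{SCORES[model][b]:.2f}" if b in SCORES[model] else "-"
            for b in benchmarks
        )
        print(model.ljust(width), *(c.rjust(10) for c in cells))

compare(["OpenAI o1", "DeepSeek-R1", "Qwen2.5-72B"], ["MMLU", "MATH", "GPQA"])
```

Replacing the hard-coded rows with the full table (for example, exported to CSV) extends the same helper to every model listed here.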