LLM Benchmarks and Performance Comparison

This page presents the results of several mainstream large language models on common evaluation benchmarks, including MMLU, GSM8K, HumanEval, and other standard datasets. The results are updated regularly to help developers and researchers understand how different models perform across task types. You can also select your own combination of models and benchmarks to compare, making it easy to see each model's strengths and weaknesses for your use case.

Detailed descriptions of each benchmark are available at: LLM Benchmark List and Introduction

Custom Comparison

Benchmark categories: MMLU Pro and MMLU test knowledge QA; GSM8K, MATH, and MATH-500 test math reasoning; GPQA Diamond tests graduate-level science QA; HumanEval and LiveCodeBench test code generation. A value of 0.00 indicates that no score has been reported for that benchmark.

| Model | MMLU Pro | MMLU | GSM8K | MATH | GPQA Diamond | HumanEval | MATH-500 | LiveCodeBench | Params (×10⁸) | Organization |
|---|---|---|---|---|---|---|---|---|---|---|
| GPT-4.5 | 86.10 | 0.00 | 0.00 | 0.00 | 71.40 | 0.00 | 90.70 | 46.40 | unknown | OpenAI |
| DeepSeek-V3-0324 | 81.20 | 0.00 | 0.00 | 0.00 | 68.40 | 0.00 | 94.00 | 49.20 | 6810.0 | DeepSeek-AI |
| GPT-4.1 | 80.50 | 0.00 | 0.00 | 0.00 | 66.30 | 0.00 | 0.00 | 0.00 | unknown | OpenAI |
| Gemini 2.0 Pro Experimental | 79.10 | 86.50 | 0.00 | 91.80 | 64.70 | 0.00 | 0.00 | 0.00 | unknown | DeepMind |
| Claude 3.5 Sonnet New | 78.00 | 88.30 | 0.00 | 78.30 | 65.00 | 93.70 | 78.00 | 38.70 | unknown | Anthropic |
| GPT-4o (2024-11-20) | 77.90 | 85.70 | 0.00 | 68.50 | 0.00 | 90.20 | 0.00 | 0.00 | unknown | OpenAI |
| Qwen2.5-Max | 76.10 | 87.90 | 94.50 | 68.50 | 0.00 | 73.20 | 0.00 | 0.00 | unknown | Alibaba |
| DeepSeek-V3 | 75.90 | 88.50 | 0.00 | 87.80 | 59.10 | 89.00 | 87.80 | 34.60 | 6810.0 | DeepSeek-AI |
| Grok 2 | 75.50 | 87.50 | 0.00 | 76.10 | 56.00 | 88.40 | 0.00 | 0.00 | unknown | xAI |
| Llama3.3-70B-Instruct | 68.90 | 86.00 | 0.00 | 77.00 | 50.50 | 88.40 | 0.00 | 33.30 | 700.0 | Facebook AI Research |
| Gemma 3 - 27B (IT) | 67.50 | 76.90 | 0.00 | 89.00 | 42.40 | 87.80 | 0.00 | 29.70 | 270.0 | Google DeepMind |
| Mixtral-8x22B-Instruct-v0.1 | 56.33 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1410.0 | Mistral AI |
| Llama3-70B-Instruct | 56.20 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 700.0 | Facebook AI Research |
| Phi-4-mini-instruct (3.8B) | 52.80 | 67.30 | 88.60 | 64.00 | 36.00 | 74.40 | 71.80 | 0.00 | 38.0 | Microsoft |
| Llama3-70B | 52.78 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 700.0 | Facebook AI Research |
| Grok-1.5 | 51.00 | 81.30 | 0.00 | 50.60 | 35.90 | 74.10 | 0.00 | 0.00 | unknown | xAI |
| Llama3.1-8B-Instruct | 44.00 | 68.10 | 82.40 | 47.60 | 26.30 | 66.50 | 0.00 | 0.00 | 80.0 | Facebook AI Research |
| Moonlight-16B-A3B-Instruct | 42.40 | 70.00 | 77.40 | 45.30 | 0.00 | 48.10 | 0.00 | 0.00 | 160.0 | Moonshot AI |
| Mistral-7B-Instruct-v0.3 | 30.90 | 64.20 | 36.20 | 10.20 | 24.70 | 29.30 | 0.00 | 0.00 | 70.0 | Mistral AI |
| Grok 3 | 0.00 | 0.00 | 0.00 | 0.00 | 80.20 | 0.00 | 0.00 | 70.60 | unknown | xAI |
| Claude Sonnet 3.7 | 0.00 | 0.00 | 0.00 | 0.00 | 68.00 | 0.00 | 82.20 | 0.00 | unknown | Anthropic |
| GPT-4.1 mini | 0.00 | 87.50 | 0.00 | 0.00 | 65.00 | 0.00 | 0.00 | 0.00 | unknown | OpenAI |
| GPT-4.1 nano | 0.00 | 80.10 | 0.00 | 0.00 | 50.30 | 0.00 | 0.00 | 0.00 | unknown | OpenAI |
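The custom comparison described above amounts to filtering the table to the benchmarks both selected models report and picking the higher score on each. A minimal sketch of that logic, using two rows copied from the table (model and benchmark names are just illustrative keys, not an API of this site; 0.00 entries are treated as "no published score" and excluded):

```python
# Benchmarks in the same column order as the table above.
BENCHMARKS = ["MMLU Pro", "MMLU", "GSM8K", "MATH",
              "GPQA Diamond", "HumanEval", "MATH-500", "LiveCodeBench"]

# Two rows copied from the table; 0.00 means no reported score.
RAW_SCORES = {
    "DeepSeek-V3":           [75.90, 88.50, 0.00, 87.80, 59.10, 89.00, 87.80, 34.60],
    "Claude 3.5 Sonnet New": [78.00, 88.30, 0.00, 78.30, 65.00, 93.70, 78.00, 38.70],
}

def scores(model):
    """Map benchmark name -> score, dropping 0.00 (unreported) entries."""
    return {b: s for b, s in zip(BENCHMARKS, RAW_SCORES[model]) if s > 0.0}

def compare(model_a, model_b):
    """Return {benchmark: winning model} over benchmarks both models report."""
    a, b = scores(model_a), scores(model_b)
    common = a.keys() & b.keys()  # skip benchmarks either model lacks
    return {bench: (model_a if a[bench] > b[bench] else model_b)
            for bench in common}

result = compare("DeepSeek-V3", "Claude 3.5 Sonnet New")
print(result)  # e.g. DeepSeek-V3 wins MATH; Claude 3.5 Sonnet New wins HumanEval
```

Note that GSM8K is absent from the result because neither model has a reported GSM8K score; comparing only over commonly reported benchmarks avoids treating a missing score as a zero.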