CODE LLM
| Model | Parameters | Model size | Accuracy (Pass@1) | Release date | License | Organization | GPU requirements | Repository |
|---|---|---|---|---|---|---|---|---|
| CodeGen-16B-multi | 16B | 27.5GB | 19.2 | 2022-04-01 | Free for commercial use | Salesforce | | https://huggingface.co/Salesforce/codegen-16B-multi/tree/main , https://github.com/salesforce/CodeGen |
| CodeGeeX-13B | 13B | | 22.9 | 2022-09-30 | Open source | Tsinghua University | | https://github.com/THUDM/CodeGeeX |
| Codex-12B | 12B | | 28.8 | | Closed source | OpenAI | | |
| CodeT5Plus-16B-mono | 16B | 41GB | 30.9 | 2023-05-13 | Free for commercial use | Salesforce | | https://github.com/salesforce/CodeT5 , https://huggingface.co/Salesforce/codet5p-16b |
| Code-Cushman-001 | | | 33.5 | | Closed source | OpenAI | | |
| LLaMA-65B | 65B | 120GB | 23.7 | 2023-02-24 | Open source (non-commercial) | Meta | | |
| LLaMA2-70B | 70B | 129GB | 29.9 | 2023-07-18 | Free for commercial use | Meta | | https://github.com/facebookresearch/llama , https://huggingface.co/meta-llama/Llama-2-70b |
| CodeGen2.5-7B-mono | 7B | 27GB | 33.4 | 2023-07-07 | Free for commercial use | Salesforce | | https://github.com/salesforce/CodeGen , https://huggingface.co/Salesforce/codegen25-7b-multi |
| StarCoder-15B | 15B | 64GB | 33.2 | 2023-05-05 | Free for commercial use | BigCode | | https://huggingface.co/bigcode/starcoder , https://github.com/bigcode-project/starcoder/tree/main |
| CodeGeeX2-6B | 6B | 12.5GB | 35.9 | 2023-07-25 | Free for commercial use | Tsinghua University | GPU > 13GB, RAM 14GB | |
| GPT-3.5 (175B) | 175B | | 48.1 | 2022-11-30 | Closed source | OpenAI | | |
| WizardCoder-15B | 15B | 31GB | 57.3 | 2023-06-14 | Free for commercial use | Microsoft | RAM 40GB | |
| PanGu-Coder2-150B | 150B | | 61.64 | 2023-07-27 | Closed source | Huawei | | |
| GPT-4 (175B) | 175B | | 67.0 | 2023-03-14 | Closed source | OpenAI | | |
| Qwen-7B | 7B | 15.4GB | | 2023-08-03 | Free for commercial use | Alibaba | GPU > 23GB | https://huggingface.co/Qwen/Qwen-7B , https://github.com/QwenLM/Qwen-7B |
| ChatGLM-6B | 6.2B | 8GB | | 2023-03-14 | | Tsinghua University | | https://github.com/THUDM/ChatGLM-6B , https://huggingface.co/THUDM/chatglm-6b |
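The Pass@1 column is the fraction of HumanEval problems a model solves with a single sampled completion. When evaluation harnesses draw n samples per problem, they typically report the unbiased pass@k estimator from the Codex paper (1 − C(n−c, k)/C(n, k)); a minimal sketch of that estimator, with the function name being an assumption of this sketch:

```python
def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k).

    n: total samples generated per problem
    c: number of those samples that passed the unit tests
    k: sample budget (k=1 gives the Pass@1 numbers in the table)
    """
    if n - c < k:
        return 1.0  # fewer than k failures, so some correct sample is always drawn
    prob_all_fail = 1.0
    for i in range(n - c + 1, n + 1):
        prob_all_fail *= 1.0 - k / i  # numerically stable product form
    return 1.0 - prob_all_fail
```

With k=1 this reduces to the plain success rate c/n, averaged over all problems in the benchmark.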
References:
- https://github.com/abacaj/code-eval
- Chatbot Arena Leaderboard (Hugging Face Space by lmsys)
- https://huggingface.co/WizardLM/WizardLM-30B-V1.0
- https://github.com/QwenLM/Qwen-7B
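The model-size and GPU columns in the table above largely track weight storage: at fp16/bf16 precision each parameter takes two bytes, so weights alone cost roughly 2 GB per billion parameters, and serving needs additional memory for activations and the KV cache. A rough sketch of that rule of thumb (the helper name is an assumption of this sketch):

```python
def fp16_weights_gib(params_billion: float) -> float:
    """Approximate fp16/bf16 weight footprint in GiB: 2 bytes per parameter."""
    return params_billion * 1e9 * 2 / 2**30

# A 7B model needs about 13 GiB for its weights alone, which is in the same
# range as the GPU figures listed for the 6B-7B models in the table.
```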