Large Language Models
The following is a list of the built-in LLMs in Xinference:
| MODEL NAME | ABILITIES | CONTEXT_LENGTH | DESCRIPTION |
|---|---|---|---|
| aquila2 | generate | 2048 | Aquila2 series models are the base language models. |
| aquila2-chat | chat | 2048 | Aquila2-chat series models are the chat models. |
| aquila2-chat-16k | chat | 16384 | AquilaChat2-16k series models are the long-text chat models. |
| baichuan | generate | 4096 | Baichuan is an open-source Transformer-based LLM trained on both Chinese and English data. |
| baichuan-2 | generate | 4096 | Baichuan2 is an open-source Transformer-based LLM trained on both Chinese and English data. |
| baichuan-2-chat | chat | 4096 | Baichuan2-chat is a fine-tuned version of the Baichuan2 LLM, specializing in chatting. |
| baichuan-chat | chat | 4096 | Baichuan-chat is a fine-tuned version of the Baichuan LLM, specializing in chatting. |
| c4ai-command-r-v01 | generate | 131072 | C4AI Command-R is a research release of a highly performant 35 billion parameter generative model. |
| c4ai-command-r-v01-4bit | generate | 131072 | A 4-bit quantized version of C4AI Command-R, produced with bitsandbytes. |
| chatglm | chat | 2048 | ChatGLM is an open-source General Language Model (GLM) based LLM trained on both Chinese and English data. |
| chatglm2 | chat | 8192 | ChatGLM2 is the second generation of ChatGLM, still open-source and trained on Chinese and English data. |
| chatglm2-32k | chat | 32768 | ChatGLM2-32k is a special version of ChatGLM2, with a context window of 32k tokens instead of 8k. |
| chatglm3 | chat, tools | 8192 | ChatGLM3 is the third generation of ChatGLM, still open-source and trained on Chinese and English data. |
| chatglm3-128k | chat | 131072 | ChatGLM3-128k is a long-context version of ChatGLM3, with a context window of 128k tokens. |
| chatglm3-32k | chat | 32768 | ChatGLM3-32k is a long-context version of ChatGLM3, with a context window of 32k tokens. |
| code-llama | generate | 100000 | Code-Llama is an open-source LLM trained by fine-tuning LLaMA2 for generating and discussing code. |
| code-llama-instruct | chat | 100000 | Code-Llama-Instruct is an instruct-tuned version of the Code-Llama LLM. |
| code-llama-python | generate | 100000 | Code-Llama-Python is a fine-tuned version of the Code-Llama LLM, specializing in Python. |
| codeqwen1.5-chat | chat | 65536 | CodeQwen1.5 is the code-specific version of Qwen1.5, a transformer-based decoder-only language model pretrained on a large amount of code data. |
| codeshell | generate | 8194 | CodeShell is a multi-language code LLM developed by the Knowledge Computing Lab of Peking University. |
| codeshell-chat | chat | 8194 | CodeShell-chat is the chat version of CodeShell, a multi-language code LLM developed by the Knowledge Computing Lab of Peking University. |
| deepseek-chat | chat | 4096 | DeepSeek LLM is an advanced language model comprising 67 billion parameters, trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese. |
| deepseek-coder-instruct | chat | 4096 | deepseek-coder-instruct is a model initialized from deepseek-coder-base and fine-tuned on 2B tokens of instruction data. |
| deepseek-vl-chat | chat, vision | 4096 | DeepSeek-VL possesses general multimodal understanding capabilities and can process logical diagrams, web pages, formulas, scientific literature, natural images, and embodied-intelligence scenarios. |
| falcon | generate | 2048 | Falcon is an open-source Transformer-based LLM trained on the RefinedWeb dataset. |
| falcon-instruct | chat | 2048 | Falcon-instruct is a fine-tuned version of the Falcon LLM, specializing in chatting. |
| gemma-it | chat | 8192 | Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. |
| glaive-coder | chat | 16384 | A code model trained on ~140k programming-related problems and solutions generated from Glaive's synthetic data generation platform. |
| gorilla-openfunctions-v1 | chat | 4096 | OpenFunctions extends LLM chat completion to formulate executable API calls from natural language instructions and API context. |
| gorilla-openfunctions-v2 | chat | 4096 | OpenFunctions extends LLM chat completion to formulate executable API calls from natural language instructions and API context. |
| gpt-2 | generate | 1024 | GPT-2 is a Transformer-based LLM trained on WebText, a 40 GB dataset of web pages linked from Reddit posts with at least 3 upvotes. |
| internlm-20b | generate | 16384 | Pre-trained on over 2.3T tokens of high-quality English, Chinese, and code data. |
| internlm-7b | generate | 8192 | InternLM is a Transformer-based LLM trained on both Chinese and English data, focusing on practical scenarios. |
| internlm-chat-20b | chat | 16384 | Pre-trained on over 2.3T tokens of high-quality English, Chinese, and code data. The chat version has undergone SFT and RLHF training. |
| internlm-chat-7b | chat | 4096 | Internlm-chat is a fine-tuned version of the Internlm LLM, specializing in chatting. |
| internlm2-chat | chat | 204800 | The second generation of the InternLM model, InternLM2. |
| llama-2 | generate | 4096 | Llama-2 is the second generation of Llama, open-source and trained on a larger amount of data. |
| llama-2-chat | chat | 4096 | Llama-2-Chat is a fine-tuned version of the Llama-2 LLM, specializing in chatting. |
| llama-3 | generate | 8192 | Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. |
| llama-3-instruct | chat | 8192 | The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many available open-source chat models on common industry benchmarks. |
| minicpm-2b-dpo-bf16 | chat | 4096 | MiniCPM is an end-side LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. |
| minicpm-2b-dpo-fp16 | chat | 4096 | MiniCPM is an end-side LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. |
| minicpm-2b-dpo-fp32 | chat | 4096 | MiniCPM is an end-side LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. |
| minicpm-2b-sft-bf16 | chat | 4096 | MiniCPM is an end-side LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. |
| minicpm-2b-sft-fp32 | chat | 4096 | MiniCPM is an end-side LLM developed by ModelBest Inc. and TsinghuaNLP, with only 2.4B parameters excluding embeddings. |
| mistral-instruct-v0.1 | chat | 8192 | Mistral-7B-Instruct is a fine-tuned version of the Mistral-7B LLM on public datasets, specializing in chatting. |
| mistral-instruct-v0.2 | chat | 8192 | Mistral-7B-Instruct-v0.2 is an improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1. |
| mistral-v0.1 | generate | 8192 | Mistral-7B is an unmoderated Transformer-based LLM claiming to outperform Llama2 on all benchmarks. |
| mixtral-8x22B-instruct-v0.1 | chat | 65536 | Mixtral-8x22B-Instruct-v0.1 is an instruct fine-tuned version of Mixtral-8x22B-v0.1, specializing in chatting. |
| mixtral-instruct-v0.1 | chat | 32768 | Mixtral-8x7B-Instruct is a fine-tuned version of the Mixtral-8x7B LLM, specializing in chatting. |
| mixtral-v0.1 | generate | 32768 | Mixtral-8x7B is a pretrained generative sparse Mixture-of-Experts LLM. |
| OmniLMM | chat, vision | 2048 | OmniLMM is a family of open-source large multimodal models (LMMs) adept at vision and language modeling. |
| OpenBuddy | chat | 2048 | OpenBuddy is a powerful open multilingual chatbot model aimed at global users. |
| openhermes-2.5 | chat | 8192 | OpenHermes 2.5 is a fine-tuned version of Mistral-7B-v0.1 trained primarily on GPT-4 generated data. |
| opt | generate | 2048 | OPT is an open-source, decoder-only, Transformer-based LLM designed to replicate GPT-3. |
| orca | chat | 2048 | Orca is an LLM trained by fine-tuning LLaMA on explanation traces obtained from GPT-4. |
| orion-chat | chat | 4096 | Orion-14B series models are open-source multilingual large language models trained from scratch by OrionStarAI. |
| orion-chat-rag | chat | 4096 | Orion-14B series models are open-source multilingual large language models trained from scratch by OrionStarAI. |
| phi-2 | generate | 2048 | Phi-2 is a 2.7B Transformer-based LLM used for research on model safety, trained with data similar to Phi-1.5 but augmented with synthetic texts and curated websites. |
| phi-3-mini-128k-instruct | chat | 128000 | Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets. |
| phi-3-mini-4k-instruct | chat | 4096 | Phi-3-Mini-4K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets. |
| platypus2-70b-instruct | generate | 4096 | Platypus2-70B-instruct is a merge of garage-bAInd/Platypus2-70B and upstage/Llama-2-70b-instruct-v2. |
| qwen-chat | chat, tools | 32768 | Qwen-chat is a fine-tuned version of the Qwen LLM trained with alignment techniques, specializing in chatting. |
| qwen-vl-chat | chat, vision | 4096 | Qwen-VL-Chat supports flexible interaction, such as multiple image inputs, multi-round question answering, and creative capabilities. |
| qwen1.5-chat | chat, tools | 32768 | Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. |
| qwen1.5-moe-chat | chat | 32768 | Qwen1.5-MoE is a transformer-based Mixture-of-Experts decoder-only language model pretrained on a large amount of data. |
| seallm_v2 | generate | 8192 | SeaLLM-7B-v2 is a state-of-the-art multilingual LLM for Southeast Asian (SEA) languages. |
| seallm_v2.5 | generate | 8192 | SeaLLM-7B-v2.5 is a state-of-the-art multilingual LLM for Southeast Asian (SEA) languages. |
| Skywork | generate | 4096 | Skywork is a series of large models developed by the Kunlun Group · Skywork team. |
| Skywork-Math | generate | 4096 | Skywork-Math is a series of large models developed by the Kunlun Group · Skywork team. |
| starchat-beta | chat | 8192 | Starchat-beta is a fine-tuned version of the Starcoderplus LLM, specializing in coding assistance. |
| starcoder | generate | 8192 | Starcoder is an open-source Transformer-based LLM trained on permissively licensed data from GitHub. |
| starcoderplus | generate | 8192 | Starcoderplus is an open-source LLM trained by fine-tuning Starcoder on the RefinedWeb and StarCoderData datasets. |
| starling-lm | chat | 4096 | Starling-7B is an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF), using a new GPT-4-labeled ranking dataset. |
| tiny-llama | generate | 2048 | The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. |
| vicuna-v1.3 | chat | 2048 | Vicuna is an open-source LLM trained by fine-tuning LLaMA on data collected from ShareGPT. |
| vicuna-v1.5 | chat | 4096 | Vicuna is an open-source LLM trained by fine-tuning LLaMA on data collected from ShareGPT. |
| vicuna-v1.5-16k | chat | 16384 | Vicuna-v1.5-16k is a special version of Vicuna-v1.5, with a context window of 16k tokens instead of 4k. |
| wizardcoder-python-v1.0 | chat | 100000 | Wizardcoder-python-v1.0 is a fine-tuned version of the Code-Llama LLM, specializing in Python. |
| wizardlm-v1.0 | chat | 2048 | WizardLM is an open-source LLM trained by fine-tuning LLaMA with Evol-Instruct. |
| wizardmath-v1.0 | chat | 2048 | WizardMath is an open-source LLM trained by fine-tuning Llama2 with Evol-Instruct, specializing in math. |
| xverse | generate | 2048 | XVERSE is a multilingual large language model independently developed by Shenzhen Yuanxiang Technology. |
| xverse-chat | chat | 2048 | XVERSE-Chat is the aligned version of the XVERSE model. |
| Yi | generate | 4096 | The Yi series models are large language models trained from scratch by developers at 01.AI. |
| Yi-1.5 | generate | 4096 | Yi-1.5 is an upgraded version of Yi, continuously pre-trained on a high-quality corpus of 500B tokens and fine-tuned on 3M diverse samples. |
| Yi-1.5-chat | chat | 4096 | Yi-1.5 is an upgraded version of Yi, continuously pre-trained on a high-quality corpus of 500B tokens and fine-tuned on 3M diverse samples. |
| Yi-200k | generate | 262144 | The Yi series models are large language models trained from scratch by developers at 01.AI. |
| Yi-chat | chat | 4096 | The Yi series models are large language models trained from scratch by developers at 01.AI. |
| yi-vl-chat | chat, vision | 4096 | Yi Vision Language (Yi-VL) is the open-source multimodal version of the Yi LLM series, enabling content comprehension, recognition, and multi-round conversations about images. |
| zephyr-7b-alpha | chat | 8192 | Zephyr-7B-α is the first model in the Zephyr series, a fine-tuned version of mistralai/Mistral-7B-v0.1. |
| zephyr-7b-beta | chat | 8192 | Zephyr-7B-β is the second model in the Zephyr series, a fine-tuned version of mistralai/Mistral-7B-v0.1. |
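Each model in the table is launched by its MODEL NAME; the ABILITIES column determines which interface the running model exposes (chat models a chat-completion interface, generate-only models a text-completion interface, vision models additionally accept image inputs, and tools models support function calling). The following is a minimal sketch using the Xinference Python client, assuming a local server at http://127.0.0.1:9997 and that the chosen format, size, and quantization match a spec actually published for llama-2-chat:

```python
from xinference.client import Client

# Connect to a running Xinference server (assumption: default local endpoint).
client = Client("http://127.0.0.1:9997")

# Launch a built-in model by the MODEL NAME shown in the table above.
# The format/size/quantization combination must match one of the
# model's published specs.
model_uid = client.launch_model(
    model_name="llama-2-chat",
    model_format="ggufv2",
    size_in_billions=7,
    quantization="Q4_K_M",
)

# Models with the "chat" ability expose a chat-completion interface.
model = client.get_model(model_uid)
response = model.chat(
    prompt="What is the largest animal?",
    chat_history=[],
    generate_config={"max_tokens": 512},
)
print(response["choices"][0]["message"]["content"])
```

For generate-only models (e.g. llama-2 or starcoder), the handle returned by get_model offers generate(prompt, generate_config=...) instead of chat.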
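A launched model can also be reached over the server's OpenAI-compatible REST API. A sketch, assuming the instance above is running and that the model UID returned by launch_model is substituted in; the openai package is a separate install, not part of Xinference:

```python
import openai  # pip install openai

# Xinference serves an OpenAI-compatible API under /v1 on the same port.
# A default local server ignores the API key, but the client requires a
# non-empty string.
client = openai.OpenAI(base_url="http://127.0.0.1:9997/v1", api_key="not-used")

model_uid = "llama-2-chat"  # replace with the UID returned by launch_model

completion = client.chat.completions.create(
    model=model_uid,
    messages=[{"role": "user", "content": "Give me a one-line summary of Llama 2."}],
    max_tokens=128,
)
print(completion.choices[0].message.content)
```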