Open Source LLMs

Compare 20 open source LLM tools to find the right one for your needs

🔧 Tools

Compare and find the best open source LLMs for your needs

Meta Llama 3

The most capable openly available LLM to date.

A family of pretrained and instruction-tuned generative text models from Meta.

View tool details →
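
Most of the models in this directory, including the Llama 3 family above, publish open weights on the Hugging Face Hub and can be loaded with the transformers library. The sketch below assumes the gated meta-llama/Meta-Llama-3-8B-Instruct checkpoint (you must accept Meta's license on the Hub first); the same pattern works for most instruct-tuned open-weight models listed here.

```python
# Minimal sketch: loading an open-weight instruct model with Hugging Face
# transformers. The checkpoint id is an assumption for illustration; swap in
# any open-weight model from this list.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Most instruct-tuned open models ship a chat template with their tokenizer.
messages = [{"role": "user", "content": "Explain what an open-weight LLM is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```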

Mistral AI

Frontier AI in your hands.

A French company specializing in high-performance, efficient, and accessible large language models.

View tool details →

EleutherAI

A grassroots AI research collective.

A non-profit AI research group focused on open-source AI research and the development of large language models.

View tool details →

Qwen

A family of large language models by Alibaba Cloud.

A series of large language and multimodal models developed by Alibaba Cloud, with many variants distributed as open-weight models.

View tool details →

Google Gemma

A family of lightweight, state-of-the-art open models from Google.

A family of lightweight, open models built from the same research and technology used to create the Gemini models.

View tool details →

Falcon

A family of large language models from the Technology Innovation Institute.

A family of open-source large language models available in various parameter sizes, released under the Apache 2.0 license.

View tool details →

BLOOM

A 176B-Parameter Open-Access Multilingual Language Model.

An open-access multilingual large language model created by the BigScience collaborative research workshop.

View tool details →

Vicuna

An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality.

An open-source chatbot fine-tuned from LLaMA on user-shared conversations collected from ShareGPT.

View tool details →

Databricks Dolly

The world's first truly open instruction-tuned LLM.

An open-source, instruction-following large language model from Databricks, licensed for commercial use.

View tool details →

MosaicML MPT

A new standard for open-source, commercially usable LLMs.

A series of open-source, commercially usable transformers trained from scratch by MosaicML.

View tool details →

Microsoft Phi-3

Redefining what's possible with SLMs.

A family of small, open models from Microsoft designed to rival the performance of much larger models.

View tool details →

Microsoft Orca 2

Teaching Small Language Models How to Reason.

A family of small language models from Microsoft that are taught to use different reasoning strategies for different tasks.

View tool details →

Zephyr

A series of language models trained to act as helpful assistants.

A fine-tuned version of Mistral-7B-v0.1, trained on a mix of publicly available synthetic datasets using Direct Preference Optimization (DPO); a minimal DPO loss sketch follows this entry.

View tool details →
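
The Zephyr recipe above is built around Direct Preference Optimization. In practice this is handled by a training library such as TRL, but the core objective is compact enough to sketch directly; the function below is an illustrative, self-contained version of the DPO loss, with tensor names chosen for this example rather than taken from any library.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss (Rafailov et al., 2023).

    Each argument holds summed token log-probabilities, one value per
    preference pair: the chosen and rejected responses scored under the
    trainable policy and under a frozen reference model.
    """
    # How much more (or less) the policy likes each response than the reference does.
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp

    # Push the policy to prefer the chosen response over the rejected one.
    logits = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(logits).mean()

# Illustrative call with dummy log-probabilities for a batch of 4 preference pairs.
loss = dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))
print(loss)
```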

Stability AI StableLM

A suite of open-source language models from Stability AI.

A family of open-source language models from Stability AI, available in various sizes and designed for a range of text and code generation tasks.

View tool details →

OpenLLaMA

An Open Reproduction of LLaMA.

A permissively licensed open-source reproduction of Meta AI's LLaMA large language model.

View tool details →

Cerebras-GPT

A Family of Open, Compute-efficient, Large Language Models.

A family of seven open-source GPT models from Cerebras Systems, trained compute-optimally following Chinchilla scaling laws.

View tool details →

Pythia

A suite for analyzing large language models across training and scaling.

A suite of 16 language models from EleutherAI, all trained on the same public data in the same order, designed for scientific research on LLMs.

View tool details →
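
Because Pythia is aimed at studying training dynamics, EleutherAI also publishes intermediate training checkpoints for each model as Hub revisions. A minimal sketch, assuming the EleutherAI/pythia-70m repository and the step-numbered revision names described in its model card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a mid-training checkpoint by revision name (assumed convention: "step<N>").
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m", revision="step3000")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m", revision="step3000")

inputs = tokenizer("The Pythia suite was built to study", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```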

RedPajama-INCITE

A family of open-source models trained on the RedPajama dataset.

A set of open-source language models from Together AI, trained on the RedPajama dataset, which is an open-source reproduction of the LLaMA training data.

View tool details →

MPT-30B

Raising the bar for open-source foundation models.

A 30-billion-parameter open-source model from MosaicML, licensed for commercial use and reported by MosaicML to outperform the original GPT-3.

View tool details →

DeepSpeed-Chat

Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales.

A framework from Microsoft's DeepSpeed team for easily and efficiently training high-quality, ChatGPT-style models using RLHF.

View tool details →
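
DeepSpeed-Chat's three-stage RLHF pipeline (supervised fine-tuning, reward model training, PPO) is driven by scripts in the DeepSpeedExamples repository rather than a single API call. The sketch below does not reproduce that pipeline; it only shows the underlying deepspeed.initialize call the framework builds on, with a placeholder ZeRO config that is an assumption for illustration, not the project's defaults. Scripts like this are normally launched with the deepspeed launcher rather than plain python.

```python
import deepspeed
from transformers import AutoModelForCausalLM

# Placeholder ZeRO stage-2 configuration (illustrative values only).
ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 1,
    "zero_optimization": {"stage": 2},
    "bf16": {"enabled": True},
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-5}},
}

# Small stand-in model so the sketch stays runnable on modest hardware.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# deepspeed.initialize wraps the model in a training engine and returns
# (engine, optimizer, dataloader, lr_scheduler).
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# A training step then goes through the engine:
#   loss = engine(batch).loss
#   engine.backward(loss)
#   engine.step()
```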