New generative AI models are popping up everywhere, and claims about openness abound. When we launched Opening Up ChatGPT in July 2023, it was the first global openness index for instruction-tuned large language models, and it soon featured over 50 models from more than 25 model providers. However, not everyone likes poring over enormous tables with more models and features than anyone can take in. Often, what people need is targeted guidance on the best models to use in education, a comparison of particular models like Llama and BloomZ, or simply a quick list of all models that provide both source code and scientific documentation.
We designed the European Open Source AI index to facilitate this kind of flexible information sharing. You can check out a growing number of guides, or sift through the model index using full-text search and a comprehensive set of filters. We welcome your requests for guides, as well as your suggestions for new models to include or data points to update. See here for how to contribute.
Read on to learn how to navigate today's Open Source AI landscape. Or just start exploring the index on your own.
For the longest time, open-source software has been the key mental model for the notion of open source. Even though there is a large crop of licenses that differ on the finer points of redistribution, attribution, and so on, most people would agree that open source means the source of a piece of software can be openly inspected, adjusted, and redistributed. Developers like open source because it allows them to poke around to see how things work and to contribute improvements and bug fixes. Users like open source because it provides a degree of transparency and security that is unmatched by much proprietary software. As Linus's Law has it, many eyeballs make all bugs shallow.
Today, the advent of complex machine learning models considerably complicates the picture. What exactly is the source of a large language model like Llama or a text-to-image model like Stable Diffusion? Some argue it is at least the source code: the actual computer code written to train and fine-tune the model. Others argue that the training data is just as crucial: without countless terabytes of text and imagery, usually scraped from the open web, none of these models would do anything. At the same time, the computational power needed to train such models is astronomical, and therefore within the reach of only a select few large companies. Is there a meaningful way in which such models can be reverse-engineered or redistributed?
Today, openness is a moving target: single or simple definitions of "open source" won't suffice. Instead, we need more informed takes that allow us to distinguish the relevant degrees and dimensions of openness. One goal of the European Open Source AI index is to supply this information. The index is directly rooted in academic work on dimensions of openness in generative AI technology (Liesenfeld & Dingemanse, 2024; Solaiman, 2023). It aims to cut through the tangle of competing notions by recognising that openness is always a gradient and composite notion. What does that mean?
Openness is gradient. Some systems are more open than others. A commercial model provider like Meta (formerly Facebook AI Research) aggressively markets its Llama models as "open source", but very little about Llama is actually open except for the model weights, the most inscrutable component. Smaller-scale, research-focused labs like AllenAI provide models like OLMo that are much more open, as our index shows. The gradience of openness should make you wary of any simple claim of "open source AI". Inquiring minds want to know: How open is it?
Here are two models at opposite ends of our openness scale. Both bill themselves as "open source". Only one of them is. In views like this you can also click "compare" to see multiple models side by side. Here's a direct link to [compare Llama 3.3 and OLMo 7B](/compare?models=OLMo,llama-3.3 "Comparison of Llama and OLMo").
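To make gradient openness concrete, here is a minimal Python sketch of the kind of feature-by-feature comparison the compare view offers. Everything in it is an illustrative assumption: the three-level scale (open, partial, closed), its numeric mapping, and the example feature values are stand-ins for demonstration, not data taken from the index.

```python
# Illustrative sketch only: scale, numeric mapping, and example values
# are assumptions for demonstration, not data from the index.

LEVELS = {"open": 1.0, "partial": 0.5, "closed": 0.0}

# Hypothetical openness records for two systems (a subset of features).
llama_3_3 = {
    "End User Model Weights": "open",
    "Training Code": "closed",
    "Base Model Data": "closed",
}
olmo = {
    "End User Model Weights": "open",
    "Training Code": "open",
    "Base Model Data": "open",
}

def openness_score(system: dict[str, str]) -> float:
    """Average the numeric level across all recorded features."""
    return sum(LEVELS[level] for level in system.values()) / len(system)

def compare(name_a: str, a: dict, name_b: str, b: dict) -> None:
    """Print two systems side by side, one feature per row."""
    print(f"{'feature':<24} {name_a:<12} {name_b}")
    for feature in sorted(set(a) | set(b)):
        print(f"{feature:<24} {a.get(feature, '?'):<12} {b.get(feature, '?')}")
    print(f"{'average':<24} {openness_score(a):<12.2f} {openness_score(b):.2f}")

compare("Llama 3.3", llama_3_3, "OLMo", olmo)
```

The point of the toy average is only that openness comes in degrees; the index itself shows per-feature evidence rather than collapsing everything into one number.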
(Full model index: each entry lists a system, its provider, and its base model.)
Openness is composite. It is composed of multiple elements, which makes it more like press freedom than like temperature. The World Press Freedom Index ranks countries by their press freedom, but the ranking itself takes into account measures along multiple dimensions, including political context, sociocultural context, and legal framework. Likewise, the European Open Source AI index gathers information on the openness of generative AI systems in terms of three broad categories, each in turn composed of finer-grained features: availability, documentation, and access. Recognising the composite nature of openness makes it possible to be concrete about what is open about a model and to what extent. It allows us to answer the question: How is it open?
Hovering over any model in our index displays the evidence we have of its openness across the three categories, further broken down into 15 features. Here's a grid view of OLMo's openness features:
Parameter descriptions:
- **Base Model Data**: Are the data sources for training the base model comprehensively documented and freely made available? Where the distinction between base (foundation) and end (user) model does not apply, this mirrors the end model data entries.
- **End User Model Data**: Are the data sources for training the model that the end user interacts with comprehensively documented and freely made available?
- **Base Model Weights**: Are the weights of the base model made freely available? Where the distinction between base (foundation) and end (user) model does not apply, this mirrors the end model weights entries.
- **End User Model Weights**: Are the weights of the model that the end user interacts with made freely available?
- **Training Code**: Is the source code for data source processing, model training, and tuning comprehensively and freely made available?
- **Code Documentation**: Is the source code for data source processing, model training, and tuning comprehensively documented?
- **Hardware Architecture**: Is the hardware architecture used for data source processing and model training comprehensively documented?
- **Preprint**: Are archived preprint(s) available that detail all major parts of the system, including data source processing, model training, and tuning steps?
- **Paper**: Are peer-reviewed scientific publications available that detail all major parts of the system, including data source processing, model training, and tuning steps?
- **Modelcard**: Is a model card in a standardized format available that provides comprehensive insight into model architecture, training, fine-tuning, and evaluation?
- **Datasheet**: Is a datasheet as defined in "Datasheets for Datasets" (Gebru et al. 2021) available?
- **Package**: Is a packaged release of the model available on a software repository (e.g. the Python Package Index or Homebrew)?
- **API and Meta Prompts**: Is an API available that provides unrestricted access to the model (apart from security and CDN restrictions)? Where applicable, this entry also collects information on the use and availability of meta prompts.
- **Licenses**: Is the project fully covered by Open Source Initiative (OSI)-approved licenses, including all data sources and training pipeline code?
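To see the composite idea in code, here is a companion Python sketch that groups the features above into the three broad categories and averages within each. The category assignment and the equal weighting are assumptions made for illustration, not the index's scoring methodology.

```python
# Illustrative sketch: the grouping and equal weighting below are
# assumptions for demonstration, not the index's actual methodology.

LEVELS = {"open": 1.0, "partial": 0.5, "closed": 0.0}

# One plausible assignment of the features above to the three categories.
CATEGORIES = {
    "availability": [
        "Base Model Data", "End User Model Data",
        "Base Model Weights", "End User Model Weights",
        "Training Code", "Licenses",
    ],
    "documentation": [
        "Code Documentation", "Hardware Architecture",
        "Preprint", "Paper", "Modelcard", "Datasheet",
    ],
    "access": ["Package", "API and Meta Prompts"],
}

def category_scores(system: dict[str, str]) -> dict[str, float]:
    """Average each category's feature levels; unrecorded features count as closed."""
    return {
        cat: sum(LEVELS[system.get(feat, "closed")] for feat in feats) / len(feats)
        for cat, feats in CATEGORIES.items()
    }

# A hypothetical, partially open system.
example = {
    "End User Model Weights": "open",
    "Modelcard": "partial",
    "Package": "open",
}
for cat, score in category_scores(example).items():
    print(f"{cat:<14} {score:.2f}")
```

A breakdown like this answers "How is it open?" at a glance: a system can score high on access while remaining largely closed on availability and documentation.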
Liesenfeld, A., & Dingemanse, M. (2024). Rethinking open source generative AI: open-washing and the EU AI Act. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). doi: 10.1145/3630106.3659005
Solaiman, I. (2023). The Gradient of Generative AI Release: Methods and Considerations. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 111–122. doi: 10.1145/3593013.3593981