New generative AI models are popping up everywhere, and claims about openness abound. When we launched Opening Up ChatGPT in July 2023, it was the first global openness index for instruction-tuned large language models. It soon featured over 50 models from more than 25 model providers. However, not everyone likes looking at enormous tables with more models and features than you can handle. Often, what people need is specific guidance on the best models to use in education, a comparison of Llama versus Bloom, or just a quick list of all models that provide source code as well as scientific documentation.
We designed the European Open Source AI index to facilitate this form of flexible information sharing. You can check out a growing number of guides, or sift through the model index using fulltext search and a comprehensive set of filters. We welcome your requests for guides and your suggestions for new models to include or data points to update. See here for how to contribute.
Read on to learn about how to navigate today's Open Source AI landscape. Or just start exploring the index on your own.
For the longest time, open-source software has been the key mental model for the notion of open source. Even though there is a large crop of licenses that differ on the finer points of redistribution, attribution, and so on, most people would agree that open source means the source of a piece of software can be openly inspected, adjusted, and redistributed. Developers like open source because it allows them to poke around to see how things work and to contribute improvements and bug fixes. Users like open source because it provides a degree of transparency and security that is unmatched by a lot of software from proprietary vendors. As Linus' Law has it, many eyeballs make all bugs shallow.
Today, the advent of complex machine learning models considerably complicates the picture. What exactly is the source of a large language model like Llama or a text-to-image model like Stable Diffusion? Some argue it is at least the source code: the actual computer code written to train and fine-tune the model. Others argue that the training data is equally crucial. Without countless terabytes of text and imagery, usually scraped from the open web, none of these models would do anything. At the same time, the amount of computational power needed to train such models is astronomical, and therefore only within the reach of a select few large companies. Is there a meaningful way in which such models can be reverse-engineered or redistributed?
Today, openness is a moving target: single or simple definitions of "open source" won't suffice. Instead, we need more informed takes that allow us to distinguish the relevant degrees and dimensions of openness. One goal of the European Open Source AI index is to supply this information. The index is directly rooted in academic work on dimensions of openness in generative AI technology (Liesenfeld & Dingemanse, 2024; Solaiman, 2023). It aims to cut through the tangle of competing notions by recognising that openness is always gradient and composite. What does that mean?
Openness is gradient. Some systems are more open than others. A commercial model provider like Meta (or Facebook AI Research) aggressively markets its Llama models as "open source", but very little about Llama is actually open except for the model weights, the most inscrutable component. Smaller-scale, research-focused labs like AllenAI provide models like OLMo that are much more open, as our index shows. The gradience of openness should make you wary of any simple claim of "open source AI". Inquiring minds want to know: How open is it?
Here are two models at opposite ends of our openness scale. Both bill themselves as "open source". Only one of them is. In views like this you can also click "compare" to see multiple models side by side. Here's a direct link to compare Llama 3.1 and OLMo 7B.
Parameter descriptions:
Base Model Data: Are data sources for training the base model comprehensively documented and freely made available?
End User Model Data: Are data sources for training the model that the end user interacts with comprehensively documented and freely made available?
Base Model Weights: Are the weights of the base model made freely available?
End User Model Weights: Are the weights of the model that the end user interacts with made freely available?
Training Code: Is the source code for data source processing, model training, and tuning comprehensively and freely made available?
Model Data: Are data sources for training the model comprehensively documented and freely made available?
Model Weights: Are the weights of the model that the end user interacts with made freely available?
Watermarking: Are watermarking techniques comprehensively documented and shared?
Prompt Moderation: Is prompt moderation comprehensively documented and shared?
Code Documentation: Is the source code for data source processing, model training, and tuning comprehensively documented?
Architecture Documentation: Is the hardware architecture used for data source processing and model training comprehensively documented?
Preprint: Are archived preprint(s) available that detail all major parts of the system, including data source processing, model training, and tuning steps?
Paper: Are peer-reviewed scientific publications available that detail all major parts of the system, including data source processing, model training, and tuning steps?
Modelcard: Is a model card in a standardized format available that provides comprehensive insight into model architecture, training, fine-tuning, and evaluation?
Datasheet: Is a datasheet as defined in "Datasheets for Datasets" (Gebru et al. 2021) available?
Package: Is a packaged release of the model available on a software repository (e.g. the Python Package Index or Homebrew)?
API: Is an API available that provides unrestricted access to the model (other than security and CDN restrictions)?
Licenses: Is the project fully covered by Open Source Initiative (OSI)-approved licenses, including all data sources and training pipeline code?
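The feature list above also suggests how a side-by-side comparison can be tallied in practice. The sketch below is purely illustrative: the three-level open/partial/closed scale is a simplifying assumption for this example, and the two systems and all of their values are invented placeholders, not assessments taken from the index.

```python
# Toy illustration of openness as a gradient. The open/partial/closed scale
# and all feature values below are assumptions for this sketch, not data
# taken from the European Open Source AI index.

SCALE = {"open": 1.0, "partial": 0.5, "closed": 0.0}

# Invented judgements for a hypothetical "weights-only" system across a
# subset of the features defined above.
weights_only_model = {
    "Base Model Data": "closed",
    "End User Model Data": "closed",
    "Base Model Weights": "open",
    "End User Model Weights": "open",
    "Training Code": "closed",
    "Code Documentation": "partial",
    "Architecture Documentation": "partial",
    "Preprint": "open",
    "Paper": "closed",
    "Modelcard": "partial",
    "Datasheet": "closed",
    "Package": "partial",
    "API": "open",
    "Licenses": "closed",
}
# A hypothetical system that is open on every feature.
fully_open_model = {feature: "open" for feature in weights_only_model}


def openness_score(features: dict[str, str]) -> float:
    """Average the per-feature judgements into a single 0..1 gradient."""
    return sum(SCALE[judgement] for judgement in features.values()) / len(features)


for label, features in [("weights-only model", weights_only_model),
                        ("fully open model", fully_open_model)]:
    print(f"{label}: {openness_score(features):.2f} openness")
```

A single number like this is only a rough summary; the index itself keeps the per-feature judgements visible, so that any claim of "open source" can be checked feature by feature.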
Systems in the index and their base models:
WizardLM 13B v1.2 by Microsoft & Peking University (base model: LLaMA2-13B)
Qwen 1.5 by Alibaba Cloud (base model: QwenLM)
Phi 3 Instruct by Microsoft (base model: Phi3)
Mistral NeMo Instruct by Mistral AI (base model: Mistral NeMo)
DeepSeek R1 by DeepSeek (base model: DeepSeek-V3-Base)
Falcon-40B-instruct by Technology Innovation Institute (base model: Falcon 40B)
BELLE by KE Technologies (base model: LLaMA & BLOOMZ)
WizardLM-7B by Microsoft & Peking University (base model: LLaMA-7B)
Minerva-7B by Sapienza Natural Language Processing Group (base model: Minerva-7B-base-v1.0)
Geitje Ultra 7B by Bram van Roy (base model: Mistral 7B)
Falcon-180B-chat by Technology Innovation Institute (base model: Falcon 180B)
Yi 34B Chat by 01.AI (base model: Yi 34B)
Mixtral 8x7B Instruct by Mistral AI (base model: Mistral)
UltraLM by OpenBMB (base model: LLaMA2)
Llama 3.1 by Facebook Research (base model: Meta Llama 3)
Orca 2 by Microsoft Research (base model: LLaMA2)
Koala 13B by BAIR (base model: unspecified)
Stanford Alpaca by Stanford University CRFM (base model: LLaMA)
Xwin-LM by Xwin-LM (base model: LLaMA2)
Gemma 7B Instruct by Google DeepMind (base model: Gemma)
StableVicuna-13B by CarperAI (base model: LLaMA)
Nanbeige2-Chat by Nanbeige LLM lab (base model: unknown)
Command R+ by Cohere AI (base model: unspecified)
Stable Beluga 2 by Stability AI (base model: LLaMA2)
Llama 3.3 by Meta Llama (base model: Llama 3.3 70B)
LLaMA2 Chat by Facebook Research (base model: LLaMA2)
Solar 70B by Upstage AI (base model: LLaMA2)
Llama 3 Instruct by Facebook Research (base model: Meta Llama 3)
Openness is composite. It is composed of multiple elements. This means it is more like press freedom than like temperature. The World Press Freedom Index ranks countries by their press freedom, but this itself takes into account measures on multiple dimensions, including political context, sociocultural context, and legal framework. Likewise, the European Open Source AI index gathers information on the openness of generative AI systems in terms of three broad categories (each in turn composed of finer-grained features): availability, documentation, and access. Recognising the composite nature of openness makes it possible to be concrete about what is open about a model and to what extent. It allows us to answer the question: How is it open?
Hovering over any model in our index will display the evidence we have of its openness across the three dimensions, further broken down into 15 features. Here's a grid view of OLMo-7B's openness features.
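Because openness is composite, a per-category breakdown is often more informative than a single score. The sketch below groups the features defined earlier into the three broad categories named above and counts how many are fully open in each; both the grouping and the example judgements are simplifying assumptions for illustration, not the index's own data.

```python
# Minimal sketch of openness as a composite notion: features grouped into the
# three broad categories named in the text. Both the grouping and the example
# judgements below are assumptions, not the index's own assessments.

CATEGORIES = {
    "availability": [
        "Base Model Data", "End User Model Data",
        "Base Model Weights", "End User Model Weights", "Training Code",
    ],
    "documentation": [
        "Code Documentation", "Architecture Documentation",
        "Preprint", "Paper", "Modelcard", "Datasheet",
    ],
    "access": ["Package", "API", "Licenses"],
}


def category_summary(features: dict[str, str]) -> dict[str, str]:
    """Report, per category, how many features are judged fully open."""
    summary = {}
    for category, names in CATEGORIES.items():
        judgements = [features.get(name, "unknown") for name in names]
        summary[category] = f"{judgements.count('open')}/{len(names)} features open"
    return summary


# Invented judgements for a mostly open system.
example_system = {name: "open" for names in CATEGORIES.values() for name in names}
example_system["Datasheet"] = "partial"

for category, line in category_summary(example_system).items():
    print(f"{category:>13}: {line}")
```

A breakdown like this answers the question "How is it open?" rather than only "How open is it?".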
Liesenfeld, A., & Dingemanse, M. (2024). Rethinking open source generative AI: open-washing and the EU AI Act. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). doi: 10.1145/3630106.3659005
Solaiman, I. (2023). The Gradient of Generative AI Release: Methods and Considerations. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 111–122. doi: 10.1145/3593013.3593981