Generative AI models have many moving parts. This guide surveys the most important openness dimensions by comparing two models that both bill themselves as "open source". BloomZ was introduced by the BigScience Workshop team in May 2023 as an early open source large language model; Llama 3.1 (Llama for short) was introduced by Facebook Research as “the next generation of our open source large language model”. However, a glance at the openness scores below shows that the two models differ considerably in overall openness.
Parameter descriptions:
Base Model Data
Are data sources for training the base model comprehensively documented and freely made available? Where a distinction between base (foundation) model and end (user) model is not applicable, this mirrors the end user model data entries.
End User Model Data
Are data sources for training the model that the end user interacts with comprehensively documented and freely made available?
Base Model Weights
Are the weights of the base model made freely available? Where a distinction between base (foundation) model and end (user) model is not applicable, this mirrors the end user model weights entries.
End User Model Weights
Are the weights of the model that the end user interacts with made freely available?
Training Code
Is the source code for data source processing, model training and tuning comprehensively and freely made available?
Code Documentation
Is the source code for data source processing, model training and tuning comprehensively documented?
Hardware Architecture
Is the hardware architecture used for data source processing and model training comprehensively documented?
Preprint
Are archived preprint(s) available that detail all major parts of the system, including data source processing, model training and tuning steps?
Paper
Are peer-reviewed scientific publications available that detail all major parts of the system, including data source processing, model training and tuning steps?
Modelcard
Is a model card available in a standardized format that provides comprehensive insight into model architecture, training, fine-tuning, and evaluation?
Datasheet
Is a datasheet as defined in "Datasheets for Datasets" (Gebru et al. 2021) available?
Package
Is a packaged release of the model available on a software repository (e.g. the Python Package Index, Homebrew)?
API and Meta Prompts
Is an API available that provides unrestricted access to the model (other than security and CDN restrictions)? If applicable, this entry also collects information on the use and availability of meta prompts.
Licenses
Is the project fully covered by Open Source Initiative (OSI)-approved licenses, including all data sources and training pipeline code?
Systems covered, each listed with the base model it builds on:
Poro-34B by Silo AI, TurkuNLP, High Performance Language Technologies (HPLT) (base model: Poro-34B)
mT0 by bigscience-workshop (base model: mT5-XXL)
Pythia by EleutherAI, Together Computer (base model: Pythia-6.9B)
Open Assistant by LAION-AI (base model: Pythia-12B)
Amber by LLM360 (base model: Amber)
YuLan by Gaoling School of Artificial Intelligence (base model: YuLan-Mini)
K2 by LLM360 (base model: K2)
SmolLM by HuggingFace (base model: SmolLM2-1.7B)
OpenChat by OpenChat (base model: Meta-Llama-3-8B)
Arabic StableLM by StabilityAI (base model: StableLM-2-1.6B)
Instella by AMD (base model: Instella-3B)
Dolly by Databricks (base model: Pythia-12B)
Tülu by Ai2 (base model: Llama-3.1-405B)
T5 by Google AI (base model: T5)
RedPajama by Together Computer (base model: RedPajama-INCITE-7B-Base)
Phi by Microsoft (base model: Phi-4)
Neo by Multimodal Art Projection (base model: Neo-7B)
BERT by Google AI (base model: BERT)
AquilaChat by Beijing Academy of Artificial Intelligence (base model: Aquila2-70B-Expr)
DeepSeek V3 by DeepSeek (base model: DeepSeek-V3-Base)
Yi by 01.AI (base model: Yi-34B)
Teuken by openGPT-X (base model: Teuken-7B-base)
Salamandra by Barcelona Supercomputing Center (base model: Salamandra-7B)
NeuralChat by Intel (base model: Mistral-7B-v0.1)
MPT by Databricks (base model: MPT-30B)
Lucie by OpenLLM-France (base model: Lucie-7B)
GPT-SW3 by AI Sweden (base model: GPT-SW3-6.7B-V2)
GPT-NeoXT by Together Computer (base model: GPT-NeoX-20B)
Fietje by Bram Vanroy (base model: Phi-2)
BTLM by Cerebras (base model: BTLM-3B-8K-Base)
Pharia by Aleph Alpha Research (base model: Pharia 1 LLM 7B)
minChatGPT by Ethan Yanjia Li (base model: GPT2)
Eurus by OpenBMB (base model: Mixtral-8x22B-v0.1)
Xwin-LM by Xwin-LM (base model: Llama-2-13B)
Vicuna by LMSYS (base model: LLaMA)
OpenELM by Apple (base model: OpenELM-3B)
Occiglot by Occiglot (base model: Occiglot-7B-EU5)
Mistral by Mistral AI (base model: Mistral-Large-2411)
GLM by Zhipu AI (base model: GLM-4-9B)
Falcon by Technology Innovation Institute (base model: Falcon3-10B-Base)
Minerva by Sapienza Natural Language Processing Group (base model: Minerva-7B-base-v1.0)
DeepSeek R1 by DeepSeek (base model: DeepSeek-V3-Base)
Zephyr by HuggingFace (base model: Mixtral-8x22B-v0.1)
QwQ-32B by Alibaba Cloud (base model: Qwen2.5-32B)
InternLM by Shanghai AI Laboratory (base model: InternLM3-8B)
CT-LLM by Multimodal Art Projection (base model: CT-LLM-Base)
Mistral NeMo by Mistral AI, NVIDIA (base model: Mistral NeMo)
WizardLM by Microsoft & Peking University (base model: LLaMA-7B)
Starling by NexusFlow (base model: Llama-2-13B)
Saul by Equall (base model: Mixtral-8x22B-v0.1)
BELLE by KE Technologies (base model: Llama-2-13B)
Airoboros by Jon Durbin (base model: Qwen1.5-110B)
Gemma by Google AI (base model: Gemma-3-27B-PT)
Geitje by Bram Vanroy (base model: Mistral 7B)
Marco by Alibaba (base model: Marco-LLM-GLO)
Viking by Silo AI, TurkuNLP, High Performance Language Technologies (HPLT) (base model: Viking-33B)
UltraLM by OpenBMB (base model: LLaMA2)
Llama 3.1 by Meta (base model: Meta Llama 3)
OpenMoE by Zheng Zian (base model: OpenMoE-8B)
Command-R by Cohere AI (base model: C4AI-Command-R-V01)
Stanford Alpaca by Stanford University CRFM (base model: Llama-7B)
StripedHyena by Together Computer (base model: StripedHyena-Hessian-7B)
Stable Beluga by Stability AI (base model: LLaMA2)
LongAlign by Zhipu AI (base model: Llama-2-13B)
Claire by OpenLLM-France (base model: Falcon-7B)
Llama 3.3 by Meta (base model: Llama 3.3 70B)
Koala by BAIR (base model: unspecified)
RWKV by BlinkDL/RWKV (base model: RWKV-x070-Pile-1.47B-ctx4096)
Persimmon by Adept AI Labs (base model: Persimmon-8B-Base)
OPT by Meta (base model: OPT-30B)
Nanbeige by Nanbeige LLM lab (base model: Unknown)
Infinity-Instruct by Beijing Academy of Artificial Intelligence (base model: Llama-3.1-70B)
H2O-Danube by H2O.ai (base model: H2O-Danube3.1-4B-Chat)
FastChat-T5 by LMSYS (base model: Flan-T5-XL)
Crystal by LLM360 (base model: Crystal)
Baichuan by Baichuan Intelligent Technology (base model: Baichuan2-13B-Base)
StableVicuna by CarperAI (base model: LLaMA)
Llama 3 Instruct by Meta (base model: Meta Llama 3)
XGen by Salesforce (base model: XGen-7B-4K-Base)
Solar by Upstage AI (base model: LLaMA2)
Llama-Sherkala by G42 (base model: Llama-3.1-8B)
Jais by G42 (base model: Llama-2-70B)
Hunyuan by Tencent (base model: Hunyuan-A52B-Pretrain)
Granite by IBM (base model: Granite-3.1-8B-Base)
DeepHermes by Nous Research (base model: Llama-3.1-8B)
LLaMA2 Chat by Meta (base model: LLaMA2)
Snowflake Arctic by Snowflake (base model: Snowflake-Arctic-Base)
Minimax-Text by Minimax AI (base model: MiniMax-Text-01)
Gemma Japanese by Google AI (base model: Gemma-2-2B)
Hovering over the openness indicators below reveals the 14 dimensions by which we judge model openness. In this guide, we summarise these dimensions in terms of three areas: availability, documentation and access.
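As a concrete illustration, here is a minimal Python sketch (not the code behind the index) of how the 14 dimensions listed above might be grouped into these three areas and tallied. The grouping into areas is our reading of the dimension list, and the three-way open/partial/closed scale and the example ratings are illustrative assumptions, not official index scores.

```python
# Sketch: group the 14 openness dimensions into the three areas used in
# this guide and tally ratings per dimension. Grouping and rating scale
# are assumptions for illustration, not the index's own implementation.
from collections import Counter

DIMENSIONS = {
    "availability": [
        "Base Model Data", "End User Model Data",
        "Base Model Weights", "End User Model Weights",
        "Training Code", "Licenses",
    ],
    "documentation": [
        "Code Documentation", "Hardware Architecture",
        "Preprint", "Paper", "Modelcard", "Datasheet",
    ],
    "access": [
        "Package", "API and Meta Prompts",
    ],
}

def tally(ratings):
    """Count how many dimensions are rated open, partial, or closed.

    `ratings` maps dimension names to "open", "partial", or "closed";
    dimensions without a rating are counted as closed.
    """
    return Counter(
        ratings.get(dim, "closed")
        for dims in DIMENSIONS.values()
        for dim in dims
    )

# Illustrative use with made-up ratings for a hypothetical system:
example = {dim: "open" for dims in DIMENSIONS.values() for dim in dims}
example["Licenses"] = "partial"
print(tally(example))  # Counter({'open': 13, 'partial': 1})
```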
When it comes to open code, we find that BloomZ makes available source code for training, fine-tuning and running the model, while for Llama none of the model's source code is made available; only scripts for running the model are shared. The LLM data underlying the base model is documented in great detail by BloomZ, while for Llama only the vaguest details are provided in a corporate preprint: “a new mix of data from publicly available sources, which does not include data from Meta's products or services”. The statement is clearly designed to minimise legal exposure.
Both systems make available model weights, though for Llama access is restricted through a consent form. The training data for instruction tuning (RL data) is described and documented by BloomZ as consisting of xP3 (Crosslingual Public Pool of Prompts); for Llama, the corporate preprint notes that fine-tuning was done based on “a large dataset of over 1 million binary comparisons based on humans applying our specified guidelines, which we refer to as Meta reward modeling data”, and which remains undisclosed. (The same preprint mentions that for evaluation, Meta did build on several RLHF datasets openly shared by others.) Model weights for the instruction-tuned version (RL weights) are made openly available by BloomZ, while for Llama they require an access request.
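One practical consequence of this difference is that BloomZ's instruction-tuning data can be inspected directly, while Meta's reward modeling data cannot. The sketch below streams a few xP3 records from the Hugging Face Hub; the dataset ID, the "en" config and the field names are our assumptions about the public xP3 release, not details taken from this guide.

```python
# Sketch: peek at BloomZ's openly released instruction-tuning data (xP3).
# Dataset ID "bigscience/xP3", the "en" config and the "inputs"/"targets"
# fields are assumptions about the public Hugging Face release.
from datasets import load_dataset

xp3 = load_dataset("bigscience/xP3", "en", split="train", streaming=True)
for record in xp3.take(3):  # stream a few prompt/completion pairs
    print(record["inputs"][:120])
    print(record["targets"][:120])
    print("---")
```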
The BloomZ code is not just available; it is also well documented. For Llama, on the other hand, no documentation of source code is available (as the source code itself is not open). Model architecture is described for BloomZ in multiple scientific papers and supported by a GitHub repository of code and recipes on Hugging Face; for Llama, the architecture is described in less detail and is scattered across corporate websites and a preprint.
BloomZ's multiple preprints document data curation and fine-tuning in great detail; in contrast, Llama's single preprint offers fewer details and appears strategically vague on crucial points such as training datasets and instruction tuning. The scientific documentation of BloomZ also includes multiple peer-reviewed papers, including one of the very few scientifically vetted sources of data on the energy footprint of training large language models. No peer-reviewed papers providing scientific documentation or evaluation of Llama are currently known.
The two systems also differ in the systematicity of their documentation, as measured by the availability of model cards and datasheets: industry-standard formats for providing metadata on architecture, training, and evaluation. For BloomZ, these documents provide basic details alongside extensive cross-references to other documentation on training data, training approach, model architecture, fine-tuning and responsible use. In contrast, the Llama model card provides only minimal detail and none whatsoever on training data. A datasheet is only available for BloomZ. This means that for Llama, there is no documentation of training datasets whatsoever, a prime example of a strategy described by Birhane et al. as a tactical template of “(non)declaring the training dataset information”.
Access presents a mixed picture for both of the models. Both are primarily intended for local deployment, and are additionally available through various application programming interfaces (APIs).
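For local deployment, the openly available BloomZ weights can be loaded with standard tooling. A minimal sketch follows, using the small bigscience/bloomz-560m checkpoint as an illustrative choice; an equivalent call for Llama only succeeds after accepting Meta's license terms on the Hugging Face Hub.

```python
# Sketch of local deployment via openly released weights. The model ID
# "bigscience/bloomz-560m" (a small BloomZ checkpoint) is our choice for
# illustration; gated Llama weights require accepting Meta's license first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloomz-560m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Translate to French: I am open.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```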
Neither model is fully released under an OSI-approved open source license, but the licensing details are interestingly different. The BloomZ source code is released under Apache 2.0, a maximally open license with a minimum of restrictions. The model weights, on the other hand, are released under the Responsible AI License (RAIL). Llama releases no source code, as we saw above; its model weights are released under a bespoke "Meta Community License".
Both licenses aim to restrict harmful use cases, but there is a key difference in the constraints they put on representing model outputs. RAIL stipulates that a user may not “generate content without expressly and intelligibly disclaiming that the text is machine-generated”. The Meta Community License for Llama, on the other hand, stipulates that a user may not “represent that Llama 2 outputs are human-generated”. This is a much lower bar, because it leaves open a wide swathe of use cases in which human-generated output is not explicitly claimed but merely strongly implied.
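To summarise the licensing picture in one place, the sketch below checks each component's declared license against a small, non-exhaustive set of OSI-approved SPDX identifiers; the component-to-license mapping simply restates the discussion above, and the identifier strings are our shorthand.

```python
# Sketch: check declared licenses against a (non-exhaustive) excerpt of
# OSI-approved SPDX identifiers. The mapping restates the guide's text.
OSI_APPROVED = {"Apache-2.0", "MIT", "BSD-3-Clause", "GPL-3.0-only"}  # excerpt

components = {
    "BloomZ source code": "Apache-2.0",
    "BloomZ model weights": "BigScience RAIL",
    "Llama source code": None,  # not released
    "Llama model weights": "Meta Community License",
}

for name, spdx in components.items():
    status = "OSI-approved" if spdx in OSI_APPROVED else "not OSI-approved"
    print(f"{name}: {spdx or 'unreleased'} ({status})")
```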
This brief guide offers a walkthrough of the main dimensions of openness. As we have seen, the open source claim of BloomZ is well-founded. Llama, on the other hand, cannot be called "open source" under any reasonable definition of the term. It is at best open weights, and it is closed in almost all other respects. Llama, in all currently available versions, is a prime example of a model that claims the benefits of openness while merely providing access to its most inscrutable element: the model weights.
This guide incorporates some text and data from the following paper:
Liesenfeld, Andreas, and Mark Dingemanse. 2024. ‘Rethinking Open Source Generative AI: Open-Washing and the EU AI Act’. In The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). Rio de Janeiro, Brazil: ACM. doi: 10.1145/3630106.3659005