Mistral 3: A Heavyweight Model, Lightweight on Openness
by Andreas Liesenfeld
02 December 2025
Today, Mistral AI released the Mistral 3 family, including the massive 675B-parameter "Mistral Large 3." While the release fills a gap in high-performance European models and supports local data governance, it fails to set a new standard for openness.
Mirroring its partner Microsoft (and OpenAI), Mistral has released model weights but kept training data under wraps. Without documented data, true performance evaluation and compliance testing are impossible. By conflating "open-weight" with "open-source," Mistral engages in "open-washing"—a practice that hurts the ecosystem by overshadowing projects that adhere to the EU-endorsed UN Open Source Principles.
Europe does not need more open-weight models; it needs a path towards data at scale to drive innovation. Mistral's approach sucks the oxygen out of the room for smaller, highly open alternatives that contribute to this mission, such as the Swiss AI Initiative's "Apertus" models, which offer the genuine openness and data access the industry requires.
This lack of commitment is jarring, as Mistral AI recently signed an Open Source Initiative letter championing open data as a European value. By ignoring the very principles it publicly supported, Mistral has missed an opportunity to make a meaningful contribution to the European open source AI ecosystem. Europe needs AI-ready data, not more weights.
Parameter descriptions:
Base Model Data
Are data sources for training the base model comprehensively documented and made available? Where a distinction between base (foundation) and end (user) model is not applicable, this mirrors the end model data entries.
End User Model Data
Are data sources for training the model that the end user interacts with comprehensively documented and made available?
Base Model Weights
Are the weights of the base models made freely available? Where a distinction between base (foundation) and end (user) model is not applicable, this mirrors the end model weights entries.
End User Model Weights
Are the weights of the model that the end user interacts with made freely available?
Training Code
Is the source code for data source processing, model training, and tuning comprehensively made available?
Code Documentation
Is the source code for data source processing, model training, and tuning comprehensively documented?
Hardware Architecture
Is the hardware architecture used for data source processing and model training comprehensively documented?
Preprint
Are archived preprint(s) available that detail all major parts of the system, including data source processing, model training, and tuning steps?
Paper
Are peer-reviewed scientific publications available that detail all major parts of the system, including data source processing, model training, and tuning steps?
Modelcard
Is a model card available in a standardized format that provides comprehensive insight into model architecture, training, fine-tuning, and evaluation?
Datasheet
Is a datasheet as defined in "Datasheets for Datasets" (Gebru et al. 2021) available?
Package
Is a packaged release of the model available on a software repository (e.g. the Python Package Index, Homebrew)?
API and Meta Prompts
Is an API available that provides unrestricted access to the model (other than security and CDN restrictions)? If applicable, this entry also collects information on the use and availability of meta prompts.
Licenses
Is the project fully covered by Open Source Initiative (OSI)-approved licenses, including all data sources and training pipeline code?
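To make these parameters concrete, here is a minimal sketch of how a single assessment could be encoded in Python. It is an illustration only: the three-level Rating scale, the parameter keys, and the OpennessAssessment class are hypothetical and are not the index's actual data model.

```python
from dataclasses import dataclass, field
from enum import Enum


class Rating(Enum):
    """Hypothetical three-level scale for each openness parameter."""
    OPEN = "open"
    PARTIAL = "partial"
    CLOSED = "closed"


# The fourteen parameters described above, as machine-readable keys.
PARAMETERS = [
    "base_model_data", "end_user_model_data",
    "base_model_weights", "end_user_model_weights",
    "training_code", "code_documentation", "hardware_architecture",
    "preprint", "paper", "modelcard", "datasheet",
    "package", "api_and_meta_prompts", "licenses",
]


@dataclass
class OpennessAssessment:
    """One system's assessment: a rating plus an evidence note per parameter."""
    system: str
    ratings: dict[str, Rating] = field(default_factory=dict)
    notes: dict[str, str] = field(default_factory=dict)

    def rate(self, parameter: str, rating: Rating, note: str = "") -> None:
        # Reject keys that are not part of the documented parameter set.
        if parameter not in PARAMETERS:
            raise ValueError(f"unknown parameter: {parameter}")
        self.ratings[parameter] = rating
        if note:
            self.notes[parameter] = note

    def summary(self) -> dict[str, int]:
        """Count how many rated parameters fall into each level."""
        counts = {level.value: 0 for level in Rating}
        for rating in self.ratings.values():
            counts[rating.value] += 1
        return counts


if __name__ == "__main__":
    assessment = OpennessAssessment(system="Mistral Large 3")
    assessment.rate("end_user_model_weights", Rating.OPEN, "weights published")
    assessment.rate("base_model_data", Rating.CLOSED, "training data undocumented")
    print(assessment.summary())  # {'open': 1, 'partial': 0, 'closed': 1}
```

Keeping a free-text evidence note alongside each rating matters here: the argument above turns on why a parameter is closed (undocumented training data), not merely that it is.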