
Lumo: the least open 'open' AI assistant

by Mark Dingemanse
29 August 2025

Proton, the privacy-focused internet services company based in Europe, has jumped on the LLM bandwagon and last month released an LLM service called Lumo. Besides touting its privacy features, Proton's PR focuses on open source:

Unlike other AI assistants, my code is fully open source, so anyone can verify that it’s private and secure — and that we never use your data to train the model. (source)

Elsewhere on the Proton website, we find the claim that Lumo is "based upon open-source language models" (we are left to guess which). A comparison table shows Lumo alongside some other LLM assistants, with the feature "Opens source code for the public" prominently checked for Lumo and DeepSeek. Fact check: for DeepSeek this is definitely not the case; it is at best an open-weights model, and very little is known about its source code. Does Lumo fare any better?

It turns out that Lumo sets a new record in openness: it is the least open "open" AI assistant we have ever added to our index. One of our criteria for inclusion is an openness claim: a model provider calling its system "open", "open source", or some variation thereof. Lumo is "open" in that sense (Proton calls it open source) but in no other. Nothing about it is currently open.

Here is a comparison of the openness status of Lumo and two other well-known models in the space: DeepSeek (included in Proton's own comparison table) and OLMo from Ai2, the current openness champion.

Parameter descriptions (a toy scoring sketch follows the list):

Base Model Data
Are datasources for training the base model comprehensively documented and made available? In case a distinction between base (foundation) and end (user) model is not applicable, this mirrors the end model data entries.
End User Model Data
Are datasources for training the model that the end user interacts with comprehensively documented and made available?
Base Model Weights
Are the weights of the base models made freely available? In case a distinction between base (foundation) and end (user) model is not applicable, this mirrors the end model weights entries.
End User Model Weights
Are the weights of the model that the end user interacts with made freely available?
Training Code
Is the source code of datasource processing, model training and tuning comprehensively made available?
Code Documentation
Is the source code of datasource processing, model training and tuning comprehensively documented?
Hardware Architecture
Is the hardware architecture used for datasource processing and model training comprehensively documented?
Preprint
Are archived preprint(s) available that detail all major parts of the system including datasource processing, model training and tuning steps?
Paper
Are peer-reviewed scientific publications available that detail all major parts of the system including datasource processing, model training and tuning steps?
Modelcard
Is a model card available in standardized format that provides comprehensive insight on model architecture, training, fine-tuning, and evaluation?
Datasheet
Is a datasheet as defined in "Datasheets for Datasets" (Gebru et al. 2021) available?
Package
Is a packaged release of the model available on a software repository (e.g. the Python Package Index or Homebrew)?
API and Meta Prompts
Is an API available that provides unrestricted access to the model (other than security and CDN restrictions)? If applicable, this entry also collects information on the use and availability of meta prompts.
Licenses
Is the project fully covered by Open Source Initiative (OSI)-approved licenses, including all data sources and training pipeline code?
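
To make the rubric concrete, here is a toy sketch of how the fourteen parameters above could be tallied into a single score. This is illustrative only and not the index's actual methodology: the index records a graded judgment (open, partial, or closed) per parameter rather than computing one number, and all names below are ours.

    # Toy sketch only -- not the index's actual scoring code.
    from enum import Enum

    class Rating(Enum):
        OPEN = 1.0
        PARTIAL = 0.5
        CLOSED = 0.0

    # One entry per parameter in the list above.
    PARAMETERS = [
        "base_model_data", "end_user_model_data",
        "base_model_weights", "end_user_model_weights",
        "training_code", "code_documentation", "hardware_architecture",
        "preprint", "paper", "modelcard", "datasheet",
        "package", "api_and_meta_prompts", "licenses",
    ]

    def openness_score(ratings: dict[str, Rating]) -> float:
        """Average the per-parameter ratings into a 0..1 score."""
        return sum(ratings[p].value for p in PARAMETERS) / len(PARAMETERS)

    # Lumo as assessed in this post: nothing is disclosed, so every
    # parameter is closed and the score bottoms out at zero.
    lumo = {p: Rating.CLOSED for p in PARAMETERS}
    assert openness_score(lumo) == 0.0

On this toy scale, a fully documented system like OLMo would sit near 1.0; Lumo sits at 0.0 on every parameter.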
[Openness comparison table, last updated 12 Sep 2025: the embedded widget lists every system in the index with its base model, ranked from most to least open, from OLMo by Ai2 (OLMo-2-0325-32B) at the top down to Lumo AI by Proton (base model: undisclosed) in last place.]

In the past, we've seen model providers like Mistral retreat from the term "open source" and adopt the more precise "open weights". Proton will need to do more work to get even that far, as it doesn't reach the middling openness levels of Mistral and other model providers that are evasive about training data.

It may be that Lumo is a bespoke fine-tuned version of an open-weights model like Mistral, OLMo or Llama. It may be a wrapper around an API from Anthropic or another provider. It may be all of these things together in a trenchcoat. The point is: we would know if it were truly open source.

On the sunny side: the only way from here is up. Disclosing literally anything, from training data to model weights to model architecture to datasheets, will be an improvement over the current status. We will be following Lumo's rise with great interest.

Update September 1

Proton's Lumo account on Bluesky has responded to this post, saying "Lumo isn't a model; it uses open models, including the model listed as the best for openness in this piece. You can find the models Lumo uses in our privacy policy". That page repeats the open source claim:

Lumo’s code is open source, meaning anyone can see it’s secure and does what it claims to. We’re constantly improving Lumo with the latest models that give the best user experience.

The only open source code we have found is for the Lumo mobile and web apps. Proton calling the Lumo AI assistant open source on that basis is a bit like Microsoft calling Windows open source because there is a GitHub repository for Windows Terminal.

The models listed on Lumo's privacy policy page are "Nemo, OpenHands 32B, OLMO 2 32B, and Mistral Small 3". OpenHands is a Qwen fine-tune, and Nemo and Mistral Small are both Mistral models. Since Proton has open-sourced neither the Lumo system prompt nor the mysterious routing methods that decide which model will handle your query, you never know what you are going to get. At best, Lumo can only be as open as the least open system it uses. In practice it has to be even less open than that, because Proton has added undisclosed optimization steps and further layers of routing obscurity.
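
To see why the routing matters, consider a deliberately hypothetical sketch. All names, numbers and routing logic below are invented for illustration; Proton has disclosed none of its actual routing code.

    # Hypothetical illustration: a system that routes queries across
    # several models is, from the user's perspective, at most as open
    # as its least open component -- and undisclosed glue counts as closed.
    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str
        openness: float  # 0.0 = fully closed .. 1.0 = fully open

    def system_openness(components: list[Component]) -> float:
        """Upper bound on system openness: the minimum across components."""
        return min(c.openness for c in components)

    # Invented scores for the parts Proton names, plus the parts it doesn't.
    lumo_stack = [
        Component("OLMo 2 32B", 0.9),                    # openly documented
        Component("Mistral Small 3", 0.5),               # open weights only
        Component("system prompt + query routing", 0.0), # undisclosed
    ]
    print(system_openness(lumo_stack))  # 0.0

The scores are made up, but the structure of the argument is not: whatever the constituent models score, the undisclosed routing layer pins the system as a whole to the bottom.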

In the absence of technical documentation of system architecture, we cannot update our assessment. If Proton releases more detailed information, we welcome a pull request with the requisite updates. Perhaps the conclusion will be that Lumo is not enough of a unified system to merit inclusion; or perhaps additional information will allow it to rise through the openness ranks. As noted before, none of this would come up if Lumo were actually open source.

