Proton, the privacy-focused internet service provider based in Europe, has jumped on the LLM bandwagon: last month it released an LLM service called Lumo. Besides touting its privacy features, Proton's PR leans heavily on open source:
Unlike other AI assistants, my code is fully open source, so anyone can verify that it’s private and secure — and that we never use your data to train the model. (source)
Elsewhere on the Proton website, we find the claim that Lumo is "based upon open-source language models" (we are left to guess which). A comparison table shows Lumo alongside some other LLM assistants, with the feature "Opens source code for the public" prominently checked for Lumo and DeepSeek. Fact check: for DeepSeek that is definitely not the case; it is at best an open-weights model, and very little is known about its source code. Does Lumo fare any better?
It turns out that Lumo sets a new record: it is the least open "open" AI assistant we have ever added to our index. One of our criteria for inclusion is an openness claim: a model provider that calls its system "open" or "open source" or some variation thereof. Lumo is "open" in that sense (Proton calls it open source) but in no other. Nothing about it is currently open.
Here is a comparison of the openness status of Lumo and two other well-known models in the space: DeepSeek (included in Proton's own comparison) and OLMo from Ai2, the current openness champion.
[Embedded openness-index comparison widget; the full index listing does not reproduce here. The entries relevant to this comparison: OLMo by Ai2 (Olmo-3-1125-32B), DeepSeek R1 by DeepSeek (base model DeepSeek-V3-Base), and Lumo AI by Proton (base model undisclosed).]
In the past, we've gotten model providers like Mistral to retreat from the term "open source" in favour of the more precise "open weights". But Proton will need to do more work to get even that far: it doesn't reach the middling openness levels of Mistral and other model providers that are evasive about training data.
It may be that Lumo is a bespoke fine-tuned version of an open-weights model like Mistral, OLMo or Llama. It may be a wrapper around an API from Anthropic or another provider. It may be all of these things together in a trenchcoat. The point is: we would know if it were truly open source.
On the sunny side: the only way from here is up. Literally disclosing anything, from training data to model weights to model architecture to data sheets, will be an improvement over the current status. We will be following its rise with great interest.
Proton's Lumo account on Bluesky has responded to this post, saying: "Lumo isn't a model; it uses open models, including the model listed as the best for openness in this piece. You can find the models Lumo uses in our privacy policy". That page repeats the open source claim:
Lumo’s code is open source, meaning anyone can see it’s secure and does what it claims to. We’re constantly improving Lumo with the latest models that give the best user experience.
The only open source code we have found is for the Lumo mobile and web apps. Proton calling the Lumo AI assistant open source on that basis is a bit like Microsoft calling Windows open source just because there's a GitHub repository for Windows Terminal.
The models listed on Lumo's privacy policy page are "Nemo, OpenHands 32B, OLMO 2 32B, and Mistral Small 3". OpenHands is a Qwen fine-tune, and Nemo and Mistral Small are both Mistral models. Since Proton has open-sourced neither the Lumo system prompt nor the routing logic that decides which model will handle your query, you never know what you are going to get. At best, Lumo can only be as open as the least open model it uses. In practice, it has to be even less open than that, because Proton has added undisclosed optimization steps and further layers of routing obscurity.
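To make the "least open component" point concrete, here is a minimal sketch. The openness scores and the `effective_openness` function are our own illustrative assumptions, not anything Proton has published: a user who cannot inspect the router has to assume they may be served by the least open backend, and undisclosed routing can only lower transparency further.

```python
# Hypothetical openness scores (0-10) for the backends Lumo reportedly uses.
# Both the scores and the routing logic below are illustrative assumptions.
backends = {
    "OLMo 2 32B": 9,        # fully open: weights, data, and code documented
    "Mistral Nemo": 4,      # open weights, training data undisclosed
    "Mistral Small 3": 4,   # open weights, training data undisclosed
    "OpenHands 32B": 5,     # fine-tune of an open-weights Qwen model
}

def effective_openness(backends: dict[str, int], routing_disclosed: bool) -> int:
    """A user who cannot see the router must assume the least open backend."""
    floor = min(backends.values())
    # Undisclosed routing and post-processing can only subtract transparency.
    return floor if routing_disclosed else floor - 1

print(effective_openness(backends, routing_disclosed=False))  # → 3
```

The `min` here is the whole argument in one line: averaging over backends would overstate what a user can verify, because any single query may hit the least open model.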
In the absence of technical documentation of system architecture, we cannot update our assessment. If Proton releases more detailed information, we welcome a pull request with the requisite updates. Perhaps the conclusion will be that Lumo is not enough of a unified system to merit inclusion; or perhaps additional information will allow it to rise through the openness ranks. As noted before, none of this would come up if Lumo were actually open source.