Ethically trained AI startup Pleias releases new small reasoning models optimized for RAG with built-in citations
Summary
French AI startup Pleias has released two open-source reasoning models, Pleias-RAG-350M and Pleias-RAG-1B, built for retrieval-augmented generation (RAG), citation synthesis, and structured multilingual output.
Based on the company's ethically trained Pleias 1.0 family of small language models, they are aimed at enterprises, developers, and researchers seeking cost-effective alternatives to large-scale models without compromising on traceability, multilingual capabilities, or structured reasoning workflows.
The new models have built-in support for source citations, enabling auditability in regulated sectors such as healthcare and finance. They are also described as proto-agentic: they can assess whether a query is understandable and decide whether it should be answered, reformulated, or refused.
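For illustration, here is a minimal sketch of what a citation-grounded RAG call to one of these models might look like via the Hugging Face transformers library. The prompt layout, citation markers, and sample sources are assumptions for demonstration, not Pleias's documented input format.

```python
# Minimal sketch: retrieval-augmented generation with numbered source
# citations, assuming a plain-text prompt that lists sources before the
# question. The prompt format here is an illustrative assumption, not
# Pleias's official API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PleIAs/Pleias-RAG-350M"  # public Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical retrieved passages, numbered so the model can cite them.
sources = [
    "[1] The central bank raised its 2025 growth forecast to 1.2%.",
    "[2] Eurozone inflation fell to 2.4% in March, per Eurostat.",
]
query = "What is the central bank's growth forecast for 2025?"

prompt = "Sources:\n" + "\n".join(sources) + f"\n\nQuestion: {query}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)

# Decode only the newly generated tokens; a citation-aware model would be
# expected to reference sources inline, e.g. "... 1.2% in 2025 [1]".
answer = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)
```

In a proto-agentic workflow of the kind described above, a wrapper around this call would first check whether the model judges the query answerable from the supplied sources, and reformulate or refuse accordingly.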
On multi-hop question-answering benchmarks such as HotPotQA, 2WikiMultiHopQA, and MuSiQue, they outperform most open-weight models under 4 billion parameters, and they perform strongly across languages, suffering only minimal degradation on benchmark sets translated into French, German, Spanish, and Italian.