Mistral Small 3 is Mistral's most efficient and versatile model. Pre-trained and instruction-tuned versions, Apache 2.0, 24B parameters, 81% MMLU, 150 tokens/s. No synthetic data, so it is a great base for any reasoning task.
Impressive work on Mistral Small 3! The balance of efficiency, versatility, and no synthetic data makes it a solid foundation for so many applications. Excited to see how it evolves!
Finally an AI that is small enough for me to add to my mobile app. Thanks for your work on this!