Chris Messina

Mistral Large 2 - Top-tier reasoning for high-complexity tasks

Compared to its predecessor, Mistral Large 2 is significantly more capable in code generation, mathematics, and reasoning. It also provides much stronger multilingual support and advanced function-calling capabilities.

Replies

Sahir Maharaj
Congrats on this excellent release! It’s really made a difference for me, @arthurmensch
Aris Nakos
AI summer is going strong. Congratulations.
BytezNow
Congrats on the launch! Your hard work is evident, and this tool will be a great asset
David Wong
I just want to say “Sorry for my late vote”
Florent ROMANET
Another gold 🥇 for France 🇫🇷 Coding with Mistral is a real pleasure... Would have been even better if open-sourced, though 🤔
Duarte Martins
When trying out the snippet shown on the Hugging Face page, it returns an error:

from huggingface_hub import InferenceClient
import os

client = InferenceClient(
    "mistralai/Mistral-Large-Instruct-2407",
    token=os.getenv("hf_token"),
)

for message in client.chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=500,
    stream=True,
):
    print(message.choices[0].delta.content, end="")

403 Forbidden: None. Cannot access content at: https://api-inference.huggingfac.... If you are trying to create or update content, make sure you have a token with the `write` role.

The model mistralai/Mistral-Large-Instruct-2407 is too large to be loaded automatically (245GB > 10GB). Please use Spaces (https://huggingface.co/spaces) or Inference Endpoints (https://huggingface.co/inference...).
Clément Souchier
Congrats Mistral team! And Arthur Mensch