Compared to its predecessor, Mistral Large 2 is significantly more capable in code generation, mathematics, and reasoning. It also provides much stronger multilingual support and advanced function-calling capabilities.
Congrats on the launch! Your hard work is evident, and this tool will be a great asset
I just want to say “Sorry for my late vote”
Another gold 🥇 for France 🇫🇷
Coding with Mistral is a real pleasure... Would have been even better if open-sourced though 🤔
When trying out the snippet shown on the Hugging Face page, it returns an error:

from huggingface_hub import InferenceClient
import os

client = InferenceClient(
    "mistralai/Mistral-Large-Instruct-2407",
    token=os.getenv("hf_token"),
)

for message in client.chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=500,
    stream=True,
):
    print(message.choices[0].delta.content, end="")
403 Forbidden: None.
Cannot access content at: https://api-inference.huggingfac....
If you are trying to create or update content, make sure you have a token with the `write` role.
The model mistralai/Mistral-Large-Instruct-2407 is too large to be loaded automatically (245GB > 10GB). Please use Spaces (https://huggingface.co/spaces) or Inference Endpoints (https://huggingface.co/inference...).
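As the error itself notes, the model (245GB) exceeds the 10GB limit for automatic serverless loading, so the snippet will not run there regardless of the token. That said, one thing worth ruling out for the 403 specifically: os.getenv is case-sensitive, and if no environment variable literally named "hf_token" is set, the client receives token=None. A minimal sketch of that failure mode, using a hypothetical HF_TOKEN variable name and a placeholder value:

```python
import os

# Env var lookups are case-sensitive: getenv("hf_token") returns None
# unless a variable with exactly that name is set. A None token means
# the request is effectively unauthenticated, which can surface as 403.
os.environ.pop("hf_token", None)  # simulate the unset case
assert os.getenv("hf_token") is None

# Hypothetical convention: export the token as HF_TOKEN and fail fast
# before constructing the client, instead of getting a late 403.
os.environ["HF_TOKEN"] = "hf_example_placeholder"  # placeholder, not a real token
token = os.getenv("HF_TOKEN")
if token is None:
    raise RuntimeError("Set HF_TOKEN before creating the InferenceClient")
```

Even with a valid token, a model this size would still need a dedicated Inference Endpoint or local deployment rather than the serverless API.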