
Alconost.MT/Evaluate
Let AI score your translation work
40 followers
Experimental AI tool for translation quality evaluation using top-tier LLMs like GPT, Claude, Gemini, and more. Upload files, get detailed feedback and corrections. Add your style guide, glossary, and custom instructions for project-specific evaluation.
We’re beyond excited to see Alconost.MT/Evaluate on Product Hunt today! A huge thank you to everyone who supported us during the development. Your feedback and early testing were invaluable. Can’t wait to hear what the Product Hunt community thinks!
Huge kudos to the team! This is a bold move. Bringing an internal experiment to the public takes guts.
So good to see this live! Knowing the team behind it, I’m sure this will only get better.
Love that you can switch between GPT-4 and Claude. It’s great for comparing how different models see the same translation.
A nice standalone QE tool with good potential and a solid implementation. Congrats on the launch!
Aww, finally: not a vague “good enough” statement but structured, and hopefully unbiased, scoring! Fluency, accuracy, terminology - it breaks the translated content down into pieces (and it doesn’t roll its eyes haha). I think translators who struggle with subjective evaluations will use it as a reference, both to improve the results of their work... and to feel less desperate and more self-confident :-)
I’ve used it with both MT and human output. The tool handles both pretty well. Also, love that no login is needed.