Anthropic just dropped Opus 4.7
Here’s what changed:
• Production-ready code with minimal oversight, and it can verify its own outputs
• More control over reasoning effort
• 3x better vision (now up to 3.75MP images)
• Improved instruction following and overall reliability
• New “xhigh” reasoning mode for finer control between speed and depth
Same pricing as Opus 4.6 ($5 per million input tokens, $25 per million output tokens). The new tokenizer can use roughly 1.0 to 1.35x as many tokens depending on content, though this can be managed through effort settings and task budgets.
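To see what that tokenizer multiplier means in dollars, here's a minimal back-of-the-envelope sketch using the pricing from this post. The token counts are made-up example numbers, and the 1.0–1.35x range is just the figure reported above — actual usage depends on your content.

```python
# Rough cost estimate: same $5/$25 per-million pricing, but token counts
# scaled by the reported 1.0-1.35x tokenizer multiplier.
INPUT_PRICE = 5.00 / 1_000_000    # USD per input token
OUTPUT_PRICE = 25.00 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int, multiplier: float = 1.0) -> float:
    """Cost of one request, with token counts scaled by a tokenizer multiplier."""
    return (input_tokens * multiplier * INPUT_PRICE
            + output_tokens * multiplier * OUTPUT_PRICE)

# Hypothetical request: 10k input tokens, 2k output tokens
base = request_cost(10_000, 2_000)         # old-tokenizer baseline
worst = request_cost(10_000, 2_000, 1.35)  # worst-case 1.35x content
print(f"baseline: ${base:.4f}, worst case: ${worst:.4f}")
```

So in the worst case the same request costs 35% more, which is why capping effort and budgets per task matters.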
Curious how you’d use this in your current stack?


Replies
3x better vision could unlock a lot, especially for teams working with screenshots, dashboards, or real-world images. Have you tested it on messy inputs yet?
Voquill
@brandon_elliott1 Haven’t tested it yet, but I did see a post on X earlier where someone shared early tests, and it looked pretty strong.
Curious how this performs in long-running tasks. Does the self-verification help prevent drift over time?
Voquill
@bruce_warren That’s a big question for me, too. We’ll find out once we dive in.
More control over reasoning effort and task budgets feels like a big step toward predictable costs and behavior. Curious how easy it is to tune in practice, though.
Voquill
@damian_cole Yeah, if this is easy to manage in practice, that’s a big win for the upgrade.
Launching SplitPost on PH on May 13th, so the timing is interesting, but I'm pinning to 4.6 for now. The tokenizer change means the same input can cost up to 35% more in tokens depending on content type, and I'd rather benchmark that properly than discover it on launch week.
Also seeing a lot of reports about 4.6 quality degrading in the weeks before this release, which makes me want to let 4.7 settle before touching anything in production.
Voquill
@splitpostio That makes sense. Good luck with the upcoming launch!