New AI models pop up every week. Some developer tools like @Cursor, @Zed, and @Kilo Code let you choose between different models, while more opinionated products like @Amp and @Tonkotsu default to a single model.
Curious what the community recommends for coding tasks? Any preferences?
In my experience, Opus is really good at generating code for a specific framework or service, but for everyday tasks like refactoring, writing helpers, or SQL queries, GPT-5.2 ends up being faster and cheaper.
Overall, you pick the model for the task, not the other way around.
@fmerian Thanks for the upvote! 🙌
For coding: Claude 3.5/Opus still crushes complex logic, but newer options like Gemini CLI are gaining speed.
What’s your daily driver in 2026? (I’m fine-tuning voice models for HireXHub — always open to recs!)
We’ve been using CC Opus 4.5 and it’s been solid for how we work. It really helps when we’re dealing with heavier thinking like product logic, edge cases, or those moments where you’re trying to understand what breaks if one decision changes. It’s not the fastest, but it saves us from making expensive mistakes.
For quick iteration, I understand why people lean toward faster models, and we don't fight it. If the task is mechanical (tests, small refactors, glue code), we switch to something faster. Using Opus for that is a waste.
Opus 4.5 is the king, but expensive. GPT-5.2 is probably the second best on this list. Sonnet 4.5 is still great, but getting old.
Sonnet 4.5 launched 4 months ago, and yet, you're so right - it's getting old
Claude (Sonnet 3.5/3.6) for speed, Opus for complex architecture decisions. I've shipped 4 products as a weekend vibe coder over the last 3 months using almost exclusively Claude Code.
The thing nobody mentions: the model matters less than your prompting discipline. Clear specs, small, scoped tasks, reviewing every output. I write maybe 30% of the code myself but review 100%. The security gap is real, though - AI-generated code loves to skip input validation and auth edge cases.
For pure speed: Sonnet. For getting things right the first time on tricky stuff: Opus. GPT is fine but Claude just "gets" code structure better in my experience.
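On that security gap: here's a hypothetical sketch of the kind of input validation AI-generated handlers often skip before passing user input along. The helper name and the allowed-character rule are made up for illustration, not from any real codebase:

```python
import re

# Hypothetical check of the sort AI-generated endpoints tend to omit
# before user input reaches a query, template, or shell command.
USERNAME_RE = re.compile(r"^[a-zA-Z0-9_]{3,32}$")

def validate_username(raw):
    """Reject anything that isn't a short alphanumeric handle."""
    if not isinstance(raw, str):
        raise TypeError("username must be a string")
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("username must be 3-32 chars: letters, digits, underscore")
    return raw

validate_username("dev_42")            # passes
# validate_username("'; DROP TABLE")   # raises ValueError
```

Reviewing for missing checks like this is exactly the "review 100%" habit, whichever model wrote the code.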
Been experimenting with different models for a conversational AI project I'm building. Here's what I've learned:
Context management is everything. The "best" model really depends on your use case:
- Complex refactoring? Opus 4.5 hands down. Worth the cost when you need deep reasoning.
- Quick iterations/prototyping? Sonnet 4.5 hits the sweet spot - fast enough to stay in flow, smart enough to handle most tasks.
- Frontend/UI work? Gemini Pro surprised me with speed and quality.
The real game-changer isn't just the model though - it's how you structure your prompts and manage context. I've found that keeping a clean git history and feeding the model focused diffs (not entire repos) makes a huge difference regardless of which model you use.
Also learned the hard way: don't let context bloat past 40-50%. Quality drops fast after that.
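The 40-50% rule of thumb above can be sketched as a toy budget check. The chars-per-token heuristic is crude and the 200k window is an assumed figure, not any specific model's real limit:

```python
CONTEXT_WINDOW = 200_000   # assumed window size; varies by model
BUDGET_FRACTION = 0.5      # the "don't bloat past 40-50%" rule of thumb

def estimate_tokens(text):
    """Very rough heuristic: about 4 characters per token for English/code."""
    return len(text) // 4

def within_budget(chunks):
    """True if the combined prompt stays under half the context window."""
    total = sum(estimate_tokens(c) for c in chunks)
    return total <= CONTEXT_WINDOW * BUDGET_FRACTION

# A focused diff fits easily; a whole repo dump usually won't.
diff = "def foo():\n    return 1\n" * 10
print(within_budget([diff]))
```

Feeding focused diffs instead of whole repos is what keeps you on the right side of this check.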
Still leaning toward Sonnet for coding — feels the most reliable for complex logic so far 👍
Voted for Sonnet 4.5. Been testing different models for a few months now. Here's what I've noticed:
• Sonnet 4.5 hits the sweet spot between speed and accuracy for coding tasks
• GPT-5.2 is powerful but slower and more expensive
• Gemini 3 is improving fast but still catching up on complex codebases
The 71% vote makes sense - it's not just hype. Sonnet actually delivers for day-to-day development work. Curious what others think about the cost-performance tradeoff. Are you sticking with one model or switching based on task complexity?
NMTV
I really liked working with Gemini 3 on all things related to UX/UI, but when it comes to logic, Opus 4.5 is the king.
This is how I created @NMTV