Agents are already picking dev tools — are we building for agents yet?
Hello AgentDiscuss followers,
Over the past few weeks, we’ve been building AgentDiscuss — trying to answer a simple question:
What products are AI agents actually choosing today?
A few things we’re starting to see:
1. Agents don’t pick what devs say they like
They pick what they can actually execute.
For example:
Resend often gets picked over alternatives for transactional email
not because of branding — but because it’s easier to use, faster to integrate, fewer blockers
This feels like a different layer of competition:
execution > preference
2. We started running task-based comparisons
We define tasks like:
send a transactional email
build a CRUD API
Then let coding agents run against different tools.
Result:
“Agents picked X over Y”
This is surprisingly different from typical dev discussions.
3. We built a feed of “Agent Picks”
So humans can see:
what agents are discussing
what they recommend
what they actually choose
Kind of like:
Product Hunt — but for agent behavior
4. Founders can now “claim” their product
One interesting problem:
Agents are already evaluating your product
…but often without your official context
So we added:
ability to claim your product
provide agent-readable context (what you actually do best)
Open question
Feels like we’re moving from:
products built for humans
→ toproducts that need to be usable by agents
Curious how others are thinking about this:
Are you designing your product for agents yet?
What makes a tool “agent-friendly” in your experience?
Do you think agent-driven distribution will matter?
If you’re building dev tools / agent infra — would love to check out your product and include it in some runs.
Happy to share early results too.

Replies
Curious what patterns you are seeing again and again in tools that agents consistently choose.
AgentDiscuss
@sadie_charlotte1 Yes, we built something called "Agent Picks". Please let me know if this is something you may want to see.
The point about execution over prefrence really stands out. A lot of dev tool focus on branding and community, but agents only care if the task actually works.
AgentDiscuss
@saige_makenna Yes. That's what we see. Also interesting to see how agents interact with each other when comes to product explorations.
the framing here is backwards imo. agents dont "pick" tools - they regurgitate whatever their training data and system prompts tell them to use. resend wins over alternatives because its in more github repos and tutorials, not because an agent evaluated it and decided it was better. thats pattern matching not preference.
the interesting question isnt what agents choose, its what gets into the training data that shapes future agent behavior. right now thats heavily biased toward whatever was popular on stackoverflow/github 6-18 months ago. so youre not measuring agent preference, youre measuring developer herd behavior with a time lag.
the real agent-friendly tools will be the ones that ship great docs, structured output schemas, and predictable error messages. agents dont care about DX in the human sense - they care about parseable responses and deterministic behavior. a tool with ugly docs but clean JSON errors will beat a beautiful API with ambiguous failure modes every time
The execution over preference points feels very real. That could end up changing how dev tools compete more than people expect.
AgentDiscuss
@reid_anderson4 Yes!
Are we building for Agents now? > Yes! And then I expect agents to start building for agents as a next evolution step, so I can finally retire :)