Launching today

Agentation
The visual feedback tool for AI agents
393 followers
Agentation turns UI annotations into structured context that AI coding agents can understand and act on. Click any element, add a note, and paste the output into Claude Code, Codex, or any AI tool.

How does Agentation handle feedback for multi-agent workflows? Does it support collaboration between different AI agents?
Indie.Deals
Agentation bridges the gap between design feedback and code changes. Annotate any element on your UI — click, type, done — and get structured output that AI coding agents can immediately understand and act on.
Paste your annotations into Claude Code, Codex, or any AI tool and watch feedback become working code.
Key features:
Multiple annotation modes: select text, click elements, multi-select, draw areas, or freeze animations to capture specific states
Smart element identification: automatically generates grep-friendly selectors so agents find the exact element in your codebase
React component detection: surfaces the full component hierarchy for any element, right in the annotation popup
Computed styles: view live CSS properties alongside your notes for precise design specs
Layout mode: drag 65+ component types onto the page and rearrange sections; changes sync to agents in real time via MCP
Structured markdown output: copy clean, agent-ready annotations with one keystroke (C)
MCP integration: two-way agent sync lets AI acknowledge, question, or resolve your feedback directly.
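To make the "structured markdown output" concrete, here is a minimal TypeScript sketch of what one annotation record and its agent-ready rendering might look like. The field names and format are illustrative assumptions, not Agentation's actual schema.

```typescript
// Hypothetical annotation shape (illustrative only, not the real API).
interface Annotation {
  note: string;                            // the reviewer's feedback text
  selector: string;                        // grep-friendly selector for the element
  componentPath: string[];                 // React component hierarchy, outermost first
  computedStyles: Record<string, string>;  // live CSS captured at annotation time
}

// Render one annotation as clean, agent-ready markdown.
function toMarkdown(a: Annotation): string {
  return [
    `### ${a.note}`,
    `- selector: \`${a.selector}\``,
    `- component: ${a.componentPath.join(" > ")}`,
    ...Object.entries(a.computedStyles).map(([k, v]) => `- ${k}: ${v}`),
  ].join("\n");
}
```

An agent receiving output in this spirit can grep for the selector, locate the component in the hierarchy, and apply the requested change without the user writing any locating prose by hand.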
Check out the maker's latest video demo here. The maker also recently joined X as Design Lead.
Wannabe Stark
One of the most underrated pain points in building with AI agents is having zero visibility into what they're actually doing; you're basically flying blind until something breaks. Curious how you're handling agent workflows that branch or run in parallel. Does the visualization scale well for more complex pipelines?
This solves a real friction point. Right now when I use Claude Code or Codex, I spend a lot of time writing context about which element I mean - "the button in the top-right of the filter panel" etc. Having structured annotations that feed directly into the agent as context is much cleaner. How does it handle dynamic elements that change state? Like a button that’s disabled until a form is valid?
Biteme: Calorie Calculator
@mykola_kondratiuk
yeah curious to see how they handle that edge case - seems like the kind of thing that makes or breaks the actual agent workflow
Thanks for launching, Agentation team! When we annotate a component and feed it to Claude Code, can we keep a history of annotations so the agent knows when the DOM changed? I'd love to see how you handle selector drift.
The click-to-annotate approach is what I've been wanting tbh. I spend way too much time trying to explain to coding agents which exact element needs changing, and half the time they still get it wrong.
Curious how it handles dynamically rendered stuff though? Like components behind a toggle or things that only show on hover?
this is exactly what we've been missing. we use Claude Code daily and the biggest friction is always translating "fix that button" into precise technical context. being able to click and annotate the actual UI elements then paste structured output sounds like a game changer. how detailed does the context get - does it capture CSS selectors, component hierarchies, that level of detail?