Launching today

Agentation
The visual feedback tool for AI agents
48 followers
Agentation turns UI annotations into structured context that AI coding agents can understand and act on. Click any element, add a note, and paste the output into Claude Code, Codex, or any AI tool.

Indie.Deals
Agentation bridges the gap between design feedback and code changes. Annotate any element on your UI (click, type, done) and get structured output that AI coding agents can immediately understand and act on.
Paste your annotations into Claude Code, Codex, or any AI tool and watch feedback become working code.
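The announcement doesn't show the actual output format, but as a purely illustrative sketch (all selectors, notes, and values below are made up), a pasted annotation might look something like:

```markdown
## Annotation 1
- Element: button.btn-primary (header toolbar)
- Note: make this button full-width on mobile
- Computed styles: width: 120px; background: #1a73e8
```

The point is that the agent receives a selector, a note, and style context together, instead of a prose description of where the element is.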
Key features:
Multiple annotation modes: select text, click elements, multi-select, draw areas, or freeze animations to capture specific states
Smart element identification: automatically generates grep-friendly selectors so agents find the exact element in your codebase
React component detection: surfaces the full component hierarchy for any element, right in the annotation popup
Computed styles: view live CSS properties alongside your notes for precise design specs
Layout mode: drag 65+ component types onto the page and rearrange sections; changes sync to agents in real time via MCP
Structured markdown output: copy clean, agent-ready annotations with one keystroke (C)
MCP integration: two-way agent sync lets AI acknowledge, question, or resolve your feedback directly
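To make the "grep-friendly selector" idea above concrete, here is a minimal sketch (hypothetical, not Agentation's actual implementation) of deriving a searchable identifier from an element's metadata, so an agent can grep the codebase for it:

```typescript
// Hypothetical sketch: build a grep-friendly selector string from
// element metadata. Names and format are illustrative assumptions.
interface ElementInfo {
  tag: string;
  id?: string;
  classes: string[];
  text?: string;
}

function grepFriendlySelector(el: ElementInfo): string {
  // An id is usually unique, so it makes the best search target.
  if (el.id) return `#${el.id}`;
  const classPart = el.classes.length ? "." + el.classes.join(".") : "";
  // Visible text (truncated) helps locate the element in JSX/templates.
  const textPart = el.text ? `:contains("${el.text.slice(0, 30)}")` : "";
  return `${el.tag}${classPart}${textPart}`;
}

console.log(
  grepFriendlySelector({ tag: "button", classes: ["btn-primary"], text: "Save changes" })
);
// → button.btn-primary:contains("Save changes")
```

A string like `btn-primary` or the button's label is something an agent can feed straight into `grep` or a code search, which is what makes the selector "grep-friendly" rather than a brittle DOM path.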
Check out the maker's latest video demo here; he also recently joined X as Design Lead.
This solves a real friction point. Right now when I use Claude Code or Codex, I spend a lot of time writing context about which element I mean ("the button in the top-right of the filter panel", etc.). Having structured annotations that feed directly into the agent as context is much cleaner. How does it handle dynamic elements that change state, like a button that's disabled until a form is valid?