ugcs.farm
Prompts tuned tight enough to one-shot the render.
AI video is a slot machine. Most teams burn 5–10 renders before one is actually usable — wrong cap, six fingers, product orientation flipped halfway through. ugcs.farm is the first AI UGC tool that's actually seen your reference clip. A multi-agent pipeline (Skills + Critic + Judge) grounds every prompt in the source video itself, so the first render usually lands. Tuned per model for Sora, Veo, Wan, and Kling. Export to Higgsfield, fal.ai, Google AI Studio, or Replicate. Free in open beta.

Hi Product Hunt 👋
The math behind AI video has been driving me nuts.
A Sora render costs ~$4 and takes 90 seconds. Veo 3, Wan, and Kling are cheaper but not free. The painful part isn't the cost of one render; it's that you almost never get a usable output on the first try. The cap is wrong. The hand has six fingers. The product flips orientation halfway through. So you regenerate. And again. And again.
By the time you have one ad ready to ship, you've burned 6-10 renders, $30-40, and 20 minutes per take. Across 50 variations a week, that's a real line item.
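If you want that line item in plain numbers, here's the back-of-envelope math using the figures above (illustrative only, Sora pricing as the baseline):

```python
# Back-of-envelope weekly burn from the numbers above (illustrative only).
cost_per_render = 4.00        # ~$4 per Sora render
renders_per_keeper = (6, 10)  # renders burned before one usable take
variations_per_week = 50

low = cost_per_render * renders_per_keeper[0] * variations_per_week
high = cost_per_render * renders_per_keeper[1] * variations_per_week
print(f"Weekly render spend: ${low:,.0f}-${high:,.0f}")  # -> $1,200-$2,000
```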
The reason this happens is that every "AI UGC" tool I tried was just a prompt rewriter. It looks at the *words* you typed and guesses. It hasn't actually seen the clip you're trying to remix.
ugcs.farm is built around one bet: if the prompt is grounded in the source video itself, the first render usually sticks.
Here's how (rough sketch of the flow right after the steps):
1. Drop a vertical reference clip (yours or a public TikTok / Reel / Short)
2. Auto-extract every shot, pick the moments where the swap should happen
3. Upload your product / character / brand reference
4. Get a separate prompt tuned for Sora, Veo 3, Wan 2.1, and Kling
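Purely to make that flow concrete, here's a minimal sketch in Python. Every function, field, and model ID below is hypothetical; it's the shape of the pipeline, not the actual ugcs.farm API:

```python
# Hypothetical sketch of the four-step flow above; names, fields, and model IDs
# are illustrative, not the real ugcs.farm API.
from dataclasses import dataclass

@dataclass
class Shot:
    start: float          # seconds into the reference clip
    end: float
    swap_candidate: bool  # a moment where the product/character swap could land

def extract_shots(reference_clip: str) -> list[Shot]:
    # Stand-in for shot detection on the uploaded vertical clip (step 2).
    return [Shot(0.0, 2.4, False), Shot(2.4, 5.1, True), Shot(5.1, 8.0, True)]

def ground_prompt(model: str, clip: str, shots: list[Shot], brand_ref: str) -> str:
    # Stand-in for the per-model prompt builder (step 4): the real pipeline
    # grounds this in the source video, not just the text brief.
    windows = ", ".join(f"{s.start:.1f}-{s.end:.1f}s" for s in shots)
    return f"[{model}] remix {clip} with {brand_ref}; swap at {windows}"

def build_prompts(reference_clip: str, brand_reference: str) -> dict[str, str]:
    swap_points = [s for s in extract_shots(reference_clip) if s.swap_candidate]
    return {
        model: ground_prompt(model, reference_clip, swap_points, brand_reference)
        for model in ("sora", "veo-3", "wan-2.1", "kling")
    }

print(build_prompts("tiktok_ref.mp4", "brand/lipstick.png"))
```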
What's actually under the hood (the bit that earns the "first-take" claim):
→ Skills: a product-handling library of 13+ rubrics, hand-built from agency footnotes. So a flip-top tube doesn't get an unscrew motion. A pump dispenser knows it has a pump. Lipstick rotates the right way. Generic prompt rewriters can't do this; they don't know what your product is.
→ Critic: a multimodal agent that watches the source video against the candidate prompt and patches weak beats, motion mismatches, anatomy slip-ups, and continuity gaps *before* you spend $4 on a render. This is where most "first takes that work" actually come from.
→ Judge: grades the rendered output against the source clip itself (not a textual proxy of it), and the verdict feeds a memory loop. The pipeline gets better at *your* brand, *your* product type, *your* visual grammar with every render you do (rough sketch of the loop below).
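For the mental model of how those three stages chain together, here's a rough, hypothetical sketch of the loop. None of these names are the production code; it's just the idea:

```python
# Rough, hypothetical sketch of the Skills -> Critic -> Judge loop; every name
# here is illustrative, not the production implementation.

SKILLS = {
    # hand-built product-handling rubrics, e.g.
    "flip_top_tube": "open with a thumb flip, never an unscrew motion",
    "pump_dispenser": "one downward press on the pump, product lands in palm",
    "lipstick": "twist the base while the cap is already removed",
}

def apply_skills(draft_prompt: str, product_type: str) -> str:
    # Ground the draft in the matching handling rubric, if one exists.
    rubric = SKILLS.get(product_type)
    return f"{draft_prompt}\nHandling: {rubric}" if rubric else draft_prompt

def critic(source_video: str, prompt: str) -> str:
    # Multimodal pass: check the candidate prompt against the source clip and
    # patch weak beats / motion / anatomy / continuity *before* rendering.
    return prompt + "\nFix: keep product orientation constant across cuts"

def judge(source_video: str, rendered_video: str, memory: list[str]) -> bool:
    # Grade the render against the source clip itself; the verdict feeds a
    # memory loop so later prompts inherit what worked for this brand.
    ok = rendered_video is not None  # placeholder verdict
    memory.append(f"{'pass' if ok else 'fail'}: {rendered_video}")
    return ok

memory: list[str] = []
prompt = apply_skills("remix ref.mp4 with brand lipstick", "lipstick")
prompt = critic("ref.mp4", prompt)
# the render itself would happen here via Sora / Veo / Wan / Kling
accepted = judge("ref.mp4", "render_001.mp4", memory)
```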
The end result is prompts tuned tight enough that you stop treating "render" as a draft step and start treating it as a publish step.
One-click export to wherever your stack lives:
• Native: sora.chatgpt.com · Google Flow (Veo 3) · fal/Wan · klingai.com
• Multi-model: Higgsfield · fal.ai · Google AI Studio · Replicate
Pricing: Free during open beta. No credit card. Paid tiers will roll out alongside team accounts; everyone in the beta gets ample notice.
What it explicitly is *not*:
• Not a video host - your renders live wherever you generate them
• Not a moderation layer - you hold the IP / copyright responsibility
• Not yet multiplayer - team accounts are next
Three things I'd love feedback on in the comments:
1. What's your current "renders-to-keeper" ratio? I'm trying to calibrate honestly; if you're already at 1:1 on Veo 3, I want to know what you're doing differently.
2. What product-handling skill is missing? We add a new skill module monthly.
3. Anything weird in the UX?
I'll be around all day. 🌾