CyberCut AI helps creators and teams produce viral videos fast. Auto-slice long footage into social-ready clips, generate marketing videos, add high-precision subtitles, edit by text, access an AI asset library, run virtual model try-ons, and use a full AI toolkit.
Replies
This feels more like a marketing video engine than a traditional editor — really interesting direction. Quick question: which part of the pipeline (script understanding, clip selection, or captions) delivered the biggest productivity gain for early users?
Wow, I’ve got a lot of good things to say about this product. The UI is clean, very easy to understand, and I’d easily subscribe to something like this for $20. That said, in its current state it’s not usable for me.
A couple of issues and improvements:
Hebrew captions are transcribed really well by the AI, but they don’t render on screen, and in the edit panel the text appears reversed. Hebrew should be displayed RTL, not LTR.
Another big improvement would be adding clip speed control. If I want to speed a clip up to 150%, I should be able to do that.
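For what it’s worth, detecting which direction a caption line should render is straightforward from the Unicode bidi property. A minimal Python sketch (function names are illustrative, not CyberCut’s actual API):

```python
import unicodedata

def is_rtl(text: str) -> bool:
    """Guess whether a caption line should render right-to-left.

    Counts strong-direction characters: bidi classes "R" and "AL"
    (Hebrew, Arabic) versus "L" (Latin and most other scripts).
    """
    rtl = sum(1 for ch in text if unicodedata.bidirectional(ch) in ("R", "AL"))
    ltr = sum(1 for ch in text if unicodedata.bidirectional(ch) == "L")
    return rtl > ltr

print(is_rtl("שלום"))   # True  (Hebrew "hello" should render RTL)
print(is_rtl("hello"))  # False (Latin text stays LTR)
```

A caption renderer could use a check like this to pick the base text direction per line instead of assuming LTR everywhere.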
The Long-to-Short feature is exactly what I need. I have a bunch of interview recordings sitting on my hard drive that I've been too lazy to edit into clips. Tried it out - the auto-caption accuracy is pretty decent for English. Any plans to support Chinese captions? That would be huge for me 🙏
Love the “edit by text” promise — that’s the dream for non-editors.
I've been using it and was genuinely impressed at first—it automatically sliced my long explainer video into several short, subtitle-ready clips with transitions, and the results definitely had that "viral-ready" vibe. But for your core features like "marketing video generation" and "virtual model try-on," I'm curious: how does the AI ensure that the virtual model's movements, lip-sync, and expressions precisely match the emotional tone and pacing of different marketing scripts?