We kept hearing the same thing from creators: the video was great, but they were still screenshotting a random frame to use as a thumbnail.
That felt like a gap we could close.
So we added Cover Images. Export any route as a standalone image in six styles, ready to use as a thumbnail, a post, or the opening frame of your video. No extra steps, no third-party tools.
Hey everyone! With the landscape for building voice agents shifting lately, it feels like we're moving away from heavy, manual API orchestration toward something more streamlined.
We're curious how you're currently architecting voice agents. Specifically: have you used the Model Context Protocol (MCP) to build or provide real-time data/context to your voice agents? Does it actually streamline your tool calling, or is it more trouble than it's worth?
Would love to hear what's working (and what's breaking) in your current workflow. Drop your thoughts below!