
Mosaic
Zapier for Video Editing
239 followers
Mosaic allows you to automate any video edit — from Rough Cuts to Motion Graphics and anything in between. Our node-based canvas is an interface for setting up video editing workflows that scale. Once created, these can be reused as templates or triggered programmatically via API or event-based triggers. From any step along the way, seamlessly export your timeline back into traditional tools like Premiere Pro, Final Cut, or DaVinci Resolve, or to popular Media Asset Management software.
This is the 2nd launch from Mosaic.
Mosaic
Hey Product Hunt!
I'm Adish, one of the co-founders of Mosaic (https://mosaic.so). Mosaic lets you create and run your own multimodal video editing agents in a node-based canvas. It’s different from traditional video editing tools in two ways: (1) the user interface and (2) the visual intelligence built into our agent.
While most AI video editors today are attempts at retrofitting existing timeline editors with a chat copilot, we realized that the chat UX has limitations for video: (1) the longer the video, the more time it takes to process. Users have to wait too long between chat responses. (2) Users have set workflows that they use across video projects. Especially for people who have to produce a lot of content, the chat interface is a bottleneck rather than an accelerant.
The result: a node-based canvas where you can create and run your own agentic video editing workflows. This paradigm shift redefines what it means to be a "non-linear editor" and offers a scalable content engine that allows you to define workflows that can be reused as templates or triggered programmatically via API or event-based triggers.
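To make "triggered programmatically" concrete, here's a rough sketch of what kicking off a saved workflow over an API could look like. The endpoint, payload fields, and auth header below are simplified placeholders for illustration, not our exact API (see https://docs.mosaic.so/ for the real reference):

```python
# Illustrative sketch only -- endpoint, field names, and auth scheme are
# placeholders, not the documented API (see docs.mosaic.so).
import requests

API_KEY = "YOUR_API_KEY"  # placeholder credential

resp = requests.post(
    "https://api.example.com/v1/workflows/rough-cut/runs",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "inputs": ["s3://my-bucket/raw/interview.mp4"],  # footage to edit
        "params": {"target_length_s": 90},               # per-run overrides
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. a run ID you can poll until the edit finishes
```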
Each node in the canvas represents a video editing operation and is configurable with natural language prompts, so you still have creative control. You can also branch to run edits in parallel, creating multiple variants from the same raw footage to A/B test different prompts, models, and workflows. In the canvas, you can see inline how your content evolves as the agent goes through each step.
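As a schematic of how nodes and branching compose (the canvas defines this visually; the type and field names here are invented for illustration), one rough cut fanning out into two hook variants looks like this:

```python
# Schematic workflow graph -- node types and field names are invented
# for illustration; in Mosaic you build this on the visual canvas.
workflow = {
    "nodes": {
        "ingest": {"type": "source", "input": "raw_footage.mp4"},
        "cut":    {"type": "rough_cut", "prompt": "keep the best takes, ~90s total"},
        "hook_a": {"type": "edit", "prompt": "open with the boldest claim"},
        "hook_b": {"type": "edit", "prompt": "open with the customer story"},
    },
    # Branching: both hook variants consume the same rough cut, so one
    # pass over the raw footage yields two A/B-testable outputs.
    "edges": [
        ("ingest", "cut"),
        ("cut", "hook_a"),
        ("cut", "hook_b"),
    ],
}
```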
The idea is that the canvas will run your video editing on autopilot and get you 80-90% of the way there. Then you can adjust and modify at a more granular level in an inline timeline editor. We also support exporting your timeline state as XML back out to traditional editing tools like DaVinci Resolve, Adobe Premiere Pro, and Final Cut Pro, or to popular Media Asset Management software.
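For anyone unfamiliar with timeline interchange, the export is the same idea as formats like FCPXML: a tree of resources (your media) plus a sequence of clips that reference them. A heavily simplified schematic (illustrative only, not our actual export, and not valid as-is for any NLE):

```python
# Heavily simplified schematic of timeline-interchange XML, loosely
# modeled on FCPXML -- illustrative only, not Mosaic's actual export.
import xml.etree.ElementTree as ET

root = ET.Element("fcpxml", version="1.10")
res = ET.SubElement(root, "resources")
ET.SubElement(res, "asset", id="r1", name="interview",
              src="file:///footage/interview.mp4")  # placeholder media path

lib = ET.SubElement(root, "library")
event = ET.SubElement(lib, "event", name="Mosaic Export")
project = ET.SubElement(event, "project", name="Rough Cut")
spine = ET.SubElement(ET.SubElement(project, "sequence"), "spine")
# Each timeline clip references an asset plus its placement and duration.
ET.SubElement(spine, "asset-clip", ref="r1", offset="0s", duration="90s")

ET.ElementTree(root).write("rough_cut.fcpxml", encoding="utf-8",
                           xml_declaration=True)
```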
Our use of multimodal AI to build visual understanding and intelligence is a core platform feature. This gives our system a deep understanding of video concepts, emotions, actions, spoken word, light levels, and shot types. We’re supplementing this with our own computer vision + video processing pipeline, which includes techniques like saliency analysis, audio analysis, and identifying objects of significance—all to help guide the best edit.
These are things that we as human editors internalize so deeply that we may not think twice about them, but reverse-engineering the process to build it into an AI agent has been an interesting challenge.
Use cases for editing include:
1. Removing bad takes or creating script-based cuts from videos / talking-heads
2. Repurposing longer-form videos into clips, shorts, and reels (e.g. podcasts, webinars, interviews)
3. Creating sizzle reels or montages from one or many input videos
4. Creating assembly edits and rough cuts from one or many input videos
5. A/B testing different hook and CTA permutations and variants
6. Optimizing content for various social media platforms (reframing, captions, etc.)
7. Dubbing content with voice cloning and lip syncing
8. Generating *editable* motion graphic animations or cinematic captions
We also support generative workflows such as:
1. Creating new AI Avatar / UGC content
2. Creating new cartoon / animated content
3. Adding contextual AI-generated B-Rolls to existing content
4. Modifying existing video footage (e.g. censoring content, changing lighting, applying VFX)
We're giving everyone in the Product Hunt community a 20% discount if you sign up during our launch week! You can try it today at https://edit.mosaic.so, and our API and educational docs are at https://docs.mosaic.so/. We’d love to hear your feedback!
This is exactly what the YouTube creator workflow has been missing. Right now, the creation pipeline looks like: brainstorm ideas → write script → shoot → edit → publish. The first two steps are getting automated fast (we built TubeSpark to handle ideation and script generation with AI), but editing has always been the manual bottleneck.
The node-based canvas approach makes a lot of sense — especially for creators who produce weekly content with consistent formats. Being able to save workflows as templates and trigger them via API is a game-changer for batch production.
Curious about one thing: how does Mosaic handle b-roll suggestions or cuts based on script pacing? Like if a script has a "pause for emphasis" moment, does the visual intelligence pick up on that?
Congrats on the launch, Adish!
Mosaic
@aitubespark TubeSpark looks cool. Would love to see how we can collaborate if you'd like to offer video editing as part of your SaaS offering. I can easily see TubeSpark handling the ideation / script-writing process and then handing off to Mosaic via API for the editing as well.
With regard to your question about b-roll suggestions / cuts, a lot of this is based on the prompting available within each node. That lets you keep control at each step of the way while operating in a larger automation framework.
This is a really powerful shift from “editing videos” to “designing video systems”.
Curious — how do you handle consistency across outputs?
Like when generating multiple variants (A/B tests, reels, etc.), how do you ensure brand voice, pacing, and visual identity don’t drift across different agent workflows?
Mosaic
@shrujal_mandawkar1 it's a really good question and I think something we're still actively working on as a problem. We want to build a long-term memory into these agentic workflows so they understand your style and have built-in data loops to optimize over time based on actual "real-world" performance of videos or human feedback on outputs.
For now, there are a few guardrails we offer such as being able to provide style references to anchor generations or prompt within each node of the workflow to have similar style across variants.
@adishj That makes sense — especially using style references + node-level prompts as guardrails.
The long-term memory + feedback loop approach sounds really promising.
One thing I’ve seen work well is enforcing a “shared style layer” across workflows (like global constraints for tone, pacing, and visual rules) so every variant inherits the same base identity.
Curious if you’re thinking along those lines as well?
Mosaic
@shrujal_mandawkar1 100%! This is an enterprise feature that is already available as "brand guidelines": you define them once and they then serve as a memory bank for your agent to QA-loop against.
@adishj That’s awesome — having brand guidelines as a shared memory layer makes a lot of sense for scaling content systems.
We’ve actually seen something similar when testing AI-driven video pipelines for brands — once outputs start scaling, consistency becomes the hardest thing to maintain.
Curious if you’re planning to expose those brand guideline rules via API as well? Could be really powerful for teams automating content workflows across multiple platforms.
The "Zapier for video" positioning is spot-on. We're running a custom merchandise platform with heavy content needs, and the biggest bottleneck isn't the AI itself—it's the workflow friction between ideation → production → distribution. What caught my attention: the node-based canvas approach vs. the typical chat copilot. Most AI video tools force you into a conversational loop that doesn't scale when you're producing 50+ variants a week. A visual, reusable workflow canvas is the missing piece. Quick question: How does Mosaic handle batch processing for A/B testing different creative hooks? We're constantly testing product angles across platforms, and being able to define a workflow once and run it programmatically would be a game-changer for content ops.
Mosaic
@arron_young That's exactly how we think about it too. The chat copilot seems to be the interface most AI video editors naturally gravitate toward, but it doesn't serve content workflows that need to scale.
A/B testing different creative hooks is baked into the canvas, which by nature allows branching so that you can test different scenarios. Let me know if you'd like a demo and I'd love to show you how this works in practice.
The node-based canvas is the right interface for this. Chat-based video editing works for simple one-shot tasks but falls apart the moment you have a repeatable workflow with multiple steps, branching variants and brand constraints you need to apply consistently across projects.
The A/B testing of hook and CTA permutations from the same raw footage is the use case that jumps out to me. That alone could change how content teams approach high-volume social production.
As a motion designer and Creative Director who works with brand video regularly, the "80-90% of the way there, then you refine" model is how I'd actually want to use this. The XML export back to Premiere, Final Cut, and DaVinci is also what makes this feel safe to adopt rather than a walled garden. Curious how the motion graphics node handles brand system constraints: can you feed it a style guide, or does it work purely from a prompt? Congrats on the launch!
Mosaic
@joao_seabra Love the detailed and thoughtful reply here. Great to get your thoughts on this from the perspective of a Motion Designer / Creative Director.
Check out this Mosaic, which really shows the power of A/B testing different cutdowns: https://edit.mosaic.so/mosaics/413fa388-3d3c-445e-b5eb-efa4048b8144. It takes 40 minutes of raw interview footage and cuts it down into 30s and 90s cutdowns.
With regard to the Motion Graphics, you're actually able to give it any YouTube video as a style reference and the agent will recreate visually similar graphics that are contextual to your new video. You can also provide it reference links to pull assets from or just prompt it to do research about certain topics or pull things from online to create relevant / researched graphics.
It's one of my favorite and most powerful nodes we have!
@adishj this is all SUPER interesting. I'll have to try it myself as soon as I can. But I'm very excited by this!
Mosaic
@joao_seabra please do and let me know what you think!
Congrats on the launch and the product!
The node-based canvas is smart; it seems way better than waiting for chat responses on long videos. Does this work well for podcasts → LinkedIn clips? I'm thinking it might be a popular use case?
Mosaic
@ruxandra_mazilu Thank you, Ruxandra! Podcast -> Clips is a huge use case and something you can streamline end-to-end using our Trigger and Destination tiles. (LinkedIn is one of the social platforms we support auto-posting to, with platform-optimized captions that sound professional but stay anchored to the context of your video.)
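Schematically, a Trigger → steps → Destination pipeline like that could be described as below (tile and field names are simplified for illustration; in Mosaic you wire these up on the canvas):

```python
# Schematic podcast -> LinkedIn clips pipeline -- tile and field names
# are simplified for illustration, not an exact configuration.
pipeline = {
    "trigger": {"type": "new_upload", "watch": "s3://podcasts/episodes/"},
    "steps": [
        {"type": "clip", "prompt": "pull 3 self-contained moments, 45-60s each"},
        {"type": "reframe", "aspect": "1:1"},        # square crop for the feed
        {"type": "captions", "style": "burned-in"},  # on-screen captions
    ],
    "destination": {
        "platform": "linkedin",
        "caption_prompt": "professional tone, anchored to the clip's content",
    },
}
```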
@adishj This is lovely! Can you also include color grading styles and preferences?
Mosaic
@jacklyn_i Yes, already a feature! We have a color correction tile where you can provide any image as a reference and we'll extract the style and apply it to your video; alternatively, you can import your own custom LUT file and apply it to any video :)
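For anyone curious what applying a custom .cube LUT looks like outside Mosaic, ffmpeg's lut3d filter does the same color transformation; a minimal example (file names are placeholders):

```python
# Applying a .cube LUT with ffmpeg's lut3d filter -- this shows the
# general technique, not Mosaic's internals; file names are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "input.mp4",             # source video
        "-vf", "lut3d=my_grade.cube",  # apply the 3D LUT as a video filter
        "-c:a", "copy",                # pass the audio through untouched
        "output_graded.mp4",
    ],
    check=True,
)
```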