A very cool service. I will try it for recording a video presentation of my startup.
Velo
@mykyta_semenov_ Thanks Mykyta, do try it out and share your feedback!
Velo
Feeling really excited to put it out in public!
Velo
@soni_karan Yess, we have been working on this for the last 3 months, and now it is finally out!
Velo
@soni_karan Super excited!
DronaHQ
Congratulations team Velo! Love the focus on closing the gap between raw intent and polished output.
Regarding the AI Voiceover sync, if a user decides to edit the generated script after the video is processed, how does the engine handle the re-syncing of the visual timing? Does the browser agent actually "re-record" the sequence to match the new pacing of the speech?
Velo
@gayatri_sachdeva Thanks Gayatri, this is a great question! If someone edits the script after the video is generated, we don't re-record everything. Instead, we adjust the timing of the scenes to match the updated voiceover.
We use visual cues in the video (like clicks, hovers, and page changes) as anchors, and then tweak the pacing so everything stays in sync.
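The anchor-based re-timing described above could be sketched roughly like this (a hypothetical illustration with made-up names, not Velo's actual implementation): visual cues give anchor timestamps, and when the voiceover's length changes, each scene's duration is rescaled so the anchors still land on the matching narration.

```python
# Hypothetical sketch of anchor-based re-timing: scenes are bounded by
# visual-cue anchors (clicks, hovers, page changes). When an edited
# voiceover changes the total duration, each anchor is rescaled
# proportionally so scene boundaries stay aligned with the narration.

def retime_anchors(anchors, old_total, new_total):
    """Rescale anchor timestamps (seconds) from the old video duration
    to the new voiceover duration, preserving relative pacing."""
    scale = new_total / old_total
    return [round(t * scale, 3) for t in anchors]

# Example: a 30 s video re-synced to a 45 s re-recorded voiceover.
old_anchors = [0.0, 5.0, 12.5, 30.0]   # click / hover / page-change times
new_anchors = retime_anchors(old_anchors, 30.0, 45.0)
print(new_anchors)  # [0.0, 7.5, 18.75, 45.0]
```

In practice a real engine would likely retime each segment between anchors independently (e.g. to match per-sentence speech boundaries) rather than apply one global scale, but the proportional idea is the same.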
Flexprice
Velo
@manish_choudhary19 Thanks so much! Appreciate the support!
does it actually understand what matters in the recording, or is it mostly trim + zoom? curious how it handles long demos
Velo
@mykola_kondratiuk Yes, the whole recording is treated as an intent capture. The length of your recording and the length of the output will almost always be entirely different, but we capture all the intent and then do the auto-edits. Would love to have you try it out.
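One way to picture the "intent capture" step (a simplified, hypothetical sketch, not Velo's actual code): collapse the raw event stream into high-level intent segments, dropping idle time, so the output length is decoupled from the recording length.

```python
# Hypothetical sketch of intent capture: group raw (timestamp, action)
# events into intent segments, starting a new segment whenever the user
# pauses for longer than idle_gap seconds. Idle time between segments
# never reaches the edited output, so output length is independent of
# recording length.

def capture_intents(events, idle_gap=3.0):
    """Split a raw event stream into intent segments at long pauses."""
    segments, current = [], []
    last_t = None
    for t, action in events:
        if last_t is not None and t - last_t > idle_gap:
            segments.append(current)
            current = []
        current.append(action)
        last_t = t
    if current:
        segments.append(current)
    return segments

raw = [(0.0, "open page"), (1.0, "click signup"),
       (9.0, "type email"), (10.0, "submit")]
print(capture_intents(raw))
# [['open page', 'click signup'], ['type email', 'submit']]
```

A production pipeline would presumably use richer signals than pause length (page changes, UI state, the narration itself), but segmentation of this kind is what lets a long demo compress into a short edit.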
Intent capture framing makes sense. The variable output based on length is interesting - sounds like longer recordings compound the context, not just extend it.
Velo
@mykola_kondratiuk Yuppp
ha yeah. the async angle is underrated - half the sync meeting culture is just habit.
Konfide
Nice idea
There's a lot of friction just to start using it, including the Chrome browser install. A 2-3 clicks max experience would make it much easier for a user to try.
Velo
@felipe_daguila Thanks for your feedback! We're constantly improving the product experience, and we'd love for you to keep trying everything we launch.
This is great. Would surely test it out this week. Congrats on the launch!
Velo
@karanparwani Thank you so much for your comment, excited for you to try it!
Love this concept! I literally do this in my personal life all the time. Most of the time, videos are easier for getting the point across.
Velo
@timothy_wilson1 Yup, our goal is to make screen-based videos as fast as you type.
Claap
Congratulations on the launch!!
Velo
@seantiffonnet Thank you Sean, would love for you to try it out.
Cool idea. Looking forward to trying it out properly.
Velo