Launched this week

LTX Desktop
Local open-source LTX video editor optimized for GPUs
80 followers
LTX Desktop combines a full non-linear video editor with on-device AI generation. Free, open-source, runs locally on your machine. Powered by LTX-2.3.
Hey Hunters, I am excited to hunt LTX Desktop today! 🚀
If an engine is truly powerful, it should enable real products to be built on top of it — and that’s exactly the idea behind LTX-2.3.
LTX Desktop is a fully local, open-source video editor running directly on the LTX engine, optimized for NVIDIA GPUs and compatible hardware. That means:
• No mandatory cloud dependency
• No per-generation pricing
• Your data stays on your own device
The engine is the hard part — the interface shouldn’t be. And now it’s yours, free.
LTX Desktop is a great demonstration of what the LTX engine can already power, and more importantly, what developers and creators can build with it.
Curious to hear what the community thinks and what you would build on top of this engine. 👇
How are you thinking about LTX Desktop on sub-32GB GPUs, especially now that the pitch is fully local and open source? That support path feels like the difference between a cool demo and a real daily tool for a much bigger group of creators.
Vois
@piroune_balachandran I'm not associated with the original poster. As someone working with these models day in, day out: it is just not physically possible, they need that much VRAM to work. Quantizing these models deteriorates the quality big time, making them unstable and unusable. At least for now.
What specific SDK or API capabilities does the LTX-2.3 engine provide for developers who want to build their own specialized video tools or integrate the engine into existing creative workflows?
The on-device AI generation with no per-gen pricing is compelling, but I'm curious about the minimum hardware sweet spot for smooth workflows. What's the minimum GPU spec you'd recommend for this?
Love that this runs fully local with no per-generation pricing. The open-source NLE + on-device AI combo is exactly what creators need to experiment freely without worrying about cloud costs.
Vois
There is a growing population of users on Apple devices with those specs, and your listing says "runs locally on your machine" while excluding Apple (macOS). You may want to mention that.