Jaydon Calhoun

Recruiter

About

Finds, interviews, and hires candidates for open positions within an organization.

Badges

Tastemaker
Gone streaking

Forums

Rick Ovelar

1d ago

Changing careers after 22+ years is not for the faint of heart

Hi everybody, I'm Rick.
Exactly a year ago, I closed my small advertising shop after 6 years of ups and downs.

Of course it was tempting to follow inertia and just go back to what I knew best. 

After 22+ years working in advertising I had some credentials and job offers to go back. 

But, go back to what exactly?

We just launched our Alpha and we need your honest feedback.

I built Prodshort because I learned from my previous companies that the hard part is not to Build but to Sell.
But I'm a builder, not a seller, so I decided to build something that Sells for me.
And because the trend is Founder-Led Marketing, I decided to build something that creates content on your behalf.
But there are already a lot of AI tools out there, so I decided to go the opposite way and make it as authentic as possible.
I want you to create content when you are not even aware of it.
And honestly, it worked for me. Many people tell me it's amazing, but to keep it honest, NO ONE PAID, and that's the only KPI I'm looking at.
For now, the feedback is that the landing page looks too AI-generated and doesn't reflect the quality of our product,
and that builders are socially scared of sharing their first content.
Let me know what you think https://www.producthunt.com/prod...

OpenCut-AI now supports Google Gemma 4 locally, with TurboQuant KV-cache compression engine.

Hey Hunters
We just shipped Google Gemma 4 support, paired with our TurboQuant KV-cache compression engine. That means you can now run Google's any-to-any multimodal models directly inside your editor: no API keys, no cloud, no data leaving your machine.
What's new in this drop:
Full Gemma 4 family wired into the hardware-aware model registry:
- Gemma 4 E2B (5B): fits in ~3.5 GB, runs on 8 GB laptops
- Gemma 4 E4B (8B): ~5.5 GB, the new sweet spot for the Pro tier
- Gemma 4 26B MoE (4B active): big-model quality, efficient inference
- Gemma 4 31B Dense: top-tier quality for 24 GB+ GPUs
TurboQuant KV-cache compression on every model:
- 3.8× compression at 4-bit (cosine similarity 0.9986, effectively lossless)
- 5.0× compression at 3-bit
- 7.3× compression at 2-bit for extreme memory savings
- Unlocks long-context editing sessions (32K–131K tokens) on consumer hardware
Hardware-aware auto-selection: OpenCut-AI detects your RAM/VRAM and picks the largest Gemma model that'll actually run smoothly. No guesswork.
Served through both Ollama (for simple local use) and our TurboQuant service.
Why this matters:
Local video AI has always been a RAM problem. An 8B multimodal model + a long edit timeline + Whisper + TTS used to blow past 16 GB easily. With TurboQuant compressing the KV cache, you can now run Gemma 4 E4B end-to-end on a MacBook with room to spare.
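To make the RAM claim concrete, here is back-of-the-envelope KV-cache math. The layer count, KV-head count, and head dimension below are generic assumptions for an 8B-class model (not Gemma 4's published config), and the 3.8× ratio is the 4-bit TurboQuant figure from the list above.

```python
# Rough KV-cache memory math for an 8B-class model.
# Architecture numbers are illustrative assumptions, not Gemma 4's
# actual configuration; 3.8x is the post's 4-bit compression ratio.
def kv_cache_gb(tokens, layers=32, kv_heads=8, head_dim=128,
                bytes_per_elem=2, compression=1.0):
    """Bytes = 2 (K and V) * layers * kv_heads * head_dim * bytes * tokens."""
    raw = 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens
    return raw / compression / 1024**3

full = kv_cache_gb(131_072)                         # fp16, uncompressed
quant = kv_cache_gb(131_072, compression=3.8)       # 4-bit TurboQuant
print(f"{full:.1f} GiB -> {quant:.1f} GiB")         # 16.0 GiB -> 4.2 GiB
```

Under these assumptions a full 131K-token context alone eats ~16 GiB of fp16 KV cache, which is exactly the "blow past 16 GB" failure mode; compressing it ~3.8× brings it down to ~4 GiB, leaving room for the model weights and the rest of the pipeline on a 16 GB MacBook.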
Try it, tear it apart, tell us what breaks
