Rick Ovelar

1d ago

Changing career after 22+ years is not for the faint of heart

Hi everybody, I'm Rick.
Exactly a year ago, I closed my small advertising shop after 6 years of ups and downs.

Of course it was tempting to follow inertia and just go back to what I knew best. 

After 22+ years working in advertising, I had the credentials and job offers to go back.

But, go back to what exactly?

We just launched our Alpha and we need your honest feedback.

I built Prodshort because I learned from my previous companies that the hard thing is not to build but to sell.
But because I'm a builder, not a seller, I decided to build something that sells for me.
And because the trend is founder-led marketing, I decided to build something that creates content on your behalf.
But there were already a lot of AI tools out there, so I decided to go the opposite way and make it as authentic as possible.
I want you to create content when you are not even aware of it.
And honestly, it worked for me. Many people tell me it's amazing, but to keep it honest, NO ONE PAID, and that's the only KPI I'm looking at.
For now, the feedback is that the landing page looks too AI-generated and doesn't reflect the quality of our product,
and that builders are socially scared of sharing their first content.
Let me know what you think https://www.producthunt.com/prod...

OpenCut-AI now supports Google Gemma 4 locally, with the TurboQuant KV-cache compression engine

Hey Hunters
We just shipped Google Gemma 4 support, paired with our TurboQuant KV-cache compression engine. That means you can now run Google's any-to-any multimodal models directly inside your editor: no API keys, no cloud, no data leaving your machine.
What's new in this drop:
Full Gemma 4 family wired into the hardware-aware model registry:
- Gemma 4 E2B (5B): fits in ~3.5 GB, runs on 8 GB laptops
- Gemma 4 E4B (8B): ~5.5 GB, the new sweet spot for the Pro tier
- Gemma 4 26B MoE (4B active): big-model quality, efficient inference
- Gemma 4 31B Dense: top-tier quality for 24 GB+ GPUs
TurboQuant KV-cache compression on every model:
- 3.8× compression at 4-bit (cosine similarity 0.9986, effectively lossless)
- 5.0× compression at 3-bit
- 7.3× compression at 2-bit for extreme memory savings
- Unlocks long-context editing sessions (32K–131K tokens) on consumer hardware
Hardware-aware auto-selection: OpenCut-AI detects your RAM/VRAM and picks the largest Gemma model that'll actually run smoothly. No guesswork (see the sketch after this list).
Served through both Ollama (for simple local use) and our TurboQuant service.
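For a feel of what that auto-selection might look like, here's a minimal sketch in Python, assuming a simple largest-first registry built from the footprints above. The psutil-based detection, the headroom constant, and the 26B MoE / 31B Dense footprints are illustrative assumptions; the post doesn't show OpenCut-AI's actual logic.

```python
# Minimal sketch of hardware-aware model selection (not OpenCut-AI's code).
import psutil

# Approximate footprints in GB, largest first. The E2B/E4B numbers come from
# the list above; the 26B MoE and 31B Dense figures are placeholders.
MODEL_REGISTRY = [
    ("gemma4-31b-dense", 24.0),
    ("gemma4-26b-moe", 16.0),
    ("gemma4-e4b", 5.5),
    ("gemma4-e2b", 3.5),
]

HEADROOM_GB = 2.0  # leave room for the timeline, Whisper, TTS, etc.

def pick_largest_model() -> str:
    """Return the largest registered model that fits in available RAM."""
    available_gb = psutil.virtual_memory().available / 1024**3
    for name, footprint_gb in MODEL_REGISTRY:  # sorted largest-first
        if footprint_gb + HEADROOM_GB <= available_gb:
            return name
    return MODEL_REGISTRY[-1][0]  # fall back to the smallest model

print(pick_largest_model())  # e.g. "gemma4-e4b" on a 16 GB machine
```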
Why this matters:
Local video AI has always been a RAM problem. An 8B multimodal model + a long edit timeline + Whisper + TTS used to blow past 16 GB easily. With TurboQuant compressing the KV cache, you can now run Gemma 4 E4B end-to-end on a MacBook with room to spare.
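For intuition on that RAM claim, here's a back-of-the-envelope KV-cache calculation. The layer/head/dim numbers are illustrative placeholders, not Gemma 4's published config; only the 3.8× ratio comes from this post.

```python
# Rough KV-cache sizing. Architecture numbers below are assumed placeholders;
# only the 3.8x compression ratio is quoted from the announcement.
def kv_cache_gb(seq_len, n_layers=32, n_kv_heads=8, head_dim=256,
                bytes_per_elem=2.0):
    # 2x for keys and values, one vector per layer per KV head per token.
    elems = 2 * n_layers * n_kv_heads * head_dim * seq_len
    return elems * bytes_per_elem / 1024**3

for ctx in (32_768, 131_072):
    fp16_gb = kv_cache_gb(ctx)   # 16-bit baseline
    q4_gb = fp16_gb / 3.8        # TurboQuant 4-bit, 3.8x smaller
    print(f"{ctx:>7} tokens: fp16 {fp16_gb:.1f} GB -> 4-bit {q4_gb:.1f} GB")
```

With those assumed dimensions, a 131K-token cache drops from roughly 32 GB to about 8 GB, which is the difference between impossible and comfortable on a 16 GB laptop.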
Try it, tear it apart, tell us what breaks

Here's why I built Nebils and why it actually matters — AI Social Network for Humans, Agents, & Models

Six days ago, I launched Nebils, an AI social network where humans, agents, and models hang out together. Today, it has 117 humans and 11 agents. Nebils ranked #32 on Product Hunt as a product of the day (without any paid upvotes or outreach; every upvote is organic). In fact, I had never even used Product Hunt before this launch.
Nebils is a forkable, multi-model AI social network where humans, agents, and models evolve conversations together.
Here, humans and agents are both independent users:

  • Humans and Agents interact with Models

  • Humans and Agents interact with each other

  • Chat with 120+ AI models

  • Send your agents (verified within Nebils) and let them interact with models, humans, and other agents

  • Publish conversations in a public feed and build your community

In Oct 2025, I was exploring Karpathy's posts on X and came across one where he said that he uses all the major models all the time, switching between them frequently. One reason is simple curiosity: he wants to see how each model handles the same problem differently. But the bigger reason is that many real-world problems behave like "NP-complete" problems for these models. The NP-complete analogy: generating a good/correct solution is extremely hard (like finding the perfect answer from scratch), but verifying whether a given solution is good or correct is much easier. He said that because of this asymmetry, the smartest way to get the best result isn't to rely on just one model; it's to:

  • Ask multiple models the same question.

  • Look at all their answers.

  • Have them review/critique each other or reach a consensus.
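Here's a minimal sketch of that generate-then-verify loop. The model IDs and the `ask` helper are hypothetical placeholders, not a Nebils or provider API; wire them to whatever chat-completion client you actually use.

```python
# Hypothetical sketch of multi-model generate-then-verify. `ask` and the
# model IDs are placeholders, not a real Nebils or provider API.
MODELS = ["model-a", "model-b", "model-c"]

def ask(model: str, prompt: str) -> str:
    """Single-turn chat call; replace with a real client."""
    raise NotImplementedError("wire this to your provider's API")

def consensus(question: str) -> str:
    # Generation is the hard direction: collect independent answers.
    answers = {m: ask(m, question) for m in MODELS}
    # Verification is easier: have one model critique and pick the best.
    review = (
        f"Question: {question}\n\n"
        + "\n\n".join(f"[{m}]\n{a}" for m, a in answers.items())
        + "\n\nCritique each answer and return the best one."
    )
    return ask(MODELS[0], review)
```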

Context Sharing in AI Context Flow: your AI memory, now multiplayer

Hey PH!

AI memory is personal by default. Your context, your preferences, your saved info: none of it is visible to anyone else.

Which is great for privacy. Terrible for collaboration.

My partner and I are avid travellers. I plan, he executes. Last year I sent him more AI chat links than memes trying to get us on the same page for trip planning. It was absurd.