Forums
The best AI products should make themselves less needed over time
There's something counterintuitive about building an AI product in the mental health and self-awareness space: if you're doing it right, your users should eventually need you less.
Most product teams optimize for stickiness: more sessions, more time in app, more daily returns. But at Murror, we've been wrestling with a different question: what if the goal of our product is to help someone build enough self-understanding that they don't need to open the app as often?
Top AI Creative Tools Ranked by Real Ad Data (2026 Q1)
Product Hunt is home to amazing products across every category.
Today, we wanted to look at AI creative tools from a slightly different angle: not just features, but the real-world marketing activity behind them.
Setting up monorepos for AI: submodules versus subtrees
I've been building my app for 8 months now, and I ended up with 5 repositories:
- nextjs app
- databases
- customer-facing API
- node-sdk that wraps the API
- react-sdk, for both reusing shared components and customer-facing components
So I thought it would be great to create a monorepo with submodules. But it was terrible. I realized that Turborepo does not play well with external packages, and as I tried to reuse my own customer-facing libs, the DX became awful. Shipping a feature was very time-consuming. Even when I wanted to use Codex or Cursor 3, they could not show git diffs properly, and I could not use Cursor's cloud agents reliably to ship complex features.
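For comparison with the submodule setup described above, here is a minimal sketch of the subtree alternative from the title. All repo names and paths are stand-ins for illustration, and `git subtree` is a contrib command that ships with most (but not all) git installs:

```shell
# Sketch: git subtree instead of submodules. With subtrees the library's
# files are ordinary tracked code in the monorepo, so Turborepo and AI
# agents (Codex, Cursor) see real files and real diffs, not a gitlink
# pointer to another repo.
set -e
work=$(mktemp -d)

# stand-in for the separately hosted node-sdk repo
git -c init.defaultBranch=main init -q "$work/node-sdk"
(cd "$work/node-sdk" \
  && echo "export {};" > index.ts \
  && git add index.ts \
  && git -c user.email=dev@example.com -c user.name=dev commit -qm "sdk: init")

# the monorepo
git -c init.defaultBranch=main init -q "$work/mono"
cd "$work/mono"
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "mono: init"

# subtree add merges the SDK's history into the monorepo under a prefix;
# --squash collapses its history into one commit to keep the log clean
git -c user.email=dev@example.com -c user.name=dev \
  subtree add --prefix=packages/node-sdk "$work/node-sdk" main --squash

ls packages/node-sdk   # index.ts is now a real tracked file
```

Later, `git subtree pull --prefix=packages/node-sdk <url> main --squash` pulls upstream changes back in; unlike submodules there is no `.gitmodules` pointer for tooling to trip over, at the cost of heavier pushes back upstream.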
What tools do you use to create beautiful product screenshots?
Hi everyone
I've been researching how founders and makers create beautiful screenshots for product launches, social media, and landing pages.
There are many tools available, but I'm curious what people actually use in their workflow.
DealGPT by Sanctorum
We built DealGPT when we realised chats don't make money; deals do.
DealGPT allows you to research with AI, connect with verified people, and deploy capital in a single interface.
OpenCut-AI now supports Google Gemma 4 locally, with TurboQuant KV-cache compression engine.
Hey Hunters
We just shipped Google Gemma 4 support, paired with our TurboQuant KV-cache compression engine. That means you can now run Google's any-to-any multimodal models directly inside your editor: no API keys, no cloud, no data leaving your machine.
What's new in this drop:
Full Gemma 4 family wired into the hardware-aware model registry:
- Gemma 4 E2B (5B) fits in ~3.5 GB, runs on 8 GB laptops
- Gemma 4 E4B (8B) fits in ~5.5 GB, the new sweet spot for the Pro tier
- Gemma 4 26B MoE (4B active): big-model quality, efficient inference
- Gemma 4 31B Dense: top-tier quality for 24 GB+ GPUs
TurboQuant KV-cache compression on every model:
- 3.8× compression at 4-bit (cosine similarity 0.9986, effectively lossless)
- 5.0× compression at 3-bit
- 7.3× compression at 2-bit for extreme memory savings
- Unlocks long-context editing sessions (32K–131K tokens) on consumer hardware
Hardware-aware auto-selection: OpenCut-AI detects your RAM/VRAM and picks the largest Gemma model that'll actually run smoothly. No guesswork.
Served through both Ollama (for simple local use) and our TurboQuant service.
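The auto-selection described above can be sketched roughly like this. The registry entries and memory figures below are assumptions pieced together from the post's hardware notes, not OpenCut-AI's real registry or algorithm:

```python
# Rough sketch of hardware-aware model selection. Memory figures are
# illustrative assumptions, not OpenCut-AI's actual numbers.

REGISTRY = [  # (model name, approx. GiB needed), largest first
    ("Gemma 4 31B Dense", 20.0),
    ("Gemma 4 26B MoE", 14.0),
    ("Gemma 4 E4B", 5.5),
    ("Gemma 4 E2B", 3.5),
]

def pick_model(available_gib: float, headroom_gib: float = 3.0):
    """Return the largest model that fits with headroom left over for
    the edit timeline, Whisper, TTS, etc., or None if nothing fits."""
    for name, needed_gib in REGISTRY:
        if needed_gib + headroom_gib <= available_gib:
            return name
    return None

print(pick_model(8.0))   # 8 GB laptop -> Gemma 4 E2B
print(pick_model(24.0))  # 24 GB GPU   -> Gemma 4 31B Dense
```

Walking the registry largest-first means the first entry that fits is automatically the biggest usable model, which matches the "picks the largest model that'll actually run" behavior the post describes.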
Why this matters:
Local video AI has always been a RAM problem. An 8B multimodal model + a long edit timeline + Whisper + TTS used to blow past 16 GB easily. With TurboQuant compressing the KV cache, you can now run Gemma 4 E4B end-to-end on a MacBook with room to spare.
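The RAM math above can be sanity-checked with a back-of-envelope sketch. The model dimensions here are assumptions for a generic 8B-class decoder (32 layers, 8 KV heads, head_dim 128), not OpenCut-AI's actual architecture; only the compression ratios come from the list above:

```python
# Back-of-envelope KV-cache sizing under ASSUMED model dimensions.

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_value: int) -> int:
    # two tensors (K and V) per layer, one head_dim vector per token per KV head
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

# uncompressed 16-bit cache at the post's top context length (131K tokens)
fp16 = kv_cache_bytes(32, 8, 128, 131_072, 2)
print(f"fp16 KV cache at 131K tokens: {fp16 / 2**30:.1f} GiB")  # 16.0 GiB

# apply the quoted TurboQuant ratios
for label, ratio in [("4-bit", 3.8), ("3-bit", 5.0), ("2-bit", 7.3)]:
    print(f"{label}: {fp16 / ratio / 2**30:.1f} GiB")
```

Even with made-up dimensions, the shape of the claim holds: a full-length uncompressed cache alone can consume a 16 GB machine, while ~4–7× cache compression leaves room for the weights and the rest of the pipeline.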
Try it, tear it apart, tell us what breaks