Launching today

VoiceZeroAI
AI voice feedback that catches complaints before bad reviews
41 followers
Surveys miss 90% of what people mean. VoiceZero captures anonymous voice feedback via QR code, WhatsApp or phone — no app needed. Customers share 3x more detail than any written survey. AI decodes tone, sentiment, urgency & themes from raw audio in 74 languages. Critical issues route instantly. Weekly AI digests surface hidden patterns. Built for restaurants, hotels, HR & SMBs. Zero-knowledge encryption ensures true anonymity. Free plan included, paid from $39/mo.

Hey Product Hunt! 👋 Maker here.
I've spent years watching businesses pour money into survey tools — only to get back vague star ratings and painfully low response rates. The real story isn't in what people click. It's in what they say.
That's why I built VoiceZero 🎙️
The concept is simple: scan a QR code, tap a WhatsApp link, or call a dedicated number — and just speak. No app downloads. No forms. No friction. Just your honest voice, completely anonymous.
Then our AI goes to work. Not just transcribing — actually decoding. We analyze the raw audio to detect 8 emotion dimensions (frustration, gratitude, anxiety, confusion, and more), score urgency on a 0–100 scale, and auto-tag themes. All in under 3 seconds, across 74 languages 🌍
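For the technically curious, here's a rough sketch of the kind of structured record a single voice message turns into (field names and values are illustrative only, not our actual API):

```python
from dataclasses import dataclass

# Hypothetical shape of one analyzed voice message.
# Field names and example values are made up for illustration.
@dataclass
class VoiceAnalysis:
    emotions: dict[str, float]  # 8 dimensions, each scored 0.0-1.0
    urgency: int                # 0-100 scale
    themes: list[str]           # auto-tagged topics
    language: str               # detected language (one of 74)

sample = VoiceAnalysis(
    emotions={"frustration": 0.82, "gratitude": 0.05,
              "anxiety": 0.31, "confusion": 0.12},
    urgency=87,
    themes=["wait time", "staff attitude"],
    language="es",
)
```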
Here's what gets me excited about this approach:
🗣️ Voice captures 3x more detail than any text survey — the hesitations, the sighs, the emphasis on specific words
🔒 Zero-knowledge architecture with AES-256 encryption — true anonymity, not just a checkbox
⚡ Smart escalation routes urgent issues to the right person in real time — catch the complaint before it becomes a 1-star review
📊 Weekly AI digests surface hidden patterns across hundreds of voice messages
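If you like to think in code, the escalation piece is conceptually just a threshold router over the urgency score (the cut-offs and team names below are made up for illustration, not our production config):

```python
# Toy sketch of urgency-based routing. Thresholds and destinations
# are illustrative assumptions, not VoiceZero's actual rules.
def route(urgency: int, theme: str) -> str:
    if urgency >= 80:
        # Real-time alert: catch it before it becomes a 1-star review
        return "on-call manager"
    if urgency >= 50:
        # Same-day follow-up by the relevant team
        return f"{theme} team inbox"
    # Everything else surfaces in the weekly AI digest
    return "weekly digest"
```

So a 92-urgency food-safety complaint pings the on-call manager immediately, while low-urgency notes roll up into the digest.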
We started in hospitality — restaurants losing guests to bad Yelp reviews over problems they never knew existed. Now we're seeing traction with HR teams, retail stores, product teams building roadmaps from real user voice, and even community safety reporting.
Free tier gives you 25 voice messages/month. Paid plans start at $39/mo.
Would love for you to try it and tell us what you think — you can leave us feedback as a voice message too 😄
What's the biggest blind spot you've seen in how businesses listen to their customers?
Features.Vote
we collect written feedback at features.vote constantly and the pattern is always the same. text strips out all the emotional context. someone types "this is confusing" but on a voice note they'd tell you exactly which step broke, how frustrated they were, and what they almost did instead.
the "captures via QR code, WhatsApp or phone, no app needed" approach is the right call for fixing the input side. the barrier to leaving a voice note has always been friction, and the detail gap between voice and text is real. 3x is probably conservative.
how does the tone and sentiment detection work across non-English languages? that's usually the first wall for products in this space.
@gabrielpineda Great question — and you're right, cross-language sentiment is the first wall most tools hit. Here's how we approach it 🔍
Our AI Sentiment Engine runs a dual-path analysis directly on the original audio + original language transcript — we never translate-first-then-analyze, which is where most tools lose emotional nuance.
Path 1 — Acoustic signals: Pitch contour, speech rate, pause patterns, and vocal stress are language-agnostic paralinguistic features. A frustrated tone sounds frustrated whether it's in Japanese, Spanish, or Arabic. This layer works across all 74 languages without any language-specific training.
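To make the pause-pattern idea concrete, here's a toy sketch in plain NumPy (the silence threshold is a made-up number, and this is nowhere near our production feature extractor):

```python
import numpy as np

def pause_ratio(audio: np.ndarray, sr: int, frame_ms: int = 25,
                energy_floor: float = 0.02) -> float:
    """Fraction of frames that are (near-)silent — a language-agnostic
    paralinguistic cue. Threshold and frame size are illustrative."""
    frame = int(sr * frame_ms / 1000)          # samples per frame
    n = len(audio) // frame
    frames = audio[: n * frame].reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))  # per-frame energy
    return float((rms < energy_floor).mean())  # share of quiet frames
```

A long pause ratio after a question like "how was your stay?" carries signal no matter what language the answer is in.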
Path 2 — Semantic analysis: We use multilingual language models fine-tuned for sentiment, theme extraction, and urgency detection — processing the transcript in its original language. SLM providers with strong multilingual capabilities are key here.
A multimodal fusion layer then combines both signals to produce emotion scores, urgency levels (0-100), and theme tags. The acoustic path gives us a strong baseline even for languages where semantic models are still catching up. That's what makes it reliable at 74 languages today 🌍
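Late fusion itself can be sketched the same way (the weights and confidence heuristic here are purely illustrative — the real fusion layer is a learned multimodal model, not a fixed average):

```python
# Toy sketch of late fusion: blend an acoustic emotion score with a
# semantic one. Weighting scheme is an illustrative assumption.
def fuse_urgency(acoustic_score: float, semantic_score: float,
                 semantic_confidence: float) -> int:
    """Scores in [0, 1]. Lean on acoustics when the semantic model is
    less confident (e.g. a lower-resource language)."""
    w = 0.5 * semantic_confidence              # semantic weight shrinks with confidence
    fused = w * semantic_score + (1 - w) * acoustic_score
    return round(fused * 100)                  # map to the 0-100 urgency scale
```

When the semantic path has zero confidence, the acoustic baseline carries the estimate entirely — which is exactly why the system degrades gracefully on languages where text models are still catching up.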
Your point about text stripping emotional context is exactly why we built this — voice carries so much more signal than words on a screen. Would love to hear how it compares to the written feedback patterns you see at Features.Vote!
Collecting student feedback is a nightmare — most fill out a form with one word or skip it entirely. Voice could surface things that never make it into a text box. Has anyone tried this in an education setting?
@klara_minarikova You nailed it — the one-word form responses are such a common pain in education 😅
We haven't specifically launched for education yet, but the use case fits really well. Think about it: students often won't write honest feedback about a course or professor in a text form (fear of being identified, or just lazy typing). But with a voice note via QR code — no app download, no login — they tend to open up naturally and share 3x more detail.
Our AI picks up not just what they say but how they say it — tone, frustration, enthusiasm, confusion — so you'd catch things like "the lectures are fine" said in a disappointed tone, which a text survey would completely miss 🎯
Plus everything is anonymous with zero-knowledge encryption, which is huge for getting honest student feedback without fear of grade retaliation.
We'd love to explore this further with someone in the education space. If you're interested, feel free to reach out — would be great to understand what specific feedback challenges you're seeing! 🎓