
Tapfree
Voice dictation that adapts to what’s on your screen
120 followers
Typing on phones hasn’t evolved. Tapfree fixes that. Tapfree is a voice-first Android keyboard that lets you write messages, notes, and emails by speaking naturally - without dictation errors, awkward formatting, or constant corrections. It understands context, not just words.
Tapfree
Hey Product Hunt 👋
I’m Mansehej, the maker of Tapfree.
I built Tapfree because mobile typing still feels stuck in the past. When you’re moving fast, your ideas don’t arrive as perfect sentences. They come as fragments, quick reactions, and rough thoughts you need to shape into something coherent.
Most keyboards and dictation tools don’t help much. They transcribe words literally, miss context, butcher names, and leave you fixing formatting by hand. Writing an email, a chat reply, and a document each need very different handling.
What makes Tapfree different is how it understands context. Tapfree is a voice-first Android keyboard that uses on-screen context (the text field and surrounding UI), not just the app you’re in, to produce cleaner, more relevant dictation.
It also handles the way people actually talk. You say "Could you get some coffee... sorry, tea on the way back?" and Tapfree writes "Could you get some tea on the way back?" It catches your corrections mid-sentence so you don't have to go back and fix them.
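To make the self-correction behaviour concrete, here is a minimal sketch of the idea. Tapfree presumably does this with a language model rather than pattern matching; this toy version only handles one common repair pattern ("<word>... sorry, <replacement>") and everything in it is illustrative, not Tapfree's actual code.

```python
import re

# Illustrative only: detect a single-word self-repair introduced by a
# marker like "sorry", "I mean", or "no wait", and keep the correction.
REPAIR = re.compile(
    r"\b(\w+)(?:\.\.\.|,)?\s*(?:sorry|I mean|no wait),?\s+(\w+)",
    re.IGNORECASE,
)

def clean_repair(text: str) -> str:
    # Drop the disfluent word and the repair marker; keep the correction.
    return REPAIR.sub(r"\2", text)

print(clean_repair("Could you get some coffee... sorry, tea on the way back?"))
# -> Could you get some tea on the way back?
```

A rule-based pass like this breaks down quickly (multi-word repairs, literal uses of "sorry"), which is presumably why context-aware rewriting is the harder, more valuable part of the product.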
If you give it a try, I’d love specific feedback:
Which app or scenario felt noticeably better (or worse) than usual dictation?
Any "wow" moments with the context understanding?
What would make it even more useful for you?
Thanks so much for checking it out!
Feedback from this community means the world to a solo builder! 🙏
- Mansehej
I’m typing this with Tapfree! It has made my life so convenient. Even if I jumble or stutter while dictating, Tapfree automatically ignores that and rearranges my sentences so they still sound coherent!
Tapfree
@sosboy888 This really made my day - thanks for sharing that. A lot of Tapfree is built around embracing how messy real speech is, so I’m glad that’s coming through!
Tapfree
@curiouskitty Great question! When enabled, Tapfree can extract relevant text context from the screen (via Android’s accessibility APIs) to improve things like proper nouns, formatting, and intent. For example, the app name and surrounding text help it infer whether you’re writing an email, a message, or something else, including who you’re texting - which changes how dictation is handled.
A few key clarifications:
Accessibility is opt-in: You can use Tapfree without it, just with reduced context awareness.
Minimal, purpose-bound use: Only the text context needed for that specific enrichment step is used.
Ephemeral processing: Nothing is logged, nothing is stored, and nothing is retained on servers. Context is used only during the enrichment process and then discarded.
No training or reuse: User text is not saved or used to train models.
Because context is powerful, I’m deliberately keeping this scoped, optional, and transparent, and I’m actively refining both the technical boundaries and how clearly this is communicated in-product.
If there are specific scenarios that feel sensitive or unclear, I’d genuinely appreciate hearing about them. That feedback directly shapes safer defaults.
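The opt-in, purpose-bound flow described above could be sketched roughly as follows. This is a hypothetical simplification: the `ScreenNode` type, the `accessibility_enabled` flag, and the distance threshold are all invented for illustration, and the real pipeline sits behind Android's accessibility APIs rather than plain data classes.

```python
from dataclasses import dataclass

# Hypothetical model of visible screen text near the focused input field.
@dataclass
class ScreenNode:
    text: str
    distance: int    # rough structural distance from the focused field
    editable: bool   # input fields themselves are excluded from context

def build_context(nodes, accessibility_enabled, max_distance=2):
    """Return a short-lived context list for one enrichment step.

    Opt-in: returns nothing when accessibility access is disabled.
    Minimal: keeps only non-editable text close to the focused field.
    Ephemeral: the caller discards this list after enrichment.
    """
    if not accessibility_enabled:
        return []
    return [
        n.text for n in nodes
        if not n.editable and n.distance <= max_distance and n.text.strip()
    ]

nodes = [
    ScreenNode("Re: Q3 invoice", distance=1, editable=False),
    ScreenNode("", distance=1, editable=False),          # empty, skipped
    ScreenNode("Compose message", distance=4, editable=False),  # too far
    ScreenNode("draft text", distance=0, editable=True),        # the field itself
]
print(build_context(nodes, accessibility_enabled=True))
# -> ['Re: Q3 invoice']
```

The point of the sketch is the shape of the guarantees, not the mechanics: consent gates the whole step, proximity bounds what is read, and nothing outlives the call.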
What's your security/privacy for this app? I want to use this for work, but I don't want to compromise my clients' info.
Tapfree
@ncho That's a completely fair concern, Nancy. Tapfree can use on-screen context (via Android's accessibility APIs) to improve formatting and spellings, but that is fully opt-in. Only the minimal text needed for that specific dictation moment is processed, nothing is logged or stored, and nothing is retained on servers. Text is used ephemerally during the enrichment step and then discarded, and it is not used to train models. I'm very aware that a keyboard touches sensitive information, so I've been deliberate about keeping processing scoped and transparent.
@mansehej Thanks so much for the info! I'll give this a whirl :)
Congrats on the launch! Using on-screen context instead of just raw audio feels like the right way to rethink mobile dictation. How does Tapfree decide what surrounding UI context is relevant versus noise, especially in dense apps where small misreads could change the meaning of what gets written?
Tapfree
@vik_sh Thank you Viktor! Tapfree doesn't try to interpret the entire screen semantically. It starts with lightweight structural signals like the active input field, app identity, and nearby text that's directly tied to what you're typing. From that visible context, it builds a short-lived reference set, especially for proper nouns such as names in the current thread, document titles, or recurring terms. Those tokens are then used to bias spelling and capitalization during dictation.
The key is scope control. It prioritizes proximity and structured input areas rather than scraping everything, and if confidence is low it falls back to safer, more literal transcription instead of aggressively rewriting. The context is used ephemerally during that enrichment step and then discarded.
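The "bias spelling toward visible proper nouns, fall back to literal transcription when confidence is low" behaviour can be approximated with a fuzzy match against the short-lived reference set. This is a sketch under stated assumptions, using Python's standard `difflib` similarity rather than whatever scoring Tapfree actually uses; the names and cutoff are illustrative.

```python
import difflib

def bias_spelling(word, reference, cutoff=0.8):
    """Snap a transcribed word to a near-matching visible proper noun.

    When no candidate clears the confidence cutoff, return the literal
    transcription unchanged (the safer fallback described above).
    """
    match = difflib.get_close_matches(word, reference, n=1, cutoff=cutoff)
    return match[0] if match else word

# Reference set built from on-screen context (illustrative names).
refs = {"Mansehej", "Harjyot"}
print(bias_spelling("Mansehaj", refs))  # close match, snaps to "Mansehej"
print(bias_spelling("coffee", refs))    # no near match, left as-is
```

The cutoff is the whole game here: too low and ordinary words get rewritten into names, too high and the feature never fires, which mirrors the confidence-based fallback described in the reply above.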
I have been alpha testing it. The way it spells Indian names perfectly in English feels like magic.
Tapfree
@sp_singh2 Thanks! Part of the reason this mattered so much to me is that my own name gets misspelled constantly. Instead of hard-coding fixes, I leaned on on-screen context, which ends up getting names like mine right far more often.
I have been alpha testing it. I'm really impressed with the implementation here 👍
Tapfree
@harjyot_kaur Thank you! Really glad the implementation stood out. Feedback from early testers like yourself helped shape a lot of the current behaviour.