Joshua R. Lehman

Agent Wispr - Local Whisper dictation built for coding terminals

→ 100% local inference — Wispr-optimized model runs on your hardware
→ Model selection — tiny (75 MB, CPU) to large-v3 (3 GB, CUDA)
→ Correction dictionary — teach it your vocabulary; corrections apply retroactively to history
→ Toggle and push-to-talk modes
→ Full transcript history with search and CSV export
→ Compact floating widget (95 px collapsed)
→ CUDA GPU acceleration auto-detected
→ Cross-platform — Linux, macOS, Windows

Free demo available. Full version: CA$29.99 one-time. No subscription.


Replies

Joshua R. Lehman
Builder here. Agent Wispr started as a personal tool — I wanted dictation in my coding workflow that ran locally, had no subscription, and could learn the vocabulary of whatever project I was working on.

The correction dictionary is the piece I'm most proud of. You correct a misrecognition once, and the fix applies everywhere — retroactively in your history and in all future transcriptions. For technical vocabulary (library names, variable names, domain terms), it makes a significant accuracy difference.

A note on limitations: Whisper's accuracy varies by model size and accent. The large-v3 model is substantially more accurate for technical content than the tiny model, at the cost of VRAM and latency. The correction dictionary addresses the jargon problem; accent sensitivity is an upstream Whisper limitation I can't solve at this layer.

The free demo resets daily at midnight — enough to evaluate whether it fits your workflow before purchasing.

This is the first product in a local-first AI developer toolkit I'm building — Agent Brain (shared semantic memory for AI coding agents) is coming soon. Happy to answer questions about the implementation.
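For readers curious how a correction dictionary like this can work, here is a minimal sketch — not Agent Wispr's actual implementation, just the general idea. Each entry maps a common misrecognition to the term the user actually said; applying the same substitution pass to stored transcripts as well as to new ones gives the retroactive behavior described above. All names and example entries here are hypothetical.

```python
import re

# Hypothetical correction dictionary: misrecognition -> intended term.
corrections = {
    "pie torch": "PyTorch",
    "numb pie": "NumPy",
    "cooda": "CUDA",
}

def apply_corrections(text: str, corrections: dict) -> str:
    """Replace each known misrecognition, case-insensitively, matching on
    whole-word boundaries so substrings inside other words are left alone."""
    for wrong, right in corrections.items():
        text = re.sub(rf"\b{re.escape(wrong)}\b", right, text, flags=re.IGNORECASE)
    return text

# New transcriptions pass through the dictionary as they arrive...
print(apply_corrections("install pie torch with cooda support", corrections))
# → "install PyTorch with CUDA support"

# ...and a retroactive pass rewrites the stored history the same way.
history = ["does numb pie use cooda here"]
history = [apply_corrections(t, corrections) for t in history]
print(history[0])
# → "does NumPy use CUDA here"
```

A real implementation would need to handle overlapping entries and persistence, but the core mechanism is just a deterministic substitution layer applied uniformly to past and future transcripts.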