Building Sverklo (launching tomorrow on PH), I ran a structured dogfood protocol: I used the tool on its own codebase to find real bugs before users did.
Found 4 integration-level bugs that unit tests missed:
Impact analysis silently dropped repeat call sites, the worst possible failure mode for a refactor-safety tool
Reference search returned 48 substring matches, drowning the 5 real hits
Lookup returned "No results" on valid queries instead of explaining why
Parser off-by-one skipped every function after the first in multi-function files
All fixed, regression-tested, and documented in a full unedited session log
Hey PH! I'm the maker of Sverklo.
I built this because every AI coding tool I tried either sent my code to the cloud or gave me keyword search that missed the actual symbols I needed.
So I built a local MCP server that combines BM25 lexical search + ONNX vector embeddings + PageRank over the dependency graph, all running on your machine, all stored in a single SQLite file.
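If you're curious what "combines" actually means, here's a toy sketch of blending the three signals into a single ranking. The field names, weights, and min-max normalization are made up for the example, not lifted from the shipping code:

```ts
// Illustrative sketch only: hypothetical names and weights, not Sverklo's real internals.
interface Candidate {
  symbol: string;
  bm25: number;     // lexical score from the BM25 index
  cosine: number;   // cosine similarity between ONNX embeddings of query and chunk
  pagerank: number; // importance of the symbol in the dependency graph
}

// Rescale each signal to [0, 1] so the three scores are comparable.
function normalize(values: number[]): number[] {
  const min = Math.min(...values);
  const range = Math.max(...values) - min || 1;
  return values.map(v => (v - min) / range);
}

function rank(candidates: Candidate[]): Candidate[] {
  const lex = normalize(candidates.map(c => c.bm25));
  const sem = normalize(candidates.map(c => c.cosine));
  const graph = normalize(candidates.map(c => c.pagerank));
  return candidates
    .map((c, i) => ({ c, score: 0.4 * lex[i] + 0.4 * sem[i] + 0.2 * graph[i] }))
    .sort((a, b) => b.score - a.score)
    .map(x => x.c);
}
```

The point of the PageRank term is that symbols the rest of the codebase depends on get nudged above incidental matches.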
The thing I'm most proud of: I dogfooded it on its own codebase before shipping. Ran 3 structured sessions where I used Sverklo to navigate and refactor Sverklo's own code. Found 4 real bugs in my own tool that unit tests missed:
1. Impact analysis silently dropped repeat call sites
2. Reference search returned 48 substring matches, drowning the 5 real hits
3. Lookup returned "No results" on valid queries (silent failure)
4. Parser had an off-by-one skipping functions after the first in multi-function files (rough sketch below)
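That last one is worth a sketch. This isn't the real parser, just a toy reconstruction of the shape of the mistake, assuming a line-by-line scan over top-level functions:

```ts
// Toy reconstruction of the off-by-one (hypothetical code, not the actual parser).
function listFunctions(lines: string[]): string[] {
  const names: string[] = [];
  for (let i = 0; i < lines.length; i++) {
    const m = lines[i].match(/^function\s+(\w+)/);
    if (!m) continue;
    names.push(m[1]); // record the function name
    // Skip ahead to this function's closing brace (an unindented "}").
    while (i < lines.length && lines[i] !== "}") i++;
    // The buggy version did an extra i++ here; combined with the loop's own
    // increment, the scan jumped right over the next function's header.
  }
  return names;
}
```

Single-function files behave identically either way, which is exactly why the unit tests never caught it.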
All four are fixed, regression-tested, and documented in the full unedited session log:
https://github.com/sverklo/sverklo/blob/main/DOGFOOD.md
I'd love your honest feedback, especially if you try it on a real codebase and something feels wrong. I triage issues within hours.
npm install -g sverklo && cd your-project && sverklo init