I want to be upfront about something before I share what we built.
We've been building AI Context Flow, a browser extension that injects your personal context inline into AI tools as you use them. It's been working well, but it has always had a ceiling.
Inline injection can only go so far. There are corner cases where you need deeper context than a snippet pasted into a prompt. And the extension is browser-only. What about desktop apps, CLI tools, local models, mobile? You shouldn't have to keep re-explaining yourself to every AI in every environment.