Launching today
Pensieve
Full company context for every AI agent
80 followers
AI agents are getting smarter, but they still don't understand your business. Every conversation starts with copy-pasting context and re-explaining the backstory. MCP gives agents tool access. Pensieve gives them understanding. We connect your tools and build a living picture of your organisation. People, projects, decisions, customers and how they all relate. Agents reason over the full picture and surface what nobody sees alone. Free. Bring your own Anthropic, OpenAI, or Google inference.
Pensieve
Hey Product Hunt! I'm Euan, co-founder of Pensieve.
AI agents are incredibly intelligent, but they don't understand your business. You can give an agent access to your tools via MCP, and it can pull data from Slack or Linear. But that's not the same as understanding. A new hire can access your Slack too. It doesn't mean they know what's going on.
What makes an employee effective isn't tool access. It's the context they build up over months and years. Who's working on what, why that decision was made in January, how the sales pipeline connects to the engineering roadmap, which customer relationships matter. That's what lets them make good judgment calls.
We built Pensieve to give AI agents that same understanding. We connect to the tools you already use, but we don't just make them searchable. Think about the difference between a raw codebase and a well-maintained CLAUDE.md. One is searchable data, the other is understanding. Pensieve curates and distils your company knowledge into a format AI natively understands, so an agent can operate like a fully onboarded employee, with the intelligence of a frontier model.
Once agents have that depth of context, they can work in the background and surface things no single person catches: connecting what customers say on calls with what engineers discuss in Slack and with what the usage data actually shows.
Pensieve is free. Bring your own Anthropic, OpenAI, or Google inference. No platform fee, no credit card.
What's the biggest limitation you've hit trying to get AI agents to actually work in your business?
@euan_cox13 How does Pensieve auto-distill those interconnections without manual tagging?
Cool concept! We've been planning to build a contextual knowledge base internally, so this caught my attention.
Quick question though: how do you handle role-based access control? We have context that should only surface for Project Managers (client budgets, escalation notes) and other stuff that's strictly Leadership-level. If everything gets ingested into one knowledge graph, how do you prevent the agent from leaking restricted context to someone who shouldn't see it?
Pensieve
@michal_kukul Really good question, and a genuinely hard problem to solve well. We've thought about this a lot.
For most teams, the default is a shared organisation where you connect data sources you're happy for the whole team to see. We've found that a lot of companies are fine with this. Project updates, engineering discussions, customer feedback, product decisions. Most of it is shared context anyway, and that's where the bulk of the value comes from.
If you do care more about the permissions side, each user can connect their own tools and create their own organisation in Pensieve. You can have as many organisations as you want. All the data within that organisation comes only from tools they have access to, so it's a completely isolated context layer. You still get a full organisational picture, it's just built from everything that user can personally see.
Longer term, the direction we're exploring is a shared context layer where each user connects their own tools and we tag entities with their source permissions so we can filter at the data layer and only surface information to a user if they have access to all the source materials used to derive it. You raise a good point about leakage though: even with filtering, the agent has technically seen restricted context while researching, and there's a risk of that bleeding through. That's exactly why we haven't rushed it. Getting the filtering right so it's a real data-layer boundary and not just a prompt-level one is the hard part.
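To make the source-permission idea concrete, here's a minimal sketch of what data-layer filtering could look like. This is not Pensieve's actual implementation; the entity names, source ids, and access sets are all hypothetical. The key property is that a derived entity is only surfaced when the user can see every source material behind it, enforced as a set check on the data rather than as a prompt instruction:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    """A node in the knowledge graph, tagged with the ids of the
    source materials it was derived from."""
    name: str
    sources: frozenset  # e.g. {"linear:PROJ-12", "slack:#eng"}

def visible_entities(entities, user_access):
    """Surface an entity only if the user has access to *all* of its
    sources -- a data-layer boundary, not a prompt-level one."""
    return [e for e in entities if e.sources <= user_access]

# Hypothetical graph: one entity derived from shared channels,
# one derived from a leadership-only document.
graph = [
    Entity("Q3 roadmap", frozenset({"linear:PROJ-12", "slack:#eng"})),
    Entity("Acme budget", frozenset({"notion:client-budgets"})),
]

pm_access = frozenset({"linear:PROJ-12", "slack:#eng"})
print([e.name for e in visible_entities(graph, pm_access)])  # ['Q3 roadmap']
```

The subset check (`e.sources <= user_access`) is what makes the boundary conservative: an entity derived from even one restricted source stays hidden, which matches the "all source materials" rule described above.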
I'd be curious to hear more about how you're structuring your internal context layer?
Pensieve
Hey PH! James here, co-founder of Pensieve.
We're heavy Claude users, and the thing that transformed our coding workflow wasn't better search over our codebase — it was maintaining architecture files. A good CLAUDE.md means an agent drops into a session already understanding what the codebase does, what the product goals are, and how everything fits together. It doesn't need to grep through every file to build that picture from scratch each time.
Most AI-for-business tools today are search layers. They let agents query your underlying data, which is useful, but it's the equivalent of giving a new developer grep access and calling them onboarded. What's missing is the pre-researched, maintained understanding — the business equivalent of those architecture docs.
That's what we're building. Pensieve connects to your tools and continuously maintains a living knowledge graph of your organisation. People, projects, customers, decisions, and how they all relate. So when an agent starts a conversation, the context is already there — not searched for, already understood.
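To illustrate the shape of that context layer (a sketch only, with made-up entities and relation names, not Pensieve's real data model), the graph is entities of different kinds linked by typed relations, so an agent's starting context for any entity is already materialised rather than searched for:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal graph: entities (people, projects, customers, decisions)
    connected by typed relations."""
    def __init__(self):
        # entity -> list of (relation, related entity)
        self.edges = defaultdict(list)

    def relate(self, a, relation, b):
        self.edges[a].append((relation, b))

    def context_for(self, entity):
        """Everything directly related to an entity -- the pre-built
        context an agent starts a conversation with."""
        return list(self.edges[entity])

g = KnowledgeGraph()
g.relate("Dana", "works_on", "Auth revamp")
g.relate("Auth revamp", "decided_in", "January planning")
g.relate("Acme Corp", "asked_about", "Auth revamp")

print(g.context_for("Dana"))  # [('works_on', 'Auth revamp')]
```

A real version would also need reverse lookups and multi-hop traversal, but even this toy shows the difference from search: the relationships are stored as first-class facts, not rediscovered per query.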
Would love to hear what integrations you'd want first — that's what we're prioritising next.