Noureddin Bakir

Hipocampus - AI operators that own team workflows

Hipocampus is a workflow-ownership layer for teams. It deploys governed operators that automate and own team workflows across fragmented systems, with persistent workflow state, approvals, delegation, escalation, and shared context so work keeps moving across tools and time.

Noureddin Bakir
Hey everyone 👋 Noureddin here, co-founder of Hipocampus. We've spent the last couple of years building AI systems, and one thing kept coming up over and over again: AI tools are everywhere, but actually getting real work done across them is still messy. You end up stitching workflows together by hand, jumping between tools, and babysitting everything to make sure it moves forward.

So we built Hipocampus, AI operators that own team workflows. The idea is simple: give every team a swarm of operators. Instead of just generating outputs, operators actually own workflows. They carry context across tools and time, take actions, keep things moving, and only pull you in when it actually matters. Our goal is to become the single point of contact for how work gets done across your stack.

We're still working closely with teams to shape this, so if you're dealing with messy, multi-step workflows, I'd genuinely love to hear what's painful for you. Try it out, break it, tell us what's missing. I'll be here all day answering everything 🙌
Saul Fleischman

Congrats on the launch! This is a compelling take on workflow automation. I'm curious about how Hipocampus handles context handoffs when operators need to escalate work back to humans—what does that experience look like for team members, and how do you prevent context loss in that transition?

Noureddin Bakir

@osakasaul Our operators aren't driven by a thin runtime, essentially a series of ephemeral chats. Instead, they're persistent in memory, state, and context, which lets them keep working for weeks at a time.

Our architecture supports persistence at scale, unlike most of our competitors in this space.

Mahmoud Shehata

@osakasaul thank you. Great question. To add, when an operator needs a human, that handoff becomes a task with the context attached, not just a chat relay. The teammate sees a review item in their queue or inbox with the task, notes, comments, latest summary, and outputs. We avoid context loss by storing the work state outside the model: task history, ownership changes, comments, session summaries, notifications, and artifacts stay with the task, so work can move between operator and human without anyone having to reconstruct what happened from memory.
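To make the "task with the context attached" idea concrete, here's a minimal sketch of what such a record could look like. All names and fields here are illustrative assumptions for the sake of the example, not Hipocampus's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HandoffTask:
    """Illustrative: a handoff stored as a task, not a chat relay.
    The work state lives outside the model and travels with the task."""
    task_id: str
    owner: str                                            # operator or human
    summary: str                                          # latest session summary
    notes: list[str] = field(default_factory=list)
    comments: list[str] = field(default_factory=list)
    artifacts: list[str] = field(default_factory=list)    # ids/links to outputs
    history: list[tuple[str, str]] = field(default_factory=list)  # (timestamp, event)

    def escalate(self, to_human: str, reason: str) -> None:
        """Reassign to a human; history and context stay attached."""
        ts = datetime.now(timezone.utc).isoformat()
        self.history.append((ts, f"ownership: {self.owner} -> {to_human} ({reason})"))
        self.owner = to_human

# The operator hits a decision point and escalates:
task = HandoffTask(
    task_id="t-42",
    owner="operator-billing",
    summary="Drafted Q3 reconciliation; 2 mismatches flagged for review",
)
task.escalate(to_human="alice", reason="needs approval on write-off")
# alice now sees the task in her queue with summary, notes, and history intact
```

Because ownership changes are just appended events on the same record, neither side has to reconstruct what happened from chat scrollback.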

Hipocampus is built to learn the job as you do it. You do something once or twice with the AI helping, it starts to recognize the pattern, and then it can take the first pass on similar work on its own. When you come back, the draft or prep work is already there, and you can review it, tweak it, approve it, or send it back.

Lakshay Gupta

Seriously cool! This feels like Zapier + Temporal + AI agents combined. Btw what’s been the hardest part in making that actually reliable in production?

Noureddin Bakir

@lak7 The hardest part was figuring out how to approach building Hipocampus. Since we both have backgrounds in distributed systems, we realized this was just another distributed-systems problem. That's why we're ahead of the curve on production reliability and observability, and why we have the strongest harness around our intelligence layer.

Jimmy Nguyen

@noureddin_bakir1 — given the distributed systems background, how do you handle consistency when an operator's state needs to survive model version changes or provider swaps? Most persistence layers I've seen for long-running agents break down when the underlying model behavior shifts mid-workflow.

Noureddin Bakir

@jimmypk Easy: you don't build your system to be heavily reliant on LLMs. That's a mistake a lot of teams building agentic systems make.

One of the things that sets us apart is that we understood early on that the harness couldn't depend on the model itself. That way we avoid model quirks and stay model-agnostic.

Being tied to one particular LLM is where teams make the most mistakes.
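As a rough illustration of what a model-agnostic harness means in practice: the workflow logic (validation, retries, provider fallback) lives in the harness, not in any one model. This is a hand-wavy sketch under my own assumptions, with stand-in providers rather than real APIs:

```python
def harness(step_input, providers, validate, max_attempts=2):
    """Run one workflow step against interchangeable model providers.

    `providers` is an ordered list of callables with the same signature;
    `validate` checks the output against the step's contract, so a provider
    swap or model-version change only has to keep satisfying the contract.
    """
    for call_model in providers:
        for _ in range(max_attempts):
            output = call_model(step_input)
            if validate(output):       # contract check, independent of the model
                return output
    raise RuntimeError("no provider produced a valid output")

# Stand-in providers: one that fails its contract, one that satisfies it.
flaky = lambda s: ""           # invalid (empty) result, fails validation
reliable = lambda s: s.upper() # valid result

result = harness("summarize ticket #7", [flaky, reliable], validate=bool)
```

The point of the pattern is that swapping `reliable` for a different provider changes nothing in the workflow itself, as long as the contract still passes.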

Sounak Bhattacharya

The "approvals and delegation" piece — how does that actually work in practice? Like if an operator hits a decision point that needs human sign-off, does it pause and ping someone in Slack, open a task in a PM tool, or does it have its own notification layer? The handoff mechanism seems like where this either clicks or falls apart.

Noureddin Bakir

@sounak_bhattacharya The handoff mechanism definitely clicks. The operator will ping Slack or Discord, whatever platform the team selects, on top of our own notification layer. The operators are smart enough to figure out what works for the team and which channels are appropriate, because they start by mirroring the team's workflows exactly.

Martí Carmona Serrat

Persistent state + model-agnostic harness is the right bet — agents that live as ephemeral chats always fall off a cliff past a few hours. Handoff-as-a-task (not a chat relay) is the correct UX primitive. Curious how Hipocampus handles policy drift when the team's approval criteria evolve after the operator has already learned an older pattern.