Atomic is a self-hosted, AI-native knowledge base. Write notes, get a semantic graph. Ask questions, get cited answers from your own content. Auto-generates wiki articles as your knowledge grows. MCP server built-in for Claude/Cursor. Local-first. Open source. Everything you know, connected.
Hey PH! I'm Ken, the maker of Atomic 👋
I built this because every note-taking tool I tried either buried my ideas in folders or gave me AI features that felt bolted on. I wanted something where the AI was baked into the structure itself, not a chatbot sitting on top of my notes.
The feature I'm most proud of is wiki synthesis: Atomic reads all your atoms under a tag and generates a cited wiki article. Every claim links back to the source note. It's like having your own research assistant.
A few fun facts about Atomic:
- It's built in Rust + SQLite — the whole thing, including vector embeddings, lives in a single file
- There's a built-in MCP server so Claude, Cursor, and other AI tools can query and write to your KB directly
- It runs fully local with Ollama or any other OpenAI-compatible provider (LM Studio, LiteLLM, etc.). No data leaves your machine
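To make the MCP point concrete: under the hood, MCP clients like Claude and Cursor talk to the server with JSON-RPC 2.0 messages. Here's a sketch of what a tool call could look like — the tool name `search_atoms` and its arguments are my assumptions for illustration, not Atomic's documented tool list:

```python
import json

# Sketch of the JSON-RPC 2.0 message an MCP client would send to call a
# tool on the built-in MCP server. "search_atoms" is a hypothetical tool
# name used for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_atoms",  # hypothetical tool
        "arguments": {"query": "wiki synthesis", "limit": 5},
    },
}
print(json.dumps(request))
```

The nice part of the MCP approach is that any client speaking the protocol gets the same read/write access, with no per-tool integration work.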
Still early days but the core loop is solid. Happy to answer anything — architecture questions, roadmap, weird use cases, all fair game. 🙏
Coolest launch of the day fs! Btw, do you see Atomic as a note-taking tool, a personal knowledge OS, or something closer to a local-first AI assistant? Also, are you using this yourself? If so, is it making an impact on your daily tasks?
Thanks, and great question — it's closer to a knowledge OS the way I use it.
I have RSS feeds (such as the Hacker News front page) piped directly into Atomic, so interesting articles get ingested and tagged automatically as they come in. And via MCP, my AI agents can read and write to the KB mid-task, so research they do during a session gets persisted and stays searchable later.
The mental model I've landed on: it's the long-term memory layer for both me and my agents. Notes, feeds, and agent outputs all flow in; semantic search and wiki synthesis make it queryable.
Still early but that loop — ingest → tag → synthesize → query — is where it gets powerful.
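The loop above can be sketched in a few lines. To be clear, this is a conceptual toy, not Atomic's code — the class, the keyword tagger standing in for LLM auto-tagging, and the citation format are all illustrative assumptions:

```python
# Conceptual sketch of the ingest -> tag -> synthesize -> query loop.
# Not Atomic's implementation; names and logic are illustrative.
class MiniKB:
    def __init__(self):
        self.atoms = []  # each atom: {"id", "text", "tags"}

    def ingest(self, text, tags=None):
        atom = {"id": len(self.atoms), "text": text, "tags": set(tags or [])}
        # naive keyword tagging stands in for the LLM auto-tagger
        for kw in ("rust", "sqlite", "mcp"):
            if kw in text.lower():
                atom["tags"].add(kw)
        self.atoms.append(atom)
        return atom["id"]

    def query(self, tag):
        return [a for a in self.atoms if tag in a["tags"]]

    def synthesize(self, tag):
        # "wiki synthesis" stand-in: stitch matching atoms, cite each source
        hits = self.query(tag)
        return "\n".join(f"- {a['text']} [atom:{a['id']}]" for a in hits)

kb = MiniKB()
kb.ingest("Atomic stores embeddings in SQLite.")
kb.ingest("The MCP server lets agents write notes.")
print(kb.synthesize("sqlite"))
```

The real pipeline replaces the keyword matcher with semantic tagging and the string-join with LLM-generated prose, but the shape of the loop is the same: everything flows in, gets tagged, and becomes queryable and synthesizable.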
Nice one @kenforthewin92, couldn't relate to the problem more. Though I'd love my Slack knowledge to be ingested as well.
The wiki synthesis feature is the killer differentiator here imo. Every note tool I've used just gives you a folder of disconnected stuff. Having AI that actually reads across your notes and generates cited articles from them is something I haven't seen before.
Built in Rust + SQLite in a single file is also really smart for local-first. No Docker, no Postgres, just works. How big can the graph get before performance starts degrading? Asking because my notes tend to spiral into thousands of entries pretty fast lol.
Totally agree on wiki synthesis, that's the feature that made everything click for me too. The "folder of disconnected stuff" problem is exactly what I was trying to solve.
On performance: the graph uses Sigma.js under the hood, which renders via WebGL — so it's GPU-accelerated and can handle 100k+ nodes without breaking a sweat. I regularly stress test by ingesting thousands of Wikipedia articles in batch, and the graph stays snappy.
The SQLite + Rust combo does a lot of heavy lifting on the backend side too — vector search, full-text search, and graph queries all running against a single file with no external dependencies. For your use case (spiraling thousands of notes) it should be very much in its comfort zone.
Basically: throw everything at it. That's kind of the point. 😄
The local-first + no data leaves your machine angle is underrated. We're building an AI that reads Google Drive files to organize them, and "who sees my content?" is the first question every user asks. Having the model run locally removes that friction entirely. Curious, does Atomic work well with existing large note collections, or is it better started fresh?
For sure - privacy anxiety is real friction, and local-first resonates, especially with technically-minded folks.
To your question: Atomic is built for existing collections as well as starting a KB from scratch. The ingestion pipeline is batch-optimized, so dropping in a large library is fast even at scale. A few ways to get existing notes in:
- Folder of markdown files - point it at your vault and it imports in bulk
- RSS feeds - ongoing ingestion, auto-tagged as items come in
- REST API - if you have a custom pipeline or want to push from other tools, it's fully pluggable
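To show what pushing from a custom pipeline might look like: the sketch below POSTs a note as JSON. Atomic's actual routes and field names aren't spelled out in this thread, so the `/api/atoms` endpoint and the payload shape are assumptions — a local stub server stands in for a running instance just so the example is self-contained:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Stub server standing in for a running Atomic instance (for demo purposes).
class StubHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        self.server.received = json.loads(body)  # stash payload for inspection
        self.send_response(201)
        self.end_headers()

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Hypothetical ingestion call: route and field names are assumptions.
note = {"title": "From my pipeline", "content": "Pushed via REST", "tags": ["demo"]}
req = Request(
    f"http://127.0.0.1:{server.server_port}/api/atoms",  # hypothetical route
    data=json.dumps(note).encode(),
    headers={"Content-Type": "application/json"},
)
resp = urlopen(req)
print(resp.status)
server.shutdown()
```

Swap the stub's address for your instance and the same pattern works from any script or tool that can make an HTTP request.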