Erez Shahaf

Lore - Cursor for your memory. 100% private, open-source & free.

Lore is a lightweight "second brain" that lives in your system tray. Summon it with a keystroke to capture ideas, notes, or tasks instantly.

Why Lore?
🛡️ 100% Private: Your data never leaves your machine. No API keys, no tracking.
🧠 Local AI: Powered by Ollama + LanceDB for secure, offline-first RAG.
⚡ Instant Recall: Ask questions in plain language and get answers from your own history.

Own your memory. 100% local. Zero cloud.

Erez Shahaf
Maker
📌
Hey Product Hunt! 👋

I’m Erez, the creator of Lore. I built Lore because I was tired of the "Privacy Tax." In 2026, if you want an AI that actually understands your thoughts, you're usually forced to upload your entire life to a cloud provider. I didn't want my private ideas, snippets, and daily journals sitting on someone else's server.

I wanted a "Cursor for my memory":
⚡ Speed: Summon it with one keystroke (Cmd+Shift+Space).
🛡️ Privacy: 100% local. No API keys, no tracking, no cloud.
🧠 Intelligence: It uses Ollama and LanceDB to actually answer your questions using your own history.

Whether you're a researcher, a dev, or just someone who thinks a lot, Lore is designed to stay out of your way until you need to remember something perfectly.

Lore is 100% Free and Open Source. I believe the tools we use to think should be transparent and owned by the user.

I'd love your feedback on:
- What "Source" should I support next? (Local Markdown? Browser history? WhatsApp?)
- How does the "Local LLM" setup feel on your machine?

I’ll be here all day to answer questions! Let's take our memory back from the cloud. 🛡️

— Erez
Faisal Saeed

Love the idea of a private, local “second brain.” Feels like the direction personal AI should be heading.

Simple, fast, and no tracking is a big win.

Curious, how does Lore handle context over time as data grows?

Great work 👏

Erez Shahaf

@faisal_saeed001 Lore uses a vector database: each note is converted into an embedding vector and stored, and when you search in plain language, your query is embedded the same way and Lore retrieves the notes with the most similar vectors.

Thanks!
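For the curious, the whole loop fits in a few lines. This is only a toy sketch: `embed` below is a bag-of-words stand-in over a tiny vocabulary so the demo is deterministic, where Lore actually uses a real embedding model served by Ollama and stores the vectors in LanceDB.

```python
import numpy as np

# Tiny fixed vocabulary for the demo; a real embedding model produces
# dense semantic vectors rather than word counts.
VOCAB = ["milk", "eggs", "tray", "icon", "code", "blog", "post", "groceries"]

def embed(text: str) -> np.ndarray:
    """Turn text into a unit-length vector (toy bag-of-words stand-in)."""
    words = text.lower().split()
    vec = np.array([float(words.count(w)) for w in VOCAB])
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# In Lore the stored vectors live in LanceDB; a plain list works for a demo.
notes = ["buy milk and eggs", "refactor the tray icon code", "draft the blog post"]
index = [(note, embed(note)) for note in notes]

def search(query: str) -> str:
    """Embed the query and return the note with the highest cosine
    similarity (a dot product, since every vector is unit-normalized)."""
    q = embed(query)
    return max(index, key=lambda item: float(np.dot(q, item[1])))[0]

print(search("groceries milk"))  # the grocery note scores highest
```

The nice part is that nothing here needs the network: embedding, storage, and search all happen on your machine.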

Sai Tharun Kakirala

Love seeing more open-source approaches to personal knowledge management! The privacy angle is huge right now, especially with AI reading our notes. How are you handling syncing across devices while keeping everything 100% private?

Erez Shahaf

@sai_tharun_kakirala In this version there isn’t any syncing between devices, but that’s an interesting point.

I think the only ways to do that would be either P2P, which wouldn’t work well because not all devices are always online, or encrypting the data and giving only the user the key.

Honestly, I’m not sure what I would choose, but I’ll make sure to preserve the user’s privacy. Currently the software makes zero requests to the internet, apart from downloading the models when you ask it to.

Amaan Warsi

@sai_tharun_kakirala  @erez_shahaf Most of your audience is likely from a tech background, so what about giving them full control over syncing across devices?

You could allow users to provide an API endpoint from their own private VPS for data syncing. The app would encrypt the data, sign it, and send it to their endpoint. On their side, a simple PHP script could handle upload/download requests and verify signatures to ensure secure transmission.

You could even encrypt the stored API endpoint associated with the account, so users retain complete control over their data and privacy.

Tom Riedel

This makes a lot of sense. Most of our computers already function as a personal knowledge base, even if it's completely disorganized. It will take some time for trust to warm up to tools like this, but they seem inevitable. Even if the AI runs locally, having a way to easily wipe the AI's memory (clear cache, like a web browser) should provide peace of mind.

Martí Carmona Serrat

How fast is the local search when your note collection gets really large, like thousands of entries? Congrats on the launch!

Claire Do

Love that this is free and open-source with zero cloud dependency; it feels like the kind of tool that should exist but rarely does. Curious how it handles longer notes vs. quick captures: does the RAG work equally well on both?