Amar Dahmani

LISA Core - LLM memory using semantic compression for AI conversations

LISA (Language Intelligence Semantic Anchoring) is a privacy-first browser extension that captures, compresses, and preserves your AI conversations across all major platforms. Using advanced semantic compression technology, LISA achieves 80:1 to 100:1 compression ratios while maintaining complete meaning and context. Your conversations never leave your browser. Zero cloud processing. 100% local.

Amar Dahmani
🔥 THE PROBLEM WE SOLVE

Your AI conversations are trapped.

• ChatGPT won't let you continue in Claude
• Gemini doesn't know what you discussed in Grok
• Every platform locks your data in its silo
• No portability. No backup. No ownership.

You created that knowledge. You should own it.

✨ THE LISA SOLUTION

LISA extracts your conversations into portable JSON files that work across ANY AI platform. Upload a LISA JSON to Claude, ChatGPT, Gemini, or any AI assistant and instantly restore your full context.

One conversation. Every platform. Forever yours.
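As a rough sketch of what a portable conversation file could look like, here is a minimal TypeScript example. Note that `PortableConversation` and `exportConversation` are illustrative names and an assumed shape, not LISA's actual schema:

```typescript
// Illustrative sketch only: PortableConversation and exportConversation
// are assumed names/shapes, not LISA's actual export format.
interface Message {
  role: "user" | "assistant";
  content: string;
}

interface PortableConversation {
  platform: string;   // e.g. "chatgpt", "claude"
  capturedAt: string; // ISO timestamp
  messages: Message[];
}

function exportConversation(platform: string, messages: Message[]): string {
  const doc: PortableConversation = {
    platform,
    capturedAt: new Date().toISOString(),
    messages,
  };
  // A plain JSON string can be saved locally and re-uploaded to any AI chat.
  return JSON.stringify(doc, null, 2);
}

const json = exportConversation("chatgpt", [
  { role: "user", content: "Explain semantic compression." },
  { role: "assistant", content: "It shrinks text while preserving meaning." },
]);
```

The point of a plain-JSON shape like this is that no platform-specific format is involved, so the same file can be pasted or uploaded anywhere.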
Artem Kosilov

@amar_dahmani1 The portability angle is clear, but the harder problem over time is knowing what not to carry forward. Once someone starts saving a lot of conversations across different models, how do you stop the memory library from becoming technically portable but still too noisy to be useful?

Amar Dahmani

@artem_kosilov The library feature is exactly that: a library where you store your compressed conversations. The extension does the compression locally, then uploads the result to the app library. That's the portability: no LLM has access to it, and you choose which conversations to download and upload. What I usually use it for is model collision or switching. Say I start with Sonnet 4.6, don't like the direction the conversation is going, or I'm unhappy with the debugging. I save the conversation, switch to Opus, and either query the saved conversation or just ask Opus to fix the bug Sonnet failed to fix. So that's how the portability works in practice. Basically, the conversations are in machine language; no noise, just signal :)
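The save-and-switch workflow described here can be sketched as follows. `restorePrompt` is an assumed helper name for illustration, not part of LISA's API:

```typescript
// Illustrative sketch of the save-and-switch workflow described above;
// restorePrompt is an assumed helper, not part of LISA's API.
interface SavedMessage {
  role: string;
  content: string;
}

function restorePrompt(savedJson: string, followUp: string): string {
  const conv = JSON.parse(savedJson) as { messages: SavedMessage[] };
  const transcript = conv.messages
    .map((m) => `${m.role}: ${m.content}`)
    .join("\n");
  // Prepend the saved transcript so the new model starts with full context.
  return `Previous conversation:\n${transcript}\n\nNew request: ${followUp}`;
}

// Conversation saved from model A, restored as context for model B.
const saved = JSON.stringify({
  messages: [
    { role: "user", content: "Why does this test fail?" },
    { role: "assistant", content: "The mock is never reset." },
  ],
});
const prompt = restorePrompt(saved, "Please fix the bug the last model missed.");
```

The new model never needs access to the original platform; everything it needs travels inside the saved file.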

Sayanta Ghosh

Pretty cool @amar_dahmani1, a real problem, and I'll check out the capabilities. Does it work with Claude Code?

Amar Dahmani

@sayanta_ghosh Yes, it does. I built it when I wanted Opus to teach me how to use Claude Code, and it worked, so I kept it :) Thanks for your interest :)

Prateek kumar

Wow, this can save a lot of effort, and confronting privacy up front is a good move. One question: is the conversation compression model-driven or user-specified?

Amar Dahmani

@prateek_kumar28 It isn't compression per se; it's closer to translation. It translates and disambiguates human language locally. The disambiguation process is pretty standard for LLMs; the extension and app save that effort for your model, which saves tokens, compute time, and much more downstream. You should give it a spin; I'd be happy to get feedback.
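To make the token-saving claim concrete, here is the back-of-envelope arithmetic behind the 80:1 to 100:1 ratios stated in the listing. `compressedTokens` is an illustrative helper, not a LISA measurement:

```typescript
// Back-of-envelope arithmetic for the claimed 80:1 to 100:1 ratios;
// compressedTokens is an illustrative helper, not a LISA measurement.
function compressedTokens(originalTokens: number, ratio: number): number {
  return Math.ceil(originalTokens / ratio);
}

// At 80:1, a 100,000-token conversation would shrink to about 1,250 tokens;
// at 100:1, to about 1,000.
const at80 = compressedTokens(100_000, 80);
const at100 = compressedTokens(100_000, 100);
```

If those ratios hold, a conversation that would overflow most context windows fits comfortably as a restored-context prefix.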