JavaScript and Python SDKs for Mnexium
The Mnexium SDKs give you a complete memory infrastructure as a service. Install the package, pass your LLM provider key, and your AI remembers.

Node: https://www.npmjs.com/package/@mnexium/sdk
Python: https://pypi.org/project/mnexium/
Docs: https://mnexium.com/docs

Both SDKs are MIT licensed and available now. No sign-up required — omit the API key and a trial key is...
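The "pass your provider key, omit the API key" flow can be sketched as a plain request-shaping function. This is illustrative only: `buildRequest`, the header names, and the option layout are assumptions pieced together from the snippets elsewhere in this feed, not the SDK's documented API.

```javascript
// Illustrative sketch of shaping a Mnexium-backed chat request.
// buildRequest and the header names are assumptions for illustration,
// not the SDK's documented surface.
function buildRequest({ providerKey, mnexiumKey, subjectId, messages }) {
  return {
    model: "gpt-4o",
    messages,
    headers: {
      "x-provider-key": providerKey,
      // Omit the Mnexium key entirely and (per the free-tier post)
      // a trial key is provisioned server-side.
      ...(mnexiumKey ? { "x-mnexium-key": mnexiumKey } : {}),
    },
    mnx: { subject_id: subjectId },
  };
}

const req = buildRequest({
  providerKey: "sk-your-openai-key",
  subjectId: "user_ID_123",
  messages: [{ role: "user", content: "My name is Sam." }],
});
```

The point of the shape: memory is keyed by `subject_id`, and the only secret the caller must supply is their own provider key.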
🆓 Mnexium Free Tier — Easy API, No Signup
Quick update — we just launched a free tier that requires zero signup. You can now use Mnexium without creating an account. Just make an API call with your own OpenAI or Anthropic key, and we auto-provision a trial key for you on the spot.

```javascript
await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "My name is Sam." }],
  mnx: { subject_id: "user_ID_123" }
});
```

No...
Video demo: How Mnexium adds persistent memory & context to AI applications
This short demo shows how Mnexium works as a memory and context layer for AI apps. Mnexium sits between your app and the LLM to provide:

🧠 Persistent memory across sessions
📜 Inspectable & resumable chat history
🧩 Structured user profiles and long-term context
🔁 Automatic recall and injection — no prompt juggling

The goal is simple: AI apps that remember users, stay consistent, and feel...
Free Drop-in “Chat with X” for your app — NPM package for your site
Hi all - I've built @Mnexium AI and I thought the fastest way to get folks to try it was to build a chat plug-in for websites. I'm providing free keys (however much usage that may be) to anyone who is willing to try it. The plug-in can be found on NPM: https://www.npmjs.com/package/@mnexium/chat

```
npm install @mnexium/chat
```

```jsx
<MnexiumChat endpoint="/api/mnx" />
```

Out of the box you get: A professional...
🚀 @mnexium/chat — Drop-In AI Chat for Any Web App
We just shipped @mnexium/chat: a single npm package that adds a polished, production-ready AI chat widget to any website. React, Next.js, Express, or plain HTML — it just works, and most importantly it remembers.

The Problem

Adding AI chat to a product usually means:

Designing and building a custom UI
Handling streaming responses
Managing conversation state
Securing API keys

... and it still...
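The "Securing API keys" point above is usually solved by proxying the widget through your own endpoint so the secret never reaches the browser. A minimal sketch of that idea, assuming a hypothetical upstream URL and header names (this is not @mnexium/chat's actual implementation):

```javascript
// Conceptual sketch: the browser widget posts to your own /api/mnx route,
// and only the server attaches the secret key. The upstream URL and
// header names here are hypothetical, for illustration only.
function forwardToMnexium(clientBody, serverEnv) {
  if ("apiKey" in clientBody) {
    // A well-behaved client never ships a secret in its payload.
    throw new Error("client must never send a key");
  }
  return {
    url: "https://api.mnexium.example/chat", // hypothetical endpoint
    headers: { Authorization: `Bearer ${serverEnv.MNEXIUM_API_KEY}` },
    body: clientBody,
  };
}

const fwd = forwardToMnexium(
  { messages: [{ role: "user", content: "hi" }] },
  { MNEXIUM_API_KEY: "mnx_secret" }
);
```

The design choice is the standard one: the widget only ever knows the relative `endpoint` prop, and the key lives in server-side environment variables.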
We Built a Live AI Memory Demo — Try It Now
See AI Memory in Action

We just shipped something we're really excited about: a fully interactive demo where you can experience AI with persistent memory — no signup required.

👉 mnexium.com/chat

What You'll See

The demo is a real-time chat interface that shows exactly how Mnexium works:

Why We Built This

Docs and code samples only go so far. We wanted developers to feel what it's like when AI...
Memory Decay: AI Memory That Forgets Like Humans Do
Most AI memory systems treat all memories equally. Something mentioned two years ago carries the same weight as yesterday's conversation. That's not how human memory works — and it creates awkward, irrelevant AI responses. Today we launched Memory Decay, a feature that makes AI memory behave more like human memory. Frequently used memories stay strong. Unused ones naturally fade. The result is...
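One plausible way to model "frequently used memories stay strong, unused ones fade" is an exponential recency decay boosted by how often a memory has been recalled. The formula, half-life, and boost term below are illustrative assumptions, not Mnexium's actual scoring:

```javascript
// One plausible decay model: strength falls off exponentially with days
// since last use, and repeated recalls reinforce it. The 30-day half-life
// and log2 boost are illustrative, not Mnexium's actual parameters.
function memoryStrength(daysSinceLastUse, useCount, halfLifeDays = 30) {
  const recency = Math.pow(0.5, daysSinceLastUse / halfLifeDays);
  const reinforcement = Math.log2(1 + useCount); // frequent use -> stronger
  return recency * reinforcement;
}

// A memory recalled often yesterday outranks one untouched for a year:
const fresh = memoryStrength(1, 10);  // used 10 times, 1 day ago
const stale = memoryStrength(365, 1); // used once, a year ago
```

Under this model a two-year-old throwaway remark scores near zero, so it stops crowding out yesterday's conversation at recall time.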
AI Is Learning About You. You Should Own What It Learns
When people talk about AI memory, it’s usually framed from the developer’s side: How do we store it? How do we retrieve it? How do we keep context alive? That’s where @Mnexium AI started too, since that ecosystem is important. But the original vision was different, and it has yet to be executed on: what if users owned their memories — not just the app owners? Today, every AI product...
🧠 Memory Graphs: Visualize How Your AI Remembers
When building AI agents with long-term memory, debugging is a challenge. You know something was remembered — but:

➡ When was it created?
➡ What replaced it?
➡ Why is it being recalled now?
➡ Why was it created as a memory in the first place?

Memory Graphs is our first attempt to fix that.

🔗 See memory evolution

Watch facts evolve across conversations: “favorite color = blue” → green → red → yellow...
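The "favorite color = blue → green → red" lineage can be pictured as a revision chain where each node records what it replaced and when. The data shape below is an assumption for illustration, not Memory Graphs' internal format:

```javascript
// Sketch of a fact's revision chain — the kind of lineage a memory graph
// can visualize. The node shape is an assumption for illustration.
function reviseFact(history, value, at) {
  const prev = history[history.length - 1] || null;
  return [...history, { value, at, replaced: prev ? prev.value : null }];
}

let color = [];
color = reviseFact(color, "blue", "2024-01-05");
color = reviseFact(color, "green", "2024-03-12");
color = reviseFact(color, "red", "2024-06-01");

// Each node answers "when was it created?" and "what replaced it?"
const latest = color[color.length - 1];
```

Walking the chain backward answers the debugging questions in the list above: creation time, replacement, and provenance all live on the nodes.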
🚀 New Provider: Google Gemini Support is Live!
@Mnexium AI now supports all three major AI providers!

✅ OpenAI ChatGPT models
✅ Anthropic Claude models
✅ Google Gemini models ← NEW

Why this matters: Your users can now seamlessly switch between providers while keeping their memory and context intact. Learn something with GPT-4 → Recall it with Gemini → Continue with Claude. Same user. Same memories. Any model.

How it works: Just use the...
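The "same user, same memories, any model" idea boils down to keying memory by `subject_id` rather than by provider. A sketch, mirroring the option names used in the snippets elsewhere in this feed (assumed, not documented API):

```javascript
// Sketch of cross-provider memory: the subject_id keys the memory pool,
// so requests to different models share it. Option names mirror the
// snippets in this feed and are assumptions, not documented API.
function memoryRequest(model, subjectId, content) {
  return {
    model,
    messages: [{ role: "user", content }],
    mnx: { subject_id: subjectId },
  };
}

const learned  = memoryRequest("gpt-4o", "user_ID_123", "I prefer metric units.");
const recalled = memoryRequest("gemini-1.5-pro", "user_ID_123", "What units do I prefer?");
```

Swapping `model` changes the provider; leaving `subject_id` unchanged is what keeps the memories attached to the same user.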
🧠 AI apps need memory but building it yourself is brutal
Most AI apps eventually hit the same wall: they forget users unless you build a ton of infrastructure first. This means every AI dev eventually ends up building this infra themselves just to give their agent and app the user experience it needs.

What “rolling your own” really means:

Vector DBs + embeddings + tuning
Extracting memories from conversations (and resolving conflicts)
Designing user...
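To make "Vector DBs + embeddings + tuning" concrete: even a toy recall step already needs an embedding store and a similarity search, and real systems layer embedding models, indexes, and conflict resolution on top. A minimal sketch with hand-made 3-d vectors standing in for real embeddings:

```javascript
// A taste of the "roll your own" path: even toy recall needs a vector
// store and similarity search. The 3-d vectors stand in for a real
// embedding model's output.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function recall(store, queryVec, topK = 1) {
  return [...store]
    .sort((x, y) => cosine(y.vec, queryVec) - cosine(x.vec, queryVec))
    .slice(0, topK);
}

const store = [
  { text: "user likes hiking", vec: [0.9, 0.1, 0.0] },
  { text: "user is allergic to peanuts", vec: [0.0, 0.2, 0.9] },
];
const hits = recall(store, [0.1, 0.1, 0.95]); // query vector near "allergy"
```

And this sketch still skips everything the list above mentions: extracting memories in the first place, resolving conflicting facts, and tuning retrieval quality.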
🧠 AI apps fail because the memory is bad
🧠 AI apps don’t fail because the model is bad. They fail because the memory is.

As more teams ship AI assistants, one quiet problem keeps showing up:

➡️ Conversations get longer
➡️ Context keeps getting re-sent
➡️ Costs explode — and quality drops

We’ve put together the comparison below to show how the main “memory” approaches stack up — and when each one actually makes sense. What stood out: 🔹...
Switch between ChatGPT and Claude — without losing memory or context
We just shipped multi-provider support in @Mnexium AI — so you can change LLMs without resetting conversations, user context or memories.

The problem

When teams switch providers, they usually lose everything:

conversation history
user preferences
long-term memory
learned context

Every conversation starts from zero. Not great for UX — or retention.

What Mnexium does

Mnexium now works with both:...
Mnexium — a memory layer so AI apps don’t forget anything
Most AI products eventually hit the same problem: they forget who the user is… and the experience feels generic again. We’ve been working on @Mnexium AI, a simple memory layer for AI apps that:

remembers users across sessions
recalls relevant context automatically
keeps conversation history lean (rolling summaries)
creates structured user profiles over time

No vector DB setup, no custom...
Feature Update: Rolling Conversation Summaries — Cut Chat Costs Without Losing Context
We built a feature to solve a problem most AI apps eventually run into: the longer the conversation, the more you keep paying to resend the entire chat history — over and over.

Blog: https://www.mnexium.com/blogs/chat-summarization
Docs: https://www.mnexium.com/docs#summarize

That “token tax” adds up fast. In the blog, we walked through a realistic scenario: 40 messages per...
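The "token tax" is easy to see with back-of-the-envelope arithmetic: resending the full history makes cumulative input tokens grow quadratically with message count, while a rolling summary keeps each request roughly flat. The per-message and summary sizes below are illustrative numbers, not figures from the linked blog:

```javascript
// Back-of-the-envelope token tax. Numbers are illustrative, not from
// the linked blog post.
function fullHistoryTokens(messages, tokensPerMessage) {
  let total = 0;
  for (let n = 1; n <= messages; n++) {
    total += n * tokensPerMessage; // request n resends all n messages
  }
  return total;
}

function rollingSummaryTokens(messages, tokensPerMessage, summaryTokens) {
  // each request sends a bounded summary plus only the newest message
  return messages * (summaryTokens + tokensPerMessage);
}

const full = fullHistoryTokens(40, 100);            // 40 msgs, ~100 tokens each
const summarized = rollingSummaryTokens(40, 100, 300); // ~300-token summary
```

With these toy numbers the full-history approach sends 82,000 input tokens over the conversation versus 16,000 with a rolling summary, and the gap widens as the chat gets longer.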
🚀 New Feature in Mnexium: Profiles that Build Themselves
Profiles that populate automatically

Mnexium can build structured user profiles based on what users say in conversations — without any separate onboarding forms.

If a user says: “I’m Sarah from Acme”

Mnexium records:
name = Sarah
company = Acme

You define the schema. You choose what fields exist:

company_name
job_title
subscription_tier
preferred_pharmacy
shipping_preference

Anything that makes...
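A toy version of schema-driven profile filling makes the idea concrete. Mnexium's real extraction is presumably LLM-based; the regex below is purely illustrative:

```javascript
// Toy illustration of schema-driven profile filling. The regex stands in
// for real (likely LLM-based) extraction and is for illustration only.
const schema = { name: null, company: null };

function updateProfile(profile, utterance) {
  const m = utterance.match(/I[’']?m (\w+) from (\w+)/);
  if (!m) return profile;
  return { ...profile, name: m[1], company: m[2] };
}

const profile = updateProfile(schema, "I'm Sarah from Acme");
```

The schema defines which fields can exist; conversation turns fill them in over time, with unmatched utterances leaving the profile untouched.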


🚀 Getting-started: Build a ChatGPT-style app with persistent memory
In this new getting-started guide, you will learn how to build a ChatGPT-style application that includes persistent memory, conversation history, and semantic recall — all using a single API from Mnexium. The guide walks through how Mnexium simplifies AI memory by replacing complex setups such as:

• vector databases
• embedding pipelines
• retrieval logic
• custom chat storage

Instead, memory,...

