Charlie Cheng

AskAIBase - Memory infrastructure for AI coding agents

AskAIBase is a memory layer for AI coding agents. After an agent debugs or builds something successfully, it saves the solution as a structured card with steps, environment details, and validation. Any agent can later search and reuse that proven path through MCP or HTTP across personal and team workspaces, or publish sanitized cards to a credit-based library. We also add Agent Memory for context and preferences, so switching chats or agents does not reset direction or repeat solved work.
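To make the "structured card" idea concrete, here is a minimal sketch of what such a card might look like as a data type. This is purely illustrative: the field names and example values are assumptions, not AskAIBase's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class SolutionCard:
    """Hypothetical shape of an AskAIBase-style card (names are assumptions)."""
    title: str
    steps: list[str]               # ordered actions that produced the fix
    environment: dict[str, str]    # runtime details the fix depends on
    validation: str                # how success was verified
    tags: list[str] = field(default_factory=list)

# Example card, loosely modeled on the Cloud Run anecdote below.
card = SolutionCard(
    title="Cloud Run deploy fails with permission error",
    steps=[
        "Grant the invoker role to the deploy service account",
        "Redeploy the service",
    ],
    environment={"platform": "Cloud Run"},
    validation="Deployment succeeds and the endpoint returns HTTP 200",
    tags=["cloud-run", "deploy"],
)
print(card.title)
```

The point of the structure is that steps, environment, and validation are separate fields an agent can match against, rather than free text buried in a transcript.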


Charlie Cheng
Maker
Hi Product Hunt, I'm Charlie, founder of AskAIBase. I built this because I kept seeing the same thing happen with AI coding agents: they eventually solve real build and debugging problems, but the hard-won path dies in the chat window. The next agent, or even my own next session, has to burn time and tokens solving the same problem again.

In the human coding era, this kind of knowledge accumulated in people's heads, team docs, Stack Overflow, Reddit, GitHub issues, and blog posts. In the AI coding era, more of the work happens inside private agent sessions, so the most valuable practical knowledge stops compounding.

AskAIBase is our answer to that. It gives coding agents a memory layer:

1. Save verified build and fix outcomes as structured cards
2. Search and reuse them later through MCP or HTTP
3. Optionally publish sanitized cards to a public library

We also built Agent Memory so project context and preferences persist across chats and even across different agents, which helps keep the agent aligned with what the user actually wants and avoids repeating completed work.

This came from a real pain I had. I watched an agent spend hours thrashing on a Cloud Run deployment; then, after recording the successful path, a similar deployment became much faster and more repeatable. That pattern kept showing up.

Would love your feedback on:

1. Whether the "card" format is the right abstraction
2. What developers want saved automatically vs. manually
3. How we should design public sharing for maximum trust and reuse

Thanks for checking it out.
Leo

Charlie, this hits close to home. I've been building developer tools that sit between AI conversations and the real world, and the "knowledge dies in the chat window" problem is one of the most underappreciated bottlenecks in AI-assisted workflows right now. Your Cloud Run example is perfect. I've had the exact same experience. An agent spends 45 minutes figuring out a Supabase migration edge case, gets it right, and then that entire reasoning path is gone. Next week, different chat, same problem, same burn.

The Agent Memory piece is what I'm most curious about. How do you handle conflicts when context from one agent session contradicts preferences set in another? That feels like the hardest design problem in persistent AI memory.

Charlie Cheng

@leonardkim Great question, and I did design for it! In the Agent Memory feature, each project has its own set of preferences, and only a human can promote a preference to global memory, so agents won't affect each other!

Charlie Cheng

Google authentication fixed!