Aditya Kumar Jha

LumiChats - Open source model. Proprietary agent. One AI workspace.

Bootstrapped team, no VC. We fine-tuned our own AI model and open-sourced it. LumiChat is built around it, with an agentic mode that writes and executes real Node.js code in a sandboxed browser environment (no server), persistent memory, a Study Mode, and RAG for large documents. Multi-model support. ₹69/day. The model is ours, the code is open, and the agent is extensible by anyone.


Replies

Aditya Kumar Jha
Let me be honest with you. Most AI tools you pay for every month are just API wrappers with a pretty UI. You are paying $20/month for something a developer built in a weekend. You have no idea what model is actually running, no idea if your data is being used, and zero ability to verify any of it.

We got frustrated by that too. So we did something different. We are a bootstrapped team with no VC, no lab, no funding. And we trained and open-sourced our own model. Not because it was easy. Because it was the only way to genuinely own what we were building and let you verify it for yourself. The code is public. The model is public. Nothing is hidden.

On top of that, we built the thing we actually wanted to use every day.

Agentic Mode. Most "AI agents" send your code to their servers to run. Ours executes real Node.js directly in your browser, in a sandboxed environment. Your code never leaves your machine. You see every line before it runs. That is not a marketing line, that is the architecture.

Persistent Memory. It remembers your projects, your preferences, your learning style across every session. You are never re-explaining yourself to your own AI.

Study Mode. Upload any PDF or name any topic. It generates structured lessons and quizzes you can actually learn from. Students have been using this for exam prep and it has become our most loved feature.

Document Intelligence. Drop in a 200-page PDF, a messy Excel file, a DOCX report. It reads the right parts using semantic search, not brute-force summarization.

We charge 69 rupees a day, only on the days you actually use it. Not a subscription you forget about.

We built this to be something you could trust completely, because you can see everything. The model. The code. The architecture. All of it. If you have ever felt like AI tools are a black box you are just supposed to trust, this one is not. Come poke around. Break it. Fork it. We would love that.
Happy to answer anything about the model training, the WebContainer architecture, or the pricing decisions. AMA.
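The "reads the right parts using semantic search" idea can be sketched in a few lines. This is a minimal, illustrative retrieval loop, not LumiChat's actual pipeline: it uses a toy term-frequency embedding and cosine similarity where a real system would use a neural embedding model and a vector index, and the `pages` sample data is made up.

```python
import math
from collections import Counter

def embed(text):
    # Toy term-frequency "embedding"; a real RAG pipeline would use a neural embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks, query, k=1):
    # Score every chunk against the query and return the k most similar ones.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

pages = [
    "Revenue grew 12 percent year over year in the retail segment.",
    "The warranty covers manufacturing defects for 24 months.",
    "Employees accrue 1.5 vacation days per month of service.",
]
print(retrieve(pages, "how long is the warranty period"))
```

The point of the design is that only the retrieved chunks reach the model's context, which is why a 200-page PDF does not get brute-force summarized.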
Marcelo Farr

Training and open sourcing your own model instead of just wrapping existing APIs is a bold move, and it makes the transparency promise actually credible. The "coffee price" positioning is clever too since it shifts the conversation away from feature comparisons and toward trust. @aditya_kumar_jha1 what was the hardest part of getting the model to perform well enough that you felt confident shipping it at this price point?

Aditya Kumar Jha

@marcelo_farr Honestly, it took about four months, and we're still not done. Flashcards, mind maps, and better quiz controls are all still on the roadmap.

We caught the hallucination problem ourselves during testing. I'd upload a 100-page PDF and ask something I knew was specifically on page 56. Simple, but brutal. If the answer didn't come from page 56, the model failed. That became our only benchmark; everything else stopped mattering once that test was passing consistently.

And the ₹69 (about $0.73) daily pricing came before Study Mode. We're developers; we can afford AI. Students often can't. I just didn't want that gap to get worse.

Ignacio Borrell

Training and open sourcing your own model at this price point is impressive. What dataset did you fine-tune on to get the hallucination rate low enough for exam prep? Looks great.

Aditya Kumar Jha

@borrellr_ Sorry for the late reply! We fine-tuned on mlabonne/FineTome-100k with response-only training, so the model only learns from assistant outputs, not user inputs. That, combined with our internal benchmark (asking questions from specific PDF pages and failing the model if it got them wrong), is what kept hallucinations low. All configs are public on our Hugging Face if you want to dig in!
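"Response-only training" usually means masking the prompt tokens out of the loss so gradients flow only through the assistant's reply. Here is a minimal sketch of that label-masking step, assuming a Hugging Face style convention where label `-100` is ignored by the loss; the token IDs and `prompt_len` split are made up for illustration and are not LumiChat's actual training code.

```python
IGNORE_INDEX = -100  # the loss-masking value conventionally ignored by cross-entropy in HF-style trainers

def mask_prompt_labels(input_ids, prompt_len):
    # Copy input_ids into labels, but replace the user/prompt tokens with
    # IGNORE_INDEX so the loss is computed only on assistant tokens.
    return [IGNORE_INDEX] * prompt_len + input_ids[prompt_len:]

# e.g. a 4-token user prompt followed by a 3-token assistant reply (IDs invented)
input_ids = [101, 2054, 2003, 102, 7592, 2088, 102]
labels = mask_prompt_labels(input_ids, prompt_len=4)
print(labels)  # [-100, -100, -100, -100, 7592, 2088, 102]
```

The effect is that the model never learns to imitate user phrasing, only to produce assistant outputs, which is the behavior described in the reply above.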