I built The Optimism Engine because I noticed a dangerous gap in how we're using AI for mental health.
Right now, everyone is rushing to add "AI chatbots" to their apps. But there's a huge risk they're ignoring: hallucinations. Generative AI (like ChatGPT) is creative, but it makes mistakes. It can miss a suicide cue. It can give bad advice. In mental health, a "creative" mistake isn't just a bug; it's a liability.
Research suggests we have 6,000-70,000 thoughts a day. 70-80% of them are negative, and 90-95% are the same ones on repeat. That loop kept me stuck for years: same doubts, same fears, same "not enough" story on autoplay.

So I built The Optimism Engine: a straightforward CBT-style tool that lets you face one negative thought at a time. Name the distortion. See the layers. Reframe it. Let it go. One belief at a time. One fear at a time. It's not therapy; it's just a way to interrupt the pattern.

Try it here (no signup required): https://optimism-engine.vercel.app/

What's the one thought you'd finally unbelieve if you could?
I built The Optimism Engine to explore something simple but powerful: how can AI help people think more clearly without sounding generic or robotic?

The Optimism Engine is designed to guide users through structured cognitive reflection. When someone shares a stressful thought, the system:

- Identifies possible cognitive distortions
- Maps the thought through progressive layers (Surface → Trigger → Emotion → Core Belief)
- Chooses a response mode (regulate, clarify, reframe, plan, or listen)
- Returns a grounded, non-templated response

The goal isn't just therapy; it's structured thinking. Instead of giving advice, it helps users examine their interpretation, underlying fear, and next actionable step. It was an interesting exercise in combining psychology frameworks with deterministic system design. Curious to keep building at the intersection of AI and human cognition.
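The four steps above can be sketched as a small deterministic pipeline. This is a minimal illustration only: the keyword cues, mode rules, and function names here are my assumptions for the sketch, not the actual implementation.

```python
# Illustrative sketch of the pipeline: identify distortions -> map layers
# -> choose a response mode -> return a structured result.
# All cue lists and rules are placeholder assumptions.

DISTORTION_CUES = {
    "catastrophizing": ("ruined", "disaster", "never recover"),
    "all-or-nothing": ("always", "never", "completely"),
    "mind reading": ("they think", "everyone thinks", "they hate"),
}

# Ordered rules: first match wins, so escalation beats reframing.
MODE_RULES = [
    (lambda d, t: "catastrophizing" in d, "regulate"),
    (lambda d, t: "?" in t, "clarify"),
    (lambda d, t: bool(d), "reframe"),
    (lambda d, t: "should i" in t, "plan"),
]

def identify_distortions(thought: str) -> list[str]:
    """Step 1: flag possible cognitive distortions via keyword cues."""
    lower = thought.lower()
    return [name for name, cues in DISTORTION_CUES.items()
            if any(cue in lower for cue in cues)]

def map_layers(thought: str) -> dict:
    """Step 2: progressive layers; deeper layers come from follow-up turns."""
    return {
        "surface": thought,
        "trigger": "(elicited in follow-up)",
        "emotion": "(elicited in follow-up)",
        "core_belief": "(inferred across turns)",
    }

def choose_mode(distortions: list[str], thought: str) -> str:
    """Step 3: pick a response mode deterministically from ordered rules."""
    lower = thought.lower()
    for rule, mode in MODE_RULES:
        if rule(distortions, lower):
            return mode
    return "listen"  # default: just reflect back

def respond(thought: str) -> dict:
    """Run the full pipeline and return a structured, non-templated result."""
    distortions = identify_distortions(thought)
    return {
        "distortions": distortions,
        "layers": map_layers(thought),
        "mode": choose_mode(distortions, thought),
    }

result = respond("I always mess things up; this project is ruined.")
```

Because the rules are ordered and explicit, the same input always yields the same mode, which is the "deterministic system design" trade-off against a purely generative responder.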