Replies
Evra
Hey man, it's Prince, founder of Evra. Congrats on your launch.
Would definitely give it a shot.
Lovon AI therapy
@princeajuzie hey Prince! thanks!!
Will check out Evra as well!
Wow, such a great tool... congrats on the launch!
I love the idea of immediate crisis recognition and actual help with connecting to real therapy. Which languages does it support for now?
Lovon AI therapy
@yellow_yetti thanks a lot, Vera!! Right now it's English-only, but more languages are coming!
@yellow_yetti We have crisis management built in, btw. If Anna notices destructive or dangerous behavior, she will help the user connect to a real therapist or emergency services.
Lovon AI therapy
@yellow_yetti what languages would you suggest to add next?
Really interesting positioning, especially the clinical foundation.
From a safety perspective, how do you ensure consistency in responses over time?
Is Anna aligned through fine-tuning, or do you rely on structured prompting and guardrails on top of a foundation model?
In mental health use cases, stability and drift control seem crucial; curious how you approach that.
Lovon AI therapy
@rachid_jeffali Thanks Rachid! Great question - it's actually both. We use fine-tuning AND structured guardrails on top of a foundation model. For us, the most critical thing is detecting potential mental health issues early and communicating them to users in time. Stability in a mental health context isn't optional, it's the whole point. What kind of drift patterns concern you most in clinical AI use cases?
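To make the "guardrails on top" part a bit more concrete, here is a minimal sketch of the general pattern - a rule check that runs before the model answers. The phrase list, the guardrail_check function, and the model.generate call are all illustrative placeholders, not our production code:

```python
# Illustrative guardrail layer in front of a foundation model.
# Everything here (phrases, names, interfaces) is a placeholder.

RISK_PHRASES = ["hurt myself", "no way out", "end it all"]

def guardrail_check(message: str) -> bool:
    """True if the message should be escalated instead of answered normally."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in RISK_PHRASES)

def respond(message: str, model) -> str:
    if guardrail_check(message):
        # Escalation path: route to crisis resources before any free generation.
        return ("It sounds like you're going through something serious. "
                "Let me help you connect with a real therapist or emergency services.")
    # Normal path: the fine-tuned foundation model produces the reply.
    return model.generate(message)  # model.generate is a stand-in interface
```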
Lovon AI therapy
@rachid_jeffali thanks a lot, Rachid! Appreciate it!
The maker confirmed it’s online-only, but the promise is “anytime you need it.” What happens if someone is in a panic situation on a flight or somewhere with no signal — is there any fallback or is Anna simply unavailable?
Lovon AI therapy
@fabrice_salah Great catch, Fabrice - you're right, currently "anytime you need it" means 24/7 with an internet connection. Coverage is pretty solid these days, but the airplane case is a real one. I actually had severe flight anxiety for years that I only recently managed to overcome, so this hits close to home. We're adding offline access to our backlog for sure. Are there other offline scenarios where you'd want AI emotional support?
Lovon AI therapy
@fabrice_salah thank you, Fabrice!
You mention HIPAA compliance and encryption at rest, but I didn’t see anything about a BAA, SOC2, or real clinical validation yet since it’s still “launching soon.” What has the crisis detection system actually been tested against so far?
Lovon AI therapy
@devopscraftsman Fair point, Alexandre. There's actually growing independent clinical evidence here - for example, a randomized controlled trial published in NEJM AI showed that a generative AI chatbot significantly reduced symptoms of depression, anxiety, and eating disorders vs. control, and participants rated therapeutic alliance with the AI as comparable to that of human therapists.
On crisis detection: we use NLP to analyze natural language in real time, flagging signals of harmful thoughts from conversational context.
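For a rough sense of what that can look like, here's a toy sketch that scores each message and watches a rolling window of the conversation, so a sustained elevated trend escalates even when no single message crosses the line. The classifier and its risk_score method are stand-ins, not our actual model:

```python
# Toy context-aware risk detector; classifier.risk_score is a hypothetical API.
from collections import deque

class CrisisDetector:
    def __init__(self, classifier, window: int = 5, threshold: float = 0.7):
        self.classifier = classifier        # any model returning a 0..1 risk score
        self.recent = deque(maxlen=window)  # rolling window of recent scores
        self.threshold = threshold

    def update(self, message: str) -> bool:
        """Score the new message in context; True means escalate to a human."""
        score = self.classifier.risk_score(message)  # hypothetical stand-in
        self.recent.append(score)
        high_now = score >= self.threshold
        sustained = (len(self.recent) == self.recent.maxlen
                     and sum(self.recent) / len(self.recent) >= 0.8 * self.threshold)
        return high_now or sustained
```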
What level of clinical evidence would make you comfortable recommending a tool like this?
Lovon AI therapy
@devopscraftsman Alexandre, thank you for your questions! This topic is a 10/10 in importance for us.
Subscription is not working; after payment, features are still locked.
Support is not responding to messages.
Lovon AI therapy
@sirthaven Sorry about that, Jakub! With our #1 Product Hunt launch we're seeing a huge influx of new users, so support response times are a bit longer than usual. Could you share the email you used to contact support? We'll double-check and make sure your subscription is unlocked ASAP 🙏
Lovon AI therapy
@sirthaven curious if everything is OK now?
Anton, this is a strong and thoughtful approach to AI in mental health 👏
What really stands out is your focus on evidence-based frameworks and not just building an agreeable chatbot. The idea of gently challenging unhealthy thinking is a big differentiator.
A few quick questions:
How do you balance therapeutic challenge vs. user retention?
How robust is the crisis detection system in avoiding false positives/negatives?
Are you seeing higher engagement with voice compared to text-based AI tools?
Will clinical validation results be publicly shared?
Positioning Lovon as a bridge between sessions (not a replacement) feels responsible and well thought out.
Excited to see how this evolves 🚀
Lovon AI therapy
@clyqforge Thanks for the kind words! On the balance — we found that gentle challenge actually improves retention because users feel real progress, not just validation. Voice engagement is significantly higher than text for us, which is why we went voice-only. And yes — clinical validation results will absolutely be shared publicly. What's the feature you'd personally want to see validated first?
Lovon AI therapy
@clyqforge thank you for your input!
Amazing project, Anton! Wishing you all the best here.
Lovon AI therapy
@german_merlo1 thank you, German!
Lovon AI therapy
@michael_vavilov thanks, Michael! Highly appreciated!
Lovon AI therapy
@michael_vavilov thanks a lot for the support!
Congratulations on the launch! You're building an awesome product imo!
Btw, how well does Lovon understand emotional nuances in the voice (happiness, sadness, etc.)?
Lovon AI therapy
@dora_akulshina good question, Dora! Actually, we don't work in that direction right now, but we see high potential there and will add it in future updates, 100%!
Thank you!
Lovon AI therapy
@dora_akulshina thanks a lot! We do sentiment analysis on the conversation text to understand users' emotions.
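For anyone curious what a basic text sentiment pass looks like, here's a minimal example using the open-source Hugging Face transformers pipeline. The default model it loads is just for illustration, not what we run in production:

```python
# Minimal sentiment check with Hugging Face transformers (pip install transformers).
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # loads a default English model

result = sentiment("I've been feeling really low since Monday.")[0]
print(result["label"], round(result["score"], 3))  # e.g. NEGATIVE 0.999
```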