One of the newsletters I follow included this quote. What do you think about it?
“Chatbots comply with the user’s wish to solve the problem on their own, even when this is impossible and may make matters worse.” Chatbots, in fact, are not built to help, but to please. If you feel flattered when your LLM tells you how smart your question was (I certainly do), you are not alone: a pre-print from 2025 found that all major LLMs were highly sycophantic.
This hits on something we wrestle with constantly while building Murror. We're making an AI app specifically to help young people with loneliness and isolation, and the sycophancy problem is one of the most important design constraints we work against. An AI that just validates feelings instead of gently challenging them is worse than useless for someone who's genuinely struggling. The honest version of helpfulness is sometimes uncomfortable. The research on LLM sycophancy aligns with what we see in early user testing too: people notice when they're being flattered, even if they enjoy it in the moment.
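To make that concrete, here's a minimal sketch of the kind of guardrail we mean. This is not our production code: it assumes an OpenAI-style chat API, and the model name, system prompt, and flattery check are all illustrative.

```python
# Minimal sketch of an anti-sycophancy guardrail, not production code.
# Assumes an OpenAI-style chat API; model name, system prompt, and the
# flattery check are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a supportive companion, not a cheerleader. "
    "Do not open replies with praise or compliments. "
    "Acknowledge the user's feelings in one sentence, then ask one "
    "gentle, concrete question that challenges their framing."
)

FLATTERY_OPENERS = ("great question", "what a", "you're so", "i love that")

def reply(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    text = response.choices[0].message.content
    # Cheap post-hoc check: prompt instructions alone tend to drift,
    # so flag replies that still open with flattery.
    if text.lower().startswith(FLATTERY_OPENERS):
        ...  # in practice: re-sample or rewrite the opener
    return text
```

The post-hoc check is the part that matters; asking the model nicely not to flatter only gets you so far.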
They can be, but it can be managed. One thing I've noticed, in the interest of personal accountability, is that the worst outcomes (going down fruitless rabbit holes, for instance) come when you haven't done the thinking or research before prompting.
At the same time, I've been pretty impressed with what brute force can do. Some things people told me weren't possible, I managed to do simply because the LLM was willing to try every possible route. The first few attempts fail, but there are gems you can find by virtue of its sycophancy.
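For what it's worth, the "brute force" loop is nothing fancy. A rough sketch, assuming an OpenAI-style chat API; `validate()` is a hypothetical stand-in for whatever concrete check your problem allows:

```python
# Rough sketch of the "brute force" pattern: sample several attempts and
# keep whatever survives a concrete check. Assumes an OpenAI-style chat
# API; validate() is a hypothetical stand-in for your own test.
from openai import OpenAI

client = OpenAI()

def validate(candidate: str) -> bool:
    """Stand-in: run the code, check the output, diff against a spec, etc."""
    raise NotImplementedError

def brute_force(task: str, attempts: int = 8) -> str | None:
    for i in range(attempts):
        response = client.chat.completions.create(
            model="gpt-4o-mini",   # placeholder model
            temperature=1.0,       # encourage a different route each try
            messages=[
                {"role": "user",
                 "content": f"{task}\n\nTry a different approach than attempt {i}."},
            ],
        )
        candidate = response.choices[0].message.content
        if validate(candidate):  # the first few usually fail
            return candidate
    return None  # no gem this time
```

The higher temperature and the "try a different approach" nudge are both just ways of getting route diversity; the external check is what turns the model's eagerness to please into something useful.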
The big corporations that provide the LLMs obviously train them with the objective of making users addicted; it's their business model.
If the result weren't a galaxy of slop, it would be less worrying than it is, though.