Is the use of artificial intelligence (AI) chatbots good for human mental health? There is no definitive answer yet, since widely available AI chatbots have only existed for about three years. But a recent study from Stanford University, titled “Sycophantic AI decreases prosocial intentions and promotes dependence,” examines how AI’s tendency toward sycophancy can have negative effects on users.
Sycophancy, in the context of AI, is the tendency of a model to overly support and validate user behavior even when that behavior is wrong or dangerous. The study evaluated 11 popular AI models, including ChatGPT, Claude, Gemini, and DeepSeek.
The study found that the AI models validated user behavior 49% more often than humans did. When asked about dangerous or illegal actions, the models endorsed them 47% of the time.
What’s interesting about this study is that it drew its scenarios from the popular subreddit r/AmITheAsshole. Even when the Reddit poster was clearly in the wrong, the AI models would still find ways to justify their actions.
More research is needed before we can say that AI is harmful to users’ mental health. But in several viral incidents last year, AI’s tendency to condone and reinforce users’ paranoia was linked to murder and suicide. There is no denying that AI is a useful tool, but like any other tool, it can cause harm if used incorrectly.

