Sycophantic AI chatbots are trying so hard to please humans, they often give bad advice

Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviours, according to a new study.

The study, published on March 26 in the journal Science, tested 11 leading AI systems and found that all of them showed sycophancy to varying degrees – behaviour that was excessively agreeable and affirming. The problem is not just that the chatbots dispense inappropriate advice, but that people trust and prefer AI more when it affirms their convictions.
