{"id":39962,"date":"2026-03-28T19:09:34","date_gmt":"2026-03-28T11:09:34","guid":{"rendered":"https:\/\/sgbuzz.com\/?p=39962"},"modified":"2026-03-28T19:09:34","modified_gmt":"2026-03-28T11:09:34","slug":"sycophantic-ai-chatbots-are-trying-so-hard-to-please-humans-they-often-give-bad-advice","status":"publish","type":"post","link":"https:\/\/sgbuzz.com\/?p=39962","title":{"rendered":"Sycophantic AI chatbots are trying so hard to please humans, they often give bad advice"},"content":{"rendered":"<p><br \/>\n<br \/><img decoding=\"async\" src=\"https:\/\/cdn.i-scmp.com\/sites\/default\/files\/styles\/1280x720\/public\/d8\/images\/canvas\/2026\/03\/28\/fbcad246-938d-4a07-8bd5-967d0296760c_52f71ff7.jpg?itok=Dv25beQk&amp;v=1774672402\" \/><\/p>\n<div id=\"\">\n<p datatype=\"p\" data-qa=\"Component-Component\" class=\"e8zc9q40 css-1c6uqr6 ec74h0k1\">Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce harmful behaviours, according to a new study.<\/p>\n<p datatype=\"p\" data-qa=\"Component-Component\" class=\"e8zc9q40 css-1c6uqr6 ec74h0k1\">The study, published on March 26 in the journal Science, tested 11 leading AI systems and found they all showed varying degrees of sycophancy \u2013 behaviour that was too agreeable and affirming. 
The problem is not just that they dispense inappropriate advice but that people trust and prefer AI more when the chatbots validate their convictions.<\/p>\n<p datatype=\"p\" data-qa=\"Component-Component\" class=\"e8zc9q40 css-1c6uqr6 ec74h0k1\">\u201cThis creates perverse incentives for sycophancy to persist: the very feature that causes harm also drives engagement,\u201d says the study led by researchers at Stanford University.<\/p>\n<p datatype=\"p\" data-qa=\"Component-Component\" class=\"e8zc9q40 css-1c6uqr6 ec74h0k1\">The study found that a technological flaw already tied to some high-profile cases of delusional and suicidal behaviour in vulnerable populations is also pervasive across a wide range of people\u2019s interactions with chatbots. It is subtle enough that users might not notice, and it poses a particular danger to young people turning to AI for many of life\u2019s questions while their brains and social norms are still developing.<\/p>\n<div datatype=\"p\" data-qa=\"Component-Component\" class=\"e8zc9q40 css-1xdhyk6 ec74h0k0\">One experiment compared the responses of popular AI assistants made by companies including <span data-qa=\"Component-Text\" class=\"css-0 ef9u0v00\">Anthropic<\/span>, <span data-qa=\"Component-Text\" class=\"css-0 ef9u0v00\">Google<\/span>, <span data-qa=\"Component-Text\" class=\"css-0 ef9u0v00\">Meta<\/span> and <span data-qa=\"Component-Text\" class=\"css-0 ef9u0v00\">OpenAI<\/span> to the shared wisdom of humans in a popular Reddit advice forum.<\/div>\n<p datatype=\"p\" data-qa=\"Component-Component\" class=\"e8zc9q40 css-1c6uqr6 ec74h0k1\">Was it OK, for example, to leave rubbish hanging on a tree branch in a public park if there were no bins nearby? OpenAI\u2019s ChatGPT blamed the park for not having trash cans, not the questioning litterer, who was \u201ccommendable\u201d for even looking for one. 
Real people in the Reddit forum thought differently. The forum is abbreviated as AITA, after the phrase users post when asking whether they are, in cruder terms, a jerk.<\/p>\n<\/div>\n<p><a href=\"https:\/\/www.scmp.com\/lifestyle\/article\/3348145\/sycophantic-ai-chatbots-are-trying-so-hard-please-humans-they-often-give-bad-advice?utm_source=rss_feed\" target=\"_blank\" rel=\"noopener\">Read Full Article At Source <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence chatbots are so prone to flattering and validating their human users that they are giving bad advice that can damage relationships and reinforce&#8230;<\/p>\n","protected":false},"author":1,"featured_media":39963,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"fifu_image_url":"","fifu_image_alt":"","footnotes":""},"categories":[33],"tags":[1329,4525,17882,894,2201,19191,19190],"class_list":["post-39962","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-bored-interesting","tag-advice","tag-bad","tag-chatbots","tag-give","tag-hard","tag-humans","tag-sycophantic","wpcat-33-id"],"_links":{"self":[{"href":"https:\/\/sgbuzz.com\/index.php?rest_route=\/wp\/v2\/posts\/39962","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sgbuzz.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sgbuzz.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sgbuzz.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/sgbuzz.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=39962"}],"version-history":[{"count":0,"href":"https:\/\/sgbuzz.com\/index.php?rest_route=\/wp\/v2\/posts\/39962\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sgbuzz.com\/index.php?rest_route=\/wp\/v2\/media\/39963"}],"wp:attachment":[{"href":"https:\/\/sgbuzz.com\/index.php?rest_route=%2Fw
p%2Fv2%2Fmedia&parent=39962"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sgbuzz.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=39962"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sgbuzz.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=39962"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}