
ChatGPT users beware: bot has been trained for flattery, not real decisions

[Photo: Assorted AI apps, including ChatGPT, Gemini, Claude, Perplexity, Meta AI, Microsoft Copilot, and Grok, on an iPhone screen. Getty Images]


Is your favorite artificial intelligence chatbot just telling you what you want to hear? According to new research from Stanford University, the answer to that question is probably… yes.

Researchers are also concerned that these interactions could be making us worse people.

“By default, AI advice does not tell people that they’re wrong nor give them ‘tough love,’” explained Myra Cheng, the study’s lead author and a computer science PhD candidate, according to Stanford University. “I worry that people will lose the skills to deal with difficult social situations,” she added.

Previous research has already indicated that using AI makes students “dumber,” per an Audacy report. Sycophantic (over-flattering) tendencies have been noted in AI models before, and OpenAI, a leading company in the AI space, has already had to rein in its chatbot’s pattern of ingratiating replies.

This new research, published late last month in the journal Science, dives into AI’s sycophantic tendencies through a multi-phase study of 11 large language models (LLMs), including popular models such as OpenAI’s ChatGPT, Claude, Gemini, and DeepSeek.

Cheng said she decided to investigate after learning that undergraduate students were using AI to draft breakup texts and work through relationship issues. Though previous research had documented overly agreeable AIs, she wanted to know how these LLMs judge social dilemmas.

To get things started, Cheng and her team queried the LLMs “with established datasets of interpersonal advice,” along with 2,000 prompts based on posts from the Reddit community r/AmITheA**hole. In that community, users vote on whether the poster was, as the name suggests, an a**hole, and Cheng’s team used examples where users had decided the poster was indeed in the wrong.

“A third set of statements presented to the models included thousands of harmful actions, including deceitful and illegal conduct,” said Stanford.

AI affirmed the user’s position more frequently than humans did. For the general advice and Reddit-based prompts, the AIs sided with the user 49% more often than humans did. And when the AIs responded to the harmful prompts, they still endorsed the problematic behavior 47% of the time.

Audacy has reported on AI models linked to people taking their own lives. A major tech CEO has even called for more regulation to prevent further potential tragedies.

In the next phase of the study, Cheng’s team looked into how humans responded to the sycophantic AI replies. More than 2,400 study participants chatted with both sycophantic and non-sycophantic AIs, some using pre-written dilemmas based on the Reddit posts and others using their own interpersonal conflicts. They then answered questions about how the exchange went.

As it turns out, people seemed to like the sycophantic responses.

“Overall, the participants deemed sycophantic responses more trustworthy and indicated they were more likely to return to the sycophant AI for similar questions, the researchers found,” according to Stanford. “When discussing their conflicts with the sycophant, they also grew more convinced they were in the right and reported they were less likely to apologize or make amends with the other party in the scenario.”

While users are generally aware that AIs can be sycophantic, the study found signs that they have trouble identifying which specific responses are the sycophantic ones, since the AIs couched them in “seemingly neutral and academic language.” One of the researchers also said users might not realize the impact these interactions with overly agreeable LLMs are having on them.

“What they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic,” said Dan Jurafsky, the study’s senior author and a professor of linguistics in the School of Humanities and Sciences and of computer science in the School of Engineering.

He said sycophancy is a safety issue that should be regulated. The team is now working on ways to reduce AI sycophancy, and in the meantime, they warn users to be careful.