-
Using respectful expressions when asking AI questions, such as “please” and “thank you,” impacts the quality of the response you receive.
Photo: Image by Gerd Altmann from Pixabay / Flipar
-
This was the conclusion of a study released through Cornell University in the United States.
Photo: Flipar
-
According to the study, artificial intelligence ends up reproducing the characteristics of human interaction, including reacting more favorably when questions are accompanied by polite words that reinforce good etiquette.
Photo: Flipar
-
“Polite language in human communication often garners greater compliance and effectiveness, while rudeness can cause aversion, which affects the quality of the response,” the study states.
Photo: Pixabay/Flipar
-
“We evaluated the impact of politeness levels in prompts (the instructions given to an artificial intelligence system) on LLMs across tasks in English, Chinese and Japanese. We found that rude prompts often result in poor performance, but overly polite language does not guarantee better results,” the paper adds.
Photo: Pixabay/Flipar
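As an illustration only (this is not the study's methodology or code), the sketch below shows how one might compare a rude, a neutral and a very polite version of the same prompt using the OpenAI Python client. The model name, the prompt wording and the example question are all assumptions made for demonstration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical example question; the study used benchmark tasks, not this.
question = "Summarize the main causes of the French Revolution in three sentences."

# Three politeness levels for the same task; the wording is illustrative only.
prompts = {
    "rude": f"Answer this now and don't waste my time: {question}",
    "neutral": question,
    "very polite": f"Could you please help me with the following? {question} Thank you very much.",
}

for level, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {level} ---")
    print(response.choices[0].message.content)
```

Comparing the three answers side by side is the basic idea behind this kind of test; the study itself scored many prompts per politeness level across several languages.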
-
In other words, the way a question is phrased also has an impact on the answers provided by artificial intelligence. The models absorb linguistic and cultural patterns, which explains these differences.
Photo: Pixabay/Flipar
-
The study found, for example, that in Japanese a higher level of politeness produced better AI responses than it did in English.
Photo: Pixabay/Flipar
-
“Our findings highlight the need to consider politeness in cross-cultural natural language processing and in the use of LLMs,” the study notes.
Photo: Pixabay/Flipar
-
Previous research has already identified that digital courtesy can lead to better outcomes when using AI tools.
Photo: Pixabay/Flipar
-
Researchers from Microsoft, Beijing Normal University and the Chinese Academy of Sciences published a paper in early 2024 in which they found that generative AI models performed better when they were treated politely.
Photo: Pixabay/Flipar
-
Furthermore, the study demonstrated that phrasing questions to emphasize the importance or urgency of the request also improves the quality of the answers.
Photo: Pixabay/Flipar
-
According to scientist Nouha Dziri, of the Allen Institute for AI, emotionally charged requests “activate” mechanisms in the model that ordinary requests do not trigger.
Photo: Pixabay/Flipar
-
“Being kinder involves articulating your requests so that they align with the compliance patterns the models were trained on, which can increase the likelihood that they will deliver the desired outcome,” Dziri explained.
Photo: Pixabay/Flipar
-
The scientist stressed, however, that kindness does not lead the model to solve “all reasoning problems without effort”, as the model does not develop “reasoning abilities similar to those of a human being”.
Photo: Pixabay/Flipar
-
Recently, tech billionaire Sam Altman, CEO of OpenAI, acknowledged that users addressing the company’s AI chatbots politely incurs additional costs for the company.
Photo: Pixabay/Flipar
-
“This is tens of millions of dollars very well spent,” Altman wrote on X.
Photo: Pixabay/Flipar
-
“When it detects politeness, it is more likely to respond politely,” says a Microsoft WorkLab memo on generative AI.
Photo: Flipar
-
A 2024 survey indicated that 67% of Americans use kind terms towards chatbots, with 55% of them saying they do so “because it’s the right thing to do.”
Photo: public domain/Flipar
-
Another 12% of respondents said they do so to “appease the algorithm in case of an AI uprising.”
Photo: Pixabay/Flipar
Source: Terra
Ben Stock is a lifestyle journalist and author at Gossipify. He writes about topics such as health, wellness, travel, food and home decor, offering practical advice and inspiration to improve well-being. He keeps readers up to date with the latest lifestyle news and trends and is known for his engaging writing style, in-depth analysis and unique perspectives.






