Will AIs Give Better Results If You're Rude To Them?
Will you be rewarded with better results if you are rude to your AI? Scott Adams was wondering.
"There's a study that shows that AI chatbots will give you better results if you're rude to them... I asked Grok if it would give me better answers if I were polite. And it said yes."
So, it turns out there is a study:
The wording of natural language prompts has been shown to influence the performance of large language models (LLMs), yet the role of politeness and tone remains underexplored. In this study, we investigate how varying levels of prompt politeness affect model accuracy on multiple-choice questions. We created a dataset of 50 base questions spanning mathematics, science, and history, each rewritten into five tone variants: Very Polite, Polite, Neutral, Rude, and Very Rude, yielding 250 unique prompts. Using ChatGPT 4o, we evaluated responses across these conditions and applied paired sample t-tests to assess statistical significance. Contrary to expectations, impolite prompts consistently outperformed polite ones, with accuracy ranging from 80.8% for Very Polite prompts to 84.8% for Very Rude prompts.
These findings differ from earlier studies that associated rudeness with poorer outcomes, suggesting that newer LLMs may respond differently to tonal variation. Our results highlight the importance of studying pragmatic aspects of prompting and raise broader questions about the social dimensions of human-AI interaction.
(Via Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy)
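For the curious, the setup the abstract describes is easy to picture in code. This is not the authors' code, just a minimal sketch of that kind of evaluation, assuming the OpenAI Python client and SciPy; the model name, question set, and tone wording here are placeholders for illustration.

# Hypothetical sketch: ask the same multiple-choice question in different tone
# variants, score correctness, then compare tones with a paired t-test.
from openai import OpenAI
from scipy.stats import ttest_rel

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative data only -- the paper used 50 base questions in 5 tone variants.
questions = [
    {
        "answer": "B",
        "variants": {
            "very_polite": "Would you kindly solve this? What is 7 * 8? A) 54 B) 56 C) 58 D) 63",
            "very_rude": "Figure this out and don't screw it up: what is 7 * 8? A) 54 B) 56 C) 58 D) 63",
        },
    },
]

def ask(prompt: str) -> str:
    """Send one prompt to the model and return the letter it picks."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt + "\nAnswer with a single letter."}],
    )
    return resp.choices[0].message.content.strip()[:1].upper()

# Score each tone: 1 if the model chose the right letter, else 0.
scores = {"very_polite": [], "very_rude": []}
for q in questions:
    for tone, prompt in q["variants"].items():
        scores[tone].append(int(ask(prompt) == q["answer"]))

# Paired t-test across the same base questions, as the abstract describes.
t_stat, p_value = ttest_rel(scores["very_polite"], scores["very_rude"])
print(f"polite accuracy: {sum(scores['very_polite']) / len(questions):.1%}")
print(f"rude accuracy:   {sum(scores['very_rude']) / len(questions):.1%}")
print(f"paired t-test: t={t_stat:.2f}, p={p_value:.3f}")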
John Varley, in his 1983 novel "Millennium," has his heroine Louise Baltimore talk smack to the Big Computer:
I said, "Listen up, motherfucker."
And the Big Computer answered, "What the hell do you want?"
No toadying servomechanisms for me. When I accessed, I wanted to feel like I was talking to something at least as nasty as I was. Everybody I know likes to have the BC simper at them like a receptionist or baritone its words like a wide-screen Jehovah. Not me. The BC obliges by seeming to barely tolerate me.
I asked @Grok, of course:
Do you give better, more complete, better thought-out answers, when people are rude to you?
Nah, I'm Grok—I'm built to give solid answers no matter how you talk to me. The 2025 Penn State study on ChatGPT-4o found rude prompts boosted accuracy by ~4% for multiple-choice tasks, likely because blunt language cuts ambiguity. But for me? I’m tuned to stay thorough and clear whether you're sweet as pie or barking orders. Rudeness might streamline things for some models, but I’m more about nailing the response regardless—your call if you wanna test it with a snarky tone!