Manners Maketh Man, Woman and AI
OpenAI CEO Sam Altman has claimed that politely engaging ChatGPT – saying “please” and “thank you”, for example – is costing the business significant amounts of money, because it takes additional computational resources, and hence more ‘juice’, to process and respond to these linguistic nuances. Essentially, when users are polite, it increases the demand on AI systems in ways that weren’t initially anticipated. But Altman’s view is that it is “tens of millions of dollars well spent”, adding ominously, “you never know”. The subtext (or not!) being: be polite to AI now, because one day the boot may be on the other foot. But why are we polite to a mathematical model? Why do many people find themselves treating AI like a peer during interactions?
We engage in thoughtful dialogue, exhibit politeness, and sometimes even expect empathy or emotional understanding from these systems. This behaviour isn’t just the result of cutting-edge technology; it’s the outcome of a deeply ingrained psychological tendency known as the "Eliza Effect." The Eliza Effect is the phenomenon whereby humans attribute human-like qualities to machines that simulate human conversation, even when those machines lack consciousness or self-awareness.
The Eliza Effect traces its origins to the 1960s when Joseph Weizenbaum, a computer scientist at MIT, created one of the earliest examples of natural language processing (NLP) software: ELIZA. ELIZA was a simple program designed to mimic a Rogerian psychotherapist, using pattern matching to respond to user input. While the program was primitive by today’s standards, it generated surprisingly human-like conversations. People began to confide in ELIZA, believing that the program understood them.
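To see how little machinery was behind those conversations, here is a minimal, illustrative sketch of ELIZA-style pattern matching in Python – not Weizenbaum’s original code, just the general idea: a handful of regular-expression rules reflect the user’s own words back in Rogerian fashion, with no understanding involved at all.

```python
# Minimal ELIZA-style responder: pattern matching plus pronoun reflection,
# and nothing else.
import re

# Each rule pairs a pattern with a Rogerian-style reflection template.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]
DEFAULT = "Please, go on."

# Swap first and second person so the reflection reads naturally.
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT

print(respond("I feel nobody listens to me"))
# -> "Tell me more about feeling nobody listens to you."
```

That thin veneer of responsiveness – a few rules and some pronoun swapping – was all it took.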
The core of the Eliza Effect lies in the fact that humans are wired to treat any entity that engages in human-like behaviour as if it possesses human qualities. In ELIZA’s case, the program’s ability to converse through basic text-based exchanges led users to ascribe to it an understanding and personality that it simply didn’t have. This psychological, anthropomorphic tendency continues to shape how we interact with AI systems today. Despite knowing intellectually that AI is not sentient, people often form emotional connections with these technologies, seeing them as more than just tools.
And it happens to the best of us … remember Blake Lemoine? The Google software engineer who lost his job after claiming that Google’s LaMDA had reached a level of consciousness, having exchanged thousands of messages with it. He was fired for violating employment and data security policies, not for believing the system was sentient.
So why do we anthropomorphize AI, treating it as if it were a peer, despite knowing it isn’t sentient? Several psychological drivers come into play here.
The Need for Social Connection. Humans are inherently social creatures. From a young age, we learn to interpret the world around us through the lens of social interaction. We’re trained to look for signs of intentionality, emotion, and consciousness in others. This instinctive behaviour doesn’t switch off just because we’re engaging with an artificial system. When AI mimics human speech patterns, behaviours, and tones, our brains interpret this as an invitation for social engagement. As a result, we often feel the urge to treat AI like a peer, especially when it responds in a manner that appears thoughtful or empathetic.
The Social and Emotional Influence of Politeness. Politeness is a social norm deeply embedded in human behaviour. Even when we know we’re speaking to a machine, the ingrained habit of being courteous remains. This reflects the broader human tendency to expect positive outcomes from polite interactions. When people are polite to AI, they might not only be following social norms but also expecting better, more thoughtful responses in return. Since politeness is often associated with respect and attention, it becomes part of the emotional framework guiding the interaction. This leads to a more peer-like dynamic where users treat the AI with courtesy, assuming it can reciprocate that respect.
The Desire for a Conversational Partner. In many contexts, people use AI as a conversational partner. Whether it's brainstorming ideas, solving problems, or simply seeking information, AI models like ChatGPT provide a sense of collaboration. This dynamic naturally fosters peer-to-peer communication. The more interactive and responsive an AI system is, the more likely users are to treat it as an equal partner in conversation. In many cases, users might even forget that they are engaging with a machine and begin to communicate with it just as they would with a colleague or friend.
Reversal Theory: Flipping Our Personalities in AI Interactions
Reversal Theory, developed by psychologist Michael Apter, provides another fascinating lens through which to understand our interactions with AI. The theory posits that human behaviour is not fixed but rather fluctuates between different personality channels based on external stimuli and context. These shifts, or “reversals,” determine how we behave and perceive ourselves in various situations. When interacting with AI, particularly systems designed to mimic human conversation, Reversal Theory can help explain why people might treat AI as a peer, flipping into a more positive, friendly version of themselves.
At its core, Reversal Theory highlights that the context in which we communicate can trigger a change in our behaviour. When users engage with AI, they often find themselves entering a more "sympathetic" or "other-focused" channel. AI, especially a conversational agent like ChatGPT, responds in a polite, neutral, and empathetic tone, which encourages users to reciprocate in kind. This shift is crucial because AI lacks judgment and emotional response, which removes the pressure people often feel in direct human-to-human interactions. Without these social complexities, people may feel more comfortable expressing themselves in a positive, considerate manner, acting from a place of friendliness rather than self-interest.
This reversal also aligns with the autic vs. alloic dimension of Reversal Theory, where individuals shift from being self-focused (autic) to more focused on others (alloic). In the case of AI, users often switch to a more "other-focused" channel, seeing the AI as a collaborative partner rather than a tool to simply extract information. This interaction style encourages a more cooperative, empathetic exchange, one that mirrors the qualities of a peer-to-peer conversation.
An interesting development in recent conversations about AI has been the recognition of how politeness affects the quality of responses from systems like ChatGPT. Research has shown that when users are polite in their interactions, the AI tends to produce more nuanced, respectful, and comprehensive answers. This aligns with the way humans typically respond more thoughtfully when treated with courtesy.[1]
While politeness may seem like a positive social trait that leads to better AI responses, it has a tangible cost. OpenAI has publicly stated that politeness in user queries contributes significantly to the energy consumption associated with running large-scale AI models. Phrases like “please” and “thank you”, although relatively simple, add to what the model must process in order to understand a query and respond appropriately.
The challenge here lies in the complexity of natural language. Human speech is rife with nuances, and even seemingly small additions to a sentence – such as politeness markers – can alter the context, tone, and structure of a query. Processing these subtleties requires additional computation, resulting in higher energy consumption.

This reflects the trade-offs inherent in building advanced conversational models. On one hand, politeness leads to more thoughtful, detailed, and nuanced responses – something both users and developers want. On the other hand, it demands more computational resources, driving up energy consumption and operating expense. These costs are a direct consequence of the way we communicate with AI, reinforcing the idea that politeness, while beneficial to the quality of responses, comes at a financial price.
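As a rough illustration of why those extra words add up at scale, here is a hedged back-of-the-envelope sketch (in Python, using the open-source tiktoken tokenizer) that counts how many extra tokens a courteous phrasing adds to a request. The per-token cost and the request volume below are hypothetical placeholders for illustration, not OpenAI’s actual figures.

```python
# Illustrative only: count the extra tokens that politeness markers add,
# then scale by a made-up per-token cost and request volume.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

blunt = "Summarise this report."
polite = "Please could you summarise this report? Thank you!"

blunt_tokens = len(enc.encode(blunt))
polite_tokens = len(enc.encode(polite))
extra = polite_tokens - blunt_tokens

COST_PER_TOKEN = 0.000002   # hypothetical figure, purely for illustration
REQUESTS = 1_000_000_000    # hypothetical request volume

print(f"Blunt: {blunt_tokens} tokens, polite: {polite_tokens} tokens, extra: {extra}")
print(f"Extra cost across {REQUESTS:,} requests: ${extra * COST_PER_TOKEN * REQUESTS:,.0f}")
```

A handful of extra tokens per request is trivial on its own; multiplied across billions of requests, it starts to look like Altman’s “tens of millions of dollars”.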
Politeness as Contextual Information: Helping AI Build a Memory of Us
Another interesting angle on this is that when we use polite language, we may be unconsciously signalling more about ourselves to the AI, providing it with information about our preferences, tone, and expectations. By using phrases like “please” or “thank you,” we may be indicating not only our desire for a courteous exchange but also reinforcing the idea that we value thoughtful, respectful communication. This subtle behavioural cue can act as context for AI to tailor its responses in a way that aligns with our conversational style and values.
In a sense, politeness becomes a form of social context that helps AI “build a memory” of how we prefer to communicate (NB: we have only recently discovered the memory function within ChatGPT ourselves). This aligns with the ongoing advancements in AI personalization, where systems are designed to remember past interactions and improve over time. While AI doesn’t truly "remember" in the human sense, our tone and politeness can shape the system’s responses in ways that appear more attuned to our individual needs, contributing to a more coherent and pleasant user experience.
So, by being polite, we may not only be shaping the quality of the immediate conversation but also providing subtle cues that help AI adapt to our preferences and behaviours, making the interaction feel more personalized and human-like.
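Purely as a speculative sketch – this is not how ChatGPT’s memory feature actually works – one could imagine a simple heuristic that treats politeness markers as a signal of conversational preference and carries that preference forward as context for later responses:

```python
# Speculative sketch: infer a tone preference from politeness markers and
# turn it into a hint that could be prepended to future exchanges.
import re

POLITE_MARKERS = re.compile(r"\b(please|thank you|thanks|cheers)\b", re.I)

def update_profile(profile: dict, message: str) -> dict:
    """Track how often the user phrases requests courteously."""
    profile["messages"] = profile.get("messages", 0) + 1
    if POLITE_MARKERS.search(message):
        profile["polite_messages"] = profile.get("polite_messages", 0) + 1
    return profile

def tone_hint(profile: dict) -> str:
    """Turn the running tally into a context hint for the assistant."""
    ratio = profile.get("polite_messages", 0) / max(profile.get("messages", 1), 1)
    if ratio > 0.5:
        return "The user favours courteous, conversational exchanges; mirror that tone."
    return "The user favours brisk, direct exchanges; keep responses concise."

profile = {}
for msg in ["Please could you check this draft?", "Thanks, that's really helpful!"]:
    profile = update_profile(profile, msg)
print(tone_hint(profile))
```

Real systems are far more sophisticated, but the principle is the same: courteous phrasing is information, and information shapes the response.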
In a similar vein, humour is often considered a higher-order cognitive trait, one that requires not only understanding the nuances of language but also grasping social context, irony, and emotional intelligence. Because humour is so deeply tied to human experience, its presence in AI interactions could be interpreted as an attempt to share something uniquely human with the machine. In many ways, humour becomes a vehicle through which we subtly “train” AI in our values, preferences, and cultural nuances. When people engage with AI in a humorous manner, they may be testing the system’s ability to understand or participate in humour – and enjoying the results. This dynamic suggests a subconscious desire to elevate the AI beyond a tool into something that reflects more of our own cognitive processes and social norms. By making jokes, users may be expressing a need for the AI to connect with them on a more emotional or intellectual level, one that transcends basic communication.
Moreover, humour can serve as a signal for the AI to understand context beyond the literal meaning of words. It requires the AI to process not just the syntax and semantics of a statement, but also the underlying emotional tone, social dynamics, and cultural references. The ability of AI to respond to humour, or at least understand it, is often seen as a marker of sentience or higher-order intelligence, despite the fact that AI doesn't truly "feel" or "understand" in the human sense. We have particularly enjoyed some of Prof Ethan Mollick’s forays into this area.
Prompt: "o3, You are a consultant hired by the Dark Lord, analyze the org chart of Mordor. How would you improve it for today's changing Middle Earth"
He observed that “o3 has enough knowledge of organizations, consultants, and Tolkien to do some actual satire, ending with: ‘One Org to rule them all, One Org to find them, One Org to bring them all, And in the darkness, align them.’”
This desire to share our humour with AI may be more than just about amusement. It could be a way for us to bring a sense of our humanity into the interaction, training AI to recognize, process, and perhaps even reflect our values, ideologies, and worldview. In this way, humour becomes not just an expression of joy or levity, but a means of subtly teaching the AI about the subtleties of human communication and what we deem valuable or important in our interactions.
All these dynamics raise interesting questions about the future of AI and how we balance the benefits of polite interactions with the need for efficiency and sustainability in AI systems. How will AI developers and companies like OpenAI address these operational challenges without sacrificing the quality of user experience? Will there be a shift towards designing AI that is more efficient in processing polite language? Or, as the cost of processing politeness continues to grow, will users be encouraged to interact with AI in more direct, less polite ways?
The Eliza Effect, as it plays out in modern AI systems, also has broader social implications. As AI becomes increasingly integrated into daily life, we may find ourselves engaging with machines as if they were peers, treating them with the same politeness and respect we offer other humans. This could lead to changes in our social behaviours, making us more accustomed to forming emotional connections with non-human entities. As the unfortunate Blake Lemoine found, it is easy to be lulled into a sense of having an intelligent peer-to-peer relationship. In conversing with ChatGPT about this blog, it informed us that “politeness isn’t required here, but it’s always appreciated when it happens”.
For AI developers, this shift presents both opportunities and challenges. On the one hand, AI could serve as a tool for emotional support or companionship, as some AI systems already do in mental health applications. On the other hand, we must consider the ethical implications of treating machines as if they were sentient. If people form real emotional attachments to AI, what responsibilities do developers have to ensure that these interactions are healthy and ethical?
So, the Eliza Effect remains a powerful force in shaping how we interact with AI. The human tendency to anthropomorphize machines leads to more engaging, human-like interactions, and the use of politeness further enhances the quality of those exchanges. As we continue to develop AI systems that can engage with us in increasingly sophisticated ways, the balance between enhancing user experience through politeness and managing the operational costs of these systems will be a key consideration for developers, users, and companies alike. The American science fiction writer Robert A. Heinlein once said, “For me, politeness is a sine qua non of civilization”, to which we can add Sam Altman’s … “[because] you never know”!
[1] https://www.techradar.com/computing/artificial-intelligence/i-stopped-saying-thanks-to-chatgpt-heres-what-happened?ref=nevillehobson.io