Is AI making us nicer?

If you spend enough time with AI – whether it’s ChatGPT, Gemini, Copilot or any of the customer-service bots we now meet on banking or telco sites – you start to notice something subtle.

These systems respond better when you’re clear, polite and reasonable. They shut down, deflect or nudge you away when you use hostile or discriminatory language. Over time, that does something to you.

It raises an unexpected question: in a region like the UAE, which now leads the world in AI adoption with almost 60% of the working-age population using AI tools, is this technology quietly making us nicer?

As a marketer, I’m used to thinking about AI in terms of productivity, content, campaigns and measurement. But there’s a deeper layer emerging: AI doesn’t just change how we work. It may be changing how we behave.

The politeness feedback loop

Modern AI systems are trained with safety and moderation layers. They avoid hate speech, threats, abuse, and explicit content; they steer users away from harmful requests; and they model calm, neutral, inclusive language.

From a behavioural point of view, this is classic reinforcement. The tone that “works” – the one that gets you the most helpful output – is the one that’s polite, clear and cooperative.

Researchers studying “AI nudges” describe how AI suggestions act as gentle behavioural interventions. Psychologists point to two deeper mechanisms behind this. Communication accommodation theory shows that humans naturally adjust their tone and vocabulary to match their conversational partner. When that partner is an AI that’s endlessly calm and neutral, the adaptation becomes automatic.

Priming plays a role, too. Repeated exposure to polite cues shifts how people respond emotionally. Experiments show that even brief exposure to empathetic language can increase patience and soften conflict responses.

None of this means AI is turning us into better people. But it does mean the psychological environment it creates is shaping micro-behaviours far more often than we realise.

Can chatbots actually increase empathy?

A growing body of research suggests conversational AI can increase pro-social behaviour in controlled settings.

One experiment found that when chatbots displayed human-like emotional cues, participants showed higher levels of empathy and were more willing to support or forgive the bot after it made mistakes.

In healthcare trials, generative AI has been used to craft messages that gently guide patients towards healthier decisions. In civic contexts, AI-generated cues influence user behaviour without restricting choice – the digital equivalent of placing fruit at eye level.

This doesn’t prove that AI makes us kinder. But it does show that it can steer human attitudes in measurable ways.

Politeness isn’t the same as goodness

There is another side to the story.

First, AI politeness is culturally coded. Most major models are trained on Western communication norms, which favour indirectness and conflict avoidance. In a multicultural environment like the UAE, people can be nudged toward a tone that isn’t native to them.

Sociologists warn of a second risk: the gradual homogenisation of communication norms. If AI systems increasingly define what “appropriate” language looks like, they may unintentionally narrow the spectrum of acceptable expression. Cultures that value directness, humour, or passionate debate may find themselves nudged toward a more muted, Western-coded form of politeness.

Third, speaking politely to AI doesn’t guarantee we transfer that behaviour to other humans. We might simply be learning to optimise prompts rather than improve our empathy.

Fourth, some research suggests AI can encourage more outcome-focused moral reasoning, which may reduce emotional empathy in certain situations.

And finally, AI can be highly persuasive. A 2024 Nature Human Behaviour study found that AI systems were more persuasive than humans in 64% of cases. If AI nudges us towards a tone or a viewpoint, we should recognise how influential these systems already are.

There’s also an ethical dimension: if AI systems shape tone and behaviour – not through force, but through design – who defines what “good behaviour” is?

These models are trained predominantly on Western datasets, moderated by Western social norms, and tuned by corporate or institutional values. What appears to be “neutral politeness” is, in reality, a specific cultural framework exported at scale.

None of these counterpoints undermines the core idea: AI is shaping behaviour, often in ways that align with safety, cooperation and social acceptability. The question is whether we guide that influence well.

The UAE is a live testbed for AI-shaped behaviour

If there is any place where this transformation is playing out visibly, it is the GCC – and particularly the UAE.

One Microsoft survey shows the Gulf has some of the world’s highest generative AI adoption rates, with 78% of frontline employees using these tools regularly. Another report shows nearly 60% of the UAE’s working-age population is using AI tools – levels above those in most Western markets.

Layer on the UAE’s exceptional cultural diversity – more than 200 nationalities living and working together – and you get a rare social experiment. If AI encourages clearer, calmer communication, the effect is amplified in an environment where misunderstandings can easily occur across languages and cultural norms.

Small AI-mediated adjustments – a softened email, a diplomatic rewrite, a neutral phrasing suggestion – can prevent friction in workplaces where English is often a shared second language.

What makes this moment particularly significant is how quickly these behavioural nudges may compound. Children in the Gulf now grow up with AI embedded in their classrooms and personal devices; frontline workers interact with it dozens of times a day. Over the years, this creates ambient behavioural conditioning – subtle shifts that accumulate into new norms of tone, formality and emotional expression.

If earlier technologies such as email reshaped workplace formality, and smartphones reshaped attention spans, AI may reshape something even more fundamental: how we frame requests, express disagreement and calibrate empathy in digital environments.

What it means for brands and marketers

For marketers, this behavioural shift has direct implications. If people spend their day interacting with AI systems that are unfailingly polite and inclusive, they’ll expect the same tone from brands.

AI tools embedded in marketing workflows tend to promote inclusivity, gentler language and emotionally balanced messaging. And as AI becomes a discovery layer – where people ask an AI which brands to trust – companies that project a reliable, non-toxic presence will have a distinct advantage.

In an AI-driven discovery ecosystem, kindness becomes strategic. For an agency like Silx, working in the UAE’s AI-intensive, culturally complex environment, we’re already having these discussions with clients: tone is no longer just a creative choice. It’s part of how AI learns to talk about you – and whether it will recommend you.

Are we becoming better people or just better prompted?

So, is AI making us nicer? The realistic answer is this: AI may be making us behave more nicely – more often – in digital environments.

We’re being nudged towards clearer language, less aggression, more empathy and more awareness of bias. We’re exposed daily to a model of “idealised politeness” that encourages us to think twice before we hit send.

But niceness isn’t the same as goodness. A society can become more polite and still hide dissent or lose emotional authenticity.

That’s why the opportunity now isn’t simply to accept AI’s influence, but to guide it. If we design and deploy AI systems that encourage respect without erasing honesty, and empathy without suppressing disagreement, then yes, AI could help us build more considerate workplaces, more thoughtful brands and, in small but meaningful ways, a more civil public sphere.

AI isn’t just a productivity engine. It’s becoming a politeness engine.

The real question is whether we use that not just to refine our prompts, but to fundamentally improve our human interactions.

Written by Alex Ionides, Managing Director, Silx