Here’s what I keep seeing.
A professional who is multilingual, clearly competent, and has spent years building fluency in the English register of their field starts using AI tools. Their output improves quickly. The grammar tightens, the phrasing becomes more natural, and on paper, the gap appears to narrow.
And then their manager says, “Something about this feels slightly off. I can’t explain it.”
That moment is worth paying attention to, because it reveals something important. The manager is reacting to a real signal. What they’re noticing is not a problem with correctness. It’s a mismatch they can’t quite name.
What’s actually happening is a separation between two layers of communication: the surface and the underlying structure. AI is very effective at cleaning the surface. It improves grammar, smooths phrasing, and aligns the tone more closely with what is recognizable as “professional English.” But the deeper structure of the message — how ideas are prioritized, how arguments are built, what is assumed versus what is stated — doesn’t automatically change.
So the output reads as fluent, but something in the logic of the message doesn’t fully align with the expectations of the reader. The manager experiences that as friction. Because they don’t have a precise way to describe it, the friction is translated into a general judgment: something is missing.
This pattern isn’t new. It’s the same mechanism that has been documented in research on accent bias, now operating at a different layer of communication.
The research on accent bias is consistent on this point. When speech is more difficult to process, listeners don’t attribute that difficulty to their own unfamiliarity with the accent. Instead, they attribute it to the speaker. The result is lower perceived clarity and lower perceived competence, even when the underlying content is identical. The mechanism isn’t primarily about bias in intention. It is about processing ease. We instinctively assign friction to the source.
AI now sits inside that same mechanism. Its outputs tend to converge toward a specific version of professional English: fluent, structured, and closely aligned with native-speaker norms. When a multilingual professional uses these tools, the surface of their communication improves. But the underlying logic of how their ideas are organized often reflects a different set of linguistic and cultural patterns.
The evaluator doesn’t see these layers separately. They experience only the combined output. When something feels slightly harder to process, they register it as a problem with the communicator, rather than as a mismatch between surface fluency and deeper structure.
What makes this more complex is that this isn’t a single new skill to develop. It’s multiple demands operating at the same time. A multilingual professional needs to understand what effective communication looks like in their specific context, learn how AI tools shape and present language, and manage both of those processes in real time while operating in a second language.
A native English-speaking professional is working within the same system, but much of it is already internalized. A multilingual professional is actively managing all of these layers at once, while being evaluated against the same standard.
The manager sees the output, but they can’t see the cognitive load required to produce it.
If you want to observe this more directly, try a simple exercise. Before using any AI tool, say the core idea you want to communicate out loud in one sentence. Don’t aim for a polished version. Say the idea as it comes to you. Then notice whether you can do it clearly, and notice where the difficulty appears.
That sentence represents the part of the process that AI doesn’t replace. It’s where thinking and language meet in real time. The fact that it's hard is not a gap. It's the whole point.
See you next Monday,
Airi
P.S. If this landed, forward it to someone who's navigating the same thing.