We were taught to be clear, logical, and, in a way, predictable. Our sentence structures were meant to be consistent and balanced. We were explicitly taught to avoid the very "burstiness" that "detectors" now seek as a sign of humanity. A good composition flowed smoothly, each sentence building on the last with impeccable logic. We were, in effect, trained to produce text with low perplexity and low burstiness. We were trained to write in precisely the way that these tools are designed to flag as non-human. The bias is not a bug. It is the entire system.
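For readers who want to see what "burstiness" means in practice, here is a rough, illustrative sketch, not any detector's actual code: one common proxy treats burstiness as how much sentence length (or sentence-level predictability) varies across a passage. The prose we were drilled to produce, uniform and balanced, scores low on exactly this kind of measure.

```python
import statistics

def sentence_lengths(text: str) -> list[int]:
    # Naive split on terminal punctuation; real detectors work from a
    # language model's token-level probabilities, not word counts.
    for mark in ("!", "?"):
        text = text.replace(mark, ".")
    return [len(s.split()) for s in text.split(".") if s.strip()]

def burstiness_proxy(text: str) -> float:
    # Rough proxy: how much sentence length varies, relative to the mean.
    # Uniform, evenly balanced sentences score low.
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.pstdev(lengths) / statistics.mean(lengths)

schooled = ("The essay begins with a thesis. Each paragraph supports it. "
            "Each sentence follows from the last. The conclusion restates the point.")
loose = ("I wrote it at midnight. Why? Because the deadline, the kids, the power cut, "
         "everything landed at once. Done.")

print(burstiness_proxy(schooled))  # low: uniform sentences, the "machine-like" signature
print(burstiness_proxy(loose))     # higher: varied lengths read as "human"
```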

Recent academic studies have confirmed this, finding that these tools are not only unreliable but significantly more likely to flag text written by non-native English speakers as AI-generated. (And, again, we’re going to get back to this.) The irony is maddening: You spend a lifetime mastering a language, adhering to its formal rules with greater diligence than most native speakers, and for this, a machine built an ocean away calls you a fake.

I'm Kenyan. I Don't Write Like ChatGPT. ChatGPT Writes Like Me.