Artificial intelligence has developed a distinctive writing style that readers are learning to identify almost instantly. From student essays to corporate communications, AI-generated text carries unmistakable markers that reveal its algorithmic origins.
Sam Wolfson explores this phenomenon in the Guardian. He describes how AI writing leans heavily on specific patterns: the “It’s not X, it’s Y” construction, excessive use of em dashes, and an obsession with words like “delve,” “tapestry,” and “intricate.”
The statistics are striking. In academic papers on PubMed, usage of the word “delves” increased by 2,700 percent between 2022 and 2024. Self-published books on Amazon now feature hundreds of protagonists named Elara Voss or Kael, names that barely existed before 2023.
AI writing suffers from what engineers call “overfitting.” The systems learn that certain features appear in high-quality writing and then overuse them. Em dashes, for instance, appear frequently in literary prose, so AI systems saturate their output with them. The result is text that aims for sophistication but achieves a grating, formulaic quality instead.
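To make those tells concrete, here is a minimal sketch of how one might measure them in a passage of text. The word list and sample sentence are illustrative assumptions, not a published detector or anything from Wolfson’s reporting.

```python
import re
from collections import Counter

# Illustrative list of stylistic "tells"; the specific words are
# assumptions chosen for demonstration, not a published detector.
MARKERS = ["delve", "delves", "tapestry", "intricate", "liminal"]

def marker_report(text: str) -> dict:
    """Count em dashes and marker words per 1,000 words of the input text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    counts = Counter(words)
    report = {"em_dashes_per_1k": text.count("\u2014") / total * 1000}
    for marker in MARKERS:
        report[f"{marker}_per_1k"] = counts[marker] / total * 1000
    return report

if __name__ == "__main__":
    sample = ("Let's delve into the intricate tapestry of this idea\u2014"
              "a liminal space humming with quiet ambition.")
    for name, rate in marker_report(sample).items():
        print(f"{name}: {rate:.1f}")
```

Normalizing per 1,000 words is what makes comparisons like the PubMed “delves” statistic meaningful across documents of different lengths.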
Fiction generated by AI reveals even stranger patterns. The systems describe everything as “quiet” or featuring a “soft hum,” even when writing about traditionally loud environments like parties. Researchers found that a supposedly creative ChatGPT model used words like “quiet,” “echo,” “liminal,” and “ghosts” seven times in a 1,100-word story.
The technology also fixates on the rule of threes, arranging information in triplets far more often than human writers do. When being dismissive, AI systems consistently use the formula “an X with Y and Z,” producing phrases like “a Reddit troll with Wi-Fi and billions.”
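As a rough illustration of how mechanical that construction is, the pattern can be captured with a simple regular expression. The regex and the second example sentence below are illustrative assumptions rather than anything from the article; the first example reuses Wolfson’s phrase.

```python
import re

# Rough, illustrative pattern for the dismissive "a(n) X with Y and Z" formula.
# The regex is an assumption for demonstration; real phrasing varies far more.
DISMISSIVE = re.compile(
    r"\ban?\s+([\w\s-]+?)\s+with\s+([\w\s-]+?)\s+and\s+([\w\s-]+)\b",
    re.IGNORECASE,
)

examples = [
    "He is just a Reddit troll with Wi-Fi and billions.",
    "She dismissed it as a toy with buttons and bluster.",
]

for sentence in examples:
    match = DISMISSIVE.search(sentence)
    if match:
        x, y, z = match.groups()
        print(f"X={x!r}, Y={y!r}, Z={z!r}")
```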
AI’s fundamental limitation is that it cannot experience the physical world. Virginia Woolf could describe a view as a “great plateful of blue water” because she understood both hunger and landscapes. AI systems, on the other hand, learn only through statistical correlations in text, leading them to attach sensory language to abstract concepts. They write about emotions that “taste of metal” or days that “taste of almost-Friday.”
The impact extends beyond obviously AI-generated content. British parliamentarians suddenly started opening speeches with “I rise to speak,” a phrase common in American but not British politics. Researchers at the Max Planck Institute found that human academics increasingly use AI language patterns in their own extemporaneous speech.
A survey by Britain’s Society of Authors found that 20 percent of fiction writers and 25 percent of nonfiction writers now use generative AI for some of their work. Major publications including Business Insider, Wired, and the Chicago Sun-Times have published articles suspected to be AI-generated.
Wolfson argues that as AI-generated text becomes ubiquitous, humans are unconsciously adopting its patterns. The distinctive voice of the algorithm is becoming the voice of everyday communication.