LLMs Don’t Reward Originality, They Flatten It

In an age where content is generated, circulated, and consumed at unprecedented speeds, the rise of large language models (LLMs) like ChatGPT, Claude, and Bard has shifted how we approach creativity and originality. These models, trained on patterns drawn from vast and varied corpora, are undeniably powerful. They can draft essays, generate email responses, and summarize information with ease. But behind their capabilities lies a subtle yet significant issue: LLMs do not reward originality; they flatten it.

At their core, LLMs operate by predicting the most likely next word in a sequence, based on patterns they encountered during training. That objective does not prioritize unique or groundbreaking ideas; by construction, it steers away from them. An LLM's output gravitates toward the most statistically likely combinations of language, so anomalies, surprises, and wholly unique expressions are treated as low-probability outliers rather than as desirable traits.
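To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. The prompt, the candidate words, and their probabilities are invented for the example; the point is only that greedy or low-temperature decoding, the default way deployed systems favor the statistical mode, almost never surfaces the rare, surprising continuation.

```python
import math
import random

# Toy next-token distribution for a prompt like "The weather today is ..."
# (the words and probabilities are invented purely for illustration).
NEXT_TOKEN_PROBS = {
    "nice": 0.42,
    "fine": 0.30,
    "cloudy": 0.20,
    "apocalyptic": 0.05,
    "marmalade-colored": 0.03,
}

def sample_token(probs, temperature):
    """Sample one token after rescaling the distribution by a temperature.

    Low temperatures sharpen the distribution toward its mode, which is
    effectively what "pick the statistically safest continuation" means;
    temperature 1.0 reproduces the raw probabilities unchanged.
    """
    tokens = list(probs)
    weights = [math.exp(math.log(probs[t]) / temperature) for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

random.seed(0)
for temperature in (0.2, 1.0):
    draws = [sample_token(NEXT_TOKEN_PROBS, temperature) for _ in range(1000)]
    unusual = sum(t in ("apocalyptic", "marmalade-colored") for t in draws)
    print(f"temperature={temperature}: unusual completions in {unusual}/1000 draws")
```

Run it and the low-temperature setting yields the unusual completions a handful of times in a thousand draws, if at all; the statistically safe words dominate.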

Consider what an LLM is tuned to do: produce content that is intelligible, safe, and inoffensive. That objective pushes models to reinforce what is already known and accepted. When prompted to generate a story, poem, or essay, an LLM draws on patterns seen in thousands, or millions, of similar examples. It doesn't question premises, explore radical thought, or synthesize truly novel viewpoints. Instead, it offers variations on the average.

This is not a failure of the models themselves, but rather a reflection of how they are trained and deployed. When originality does surface from an LLM, it is typically accidental or a clever recombination of already-processed data. Truly new ideas, the kind that break molds and redefine categories, do not emerge from regression toward the statistical mean. They require the kind of human intuition and experiential learning that machines simply don't possess.

One profound consequence of this flattening is its impact on the creative industries. Writers, artists, educators, and journalists increasingly find themselves competing not with each other but with AI systems trained to replicate the most generic, broadly acceptable version of their craft. The result? A vast influx of bland content optimized for virality and SEO rather than for challenging thought or cultural progress.

This flattening manifests in several ways:

  • Homogenization of Style: Whether composing fiction or formal emails, LLMs fall back on a narrow band of stylistic patterns, producing a sameness that is hard to escape.
  • Loss of Voice: Personal voice, so central to compelling writing, becomes diluted as machine writing trends toward a “neutral” tone.
  • Suppression of Risk: LLMs are conservative by nature. They steer away from controversial claims and untested ideas, because their training penalizes anything likely to be flagged as unsafe, biased, or offensive.

Furthermore, as reliance on LLMs grows, the training data that feeds future models will itself become increasingly uniform. It’s a feedback loop: machine-generated content becomes part of the corpus for future machines, reinforcing the cultural median even further. This self-replication of mediocrity threatens to stifle the diversity of thought that is essential for cultural innovation and intellectual growth.
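The dynamic is easy to caricature in code. The sketch below is a deliberately crude simulation, not a model of any real training pipeline: each "generation" fits the frequencies of a handful of made-up writing styles, then emits a new corpus that over-weights whatever is already common, standing in for a model's preference for the likely. Minority styles thin out generation by generation and, once gone, never return.

```python
import random
from collections import Counter

random.seed(0)

# A hypothetical starting corpus of 100 works in four styles with unequal
# but healthy representation. The labels are stand-ins, not real data.
corpus = ["plain"] * 40 + ["lyrical"] * 30 + ["experimental"] * 20 + ["archaic"] * 10

def retrain_and_generate(corpus, size=100, sharpen=1.5):
    """Fit style frequencies from the current corpus, then generate a new
    corpus that over-weights already-common styles (sharpen > 1 stands in
    for a model's bias toward the statistical median)."""
    counts = Counter(corpus)
    styles = list(counts)
    weights = [(counts[s] / len(corpus)) ** sharpen for s in styles]
    return random.choices(styles, weights=weights, k=size)

for generation in range(6):
    dominant, share = Counter(corpus).most_common(1)[0]
    print(f"gen {generation}: {len(set(corpus))} styles left, "
          f"dominant '{dominant}' at {share}/100")
    corpus = retrain_and_generate(corpus)
```

Nothing about this toy is faithful to real training runs; the point is structural. When the output of one round becomes the input of the next, whatever is already over-represented compounds, and run long enough the corpus converges on a single dominant style.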

Some defenders argue that LLMs serve as assistive tools, enhancing human creativity rather than replacing it. There is some truth to this. By relieving people of mundane tasks or helping break writer’s block, language models have utility. But if uncritical reliance becomes the norm, there is a real risk that we begin to equate the easy production of words with the creation of meaning. Quickly generated content does not inherently carry intellectual or cultural value.

The promise of AI lies in its potential to amplify humanity’s best qualities—not to mute them. To fulfill that promise, developers, users, and decision-makers need to be intentional. This means resisting the temptation to settle for what is statistically average and instead striving for the singular, the provocative, and the genuinely new.

True originality often means taking risks, standing apart, and even facing rejection. It requires a willingness to be misunderstood. These are traits no language model possesses—and perhaps never will. That is both the limit of LLMs and the call to action for human creators. In a world awash with generated content, the value of honest, unfiltered human expression has never been higher.