There is a distinctive cadence to AI-generated content. You know it when you see it. Its tone is relentlessly generic, optimized for engagement metrics rather than originality or depth. You can hear it in the milquetoast rhythm of an AI-written advertisement and in the lifeless prose of ChatGPT-generated magazine articles, offering insights that neither offend nor surprise. Especially on social media, captions increasingly bear the hallmark of artificial cadence: unnaturally cheerful, stilted, predictable, and forgettable.
AI learns from us, but it learns selectively. It is trained on vast datasets of existing content, absorbing not only our language but also the biases, limitations, and patterns embedded within it. To maximize engagement—likes, clicks, shares—AI prioritizes the familiar over the novel, the broadly appealing over the deeply resonant. The result is content that feels designed for everyone but speaks to no one.
Worse, this cadence feeds into a self-perpetuating cycle. As the internet becomes more saturated with AI-generated content, that content becomes part of the training data for future AI models. Over time, the machine trains itself on its own outputs, narrowing the boundaries of creativity further. This feedback loop risks creating a cultural landscape that is dominated by a single tone: hopelessly generic, relentlessly replicable, and devoid of individuality.
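This feedback loop can be illustrated with a toy calculation. The sketch below is purely hypothetical, not a model of any real training pipeline: five stylistic "voices" start with nearly equal shares of the culture, and each "retraining" generation over-weights whatever is already popular (the quadratic engagement bias is an illustrative assumption). Diversity, measured as Shannon entropy, collapses toward a single dominant voice.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def retrain(p, bias=2.0):
    """One generation of self-training: engagement over-weights
    already-popular styles (bias=2.0 is an illustrative assumption),
    then the result is renormalized into the next 'training set'."""
    w = [x ** bias for x in p]
    total = sum(w)
    return [x / total for x in w]

# Five stylistic voices; one starts only slightly more "engaging".
dist = [0.24, 0.20, 0.20, 0.18, 0.18]
print(f"generation 0: entropy = {entropy(dist):.2f} bits")

for gen in range(1, 11):
    dist = retrain(dist)

# After ten generations of training on its own outputs, the slight
# early advantage has compounded into near-total dominance.
print(f"generation 10: entropy = {entropy(dist):.2f} bits")
print(f"share of the leading voice: {dist[0]:.4f}")
```

The point of the toy model is only that a small initial advantage, amplified repeatedly by popularity-weighted retraining, drives diversity toward zero; no claim is made about the exact dynamics of real systems.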
AI cadence is to culture what fast food is to cuisine—ubiquitous, convenient, but ultimately unsatisfying. It fills a space without nourishing it, leaving us starved for the originality and complexity that define human creativity. As AI-generated content proliferates, and as people grow more reliant on tools like ChatGPT to iterate their ideas, it risks reshaping the rhythms and cadences of human thought. You can see this already on platforms like TikTok and X: users mimic the tone of the platforms they engage with, favoring concise, algorithm-friendly phrasing over complexity in pursuit of engagement, often without realizing how much their online personas have been shaped by the platform's incentives. Instead of speaking naturally, they adapt to the tone and style that algorithms reward, blurring the line between authentic communication and performance. This adaptation helps users thrive in a platform-driven ecosystem, but it raises the question of how much of our communication is truly our own.
Richard Dawkins once compared the spread of cultural ideas, which he called "memes," to the transmission of genetic material. Memes, like genes, replicate for their own survival, often prioritizing ease of spread over their truth or benefit to their hosts. In this way, AI operates as a hyper-efficient memetic replicator. It absorbs vast amounts of human-created content, identifies patterns, and generates outputs optimized for engagement. The process is frictionless, seamless, and profoundly reductive. The result? A flood of content optimized for replication rather than meaning, where nuance and depth are sacrificed for efficiency. As this content increases, the cultural landscape grows flatter, its intellectual vibrancy replaced by a cadence of convenience: polished, ubiquitous, but ultimately hollow.
Consider the implications: captions on social media adopt a friendly but impersonal tone, while headlines lean into hyper-optimized emotional triggers. Over time, these speech patterns don’t just define how we consume content—they begin to influence how we communicate, think, and create. As writers, marketers, and creators internalize the “rules” of AI-optimized content, their work increasingly mirrors the machine’s rhythms. The boundaries blur between human and AI-generated creativity, not because the AI has become more human, but because we have adapted to sound more like it.
Human speech has always been shaped by culture. Picture the oral traditions of a village, the sermons of a church, the rhythm of political oratory, the verse of a poet. These cadences reflected distinct values, histories, and ways of being. But AI, driven by replication and optimization, reflects none of this. It is a homogenized tone, designed to appeal to as many people as possible while demanding as little as possible. If this becomes our default mode of communication, what happens to the richness of human expression? Do we lose the ability to speak in ways that are deeply personal, challenging, or strange? The danger is not just that AI will mimic us—it’s that we will begin to mimic AI, narrowing the scope of our thought to fit within its patterns.
I recall one of the primary arguments in the discourse surrounding Universal Basic Income and automation: that if only we had more free time, we would unleash our creative potential. It seems logical on its face: take away the burdens of survival, and humans, freed from toil, will naturally turn to higher pursuits like art, philosophy, and innovation. Reality begs to differ. If this hypothesis were true, then our time—an era overflowing with labor-saving devices and unprecedented leisure—should be one of incredible artistic vigor. Instead, we see a cultural landscape increasingly dominated by consumption, passive entertainment, slothfulness, and ennui. The truth about human nature is that without structure, we lose focus, and without focus, even the gift of free time becomes a weight that drags us into apathy.
We exist in an age of extraordinary abundance and comfort, where convenience has become one of the most profitable commodities. Convenience shelters us from exertion, effort, and the risks that come with creativity and struggle. AI-generated content optimizes for convenience—streamlining cultural production to such an extent that the friction necessary for genuine creativity is lost. What this creates is a tyranny of sameness, where cultural depth is sacrificed for the convenience of mass production.
Creativity requires friction: the slow, deliberate process of wrestling with ideas, confronting failure, and pushing boundaries. It demands effort, and through effort it gains depth. The German philosopher Heidegger understood that friction is not a flaw to be smoothed away; it is part of what makes art, and life, meaningful. AI erases this friction, producing outputs that mimic creativity without experiencing it. A machine cannot struggle, and without struggle there is no soul. AI's relentless pursuit of replicable engagement reduces creativity to easily digestible tropes, leaving a void where complexity and effort once thrived.
This speaks to something deeper about human nature, something our spiritual forefathers have warned us about again and again: left to our own devices, we gravitate toward sloth. True creativity demands intention and discipline—qualities that do not emerge spontaneously in the absence of effort but must be cultivated.
Artificial intelligence thrives by removing friction, generating optimized outputs designed to appeal to as many people as possible. Its content—polished, broad, and efficient—is trained on the data we feed it: our words, images, and ideas. But if the systems we train now begin to shape us, how long before the conversation becomes indistinguishable from the machine’s voice? How quickly might we lose the ability to discern what is truly ours from what the AI has subtly fed back to us?
This is where we circle back to memetics. Richard Dawkins’ theory of memes describes how cultural ideas replicate and evolve, often favoring traits that ensure their survival. AI accelerates this process. It produces and amplifies the kinds of ideas that spread most easily—those that are simple, broadly appealing, and frictionless. Over time, this feedback loop creates a cultural landscape dominated by content optimized for replication, not originality or depth.
But the danger doesn’t stop there. As AI-generated content metastasizes, it begins to inform the very patterns of thought we use to create new ideas. Social media algorithms have already shaped our language, attention spans, and even emotional responses. AI threatens to do the same on a larger scale. The more we consume its outputs, the more our own creative impulses are shaped by its rhythms and cadences. The distinction between what is human and what is machine becomes blurred. In this simulacrum, as Baudrillard might say, the copy doesn’t just replace the original—it becomes the author.
If AI-generated culture becomes the standard by which we measure creativity, how quickly will our sense of individuality erode? How long before the effort of true creativity feels foreign, even unnecessary? We risk entering a world where human expression is so thoroughly shaped by the machine that we can no longer recognize what is authentically ours. And in this world, the ultimate friction—between our humanity and the tools we create—disappears entirely, leaving us adrift in a sea of seamless simulations.
The goal is not to reject AI outright but to ensure it remains a tool in service of our humanity rather than a substitute for it. Homogenization is not inevitable, but resisting it demands awareness. Within the danger of technological dominance, Heidegger saw the potential for a saving power: the human capacity to recognize technology's influence and choose to preserve what is meaningful. This resistance begins by valuing effort, friction, and the irreplaceable depth of human creativity. The rise of fast food inspired a counter-movement toward local, slow, and sustainable dining. Similarly, the dominance of AI-generated culture may call for a return to the messy, unpredictable, and deeply personal processes that define human creativity. By stepping outside the mindset of optimization, we can reclaim a culture that reflects our humanity rather than our algorithms. In this context, how can we encourage one another to remain vigilant? How can we harness the power of AI to unlock our creativity rather than subsume it?
Artificial intelligence thrives on patterns. It analyzes existing content, extracts what performs best, and replicates it. At first glance, this might seem like a creative boon—AI distills the essence of what resonates most and amplifies it. But over time, this process creates a feedback loop: AI generates content based on existing data, that content becomes dominant, and future AI models train on it. With each iteration, the loop squeezes out the outliers, the experiments, the uncomfortable edges that give culture its vitality. What remains are the safest, most broadly appealing elements—a monoculture of ideas and expression. As an author or artist, you should be constantly endeavoring to disrupt those patterns. Set limits on AI's role in your process. ChatGPT can be an incredible proofreading assistant, but it should not be a source to copy and paste from. It can be an amazing tool to converse with and iterate ideas, if you engage with it actively. Feed it unexpected combinations or paradoxical ideas, then challenge yourself to build on the unexpected results rather than defaulting to the polished, familiar ones. Don't let AI water down your voice. If you are writing, use it to brainstorm plot points, but insist on crafting the dialogue and themes yourself. You may go back to the AI for proofreading or removing redundancies, but then proofread the proofread. You can let Midjourney generate image references for your artwork, but you still have to actively collaborate, creating your own work from the generated images. Let AI provoke, but not define, your creative direction.
Fast food serves billions because it eliminates variables—flavors are predictable, portions controlled, and every outlet offers the same experience. But what is gained in convenience is lost in richness. The spices of a local cuisine, the careful hands of a skilled chef, the tradition imbued in a family recipe—these are sacrificed on the altar of efficiency. AI-generated culture does the same: its outputs may be polished, but they lack the unpredictability and depth that make creativity human.
The rise of AI may feel like an unstoppable wave, but it is not destiny. The simulation of culture through AI is seductive, offering convenience and scalability. But in this hyperreal world, where simulations define reality, we risk losing not only our cultural depth but our ability to imagine something truly human. The question is not whether AI will shape our culture, but how we will shape AI. The soul of our collective imagination depends on our answer.
Great post! I've been increasing my use of ChatGPT and Google's Gemini lately and finding areas where it helps and areas where it's not quite there yet. I've found that the more creativity and ingenuity I put into the prompts, the better the results. In the same way that learning how to use Google effectively was a key skill in the past 25 years, learning how (and more importantly, when) to leverage AI will be a key skill in the next several years.
I've always been a stickler about grammar and spelling errors, but (weirdly) now I appreciate them because I know they came from the person, not AI.
You have expressed so articulately something that lives for me as a sort of wariness and vague discomfort. I haven’t engaged much with AI but the few times I have done so knowingly have been exactly like a fast food experience… sounded good at first but ultimately left me feeling queasy and unsatisfied. Thanks for expressing so clearly what I could not quite put my finger on!