AI Is Coming for Culture
I have watched automation move through industries in predictable waves — logistics, manufacturing, data entry — always targeting the repetitive, the measurable, the low-variance.
Culture seemed immune. The argument was intuitive: creativity requires experience, taste requires suffering, authorship requires a self. None of those conditions seemed transferable to a machine.
I am no longer confident in that position.
What changed is not that AI became creative in any philosophically meaningful sense. What changed is that the outputs of large generative models are now culturally functional. A song does not need to emerge from grief to move a listener. A novel does not need a lived author to sustain attention across three hundred pages. The audience, it turns out, does not verify provenance — it responds to signal.
And AI has learned to generate signal at scale.
The Production Layer Has Collapsed
Culture has always had a bottleneck: the cost of production.
- Recording music required studios.
- Publishing required editors, printers, distributors.
- Filmmaking required crews.
These constraints shaped what got made and who made it. They were economic filters that, however imperfectly, tied cultural output to human effort and institutional risk.
Generative AI has collapsed that layer. The marginal cost of producing a competent artifact — a song, an essay, an image, a screenplay — is approaching zero.
This is not a metaphor. It is a structural shift in the economics of cultural production. When supply becomes effectively infinite, the scarcity that previously defined value disappears. What replaces it is not yet clear.
Authorship as a Trust Mechanism
Authorship was never purely about credit. It was a trust mechanism. Knowing that a human made something told the audience something about its origin — about the conditions, pressures, and intentions behind its existence.
That information shaped interpretation. A memoir carries weight because it implies a body that lived through events. A manifesto implies conviction. A confession implies risk.
AI severs that chain of implication. The output exists without a subject behind it. This is not inherently destructive — anonymous works have always existed — but it scales the problem. When the majority of cultural artifacts carry no recoverable authorial intent, the interpretive frameworks audiences use begin to degrade.
The audience learns to consume without interpreting. Culture becomes surface.
What Remains Structurally Resistant
Not everything in culture is equally vulnerable. Some forms resist automation not because they are complex but because their value is constitutively tied to human origin. These include:
- Live performance: The value is the body in space and time. Presence cannot be generated.
- Testimony: Accounts of personal experience derive authority from the specificity of a life. AI can simulate the form but not the warrant.
- Craft with visible provenance: Handmade objects, hand-lettered work, performances with documented process — anything where the trace of human labor is part of the artifact itself.
- Relational work: Cultural production embedded in communities, rituals, and ongoing relationships. The work is inseparable from the social context that generates it.
These categories are not niches. They represent a significant portion of what humans have historically valued in cultural production.
The question is whether they can remain economically viable when surrounded by an infinite ocean of competent, costless alternatives.
The Redistribution of Authority
As AI absorbs the production layer, authority migrates upstream. Curation, taste, selection, framing — these become the scarce inputs.
I have observed this dynamic in my own consumption habits. I spend less time evaluating individual artifacts and more time evaluating the sources that surface them. The curator becomes more important than the creator, not because curation is more difficult, but because it is the last point at which human judgment is legibly applied.
This is not unprecedented. The history of recorded music involved a similar redistribution — from performers to labels, then to playlists, then to algorithmic recommendation.
AI accelerates this logic to its endpoint. At the limit, culture becomes a filtering problem, and the entities that control the filters control what culture means.
The Invariant: Culture as Coordination
Beneath every specific claim about AI and culture lies a more durable question: what is culture for? My working answer is that culture is a coordination mechanism. It creates shared reference points, common emotional vocabularies, and collective identities. It allows strangers to recognize each other as members of the same interpretive community.
AI threatens this function not by producing bad artifacts but by producing too many artifacts too quickly for shared reference points to stabilize.
Culture requires repetition and recurrence — the same works revisited across time, across generations, across contexts. A system that can generate infinite variation on demand undermines the conditions for that kind of shared sediment to form.
The problem is not that AI is coming for culture. The problem is that it may dissolve the conditions under which culture can do its work.