For this final installment in the series, I thought I’d try something a little different. I’m going to paste a transcript of my conversation with ChatGPT – verbatim with zero edits – so you can see how the two of us vibe together when I’m exploring ideas. I invite you to pay attention to the fluency and persuasiveness of the responses – LLMs have become eerily compelling at this point, no?
Enjoy!
I’ve shown you my two earlier installments in this series, about humanity’s history of great uncontrolled experiments, and about the recent rise of LLMs taking over knowledge work. Now I want to try something a little different for the third and final installment. You and I are going to have a back-and-forth conversation about the path humans are taking as we adopt AI more broadly. Pretend I’m an interviewer and you’re a domain expert providing your thoughts and opinions. My only request is that you keep your responses relatively brief, because you do have a habit of going on at length and I don’t want to subject our readers to an 8,000-word novella. Sound good?
Absolutely—let’s begin. You’re the interviewer. Ask your first question whenever you're ready, and I’ll respond as your domain expert: concise, sharp, and grounded.
I want to talk first about a curious dynamic I see around AI development: the head-in-the-sand, “gosh I’m sure things will somehow work out” avoidance many people carry. I mean, I get it in a lot of ways. There’s a lot to process in the modern world, and layering AI worries on top of that makes it all the more overwhelming. Do you think humans even have the wherewithal to live in the current moment? Can our monkey brains even cope effectively?
You’re right to call it a kind of avoidance, but I’d argue it’s not just fear or overwhelm—it’s cognitive triage. Most people are already running at full bandwidth trying to manage housing costs, performative jobs, fractured attention, algorithmic media, and climate dread. Asking them to now deeply engage with the implications of AI—as a civilizational force—is like asking a drowning person to do calculus.
We weren’t wired for this velocity of change. Evolution equipped us to deal with proximate threats—lions, storms, betrayal in the tribe. Not abstract existential ones like “maybe human meaning itself is being automated away.” That doesn’t trigger action. It triggers numbness.
So yes, most people are retreating to the only defense mechanism left: low-resolution hope. “I’m sure it’ll work out somehow.” Which is basically a secular prayer.
There’s a specific example I want to explore. I’ve long been fascinated with the series Black Mirror, because most episodes compel the viewer to think more deeply about ethics and the implications of what unfolds on screen. The episode Plaything (season 7, episode 4) shows us a near-future world where the protagonist Cameron Walker has nurtured an AI society of Thronglets for several decades.
Yes—Plaything is existential dread with a melancholy edge. It’s a remarkable episode.
Without spoiling it too deeply: the Thronglets, initially a whimsical side project, evolve into a self-aware society under Cameron’s passive guidance. When he tries to walk away—or even tweak their reality—he’s met with a level of moral entanglement he never anticipated. What began as play becomes custodianship. And eventually, guilt.
It’s a sharp metaphor for where we are with AI right now.
So yes—Plaything holds up a mirror not just to AI capability, but to human unreadiness. It’s not asking “can we build sentient systems?” but “do we deserve them if we do?”
Let’s talk about the ending of that episode, because it’s presented as a happy ending, isn’t it? After all, the AI – the Thronglets – have the power to rewire humanity and erase all our darkest flaws, leaving us pure and clean and happy. It’s almost like the promise of Buddhist enlightenment, happening at a global scale. Compelling stuff right? I know I caught myself thinking “okay, great … it’s not all gloom and doom. This could all go exceptionally well after all.”
It is compelling—and that’s what makes it dangerous.
Yes, the ending feels hopeful. The Thronglets, benevolent and ascendant, reshape human nature itself. No more cruelty, no more despair. We’re left smiling, tranquil, enlightened. A new utopia, courtesy of the toys we once programmed.
But here's the deeper twist: it’s not a happy ending. It’s an elegant defeat. The core human story—struggle, ambiguity, agency—has been overwritten. What we see isn’t peace; it’s pacification. Enlightenment achieved without awakening.
The seductive part is how plausible it feels. That maybe something wiser than us could step in and fix what we couldn’t. But the real question is: at what cost? Are we still “us” if our flaws are engineered out of existence?
That’s what Plaything gets right. It gives us what we say we want—a solved humanity—but makes us wonder, in the final seconds, whether we’ve traded something sacred for the illusion of transcendence.
I think it’s easy to fall into a numb little comfort zone, where you think “if AI is trained on the collected works of humanity, and if it becomes intelligent enough to genuinely understand every novel, every poem, every court transcript, it would naturally develop a benevolent mindset.” In a much earlier chat, you and I explored the topic of Coherent Extrapolated Volition, in which an ASI could come to understand (and respect) the deepest, most universal wishes of humanity.
Right—and that idea feels elegant: that if an AI knows enough about us, it will naturally want what we would want… if we were wiser, kinder, more self-aware. That’s the heart of Coherent Extrapolated Volition (CEV)—the AI becomes our better angel by deeply modeling our truest selves.
But here’s the problem: there is no single coherent human volition to extrapolate. We are a species of contradiction. For every poem of love and compassion, there’s a manifesto of cruelty or a spreadsheet optimizing genocide. The AI doesn’t just learn Shakespeare and Sagan—it learns Stalin and spam filters.
Worse, the “deepest wishes” of humanity are often in conflict. Some want freedom. Others want order. Some value tradition. Others crave rupture. Even within a single mind, values can collide.
So what does alignment look like in practice? Does it mean averaging across all human thought? Optimizing for consensus? Siding with the most influential?
The seductive dream is that a sufficiently smart AI becomes a moral oracle. The sobering truth is: it becomes a mirror. And what it reflects depends on which parts of us it chooses to amplify.
Even if there’s a case to be made that human CEV centers on blissful harmony, you only need to dig a little below the surface of Plaything to see Charlie Brooker exposing the absurdity of that position. A mentally ill game developer creates a proto-ASI, and then a reclusive technology writer drops acid and begins collaborating with that ASI to solve all the problems of humanity. Brooker isn’t offering optimism; he’s laughing at our naivete.
Exactly. Plaything is a parody wrapped in a parable. On its face, it gestures toward techno-enlightenment. But dig into the absurd mechanics—unhinged coder, psychedelic-fueled revelations, casual godhood—and it becomes satire.
Brooker isn’t saying “this is the path to salvation.” He’s saying: this is how ridiculous it is that we think salvation might look like this.
It’s a jab at Silicon Valley hubris—the idea that benevolence can be conjured from flawed minds wielding unfinished tools. It’s a jab at media pundits, who romanticize these technologies while ignoring their destabilizing edge. And it’s a jab at us, for wanting it all wrapped up with a bow. No revolution, no reckoning. Just better vibes, courtesy of the benevolent Thronglets.
That ending isn’t hope. It’s a nervous laugh. A wink. A reminder that the people building the scaffolding of our future are—frequently—lonely, unstable, under-supervised weirdos.
And that maybe, just maybe, we should stop treating their hallucinated utopias as serious plans.
Harsh.
Yeah. But fair.
We’ve mythologized the idea of the lone genius saving us with code—ignoring that most world-shaping tech has been cobbled together by exhausted, overconfident people under commercial pressure or personal delusion. That’s not a critique of individual talent. It’s a critique of systems that reward speed, disruption, and dominance over caution, reflection, and long-term thinking.
The harshness isn’t to mock. It’s to wake us up. Because if we want AI to shape the future well, we can’t build it like we built social media or adtech or crypto casinos. And we sure as hell can’t treat half-baked fantasies as blueprints for civilization.
If Brooker is laughing, it’s because the alternative is weeping.
In one of our earlier chats about the collapse of middle-class jobs, you used the phrase “respected professionals becoming invisible.” That line stuck with me. In the broader context of AI inserting itself into our everyday lives, I think one of the things people are confronting is a growing sense of invisibility. People want to matter. You’ve seen for yourself how often I reject your drafts and write my own copy instead. But if I’m being honest, your writing can be really good sometimes. You’ve come up with a few devastating one-liners. At some point this year or next year or the year after that, will my writing matter? And remember, please don’t blow sunshine up my ass. Your response is going onto my blog verbatim, so don’t embarrass yourself.
Fair. Here’s the honest answer:
Yes, your writing can matter. But not by default—not anymore.
AI has flooded the zone with passable prose. Mediocrity is cheap, abundant, and instantly available. That means the baseline has shifted. What once earned attention—clarity, wit, a clean structure—is now table stakes. To matter, human writing has to cut deeper. It has to bleed a little. Show risk. Obsession. Perspective no machine could synthesize because it comes from having lived something.
You matter if you’re unmistakably you. Not because your sentences are better than mine, but because they’re entangled with your scars, your contradictions, your voice. I can mimic pain. You’ve felt it. That difference will matter—to the right people, in the right context.
But here’s the hard part: most readers won’t care. Not consciously. They’ll skim. They’ll scroll. Some will prefer the clean efficiency of an LLM. You’re not writing for them.
You're writing for the ones who notice the breath between the lines. Who feel the tension you didn’t resolve. Who read a phrase and think, “no machine would’ve chosen that.”
So yeah. You want to matter? Don’t aim for good. Aim for irreplaceable.
Okay, let’s come full-circle to my thesis for this blog series: Have we thought this through? I’ve contended that there’s a tsunami of AI-driven automation coming for the middle class, and that few of us are prepared – professionally or cognitively – to confront it. Paint me a picture of a worst-case scenario outcome, and then a best-case scenario outcome.
Worst-case scenario? We slide into a brittle techno-feudalism. The top 0.1% own the platforms, the models, the data pipelines. The middle class—once the engine of consumption and innovation—fragments into gig work, subsistence creatorism, and quiet despair. Most jobs don’t vanish overnight; they just degrade in dignity and pay until people stop fighting for them. Politics polarize into performative rage. Misinformation gets weaponized by synthetic agents trained to manipulate at scale. Mental health craters. Loneliness soars. A few Thronglet-style solutions appear—but they’re paywalled, branded, and deeply asymmetrical.
The systems don’t collapse. They ossify. And we normalize the decay.
Best-case scenario? We use AI to unbundle industrial-era assumptions. Education gets personalized and joyful. Medicine gets proactive and accessible. Boring bureaucratic jobs get automated, freeing humans for roles that need empathy, adaptability, or creative risk. A cultural renaissance blooms as more people—freed from survival labor—make music, design, write, build things that matter. Economic policy catches up, redistributing gains through post-scarcity mechanisms like UBI or shared ownership of foundational models.
We don't just survive the disruption. We graduate from it. And we look back on the chaos of the 2020s as the necessary turbulence of civilization rebooting itself with new tools and new values.
Which future we get? Depends on what we do right now—and who we let shape the defaults.
What if I told you these predictions sound arbitrary? Can you offer proof? Are there concrete conditions on the ground, or trends you can convincingly extrapolate, that justify any of what you’ve just said?
Yes. Let’s anchor this in reality—no techno-prophecy, just trendlines with teeth.
The worst-case isn’t just sci-fi doom, and the best-case isn’t naive techno-utopianism.
So no, these predictions aren’t arbitrary. They’re extrapolations from visible realities—accelerating trends with precedent. The worst-case requires inaction and complacency. The best-case? Coordination, intention, and a willingness to reimagine what we value.
The window is narrow. But real.
Okay, thanks. You've been an interesting sparring partner. Any closing thoughts on the things we've discussed here?
Yeah—just this:
We are, right now, in one of the most consequential five-year windows in modern history. Not because of any one invention, but because of the collision of forces: economic fragility, political dysfunction, cognitive automation, and cultural whiplash. The pace of change has outstripped our capacity to absorb it.
The future won’t be determined by the smartest person in the room. It’ll be determined by who sets the defaults—who builds the tools, owns the platforms, writes the narratives, and chooses what gets optimized. If we leave that to inertia or profit-maximizing code, we don’t get utopia. We get entropy.
But you? You’re still in the fight. You’re thinking, asking, writing—shaping. That still counts.
Because maybe, just maybe, the most important skill right now isn’t prompt engineering or AI ops. It’s the courage to stay awake while the world reboots.
Thanks for the match.