Sharp Edges

January 30, 2023

This little treatise called Caviar Cope was making the rounds in the last week or two. It properly decries and laments the declining state of media today, as any philosopher of the age worth their salt should. And it resonated too. I live some of this, not as an elite, but as a déclassé and faux-elite PMC, as much as I’m loath to admit it. I love the All-In podcast. I felt sucked into White Lotus (although I did stop after a couple of episodes). But the final point is far more striking:

…These shows are not trying to say anything pointed. They’re trying to arrive at whatever pastiche of scenery and signifiers would be maximally rewarded by the megamachine. And right now, the optimized formula is: Grandiose scenery + absurd wealthy people hilariously affirming and negating everything at once + no clear plot or value system that could offend or stress anyone.

The megamachine. That’s us. That’s It. That’s everything. The global consciousness of humanity. Today our social graph has condensed so far that any meme is rated by its ability to distribute in just one hop. That’s what Likes and Retweets and Quotes are. The more Likes you have, the more value you have to humanity. It’s all directly measurable. You manage what you measure, and so we all try to go as viral as we can.

ChatGPT is on everyone’s mind too because we worry that the idea of creative human thought is over. The most thoughtful piece I’ve seen on this is Ian Leslie’s take, which describes how we’re letting robots change the ways we are human and echoes Brian Christian’s concern:

“We are in danger of losing control of the world, not to AI or to machines as such but to models. To formal, often numerical specifications for what exists and for what we want.” -Brian Christian

ChatGPT isn’t smart. It’s just incredibly predictive bullshit. As Scott Alexander points out, “More intelligence and training make AIs more likely to endorse all opinions, except for a few of the most controversial and offensive ones.” It’s trained on us, and so it’s biased towards what we think. ChatGPT’s output is a Keynesian beauty contest on what we think. It’s not outputting any truth; it’s outputting what humanity seeks to perceive as truth. As it gets more data, it’s better able to model the output of humanity. There are a lot of really bad five-paragraph topical essays out there, so it shouldn’t be a surprise that ChatGPT writes uncanny approximations of high school and college essays.
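The mechanism behind that claim can be seen in miniature. Here is a toy next-word predictor (an illustrative sketch, not ChatGPT’s actual architecture; the corpus and function names are invented for the example): trained only on what people have already said, it can do nothing but parrot the majority pattern in its training text.

```python
from collections import Counter, defaultdict

# Invented toy corpus: three near-identical "opinions" from the crowd.
corpus = (
    "the truth is what we say it is . "
    "the truth is what we repeat . "
    "the truth is what we like ."
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    """Return the most frequently observed continuation."""
    return following[word].most_common(1)[0][0]

# Generate by always choosing the consensus continuation.
out = ["the"]
for _ in range(5):
    out.append(predict(out[-1]))

print(" ".join(out))  # reproduces the crowd's dominant phrasing
```

The model never evaluates whether anything is true; it only surfaces the most common continuation in its data, which is the beauty-contest dynamic in its simplest form.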

If I were to define this epoch of humanity, I would call it the Performative Age. We broadcast ourselves as loudly as we can, calling out the right shibboleths to the right audiences, just like SBF said. Fame or reach isn’t defined by originality or even authenticity, but by the ability to pander to the algorithms that sort and sift the world’s information and are bred to push a never-ending dopamine rush of attention. This is what entertainment is now: play-acting a vague script to confirm the biases of different segments of the population.

It could still be true that this is just a camera and not an engine. Maybe this has always been true, and all of our vaunted algorithms only make it more direct and transparent today. Russ Roberts points out during his interview of Ian Leslie that every one of the grad school admissions essays he read from his students was identical. Each described a death close to the writer that caused a despair they were able to overcome, and each broadcast laudable but vague goals about improving the world. They were trying to pander to what they believed their audience wanted.

Whether these tools are cameras or engines probably doesn’t matter. The end result is that they’re leading us to self-select our identities and preferences based on the audiences we hold dear. Through algorithm and influence, we’re coerced into performing: what we think, what we buy, what we write, how we act.

No tool is so powerful that it’s worth sacrificing our humanity. And that’s exactly what’s at stake. Hirschman described three different paths when faced with deteriorating conditions: Exit, Voice, and Loyalty. We could work to effect change using voice and loyalty I suppose, but I’m not even sure what I’m loyal to in this performative system. And the whole point of the system is to subsume voice and make it a part of the algorithm, shouting to the audience.

The only real solution is to Exit.

“It is not worth an intelligent man’s time to be in the majority. By definition, there are already enough people to do that.” -GH Hardy

Russ Roberts cites a CEO who “liked people with sharp, jagged edges because they were the ones who changed the world and changed the company; whereas the pretty good, standardized one-size-fits-all people that get stamped out by an industrial process, while pleasant, are not going to be the game changers.” Our industrial processes - our Systems - are more encompassing than ever before. In a prior age, the Systems that the CEO meant would be high school or college or the tired but stable businesses that had been around for years. But today the Systems we have are the ones everyone is using: Facebook, LinkedIn, Twitter, TikTok, Google, ChatGPT. These Systems are available everywhere and encapsulate the entire world.

People that make real change have sharp edges. They don’t fit in. They diverge from the mean over time. They break out of whatever Systems they’re in.

So how do you Exit from social media and algorithms? How do you keep your sharp edges? And your humanity?
