English Is The New STEM

August 08, 2023

Every time I sit down with a blank sheet in front of me I’m scared and excited. I’ve usually spent the last day or two grappling with some text or interview or idea in my head. I’ve had some moments of clarity but I’m still working it out. That is, so they say, one of the primary reasons to write: to figure out what it is we actually think.

And so I try it. Over and over. I sit down excited to breathe into the world the uniquely interesting and incredibly deep thoughts I’ve been thinking. Battle ensues. Some days I win and some days the empty page remains empty. The words I get out are never what I expect and they prove that too often those deep thoughts I’m so proud of.. aren’t that deep. Or insightful. Or beautiful. But they are mine.

Writing is hard. Uniquely hard. But it’s also uniquely human. Language seems to be one of the traits that differentiate humans from other species, which means no chimp can call out “Hey Alice, this tree has good food in it!” That makes us different from animals, perhaps, although lots of animals seem to do a good job of communicating without the invention of language. But you will never hear a chimp tell a story about the amazing food tree they found near the hill, and the time Alice almost fell three times trying to get there. This kind of storytelling is different, and it’s propelled by our language.

There are two reasons why writing, along with its close cousin oratory, is so important, so hard, and so human.

Manifest

One of my favorite passages from any book is from Stephen King’s On Writing:

“Look- here’s a table covered with red cloth. On it is a cage the size of a small fish aquarium. In the cage is a white rabbit with a pink nose and pink-rimmed eyes. … On its back, clearly marked in blue ink, is the numeral 8. Do we see the same thing? We’d have to get together and compare notes to make absolutely sure, but I think we do. There will be necessary variations, of course: some receivers will see a cloth which is turkey red, some will see one that’s scarlet, while others may see still other shades. … The most interesting thing here isn’t even the carrot-munching rabbit in the cage, but the number on its back. Not a six, not a four, not nineteen-point-five. It’s an eight. This is what we’re looking at, and we all see it. I didn’t tell you. You didn’t ask me. I never opened my mouth and you never opened yours. We’re not even in the same year together, let alone the same room… except we are together. We are close. We’re having a meeting of the minds. … We’ve engaged in an act of telepathy. No mythy-mountain shit; real telepathy.”

King’s passage shows us the first reason. He makes magic with words flippantly, casually, with a little flick of the wrist - damnit Steve, it’s so easy to see the rabbit! Writing lets us manifest in the world the ideas that we hold. Different philosophies of mind disagree about exactly how thoughts are constructed in our heads, whether they are made up of words, and how the language we use shapes what we think. But at the end of the day, if I want to get the thought-stuff in my head out into the world - or more importantly, into your head - I have to use words to do it. I need to string them together like building blocks and create castles.

Tolkien took this idea into the realm of mystical theology with his idea of sub-creation. He believed that since God creates, and we are made in His image and likeness, we are imbued with the same abilities. In his world this is not just a right and an ability, but a higher calling and a way of worshipping our Creator. Tolkien laid the groundwork for the many fantasy worlds of the last hundred years:

“Fantasy remains a human right: we make in our measure and in our derivative mode, because we are made: and not only made, but made in the image and likeness of a Maker.” -JRR Tolkien

Tolkien used the creative act to build his fantasy worlds complete with cultures, languages, and new species. He used fantasy to teach us plenty about reality too, and this more than anything else is why his works are so revered today. Tolkien spun truth out of the web of stories about completely imagined worlds. You could argue that this is about as circuitous a route as you could possibly take to manifest your ideas. Not every love story about an ageless beauty and a hidden prince needs thousands of pages of backstory like Arwen and Aragorn. But then, not every story is as beautiful or as detailed or as precious and heartfelt to the reader. There’s a magic in Tolkien’s detail that captivates.

But the engineer who describes a product, the salesman who makes a sale, or the leader who inspires a crowd uses the same means to make manifest their visions and goals. If we want to make something real in the world or in the minds of other people, we use words.

“If you want to build a ship, don’t drum up the men to gather wood, divide the work, and give orders. Instead, teach them to yearn for the vast and endless sea.” -Antoine de Saint-Exupery

Explanation

The second reason for writing rests on another characteristic that seems to make humans completely unique in the universe so far: explanation. Nobody explains this better than David Deutsch:

“And yet, gold can be created only by stars and by intelligent beings. If you find a nugget of gold anywhere in the universe, you can be sure that in its history there was either a supernova or an intelligent being with an explanation. And if you find an explanation anywhere in the universe, you know that there must have been an intelligent being. A supernova alone would not suffice.” -David Deutsch

The capacity not just to observe and sense, but to draw conclusions from observing and sensing, underlies all of the remarkable triumphs of human history. It takes us beyond the animalistic urge to react based on genetically encoded behavior and into an entirely different world - a world that includes understanding. If you want to know why humans are special in a purely material sense - it’s not opposable thumbs, or our ability to sweat, or our bipedal gait. It’s our ability to understand and explain the world. Explanatory knowledge is perhaps the most important substance in the universe. And every brick and every bit of mortar in an explanation is composed of nouns and verbs and grammar.

New Models

And now we have these new tools in the form of gigantic interactive language models. They’re trained on all of the books, dialogues, tweets, journal articles, reddit threads, code and whatever else they can get their greedy algorithmic mouths on. Effectively, they’re built on a large percentage of the sum total of human knowledge. For the first time, we have a tool capable of telling us how to extricate our PBJ from the VCR in whatever style we prefer (King James Bible: “And it came to pass that a man was troubled by a peanut butter sandwich, for it had been placed in his VCR, and he knew not how to remove it. And he cried out to the Lord, saying, ‘Oh Lord! How can I remove this sandwich?’”).

Some people are wildly troubled by this development. They think it’s going to kill us all, or even worse, help you cheat on your term paper.

Just over twenty years ago, two art historians developed and published a new theory about the incredible leaps in painting realism during the Renaissance. The Hockney-Falco thesis suggested that a new optical technology - the camera obscura, a predecessor of the photographic camera - allowed painters from the early 15th century onward to develop light, shadow and color into hyperrealistic compositions. Justin Murphy draws the corollary for today’s new tools:

In other words, the great artists from this period—Peter Paul Rubens, Botticelli, Michelangelo, Caravaggio and others—are remembered as great partially because they were aggressive and shameless exploiters of Artificial Intelligence. Were they cheating? Presumably some contemporary observers must have thought so! Posterity, however, says no. The human qualities they brought to their work—the emotionality, the symbolic resonances, the larger vision they pursued in their work over time—was not commoditized by the new instruments and this is perhaps what distinguishes the great Renaissance painters from the merely good ones.

Hat tip to the anonymous reader who pointed this out to me. He adds, “Caravaggio was a pimp and thug who killed people, yet he painted like an angel because he ‘cheated.’ Real artists are always looking for either new techniques or new technology, while losers write off those who seem to be better as innately talented.”

Models are going to do a lot of stuff. They’re going to replace a lot of crappy original writing with crappy computer-generated writing. They may - and we can only hope and pray here - finally help eliminate the standard five paragraph essay from middle and high school curricula. On the leading edge, the most driven artists and creators will find new ways to use these tools to refine their work or to produce entirely new and barely imagined versions. Fifty years ago, Italo Calvino earned acclaim for the considerable effort behind his Impossible Interviews, including an imagined dialogue with a Neanderthal. Today, Tyler Cowen can feed GPT-4 the collected works of Jonathan Swift and achieve the same dramatic effect in a few hours. Realistic selfies of Napoleon, Jesus and cave dwellers are trivial. Transforming napkin sketches into working web apps is easy. And we’ve only been using these things for months.

Chess

I’ve been thinking a lot about chess-playing computers because of LLMs. Most people only know about Deep Blue and its historic win over Kasparov in 1997 (a match more interesting than just beating the world champion - Kasparov was deliberately playing strategies he believed would be more difficult for a computer). Far more important is AlphaZero and its insane obliteration of Stockfish. Stockfish has been the top chess engine for a long time, with an Elo rating well over 3000. (Elo is the chess rating system - the current #1 human is around 2850.) Basically no human even has a chance to draw against Stockfish. And then AlphaZero came along in 2017 and beat Stockfish over 100 games without a single loss (note: there are deserved caveats about the computing power each side was given in these games).
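The Elo system makes these gaps concrete: a rating difference maps to an expected score through a simple logistic formula. Here’s a quick sketch in Python (the standard formula; the 3500 figure for a modern engine is illustrative, not an official rating):

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score for player A against player B:
    win probability plus half the draw probability."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# Magnus (~2850) against a 2500-rated grandmaster: heavy favorite
print(f"{elo_expected_score(2850, 2500):.2f}")  # ~0.88

# The same Magnus against an engine rated ~3500: nearly hopeless
print(f"{elo_expected_score(2850, 3500):.2f}")  # ~0.02
```

A 400-point gap means roughly a 10:1 expected-score ratio, which is why each tier in the hierarchy below simply doesn’t lose to the tier beneath it.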

So why have I been thinking about chess? First, because what’s already happened in chess is what people claim to be worried about more generally. That is to say, the AIs got so freaking good at something that we literally have no chance of keeping up. Moreover, they aren’t just beating us on computation. As Vishy Anand has said, there’s some kind of reasoning going on there. And last - and this is the scary part - we don’t understand it. We literally have no idea what AlphaZero is thinking. Understandability is one of the Capital S Scary things. We call these models artificial intelligence, but alien intelligence might be a better term.

Maybe that’s not surprising, since we also don’t really understand the human creative or reasoning process. And we definitely don’t understand consciousness. We’re at a really weird point in history where the big existential crises we worry about are ones we don’t totally understand. Don’t let the neuroscientists fool you: we don’t have a good definition of consciousness. The most practical definition is really the Turing Test, and the LLMs blew past it while we barely blinked - and we still don’t think of them as conscious. There are all these doomsayers gesticulating about AI as the eschaton and we can’t even define what we’re scared of! At least we all know what an earth-crushing asteroid is.

Anyway, there’s another side to this understandability too, which is that chess is a very discrete world with defined rules. People are translating the success in these small, well-defined domains into very large ones. Cal Newport did a breakdown of GPTs describing all of the weights in this giant, inscrutable word matrix. He notes that it’s important to remember that the model deals with only around 50,000 English tokens - a huge vocabulary for a human and a tiny one for a computer. We’re taking this super interesting linear algebra problem and imbuing it with all sorts of human reasoning properties. And we’re extrapolating the narrow rules of some domains into much bigger domains where we don’t yet know how those rules will hold up.

There was a paper a couple of years ago that coined the term “stochastic parrots.” That’s what these LLMs are - probabilistic repeaters. But watching what they do presents an interesting question: how much of humanity is us being stochastic parrots?
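To make “probabilistic repeater” concrete, here is a stochastic parrot in miniature: a toy bigram model that can only ever emit word transitions it has already seen, sampled in proportion to how often it saw them. This is a deliberately crude sketch - real LLMs are transformers over vocabularies of roughly 50,000 tokens, not word-pair tables - but the generating principle is the same: predict the next token from the statistics of everything read so far.

```python
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict:
    """Count word-to-next-word transitions in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def parrot(counts: dict, start: str, length: int, seed: int = 0) -> str:
    """Generate text by sampling each next word from observed frequencies."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:  # never saw this word lead anywhere; stop
            break
        words, weights = zip(*options.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(parrot(model, "the", 5))
```

Everything the parrot says is recombined from its training text; nothing is ever novel. The open question is how much of what we say works the same way.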

I’ve been in plenty of situations where a certain set of conversational stimuli triggers the same memories and I find myself telling the same story. The same could be said of the order we choose when grocery shopping, the way we turn on our car, how we develop and build a spreadsheet, what design patterns we use to build a database API, or what features we embellish when we create an original drawing. The mob is suggesting that AI is making what is distinct about humans smaller. Setting aside philosophical or theological arguments on free will, human dignity, Cartesian dualism, or sin and virtue, I think this is true. The last outpost of humanity I’m defending is that there’s still 5% of generative human thought that is truly novel. Original. A last 5% in the “what makes us human” piece. Call this a divine spark if you want. Call it the creative essence of the human spirit. Call it an evolutionary step. But it’s still distinct from AI. GPTs can build sparkling castles out of words, but the cloudy, fuzzy, beautiful idea about the castle - the essence - still has to come from somewhere.

I read one article describing these new models as “black boxes with emergent behavior that are still being studied.” I read that and I thought to myself: that’s us. That’s humans. Most people are still focused on the capabilities side of the curve: what these models can do. But the “stochastic parrot” idea makes me think about people. If these things can operate so much like us.. then yeah, maybe 90%+ of what we are isn’t as special as we thought. I keep thinking about what Edward Teller said about John von Neumann - that he was so much smarter than everyone else that “only he was fully awake”.

Here’s what I mean: we’re used to thinking about an 80 IQ person and a 150 IQ person and believing them to be wildly different in capability. But.. what if they’re not? What if the range we’ve defined, with all its graduated levels, is just a small sliver of the intelligence curve? That’s what AlphaZero proves to us in chess. Magnus Carlsen is by far the best in the world. He’s 2850 Elo and he regularly and easily trounces measly 2500-2600 rated grandmasters. Those, in turn, will never lose to a 2000-2200 rated player. And those will never lose to a 1500 rated player. And so for all of human history the top of that chart has been THE pinnacle of achievement, and we’ve regarded that range of chess-playing ability as very wide. But now we have Stockfish, with a rating well over 3000, and Magnus can’t even draw a game against it. And now AlphaZero trounces Stockfish. Maybe the range we knew is narrower than we thought.

Does this make any sense? I’m using “intelligence” as if it’s some obvious and inherent good, which is an overdetermined perspective. (Creativity and originality are probably loosely correlated.) The larger point is: how much of our behavior and our intelligence and our free will is really just conditioning and training on certain events and stimuli and environments? It’s always been difficult for me to tell where my stochastic parrot begins and ends. Even when I’m trying to synthesize existing work or understand my own new thoughts, I’m always drawing from and standing on the shoulders of other thinkers and texts. And the shape of my thoughts always looks suspiciously like the last few books I’ve read. Is what I’m thinking original? Novel? Regurgitated? I always have a slightly uncomfortable feeling that the stochastic parrot’s beak reaches just a bit further than I’d like it to. There is always a broader context of culture and memes swirling around us. In most situations and at most times, we are the stochastic parrots.

This idea gives me some hope in the longer term. Intelligence and consciousness and free will are all “suitcase” words: you can pack a lot of different meaning inside of them. If we’re forced to extrude out more precise attributes of each than we have in the past, maybe we can understand more about what is fundamentally special and human and made “in the image and likeness”.

Storytelling

I had a fun debate a while back with my brother-in-law that started with an article about Presentism and ended with a conversation about the stories that different groups tell themselves about our country. History is supposed to be the study of the events of the past, but the past gets fuzzy very quickly. What we end up with is different myths emphasizing different values and all wrong in some way. The Nikole Hannah-Jones vision of the 1619 founding of America on slavery is as wrong as it is disparaging. The pure and innocent vision of triumphant and noble founders freeing humanity is wrong and naive too. The true history of events is in there somewhere - more nuanced than we could ever portray - but the stories we build around history are far more important. America has always been on the rise because the proportion of grand, beautiful and uplifting stories of “American exceptionalism” have always triumphed over the negative, declinist stories of evil. Where our myths are wrong they remain positive on the whole, and this averaged-out optimism still drives the immigrant’s desire to come to the US, the American Dream, the free markets, and the drive to innovate technologically.

Storytelling is one of the most distinctive traits of humanity. The tools we’ve used to weave and capture our stories have advanced in big leaps over time. We started with simple cave paintings and oral traditions which lasted tens of thousands of years. We advanced to writing systems soon after agriculture let us settle into cities and kept them for thousands of years. Gutenberg’s printing press gave our stories and ideas a range that ushered into being the modern world in just a few hundred years. The internet made it even faster and more ubiquitous and transformed things again in just decades.

The wisdom and institutions of the day have always rejected these advances. Socrates himself rejected writing as effective communication in the Phaedrus. (Incidentally, the reason Socrates is probably wrong here is the same reason context windows are so important for LLMs: some of the best ideas - including those laid down in Plato’s Socratic dialogues - are larger than our working memory. We can’t hold them entirely in our heads at any one time.) The Hockney-Falco thesis suggests that new tools usher into being new techne - new modalities, new crafts, new ways of doing. The tools drive the craft.

David Friedberg has a thesis about the future of the economy moving towards narration - where we can literally speak our ideas into existence and rely on powerful tools to build and execute them. Here’s his overview:

Look, my core thesis is I think humans transition from being, let’s call it, passive in this system on earth to being laborers. And then we transition from being laborers to being creators. And I think our next transition with AI is to transition from being creators to being narrators. And what I mean by that is as we started to do work on earth and engineer the world around us, we did labor to do that. We literally plowed the fields, we walked distances, we built things, and over time we built machines that automated a lot of that labor, you know, everything from a plow to a tractor to caterpillar equipment to a microwave that cooks for us. We became less dependent on our labor abilities. And then we got to switch our time and spend our time as creators, as knowledge workers, and a vast majority of the developed world now primarily spends their time as knowledge workers creating and we create stuff on computers. We do stuff on computers. But we’re not doing physical labor anymore. Now a lot of the knowledge work gets supplanted by AI, as it’s being termed now, but really gets supplanted by software. The role of the human I think transitions to being one of the narrator, where instead of having to create the blueprint for a house, you narrate the house you want and the software creates the blueprint for you. And instead of creating the movie and spending $100 million producing a movie, you dictate or you narrate the movie you want to see and you iterate with the computer and the computer renders the entire film for you, because those films are shown digitally anyway, so you can have a computer render it instead of creating a new piece of content. You narrate the content you want to experience.

AI represents the next major step-function change in our storytelling ability. LLMs are built on a significant and growing percentage of our shared knowledge. If we use these tools as an oracle and imbue them with some sense of authority, we will, at best, get the most staid, milquetoast, and unoriginal answers to our prompts (and, at worst, incoherence, incorrectness, and hallucinations). On the other hand, if we accept that most of our own output is just a stochastic parrot and focus on that last Tolkienesque and generative 5%, we can use these tools to shape ideas we might otherwise struggle to articulate. They will allow us to directly translate our vision for the world around us into reality.

And, just like Tolkien, the building block for all of this incredible world-building and creation will be the written word.

In the beginning was the Word, and the Word was with God, and the Word was God. The same was in the beginning with God. All things were made by him; and without him was not any thing made that was made. -John 1:1-3

English

The building blocks will specifically be English. Since World War II and America’s rise as the world hegemonic power, English has been the primary language of science and business. When the internet began its disruption, first in America, English was reinforced as the primary language for crossing borders. When Americans travel today and try haltingly to speak German or Japanese or French, the locals laugh and switch to English.

English represents more than half of all content on the internet. That advantage is being enshrined in today’s LLMs, making English the “programming language” of AI - and that will eventually do more to keep English in the number one spot than America or business or the internet ever did.

Since the 20th century, science and math and engineering have been the tools of choice to change the world. Computers added a new tool to the creative repertoire over the last couple decades. Education finally caught up to the trend and defined STEM as a key component of the curriculum: Science, Technology, Engineering, Math. In the same time frame, the humanities have become déclassé and deteriorated. Most people can’t write anymore. They don’t read much either.

At this point, STEM fields have become table stakes: necessary but insufficient for those looking to create or change the world. The next couple of decades will see a resurgence in the importance of the English language. The most important and sought-after skill will be the ability to articulate your stories, your thoughts, and your ideas - and the ability to use a new generation of models and tools to translate your words into practical, pragmatic reality.

We’re entering a new Renaissance for the ability to write. If you want to change the world in the future, your primary tool is English. It’s always been true that the better you can use language, the further you can go in whatever you decide to do in life. That’s more true today than ever before. It doesn’t matter if you’re an engineer building prompts using a GPT, an architect, a movie producer, an executive, a fascist autocrat, an activist, or a philosopher. Words are the primary construction material of the future. English is the new STEM.


Postscript

There’s an old funny story about God that goes like this:

God was once approached by a scientist who said, “Listen God, we’ve decided we don’t need you anymore. These days we can clone people, transplant organs and do all sorts of things that used to be considered miraculous.”

God replied, “Don’t need me huh? How about we put your theory to the test. Why don’t we have a competition to see who can make a human being, say, a male human being.”

The scientist agrees, so God declares they should do it like he did in the good old days when he created Adam.

“No problem!” says the scientist as he bends down to scoop up a handful of dirt.

“Whoa” says God, shaking his head in disapproval. “Not so fast. You get your own dirt.”

This is a good analogue for how I feel about AI right now. It can do a lot of stuff, but we still have our own dirt too.

And our dirt is still just God’s. Carl Sagan said, “If you wish to make an apple pie from scratch, you must first invent the universe.”

