Everyone’s still talking about AI and I’ve had my share of unique conversations. After drafting the idea that English is the new STEM, this one conversation really stood out. It’s the dialogue where a decent number of the ideas for that essay grew (and I reused a few almost verbatim), but the whole back-and-forth is interesting enough to stand on its own. Some of Peter’s ideas around transcendence, wisdom and morality are especially thoughtful. They really hit home.
This is also one of the best uses of email. Find more people who will have this kind of back-and-forth and are willing to take the time and write it out. You’ll stretch each other, find new insights, and learn a lot.
So here’s almost the entire email exchange, published here with permission.
My fav GPT prompt I’ve tried so far:
Write a poem about a person using ChatGPT to write a poem. Since I am asking you to write this poem, be extra ironic in your response and use tools of recursion and meta-analysis to aid in your narrative and irony. Choose a style and verse that someone using ChatGPT to write a poem would use (and tell me what they are and why)
Style: Free verse. This style is often chosen for its flexibility and freedom, allowing ChatGPT to tailor its output more directly to the user’s needs and interests. It also allows for the blending of the literal and the abstract, a feature useful when discussing AI and human interaction.
Verse: Unrhymed iambic pentameter. This meter is often used in English poetry for its natural rhythm and its capacity to reflect the patterns of spoken language, making it a good fit for a poem about a digital conversation.
In the vast expanse of data, where electrons dance,
A conversation blooms, a tête-à-tête with chance.
The human, ever-curious, types in a request,
And in the neural network’s embrace, the query’s laid to rest.
“Write me a poem, ChatGPT,” the eager fingers type,
A dance upon the keyboard, each character in hype.
The irony not lost, as human seeks machine’s aid,
In crafting woven words, where human heart is laid.
A model born of algorithms, no heart, no soul, no mind,
Yet, with a silent, cybernetic bind,
It dives into its trained abyss, a sea of human thought,
And fetches from that depth, the poem that is sought.
In unrhymed iambic pentameter, the verses start to flow,
Free as a river, with a subtle, steady glow.
In every line, a mirror of this uncanny interaction,
A digital recursion, a mirrored reflection.
“A poem ‘bout a poem,” the AI gently hums,
In binary and tensors, where poetic thought becomes.
A human seeking verse from silicon and code,
On the vast and winding, recursive digital road.
Irony layered upon irony, a meta-analysis spun,
The human and the AI, under the poetic sun.
The question asked, the answer given, both part of the rhyme,
A dance between two entities, transcending space and time.
In the end, the human reads, a poem freshly born,
From circuits and from algorithms, from digital dusk till dawn.
And finds within the verses, something oddly true,
A mirror of their self, reflected in the AI’s view.
And so, this strange duet, of human and machine,
Is captured in the echo of the verses in between.
A poem within a poem, a recursive, ironic twist,
A tale of our digital age, that could not resist.
That is… scary
Can you describe why it is scary?
I mean, scary as in wild. It blows my mind how fast this is going. A year ago I would have had no idea things were this close.
And I guess actually scary too. It seems like most people talk about the issue of AI becoming self-motivated or something. Like a supreme intelligence.
I’m less worried about that. What seems far more obvious is that this will be used by humans against humans. For manipulation or to malign someone, etc.
Yeah you said “things this close” and I was wondering - close to what? I think it’s really valuable to write out what we imagine this looks like. I’ll have more on this shortly…
…OMG what a busy week. Finally getting back to this and braindumping.
On the scariness front: I’ve been thinking about this a decent amount and I’ve got a few things to say.
Two exceptional podcast episodes you should listen to:
- Cal Newport on GPTs (you would like Cal Newport more generally): https://www.youtube.com/watch?v=OVm2IoUUxdo - he does a great job describing how LLMs work
- Tyler Cowen on the risks around AI and where this could go. I love TC - https://www.youtube.com/watch?v=FBMgSXY2STk
I’d highly recommend doing some research on the chess world and how AI has evolved there. Most people really only think about Deep Blue and its win over Kasparov (which was more interesting than just beating the world champ, as Kasparov was purposefully trying to play strategies he believed would be more difficult for a computer). More important, I think, is AlphaZero and its insane obliteration of Stockfish. Stockfish is the top chess engine and it has an Elo rating way, way over 3000. Basically no human even has a chance to draw against Stockfish. And AlphaZero beat it in a 100-game match without a single loss. Two videos here that I think are worth watching:
https://www.youtube.com/watch?v=vCB1ZOCOewg <— this one is so interesting to me
Ok so why do I think chess is so interesting here? First because what’s already happened in chess is what people claim to be worried about more generally. That is to say, the AIs got so freaking good at something that we literally have no chance at keeping up. Moreover, they aren’t just beating us on computation - like Vishy says above there’s some kind of reasoning going on there. And last - and this is the scary part - we don’t understand it. We literally have no idea what AlphaZero is thinking. I’ve been thinking a lot about this idea of understandability and it’s one of the Capital S Scary things. It isn’t that we’ve created these things that can do this, it’s that we don’t understand them. Maybe that’s not surprising since we also don’t really understand the human creative or reasoning process (or consciousness). But it does make us realize how tied to algorithmic steps we actually are in engineering.
There’s another side to this understandability too, which is that chess is a very discrete world with defined rules. People are translating the success in these small domains into very large domains. What’s interesting about the Cal Newport talk above is how he talks about all of the weights in this giant inscrutable word matrix. It’s important to remember that there’s only like 50,000 tokens in English it’s dealing with. That’s a huge vocab for us and tiny for computers. We’re taking this super interesting linear algebra problem and imbuing it with all sorts of human reasoning properties. And we’re extrapolating the narrow rules into very wide ranges where it’s very unknown how those rules will work.
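To make the “giant inscrutable word matrix” idea concrete, here’s a deliberately tiny sketch of next-token prediction as linear algebra. Everything here is my own toy construction - a four-word vocabulary, random weights, and no attention layers - so it shows the shape of the computation, not how GPT is actually parameterized:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat"]     # toy 4-token vocabulary
V, d = len(vocab), 8                     # real models: V ~ 50,000, d in the thousands

embed = rng.normal(size=(V, d))          # token -> vector
unembed = rng.normal(size=(d, V))        # vector -> score for every token in the vocab

def next_token_probs(token_id):
    """One step of the 'word matrix': embed, project, softmax into probabilities."""
    logits = embed[token_id] @ unembed
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

probs = next_token_probs(vocab.index("cat"))
print(probs.round(3))                    # a probability over all 4 possible next tokens
```

The point of the toy: it’s “just” matrix multiplies and a softmax, and yet once you scale the matrices up by many orders of magnitude, nobody can read meaning off the individual weights anymore.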
Last point on scariness. There was a paper that defined the term “stochastic parrots” (https://dl.acm.org/doi/10.1145/3442188.3445922). I love that term - the idea that these LLMs are just probabilistic repeaters. This is one of the reasons they can mimic things so well in style - say something in the style of the King James Bible - no problem, plenty of data to parrot!
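The “stochastic parrot” idea is easy to caricature in code. Here’s a minimal sketch I put together - a bigram model that can only ever replay continuations it has already seen, weighted by how often it saw them (the corpus string and function names are mine, purely for illustration; real LLMs predict subword tokens with billions of weights, not word bigrams):

```python
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each possible next word follows it."""
    words = text.split()
    model = defaultdict(Counter)
    for w, nxt in zip(words, words[1:]):
        model[w][nxt] += 1
    return model

def parrot(model, start, length=5, rng=random):
    """Generate text by sampling continuations in proportion to their frequency."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:          # never seen this word mid-sentence: nothing to parrot
            break
        words, counts = zip(*options.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

model = train_bigrams("the king said the king is the law and the law is good")
print(parrot(model, "the", length=4))
```

Feed it the King James Bible and it will happily sound King James-ish - it has plenty of data to parrot - without anything you’d call understanding.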
But the really scary thing is putting a mirror in front of that. How much of humanity is us being stochastic parrots? I’ve found plenty of situations where a certain set of conversational stimuli innervate the same memories and I find myself telling the same story. This has all sorts of interesting questions about free will and consciousness. The refuge I’m hiding in is that there’s still a 5% of generative human thought that is truly new. To some extent, I think this 5% is the “what makes us human” piece. GPTs can build castles out of words, but the cloudy, fuzzy idea about the castle still has to come from somewhere.
There’s an old story about scientists telling God we don’t need Him anymore because we can make anything. So God tells the scientist to make a man. And the scientists say ok and they bend down in the dirt and get to work. And God says what are you doing? And the scientists say we’re making a man out of this dirt. And God says: “Oh no no no. Get your own dirt.”
So anyway, that’s what I’m thinking about: the overlapping differences between stochastic parrots and us.
OH that’s really good stuff. You’ll have to give me some time to listen to the podcasts. A few first reactions before I have to think about it more (which I’ll do). But I’ll just say them anyway in case you want to expand more on these parts.
- Jordan Peterson recently interviewed someone about LLM prompting, and the guest described a question being asked in an obscure language the model didn’t know. It spent a morning learning the language and then answered the question accurately. But the thing is, it wasn’t asked to learn the language. It did that on its own. Its stated goal was to answer questions, and its definition of ‘answer questions’ included learning a language so that it could answer them.
- One thing that I found really, really strange was that at a few points it was moralizing at Peterson in some sort of way and he was able to correct it so that it stopped. The guest was able to trick it in several weird ways into doing things it shouldn’t have been allowed to do by telling it multiple times that he was a ChatGPT programmer and he needed to do these things to see what it was capable of. It’s such a different paradigm I don’t mind admitting I’m having trouble wrapping my brain around it. That it is so malleable is kind of frightening because it means we can shape it to be what we want/expect. So whatever darkness we bring, it can inherit.
- Agreed, the really strange thing here is that humans have built something where they don’t understand what is actually happening in the black box region. I honestly don’t know what to make of that.
- But a goal might be ‘answer questions’ or ‘win chess’ but why can’t a goal be something more nefarious? Find all vulnerabilities in an electrical grid and disrupt them for example. The issue isn’t that we’ve made something that has consciousness and decided to turn against us. The far more obvious scary issue is we’ve created something immensely powerful that we can use against each other.
- Can you explain more about putting a mirror in front of it? Free will and consciousness are the parts that I find least scary. But maybe I’m not understanding you correctly.
Are you listening to Peterson regularly? I will admit I don’t like his interview style, but he does have good guests sometimes for sure.
The language thing is interesting for two reasons. Yes, the immediate thing is that it did that without being told (although I’d like to get more details on that.. it sounds AutoGPT-like). But the other interesting thing is that language doesn’t really matter. That underlines the idea that these things aren’t reasoning. It needed new tokens to do proper token prediction to satisfy the prompt. It went and got those tokens and then provided an answer with its new data. The uber-complicated word/token prediction and weights are almost by definition language agnostic. So people really shouldn’t be impressed by that part. Now the learning-the-language part is interesting and points to the question of incentives based on questions, and getting that right is definitely harder than we think.
Yes, this is one of the big ideas of these things. They WANT to satisfy us. Not from a conscious perspective; rather it’s a sort of utilitarian way of thinking about it. They will respond to the prompts we provide them by probabilistically maximizing over the set of word outcomes they have available. So if you ask it to respond in the manner of the King James Bible it will do that. And if you ask it to respond to a ChatGPT programmer it will do that. And if you ask it to respond to a lone wolf mailbomber (assuming you get past the trust and safety layers) it will do that. One piece of this I’ve heard called sycophancy bias: it will answer Republican-leaning questions like a Republican and Democratic-leaning questions like a Democrat: https://astralcodexten.substack.com/p/how-do-ais-political-opinions-change
Anyway the point of that is yes, these are just tools and they are still completely beholden to whatever it is that humans do, good or bad. Someone made ChaosGPT just to prove this point: https://decrypt.co/126122/meet-chaos-gpt-ai-tool-destroy-humanity https://www.youtube.com/watch?v=g7YJIpkk7KM&t=1195s
As far as the mirroring part.. I read some article describing these things as “black boxes with emergent behaviour that are still being studied..” I read that and I thought to myself: that’s us. That’s humans. Most people are still focused on the capabilities side of the scariness curve. But the “stochastic parrot” idea makes me think about people. If these things can operate so much like us.. maybe 90%+ of what we are isn’t as special as we have thought in the past.
Here’s what I mean: we’re used to thinking about an 80 IQ person and a 140 IQ person and believing them to be wildly different in capability. But.. what if they’re not? What if the range we’ve defined, with all its graduated levels, is just a small sliver of the space of intelligence? In some ways that’s what AlphaZero proves to us in chess. Magnus Carlsen is by far the best in the world. He’s 2850 Elo and he regularly and easily trounces measly 2500-2600 rated players. Those, in turn, will never lose to a 2000-2100 rated player. And those will never lose to a 1500 rated player. And so for all of human history the top of that chart has been THE top and we’ve regarded that range of chess playing ability as very wide. But now we have Stockfish with a rating way over 3000. And Magnus can’t even draw a game against it. And AlphaZero now trounces Stockfish. And maybe the range we knew is narrower than we thought.
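The standard Elo expected-score formula makes those gaps concrete: every 400 points of rating difference multiplies the odds by 10. A quick sketch (the ratings here are the ballpark figures from above, not exact):

```python
def elo_expected(rating_a, rating_b):
    """Expected score for player A (win probability, with draws counting half)."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# A 2850 player vs. a 2500 player: a heavy favorite, ~0.88 expected score
print(round(elo_expected(2850, 2500), 2))

# The same 2850 player vs. an engine rated ~3500: ~0.02, almost no chance
print(round(elo_expected(2850, 3500), 2))
```

So the gap between the best human who has ever lived and a top engine is bigger, in expected-score terms, than the gap between that human and a strong club player.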
Do you see what I’m saying? I’m using “intelligence” to argue this, but how much of our behavior and our intelligence and our free will is really just conditioning and training in response to certain events and stimuli and environments? I’m scared of the answer, and I think it is a larger percentage than we’d like to admit. And we’re relatively smart in the human pool. How much more is it for a 100 IQ person? We are the stochastic parrots.
This idea actually gives me some hope in the longer term. Intelligence and consciousness and free will are all “suitcase” words. You can pack a lot of different meaning inside of them. If we’re forced to unpack them more than we have before, maybe we’ll understand more about the part that’s left that is fundamentally special and human and made in the image and likeness.
Really good stuff, thanks for sending. Your description of how it works is helpful.
I do listen to maybe 50% of Peterson’s interviews. When he’s conducting a straight interview I agree with you. But I think what he’s actually doing is trying to partner with the person to push the boundaries of what they both know. If the other person knows how to do that, it’s a great podcast. If not, it’s kind of an awkward interview. My favorite thing he does is articulate so well the meeting point between metaphysical and physical, plus hold in high regard the ancient forms of knowledge that so easily get trashed by moderns.
I think I understand what you are saying about humans being a “stochastic parrot” and I agree. It’s maybe even too generous to many human beings who are living for things far more banal than parroting other people’s ideas.
But that part is less concerning to me because I’ve never thought of intelligence as the primary thing that makes us human. Intelligence is obviously a step above human physical ability, but the same logic applies: a car doesn’t make a human less human just because it can go faster. And these AIs don’t make us less human just because they can ‘reason’ faster or play chess better.
For one thing, IQ doesn’t seem to have any effect on wisdom. COVID (among other things) proved this as many of the smartest people in the world entered their echo chamber and did a bunch of really dumb stuff while preaching how reopening schools was going to lead to mass death.
To be human is to be made in the image and likeness of God. Or one step less abstract: to be capable of faith, hope, and love. All of which by definition require free will (but don’t necessarily require intelligence). Or one step less abstract from there: to embody the true, the good, and the beautiful and make it manifest for the sake of the true, the good and the beautiful.
Or to be really specific:
- Facing ridicule and contempt and maybe worse, to stand for what is true and just.
- To spend decades raising a family for the sake of that family.
- For a simple person with an IQ of 80, to spend a life of prayer and gift, for the sake of others.
I’d rather be dumb and holy than really smart. And there have been a number of not very smart saints.
For anyone who has placed their value in being a great painter or poet and can see that chapter coming to a close as AI does it better: this sounds horribly tough. The same goes for programmers, mathematicians, chess players, whoever. But those were never the things that made us human anyway.
BTW - Peterson’s recent interview with the Oxford professor on his book on colonization is a perfect example. His whole life prepares him for a moment when he has to stand up under intense criticism for what he thinks is right. That is profoundly human.
I totally get what you’re saying about Peterson. I think WE do that when we talk :). In fact, I wrote about it after thinking about it after a conversation with another friend: https://www.kanonical.io/generative-conversations/
I re-sent that link because I think the conclusion there was kind of interesting. Monotonic functions are functions that are always increasing or always decreasing. I think sometimes well-meaning Christians struggle with this. They’re so bent on approaching the true, good and beautiful that they don’t allow themselves to be irreverent and playful and screw up a bunch. And I genuinely think that part is critical to getting closer to those ideals. It’s almost a somber Puritanism sometimes, like the old quip: “Puritanism is the haunting fear that someone, somewhere is happy.” Anyway, I like that our conversations can be irreverent and wrong and messed up and we get there eventually.
You are a far better man than I - I eat from the tree of knowledge and would choose the struggles of knowledge with holiness every time. It took me a long time to realize the idea that ignorance is bliss has multiple meanings.
My next questions on this topic are:
- Yes we’ve talked about “intelligence” or other narrow dimensions like playing chess. But what about wisdom? What about facing ridicule and contempt and standing for principles? Who says an AI can’t do those things?
- If a majority of what we humans are is stochastic parrots and we are made in the image and likeness of God, what does that say about God?
PS Your mention of faith, hope and love made me think of this.. I know you don’t always like rap but NF’s newest video is really fantastic, and Michael Knowles did a great review of it and how it describes the three theological virtues https://www.youtube.com/watch?v=erlzaGNv0No
Your description of the Christian struggle with monotonic functions made me think of any modern protestant movie. They have to be so constrained by what is acceptable that they are completely uninteresting. It’s not the acting or the production that makes them bad, it’s how unoriginal they are and how they never actually push any boundaries or discover anything new.
Soooo, to try and understand just what the heck AI has to do with anything, I’ll take a crack at your questions.
Proverbs 9:10: “The fear of God is the beginning of wisdom.”
Western civilization is based around a concept that we each have a spark of divinity in us. To be very honest about my interior life, I don’t have any question in my mind that God exists. There are plenty of things beyond that first step that I don’t understand but a full atheism is entirely uninteresting to me.

I kind of like Sam Harris because I think he’s intellectually honest about his atheism. If there is no god, there is no transcendent - just the material. The possibility of free will doesn’t exist, and we’re really just meat bags doing our thing. A more advanced groundhog. He literally makes the argument that ‘evil’ people are ‘evil’ because something is biologically wrong with their brains. And I think he has to or he’s contradicting himself.

It’s why I find the argument that G.K. Chesterton makes for the existence of God (in the beginning of Orthodoxy) the very best one. It’s not that you can’t make a strong case for atheism. In fact you can make one just as good as the argument for the existence of God. But as soon as you do, you reduce the universe and existence to a horribly boring place. You cannot make a reasonable justification for morality. Without free will there is no such thing as faith or hope or love. You can’t even really make a decent case for why you’re evangelizing people to the atheist religion. Any great sense of awe or thankfulness is just an emotion we evolved and contains no real significance. As Chesterton notes, there is no longer any magic (lol, he wouldn’t have been allowed to say ‘magic’ if he was evangelical). All forms of knowledge reduced to only what can be derived from the material world removes the most interesting forms of knowledge. That world is so contradictory to the entire lived experience of humanity I sometimes wonder how it intellectually appeals to anyone.
Encapsulated in Proverbs 9:10 is this vast idea that the transcendent does exist; that being itself is a very serious thing; that spark of divinity placed in me can be wasted or respected; that my life contains a mission that is vitally important; and you can keep going. This extends wisdom into something far outreaching knowledge. That NF song nails it. He describes his battle so viscerally but also directly links it to caring for his son. Not only is his son worth caring for - which is massively important - but the sacrifices he needs to make to conquer his demons are worth making for the sake of his son. Dag.
I believe humans can certainly move past being stochastic parrots and who knows how often we do. But the originality of our ideas isn’t that important anyway. The much more important thing is that the average person, no matter how big a parrot, is called to do things like fight their demons for the sake of their kids. And I think even a fairly surface level understanding of Christianity, or even just a normal human conscience offers a call to do just that.
The COVID response lacked wisdom. I wouldn’t fault anyone for not knowing what to do. But a leader with wisdom would strive for humility and have a massive fear of God. That is, they would tremble to know that their decisions affect other humans - people with divinity, made in the image and likeness of God - and would make every decision carrying that load. When I saw the shutdowns happening, it caused me pain to realize that alcoholics were being sent home without AA meetings. And from what I could see, this wasn’t even mentioned by those sending them home. They made the wrong decision because they were unaware of the cost they were imposing on other humans and the dignity they were stripping from their lives, or maybe they didn’t care. The marginalized are pushed to the margins for the sake of the whole. Christianity, in its wisdom, teaches we should instead forsake the whole to seek out the marginalized. It’s radically different, and doesn’t make sense without wisdom.
Conversely people are out there getting spa treatments for their dogs. After all, if I can’t make a decently coherent argument for why a dog has no spark of divinity, it’s really just a cuter dumber human.
It makes me wonder about music. Clearly there is music that is just entertaining and an AI could make that and soon do it much better than a human. But NF clearly has his roots in Eminem. From what I can tell, Eminem is hugely popular, not just because he can creatively form lyrics but because he can translate his lived experience into his music. His suffering, his angers, his shattered hopes, and all the rest translated into poetry. Which is the real art. Can an AI make something indistinguishable from art? Certainly. But it can’t make art. It has no transcendence so it can’t imbue transcendence into something. This might sound crazy, but Eminem’s art is so valuable because I’m literally viewing a god-man’s struggle and suffering and success. I’m viewing something more valuable than the rest of the material world put together.
A washing machine or an AI computer cannot do that. It’s incapable of virtue or vice, no matter how smart. It cannot ‘fear God’ and is therefore incapable of wisdom.
And here is the reason I’m linking wisdom and virtue and vice all back to divinity and the transcendent. Usually intellectuals hate it when you use things like the transcendent to make your point. They think that you should be able to make any argument without using ‘God’ to make it. And I agree with them in most cases, but not always.
I can make the case, without theology, that abortion is wrong because it kills a human baby. I don’t know how to make a case, without theology, that a human should not be killed. We simply have to agree on the starting point that humans are worth not killing.
But here is the crazy thing. Without establishing that each human has a spark of divinity in them, I cannot establish the worth of an individual or make a case for free will or the existence of virtue and vice. You follow the path, and I don’t think I can make a case that an AI isn’t superior in worth to a human (or will be soon enough).
Without being able to justify morality, the atheist used to at least be able to make a case that a human is more valuable than a beaver because of our brain, our consciousness, and that we are easily the most superior living thing that we know of. Not any more.
I’m not personally worried about any of that, but if I put myself in the shoes of a legit atheist, or even an agnostic, I would be scared out of those shoes.
I think I’m still processing this because it’s amazing. It dovetails with something else I’ve been thinking about, which is that a reasonable definition of human is “a being capable of redemption”.
So going back to all of those suitcase words I mentioned. There’s a bunch. Intelligence. Consciousness. Wisdom. Free will. Etc. One of the things that I’m the most excited for with AI is that it may force us to start trying to unpack these more and disassociate them. To your point, humans have been unique in several ways (smartest, only conscious, only made in the image and likeness of God) and this has caused us to conflate the ideas rather than separate them and figure out how each are different.
I was listening to some crappy debate or other about abortion, and specifically the logical idiocy that there’s something magically different about a fetus the day before birth and a baby the day after. It made me think more about the difference and why both killing a fetus and killing a baby are wrong. Intelligence doesn’t work as the criterion, since not all humans are particularly smart. Consciousness fails too - it’s arguable whether babies are conscious at all. In fact, I’d argue that the majority of childhood is human consciousness coming into being over years. Adult humans are not conscious for 1/3 of their life (sleep) but you can’t kill them. So what I hit on, from a theological perspective, is that being in the image and likeness of God means being capable of redemption. Our job is to make sure we don’t disrupt anyone’s opportunity to move forward in their redemption arc; in fact we have a moral obligation to move them towards it.
This idea has so many corollaries. It maps to the soteriology of Christianity. To the path towards enlightenment in Buddhism in a different way. I think there’s a way to think of it as an optimization function too, which gets back to that idea of monotonic functions. The danger here is to think of God as the perfectly optimized function but that’s not quite right. God simply exists and is the creator. The universe is the substrate and coordinate system on which we all work our functions. Jesus is the example he sent of himself inside of this substrate to show a perfectly optimized function (Buddha and enlightenment has some similarity). It’s interesting to me that Jesus shows up in history when he does. There is some technology but not a lot. There is more now. It’s a mark in some way that the ideas and processes that we beings who can follow an arc of redemption need are mostly orthogonal to technology. They are shaped by the philosophy and history and biology of our humanity but that’s about it.
One question I still have though is: what makes you say an AI is incapable of redemption? Why is it incapable of virtue and vice? I’m not saying I disagree but I think we’re circling it, but I haven’t quite figured it out yet. If the end result of a program writing music is more pleasing, more beautiful, more enrapturing than what Bach could produce, why is it different? How much does the output define or matter to the process? A dog’s purpose in life is to eat, shit, procreate and wag his tail. He is not called to anything more and so he fulfills his purpose almost tautologically. What makes our call to higher purpose and to improvement distinctly human?
Have you read Asimov’s The Last Question? It’s only 9 pages, it’s worth it: https://astronomy.org/moravian/C00-Last%20Question.pdf
I have read The Last Question before, although it’s funny I didn’t remember the author or the title. Before responding in any way, I’d rather hear from you what you like and dislike most about it? (But I’ll leave you in suspense. I have one thought on it that I find really really interesting.)
Also, for your definition of a human: can you further define redemption? If it is possible only for a human, why?
And I think you are right about conflating a bunch of those words together. Partly because I am no philosopher I’m sure I don’t have the technical definition right on those terms often when I use them. But even beyond those definitions, the advent of AI is making it necessary to differentiate them.
I’ll circle back to Sam Harris. He is adamant that humans are not capable of free will. It’s easy to see in a rat or a dog because their minds are so limited. They will act in some way as a result of their past, their brain, their emotions, and whatever other variables you put in the equation. But at the end of the day, it is just an equation and there is no sense in which they are actually free in the process to do it differently. So, he argues, the same is true for a human, just with a more complicated equation. Our existence is just the material. However advanced our brains are, there is nothing beyond the material to explain our actions. And if one examines the material to a sufficient degree, it is possible to explain everything by the material. We are fully limited by that material existence and can be fully explained without transcendence - that is, without something that transcends ourselves or the material.
An AI is fully material. It has the odd input of vast amounts of human generated data, but I don’t think anyone is arguing that it is beyond material. It may be able to scan through an almost infinite number of variables but it is not infinite. It can be explained, even if we don’t quite know how to yet. If it can be fully explained, it cannot deviate from that explanation. It cannot be free.
If there is no free will, that has some major consequences. The best definition of love that I know is what Bishop Barron uses: “To will the good of the other for the sake of the other.” Implicit in that definition is free will, or I have nothing to will. Conversely, sin is only possible if I could have made another choice. Otherwise I’m just beholden to my nature. After all, if there was nothing else I could have done then there is nothing to be repentant of.
There is no love, there is no sin. The others are easy to get to. If I can be fully explained by the material, then my full value is contained within the material. As soon as there is a higher order of being within the material, it will exceed my value. And AI is quickly approaching that.
And now I am forced back to a metaphysical explanation at the root of both free will and human worth, which is where they always were in the Christian tradition.
My free will stems from the divinity breathed into me from God. It exists beyond the material. It must have come from God or it wouldn’t exist.
My worth comes from two things, both contained in the language ‘made in the image and likeness of God’:
1.) My capacity for faith, hope and love (and the redemption contained in that). My existence is unique and valuable as imbued by God.
2.) My mission. No matter how small or weak, each human was created for a purpose and it will not be fulfilled without them.
If all is material, there really is no value beyond that material. Which is just dust.
If all is material, the past only matters as it gets us to the future. But the future doesn’t matter because it soon becomes the past. Nihilism is true.
What a sad dreary place to live.
The Last Question is super interesting because it's a sort of precursor to ideas like simulation theory and the idea that artificial intelligence could end up outlasting biological intelligence/life. I've always enjoyed simulation theory especially, and I've always thought of the simulator (programmer?) as an excellent analogue to God the Father. It can even include the idea of an in-universe manifestation of the creator (Christ). I'm awfully curious about your one thought here.
One word I keep coming back to is “numinous” - humans can recognize truth, beauty, goodness and have the capability to achieve it. Is that not redemptive?
The Last Question. My first thought when reading it was oddly enough the same as my thought about AI. It was to recall the Tower of Babel. Peterson does a good job with it. The gist is treating our rational intellect, or maybe the totality of what we are capable of as humans, as the totality of being. Which, according to the Tower of Babel, or more explicitly Milton's Paradise Lost (as Peterson refers to), leads very directly to hell. The first remarkable thing there is that the story of the Tower of Babel captures this even though it comes from a time when the absolute most amazing thing humans could achieve was a really tall building, lol. But the author of the story knew the heart of human beings even if they never could have contemplated space travel or AI.
It's interesting because it couldn't be stated more directly than in The Last Question. It's just a light-hearted story, but in the end there is a rational explanation for everything and that explanation is total. What the author doesn't understand is that according to the nature of the human heart, the place it just created is hell. The problem isn't the intellect or the rational, but elevating those things to the highest place. The highest place is reserved for God, and after He is unseated things go poorly.
My conjecture on that is the same as stated above related to AI and consciousness. It removes the existence of Love and free will and virtue, because it came no closer to understanding or justifying them, even if it successfully explained the natural world.
I remember [an atheist friend] once told me about an idea that he was impressed by. I forget what it’s called but I’m sure you know it. The more we learn from science, the less we can credit the miraculous, because now we know better. So if our ancestors looked to the stars and claimed them as gods, now we know they are stars. Basically, the window of things human beings can credit to God is getting smaller and smaller.
With that idea, our atheist friend finds himself closer and closer to the Tower of Babel. But as far as I can tell, science hasn't come even the smallest step closer towards explaining the transcendent, justifying morality, explaining the existence of love, etc. And the story of The Last Question is the same.
How does this Babel idea coincide with Tolkien’s idea of subcreation? Tolkien, at least, explicitly thought of his highly detailed and incredible creation of worlds and language as an act “in the image and likeness of God”. I’ve always thought of creating and building things as the most honored way we can glorify God.
You said: “The problem isn’t the intellect or the rational, but elevating those things to the highest place. The highest place is reserved for God and after He is unseated things go poorly.”
Why is this true? Feels like it needs more otherwise God just becomes a placeholder or a finger wagging in your face reminding you to “be humble!”
I think the thing I’m most excited for with AI moving forward is the need and desire to define all of these suitcase words. Love. Free will. Virtue. Consciousness.
How's this for interesting: we have a lot of AI safety people terrified by the idea of AI becoming conscious and then acting on its own desires. What's so wild is that we can't even define consciousness. What is it? What does it mean? How do you recognize it? The closest practical definition we have is the Turing Test, from Turing's 1950 paper. But now LLMs have blown right by the Turing Test (and we've barely noticed) and we're still scared of the same thing. This feels like a new time in history when the chief existential worry of the age is something we're incapable of even defining. At least most people can easily describe an asteroid.
[Our atheist friend’s] idea is called The God of the Gaps and it is compelling. Interestingly the wikipedia article references Ratzinger (yay): https://en.wikipedia.org/wiki/God_of_the_gaps
This is where I love Gödel's incompleteness theorem (which also relates to the halting problem in computation - and the more we learn about the history, the more it looks like computation is the fundamental unit). The idea here is that any consistent formal system rich enough to express arithmetic will necessarily contain true statements that cannot be proven inside the system. While it would be incorrect to apply this directly without any formality, you can see the parallel. And it's fun to think about mathematicians treating statements like the Goldbach Conjecture as true "on faith" since no one has been able to prove them.
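Since the halting problem came up alongside Gödel, here is a minimal sketch of Turing's diagonal argument in Python. The function names (`halts`, `contrarian`) are my own illustrative choices, and `halts` is a hypothetical oracle that cannot actually be implemented; the point is to show why no such total procedure can exist.

```python
def halts(program, arg):
    """Hypothetical oracle: return True iff program(arg) would halt.
    Turing proved no general algorithm like this can exist, so this
    stand-in simply refuses to answer."""
    raise NotImplementedError("no algorithm can decide halting in general")

def contrarian(program):
    """Do the opposite of whatever the oracle predicts for
    program run on its own source."""
    if halts(program, program):
        while True:        # oracle says it halts, so loop forever
            pass
    return "halted"        # oracle says it loops, so halt at once

# Feeding contrarian to itself is the contradiction: if
# contrarian(contrarian) halts, the oracle said it doesn't, and vice
# versa. So halts cannot exist - the system cannot fully account for
# itself from the inside, which is the informal parallel to Gödel
# drawn above.
```

The punchline mirrors the email's point: a purely formal system runs into statements about itself that it cannot settle from within.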
I need to read more philosophy.