Made in Our Image: The Rise of Artificial Intelligence

See here for part 1 and part 2 in this series of articles.

Quantum theorist Richard Feynman died of cancer in 1988. One of his final entries on his personal blackboard seems to have been the phrase: “What I cannot create, I do not understand.” In the twenty-first century, we are facing the inverse of this. If we can create intelligence, does that mean that we’ve comprehended it?


Modern artificial intelligence is remarkable. Computer scientists have built, from code and circuits, a functioning intelligence: something that can intuit, that can strategise, that can rewrite your emails in the style of Abraham Lincoln. This changes our relationship to the natural world. I grew up thinking of humans as the species more intelligent than dolphins and chimpanzees. We were the apex predator of intelligence. Soon we’ll be the species ranked between computers and dolphins: smarter than dolphins, but dumber than our phones. As the atheist philosopher Nick Bostrom writes:

Far from being the smartest possible biological species, we are probably best thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first.[1]

Ouch.

Neural Networks

Modern neural networks don’t just produce outputs similar to ours; they imitate us in their structure. Mustafa Suleyman, co-founder of DeepMind (now owned and run by Google), said the goal of his organisation was:

[to] create a system that could imitate and then eventually outperform all human cognitive abilities, from vision and speech to planning and imagination, and ultimately empathy and creativity.[2]

Again, ouch. Demis Hassabis, another co-founder of DeepMind, completed his PhD not in computer science, but in neuroscience. He studied memory formation in the human brain, trying to figure out how people picture events in their lives and store those events for later. Hassabis wanted to build a computer inspired by the patterns of the human brain.

Accordingly, in a small office in London, DeepMind started building various kinds of neural networks. Such networks draw considerable inspiration from the structure of the brain. A huge web of nodes, or neurons, passes information back and forth, with each level performing different tasks. If a neural network were learning to read, for example, the lower levels might identify punctuation marks or gaps between words. The next level might start to recognise words, while a higher level would engage with the meaning of sentences and paragraphs. That might feed into an even higher level which would learn to sense tone, genre, overarching arguments, worldview assumptions and so on. A large neural network might have billions of neurons, and dozens of layers, connected by an incredibly complex web.[3]
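To make this layered picture concrete, here is a minimal sketch of such a network in Python (using only the numpy library). The layer sizes and random weights are illustrative assumptions of mine, not a description of DeepMind’s systems; the point is simply that information flows upward through successive levels of neurons.

```python
import numpy as np

def relu(x):
    # A simple non-linearity: a neuron "fires" only when its input is positive
    return np.maximum(0, x)

# Illustrative layer sizes: raw input -> low-level features -> higher-level
# features -> output. Real networks have billions of neurons, not hundreds.
layer_sizes = [256, 128, 64, 10]

# Each layer is a web of weighted connections plus a bias for each neuron
rng = np.random.default_rng(0)
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Information passes level by level, each layer transforming the
    # previous layer's output into (ideally) higher-level features
    for W, b in zip(weights, biases):
        x = relu(x @ W + b)
    return x

print(forward(rng.normal(size=256)).shape)  # (10,)
```

With random weights the output is meaningless, of course; what gives a real network its power is the training process described below.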

But how could a programmer ever have time to organise and design billions or trillions of little circuits? The genius of modern neural networks is that they don’t have to. Instead, the circuits assemble themselves. DeepMind was, from the start, building learning machines, feeding training data into neural networks. A language model would learn by making predictions, and then calculating which neurons and which connections were responsible for the success or failure of those predictions. Picture our language model working, letter by letter (token by token, to be exact), through a body of text the size of the Library of Congress. It makes a prediction about every upcoming letter, one at a time, and then adjusts the connections between its neurons to guess more and more accurately.
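As a toy illustration of that predict-and-adjust loop (a sketch of my own, not DeepMind’s code), the following Python trains a single table of connection strengths to guess the next character in a short string. Real models use deep networks over tokens and oceans of text, but the loop is the same in spirit.

```python
import numpy as np

# A toy next-character predictor: one table of connection strengths
# ("logits") from the current character to each possible next character.
text = "the cat and the dog and the hen."
vocab = sorted(set(text))
idx = {ch: i for i, ch in enumerate(vocab)}
V = len(vocab)

logits = np.zeros((V, V))  # the "connections" being tuned
lr = 0.5                   # how strongly each mistake adjusts them

for epoch in range(200):
    for cur, nxt in zip(text, text[1:]):
        i, j = idx[cur], idx[nxt]
        # Predict a probability for every possible next character
        p = np.exp(logits[i]) / np.exp(logits[i]).sum()
        # Nudge the connections toward the character that actually came next
        grad = p.copy()
        grad[j] -= 1.0
        logits[i] -= lr * grad

# After training, the model guesses the likeliest next character
print(vocab[int(np.argmax(logits[idx["t"]]))])  # prints "h"
```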

The neural network is a general-purpose technology that is appearing in more of life, at ever-increasing quality. Like electricity, it’s the sort of technology that may eventually be a component of almost all technologies. At some point it may be difficult to go through an hour of your life without interacting with one of these artificial brains.

More Is Different: The Quality of Quantity

Some people react to such models by saying, “they aren’t intelligent, they are just predicting the next word [or token] in a string of text.” Others say, “they aren’t intelligent, they are just finding correlations in data.” Each of these responses sounds plausible and contains a truth. But in each case, you have a truthful description in miniature of how a large-language model works, with the subtle but crucial addition of the word ‘just’. These types of responses overlook a vital point: quantity has a quality of its own. A million ants produce phenomena that aren’t possible with one ant. A Mexican wave at a football stadium is a different kind of phenomenon from a single person raising their hands. A river may be wet, but a single molecule cannot be wet. As physicist Max Tegmark points out, “the phenomenon of wetness emerges only when there are many molecules, arranged in the pattern we call liquid”.[4]

The American Nobel Prize-winning physicist Philip Anderson would sometimes hear physicists say that other fields of scientific study aren’t researching anything fundamental. Chemistry, biology, earth sciences, social sciences and medicine are really just physics in a different building. In his paper ‘More Is Different’, Anderson rejected this way of thinking:

The behaviour of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear.[5]

Quantity has a quality of its own.

To get a sense of the scale on which modern neural networks operate, consider this from Mustafa Suleyman:

Google’s PaLM [Pathways Language Model] uses so much [computation] that were you to have a drop of water for every floating-point operation it used during training, it would fill the Pacific.[6]

It’s important to understand that large-language models aren’t just memorisation machines. They don’t just store away the internet and copy-paste it back to you. Large-language models don’t have anywhere near the working memory to store the internet in memorised form. They do memorise common connections. The connection between ‘Capital of France’ and ‘Paris’ would certainly be memorised in a large-language model. The primary thing that large-language models do is extract principles, often very deeply buried principles, that allow them to store massive amounts of knowledge in a compressed form.

Consider the example of maths. If a computer were learning addition, it could start by memorising the sum of every pair of numbers. But eventually its memory would be so full of number combinations that other functions would be crowded out. A better approach would be to find some principle, or set of principles, that allows the machine to derive all possible sums without the need for much memorisation. The common term for this is ‘grokking’. Models grok all sorts of things once they grasp a deeper process or principle. When a model groks some truth about the world, performance and accuracy suddenly jump, generalisation improves, and the neuronal structure changes to something simpler and more efficient.
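A crude way to see the contrast (again, an illustrative sketch of my own rather than anything from the sources cited here) is to compare a lookup table of memorised sums with a single carrying procedure that covers every pair of numbers:

```python
# Memorisation: store the answer to every sum ever seen. The table grows
# without bound and says nothing about pairs it has never encountered.
memorised = {}
for a in range(100):
    for b in range(100):
        memorised[(a, b)] = a + b   # already 10,000 entries for two digits

# A grasped principle: one small procedure (digit-by-digit addition with
# a carry) covers every pair, including sums never seen before.
def add(a: int, b: int) -> int:
    total, carry, place = 0, 0, 1
    while a or b or carry:
        d = a % 10 + b % 10 + carry
        total += (d % 10) * place
        carry, place = d // 10, place * 10
        a, b = a // 10, b // 10
    return total

print((123456, 987654) in memorised)  # False: outside the memorised table
print(add(123456, 987654))            # 1111110: the principle generalises
```

The table knows nothing beyond its entries; the procedure generalises to sums it has never seen, which is the kind of leap ‘grokking’ names.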

As time passes, more general models are being trained that are competent to write computer code, recognise images, converse in Hungarian, and do your maths homework. Recognising this, in 2023 a team from Microsoft argued that

the evaluation of the capabilities and cognitive abilities of those new models have become much closer in essence to the task of evaluating those of a human rather than those of a narrow AI model.

A general model that has been trained in such a way has grokked millions of small principles about human language and the world it represents. This gives it the ability to do something as unexpected as writing a proof that there are infinitely many prime numbers, with every line rhyming. Understanding these models requires top-down thinking, closer to psychology than physics: a mapping of the various regions of the machine ‘brain’. Just as we use fMRI machines to see which parts of a human brain are active during various activities, we can try to map out how and where various concepts are stored inside an artificial brain.

The Hofstadter Pivot

Cognitive scientist Douglas Hofstadter used to be quite sceptical of the intellectual power of neural networks, both in his Pulitzer Prize-winning book Gödel, Escher, Bach,[7] and in articles like ‘The Shallowness of Google Translate’. Quite remarkably, in 2023, he not only changed his mind, but admitted that he’d changed his mind:

And my whole intellectual edifice, my system of beliefs … It’s a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed. … People ask me, “What do you mean by ‘soon’?” … I don’t have any way of knowing. But some part of me says five years, some part of me says twenty years, some part of me says, “I don’t know, I have no idea.”

In his new point of view, with his new awareness of the power of neural networks, Hofstadter concludes:

It also makes me feel that maybe the human mind is not so mysterious and complex and impenetrably complex as I imagined … And so, it makes me feel diminished. It makes me feel, in some sense, like a very imperfect, flawed structure compared with these computational systems that have, you know, a million times or a billion times more knowledge than I have and are a billion times faster.

All of this was predicted by the British computer scientist Alan Turing as far back as 1951:

If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position … we should, as a species, feel greatly humbled.

Here we are in 2024, and at least some of us are greatly humbled.

Questions for Christians

The emergence of these powerful new technologies raises enormous questions. Firstly, if we can create intelligence, does that mean that we’ve comprehended it? And if we’ve comprehended intelligence, have we understood what it is to be a person, to be conscious, to be alive? Secondly, if we can explain the mechanics of intelligence, to what extent have we ruled out one of the reasons for believing in the existence of God? Thirdly, if humans are a second-rate form of intelligence, where will they find their identity? What is significant, unique, and valuable about humanity, if cognitively we end up redundant?

These are the kinds of questions we need to be willing to explore with our Bibles open, as I argued in my first article. We Christians need to ponder these questions, not for ourselves alone, for I expect that these will be the sort of discussions many of us will be having with our non-Christian friends, classmates and workmates as the world becomes ever stranger.

This article has been adapted from material in Made in Our Image: God, Artificial Intelligence and You (2024).

[1] N Bostrom, Superintelligence: Paths, dangers, strategies, Oxford University Press, 2016, p 53.

[2] M Suleyman and M Bhaskar, The Coming Wave: Technology, power and the twenty-first century’s greatest dilemma, The Bodley Head, 2023, p 8.

[3] Neural networks aren’t new and weren’t invented by DeepMind. Companies like DeepMind or OpenAI have taken advantage of the immense power of modern supercomputers to train neural networks that are massively larger than was possible decades prior.

[4] M Tegmark, Life 3.0: Being human in the age of Artificial Intelligence, Penguin Books, 2018, p 300.

[5] PW Anderson, ‘More Is Different: Broken symmetry and the nature of the hierarchical structure of science’, Science, 4 August 1972, 177(4047):393.

[6] Suleyman and Bhaskar, The Coming Wave, p 66.

[7] DR Hofstadter, Gödel, Escher, Bach: An eternal golden braid, 20th anniversary edn, Basic Books, 2000, p 20.
