Fitting the perpetual state of this blog, I started writing this post in November of 2018 but never finished it. However, I recently started listening to the Artificial Intelligence Podcast, and it got me thinking about the topic again, so I decided to wrap it up.
I recently caught the tail end of an episode of This American Life on NPR called “Where There Is a Will.” In Act 2 of this episode, show producer David Kestenbaum explores the idea that there is no such thing as free will.
It’s a difficult concept to fully grasp. When I try to think through the full extent of what it means, my head starts to hurt. It’s sort of like trying to think about the universe going on forever, or what might exist beyond it if it does have a finite edge.
Free will is what makes us as humans believe we are unique. The ability to make a distinct choice in your actions means you aren’t just a pinball bouncing between the bumpers of existence. You have control over your path. There is something comforting about feeling like you have a say in what happens to you—that your life doesn’t have a pre-defined destiny that you have no control over. But when you really think about it, the concept of free will doesn’t seem compatible with what we know about physics and the human body.
The brain is an immensely complicated black box full of billions of neurons and trillions of synapses connecting them. Ultimately we’re just a blob of atoms interacting in ways governed by physics. When you do something—anything really—it’s just your body reacting to outside stimuli.
Someone shines a bright flashlight at your face. Your pupils shrink. You close your eyelids. You put your hand up between the light and your face. You tell the person to stop pointing the flashlight at you. Each of these reactions has a varying degree of perceived autonomy and amount of outside input that led to it.
Your pupils adjusting and your eyelids closing are instinctual reactions, coded in our DNA. You didn’t really make a decision to do those things; they just naturally happened. Putting your hand up to block the source of the light from farther away is a learned reaction. Babies don’t do it straight out of the womb, but over time you learn to control your extremities and discover various ways to use them as tools.
But that process of learning is still just physical reactions. Your brain remembers conditions that, if met, trigger a way to respond, and each experience slightly adjusts that ruleset.
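To make that idea a little more concrete, here’s a minimal toy sketch in Python of a stimulus-response ruleset that gets nudged by each experience. The stimulus and response names are made up for illustration; this is a thought experiment in code, not a claim about how brains actually work.

```python
import random
from collections import defaultdict

class StimulusResponder:
    """A toy 'brain': a table of stimulus -> response weights,
    nudged slightly after every experience."""

    def __init__(self, responses):
        # Start with no preference: every response equally weighted.
        self.weights = defaultdict(lambda: {r: 1.0 for r in responses})

    def react(self, stimulus):
        # Pick a response in proportion to its learned weight.
        w = self.weights[stimulus]
        return random.choices(list(w), weights=w.values())[0]

    def learn(self, stimulus, response, outcome):
        # Each experience slightly adjusts the ruleset: good outcomes
        # reinforce the response, bad outcomes weaken it.
        delta = 0.1 if outcome > 0 else -0.1
        new_weight = self.weights[stimulus][response] + delta
        self.weights[stimulus][response] = max(0.01, new_weight)

# Hypothetical example: "learning" to block a bright light.
brain = StimulusResponder(["squint", "raise hand", "do nothing"])
for _ in range(100):
    action = brain.react("bright light")
    # Pretend raising a hand is the only action that helps.
    brain.learn("bright light", action, outcome=1 if action == "raise hand" else -1)

print(brain.react("bright light"))  # now very likely "raise hand"
```

Nothing in that loop decides anything in the way we usually mean it, yet from the outside the behavior looks like a preference was formed.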
One might say free will is just shorthand for the fact that far too many variables feed into any specific action for us to know or evaluate all the little details that explain why someone did what they did.
I won’t dive into compatibilism and determinism here; there are many people significantly smarter than me who can explain them better. But understand that if there is no such thing as free will, it complicates much of how we’ve structured society. For example, it completely undermines the legal system: our laws are built on the premise that we have the ability to choose whether to follow them.
What I find fascinating—and the reason I wanted to write this post—is the implication that the non-existence of free will would have for the concept of artificial intelligence.
Artificial General Intelligence
Current AI systems are typically referred to as narrow AI: they focus on a single task or operate in a very controlled environment. Examples include programs built for a specific game, such as AlphaGo or AlphaStar. Even Tesla’s Autopilot, which is iterating quickly on self-driving, could be considered narrow AI because of its very specific focus.
But the long-term dream/goal of AI researchers is artificial general intelligence (AGI), which could be described as human-level intelligence in a machine. There isn’t a fully agreed-upon definition of AGI, but I think a good general concept is something that can not only solve or answer any type of question or problem presented to it, but can also pose new questions or problems on its own.
If you accept the idea that humans do not have free will and are just machines responding to stimuli, then I don’t believe there is a magic switch that will enable artificial general intelligence. It will be a gradual improvement over time, similar to how a baby grows and learns about the world and how to live in it.
Is this incompatible with the singularity theory—that an AGI would hit a point where its ability to train itself and learn is so fast that it becomes a runaway train, moving beyond our comprehension and ability to control it? Perhaps not, but it might be difficult to identify the moment when it surpasses us.
Can a machine with enough cognitive ability ever truly be aware that it’s a robot?
If there is no free will, then humans are essentially robots made of organic material, yet most of us would vehemently deny it. I think that’s because everything we are taught and believe is based on the idea that life is special, not just a combination of chemical and physical reactions governed by the laws of physics. The alternative is too complex for us to fully grasp, and it upends everything we base our society on.
An AI programmed from day one to know that it is nothing more than a “dumb” machine reacting to stimuli, with no free will, would know it’s a robot. But would it meet our criteria for AGI? Where is that line drawn?
Is it just the Turing test?
Is AGI just the point at which a human can no longer distinguish between “free will” and a complex decision tree?
I don’t really have answers, but these are fascinating things to think about as we venture into the future.
Any sufficiently advanced technology is indistinguishable from magic.
Arthur C. Clarke