DALL·E Illustration from the post
I’ve spoken to you several times recently about free-will according to mathematician-physicist Stephen Wolfram. I spoke about it at the CNIL on November 28, and I spoke about it again here a few days ago in a post entitled Le libre-arbitre: réalité ou illusion? Stephen Wolfram and the “epistemic artifact”.
Now, a few days ago, one of you wrote to me to point out what Wolfram thinks of this question according to the Pi chatbot from the firm Inflection AI, and I was taken aback to see that it doesn't sound like what I'm saying.
A few attempts at conversation with this chatbot quickly convince me of its low level: it seeks to please the user but barely rises above that. However, I'm determined to find out for sure, so I ask GPT-4 a few questions. Here again, surprise, surprise, what it says about Wolfram's conception of free-will is not what I say about it. The difference between the two may seem subtle to some, but to me it's fundamental. I'll explain in detail below, but for those of you who have been following the debate so far, here's a brief explanation of the difference:
– With Wolfram, what he calls computational irreducibility can give rise to behaviour that we perceive as free-will.
– With me, what Wolfram calls computational irreducibility accounts for our subjective experience of having free-will.
Here’s the difference: Wolfram’s computational irreducibility leads to behaviours that we can (or “could”) interpret as manifestations of free-will – this is a reading of human subjects in exteriority, whereas my computational irreducibility accounts perfectly for our subjective experience of free-will – this is a reading of human subjects in interiority.
This means that, by moving from a reading in exteriority to a reading in interiority, we move from a “could be interpreted as…” to an “accounts for…”.
So much for the program; now for the substantiated and, I hope, convincing demonstration.
Here’s what I wrote on March 7:
When you’re […] a subject within a “computationally irreducible” process, you know two things about your own life: as a scientist, you know that it follows an ineluctable course, but as a subject you know that you don’t know entirely what’s going to happen from one moment to the next, and that you have to … make a decision. One can say to oneself “I’m going ahead”, and the next step in a “computationally irreducible” process will be that I’ve gone ahead, or one can say to oneself “I’m going to sit on my ass, since the process is deterministic anyway”, and this decision to do nothing will no less have been experienced as a deliberate choice, which means that being a subject in a “computationally irreducible” deterministic process is necessarily experienced as the exercise of free-will.
The computational irreducibility of a deterministic process means that we can’t predict how it will behave; we have to follow it step by step to be able to describe a posteriori what its inevitable course was. And this lack of predictability means that the person whose life’s journey is such a process is at all times confronted with choices, the feeling of which is that they are free, because such is by necessity the nature of a choice: to appear free.
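To make this “follow it step by step” idea concrete, here is a minimal illustrative sketch, mine rather than Wolfram’s and with no claim to be more than an illustration: a few lines of Python running his elementary cellular automaton Rule 30, a completely deterministic rule that Wolfram regularly cites as a typical case of computational irreducibility. As far as anyone knows, there is no shortcut here: to know what the pattern looks like after thirty-two steps, you have to compute all thirty-two steps.

# Elementary cellular automaton Rule 30 (an illustrative sketch, not code taken
# from Wolfram's book). The rule is an 8-entry lookup table: each cell's next
# value depends only on itself and its two immediate neighbours.
RULE = 30  # binary 00011110: the outputs for neighbourhoods 111, 110, ..., 000

def step(cells):
    """Apply Rule 30 once to a row of 0/1 cells (the edges are treated as 0)."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

# Start from a single "on" cell and simply follow the process, step by step:
# nothing in the rule hints at the intricate pattern it ends up drawing.
row = [0] * 31 + [1] + [0] * 31
for _ in range(32):
    print("".join("#" if c else "." for c in row))
    row = step(row)

Every run produces exactly the same pattern, which is precisely the point discussed below: the process is perfectly deterministic, yet the only way to know what comes next is to compute, or live through, the next step.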
Before moving on to what Wolfram himself says, let’s look at how the machine characterises his point of view.
GPT-4:
- Wolfram’s perspective on free-will is closely related to his ideas about computational irreducibility. He posits that if a system is computationally irreducible, the only way to determine its outcome is to run the computation itself, which implies that predicting the future of such a system is inherently difficult, if not impossible.
- From this viewpoint, free-will could be seen as a manifestation of computational irreducibility. If our brains and decision-making processes are computationally irreducible, then they would inherently possess a form of unpredictability that could be likened to free-will.
- However, it’s important to note that this perspective doesn’t necessarily grant a metaphysical sense of free-will but rather frames it within the context of computational complexity and unpredictability. In Wolfram’s view, even though our actions might be governed by deterministic rules at a fundamental level, the complexity and computational irreducibility of these rules can give rise to behaviour that we perceive as free-will.
Note that “our brains and decision-making processes” possess “a form of unpredictability that could be likened to free-will”, i.e., as I’ve already pointed out, a reading in exteriority. Moreover, it is “complexity” that “can give rise to behaviour that we perceive as free-will”, not “unpredictability” that induces the subjective feeling of free-will in interiority.
Let us now turn to Wolfram himself. In his 770-page opus A Project to Find the Fundamental Theory of Physics (2020), he has very little to say on the subject:
“… how even deterministic underlying rules can lead to computationally irreducible behaviour that for all practical purposes can seem to show ‘free will’” (p. 550).
He has a lot more to say in a video published on October 1, 2022 on the Closer to Truth YouTube channel: “Stephen Wolfram – Do Humans Have Free Will?”
Robert Lawrence Kuhn:
Stephen, one of the most fascinating questions that we can ask is about human free-will. And it’s fascinating because many scientists who do not believe in anything supernatural, and therefore believe in a deterministic flow of science, have to deal with the nature of free-will. So if you have determinism in science, how can you have free-will? And there are some complex philosophical manoeuvrings, shall I put it, which enable some to say that we can have free-will. And sometimes it’s about quantum indeterminacy, but it’s a very intricate philosophical discussion as you look at the concept of free-will. From your perspective, how do you think we should address that question?
Stephen Wolfram:
Well, so if we think of the computer, for example, we often say, we often imagine, that in a sense it has free-will: it does all these strange things that we don’t expect. We kind of attribute to it kind of almost human qualities of acting in a way that’s free in its will, so to speak.
But one of the things that I’m curious about is when we know that the system ultimately has perhaps even quite simple underlying rules, is it possible that one could go from perhaps quite simple underlying rules to behaviour that is complex enough that one could imagine that that behaviour is free of those underlying rules?
So, you know, one has the notion it’s like the robots of 1950s science fiction or something like that, you know, they have simple logical rules that are driving them, so they do these very simple to understand dumb things, right?
Well, one of the things that we’ve discovered is that if you look at all the possible simple rules that you might use to drive a system, so to speak, that many of those rules, in fact, don’t lead to simple behaviour: they lead to extremely elaborate and complex behaviour. Behaviour that is so complex that it’s very hard to predict it.
In fact, we even know that there’s some fundamental phenomenon that I call “computational irreducibility” that says that when you just sort of watch the unfolding of the rules of a system, that there isn’t a way to sort of reduce the computational effort needed to find out what the system will do. Essentially, the only way to find out what the system will do is just to follow each step and see what it does. And so in that sense, that is sort of a place where, when you look at one of these systems, it appears to be behaving in a sense as if it has free-will: it appears to be behaving in a way that is so complex that we don’t recognise the simplicity of its underlying rules. And as a practical matter, when we ask about, you know, let’s say, a moth that keeps repeatedly, you know, banging against, you know, a window or something, it doesn’t seem to have free-will. And, in a sense, it’s because we can readily predict what it’s going to do: we can sort of reduce the computation that it’s doing to just say it’s just going to keep on doing that.
What we see in many of these systems that one can study in the computational universe of possible systems is that there isn’t that kind of predictable simplicity. Instead there is an irreducible complexity to what the system does. And I think that that’s kind of the essence of what we see as being kind of free-will in the systems that we study.
Robert Lawrence Kuhn:
But it’s not clear that free-will and irreducible complexity are the same thing. I mean, you can have something that’s very complex and you cannot define it in any simple equation. But that, by its very nature, doesn’t make it free-will, does it?
Stephen Wolfram:
Well, I think that the question, the sort of the bottom level of that question, is: “Does the universe operate according to the kinds of rules that I’m talking about, or is there something that sort of comes from outside, that mixes things up?”
So one of the questions, and this relates to some issues of responsibility and its relationship to determinism and free-will and so on, is: “Are we able to actually capture the complete rules for a system, or do we need something coming from outside of those rules, outside of the system, kind of kicking the system to determine what it will do?”
And so one of the things that’s sort of surprising, perhaps, is that it can be the case that we can know the complete rules for a system. Perhaps one day we’ll know the complete rules for the universe. Yet it still can be the case that the system in fact operates as if it is free of those rules, in the sense that there is some irreducible distance between the underlying rules and the actual behaviour of the system. But if you play that system over and over again, you will get exactly the same thing: it will do the same thing.
Robert Lawrence Kuhn:
So there is in fact … it would be impossible for it to …
Stephen Wolfram:
That’s correct.
Robert Lawrence Kuhn:
So to me that is not free-will.
Stephen Wolfram:
Well, I think that that may be the way that we work [laughter]. I think that is the way we work. So if you attribute to us free-will then …
Robert Lawrence Kuhn:
I’m saying that’s an open question. The premise that you start with, and many people start with a similar premise, is that the world is determined, and they therefore try to justify a kind of artificial free-will, and that may be around here. So what we may be concluding is that the human sense of free-will that we have is no different from what you are talking about in terms of the rules generating this kind of complexity. But they’re both not free.
Stephen Wolfram:
Right! Well, so then the question is: “Is there anything that is free in our universe?” And, really, to answer that, we have to ask the question of “Are all the ultimate rules for our universe deterministic?”
If they are, then it could be the case that we would conclude, you know, that if the rules, the deterministic rules were fairly simple, then it would have to be like, you know, the old science-fiction robots, and we’d never see anything that we would even imagine was free-will. The, I think, more remarkable thing is that it is in fact possible for there to be rules, even quite simple rules that are completely deterministic yet the behaviour is complex enough that it has all of the properties that we would normally attribute to something that seems to be free of those underlying laws. And my guess is that as we progress with our technology in our ability to understand how brains work and so on, that this kind of interpretation of what we think of as a phenomenon we call free-will is going to become more and more relevant because we’re going to be seeing this chain of neurones that does this and that and the other, and we’re going to be asking ourselves “How can it be the case when there is this deterministic set of processes that we get something which to us is a phenomenon like free-will?” And I think this idea of computational irreducibility is the core of what allows us to understand how even when we know the underlying rules, we can still have something that to us seems like the phenomenon of free-will.
What does Wolfram say, in a nutshell? A highly complex deterministic system “appears to be behaving in a sense as if it has free-will”; “[a] behaviour [can be] complex enough that it has all of the properties that we would normally attribute to something that seems to be free of those underlying laws”; “this idea of computational irreducibility is the core of what allows us to understand how even when we know the underlying rules, we can still have something that to us seems like the phenomenon of free-will”.
Is this conclusive? I think not. But can the notion of “computational irreducibility” explain the subjective feeling of free-will? I believe I’ve shown it is able to do so.