This is a guest post by one of my readers, Mason Green, in which he aims to provide a novel argument for the existence of God within the framework of Ray Solomonoff’s theory of inductive inference. The argument (and the background knowledge it presumes) is outside my field of expertise. But it will definitely be of interest to some readers and Mr. Green would welcome any constructive feedback you may have.
And now, without further ado…
One popular form of reasoning, Solomonoff’s method of induction, presupposes that any set of data one might observe should be generated by a computable process—that is, there should be some Turing machine (an abstract computer program) that outputs the data.
Solomonoff induction allows us to formalize Occam’s razor as a guiding principle: we assign each possible program a prior probability of 2^−K, where K is its Kolmogorov complexity (its length in bits). That is, shorter programs are preferred as explanations over longer ones.
We can use this to predict future values for data sequences. For example, suppose we have a sequence whose initial values are 1,0,1,0,1,0,1,0,1…, and we want to predict whether the next number is a 0 or 1. Solomonoff induction tells us that the next value should be a 0, because a simple program (“print ‘1,0’ repeatedly”) would produce a 0, whereas it would take a more complex program (e.g. “print 1,0,1,0,1,0,1,0,1,1” repeatedly) to produce a 1. When used this way, Occam’s razor becomes Solomonoff’s lightsaber.
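The prediction in the paragraph above can be sketched in code. This is only a toy illustration, not the real (uncomputable) Solomonoff prior: it considers just the two candidate programs mentioned in the text, assigns each an assumed description length K (the bit counts here are placeholders, not true Kolmogorov complexities), weights the survivors by 2^−K, and lets each vote for its own next symbol.

```python
# Toy Solomonoff-style prediction over the sequence 1,0,1,0,1,0,1,0,1,...
# Only two hypothesized "programs" are considered, with made-up lengths K.

observed = [1, 0, 1, 0, 1, 0, 1, 0, 1]

def repeat(block):
    """Return a function that emits the first n symbols of block repeated forever."""
    def gen(n):
        return [block[i % len(block)] for i in range(n)]
    return gen

# Hypothesis A: "print '1,0' repeatedly" (short; assume K = 5 bits).
# Hypothesis B: "print '1,0,1,0,1,0,1,0,1,1' repeatedly" (longer; assume K = 12 bits).
hypotheses = {
    "A": (repeat([1, 0]), 5),
    "B": (repeat([1, 0, 1, 0, 1, 0, 1, 0, 1, 1]), 12),
}

# Keep only hypotheses consistent with the observed prefix, weighted by 2^-K.
posterior = {}
for name, (gen, K) in hypotheses.items():
    if gen(len(observed)) == observed:
        posterior[name] = 2.0 ** (-K)
total = sum(posterior.values())
posterior = {name: w / total for name, w in posterior.items()}

# Each surviving hypothesis predicts its own next symbol.
next_probs = {0: 0.0, 1: 0.0}
for name, weight in posterior.items():
    gen, K = hypotheses[name]
    next_probs[gen(len(observed) + 1)[-1]] += weight

print(next_probs)  # the simpler program dominates, so 0 is heavily favored
```

Both hypotheses fit the data seen so far, but the 2^−K weighting makes the shorter program carry almost all of the probability mass, so the prediction is 0.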
There is, in fact, a very short computer program or algorithm that runs every possible computation. It does this by running the first step of program 1, then the first step of program 2, then the second step of program 1, then the first step of program 3, then the second step of program 2, then the third step of program 1, etc.
Since the number of possible computations is only countably infinite, the above computer program will (given infinite time and memory storage) compute everything that can possibly be computed. Such a computer program has been proposed by Bruno Marchal and Juergen Schmidhuber, among others—Marchal has termed it a universal dovetailer. If we accept that the universe we inhabit is computable (and Solomonoff induction implies that we should), this means that the universal dovetailer will eventually output our universe (along with a lot of other stuff) if it is allowed to run long enough. Of course, a physical computer wouldn’t have access to infinite time and memory storage, but God would.
The universal dovetailer is very simple (it has low Kolmogorov complexity), and thus it should have a high prior probability under Solomonoff induction. However, the Kolmogorov complexity of the dovetailer is not the only variable we need to take into account. We also need to consider the measure that universes like ours have in its output: what fraction of the universal dovetailer’s resources is spent computing universes like ours? This fraction is extremely low—the universal dovetailer is a bit like Jorge Luis Borges’ Library of Babel, in that almost all of the things it computes are “junk” rather than inhabitable, intelligible worlds.
Thus, when trying to reason about the origin of our universe, we should adopt a modified form of Solomonoff induction that assigns each hypothesis a probability proportional to 2^−K · M, where M is the measure that universes like ours have under the hypothesis being considered.
The Alternative: Alpha
One option would be to switch to a different program with much higher M. Could there be a program which only (or mainly) outputs habitable worlds? Perhaps. It would certainly have to be more complex than the universal dovetailer, but the decrease in 2-K might still be outweighed by the increase in M.
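The trade-off described here can be illustrated with a short calculation. Every number below is a made-up placeholder (neither complexity is actually known); the point is only that a larger K can be outweighed by a sufficiently larger M under the modified prior 2^−K · M.

```python
# Illustrative comparison of the modified prior 2^-K * M for two candidate
# explanations. The complexities (in bits) and measures are assumptions
# chosen purely to show the trade-off, not derived from anything.

def modified_prior(K_bits, M):
    """Modified Solomonoff prior: 2^-K times the measure M of worlds like ours."""
    return 2.0 ** (-K_bits) * M

# Assumed: the dovetailer is very short but gives worlds like ours only a
# tiny sliver of measure; "Alpha" is longer but concentrates its measure.
dovetailer = modified_prior(K_bits=100, M=1e-40)
alpha      = modified_prior(K_bits=200, M=0.9)

print(dovetailer, alpha, alpha > dovetailer)
```

With these (assumed) numbers, Alpha’s hundred extra bits of complexity cost it a factor of 2^−100, but its vastly higher measure more than compensates, so it comes out ahead.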
In order to only (or mainly) output habitable worlds, such an algorithm—let’s call it Alpha—would need to be very intelligent. (Intelligence is something the “standard” universal dovetailer lacks, since it simply outputs all possible computations without discerning which ones are truly worth outputting.) However, it’s plausible that even a very simple program—only somewhat more complex than the universal dovetailer itself—could bootstrap itself to unboundedly high levels of intelligence, use that intelligence to determine which kinds of computations give rise to habitable worlds, and then focus all its resources on outputting those worlds.
Having intelligence also suggests that Alpha may possess personhood (unlike the universal dovetailer), along with personal characteristics such as benevolence. For example, if it determines that certain types of universes contain less suffering, it could focus on creating those. (If we were to determine that our universe had much less suffering than an “average” inhabited universe, we could infer that Alpha is benevolent.) Furthermore, Alpha could become curious and wonder what it was like to be an inhabitant of one of the worlds it created; it might then decide to write itself into that world—which starts to sound an awful lot like the idea of Incarnation.
Furthermore, Alpha may also create afterlives for the inhabitants of its worlds. (The universal dovetailer would do this too, in fact—which is why one needn’t necessarily believe in a personal God to believe in an afterlife. However, Alpha would assign a much higher measure to such afterlives, so if you find yourself in one after you die, it’s much more likely you got there because of Alpha—i.e., a personal God.)
I would argue that the belief that the origin of our universe is best described by an Alpha-type algorithm is functionally equivalent to theism. This doesn’t necessarily imply that God is a computer program, just that God could potentially be described by one (the distinction between the two being philosophical in nature). Thus, rationalism and Solomonoff induction are not incompatible with belief in a personal God.