The improbable simulation

let created = `Date (2021, 5, 16) in

I recently saw Sabine Hossenfelder's critical walkthrough of the simulation hypothesis, which reminded me of earlier thoughts I've had on why it doesn't seem to hold.

The simulation needs to be approximate

As Sabine notes, simulating the universe would require the simulation algorithm to be aware of all conscious beings in the simulation, and to simulate only the details relevant for making these beings believe there is no simulation.

My take on why this is essential: simulating the whole universe, with its gigantic number of parallel interactions at each moment in time, would demand so much computational power that it would take roughly the computational power of a universe to compute the simulation of a universe (think computational complexity theory). After all, the matter of the universe is, in its entirety, the greatest parallel computer, running itself. Parallel computing at this scale seems unrealistic, and it is at least unknown how it could be done by a much smaller computer (many, many orders of magnitude smaller).
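To put a number on the intuition, here is a minimal OCaml sketch of the slowdown involved. The quantities are purely hypothetical stand-ins (elementary updates per tick of the universe vs. processing elements of the machine), not part of the hypothesis itself:

  (* Hypothetical slowdown: a machine with machine_ops processing
     elements emulating a universe that performs universe_ops
     elementary updates per tick needs about universe_ops /.
     machine_ops real ticks per simulated tick. *)
  let slowdown ~universe_ops ~machine_ops =
    universe_ops /. machine_ops

  (* slowdown ~universe_ops:1e120 ~machine_ops:1e40 = 1e80 - only a
     machine about the size of the universe gets close to 1. *)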

But let's imagine that you actually had the massively parallel computer that could simulate the universe. In that case, to observe what is happening in the simulation - which must be one of the main reasons for running a simulation to begin with - you would need to frequently inspect all the parallel states of the simulation, e.g. to detect new life coming into existence, or anything else not known to exist beforehand (which is why one runs a simulation at all).

The time complexity of this analysis/observation algorithm would in essence dominate that of the parallel simulation algorithm. So, to make simulating the whole universe within our universe more feasible, the actors running the simulation would have to observe it only rarely while it runs - i.e. they would not get much out of the simulation, which in turn makes it improbable that they constructed the simulation to begin with - a paradox.
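A rough cost model makes the paradox visible. In this sketch (my own toy assumptions, not Sabine's), the parallel update of all n states costs 1 unit of wall-clock time per step, while a serial observer scanning all states for novelty costs n per observation:

  (* Toy cost model: run 'steps' simulation steps, observing every
     'observe_every' steps. The parallel update costs 1 per step; a
     serial scan of all n states costs n per observation. *)
  let total_cost ~n ~steps ~observe_every =
    let simulation = steps in
    let observation = (steps / observe_every) * n in
    simulation + observation

  (* Observing every step costs steps * (1 + n): for astronomical n,
     the serial scans dominate the entire run. *)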

Human centrism

Having established that the simulation would most likely be optimized, one must choose what to optimize for. The idea is that conscious beings, or maybe a very specific subset of conscious beings, are what is being optimized around. So, everywhere you look, everything needed to fill your point of view is approximated, to the point where you don't experience that you are in a simulation.

But if it is specifically conscious beings that are at the centre of the simulation, how do you define consciousness?

  • Are plants, trees etc. conscious in any relevant sense?
  • Is the human race as a 'collective being' conscious?
  • Are some or all animals conscious?
  • When is my own AI algorithm conscious enough for the simulation to choose to simulate it fully?

If we include all kinds of consciousness, and if conscious beings can constitute bigger conscious collectives, then where do we draw the line on what to observe and simulate fully?

But if we then drop the question of consciousness, as it seems hard to draw a tight border around, and instead choose a very specific conscious being: are humans the special conscious beings in this simulation?

A reason for believing this could be that (some might believe) we are close to being able to simulate ourselves. Humans could then be a category of beings of special interest to the actor running the simulation. I.e. human beings would act as a major filter on which kinds of consciousness need to be simulated, which would give the optimizations more potential.

At this point, we have declared ourselves to be very special beings, i.e. beings that are close to becoming their own creators - becoming gods ourselves. But we are not that special; as even Nick Bostrom notes, our species has dominated Earth for only a split second relative to its full history.

This human-centric view of the focus of the potential simulation hints to me that the premise is wrong. Even the focus on conscious beings is human centrism in disguise - we think consciousness is such an important quality because we are conscious ourselves. We know nothing of what patterns of intelligence are possible in the general sense, so why should consciousness be so special, especially in the context of simulating a universe?

Infinite layers of optimizations

That human beings are part of the subset of the universe that is simulated in full is an essential premise of the simulation hypothesis, as its estimate of how likely it is that we are in a simulation is based on the idea of intelligent beings (like us humans) simulating universes inside universes recursively. If we were not simulated in full, then we wouldn't exist; we would be optimized away into an approximation.

But how would optimizing the nested simulations within the simulation work? The outer simulation would also need to be non-lossy in its simulation of the conscious beings of the inner simulation - else the chance of us being inside a simulation is minimized. But the inner simulation also needs its own optimization to run realistically inside its simulated universe. So each nested simulation would be optimized by all of its parent simulations as well as by itself - O(n) optimizations for the deepest layer, and O(n^2) optimizations in total, where n is the number of nested simulations, and n tends towards infinity.
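As a sanity check of this counting, a small OCaml sketch, assuming each simulation at nesting depth k is optimized by its k parent simulations plus itself:

  (* The layer at depth k (0-indexed) gets k + 1 optimization
     passes: one from each ancestor, plus its own. *)
  let optimization_passes n =
    List.init n (fun k -> k + 1) |> List.fold_left ( + ) 0

  (* optimization_passes n = n * (n + 1) / 2, i.e. O(n^2) passes in
     total - with n unbounded as the nesting deepens. *)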

This doesn't sound feasible to run on any computer. And the probability of the optimization algorithms disagreeing on what to optimize for would be extremely high.

Spooky God

If consciousness emerges from non-consciousness, then the simulation algorithm would have a strong reason to simulate this happening - which means that everything not conscious would need to be simulated non-approximately anyway, which, as argued, can't be done.

Otherwise, the hypothesis rests on the assumption that all possible conscious beings (at least those whose creation doesn't stem from other conscious beings) already exist in the simulation in its initial state.

This would mean that another premise of the simulation hypothesis is that the actor running the simulation designed the initial conscious beings in it. A spooky God peeks in - the simulation hypothesis, optimized around conscious beings or humans, is a form of creationism.

Relating to the simulation argument

Nick Bostrom argues that one of the following three cases must be true:

  1. "The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero", or
  2. "The fraction of posthuman civilizations that are interested in running simulations of their evolutionary history, or variations thereof, is very close to zero", or
  3. "The fraction of all people with our kind of experiences that are living in a simulation is very close to one."

From what I've argued, the first case is highly probable:

  • in the case of a full simulation: it doesn't seem possible to simulate our own universe within our own universe - at least given our current knowledge of what is computationally possible. Neither classical nor quantum computing is known to offer a practical way of simulating our universe exactly as we experience it. This implies a low probability of further full simulations within full simulations.
  • in the case of an approximate simulation: this is made improbable by the simulation hypothesis's implicit dependence on the probability of simulations of universes within universes, as it is either too expensive to optimize for the conscious beings of all nested universes, or they are optimized away.

Dependence on qualities of the nested simulations

But I will argue that the most essential problem for the probability of the simulation hypothesis is that the simulation argument implicitly depends on certain probabilities concerning simulating universes recursively. It doesn't make sense to assign probabilities to the simulation argument before arguing for probable answers to what follows.

In the case of us being in a full simulation: for a high probability of being in the simulation, it is essential that we are not limited to simulating universes with lesser computational capabilities within our own universe - else there would be a limit to the number of nested simulated universes, which lessens the probability of being in one. This point also relates to the second case of the simulation argument, about simulating one's ancestors: will it always be possible to make interesting simulations of ancestors in universes with less powerful computational potential?
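To illustrate why such a limit matters, here is a hedged sketch where each universe can only dedicate a fraction f < 1.0 of its computational capacity to the universe it simulates; c0, f and threshold are hypothetical parameters, not quantities from the argument itself:

  (* Compute available at nesting depth d decays geometrically. *)
  let compute_at_depth ~c0 ~f d = c0 *. (f ** float_of_int d)

  (* The deepest layer that still has the minimum compute needed to
     host fully simulated conscious beings. *)
  let max_nesting_depth ~c0 ~f ~threshold =
    int_of_float (log (threshold /. c0) /. log f)

  (* Any f < 1.0 yields a finite max_nesting_depth, capping the
     number of nested simulated universes. *)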

Most importantly, the possibility of approximate simulations in itself lessens the probability of being in a full simulation. Say some simulated universes simulate similar universes fully. Then, by the possibility of deeper nested universes simulating by approximation - which would probably approximate all deeper nested consciousness away - the 'nestedness' of simulated universes containing non-approximated conscious beings would be limited, which makes the simulation hypothesis improbable whether or not full simulation is possible.
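The same point can be put as a single hypothetical probability: if each simulating civilization independently chooses full simulation with probability p (an assumed parameter, not something the argument supplies), the chance that a whole chain of depth d is simulated fully collapses exponentially:

  (* Probability that every layer in a nesting chain of depth d
     simulates fully, each doing so independently with probability
     p (hypothetical). *)
  let p_full_chain ~p d = p ** float_of_int d

  (* p_full_chain ~p:0.9 20 is about 0.12 - even optimistic layers
     leave deep, fully simulated nesting improbable. *)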