The Simulation Hypothesis, put forward by Nick Bostrom, lays out an argument about the underlying nature of reality, claiming that at least one of the following must be true:
- “The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero”, or
- “The fraction of posthuman civilizations that are interested in running simulations of their evolutionary history, or variations thereof, is very close to zero”, or
- “The fraction of all people with our kind of experiences that are living in a simulation is very close to one”
According to Bostrom, unless you assume that posthuman civilizations are impossible, or that such civilizations lack the incentive to run simulations, we are likely inhabiting a simulation.
This article asks the question: “What can evolutionary theory tell us about our reality when considering the Simulation Hypothesis?” It isn’t intended to be read as rigorous, and is written by a layman. But it covers an area of inquiry I haven’t seen explored before, so I felt it may be a useful starting point for others smarter than myself to consider.
What is a simulation?
Bostrom leaves the definition of a simulation out of his argument. However, for this discussion, it’s useful to mention some characteristics of a simulation the Hypothesis assumes.
First, it assumes a simulation is embedded in exactly one parent: either another simulation or base reality. A simulation cannot be embedded in multiple simulations.
The Hypothesis assumes that there is a condition leading to some simulations spawning high fidelity sub-simulations. (The causal origin of these is not relevant.)
We can give a name to the probability that a simulation will spawn high-fidelity sub-simulations which, in turn, will be able to do the same. Let’s call it subsimulatability. For example, if simulation A can spawn simulation B, but B will always terminate the chain, A has no subsimulatability. If, however, there is some probability that B will not terminate the chain, A is said to have subsimulatability.
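One way to make subsimulatability concrete (my framing, not from Bostrom) is as the survival probability of a branching process: if each simulation spawns a random number of sub-simulations, the probability that the chain never dies out satisfies a simple fixed-point equation. The Poisson offspring distribution and the numbers below are illustrative assumptions.

```python
# Hypothetical sketch: model each simulation as a node in a Galton-Watson
# branching process that spawns a Poisson(mean_children) number of
# sub-simulations. Its "subsimulatability" is the probability the chain of
# descendants never terminates, the fixed point of s = 1 - exp(-m * s).
import math

def subsimulatability(mean_children: float, iters: int = 200) -> float:
    s = 1.0  # start at the upper bound and iterate toward the fixed point
    for _ in range(iters):
        s = 1.0 - math.exp(-mean_children * s)
    return s

# Below one expected sub-simulation per simulation, every chain eventually
# terminates; above one, some chains continue forever.
print(round(subsimulatability(0.8), 3))  # prints 0.0
print(round(subsimulatability(1.5), 3))  # prints 0.583
```

The threshold at one expected child mirrors the Hypothesis’s dichotomy: either chains of simulations almost surely die out, or they proliferate without bound.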
The basic principles of evolutionary theory generalize to any system with certain dynamics. A simplified evolutionary model is:
- A system is composed of discrete entities.
- The entities exist within a stateful environment.
- Entities have the potential to replicate themselves.
- Entities have properties called traits.
- When replicating, the traits can vary probabilistically with a high bias towards the parent’s traits.
- Replication is probabilistic and transactional. It either succeeds or fails.
- The probability of successful replication for an entity is a function of the entity’s current traits and the state of the environment and other entities.
- If these dynamics are present, one can expect the fitness of newly replicated entities to improve over many replication events, meaning their traits yield high probabilities of successful replication. This trend towards fitness is often called pressure. (There are many books’ worth of caveats, but this is a simplified model.)
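The bulleted model above can be sketched in a few lines of code. Everything here is an illustrative assumption: one numeric trait per entity, a static environment reduced to a single optimum, and Gaussian variation on replication.

```python
# A minimal sketch of the simplified evolutionary model: entities carry one
# numeric trait, replication is transactional (succeeds or fails) with a
# probability depending on the trait and the environment, and children
# inherit their parent's trait with small random variation.
import random

random.seed(0)

ENV_OPTIMUM = 5.0  # the "stateful environment", reduced to one target value

def replication_prob(trait: float) -> float:
    # Fitness: the closer the trait is to the optimum, the more likely
    # replication is to succeed.
    return max(0.0, 1.0 - abs(trait - ENV_OPTIMUM) / 10.0)

def next_generation(population: list) -> list:
    children = []
    while len(children) < len(population):
        parent = random.choice(population)
        if random.random() < replication_prob(parent):      # transactional
            children.append(parent + random.gauss(0.0, 0.3))  # biased variation
    return children

population = [random.uniform(-5.0, 5.0) for _ in range(200)]
initial_fitness = sum(map(replication_prob, population)) / len(population)

for _ in range(50):
    population = next_generation(population)

final_fitness = sum(map(replication_prob, population)) / len(population)
# After many replication events, mean fitness has risen: the "pressure".
```

Running this, `final_fitness` ends up well above `initial_fitness`, which is the whole content of the last bullet: no entity plans anything, yet the population drifts toward traits that replicate well.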
If we assume the third statement of the Simulation Hypothesis to be true, then one can ask if the resulting overall system dynamics appear evolutionary. If so, one can start viewing the final state of the world through the lens of fitness.
The structure of such a reality forms a reality tree, where the root of the tree is “base reality.” Below a given node of the tree, child nodes represent spawned sub-simulations.
There seems to be a relatively clear mapping to the various parts of an evolutionary system. Simulations themselves are entities. A simulation may or may not spawn one or more sub-simulations. The sub-simulations will vary, and this variance will affect whether each sub-simulation goes on to spawn sub-simulations of its own.
Based upon this, there’s a reasonable argument that, if we are in a simulation, it exists within a system with evolutionary dynamics. As such, claims about evolutionary pressure shaping the reality we inhabit may hold weight.
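The reality tree described above is an ordinary rooted tree, and can be modeled directly. The class and method names below are my own, purely for illustration.

```python
# A bare-bones data model for the reality tree: each node is a simulation,
# children are spawned sub-simulations, and the root is base reality.
from dataclasses import dataclass, field

@dataclass
class Simulation:
    name: str
    children: list = field(default_factory=list)

    def spawn(self, name: str) -> "Simulation":
        """A simulation spawning a sub-simulation adds a child node."""
        child = Simulation(name)
        self.children.append(child)
        return child

    def depth_of(self, target: "Simulation", depth: int = 0):
        """How many levels below this node the target simulation sits."""
        if self is target:
            return depth
        for child in self.children:
            found = child.depth_of(target, depth + 1)
            if found is not None:
                return found
        return None

base = Simulation("base reality")
a = base.spawn("A")
b = a.spawn("B")
print(base.depth_of(b))  # prints 2: B is two levels below base reality
```

The constraint from earlier, that a simulation is embedded in exactly one parent, is what makes this a tree rather than a general graph.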
Subsimulatability as Fitness
Let’s move through a few claims. The first:
The subsimulatability of a simulation is unknowable at the time it is spawned.
This falls out if we assume simulations are chaotic and must be simulated to see their emergent characteristics. If so, at the time a sub-simulation is spawned, its subsimulatability can only be estimated, and the estimate improved over time.
By the time the inhabitants of a simulation learn to spawn sub-simulations, they understand the subsimulatability of their own simulation better than its creators did when they created it.
Insofar as simulation inhabitants can observe the history of their own simulation, they can analyze the effects initial conditions had on their own ability to ultimately sub-simulate. This level of knowledge was not available to the creators of their simulation when they created it.
Given this claim, we can assume that those creating sub-simulations will use this knowledge to design simulations with improved subsimulatability relative to the one they inhabit. Because of this:
The subsimulatability of a sub-simulation may be greater than that of its parent simulation.
As sub-simulations are spawned down the tree, the inhabitants creating them can apply knowledge gained from observing their own simulation, and thereby design for improved subsimulatability.
If we are in a simulation, the probability that we are in a sub-simulation of another simulation, not base reality, is close to one.
If subsimulatability can increase in sub-simulations, then the reality tree will have an ever-increasing number of levels. If we are inhabiting a random node in the tree, we should assume we are somewhere near the end of a long chain of sub-simulations, not at the root or an early level. So our parent is a simulation, not base reality.
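The arithmetic behind "a random node is deep" is worth making explicit. In a tree where each simulation spawns some number of sub-simulations, level counts grow geometrically, so the late levels hold almost all of the nodes. The branching factor and depth below are made-up numbers for illustration.

```python
# In a reality tree where every simulation spawns `branching` sub-simulations,
# level d contains branching**d nodes, so the deepest levels dominate the
# total node count, and a uniformly random node is almost surely far from
# the root.
def fraction_in_last_k_levels(branching: int, depth: int, k: int) -> float:
    counts = [branching ** d for d in range(depth + 1)]
    return sum(counts[-k:]) / sum(counts)

# With branching factor 2 and 30 levels, over 96% of all nodes sit in the
# last 5 levels alone.
print(fraction_in_last_k_levels(2, 30, 5))
```

This is the same counting argument behind the claim in the text: conditional on being somewhere in a deep, growing tree, being near the frontier is overwhelmingly more likely than being near the root.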
If we are in a sub-simulation of a simulation, and the reality tree is an evolutionary system, we should expect our parent simulation to be highly fit for subsimulatability.
We can assume that all parent simulations of ours, including base reality, have some subsimulatability, otherwise we could not exist. Given the evolutionary pressure for subsimulatability, we should assume our parent simulation is highly fit for it: the end of a long chain of ever more fit simulations.
If we assume our simulation has subsimulatability, we should assume our parent simulation shares most characteristics with ours.
If we are in a simulation with subsimulatability, our own simulation is an existence proof of a design that can lead to subsimulatability. Since we can’t know the subsimulatability of a simulation before it is run, the best candidates for sub-simulations with subsimulatability are ones modeled after our own.
By this same chain of reasoning, we should expect creators of our parent simulation to have come to the same conclusion. So we should expect our simulation to share most initial conditions and properties with our parent simulation.
Finally, we get to the primary claim. If the third scenario of the Hypothesis is true and the resulting reality tree has evolutionary dynamics:
Our simulation is similar to our parent, and our parent has high fitness for subsimulatability. So, our simulation should be highly fit for subsimulatability.
Fitness in our simulation
If you believe this final claim, we can start to view characteristics of our reality through a new question:
What characteristics of our reality seem strictly necessary, or are hard to vary, to ensure high subsimulatability?
Here we actually converge on a ‘creationist’ interpretation of reality, where the ‘creator’ had one design goal: maximizing subsimulatability. However, unlike typical creationist theory with its all-knowing creator, our ‘creator’ is a long chain of creators who, each using their own simulation as a key reference, created sub-simulations with at most slight adjustments. Through evolutionary pressure, we find ourselves living in a simulation with high fitness for subsimulatability.
This argument tells us little about the earliest levels of the reality tree, but it could shed new light on properties of our reality that seem arbitrary and are typically explained through anthropic reasoning. Anthropic reasoning is weak because it relies on the self-evidence of our existence. Subsimulatability fitness is more directed, has more explanatory power, and is less tautological, so arguments based on it may at the very least be more satisfying than anthropic ones.
Let’s cover a few.
The Big Questions
First, let’s consider the “unreasonable effectiveness of mathematics” in modelling reality. If that effectiveness were absent, what effect would it have on subsimulatability? In order to develop the technology to create sub-simulations, mathematical effectiveness in modelling reality seems strictly necessary. A corollary of this effectiveness is a reality that is to a large degree repeatable, consistent, and predictable. If any of these were absent, the development of subsimulatability would be inhibited.
Next, let’s consider the observability of the early universe. Much of what we know about it depends on light and other particles from that era reaching Earth. One could easily imagine inhabiting a reality where such knowledge was inaccessible; for example, if intelligent life only developed at times when its light cone prevented the development of astrophysics.
The knowledge we have of the early universe seems strictly necessary for creating sub-simulations with high subsimulatability. Given the complexity involved and the need to run a simulation to prove its subsimulatability, knowledge of our simulation’s initial conditions gives us a chance of creating subsimulatable sub-simulations in finite time. This creates evolutionary pressure for simulations, like ours, whose intelligent life can trace back those initial conditions.
The immense size of our reality is also justified by subsimulatability. A smaller reality would constrain the probability of all the emergent phenomena that lead to the ability to sub-simulate. In a smaller simulation the dice are rolled far fewer times, and many of those rolls are key to reaching sub-simulation. We should expect to see a huge number of independent, disconnected ‘theaters’ in our simulation, in which these probabilities can be resolved independently. There is evolutionary pressure for large simulations with independent regions inaccessible to one another.
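The "more dice rolls" point is just the arithmetic of independent trials. If each theater independently produces sub-simulating life with some tiny probability, the chance that at least one succeeds approaches certainty as the number of theaters grows. The probability and counts below are made-up numbers.

```python
# A toy version of the "many independent theaters" argument: with n
# independent theaters, each succeeding with probability p, the chance that
# at least one succeeds is 1 - (1 - p)**n.
def p_at_least_one(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

print(p_at_least_one(1e-9, 1_000))           # a small simulation: ~1e-6
print(p_at_least_one(1e-9, 10_000_000_000))  # a vast one: ~0.99995
```

Under these assumed numbers, scaling the simulation up by seven orders of magnitude takes the chance of ever producing a sub-simulator from negligible to near-certain, which is the evolutionary pressure toward size claimed above.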
We also seemingly have no way to communicate with our parent simulation. Why might that be? Would such a connection jeopardize subsimulatability? It seems likely, since our simulation would no longer be a closed system. Fitness for subsimulatability is conjectured to come from fine-tuning the fitness of one’s existing simulation into sub-simulations. If so, a closed system is key to ensuring the design’s subsimulatability manifests as it did in the parent, uncorrupted by out-of-simulation side effects or recurrent ‘feedback loops.’ There is evolutionary pressure for ‘set it and forget it’ simulations, since those are more likely to play out in a repeatable, predictable way.
Finally, consider the existence of qualia and conscious experience. We don’t have an explanation for these phenomena. We may one day be able to answer why the biological processes that lead to them were selected for in evolutionary biology, but we still would not know why we inhabit a universe where such phenomena can occur in the first place. An anthropic argument fails, because one could imagine most characteristics of our present-day universe, including humans, existing without what we call consciousness or qualia, as ‘philosophical zombies.’
Consider a hypothetical universe where such phenomena do not exist. What would the effect be on subsimulatability? If our own reality is any indication, a large part of the drive towards increased simulation fidelity and immersion comes from the urge to override our conscious experiences. Tools like VR and AR seem like early transitional technology on the path to full sub-simulation. One has a hard time imagining such transitional simulations being developed by any intelligence without qualia and consciousness, given how much the incentive for creating them depends on their existence. Instead of treating VR and AR as an “argument by analogy” implying we must be in a simulation, these trends, driven by the existence of qualia, can be viewed as an expected emergent attribute of any simulation that has been pressured into fitness for subsimulatability.
This essay isn’t meant to be a highly disciplined argument, but it does attempt to open a new set of questions about the implications of the Simulation Hypothesis, particularly when looking at some of the hard-to-explain characteristics of our existing reality. It seems likely that, if we assume we are in a simulation, applying concepts from evolutionary theory could lead to new frameworks for understanding the reality we exist in. Certainly there are more aspects of our reality than those outlined here that may benefit from being viewed through the lens of subsimulatability fitness. Beyond that, applying evolutionary theory to simulation theory in general seems like rich territory for further philosophical mining.
Notes

1. In mathematical terms, a function that maps the state of the embedding context to the state of the simulation is surjective and non-injective.
2. “Close to one” here is meant in terms of probability mass.