The Anthropic Problem
One of the biggest meta-problems in cosmology over the past decade or so has been the so-called Anthropic Problem, which asks: Why does the universe have the properties that it does? The standard answer to this question has traditionally been (approximately): because if it didn’t have the properties it does, we wouldn’t exist to ask the question. Another answer can be found in the concept of naturalness, which proposes a weaker form of the same answer.
The Entropic Principle
There is, however, another way to look at the Anthropic Problem. Raphael Bousso and his colleagues have articulated the Entropic Principle, which holds that universes that create the maximum amount of entropy over their lifetimes (clipping out certain inconvenient portions of the universe, such as black holes, which are causally disconnected from the rest of the universe) tend to reproduce certain critical values that we observe in our own universe, such as the Cosmological Constant. Huh? That is a pretty amazing result, and it makes an immediate connection between maximizing causal entropy (i.e., keeping options open) and intelligence. A further discussion of the Entropic Principle and its implications for intelligent observers can be found in Wissner-Gross et al.’s perhaps controversial discussion of Causal Entropic Forces. Wissner-Gross’ approach has gained traction over the past few years in the Artificial General Intelligence (AGI) community as a candidate for the physical basis of AGI.
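For the curious, the core formula of the Causal Entropic Forces paper (Wissner-Gross and Freer) can be sketched roughly as follows, where the notation here paraphrases the paper rather than reproducing it exactly:

```latex
% A causal entropic force drives a system toward states from which
% the greatest diversity of future paths remains causally accessible:
F(\mathbf{X}_0) \;=\; T_c \,\nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau)\big|_{\mathbf{X}_0}
% where S_c(X, \tau) is the entropy of the distribution of possible
% paths through configuration space over a time horizon \tau, and
% T_c is a "causal path temperature" setting the force's strength.
```

In words: the system feels a force pointing up the gradient of future-path entropy, i.e., toward keeping the most options open.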
Game Playing and Keeping Your Options Open
Another example of keeping options open surrounds AI game playing. Note that it took years before a chess-playing program could defeat a human master, when Deep Blue defeated Garry Kasparov back in 1997. After Kasparov was defeated, the next “Grand Challenge” for AI seemed to be the game of Go, since Go has a vastly larger search space (chess has on the order of 10^120 possible games, while Go has on the order of 10^360). Somewhat surprisingly, machine Go players such as MoGo are already competitive with humans. How did MoGo and its successors accomplish this feat? Perhaps paradoxically, these Go players use a very simple algorithm known as Monte Carlo Tree Search (MCTS).
Basically, MCTS is a form of sparse tree sampling: from the current position it considers each legal next move, plays the game out to completion with both sides making random moves, and then chooses the move whose random playouts win most often. To a first order of approximation MCTS is nothing more complex than this, and almost all MCTS optimizations are about pruning the search space. In summary, what appeared to be something of a Grand Challenge for AI was conquered by an incredibly simple algorithm. But why?
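The "play it out randomly, pick what wins" idea above can be sketched in a few lines of Python. This is a minimal flat Monte Carlo move chooser (the first-order approximation of MCTS described above, without the tree-building refinements) applied to a toy game of Nim, which is an illustrative stand-in, not anything MoGo actually uses:

```python
import random

# Toy game: Nim, normal play -- the player who takes the last stone wins.
# A state is a tuple of pile sizes; a move is (pile_index, stones_taken).

def legal_moves(piles):
    return [(i, k) for i, p in enumerate(piles) for k in range(1, p + 1)]

def apply_move(piles, move):
    i, k = move
    piles = list(piles)
    piles[i] -= k
    return tuple(piles)

def random_playout(piles, player):
    # Both sides play uniformly random moves; return the winner (0 or 1).
    while any(piles):
        piles = apply_move(piles, random.choice(legal_moves(piles)))
        if not any(piles):
            return player  # this player took the last stone and wins
        player = 1 - player
    return 1 - player  # empty position: the previous mover already won

def flat_monte_carlo(piles, player, playouts_per_move=200):
    # For each legal move, estimate its win rate by random playouts,
    # then choose the move with the highest estimated win rate.
    best_move, best_rate = None, -1.0
    for move in legal_moves(piles):
        nxt = apply_move(piles, move)
        if not any(nxt):
            return move  # taking the last stone wins immediately
        wins = sum(random_playout(nxt, 1 - player) == player
                   for _ in range(playouts_per_move))
        rate = wins / playouts_per_move
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```

For example, from piles (2, 1) the winning strategy is to take one stone from the first pile, leaving the opponent the losing (1, 1) position, and `flat_monte_carlo((2, 1), 0)` reliably discovers this purely from random playouts.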
So what are we to make of this apparent contradiction? That is, what is the hint here? Well, MCTS optimizes the number of future choices (paths) that are available to the player, i.e., it is maximizing causal entropy. According to Wissner-Gross, this is evidence of some deep and profound connection between causal entropy (keeping the future open) and intelligence.
So we are getting hints from AGI and cosmology (among other disciplines) that seem to imply that there is some sort of deep connection between causal entropy production and intelligent observer concentrations in a universe. Or, in Wissner-Gross’ vernacular: intelligence acts to maximize its future freedom of action.
Ok, so what is the takeaway? Quite simply: keep your options open.