Uncertainty management is hardwired into the predictive processing of the brain. According to a rapidly emerging consensus in cognitive science, the brain is not a passive receiver of sensory input but an **active inference machine**, continuously generating hypotheses and testing them against sensory data. Karl Friston, a neuroscientist at University College London, describes it as a ***fantastic organ***—from the Greek *phantastikos*, meaning the ability to create mental images. In essence, the brain constructs an internal model of the world, constantly refining its predictions through recursive feedback loops that minimise surprise by aligning expectations with reality.[^1]

This is worth a double-take—it means that what we see, smell, hear, taste, and feel is not a direct representation of the world, but an internal model our brain actively maintains, using sensory inputs to validate it in a constant feedback loop.

Friston’s framework suggests that these feedback loops include a mechanism called *gain control*—a weighting system that determines ***how confident a prediction is***. A low weight signifies a tentative guess—an uncertain hunch—while a high weight reflects a near-certain expectation. This process of weighting predictions is, in itself, a form of **meta-prediction**—a prediction about predictions. In this sense, our brain does not just generate expectations; it also assigns confidence levels to them, effectively **encoding uncertainty**. This makes uncertainty not a flaw but a fundamental feature of cognition—the meta-ingredient that enables thought itself.

A complementary theory comes from engineer and neuroscientist Jeff Hawkins. He proposes that the neocortex consists of numerous ***cortical columns***—clusters of around 100,000 neurons—each of which independently constructs models of objects using different sensory inputs.
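As a rough sketch, gain control can be pictured as precision weighting: when a prediction meets conflicting evidence, the updated belief sits between the two, pulled toward whichever carries the higher confidence weight. The toy below is illustrative only—the function, names, and numbers are my invention, not Friston's actual model.

```python
def fuse(prediction, prediction_weight, evidence, evidence_weight):
    """Precision-weighted update: the new estimate is a weighted
    average of the prediction and the incoming evidence."""
    total = prediction_weight + evidence_weight
    return (prediction_weight * prediction + evidence_weight * evidence) / total

# A near-certain expectation (high weight) barely moves when the
# senses disagree with it...
confident = fuse(prediction=20.0, prediction_weight=9.0,
                 evidence=30.0, evidence_weight=1.0)

# ...while a tentative hunch (low weight) is revised heavily.
tentative = fuse(prediction=20.0, prediction_weight=1.0,
                 evidence=30.0, evidence_weight=9.0)

print(confident)  # 21.0
print(tentative)  # 29.0
```

The weight here plays the role of the confidence level the text describes: it decides how much a surprise is allowed to change the model.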
For example, one cortical column may process tactile information from a fingertip, while another processes visual data from a specific patch of the retina. Each column operates autonomously, yet interacts with others to create a unified perception of the world. In his book *A Thousand Brains: A New Theory of Intelligence*, Hawkins describes how these columns communicate, compare models, and integrate conflicting information through a “voting” mechanism. When discrepancies arise between different columns, the brain reconciles these variations to form a stable experience of reality.

This underscores that much of neocortical activity revolves around **predictive modelling**. When we walk around a house, we assume it has a back side. When we lift a familiar coffee cup, we expect it to have a certain weight, temperature, and texture. We only notice the predictive nature of this process when it fails—like expecting to lift a heavy suitcase only to find it empty and light, or expecting another step on a staircase when there isn't one. Because our perception is so heavily prediction-driven, uncertainty management is embedded directly into our neurological architecture. Each prediction carries a confidence weight—what Friston calls *gain control*—determining the certainty of an expectation.

Predictive processing is not unique to humans or even to mammals. While birds and reptiles lack a neocortex, they possess brain structures with comparable functions. This suggests that **uncertainty management** has deep evolutionary roots. The ability to construct increasingly sophisticated inference models correlates with the evolution of higher-order cognition. From amphibians to reptiles, cephalopods, birds, and mammals, the refinement of inference mechanisms has translated into greater evolutionary fitness.
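Hawkins's "voting" mechanism, described above, can be caricatured in a few lines: each column proposes a model of the object it is sensing, and the consensus becomes the percept. This is a deliberately crude sketch—the tallying scheme and data are invented for illustration, not taken from the book.

```python
from collections import Counter

def vote(column_guesses):
    """Each cortical column proposes a model of the object it senses;
    the most common proposal wins, reconciling discrepancies into
    one stable percept. Returns the winner and the level of agreement."""
    tally = Counter(column_guesses)
    winner, count = tally.most_common(1)[0]
    return winner, count / len(column_guesses)

# One column misreads its patch of input, but the consensus holds
# and perception stays stable.
guesses = ["coffee cup", "coffee cup", "glass", "coffee cup"]
percept, agreement = vote(guesses)
print(percept, agreement)  # coffee cup 0.75
```

The point of the caricature is that no single column has to be right: stability comes from reconciliation across many semi-independent models.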
In other words, at least since the Cambrian Explosion,[^2] the evolution of complex organisms has been driven by the increasing sophistication of **meta-predictive processes**—the neural machinery that encodes and manages uncertainty.

This is often misunderstood to mean that the brain's primary function is **uncertainty reduction**—the avoidance of surprise, novelty, and all things unpredictable. However, even the most cursory understanding of human behaviour should invalidate this simplified view. People climb mountains, sail across oceans, base-jump from skyscrapers, and brave the social danger of putting pineapple on their pizzas. How is this to be reconciled with the view of the brain as an uncertainty reduction machine?

This conundrum is perfectly illustrated in the **Dark Room Problem**: if our brain has evolved to minimise prediction error, then a person should be in an optimal state when sitting in a dark room devoid of any stimulus. No stimulus = no surprise, right? I think most of us can agree that solitary confinement in the dark does not lead to peak happiness. So what's wrong here?

The answer lies in the fact that prediction error means the **discrepancy between our model of the world and the reality that is encountered**. We do not seek the reduction of sensory stimulation, but the perfect fit between the stimulus predicted and the stimulus received! If, like me, you have evolved in a rich, stimulating environment full of variability and flux, then this forms the predictive model that expects more of the same. A dark room devoid of stimulus would be a huge departure from my predictive model, and would therefore cause surprise and distress. If, however, you are a spider that evolved in a cave, a small, dark crevice is your comfort zone, from which you emerge only for the occasional night-time snack.

The human brain is, at its core, an **epistemic machine**. It constantly constructs models of the world, fine-tuning them to anticipate what comes next.
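The spider-versus-human intuition behind the Dark Room Problem can be put in toy numbers: surprise tracks the gap between the stimulation a model expects and the stimulation it receives, not the raw amount of stimulation. All values below are invented for illustration.

```python
def surprise(expected_stimulation, actual_stimulation):
    """Prediction error as the mismatch between model and world,
    not the magnitude of the stimulus itself."""
    return abs(expected_stimulation - actual_stimulation)

DARK_ROOM = 0.0   # no stimulation at all
RICH_WORLD = 0.8  # a busy, variable environment

human_model = 0.8   # a model shaped by a rich, stimulating world
spider_model = 0.1  # a model shaped by a small, dark crevice

print(surprise(human_model, DARK_ROOM))   # 0.8 -> distress
print(surprise(spider_model, DARK_ROOM))  # 0.1 -> comfort
print(surprise(human_model, RICH_WORLD))  # 0.0 -> comfort
```

The same dark room yields high prediction error for one model and almost none for the other—which is why the dark room is no paradise for us, and no problem for the spider.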
This relentless drive for predictability is what allows us to dodge traffic, hold conversations, and open doors without an existential crisis. There is a deep comfort in knowing what to expect—familiarity offers cognitive efficiency, reducing the mental effort needed to navigate daily life. Predictable environments allow us to conserve energy, both physically and mentally, which is why routine feels so reassuring. But here’s the paradox: if the world were _entirely_ predictable, it would be intolerable. A life of pure certainty is, to the human mind, indistinguishable from stagnation.

Enter our equally powerful drive for novelty. The same brain that craves predictability also seeks out the unknown, particularly in situations that might test our survival skills. This is not mere curiosity—it is a fundamental mechanism for improving our predictive models. When we expose ourselves to unfamiliar or challenging scenarios, the brain scrambles to make sense of them, updating its internal frameworks to prepare for future encounters. This is why children (and, let’s be honest, some adults) touch hot stoves despite being warned, and why we climb mountains, watch horror films, or engage in risky business ventures. Experiencing the unpredictable in controlled doses strengthens our ability to navigate true uncertainty when it inevitably arises.

In essence, humans exist in a **dynamic tension between security and exploration**. We seek stability so that we may function smoothly, but we also push into the unknown to refine our ability to predict—and thus survive—uncertain or dangerous situations. The trick is balancing these impulses: too much certainty leads to dullness, too much unpredictability leads to chaos. The brain, ever pragmatic, seems to know this instinctively, which is why we alternate between clinging to routine and leaping into the unknown like caffeinated philosophers.

[[Constructed Reality|Next page]]

<hr>

[^1]: Kanai R, Komura Y, Shipp S, Friston K. 2015. Cerebral hierarchies: predictive processing, precision and the pulvinar. Phil. Trans. R. Soc. B 370: 20140169. http://dx.doi.org/10.1098/rstb.2014.0169
[^2]: [Cambrian explosion](https://en.wikipedia.org/wiki/Cambrian_explosion)