Humans are intuitive realists—we like to think that reality is just out there, patiently waiting for us to discover it, like a lost sock under the sofa. But the uncomfortable truth is that our knowledge isn’t directly plucked from the universe; it’s always [[Epistemic Machines|actively constructed]] by our brains. And because the brain isn’t exactly running on infinite processing power, these constructions are necessarily simplified, coarse-grained approximations of reality. They’re good enough to keep us alive and build smartphones, but they’re not exactly the full-HD, surround-sound, IMAX version of the world.
This perspective is neatly encapsulated by **Constructivism**, a philosophical stance suggesting that we don’t passively *receive* knowledge, but actively generate it. Think of reality as less like a perfectly filmed documentary and more like a messy, collaborative art project—one that each observer shapes differently based on prior experiences, perceptions, and biases. The physicist Stephen Hawking even advanced the related idea of [model-dependent realism](https://en.wikipedia.org/wiki/Model-dependent_realism), arguing that our understanding of reality depends entirely on the models we create. If two very different models—say, classical physics and quantum mechanics—both adequately predict observations, then asking which model is the *real* one is as futile as arguing about whether Batman or Superman is more realistic. (It’s obviously Batman, but you get the point.)
In contrast, **Realism** holds that there is an objective, observer-independent reality that we access directly through observation. According to realists, reality is like a book in a library waiting to be read; science just flips through the pages. The catch, however, is that even realists must admit our observations are limited, biased, and often downright misleading. Optical illusions alone prove the brain loves shortcuts—so even if reality is sitting out there, it’s always filtered through our imperfect senses and cognitive biases. Realism thus makes a tempting promise of certainty, but stumbles into uncertainty at every turn.
**Science**, meanwhile, doesn’t get bogged down debating whether there’s a perfect Platonic *reality* behind the curtain—it cheerfully accepts that uncertainty is baked into the process. Instead of promising truth, science offers progressively less terrible models of reality. These models evolve iteratively: experiments are conducted, theories proposed, predictions compared against observations, models updated, and the cycle repeats. It’s less a linear march towards absolute truth and more a steady process of error-correction and incremental improvement—a bit like software updates that make your laptop slightly less buggy over time (but still never entirely glitch-free).
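To make that loop concrete, here’s a deliberately toy sketch in Python (the coin, the prior, and all the numbers are invented purely for illustration): a little Bayesian model that is never shown the truth, only fed observations, and whose estimate becomes progressively less terrible with each update.

```python
import random

# Toy illustration of iterative error-correction: estimate a hidden
# parameter (a coin's bias) by observing flips and updating a Beta-
# distributed belief. The model is never told the answer; its estimate
# just gets progressively less wrong as evidence accumulates.

random.seed(42)
TRUE_HEADS_PROB = 0.7    # the hidden "reality" being approximated

alpha, beta = 1.0, 1.0   # Beta(1, 1) prior: maximal initial uncertainty

for flips in range(1, 1001):
    heads = random.random() < TRUE_HEADS_PROB   # run an "experiment"
    alpha += heads                              # update belief on heads...
    beta += not heads                           # ...or on tails
    if flips in (1, 10, 100, 1000):
        estimate = alpha / (alpha + beta)       # posterior mean
        print(f"after {flips:4d} flips: estimated bias = {estimate:.3f}")
```

Even after a thousand flips the model holds a belief, not a fact: the posterior narrows around the true value but never collapses to a point, which is about as honest a picture of scientific knowledge as a dozen lines of code can manage.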
History bears this out clearly. Newtonian mechanics ruled physics for centuries, accurately predicting planetary orbits, tides, and apples falling on heads. But then Einstein came along with relativity, and Newton’s *absolute* space and time suddenly looked quaint—still useful for everyday problems, but fundamentally incomplete. Quantum mechanics arrived to further upset the apple cart (sorry, Newton), introducing uncertainty at a fundamental level, effectively telling us we can’t know everything with perfect precision, even in principle.
So, science never actually hands us absolute truth. Instead, it provides ever-better **approximations of reality**—each one reducing uncertainty a little further, while always reminding us to stay humble. Our knowledge, no matter how advanced, remains fundamentally tentative and probabilistic, ensuring job security for philosophers, scientists, and anyone else brave enough to wrestle with uncertainty.
Every organism, from a bacterium floating lazily in a pond to a philosopher pondering existence over espresso, relies on [[Life as Biological Information Processing|internal models]] of reality to survive. Life isn’t passive—it’s a continuous guessing game, a constant dance of anticipation, prediction, and adjustment. Without some form of internal predictive model, even the simplest life forms wouldn’t last long. Plants predict sunlight direction, birds anticipate migration cues, and humans predict everything from weather patterns to the likely outcomes of Tinder dates. These models don’t reflect absolute reality; they’re survival strategies, essentially educated guesses designed to keep organisms alive and, ideally, reproducing.
Yet these internal models are inherently **constrained**. Our perception is limited—we don’t see ultraviolet or infrared without help, we miss sounds that bats easily navigate by, and our cognitive resources are finite (some more than others, as your last online argument surely confirmed). Evolution doesn’t care if our perceptions are perfectly accurate; it only cares that they’re good enough to get by. This makes our models both limited and efficient, trading off accuracy for speed, simplicity, and energy conservation.
This fundamental trade-off is neatly captured by the concept of **Bias-Variance Decomposition** from machine learning, a fancy-sounding term describing the balance between two opposing sources of error: bias—the systematic error introduced by the simplifying assumptions that let our models run efficiently—and variance—the error that comes from a model being overly sensitive to the quirks of the particular data it happens to have seen. Too much bias, and our models become simplistic stereotypes, like assuming every rustling bush hides a predator. Too much variance, and our models become overly detailed and brittle, mistaking every noise for something unique and significant. Survival means striking a functional balance—neither too vague nor too specific—a balance evolution has been fine-tuning for millions of years.
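For readers who want the mathematics behind the name, here it is (a standard result, stated under the usual assumption that observations take the form $y = f(x) + \varepsilon$ with zero-mean noise of variance $\sigma^2$): the expected squared error of a learned model $\hat{f}$ splits into exactly the two tendencies above, plus a remainder that no model, however clever, can remove.

$$
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
= \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
+ \underbrace{\sigma^2}_{\text{irreducible noise}}
$$

That last term is the whole argument in miniature: even a perfect model is left holding $\sigma^2$, because some uncertainty isn’t a modelling failure but a feature of the territory itself.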
Thus, knowledge isn’t some pristine external truth waiting patiently to be uncovered; it’s an **adaptive construct**, continuously shaped by inference and trial-and-error. Every model we build is a heuristic, a handy mental shortcut aimed at improving our odds of success rather than revealing absolute cosmic certainty. Just as there is no perfect map (because it would be the exact size of the territory itself, and thus pretty useless for a hike), there’s no perfect internal model. Instead, evolution ensures our heuristics evolve toward increasing usefulness, not absolute accuracy. In short, the human brain—and indeed every living organism—is less interested in absolute truth than in staying alive long enough to enjoy dinner tonight and possibly breakfast tomorrow. Our knowledge, then, is both deeply impressive and delightfully imperfect, a rough sketch continuously revised in response to life’s shifting challenges.
Uncertainty isn’t just an occasional nuisance; it’s baked into the very fabric of our existence. It emerges from the unavoidable gap between our inferential models—the mental blueprints we use to navigate reality—and the mind-boggling complexity of the world itself. Our models are, by necessity, simplifications, and this leaves room for uncertainty to creep in through multiple doors. Some of these uncertainties are due to gaps in knowledge, others stem from randomness itself, and some exist because the universe simply refuses to be pinned down like a well-behaved equation.
**Constructivism**, unlike many other philosophical standpoints, doesn’t try to sweep uncertainty under the rug or nervously pretend it doesn’t exist. Instead, it boldly invites uncertainty to the party and hands it a drink. Rather than chasing after some elusive fixed truth, constructivists see knowledge as an ongoing, fluid process of model refinement—think of it as continuously upgrading your mental operating system, patching it regularly to handle the complexities of reality a little better each time.
From this viewpoint, knowledge isn’t a stable fortress but a perpetual building site, scaffolding everywhere and workers constantly shouting at each other over the noise. Embracing uncertainty, rather than attempting to banish it, helps us remain intellectually agile, humble, and open to revision. In this way, uncertainty becomes not our enemy, but a valuable companion, nudging us to question, rethink, and improve our models of the world.
A maxim often attributed to Charles Darwin (though it is really a later paraphrase of his ideas) holds that it’s not the strongest or even the most intelligent species that survive, but those **most adaptable to change**. The same wisdom applies to our knowledge: intellectual adaptability, fostered by acknowledging and embracing uncertainty, is the cornerstone of long-term success. It prevents us from getting intellectually fossilised—forever trapped in yesterday’s beliefs—allowing us instead to remain flexible and responsive to reality’s relentless surprises.
Ultimately, the more openly and honestly we understand uncertainty, the better equipped we become to actively construct our reality. In recognising that our knowledge is necessarily incomplete and ever-evolving, we build better inferential models that, paradoxically, bring us closer to reality—even if it’s always just out of reach. After all, life’s uncertainty might be frustrating at times, but it does at least guarantee we won’t ever run out of puzzles to solve.
[[Types of Uncertainty|Next page]]