Uncertainty has long been carved up into two main types: **_epistemic_**, the kind that stems from ignorance and can in principle be reduced through knowledge; and **_aleatoric_**, the kind that arises from randomness — a polite nod to the fact that the universe doesn’t always play by our rules. This two-part taxonomy has its uses, particularly if you’re building bridges, underwriting insurance, or dabbling in risk management while wearing a tie. But when it comes to making sense of the broader, messier entanglements of the real world — from complex systems to moral action to existential dread — the traditional dichotomy starts to wobble. It’s time, I argue, for a richer framework: one that distinguishes between _epistemic_, _ontological_, and _agential_ uncertainty.

**Epistemic uncertainty** is what we don’t know but might learn — a gap in our models or knowledge. **Ontological uncertainty**, by contrast, covers the instability of the world itself: the randomness, emergence, chaos, undecidability, and fundamental unknowability baked into complex or non-linear systems. It’s not just that we haven’t modelled it — it’s that it may not _be_ model-able. And then there’s the most unnerving of the trio: **agential uncertainty**, which arises when knowledge exists, but action is blocked — structurally, systemically, or existentially. It’s the grim realisation that you can see the iceberg, you know the ship is going down, and yet the helm won’t turn.

Together, these three types sketch a more realistic — and let’s be honest, more unsettling — map of how humans navigate a world that often refuses to cooperate.

## Epistemic Uncertainty: The Known Unknowns and the Unknown Knowns

When we say we “know” something, what we often mean is that we can do two things with it: first, we can **explain it** — usually with equations, diagrams, or impressively technical jargon that makes us sound clever at conferences. And second, we can **predict it** — reliably state what it will do next. This is the domain of Newtonian billiards and well-behaved spreadsheets. It’s a comforting place to live.

But the moment we step outside this bubble, we encounter **_epistemic uncertainty_** — the recognition that, actually, we don’t know what we thought we knew. Or worse, that we knew the wrong thing entirely.

Traditionally, epistemic uncertainty is attributed to simple **ignorance**: gaps in our data, imprecise measurements, or an unfortunate shortage of PhDs. This kind of uncertainty is, at least in principle, fixable — throw enough research funding at it, and it should retreat. But reality is rarely so accommodating. More insidious than ignorance is **model error**: the quiet catastrophe of using the wrong frame to understand a phenomenon. We might have plenty of data, pristine mathematics, and superb computing power — but if we’ve misjudged what kind of thing we’re dealing with, our elegant models will march us straight off a cliff with terrifying confidence. Worse still is the subtle arrogance baked into modelling itself: the assumption that the phenomenon in question _can_ be faithfully modelled at all.

This is what makes epistemic uncertainty such a fascinating trap: it often **_disguises itself as certainty_**. We cling to our explanations because they seem to work — until they don’t. The 2008 financial crisis, for example, was not caused by a lack of data, but by a false sense of epistemic control.
Sophisticated models of risk and credit were applied to systems they did not understand, and the results were predictably disastrous — except, of course, no one predicted them. Not because they couldn’t, but because they trusted the wrong picture of the world.

And yet, we can’t do without models. The point is not to renounce modelling, but to **treat it with humility** — to remember that any model is a lens, not a mirror. Epistemic uncertainty is a reminder that knowledge is always partial, always situated, and always shadowed by the possibility of deeper misunderstanding. It’s not a bug in our thinking; it’s a feature of the kind of creatures we are.

So when we speak of epistemic uncertainty, we are not merely talking about missing pieces of a puzzle. We are talking about the nature of the puzzle itself — whether it _can_ be solved, whether we’re even holding the right pieces, and whether the image we think we’re assembling is a real landscape or just a comforting illusion. In short: knowing that we don’t know is only the beginning. The real challenge is learning how to act when the very structure of knowing is in question.

## Ontological Uncertainty: When the World Won’t Sit Still

If epistemic uncertainty is about not knowing, **ontological uncertainty** is about the possibility that the world itself might be **unknowable** — not because we haven’t yet figured it out, but because some parts of reality are inherently unstable, indeterminate, or radically non-computable. It’s a tougher pill to swallow. Epistemic humility is fashionable these days, but **ontological humility** requires something more disturbing: the acceptance that there are limits not just to what _we know_, but to what _can be known_, even in principle.

At its gentlest, ontological uncertainty shows up as **randomness** — the simple kind, like coin tosses and radioactive decay. But even here, if we’re honest, we’re already outside the tidy bounds of determinism. Roll a die or predict the exact trajectory of a raindrop — there’s noise built into the system. Some things are simply probabilistic at their core, and no amount of extra data will change that. This is the uncertainty gamblers, weather forecasters, and quantum physicists lose sleep over.

And things get weirder quickly. Take **chaos**: systems that are deterministic but wildly sensitive to initial conditions. The classic example is the weather, where a butterfly flapping its wings in Porto might — given the right conditions — eventually influence a typhoon in Tokyo. In chaotic systems, it’s not that we can’t predict; it’s that tiny errors in measurement explode so quickly that prediction becomes meaningless. This is uncertainty not from ignorance, but from the dynamical nature of the system itself. You can write the equations — they just won’t help you in time.

Dig a little deeper still, and you hit the bedrock: **undecidability**. In computer science, Alan Turing showed that there are some problems — like the [halting problem](https://www.quantamagazine.org/next-level-chaos-traces-the-true-limit-of-predictability-20250307/) — for which no general solution exists. You can’t write a program that tells you, in all cases, whether another program will eventually stop running or loop forever. This isn’t a limitation of technology or ingenuity; it’s a fundamental feature of computation. What the halting problem reveals is profound: even within rigorously logical systems, **some truths are inaccessible**. If you think that sounds like a metaphor for life, you’re not wrong.

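
Both of those last claims are concrete enough to sketch in code. First, chaos. Below is a minimal simulation in plain Python (the logistic map and its starting values are my own illustrative choices, not drawn from any particular forecasting model) of how a measurement error of one part in ten billion devours a fully deterministic forecast:

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r * x * (1 - x) at r = 4, a standard chaotic regime.
# The dynamics are fully deterministic; only the starting point is uncertain.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10  # two states that agree to ten decimal places
for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  gap = {abs(x - y):.1e}")

# The gap roughly doubles every step, so by around step 35 it is of
# order 1: the two "forecasts" no longer agree on anything at all.
```

The equation is exact and almost embarrassingly simple, yet the trajectories part company within a few dozen steps. Nothing about the system is unknown; prediction fails anyway.
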
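
Second, undecidability. What follows is a hedged sketch of Turing’s diagonal argument, not working code: `halts` stands in for a hypothetical oracle, and the whole point is that no real implementation of it can exist.

```python
def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually stops.
    Turing proved that no such general-purpose function can be written."""
    raise NotImplementedError

def paradox(program):
    # Ask the oracle about a program applied to itself...
    if halts(program, program):
        while True:  # ...and do the opposite of its verdict: loop forever
            pass
    # ...or, if the oracle says "loops forever", halt immediately.

# Now consider paradox(paradox). If halts(paradox, paradox) returned True,
# paradox would loop forever; if it returned False, paradox would halt at
# once. Either answer refutes the oracle, so the oracle cannot exist.
```

No amount of extra ingenuity rescues the oracle; the contradiction is structural, which is why undecidability belongs under ontology rather than ignorance.
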
The future is littered with problems of this nature, where no amount of knowledge or computational power can determine the outcome. It’s the ultimate cosmic joke: even in a world where everything _seems_ deterministic, some things remain forever undecidable.

In this sense, ontological uncertainty is not a failure of knowledge, but a **feature of being**. Some systems evolve in ways that cannot be reduced to prediction or compressed into simple laws. They are **open-ended**, **emergent**, and sometimes downright mischievous. Ecosystems, minds, economies — these are not puzzles to be solved but phenomena to be **inhabited cautiously**, with an ear to the ground and a willingness to be surprised. Trying to grasp them completely is a bit like trying to own the ocean. You might get your hands wet, but you won’t get far with the deed.

This makes ontological uncertainty not just intellectually troubling, but ethically and politically challenging. It complicates our desire for control, for planning, for mastery. It suggests that no matter how clever we become, we will always be dancing on a shifting stage. Our models, our forecasts, our plans — they are not wrong, necessarily. They are just **partial stories about an unfolding world that doesn’t owe us clarity**.

## Agential Uncertainty: Knowing, But Not Acting

The Cumaean Sibyl, a Greek priestess at the temple of Apollo in Cumae, knew. That was her curse. She had been granted immortality by Apollo, but not eternal youth — a rookie mistake in the wish-granting business. As the centuries wore her down, she shrivelled into something like a sentient raisin, kept in a jar for the amusement of children. And when they asked her, “Sibyl, what do you want?” she gave perhaps the most honest answer in literature: _“I want to die.”_

This wasn’t ignorance. It wasn’t randomness. It was the deep, aching horror of **agential uncertainty** — the kind that arises when one sees clearly, but cannot act. Agential uncertainty is what happens when epistemic and ontological uncertainty have been, at least partially, conquered. You know the storm is coming. You even understand why. But for structural, political, systemic, or psychological reasons, you can’t turn the ship.

It is a peculiar kind of **paralysis**, more maddening than ignorance, because it wears the mask of insight. It’s the climate scientist watching CO₂ rise while emissions reports are filed in triplicate and ignored. It’s the social reformer trapped in a bureaucracy that digests every good idea and excretes policy briefs. It’s Cassandra, it’s Kafka, it’s the boardroom PowerPoint slide that proves collapse is imminent — followed by croissants and coffee.

What makes agential uncertainty distinct is that it is not about what the system _is_, but about what the system **allows you to do**. You may be a well-informed agent with a clear view of the landscape, but you are caught in feedback loops, power asymmetries, social inertia, or logistical dead-ends that render your agency — in practical terms — null. The tragedy isn’t that you’re blind. It’s that you’re **trapped**. Your wings are pinned, even as you’re handed a weather forecast and told to “innovate.”

In modern systems, this uncertainty is becoming more pervasive, not less. Globalised complexity means consequences ripple faster than our institutional mechanisms can respond. Technologies outpace regulation, finance outpaces morality, and governance often becomes an exercise in simulated control.
We build models and dashboards and think-tanks, and yet — like the Sibyl — we’re reduced to whispering answers into jars while the crowd shuffles past.

Agential uncertainty forces us to confront an uncomfortable question: _Is knowledge enough?_ For centuries, the Enlightenment story told us that with enough data, enough reason, enough light, we could steer the world. But now we face problems not of knowing, but of **doing** — and of being systemically prevented from doing what we already know is necessary. This is not a bug in the machine. **It is the machine**.

So where epistemic uncertainty asks what we don’t know, and ontological uncertainty tells us what we can never know, **agential uncertainty demands we face what we cannot do — even when we know exactly what must be done**. It is the Sibyl’s whisper in the age of climate dashboards and AI governance committees: _“I see the path, but my hands are tied.”_

[[Body of Knowledge|Next page]]