The process of Darwinian natural selection inevitably leads to the encoding of information within living organisms, optimising their *fitted-ness* to an ecological niche. This embodied knowledge serves the purpose of *drawing conclusions*—making decisions based on *evidence*, *reasoning*, or *prior knowledge*. This is the definition of **inference**. The ability to form **generalisations** from limited observations is a distinct cognitive skill. The surprising point here is that inference has always been regarded as an ability exclusive to humans, or at the very least to creatures with highly evolved brains. I submit that it is a competency all life possesses—from single cells to civilisations!

You may be familiar with the three established forms of inference—**induction**, **deduction**, and **abduction**. To these three I would like to add a fourth: **adduction**, the most fundamental of them all.

## ==Adduction==

At the cellular level, life operates as a self-organising, cybernetic system, processing and storing information through layered networks of competencies—DNA, RNA, proteins, lipids, signalling pathways, and so on. These components function as an internal model of the environment, encoding statistical patterns of past conditions. When organisms act based on this encoded information, they engage in **embodied inference**.

I coined the term "adduction", borrowing from the Latin *adducere* (“to lead toward”), to highlight decentralised cognition in the service of evolutionary fitness. This form of inference occurs at the microscopic level, using cellular machinery to embed information and make decisions based on logic gates—molecules that perform actions only when specific conditions are met—cognitive processes without a brain! When an *E. coli* bacterium swims toward a higher concentration of sugar, or when a plant orients its leaves toward the sun, these behaviours are driven by inferences about evolutionary fitness.

Adduction is distinct from the other forms of inference because it is embodied, decentralised, and does not require a brain. You can read more about it in my [paper](https://zenodo.org/records/14753203) on this topic.
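To make the idea of molecular logic gates a little more concrete, here is a minimal Python sketch of the *E. coli* example above. It is a toy illustration under simplifying assumptions, not a model of real chemotaxis biochemistry; the function name, the sugar readings, and the decision rule are mine, chosen purely for illustration.

```python
def run_or_tumble(previous_sugar: float, current_sugar: float) -> str:
    """A toy molecular 'logic gate': keep swimming if the sugar gradient
    is improving, otherwise tumble and try a new direction.

    This caricatures E. coli run-and-tumble chemotaxis; the rule and the
    names are illustrative only, not a biochemical model.
    """
    if current_sugar > previous_sugar:  # condition satisfied -> keep running
        return "run"
    return "tumble"                     # condition not satisfied -> re-orient

# A short series of simulated sugar readings along the cell's path.
readings = [0.10, 0.12, 0.15, 0.14, 0.20]
for prev, curr in zip(readings, readings[1:]):
    print(f"{prev:.2f} -> {curr:.2f}: {run_or_tumble(prev, curr)}")
```

The point is simply that a conditional action of this kind ("if the gradient improves, keep going") is an inference carried out by molecular machinery rather than by a brain.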
## ==Abduction==

The next evolutionary level of inference is abduction. Originally introduced by the American philosopher Charles Sanders Peirce, abduction is a form of inference that starts with an observation and seeks the most plausible explanation. It doesn’t guarantee correctness, but it generates hypotheses that can be tested. This is the logic of detectives and diagnosticians—Sherlock Holmes, for example, was a master of abductive reasoning.

Animals rely on abduction when making educated guesses based on incomplete information. A prey animal that hears rustling in the bushes may infer that a predator is lurking, even without direct evidence. Bees infer which flowers contain nectar based on colour and scent, even if they’ve never encountered that specific species before. The fact that animals can perform abductive inference highlights that hypothesis generation has deep evolutionary roots.

## ==Deduction==

Aristotle, in *Prior Analytics*, described deduction as a form of reasoning that follows logically from general premises. He called it *sullogismos*: a discourse in which, if certain premises are true, a necessary conclusion follows. For example:

- All dogs are mammals.
- My pet is a dog.
- Therefore, my pet is a mammal.

Deduction is the foundation of formal logic—it moves from general principles to specific conclusions with absolute certainty (assuming the premises are valid). Unlike the other forms of inference, deductive reasoning is, as far as we know, performed only by humans.

## ==Induction==

Aristotle also described another form of inference—induction (*epagōgē*)—as the process of forming general principles from specific observations. While deduction moves from general to specific, induction works in reverse, starting with observations and building general rules from them. Unlike deduction, induction doesn’t provide certainty, only probabilistic conclusions.

This form of inference is not only exclusive to humans but also a relatively recent historical development: it has flourished since the Enlightenment to become the foundation of empirical science, allowing us to form general principles with predictive power. Many of the abstract, predictive concepts in our heads—general relativity, planetary orbits, continental drift, DNA heritability—arose through a mix of inductive and deductive reasoning.

These four forms of inference are more than categories of reasoning. They are successive layers of competencies that have evolved since the dawn of organic life, building on one another to create a scaffold of previously unseen reasoning abilities. The four types can be summarised in the table below:

| Type      | Goal           | Direction           | Reliability  | Example                                                                        |
| --------- | -------------- | ------------------- | ------------ | ------------------------------------------------------------------------------ |
| Adduction | Fitness        | External → Internal | Evolutionary | The sugar concentration is higher here → Let's keep swimming in this direction |
| Abduction | Explanation    | Effect → Cause      | Plausible    | The lights are off → The power might be out                                    |
| Deduction | Certainty      | General → Specific  | Certain*     | All dogs are mammals → My pet is a dog → My pet is a mammal                    |
| Induction | Generalisation | Specific → General  | Probable     | Every crow I’ve seen is black → All crows are likely black                     |

\*Assuming the premises are valid.
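To make the difference between "probable" and "certain" in the table concrete, here is a small Python sketch of the crow example. It uses Laplace's rule of succession as one possible way of turning a run of observations into a hedged generalisation; the function name and the observation counts are mine, chosen purely for illustration.

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's rule of succession: the estimated probability that the next
    observation matches the pattern, given `successes` out of `trials` so far."""
    return (successes + 1) / (trials + 2)

# Every crow observed so far has been black.
for n in (1, 10, 100, 1000):
    p = rule_of_succession(n, n)
    print(f"After {n:4d} black crows, P(next crow is black) is about {p:.3f}")

# The estimate approaches 1 but never reaches it: the generalisation
# "all crows are black" remains probable rather than certain. A single
# white crow would overturn it, while no number of black crows can prove it.
```

Contrast this with the deduction row of the table: given the premises, the conclusion about my pet follows with certainty rather than with a probability that merely approaches it.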
From cellular embodied knowledge to the generalisations of empirical science, a scaffold of inferential skills underpins intelligence. The last two forms of inference—deduction and induction—appear to be fully developed only in humans, explaining our advanced cognitive competencies relative to other life forms. It is this combination that truly sets humans apart, as it enables the creation of **abstractions**.

But what exactly are abstractions? An abstraction is the distillation of complexity into a simpler form. It is a form of information compression, stripping away extraneous details while preserving core aspects in the form of **general principles**. These general principles act as meta-information—information about information—with predictive power.

While this may sound somewhat abstract (pun intended), these generalisations exhibit properties that can be described and analysed. In philosophy, such principles are often referred to as categories, and thinkers like John Stuart Mill, Willard Quine, and John Searle have argued that inductive generalisations must be anchored in deductive **essences**. That is, categories have an essence if, and only if:

1. Their properties are **homogeneous** in the Aristotelian sense of being *necessary* and *sufficient*. This means that all instances within a category must share certain defining properties, and only those instances that meet these criteria belong to the category. For example, the category "gold" is defined by its atomic structure, mass, colour, and density. Every instance of gold exhibits these properties, and if a substance lacks any of them, it is not gold. Conversely, if we were to discover a material with all these properties but an entirely different atomic structure, we would have to revise our definition of gold accordingly.
2. Their properties are **stable**, meaning they do not change over time. A category or general principle must remain invariant across different contexts. Gold is gold whether it is discovered on Earth, Mars, or in a distant future civilisation. The same principles apply across time and space.
3. Their properties must be **intrinsic**, meaning they are inherent to the phenomenon itself. This is evident in our gold example—its defining characteristics are determined by its atomic composition. However, many of the categories we commonly use do not possess intrinsic properties. Take money, for instance. We all understand what money is, but it does not have an inherent physical essence. Money’s value arises from collective agreement, which is why it has taken so many different forms throughout history—seashells, metal coins, paper notes, and now digital bits. Its properties are extrinsic, assigned rather than intrinsic.

If this is true, then there is a limit to what can be inferred from general principles. Some things can be abstracted, while others can only be described.

This limitation may also explain why defining **understanding** is so difficult. Even the word itself is a vague metaphor—*under-stand*, implying one is "standing under" something in order to grasp its structure or function. The same is true for the notion of "grasping" an idea—another metaphor for mentally holding onto something. The inherent fuzziness of such terms highlights the difficulty of defining, let alone replicating, intelligence. If we struggle to define what it means to understand something, then our grasp of intelligence remains tenuous at best.

Perhaps to "understand" means to hold **inductive generalisations** about a phenomenon. However, that does not necessarily make the phenomenon predictable. I may understand the rules of chess, but I cannot predict the outcome of a game due to the vast number of possible moves. Understanding something does not equate to being able to forecast its future states with certainty.

Now that we have begun disentangling the relationships between intelligence, inference, understanding, and prediction, it is time to examine the organ that facilitates these functions: the brain.

[[Epistemic Machines|Next page]]