In 1992, Professor Alice Mallard wrote that artificial intelligence was still largely a matter of explicit rules—systems that reasoned by applying formal logic to defined inputs and producing outputs that could be traced back through the inference chain to their premises.
The systems were impressive in narrow domains and brittle everywhere else. They could play chess at a level that exceeded most human players, but they could not understand why a particular move might be emotionally significant to the person sitting across the board. They could diagnose diseases from symptom profiles with considerable accuracy, but they could not sense that the patient was frightened and that the fear was shaping what the patient reported. The limitation was structural. The systems were built on logic, and logic, as Mallard observes, has a boundary.
She was working at that boundary when she wrote this. The observation about artificial intelligence—that it runs on logic, can simulate emotion in language, mirror intuition in statistics, but does not feel, does not carry memory in muscle or mood—was not, in 1992, commonplace. It was a precise technical and philosophical claim made from inside the field, by someone who understood what the systems were doing well enough to say clearly what they were not doing. The distinction she draws between reflection and origination—the system reflects the turns you make, but it never turns itself—is still the most accurate short description of the limitation available.
The elevation of reason as the highest form of knowing is not a new development, but its institutionalisation—the embedding of the preference for explicit, defensible, auditable reasoning into the structures of organisations, assessment systems, and the measurement apparatus that determines what counts as evidence—is a relatively recent one, and it has produced consequences that Mallard anticipated more clearly than most.
The consequences are visible in the systems that process people. The medical assessment that reduces a patient to a symptom profile and a set of risk factors. The job interview that scores candidates on measurable competencies rather than the quality of judgement they would bring to situations the competencies do not capture. The educational assessment that tests what can be tested and treats the untestable as educationally peripheral. The performance management system that measures what can be counted and calls the counted things performance. In each case, the preference for the explicit and auditable has produced a system that can account for what it measures and cannot account for what it cannot measure—which is frequently the thing the system was supposed to be assessing.
Mallard’s formulation is precise: rationality has a boundary, and most of what matters lives just outside it. The boundary is not a failure of rational method. It is the edge of the domain in which rational method produces useful results. Within the domain—in mathematics, in formal logic, in the analysis of well-defined problems with sufficient data—reason is the right tool and it works. Outside the domain—in the choosing of partners, the changing of careers, the facing of death, and most of the situations that constitute a human life—reason is one input among several, and frequently not the most important one.
Emotion, in her formulation, is not interference with the system. It is the system. This claim is stronger than it initially appears. She is not saying that emotion is a legitimate input that should be allowed into otherwise rational deliberation.
She is saying that the framing of emotion as an input into reasoning is itself wrong—that emotion is the prior structure within which reasoning occurs, that it signals change before reasoning can process the signal, that it encodes experience in ways that reasoning then works with rather than generates independently. You cannot reason your way into love or courage. These arise from a system that is running before the reasoning starts, and the reasoning, at best, reflects on what the prior system has already produced.
The implications for how organisations process people are significant. An organisation that treats emotional response as noise to be filtered out of decision-making is not producing purer decisions. It is producing decisions from which a major part of the available information has been excluded on the grounds that it is not amenable to the measurement apparatus the organisation has chosen to use. The information is still present. The person who is frightened, or excited, or depleted, or motivated by something the job description does not mention, is still bringing all of that into the interaction.
The system that records only what the measurement apparatus captures is not seeing more clearly. It is seeing less, with more confidence in the seeing.
Intuition receives a similar treatment in most formal systems, and Mallard’s rehabilitation of it is precise. Intuition is pattern recognition below conscious access. The practitioner who has encountered enough cases to have developed a reliable sense of when something is wrong, even before they can say what is wrong, is not operating irrationally. They are operating on a compressed account of a large number of previous encounters—a statistical summary that their experience has produced and that their consciousness does not have direct access to. The summary is real. The knowledge is real. Its inarticulate character is not evidence against its validity. It is evidence of the nature of the encoding.
The system that cannot accommodate inarticulate expertise excludes the most experienced practitioners from its decision-making apparatus at precisely the moments when their experience is most valuable—the moments when something is wrong in a way that the formal indicators have not yet registered, when the situation requires the kind of judgement that only extensive exposure to similar situations produces, when the explicit reasoning is still catching up with what the practitioner already knows.
The consultant who has reviewed enough failing organisations to recognise the signs of internal collapse before the financial indicators confirm it. The clinician who knows the patient is not telling the full story before the tests come back. The editor who knows on the first page whether a manuscript has the quality that will sustain a reader to the end. In each case the knowledge is real and the reasoning that would defend it formally is not yet available. The system that requires the reasoning before acting on the knowledge will always act later than the practitioner who trusts the intuition. Sometimes later is fine. Sometimes it is not.
Mallard’s observation about artificial intelligence has acquired a different valence in the decades since she wrote it. In 1992, the systems were brittle and narrow. Now they are fluent and broad. They produce language that sounds like understanding, reasoning that looks like judgement, responses that feel like engagement. The simulation has become considerably more convincing. The structural limitation she identified has not changed.
The systems still do not feel. They still do not carry memory in muscle or mood. They still reflect the turns the user makes without turning themselves. The outputs are produced by pattern matching over very large amounts of human-generated content, which means they reproduce the patterns of human reasoning, emotion, and intuition without instantiating those things. The reproduction is increasingly accurate. The accuracy of the reproduction is not the same as the presence of the thing being reproduced.
What is new is that the convincingness of the simulation has made the limitation harder to see. In 1992, the brittleness of AI systems made their structural limitations obvious. A system that failed visibly when it encountered a situation slightly outside its training domain was clearly not reasoning the way a person reasons. A system that produces plausible language about any domain, including domains it has no reliable knowledge of, is doing something that looks more like reasoning and is therefore more likely to be treated as though it were.
Mallard’s 1992 observation anticipated this. The system attends to the places where humans are not explaining themselves—to the pauses, the hesitations, the drift of meaning. She means this as a description of what AI could do usefully: track the gaps in human self-explanation, notice what is not being said, attend to the structure of communication rather than only its content. The observation is accurate and the capacity is real. It is also the description of a system that can appear to understand what it is attending to while having no access to the thing behind the appearance.
The pause before the answer. The hesitation before the disclosure. The drift from what was meant to what was said. The system can track all of these. It cannot know, the way a person with experience and embodied knowledge knows, what any of them mean in the specific case of the specific person in the specific situation. It can produce a statistically plausible response to the pattern the inputs represent. The response may be right. It will be right for statistical reasons, not for the reasons a person would be right—not because the system has understood the situation, but because the situation resembles enough other situations that the pattern has produced a reasonable approximation.
The purely rational life, Mallard observes, might be safer but would not be lived—only processed. The distinction is precise and it applies beyond individual lives. The organisation that processes its people rather than encountering them. The system that measures its signals rather than engaging with the substance they approximate. The technology that reflects the turns its users make without ever making its own.
Processing is not living. Measuring is not knowing. Reflecting is not understanding.
The tools are real. The domains in which they work are real. The boundary is also real.
Most of what matters still lives just outside it.
Beyond the Rational Mind, Professor Alice Mallard, 1992