A system, in its formal definition, is a set of interacting components that function together as a unified whole to achieve a specific purpose. The definition is straightforward. The components are interdependent—if one part fails, the whole is affected.
The components work together toward a purpose that none of them could achieve alone. The system has boundaries that separate it from its environment. It receives inputs and produces outputs. It maintains itself through feedback loops that detect deviation from the desired state and generate corrections.
This is an elegant description of something that works. The heating system with its thermostat is the standard example because it is simple enough to display all the properties clearly. The thermostat detects the temperature. The temperature is the input. The desired temperature is the specified output. When the actual temperature falls below the specified output, the thermostat generates a signal that activates the heating. When the actual temperature reaches the specified output, the thermostat generates a signal that deactivates it. The loop closes. The system maintains the desired state. The purpose—a room at a particular temperature—is achieved through the interaction of components that could not achieve it individually.
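The closed loop described above can be sketched as a minimal bang-bang controller with hysteresis. The function name, the dead-band width, and the sample temperatures are illustrative assumptions, not part of the formal definition; the point is only that the measured value is compared directly to the desired value and the correction follows automatically from the gap.

```python
def thermostat_step(actual_temp, setpoint, heating_on, hysteresis=0.5):
    """Return the heater state after one measurement cycle.

    The loop is closed: measurement, comparison against the specified
    output, and correction happen in one step, with no judgment involved.
    """
    if actual_temp < setpoint - hysteresis:
        return True          # below the specified output: activate heating
    if actual_temp > setpoint + hysteresis:
        return False         # at or above the specified output: deactivate
    return heating_on        # inside the dead band: leave the state alone


# One pass through the loop per measurement: input -> comparison -> correction.
state = False
for measured in [19.0, 19.2, 20.6, 20.4]:
    state = thermostat_step(measured, setpoint=20.0, heating_on=state)
```

The hysteresis band is the one complication real thermostats add to the textbook picture: without it, the heater would switch on and off continuously around the setpoint.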
The elegance of the description is worth noting because it is doing something that the formal definition does not explicitly acknowledge. It is describing a system in which the purpose is unambiguous, the components are well-specified, the feedback mechanism is reliable, and the desired output can be measured directly against the actual output. These are the conditions under which the formal definition applies cleanly. They are not the conditions under which most human-facing systems operate.
The definition requires that the components work together toward a purpose that none of them could achieve alone. This implies that the purpose is shared—that all the components are oriented toward the same goal, that the interdependence serves the purpose rather than some other objective. In a mechanical system, this condition is trivially satisfied. The heating element does not have an objective other than heating. The thermostat does not have a preference for how often it switches. The components are built for the purpose and have no other.
In organisational systems, the components are people and the sub-systems those people belong to, and the condition that they share a common purpose is not trivially satisfied. The person answering the phone in the customer service department has objectives that include their performance metrics, their relationship with their manager, their interest in resolving the call within a defined time window, and their interpretation of what the caller actually needs. These may align with the organisation’s stated purpose. They may diverge from it in ways that are small individually and significant in aggregate. The feedback loop that would detect this divergence and generate a correction requires that the system can measure the gap between the actual output and the desired one—which requires that the desired output can be specified clearly enough to be measured, and that the measurement can detect the divergence rather than merely confirm that the metric has been satisfied.
The heating system measures the gap between actual and desired temperature in degrees. The gap is unambiguous. The correction is automatic. The customer service system measures the gap between actual and desired output in metrics that were chosen for their measurability—call duration, first-call resolution rate, customer satisfaction score. The metrics may correlate with the desired output. They may also be satisfied by outputs that do not correspond to the desired one. A call resolved within the time window by redirecting the caller to a different department has satisfied the metric. Whether the caller’s problem was resolved is a different question.
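The gap between metric and purpose can be made concrete with a toy example. The call records, the time window, and the outcome labels below are invented for illustration; nothing here is a real call-centre schema.

```python
# Two calls: one resolves the caller's problem, one redirects it elsewhere.
calls = [
    {"duration_min": 6, "outcome": "problem_resolved"},
    {"duration_min": 4, "outcome": "redirected_to_other_department"},
]

TIME_WINDOW_MIN = 8  # assumed metric: call closed within eight minutes


def metric_satisfied(call):
    # The system measures what it was designed to measure: duration.
    return call["duration_min"] <= TIME_WINDOW_MIN


def purpose_served(call):
    # The question the metric does not ask.
    return call["outcome"] == "problem_resolved"


# Both calls satisfy the metric; only one served the purpose.
passed_metric = [metric_satisfied(c) for c in calls]   # [True, True]
served_purpose = [purpose_served(c) for c in calls]    # [True, False]
```

The redirect is the cheaper way to satisfy the metric, which is why, in aggregate, a system optimising the metric can drift away from the purpose without any single component misbehaving.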
The formal definition distinguishes between open and closed systems. Closed systems are isolated from their environment. Open systems interact with it. Most human systems are open—they receive inputs from an environment they do not control and produce outputs that affect an environment they do not fully observe. The environment changes. The inputs change. The system that was designed for a particular range of inputs will encounter inputs outside that range, and its behaviour outside the design range is not specified by the design.
The biological analogy is instructive here. The human nervous system is an open system. It receives inputs from a constantly varying environment and produces outputs—behaviours, adjustments, regulatory responses—that are adapted to the specific inputs rather than specified in advance. The adaptation is possible because the nervous system is capable of processing variation without requiring the variation to fit predefined categories. It does not receive an input, check whether the input matches a defined category, and proceed according to the procedure for that category. It processes the specific input and generates a response appropriate to it.
The organisational system that processes people through a defined procedure is doing the opposite. It receives the variable input—the person with their specific situation—and applies a procedure designed for a defined category. The procedure produces the output specified for the category. Whether the output corresponds to what the specific input required is not directly tested, because the system was designed to test correspondence with the category, not correspondence with the specific input. The category is the abstraction that makes the system scalable. The specific input is what arrives. The gap between the abstraction and the specific input is the gap the procedure crosses without examining.
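The pattern described above, mapping a variable input onto a defined category and emitting the output specified for the category, can be sketched in a few lines. The categories, the classification rule, and the outputs are all hypothetical.

```python
# The outputs the system can produce, specified per category in advance.
PROCEDURES = {
    "billing": "send standard billing statement",
    "technical": "open standard support ticket",
}


def classify(request: str) -> str:
    # The abstraction: the specific input is reduced to a defined category.
    return "billing" if "invoice" in request.lower() else "technical"


def process(request: str) -> str:
    category = classify(request)
    # The output is the one specified for the category. Whether it matches
    # what this specific request actually needed is never tested here.
    return PROCEDURES[category]


result = process("My invoice is wrong and I was also overcharged last year")
```

Note that the request's second clause never influences the output: everything the classifier does not encode is discarded before the procedure runs, which is exactly the gap the text describes.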
Feedback loops are the mechanism by which systems maintain themselves—detecting deviation from the desired state and generating corrections. The formal definition presents this as a property of systems rather than something that has to be designed carefully and maintained actively. In mechanical systems, the feedback is automatic. The thermostat does not decide to measure the temperature. It measures it continuously and generates a signal whenever the measured value deviates from the specified value. The feedback is instantaneous, reliable, and directly connected to the correction mechanism.
In organisational systems, the feedback loops are not automatic. They require someone to measure the output, compare it to the desired state, identify the deviation, and generate a correction. Each of these steps can fail. The measurement may not capture the relevant dimension of the output. The comparison may be made against a specification that does not accurately describe the desired state. The deviation may be identified but not communicated to the part of the system that can generate a correction. The correction mechanism may not exist, or may exist but not be connected to the measurement.
The formal definition implies that feedback loops maintain stability. In practice, organisational feedback loops often maintain the appearance of stability—they generate measurements that confirm the system is performing within specification, while the specification has drifted from the purpose the system was designed to serve. The loop is closed. The signal confirms acceptable performance. The deviation from the original purpose is not detected because the measurement was designed when the purpose and the specification were aligned, and has not been updated as the specification has drifted.
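The drift can be sketched as well. The specification target and the yearly figures below are invented; the point is structural: the loop closes against a fixed specification, so it keeps reporting "within specification" even as the quantity the specification was meant to track declines.

```python
# Fixed at design time, when the specification still tracked the purpose.
SPEC_TARGET = 0.90  # fraction of calls closed within the time window


def loop_report(fraction_in_window):
    # The feedback loop compares output to the specification, not the purpose.
    if fraction_in_window >= SPEC_TARGET:
        return "within specification"
    return "deviation"


# Year by year, the output satisfies the specification...
yearly_fraction_in_window = [0.93, 0.94, 0.95]
# ...while the quantity the spec was meant to encode quietly declines.
yearly_problems_actually_resolved = [0.81, 0.72, 0.64]

reports = [loop_report(f) for f in yearly_fraction_in_window]
# No deviation is ever signalled; the second series is not measured at all.
```

The second list is invisible to the loop by construction: no correction can be generated from a quantity the measurement was never designed to capture.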
The definition of a system’s boundaries—what separates it from its environment—is straightforward in physical systems and genuinely difficult in organisational ones. The boundary of a computer system can be drawn at the hardware. The boundary of a transportation network can be drawn at the physical infrastructure. The boundary of an organisation is less clear, because the organisation’s outputs affect an environment that also affects the organisation’s inputs, and the people the organisation serves are simultaneously inside the interaction and outside the system.
The person who approaches an organisational system to receive a service is not a component of the system in the formal sense. They are the environment—the source of the input and the recipient of the output. The system was designed with a model of this person, a description of the range of inputs the person is expected to provide and the outputs the person is expected to find useful. The model is an abstraction. The actual person is the environment, and the environment is variable in ways the model does not fully capture.
The system that treats its model of the person as adequate for the person is treating the abstraction as the input rather than the actual input. It is processing the category rather than the situation. The output it produces is the output specified for the category. The person receives the output specified for the model of them rather than the output that their actual situation required. The boundary between system and environment is drawn at the model rather than at the actual person, and the gap between the model and the person is left outside the system’s processing.
The formal definition describes what a system is supposed to be: a unified whole of interdependent components, organised to achieve a shared purpose, receiving inputs and producing outputs, maintaining itself through feedback, bounded from its environment. The description is coherent. It is accurate for the class of systems it was developed to describe. Those systems are designed with a clear purpose, components that can be specified precisely, inputs that can be measured, outputs that can be evaluated directly against the purpose, and feedback mechanisms that are reliable and automatic.
The organisational systems that manage most of human life approximate some of these properties. They have components. They have something that functions as boundaries. They receive inputs and produce outputs. They have feedback mechanisms of a kind. What they do not reliably have is the alignment of all these properties around a shared purpose that is specified clearly enough to be measured directly, maintained consistently enough that the measurement remains relevant, and pursued by components whose individual objectives are subordinate to it.
The thermostat does not have individual objectives. It measures the temperature and generates a signal. The signal is what it is. The gap between actual and desired is what it is. The correction follows automatically.
The person at the desk has objectives. The procedure they follow has a history. The measurement system records what it was designed to record. The output is produced. The record shows a completed interaction.
Whether the room is the right temperature is a question the record does not ask.
The thermostat would know.
The system does not have a thermostat for that.