I’ll Make a Note

I have no credit history in Australia. This is not a financial problem. It is an administrative one—the absence of a record rather than the presence of a bad one.

I have not borrowed money here, have not held unpaid credit card debts that report to the bureaux, have not left the kind of trace that credit systems use to construct a picture of a person’s financial behaviour.

The picture does not exist. The absence of the picture is, in most systems that consult credit records, treated as equivalent to a negative result. The system expects a record. No record arrives. The system stops.

When the new energy provider’s agent reached the credit-check step in the sign-up process, I told her there would be no result.

She said: “I’ll make a note of that.” The process continued.

The sentence is worth examining because it is doing more than it appears to do. On the surface it is a minor administrative action—a note attached to a record, a flag in a field, a piece of context added to a file. Below the surface it is a structural intervention. The agent had encountered a condition the system would normally treat as a blocking failure. She converted it into context. The blocking failure became a known variation, noted, attached to the record, carried forward with the process rather than stopping the process until the variation was resolved.

This required two things to be true simultaneously. The agent needed the judgement to recognise that the absence of a credit record was not the same as the presence of a credit problem. And the system needed to accept the note—to allow a local judgement to be recorded in a way that would travel with the account and prevent the same question arising again. Both were true. The agent acted. The system accepted. The process continued.

Most systems are not built this way.

The credit check is usually a hard gate. It is binary—the record exists or it does not, the score falls above the threshold or below it—and binary gates do not accommodate variation. They produce pass or fail. Pass means continue. Fail means stop, escalate, request alternative documentation, refer to a specialist team, require a security deposit, or decline the application. The gate does not have a third position for no record due to absence of activity rather than history of default. The gate was built for two outcomes.

When a third presents itself, the gate stops.
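
If I were to sketch that gate in code, it would have exactly two positions. What follows is illustrative Python; the names, the threshold, and the shape are my own inventions, not any provider’s actual logic:

```python
from dataclasses import dataclass
from enum import Enum

class GateResult(Enum):
    PASS = "pass"   # continue the sign-up
    FAIL = "fail"   # stop, escalate, require a deposit, or decline

@dataclass
class CreditRecord:
    score: int

THRESHOLD = 500  # an invented number; only the binary cut matters

def hard_gate(record: CreditRecord | None) -> GateResult:
    # Built for two outcomes. An absent record has no position of
    # its own, so it collapses into the failure branch: "no record"
    # and "bad record" become indistinguishable.
    if record is None or record.score < THRESHOLD:
        return GateResult.FAIL
    return GateResult.PASS
```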

The agent converted the gate into a soft condition. She did not override the system. She used the system’s own capacity to annotate records in order to attach a non-standard context to a standard field. The credit check produced no result. The note explained why. The explanation was sufficient for the process to treat the empty field as filled—not with a credit history, but with an account of why no credit history existed. The account was accepted as adequate.
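
The annotated path has a different shape. Continuing the sketch (and reusing its invented CreditRecord and THRESHOLD), with the same caveat that this illustrates the behaviour rather than any real implementation: the empty field is carried forward with an explanation attached instead of collapsing into failure.

```python
from dataclasses import dataclass, field

@dataclass
class CheckOutcome:
    proceed: bool
    notes: list[str] = field(default_factory=list)

def soft_gate(record: CreditRecord | None) -> CheckOutcome:
    if record is None:
        # The blocking failure becomes a known variation: the check
        # produced no result, the note says why, and the note travels
        # forward with the account instead of stopping the process.
        return CheckOutcome(
            proceed=True,
            notes=["No credit file: no local credit activity, "
                   "not a history of default. Assessed by agent."],
        )
    if record.score < THRESHOLD:
        return CheckOutcome(proceed=False, notes=["Score below threshold."])
    return CheckOutcome(proceed=True)
```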

I have no credit record. The system noted it. It did not stop.

I want to be precise about why this is different from the behaviour I have been describing in the systems that preceded it, because the difference is not superficial.

The first energy provider’s systems encountered variations and stopped. The mobile number was bound to another account—stop. The identity token could not be used in two places—stop. The new account could not be verified while the old account was open—stop. Each stop was generated by the system’s own constraints, applied to a situation the constraints had not anticipated. The system had no capacity to note the variation and continue. It had only the constraint, applied uniformly, producing a loop from which I could not exit without the system resolving a conflict it could not represent.

The second provider’s agent, facing a different kind of variation, did not stop. She noted. The note was the resolution. Not a resolution of the underlying condition—I still have no credit history, and the note does not change that—but a resolution of the system’s need for information about the condition. The system needed to know something about my credit standing. It could not obtain a score. The note told it why. The system accepted the note as sufficient and proceeded.

This is local judgement inside a global structure. The structure has rules that apply across all customers. The rules exist because the structure cannot individually assess every variation that every customer presents. The rules are approximations—they work for the majority of cases and they fail at the edges. The edges are where variations live. A system that can only apply its rules uniformly will stop at every edge. A system that allows agents to annotate exceptions—to attach a note that says this edge case has been assessed and this is what was found—can continue past the edges without abandoning the rules that govern the centre.

The distinction matters because edges are not rare. Every system that interacts with real people will encounter variations continuously, because real people do not conform uniformly to the categories the system was designed for. A person moves between countries and has no local credit history. A person changes their name and holds accounts under two versions of it. A person’s mobile number is attached to an account being closed. These are ordinary life events. They are extraordinary system events. A system that treats them as extraordinary will stop repeatedly. A system that treats them as known variations within a range of expected inputs will absorb them and continue.

I have encountered three distinct system types across these interactions, and the contrast between them is now clear enough to describe precisely.

The first type cannot deviate. Its rules are applied uniformly, its gates are binary, its paths do not accommodate inputs that fall outside the expected range. When a variation arrives, the system stops and presents the stoppage to the user as a task to be completed. The user must resolve the variation—by obtaining a different mobile number, by waiting for the old account to close, by finding a way to fit the expected input before the process will continue. The system’s inflexibility becomes the user’s workload.

The second type appears to accommodate variation but does not. It offers steps, channels, and interactions that produce activity without producing resolution. The chatbot provides instructions that are valid for a different problem. The form accepts entries that do not verify. The phone system routes to a team that escalates to another team. Each step is completed. Each completion is logged. The log accumulates evidence of interaction. The variation remains unresolved beneath the activity. The system is satisfied because it has recorded completed steps. The user is not satisfied because the completed steps have not addressed the problem.

The third type absorbs variation. It does not require the user to resolve the system’s constraints. It does not generate activity that substitutes for resolution. It recognises that the input deviates from the expected range, notes the nature of the deviation, and continues. The agent who says “I’ll make a note of that” is operating within this type. She is not overriding the system. She is using a capacity the system was built to accommodate—the capacity for local judgement, for exceptions that are noted and carried forward, for the recognition that the rules are approximations and the edges are real.
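
Put side by side, the three types reduce to three small behaviours. One last sketch, invented for the shape of it rather than taken from any system I dealt with:

```python
class BlockedError(Exception):
    """The system's constraint, exported as the user's problem."""

activity_log: list[str] = []

class Account:
    def __init__(self) -> None:
        self.notes: list[str] = []

account = Account()

def type_one(variation: str) -> str:
    # Cannot deviate: the constraint is applied uniformly, and the
    # stoppage is handed to the user as a task to complete.
    raise BlockedError(f"Cannot process {variation!r}. Resolve it and retry.")

def type_two(variation: str) -> str:
    # Appears to accommodate: each step is completed and logged,
    # evidence of interaction accumulates, resolution does not.
    activity_log.append(f"chatbot: instructions given for {variation!r}")
    activity_log.append(f"form: entry recorded for {variation!r}")
    activity_log.append(f"phone: escalated {variation!r} to another team")
    return "in progress"  # indefinitely

def type_three(variation: str) -> str:
    # Absorbs the variation: recognises the deviation, notes its
    # nature on the record, and continues.
    account.notes.append(f"Known variation, assessed: {variation}")
    return "continue"
```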

The credit-check moment stayed with me because of what it revealed about the system’s relationship to its own imperfection. Most systems do not trust their own ability to handle imperfect inputs. They enforce uniformity because uniformity is controllable and variation is not. The uniform system knows what to do with every input because it has reduced all inputs to the same small set of categories. The cost of this reduction is the edge—the input that does not fit the categories and stops the process.

A system that allows local judgement has accepted that inputs will vary, that not all variations can be anticipated in advance, and that the response to unanticipated variation is not to stop but to note, contextualise, and continue. This requires trust—in the agent’s judgement, in the system’s capacity to carry a note forward, in the assumption that an exception noted is an exception handled.

The trust is not naive. It is structural. The system was built to accommodate it.

The first provider’s systems were not built to accommodate it. The constraints were hard. The gates were binary. The agent, if I reached one, could not note an exception and continue. The system did not accept annotations of that kind. The constraint was the constraint and the user was required to resolve it.

The new provider’s agent noted the exception and the system accepted the note. The process continued. My account exists. The service will transfer. The credit check produced no result and the system carried on with a record of why.

The system did not require me to fit it.

It adjusted to include me.

That is the difference between a system that manages its own complexity and a system that exports it. One stops at the edges. The other notes them and continues.