I reported a food safety issue to a company whose product I had purchased. The issue was specific, the report was clear, and the company acknowledged it. They escalated the matter internally and initiated whatever process such reports require. They thanked me for bringing it to their attention. The email was courteous and the tone appropriate to the situation.
Nothing had yet been resolved. The product was still on shelves. The issue remained. No outcome existed.
That afternoon, a second email arrived.
Please rate the help we have given you.
The request assumed something that had not occurred. It asked me to evaluate help that had not been delivered—to measure, on a scale of one to five, a service that existed at the level of acknowledgment and not yet at the level of outcome. The company had received my report. It had replied. It had told me that something was happening. Nothing had happened. The email did not distinguish between these states. It treated the reply as the service, and the service as something rateable, and the rating as something I should now produce.
The scale ran from one to five. The scale did not include zero. There was no option for no service has occurred or the matter is unresolved or it is not yet possible to rate something that has not been completed. The system that generated the evaluation request had not been built with those options because it had no use for them. It had already decided that service had occurred. It knew this because it had sent a reply. The reply was, in the system’s internal accounting, the service. The service was therefore rateable. The rating was therefore requestable.
Whether the person receiving the reply considered it a service was not part of the system’s calculation.
I want to examine what the rating request was actually asking me to do, because several things were happening simultaneously and not all of them were visible on the surface.
The most obvious was the request for my time and attention. I had purchased a product, identified a problem with it, reported that problem, and received an acknowledgment. I had already invested effort in this sequence without any resolution resulting from it. The rating request asked for further investment—reflection on the interaction, judgment about its quality, the production of a rating and, presumably, a written comment—without offering anything in return. The familiar structure: labour requested after the transaction, without agreement, without compensation, without acknowledgment that a request was being made at all.
Less visible was where the rating would go. The link in the email routed to an external platform—not the company’s own system but a third-party service, almost certainly based in the United States, that specialises in collecting and aggregating customer feedback on behalf of client organisations. I had not agreed to share my data with this platform. I had not been told, in the email, that clicking the link would take me outside the company’s own systems. I had not been informed what the platform would do with my rating, whether it would be identifiable, how long it would be retained, or who would have access to it.
The company had not decided to collect my feedback itself. It had decided to allow another system to collect my feedback on its behalf, using data generated through an interaction with the company, routed through infrastructure I did not choose and had not been told about.
The feedback was not for the company. It was for the system that measures the company.
The star rating itself is a mechanism worth a moment’s attention. It reduces experience to a number. This is not, in principle, a problem—numbers are useful for aggregation, and aggregated feedback serves genuine purposes in helping organisations understand patterns across large volumes of interactions. The compression is the cost. What the star rating cannot contain is the answer I wanted to give, which was that the service I was being asked to rate had not yet been provided. The scale had no vocabulary for this. It moved from one to five and each point on the scale assumed that something had occurred and was being assessed.
One star means it occurred and was very poor. Five stars means it occurred and was excellent. The gap between zero and one—the space where it has not yet occurred would live—was not available. The system had closed that space before I arrived at the question, because the system’s premise was that something had been delivered. The premise was wrong. The system had no mechanism for being told it was wrong.
This is the system recognising its own activity as completion. It sent a reply. Sending a reply is a thing the system knows how to do and knows how to record. The record shows: reply sent. Reply sent reads as: interaction completed. Interaction completed generates: evaluation request. At no point in this sequence does the system ask whether the person on the other end of the reply considers the interaction complete. The system does not need to ask. It has already answered.
The food safety issue is, in the context of this sequence, almost irrelevant. The company had received a report that a product might pose a risk to the people who consumed it. Whatever process they had initiated, they had not yet told me what they found, what they were doing about it, or what the outcome was. The product was, as far as I knew, still available for purchase by people who did not know about the issue I had reported.
I am aware that internal investigations take time, and that companies cannot always communicate every stage. I am not suggesting that the absence of immediate resolution indicates negligence. I am suggesting that the presence of an evaluation request, in the absence of any resolution, indicates something specific about how the system understands itself.
The system understands itself as having performed. It sent a reply. It escalated the issue. It thanked me for reporting it. These are activities that the system can record, measure, and report. They are not outcomes. The system does not distinguish between activities and outcomes because the system measures activities. Outcomes are harder to measure—they require the passage of time, the follow-through to a conclusion, the checking of whether the thing that needed to happen actually happened. Activities are immediate, recordable, and sufficient for the generation of a metric.
The metric requires an evaluation. The evaluation requires a rating. The rating requires me.
If I were to question the timing of the request—to ask why an evaluation had been requested before any outcome existed—the response would most likely take one of three forms.
No harm was intended. This is almost certainly true. The algorithm that generated the email operates without intent of any kind. It sent the request because the trigger condition—reply sent—had been met, and the trigger condition does not distinguish between a reply that resolved an issue and a reply that acknowledged one. The absence of intent does not produce the absence of effect. The request arrived. The implicit assumption that service had occurred was embedded in it. The star scale that offered no option for nothing has yet been resolved was part of it. None of this required intent. It required only a system that measured the wrong thing at the wrong time.
It’s just how the system works. Also true. The system works by measuring responses rather than outcomes, by requesting evaluation at the point of reply rather than at the point of resolution, by routing feedback through third-party platforms without disclosing this clearly, by offering star scales that assume completion as their premise. This is how the system works. The description of how the system works is not a justification for how it works. It explains the mechanism. It does not address the mismatch.
Everyone does this. True again, and the most revealing of the three. The normalisation of a practice does not establish its validity. Requesting evaluation before service has been delivered is common. Its commonality is a fact about how many organisations have adopted this approach, not a fact about whether the approach is accurate. The question the rating request embeds—how was the help we gave you?—contains a false premise regardless of how many systems ask it.
I did not complete the rating. There was nothing to rate.
The issue I reported was specific, the company’s response was courteous, and the escalation they described may well have led to something useful. I do not know. No one told me the outcome. No one contacted me with a result. The investigation, if it occurred, occurred entirely within the company’s own systems and produced, as far as I can tell, no communication addressed to me.
What arrived, before any of that, was the request to evaluate the help I had received.
The help I received was an acknowledgment. The acknowledgment was courteous. I could have given it four stars. The option for no help has yet been received, the matter remains open, and it is not possible to evaluate something that has not occurred was not on the scale.
The scale begins at one because zero is not an acceptable answer.
The system records a response. It calls it service.
Nothing has been resolved. The system asks how it did.