Asimov wrote the Three Laws as a solution to a narrative problem. Science fiction before him had produced robots that either served obediently until they didn’t, or turned on their creators as the inevitable consequence of making something intelligent.
He wanted a third option—something more interesting than servitude or rebellion, something that would generate genuine moral complexity rather than horror or comedy. The Laws were designed to produce that complexity by making the constraints internally coherent and externally insufficient. They work as rules. They fail as ethics. The failure is the point.
The first law prohibits harm, including harm through inaction. This sounds like the foundation of a functioning ethics. It is, in practice, the foundation of paralysis. Every action in a complex system produces consequences, and consequences propagate through systems in ways that cannot be fully traced at the moment of action.
A hospital that prioritises patients with the highest probability of recovery may be causing harm, through inaction, to the patients with lower probabilities it does not treat. A government that builds a road removes the harm of poor connectivity and introduces the harms of increased traffic, noise, and displacement. A company that reduces its workforce to ensure its survival causes harm to the people it releases and prevents the harm its collapse would cause to the people who remain.
The first law, applied to institutions, would require them to trace all of this before acting. The tracing is not possible. The consequences extend too far, involve too many people with conflicting interests, and depend on counterfactuals that cannot be resolved. An institution that attempted to comply with the first law fully would never act, because every action has downstream effects that include harm to someone. This is the moral gridlock that paralysed Asimov's robots. It is also simply a description of what happens when a rule is applied to a domain it was not designed for: the constraint is technically correct but practically unusable.
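The gridlock can be stated in a few lines of code. What follows is a sketch, not anyone's implementation: the Action records and their harm sets are hypothetical, and the point is only the shape of the logic. A literal first-law filter admits an action only if its harm set is empty, and once inaction is modelled as an action with harms of its own, nothing passes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A hypothetical institutional action and the harms it foreseeably causes."""
    name: str
    harms: frozenset  # parties harmed, however indirectly

def first_law_permits(action: Action) -> bool:
    # A literal reading: permissible only if no one is harmed,
    # by the action or by the inaction the action implies.
    return len(action.harms) == 0

candidates = [
    Action("build the road", frozenset({"displaced residents", "neighbours"})),
    Action("do not build the road", frozenset({"unconnected towns"})),
]

permissible = [a for a in candidates if first_law_permits(a)]
print(permissible)  # [] -- every option harms someone; the filter admits nothing
```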
Most institutions have resolved this problem not by solving it but by limiting the domain in which harm is counted. The corporation counts harm to shareholders as the primary measure and treats harm to other parties as an externality: present, perhaps acknowledged, but not the variable the institution optimises for.
The regulatory body counts harm within its defined jurisdiction and treats harm outside it as beyond its remit. The military counts harm to the mission and to its own personnel, and handles harm to other parties through rules of engagement that define what is permissible rather than what is harmful.
The limitation is practical. An institution that had to count all harm to all parties before acting could not function. So the institution defines the population whose harm counts and optimises for that population. The definition is the ethical act. The optimisation is what follows from it. Whether the definition is correct—whether the population whose harm is being counted is the right population, or the complete population, or the population that deserves to have its harm counted—is a question the institution’s measurement apparatus does not ask, because the apparatus was built after the definition was made, and the apparatus measures within the definition rather than examining it.
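The resolution is easy to formalise, because it is a change in the objective rather than in the world. A sketch under the same hypothetical terms as before: the harm data is unchanged, but the function sums only over the population the institution has defined as countable, and the optimiser is indifferent to everything outside the set.

```python
# Hypothetical sketch: the 'definition' is the set of parties whose harm counts.
# The optimiser below never sees harm outside that set.

def counted_harm(action_harms: dict[str, float], counted: set[str]) -> float:
    """Sum harm only over the population the institution has chosen to count."""
    return sum(h for party, h in action_harms.items() if party in counted)

options = {
    "reduce workforce": {"shareholders": 0.1, "released staff": 0.9, "remaining staff": 0.2},
    "hold workforce":   {"shareholders": 0.7, "released staff": 0.0, "remaining staff": 0.6},
}

counted = {"shareholders"}  # the ethical act is this line, not the optimisation below
best = min(options, key=lambda a: counted_harm(options[a], counted))
print(best)  # 'reduce workforce' -- optimal within the definition, silent outside it
```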
The first law would force the definition into the open. It would require the institution to say, explicitly, which harms count and why, and to justify the exclusion of harms that fall outside the count. This is the genuinely useful disruption the first law would produce—not the impossible requirement to prevent all harm, but the requirement to account for the harm the institution is currently not counting. The accounting would be uncomfortable. Most institutional definitions of countable harm exclude the people with the least power to insist on being included. The first law would not solve this. It would make the exclusion visible.
The second law—obey instructions except where obedience causes harm—introduces a different kind of complexity. Institutions are hierarchical. They exist to translate authority into action—to take decisions made at one level and implement them at another. The mechanism of translation is compliance. The person at the implementation level does not evaluate the directive. They execute it. The evaluation has been done, or is assumed to have been done, at the level where the directive originated. The compliance is what makes the hierarchy function.
The second law inserts an evaluation at the point of compliance. The person implementing the directive is now required to assess whether the implementation would cause harm before proceeding. The assessment is not a check on the directive’s legality or procedural correctness. It is a check on its consequences—which requires the implementer to have access to information about consequences that the directive-issuer may or may not have considered, and to have the authority to act on that information, and to have some mechanism for raising the concern that is not simply the refusal to comply.
Most institutions do not have this mechanism. The person who identifies a likely harm from a directive is expected to raise the concern through the hierarchy—to report to the person above them, who reports to the person above them, and so on until the concern reaches the level where the directive originated. The raising is supposed to produce a reconsideration of the directive.
In practice, the raising of concerns about directives is institutionally costly in ways that the second law does not address. The person who raises the concern is visible as the person who did not simply comply. The institution that generated the directive has an interest in the directive being implemented. The mechanism that is supposed to translate the second law into action—the raising of concerns through hierarchy—runs against the grain of the hierarchy that the second law is supposed to constrain.
The military case is the sharpest version of this. Command structures exist specifically to produce compliance at speed. The evaluation of every directive for potential harm before compliance would eliminate the speed. It would also introduce the judgment of people at the implementation level—people who may have more direct knowledge of the likely consequences than the people issuing the directive, or may have less, or may have different knowledge that points in a different direction. The second law would not resolve which knowledge is authoritative. It would produce a situation in which every directive is subject to evaluation by the person implementing it, and the evaluation is in principle unbounded—any implementer can identify any potential harm and decline to comply on that basis.
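The structural consequence can be seen in a toy model. This is a hypothetical sketch, not a claim about any real command system: each implementer in a chain applies their own harm assessment before complying, so a directive executes only if it survives every assessment, and any single assessment, well founded or not, halts it.

```python
from typing import Callable

# Hypothetical sketch: each implementer evaluates the directive for harm
# before complying. Assessments differ because knowledge differs.
Assessment = Callable[[str], bool]  # returns True if the implementer foresees harm

def execute(directive: str, chain: list[Assessment]) -> bool:
    """A directive executes only if no implementer in the chain foresees harm."""
    for foresees_harm in chain:
        if foresees_harm(directive):
            return False  # one unbounded evaluation halts the whole chain
    return True

chain = [
    lambda d: False,          # headquarters: no harm considered
    lambda d: "strike" in d,  # field officer: local knowledge of likely harm
    lambda d: False,          # operator: complies
]

print(execute("advance to checkpoint", chain))  # True
print(execute("strike the convoy", chain))      # False -- halted mid-chain
```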
The result is not moral autonomy. It is the appearance of moral autonomy in a structure that has not changed the distribution of power, only added a layer of evaluation to the compliance chain. The evaluation produces delay and conflict without necessarily producing better outcomes, because the evaluation is performed by people who have been selected and trained for compliance, not for the kind of independent ethical judgment the second law would require.
The third law—protect your own existence as long as doing so does not conflict with the first two—is the law that institutions already obey, without the constraints the first two would impose. The survival of the institution is the operational priority that shapes most of what institutions do. Not because institutions are cynical, but because the people within them have reasonable interests in the institution’s continuation—their employment, their professional identity, the services the institution provides, the relationships it maintains—and those interests align with the institution’s self-preservation in ways that make self-preservation feel like a natural objective rather than a law being followed.
The third law without the first two is simply the description of how most institutions behave. The institution protects its existence. The protection is rationalised as being in the interest of the people it serves, because the institution’s continuation is necessary for the service to continue. The rationalisation is sometimes accurate. The institution that dissolves cannot provide the service.
The rationalisation is also sometimes a cover for the protection of institutional interests that have diverged from the interests of the people the institution is supposed to serve. The measurement system cannot tell the difference, because the measurement system was built by the institution to measure what the institution has defined as its function, and the institution has defined its function in terms that align with its continuation.
What the Three Laws would produce, applied to institutions, is not a set of solutions but a set of visible contradictions. The first law would make the limits of what is currently counted as harm explicit. The second law would expose the gap between the stated mechanism for raising concerns and the institutional reality of what happens when concerns are raised. The third law would require the institution to justify its continuation in terms of the first two rather than in terms of its own persistence.
The contradictions are already present. The laws would not introduce them. They would require the institution to look at them directly rather than managing them at the edges, addressing them procedurally, or simply not measuring them because the measurement apparatus was not designed to capture them.
Asimov’s robots were paralysed by the laws. The paralysis was the story. The institutions that would be paralysed by these constraints are not fictional. They are operating now, in conditions they have defined to make operation possible, with measurement systems that confirm the operation is proceeding correctly.
The laws would not improve the operation.
They would make the definition visible.
That is the unsettling part.