The Programmer's Oath, Revisited
The people who write the rules
Robert C. Martin, better known as Uncle Bob, has been making the same argument for over a decade: programmers write the rules that society runs on, yet they operate with less professional accountability than almost any other profession that touches public safety.
His reasoning is straightforward. Software controls medical devices, financial transactions, election systems, aircraft, and power grids. The people who write that software have no licensing requirement, no professional oath, no enforceable standard of care, and no governing body that can revoke their right to practice. A barber needs a license. A software engineer who ships code to millions of people does not.
Martin proposed The Programmer’s Oath, a set of promises modeled on the Hippocratic Oath. Do no harm. Produce your best work. Never knowingly ship defective code. Keep your skills sharp. Never stop learning. Test everything you can. Leave the codebase better than you found it.
It was a good starting point. But the world has changed since he proposed it.
The new variable
When Uncle Bob made his case, the assumption was that humans wrote code. The accountability model was clear, even if it wasn’t enforced. The person who wrote the code was responsible for the code. If it was defective, someone made a mistake. If it caused harm, someone was negligent.
That model breaks down when AI writes the code.
Who is accountable when an LLM generates a function with a subtle security flaw? The developer who prompted it? They may not have understood the output well enough to catch it. The company that deployed the AI tool? They didn’t write the specific code. The AI itself? It has no legal standing, no professional obligations, and no concept of accountability.
The code exists. It’s in production. It has a bug. And the chain of responsibility is blurred in a way that Uncle Bob’s original oath didn’t anticipate.
Accountability without authorship
The traditional model assumes authorship. You wrote it, you own it. But AI-assisted development introduces a new relationship: you accepted it. You didn’t write the code, but you reviewed it (or didn’t), integrated it into the system, and deployed it.
Acceptance is a form of authorship. When a developer uses AI-generated code, they’re making a professional judgment: this code is correct enough, safe enough, and reliable enough to ship. That judgment carries the same responsibility as writing the code by hand, arguably more, because the developer is vouching for work they didn’t fully create.
This is where the oath needs updating. It’s not enough to promise “I will not ship defective code.” The modern version needs to account for code you didn’t write: I will not ship code I cannot verify, regardless of who, or what, produced it.
The specification as standard of care
In medicine, the standard of care is the baseline of competence expected from any practitioner. A doctor who fails to meet it can be held liable for malpractice. The standard isn't whether errors occurred; it's whether you followed established protocols, exercised reasonable judgment, and acted on the best available information.
Software has no equivalent. There’s no defined standard of care for writing code, no established protocol for verifying it, and no baseline of diligence that separates professional practice from negligence.
Specifications could fill that role.
If the standard of care for software were “the implementation must be validated against an explicit specification,” then the accountability question has a clear answer. Was there a spec? Was the code validated against it? If yes, the developer met the standard. If no, they didn’t. It doesn’t matter whether a human or an AI wrote the code. The spec is the standard, and validation against it is the measure of professional diligence.
And it holds regardless of how the code was produced.
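The idea of validation against an explicit specification can be made concrete with an executable spec. The sketch below is a minimal, hypothetical illustration (the names `spec_withdraw` and `withdraw` are invented for the example, and a real spec would be far richer), but it shows the shape of the standard: the spec is a separate artifact, and diligence means checking the implementation against it, whoever produced the implementation.

```python
def spec_withdraw(balance_before, amount, balance_after):
    """Hypothetical specification for a withdrawal: debit exactly
    `amount`, never overdraw, and reject non-positive amounts."""
    if amount <= 0:
        return balance_after == balance_before   # rejected, state unchanged
    if amount > balance_before:
        return balance_after == balance_before   # insufficient funds
    return balance_after == balance_before - amount

def withdraw(balance, amount):
    """Implementation under test; could be human- or AI-written."""
    if amount <= 0 or amount > balance:
        return balance
    return balance - amount

# Validation: exercise the implementation against the spec.
cases = [(100, 30), (100, 0), (100, -5), (100, 150), (50, 50)]
assert all(spec_withdraw(b, a, withdraw(b, a)) for b, a in cases)
```

The point of the shape, not the details: the spec answers "was there a defined standard?" and the assertion answers "was the code validated against it?"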
The window is closing
Martin warned that if the software profession doesn't regulate itself, governments will do it instead. And governments, by and large, are poorly equipped to regulate software development. The regulations they produce will be crude, burdensome, and likely to miss the point.
The EU’s AI Act is an early example. It’s well-intentioned legislation that attempts to regulate AI systems by risk category. But it focuses on AI outputs rather than the underlying specification and verification practices. It asks whether the AI was fair and transparent, not whether the software it produced was verified against a formal specification of correct behavior.
This is what happens when regulation is imposed from outside. It addresses symptoms instead of root causes. The root cause is the absence of professional standards for defining and verifying what software should do. If the profession established those standards first, external regulation could build on them rather than invent from scratch.
What the oath should say now
The Programmer’s Oath needs to evolve: not to replace Uncle Bob’s original (his principles are sound) but to extend it for a world where AI is part of the development process.
The additions should address three gaps:
Verification over authorship. I will not deploy code that has not been validated against an explicit specification, regardless of whether the code was written by me, by a colleague, or by a machine. The obligation shifts from “write good code” to “verify that the code meets a defined standard.”
Specification as professional duty. I will maintain clear, structured specifications for the software I am responsible for. I will not allow specifications to drift from their implementations without deliberate revision. This makes specification maintenance a professional obligation, not optional documentation.
Accountability for AI-assisted work. I accept responsibility for all code I integrate into a system, including code generated by AI tools. I will review, understand, and validate AI-generated code to the same standard as code I write by hand. This closes the accountability gap by making acceptance equivalent to authorship.
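The "no drift without deliberate revision" duty can also be made mechanical. The sketch below is one hypothetical mechanism, not an established practice: pin a fingerprint of the spec next to the implementation, so that a CI check fails whenever the spec text changes without the pin being deliberately updated (all names here are invented for illustration).

```python
import hashlib

def fingerprint(spec_text: str) -> str:
    """Short, stable fingerprint of a specification's text."""
    return hashlib.sha256(spec_text.encode()).hexdigest()[:12]

# The spec is an explicit artifact; the pin is committed alongside
# the implementation it governs.
SPEC = "withdraw(amount): debit exactly amount; never overdraw; reject amount <= 0"
PINNED = fingerprint(SPEC)

def check_drift(current_spec: str, pinned: str) -> bool:
    """True if spec and pin agree; False signals that the spec changed
    and the pin (i.e., a deliberate revision) must be updated."""
    return fingerprint(current_spec) == pinned

assert check_drift(SPEC, PINNED)            # spec and implementation in sync
assert not check_drift(SPEC + " (amended)", PINNED)  # drift detected
```

A check like this doesn't prove the spec is good; it only makes silent divergence impossible, which is exactly the obligation the second addition describes.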
None of these additions is particularly demanding. Any serious profession would expect as much. The question is whether the software industry will adopt them voluntarily or wait until a disaster forces the issue.
A profession that writes the rules
The core argument hasn’t changed. Software engineers write the rules that modern society runs on. That power comes with a responsibility that the profession has never formally accepted.
AI makes the stakes higher and the timeline shorter. The volume of code entering production is increasing exponentially. The percentage of that code written by humans is decreasing. And the mechanisms for verifying any of it (specifications, standards, professional accountability) remain as underdeveloped as they were a decade ago.
The oath is a start. But an oath without infrastructure is just words. The infrastructure is specifications. Explicit, verified, maintained descriptions of what software should do. That’s the foundation that professional accountability can be built on.
Without it, we’re just promising to do better while the tools get faster and the standards stay the same.