Determinism Isn't Going Anywhere

Ryan Haney

The token economy has a dirty secret

Everyone in the AI industry has an incentive for you to use more tokens. If your business model is selling access to language models, the ideal customer is one who routes every decision, every computation, every workflow through an LLM. More tokens, more revenue. The entire ecosystem — model providers, tool vendors, agent platforms — is aligned around a single message: put AI in everything.

It’s a compelling pitch. And for some problems, it’s the right answer. But for a surprising number of the tasks that businesses are rushing to AI-ify, it’s the expensive answer to a problem that was already solved.

Here’s the thing nobody selling tokens wants you to think about: deterministic computation is cheaper than inference. Always has been. Always will be.

The cost asymmetry

Running an LLM inference is not like running a function. A function that calculates sales tax takes a price and a jurisdiction and returns a number. It runs in microseconds. It costs effectively nothing. It gives the same answer every time.

The same calculation routed through an LLM takes milliseconds to seconds. It costs a fraction of a cent per call — which sounds trivial until you multiply it by the volume of a real business process. It might give the same answer most of the time, but not all of the time, because the model is probabilistic. And it incurs network latency, is subject to API rate limits, and depends on an external provider's uptime.
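The asymmetry is easy to put numbers on. Here's a back-of-envelope sketch — the per-call price and daily volume are illustrative assumptions, not quotes from any provider:

```python
def sales_tax(price_cents: int, rate: float) -> int:
    """Deterministic: same inputs, same output, microseconds, no network."""
    return round(price_cents * rate)

# Illustrative assumptions — adjust for your own volume and pricing.
CALLS_PER_DAY = 1_000_000     # volume of a real business process
LLM_COST_PER_CALL = 0.0005    # "a fraction of a cent" per inference call

llm_daily_cost = CALLS_PER_DAY * LLM_COST_PER_CALL
print(f"LLM route:      ${llm_daily_cost:,.2f}/day")  # $500.00/day
print("Function route: ~$0/day (amortized compute)")

# And the function is repeatable by construction:
assert sales_tax(1999, 0.0825) == sales_tax(1999, 0.0825) == 165
```

A fraction of a cent times a million calls a day is real money, every day, for an answer the function gives for free.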

For a calculation where the logic is known and the rules are defined, using an LLM is like hiring a consultant to read your calculator’s manual and then press the buttons for you. You’re paying for reasoning about a problem that doesn’t require reasoning. It requires execution.

This isn’t an edge case. The majority of business logic in production software is deterministic. Tax calculations, inventory checks, permission evaluations, data validation, routing rules, formatting, scheduling, accounting. The rules are known. The logic is defined. The correct output for a given input doesn’t require creativity or judgment. It requires a function.

Where AI actually earns its cost

AI earns its cost when the problem space is ambiguous, open-ended, or requires synthesis across unstructured information.

Classifying customer intent from a free-text message — that’s a genuinely probabilistic problem. Summarizing a legal document. Generating a first draft of marketing copy. Translating between natural languages. Answering a question that requires reasoning across multiple knowledge sources. These are tasks where the “right answer” isn’t deterministic, where human-like judgment and pattern recognition add real value.

The distinguishing factor is whether the process is specified. If you can write down a precise set of rules that defines the correct output for every valid input, you have a deterministic process. Running it through an LLM is overhead. If you can’t write down those rules — because the problem is genuinely ambiguous, because it requires interpretation, because the inputs are unstructured — then AI is the right tool.
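The test looks like this in practice. The rules below are made up for illustration, but the shape is the point — one task has a complete rulebook, the other doesn't:

```python
def shipping_eligible(country: str, weight_kg: float) -> bool:
    """Fully specified: every valid input has one defined, correct output.
    Rules here are illustrative, not from any real system."""
    return country in {"US", "CA", "MX"} and weight_kg <= 30.0

# Contrast with classifying free-text intent:
#   "I never got my package and I'm furious"
#   -> "complaint"? "refund request"? "shipping inquiry"?
# No finite rule set covers every phrasing, so this is where
# inference actually earns its cost.
```

If you can write the function, write the function. If you find yourself unable to enumerate the rules, that's the signal that the problem is genuinely probabilistic.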

The industry is blurring this distinction because the economic incentives favor blurring it. If you sell tokens, you want every process to look like it needs AI. But the engineering reality is clear: known processes should be deterministic. Unknown processes can benefit from inference. Mixing them up costs you money and reliability.

Determinism is a feature, not a limitation

Somewhere along the way, “deterministic” became a pejorative. It implies rigid, old-fashioned, unsophisticated. Real innovation is probabilistic. Real intelligence is stochastic. If your system gives the same output every time, it must not be very smart.

This is backwards.

Determinism means predictability. It means auditability. It means you can test a system once and know what it will do the next thousand times. It means a compliance officer can review the logic and sign off on it. It means a customer gets the same answer whether they call on Monday or Friday. It means bugs are reproducible.

In every industry where correctness matters — finance, healthcare, aerospace, infrastructure — determinism is the goal, not the limitation. You want the autopilot to do the same thing every time the same conditions are met. You want the drug interaction checker to be deterministic. You want the circuit breaker to trip at the same threshold, every time.

Software that handles money, health data, legal obligations, or safety-critical operations should be deterministic wherever the logic permits. Introducing probabilistic behavior into a well-defined process doesn’t make it smarter. It makes it less reliable.

The specification connection

This maps directly to the specification problem.

A process that has a clear, complete specification is a process that should be deterministic. The spec defines the expected behavior. The implementation executes it. Testing verifies the match. There’s no gap that AI needs to fill because the gap between intent and implementation has been closed by the specification.

A process that doesn’t have a clear specification — where the rules are incomplete, the edge cases are undefined, or the input is unstructured — is where AI adds genuine value. It fills the gap between ambiguous intent and a useful output. It handles the cases that can’t be specified in advance because the space of possible inputs is too large or too varied.

The question for any workflow is: do we have a spec for this? If yes, the implementation should be deterministic. It’s cheaper, faster, more reliable, and more auditable. If no, AI might be the right tool — but the goal should be to eventually extract a specification from the AI-handled cases, move the well-understood patterns into deterministic implementations, and keep AI focused on the genuinely ambiguous remainder.
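That workflow can be sketched as a spec-first dispatcher. Everything here — the handler names, the `call_llm` fallback — is a hypothetical placeholder, not a real API:

```python
from typing import Callable

# Specified processes: deterministic handlers keyed by task type.
SPEC_HANDLERS: dict[str, Callable[[dict], dict]] = {
    "sales_tax": lambda req: {"tax_cents": round(req["price_cents"] * req["rate"])},
    "discount":  lambda req: {"total_cents": req["price_cents"] - req["off_cents"]},
}

def call_llm(task: str, request: dict) -> dict:
    """Hypothetical fallback for tasks with no specification yet."""
    raise NotImplementedError(f"route '{task}' to an inference provider")

def dispatch(task: str, request: dict) -> dict:
    handler = SPEC_HANDLERS.get(task)
    if handler is not None:
        return handler(request)     # deterministic: cheap, testable, auditable
    return call_llm(task, request)  # ambiguous remainder: pay for inference
```

The healthy direction of travel is one-way: as AI-handled cases get understood, they migrate out of `call_llm` and into `SPEC_HANDLERS`, never the reverse.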

Over time, the spec library grows. The deterministic surface area expands. The AI handles less, not more, because the team is continually converting unknown processes into specified ones. That’s the healthy pattern. The opposite — routing more and more deterministic work through AI — is the expensive pattern.

The hybrid future

The future isn’t all-AI or no-AI. It’s a clear separation between what should be deterministic and what benefits from inference.

The well-specified business rules, the known calculations, the defined workflows — these run as deterministic code. Fast, cheap, reliable, auditable. The classification tasks, the natural language interpretation, the synthesis of unstructured information — these use AI. Slower, more expensive, but genuinely valuable because the problem can’t be reduced to a function.

The specification layer is what makes this separation possible. Without explicit specifications, you can’t tell which processes are deterministic and which aren’t. Everything looks like it might need AI because nobody has done the work of defining what the process actually is. With specifications, the boundary is clear. Specified processes get deterministic implementations. Unspecified processes get AI — temporarily, until they’re understood well enough to be specified.

Determinism isn’t a relic. It’s the endgame for every process that’s understood well enough to be defined. And AI is the tool you use for the processes that aren’t there yet.

The companies that understand this will spend their token budgets where they matter. The ones that don’t will pay inference prices for arithmetic.