Low-Background Code

Kevin Barrett on 2026-02-05

We carve our mistakes into the world around us. The lovelorn teen etches a heart into heartwood, the Earth warms, and basements flood because although we buried all the rivers the memory of water is long. Steel remembers the atom bomb. For the better part of a century, from the moment we first detonated test nukes until roughly the present day, steel contained within it a sample of the background radiation we had unleashed, every girder a little cenotaph to the human capacity for violence. And so when we built machines sensitive to radiation we had to scavenge steel from shipwrecks, prelapsarian steel, not yet poisoned.

The same thing is happening now, with code.

Large language models are irradiating the code we produce. The instant you allow an LLM access to a codebase you have introduced an anomaly that must be accounted for, code that is indistinguishable from noise by design, code that was not produced by a human thinking procedurally but instead by unspooling tokens that, at best, a human carefully reviewed. And I would like to submit that code untouched by LLMs—low-background code—is thus inherently valuable. It should be pursued and protected, lest we resort to salvage.

In the LLabyrinth

Let’s set LLMs aside for a moment and imagine an experience many engineers are familiar with.

You are hired by a company that is struggling: their major systems are a little past their prime and, even worse, the people who built those systems have left. A skeleton crew has been cargo culting their way around issues in the systems, but no one understands the internals. You, an expert in whatever stack these systems are written in, are brought in to modernize.

You get repo access and are confronted with hundreds of thousands of lines of code. It mostly follows established patterns, but not always, and the patterns are slightly out of date. The dependencies have bitrot. It’s clear from the source that many assumptions are made, but those assumptions are never expressly written down, just hinted at in comments that effectively reiterate the code. And there is just so much of it.

You have a few options for how to proceed. You can test your way through it, writing lots of code around the existing code to verify behavior such that when you start to modify the thing itself you can be reasonably sure you won’t break it. You can take copious notes, spelunking through callsites and documenting everything until you’ve mapped the thing from the inside out. You can treat the entire system as a black box and replicate its inputs and outputs with a modern system. Regardless of your approach, the nature of your work is not to do what most software engineers are trained to do and enjoy doing, which is to solve problems from first principles with code. You are instead solving a meta-problem, verifying other people’s solutions from unknown priors, perhaps not even with code.

For most engineers, this fucking sucks to do.

There are engineers whose entire job is to parachute into companies like this over and over again. It’s a highly specialized role. They have the kind of brains that enjoy doing it and the depth of experience to reckon with it daily.

Vibecoding transmutes your day job from software engineering to black box understanding. Congratulations, you now get to ask yourself what did they possibly intend here? every day!

Every time we write code we venture into a labyrinth. When you write code yourself you take a bit of chalk and get to mark the walls. You carry with you the knowledge of every false start and blind alley. When an LLM writes your code you are dropped into the center of the labyrinth. It is dark, the walls are high, and every path looks the same. You must traverse it for salary. Good luck.

Review is not a backstop

Perhaps I’m not being fair to vibecoders. They hold the LLM’s hand and supervise and do context engineering. And in the end they submit PRs for code review, right? Their colleagues are just as culpable for issues as they are.

Well, first, no one likes reviewing huge PRs, generated or otherwise. But second, we have to acknowledge that an LLM’s output is, by definition, the most likely series of tokens given a prompt. This means that bugs—which will exist absent formal verification, which no one really does—are definitionally extremely subtle.

Another roleplay! You are for some reason a copyeditor for a company that produces lorem ipsum. The company produces the finest lorem ipsum on the planet, artisanal lorem ipsum. Bear with me. To produce the ipsum the company first runs a standard static generator. Then it employs copyeditors who personally review every word before sending it out to the ipsum-starved masses. But there is a bug in the generator: every thousand or so words has a typo. It’s your job to find and correct the typo.

This is a nightmare job. You would lose your mind. But it’s also what we ask our reviewers to do every time we submit a massive LLM-generated PR. Here are 3,000 lines of code. They were produced by a machine that is fallible, but they look correct—the machine’s entire deal is that it produces tokens that look correct. Often looking correct means it is correct. But not always.

If I am handed a massive PR that I know a coworker wrote by hand, I at least know implicitly that the code is a mirror of how they think, and part of having coworkers is knowing how they think. If you’re any good at your work you already do this. It’s why you jump to certain files first, or look for certain patterns, or read the tea leaves of branch names. It’s why we describe PRs and leave comments and do all the human bits of software engineering. When code is the byproduct of a person thinking through a problem it is simply easier to review.

Efficiency & amphetamines

Perhaps this is where I say that I have used coding agents extensively and understand their appeal. I use them far less now, but when I used them I felt incredibly productive. I produced so much code! I let the LLM build its little lopsided cathedrals and then vaguely apologized for the future refactors in PRs but I was moving so fast.

What I hadn’t realized is that my productivity gains didn’t net out. I was just shifting where time was spent. I was spending less time writing code, more time reviewing code, and way more time reasoning about LLM-authored code from a few weeks ago as I dug back into it to add a feature or fix a bug.

I think it is very difficult to recognize this shift, especially among a certain kind of very visible engineer. If you pay attention to who is most enthusiastic about LLMs, they tend to be engineers who work alone and are accountable to no one. This means they get to bask in the rush of feeling productive without having to reckon with the downstream effects of what was produced.

An ancient depiction of the sowing/reaping cycle.

I never did this myself, but it’s perhaps a little like abusing amphetamines to pull an all-nighter before an exam. My understanding from watching friends and college roommates do this is that you feel immensely capable and focused, yes, but it’s not something you can do every day. We are starting to learn that it actually damages recall in the long term.

And yet many of us seem to be doing this daily with LLMs. We call ourselves engineers as if we build public infrastructure. If you learned that a bridge up the road was designed exclusively by engineers pulling all-nighters on amphetamines, would you still drive across it?

Code as substrate

There is a longer-term view of LLM outputs that I think is worth touching on in this weird little paean to code. A fairly natural conclusion when you are vibecoding all the time—something I myself thought a few times when I was really down bad—is more or less: code is now incidental to product, like machine language. It’s a substrate.

I am not the only one to think this. Those who do often match the profile I sketched above: solo coders who are not accountable to anyone other than themselves, working greenfield atop a pile of disposable software. It’s very easy to convince yourself of a post-code discipline when you spend all day nudging chatbots around.

Of course it’s impossible for such a thing to fully come to pass. LLM outputs are nondeterministic. An LLM could tell you it fixed something and then simply do something else in a machine equivalent of lying (this has happened to everyone I know who has used LLMs extensively, myself included). We could fix this by introducing formal verification, but at that point we become programmers of verification systems and all we have managed to accomplish is to make computing slower, more expensive, and more energy-hungry.

In reality code is already a substrate. It is a method by which we represent, at some acceptable level of loss, human conceptualization. As noted above, reading another’s code can be a great proxy for learning how they think. Reading your own code is often a great way to understand how you yourself think, too.

We call it code because it is a formal grammar simpler than human language. By this property it can be made interpretable, executable. Human language is beautiful by its nature in that it allows for entendre, dissembling, metaphor, gesture. These are all qualities we have pruned from computer code on purpose because they muddy procedural meaning. Why would a person choose to forgo that tradeoff and instead work in human language, trading sentences with a token generator? It is a bad trade.

Plus—subjectively—talking to an LLM is like talking to the most boring person at a party. Those who seem to enjoy it do so because it is effectively talking to oneself.

GPT stands for Greek Pantheon Type

Towards non-proliferation

We rarely need salvaged steel these days. Somehow reason prevailed, and although we tragically still maintain nuclear warheads we almost never detonate them. Atmospheric background radiation has fallen such that modern steel can be used in nearly everything that calls for it.

I’m not sure if the same will happen for LLM usage. Some days it feels impossible to me for anyone to conclude anything other than what I have written above. Some days I simply don’t feel like writing code and have the LLM fix a little ticket and I think: perhaps this small usage is okay. Some days I read about these tools being used for the most evil shit imaginable and I dream about global EMPs.

In the meantime, all I can control is myself. I will continue to distrust LLMs. I will cherish low-background code and produce what I am able to. I will occasionally slip or crack or just get tired and ask an LLM to do something, and I will probably regret it. Regret is human, after all.