We Didn't Lose the Moon. We Chose to Forget It.

April 5, 2026
technology ai future

The most disturbing thing about Artemis II isn’t that it took 50 years to go back to the Moon — it’s that we had the technology the whole time. We didn’t lose the Saturn V to some catastrophic failure. We didn’t have it stolen. We retired it, dismantled the tooling, let the engineers age out, and then spent decades scratching our heads about how to build something comparable. That’s not a tragedy of circumstance. That’s a choice.

And I think we don’t sit with that long enough.

Capability Is a Choice You Keep Making

In software, there’s a well-known phenomenon where a system grows so complex that only the original authors understand it. When they leave, the knowledge doesn’t transfer cleanly — it evaporates. You’re left with a codebase that runs fine until it doesn’t, and then nobody knows why. We call this institutional knowledge decay, and we treat it like a natural disaster, something that just happens.

But it isn’t natural. It’s the accumulated result of thousands of small decisions: not to document, not to cross-train, not to invest in knowledge transfer because the quarterly roadmap didn’t have room for it. The loss feels sudden. The cause is slow and deliberate.

The Apollo program is that, scaled to a civilization.

We didn’t forget how to go to the Moon in one dramatic moment. We defunded the program, reassigned the engineers, stopped training replacements, and repurposed the infrastructure. Then, fifty years later, we looked up and asked why it was so hard to do again. An Artemis SLS launch costs something in the neighborhood of $4 billion. A Saturn V launch, in inflation-adjusted dollars, cost about $1.5 billion. We built a more expensive, less capable rocket to reach a destination we’d already visited — because we let the knowing lapse.

Civilizations Can Lobotomize Themselves

I’ve been thinking about this as a pattern rather than an isolated case. The Roman Empire had concrete that has outlasted nearly everything we build today — the Pantheon has been standing for almost two thousand years. The specific formula, the particular volcanic ash they used, wasn’t rediscovered until the 21st century. They didn’t lose it in a fire. They lost it the way you lose anything: gradually, through disuse, through the slow erosion of the context that makes knowledge actionable.

There’s a framing I find useful here: capability isn’t a thing you have. It’s a thing you practice. The moment you stop flying rockets, you stop being a rocket-flying civilization in any meaningful sense. The blueprints still exist. The physics didn’t change. But the living, embodied knowledge — the instincts, the workarounds, the things that only show up when you’re actually doing the work — that decays fast.

In my experience building software companies, this shows up at much smaller scales constantly. A team that stops shipping features for six months doesn’t pick back up at 100% when the greenlight comes. Something is gone. The rhythm, the judgment calls, the shared context. You have to rebuild it. And rebuilding always costs more than maintaining would have.

What Else Have We Quietly Decided to Stop Being Capable Of?

That’s the question Artemis should make us ask. Not “how do we get back to the Moon?” but “what does it mean that we stopped?”

Because the Moon isn’t unique. We made a collective decision, somewhere in the 1970s, that lunar travel wasn’t worth maintaining as an active capability. We told ourselves we’d come back to it. We didn’t. Now we’re paying an enormous premium — in money, in time, in national will — to reconstruct something we already knew how to do.

I don’t think this is purely a government failure or a funding failure, though it’s partly both. I think it reflects something deeper about how we collectively reason about capability. We treat knowledge as a static asset rather than a dynamic practice. We assume that because something was done, it remains doable. We underestimate how much of what we can do lives in the hands and heads of people who are actively doing it.

This matters a lot to me right now given everything happening with AI. We’re at a moment where a huge amount of human cognitive work is being handed off to systems that are genuinely impressive but genuinely opaque. I’m not making a luddite argument — I use these tools, I build with them, I think they’re remarkable. But I keep returning to the same unease: if we stop practicing certain kinds of thinking because AI does it faster, what is the actual cost of that? Not theoretically. Practically. What capability are we quietly letting decay?

We didn’t lose the Moon. We chose to stop going, and then we forgot that the choice had been made.

That’s the part that stays with me.
