Software and control

Automation, trust, and bounded AI in orbit

My software background made it hard to believe in vague space AI for more than five minutes. This piece asks the better question: what kind of automation would a serious mission actually trust, and where would it draw the line?

Because I have spent so long in software, I never had much patience for magic-AI scenes in this book.

The real question turned out to be smaller and better: what sort of automation would a serious mission actually trust when delay, certification, and survival are all in the room? This page is the shorter, cleaner version of the research that grew out of that question.

Space operations reward bounded systems

On Earth we put up with broad, messy software all the time because the cost is usually annoyance, delay, or cleanup. In orbit the cost profile changes: a confidently wrong suggestion can spend consumables, hardware, or lives. Advice needs provenance. Procedures need traceability. Boundaries matter.

That is why a bounded assistant interests me more than a synthetic genius. A careful organization would want something narrow, auditable, and very good at one operating domain, not a machine that keeps improvising at unknown confidence.

Trust is a design variable

The fiction got better as soon as I treated trust like a design variable.

How often does the system quote procedure? When does it refuse to guess? What can it detect locally? How much should a sleep-deprived crew member lean on it? Those are better questions than generic AI spectacle because they change actual behavior.
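Those questions translate directly into code shape. Here is a minimal sketch of what "refuses to guess" and "quotes procedure" might look like in practice; the procedure table, IDs, and wording are all invented for illustration, not drawn from any real mission system:

```python
# A bounded assistant as a lookup, not an oracle: it answers only from
# a vetted procedure table, always cites the procedure ID, and refuses
# anything outside that table. All entries here are hypothetical.

PROCEDURES = {
    "cabin-pressure-drop": ("EVA-7.2", "Isolate the affected module, then verify seal integrity."),
    "thermal-loop-fault": ("THM-3.1", "Switch to the backup coolant loop and log pump telemetry."),
}

def advise(situation: str) -> str:
    entry = PROCEDURES.get(situation)
    if entry is None:
        # Refusal is a feature: no improvising at unknown confidence.
        return "No certified procedure on file. Escalate to ground control."
    proc_id, step = entry
    # Every answer carries provenance a tired crew member can check.
    return f"[{proc_id}] {step}"
```

The interesting design choice is the refusal branch: the system's value comes as much from what it declines to answer as from what it knows.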

Why this material overlaps with software craft

This topic lines up with my own background almost too neatly. Years in software make it hard to believe in frictionless abstractions for long. Systems leak. Interfaces are real. Safety starts to matter most when the smart layer gets confused.

That skepticism is useful. It keeps both the fiction and the journal closer to the sort of engineering culture that might actually sign off on the thing.

Why it belongs on the site

This topic is probably the easiest bridge between the novel and the rest of the site. Someone can land here with no interest in the book at all and still get something out of it.

What I wanted to offer was a compact argument: trustworthy automation in hard environments usually looks smaller, narrower, and less glamorous than people expect.

Source trail

These are the public sources that most directly shaped the piece. I keep them down here so the essay reads as prose first and a bibliography second.

Kai Wrenbury

Novel pages, journal entries, and research notes from the making of the book. Nothing here claims agency ties or official approval.

A work of fiction. Copyright 2026 Kai Wrenbury.