Software and control
Automation, trust, and bounded AI in orbit
My software background made it hard to believe in vague space AI for more than five minutes. This piece asks the better question: what kind of automation would a serious mission actually trust, and where would it draw the line?
Because I have spent so long in software, I never had much patience for magic-AI scenes in this book.
The real question turned out to be smaller and better: what sort of automation would a serious mission actually trust when delay, certification, and survival are all in the room? This page is the shorter, cleaner version of the research that question produced.
Space operations reward bounded systems
On Earth we put up with broad, messy software all the time because the cost is usually annoyance, delay, or cleanup. In orbit the cost profile changes. Advice needs provenance. Procedures need traceability. Boundaries matter.
That is why a bounded assistant interests me more than a synthetic genius. A careful organization would want something narrow, auditable, and very good at one operating domain, not a machine that keeps improvising at unknown confidence.
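To make "narrow and auditable" concrete, here is a minimal sketch of what that design stance implies in code. Everything in it is hypothetical: the class names, the procedure library, and the document IDs are illustrative, not drawn from any real mission system. The point is the shape: the assistant can only quote approved procedures with their provenance, and refusal is a first-class answer rather than a failure mode.

```python
# Hypothetical sketch of a bounded assistant: it answers only from an
# auditable procedure library and refuses everything outside it.
from __future__ import annotations
from dataclasses import dataclass

@dataclass(frozen=True)
class Procedure:
    doc_id: str       # provenance: which controlled document this came from
    revision: str     # which approved revision
    steps: tuple      # the approved step text, verbatim

class BoundedAssistant:
    def __init__(self, library: dict[str, Procedure]):
        self._library = library  # the only knowledge the assistant has

    def advise(self, topic: str):
        """Return (steps, provenance), or (None, refusal reason).

        The assistant never paraphrases or improvises: it either quotes
        an approved procedure with its document ID, or says it cannot help.
        """
        proc = self._library.get(topic)
        if proc is None:
            # Refusal is an explicit, loggable outcome, not an exception.
            return None, f"No approved procedure for '{topic}'; escalate to ground."
        return proc.steps, f"{proc.doc_id} rev {proc.revision}"

# Illustrative library entry; the topic and steps are invented.
library = {
    "cabin-leak": Procedure(
        doc_id="OPS-123", revision="C",
        steps=("Isolate affected module", "Verify hatch seals", "Report rate to ground"),
    ),
}
assistant = BoundedAssistant(library)

steps, provenance = assistant.advise("cabin-leak")     # quoted, with source
refusal, reason = assistant.advise("reactor-restart")  # outside the boundary
```

The interesting property is what the assistant cannot do: there is no path through `advise` that produces advice without a document ID attached, which is the auditability the paragraph above is asking for.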
Trust is a design variable
The fiction got better as soon as I treated trust like a design variable.
How often does the system quote procedure? When does it refuse to guess? What can it detect locally? How much should a sleep-deprived crew member lean on it? Those are better questions than generic AI spectacle because they change actual behavior.
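The questions above can be read as parameters rather than philosophy. A small sketch, with invented names and thresholds, of what "trust as a design variable" might look like when the refusal behavior is an explicit, reviewable policy instead of an emergent property:

```python
# Hypothetical sketch: the trust questions become explicit policy knobs.
# All names and threshold values are illustrative assumptions.
from __future__ import annotations
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustPolicy:
    min_confidence: float       # below this, refuse rather than guess
    require_citation: bool      # every answer must name a source procedure
    local_detection_only: bool  # only report faults its own sensors verify

def respond(policy: TrustPolicy, confidence: float, citation: str | None) -> str:
    """Decide whether an answer may be shown to the crew at all."""
    if confidence < policy.min_confidence:
        return "REFUSE: confidence below certified floor"
    if policy.require_citation and citation is None:
        return "REFUSE: no traceable source"
    return f"ANSWER (cites {citation})"

# A conservative policy of the kind a certifying organization might prefer.
conservative = TrustPolicy(min_confidence=0.95, require_citation=True,
                           local_detection_only=True)

print(respond(conservative, 0.80, "OPS-123"))  # refused: too uncertain
print(respond(conservative, 0.99, None))       # refused: no provenance
print(respond(conservative, 0.99, "OPS-123"))  # allowed, with citation
```

Once the behavior lives in a policy object like this, "how much should a tired crew member lean on it" stops being rhetorical: the answer is whatever the reviewed, signed-off numbers say.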
Why this material overlaps with software craft
This topic lines up with my own background almost too neatly. Years in software make it hard to believe in frictionless abstractions for long. Systems leak. Interfaces are real. Safety matters most exactly when the smart layer gets confused.
That skepticism is useful. It keeps both the fiction and the journal closer to the sort of engineering culture that might actually sign off on the thing.
Why it belongs on the site
This topic is probably the easiest bridge between the novel and the rest of the site. Someone can land here with no interest in the book at all and still get something out of it.
What I wanted to offer was a compact argument: trustworthy automation in hard environments usually looks smaller, narrower, and less glamorous than people expect.
Source trail
These are the public sources that most directly shaped the piece. I keep them down here so the essay can read like prose first and a bibliography second.
- DLR | CIMON astronaut assistance system
A concrete example of a narrow assistant in a real mission setting.
- Airbus | Crew assistant CIMON successfully completes first tasks in space
Useful early reporting for separating what CIMON actually did from the hype around it.
- Airbus | CIMON-2 makes its successful debut on the ISS
A good follow-up on what changed once the concept had to keep working.
- NASA Standards | NASA-STD-8739.8 software assurance and software safety standard
This is the assurance culture I had in mind when thinking about what a real mission would approve.
- NASA SMA | Software assurance and software safety
A practical overview of the discipline underneath software trust.
- NASA NTRS | Recommendations to Advance Space Trusted Autonomy
One of the most relevant sources I found on how space organizations talk about trusted autonomy.
- ESA | Software product assurance for autonomous on-board software
Helpful ESA-side assurance material that keeps the page grounded.
- NIST | AI Risk Management Framework
Useful broader trust framing once the argument widened beyond spaceflight.