# Building a Spacecraft Computing Simulator with Claude Code
I wanted to understand how spacecraft computers work. The real-time scheduling, the radiation hardening, the delay-tolerant networking that keeps Mars rovers talking to Earth. With Artemis II on the horizon (the first crewed lunar mission in over 50 years), it felt like the right time to dig in. So I built a simulator. From scratch. With Claude Code as my pair programmer.
No physical hardware. Everything runs in QEMU and Docker on a laptop. Code on GitHub.
```mermaid
flowchart LR
    QEMU["QEMU (Cortex-M3)"] -->|UART socket| Bridge[uart_bridge.py]
    Bridge -->|bpsendfile| SC[Spacecraft Node]
    SC -->|LTP + 5s delay| GS[Ground Station]
    subgraph Docker
        SC
        GS
    end
```
## Why This Project
I wanted to learn something genuinely hard. Something where the concepts are unfamiliar and the tooling is unforgiving. C compilers that target ARM. Linker scripts. Memory-mapped I/O. Interrupt vector tables.
Claude Code made this possible. Not because it wrote all the code, but because it explained what a linker script does while we were writing one, so the explanation was grounded in the actual problem I was solving. Same for volatile, FreeRTOS priority preemption, and LTP retransmission timers.
## The Roadmap
I broke the project into four phases, each building on the last:
| Phase | What | Key Concepts |
|---|---|---|
| Bare Metal | ARM cross-compilation, UART output, interrupt handlers | Memory-mapped I/O, SysTick timer, startup assembly |
| FreeRTOS | Tasks, queues, watchdogs, priority inversion | Deterministic scheduling, mutex protocols, preemption |
| DTN | Two-node network, degraded links, CFDP, contact-graph routing | Bundle Protocol, store-and-forward, LTP reliability |
| Integration | Full telemetry pipeline with Mars-distance delays | UART bridge, tc netem, synchronized ION OWLT |
Each phase has automated tests. Every milestone is a PR with CI passing.
## Phase 1: Bare Metal
The first challenge was getting anything to run. ARM cross-compilation targeting a Cortex-M3 (MPS2-AN385) in QEMU. No OS, no standard library, no printf.
Claude helped me understand the startup sequence: the vector table, the reset handler, copying .data from flash to RAM, zeroing .bss. Things that happen before main() even runs.
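That startup sequence fits in a surprisingly small reset handler. The sketch below is mine, not the project's exact code; the symbol names (`_sidata`, `_sdata`, `_edata`, `_sbss`, `_ebss`) follow common linker-script conventions and may differ from the actual script:

```c
#include <stdint.h>

/* Addresses provided by the linker script; names are the usual
   convention and assumed here, not taken from the project. */
extern uint32_t _sidata;          /* start of .data's load image in flash */
extern uint32_t _sdata, _edata;   /* .data bounds in RAM */
extern uint32_t _sbss, _ebss;     /* .bss bounds in RAM */
extern int main(void);

void Reset_Handler(void) {
    /* Copy initialized data from flash to RAM. */
    uint32_t *src = &_sidata;
    for (uint32_t *dst = &_sdata; dst < &_edata; )
        *dst++ = *src++;

    /* Zero the .bss section. */
    for (uint32_t *dst = &_sbss; dst < &_ebss; )
        *dst++ = 0;

    main();
    for (;;) ;  /* on bare metal, main() must never return */
}
```

The vector table's second entry points at `Reset_Handler`, so all of this runs before the first line of `main()`.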
The first win was a single character appearing on a UART console:
```c
#define UART0_DR (*(volatile uint32_t *)0x40004000)

void uart_putc(char c) {
    UART0_DR = c;
}
```
A macro and a one-line function. No libraries. Just writing a byte to a memory address that happens to be wired to a serial port. It felt like talking directly to the machine.
From there: SysTick timer interrupts, interrupt handlers, structured output. The SysTick demo configures a hardware timer to fire at 2 Hz and counts 10 ticks:
```
SysTick interrupt demo
======================
Ticking at 2 Hz for 5 seconds (10 ticks)...
tick 1
tick 2
tick 3
tick 4
tick 5
tick 6
tick 7
tick 8
tick 9
tick 10
Done — 5 seconds counted by interrupt.
```
No dropped ticks. No extra ticks. The CPU sleeps between interrupts and the hardware wakes it up at exactly the right moment. Deterministic execution from bare metal.
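The SysTick setup behind that demo is only a few register writes. This is a sketch rather than the project's exact code; the register addresses are fixed by the Cortex-M architecture, but the 25 MHz core clock is my assumption for QEMU's MPS2-AN385 model:

```c
#include <stdint.h>

/* SysTick registers: addresses are fixed by the Cortex-M architecture */
#define SYST_CSR (*(volatile uint32_t *)0xE000E010)  /* control/status */
#define SYST_RVR (*(volatile uint32_t *)0xE000E014)  /* reload value   */
#define SYST_CVR (*(volatile uint32_t *)0xE000E018)  /* current value  */

#define CPU_HZ 25000000u  /* assumed core clock; check your board/model */

volatile uint32_t ticks;

void SysTick_Handler(void) {
    ticks++;  /* the vector table routes the SysTick exception here */
}

void systick_start_2hz(void) {
    SYST_RVR = CPU_HZ / 2u - 1u;  /* count down half a second per tick */
    SYST_CVR = 0;                 /* any write clears the current count */
    SYST_CSR = 0x7;               /* enable | tick interrupt | CPU clock */
}
```

The hardware counts down from the reload value and raises the interrupt at zero; the CPU can `wfi` in between and wake only when the tick fires.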
## Phase 2: FreeRTOS
With bare metal working, I added FreeRTOS, a real-time operating system that runs on everything from medical devices to satellites.
The exercises built progressively:
- Two tasks at different rates. Basic multitasking.
- Queue-based communication. Sharing data between tasks safely.
- Watchdog timer. Detecting and recovering from hung tasks.
- Sensor pipeline. Four sensors at different rates feeding a processing chain.
- Priority inversion. Triggering and resolving the classic RTOS bug.
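The queue exercise has roughly this shape, using the standard FreeRTOS API. This is a sketch under my own naming, not the project's code; `uart_puts()` stands in for whatever UART output routine the firmware actually uses:

```c
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

extern void uart_puts(const char *s);  /* hypothetical UART helper */

static QueueHandle_t readings;

static void sensor_task(void *arg) {
    int value = 0;
    (void)arg;
    for (;;) {
        value++;
        xQueueSend(readings, &value, portMAX_DELAY);  /* blocks if full */
        vTaskDelay(pdMS_TO_TICKS(100));               /* 10 Hz */
    }
}

static void telem_task(void *arg) {
    int value;
    (void)arg;
    for (;;)
        if (xQueueReceive(readings, &value, portMAX_DELAY) == pdTRUE)
            uart_puts("reading received\n");
}

void app_main(void) {
    readings = xQueueCreate(8, sizeof(int));
    xTaskCreate(sensor_task, "sensor", 256, NULL, 4, NULL);
    xTaskCreate(telem_task,  "telem",  256, NULL, 2, NULL);
    vTaskStartScheduler();  /* never returns */
}
```

The queue is the only shared state; neither task touches the other's memory, which is exactly the safety property the exercise is about.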
Here’s the sensor pipeline booting up. Notice the 10 Hz gyro dominating the stream, with the 1 Hz temperature reading squeezed in at tick 1001:
```
FreeRTOS Sensor Pipeline Demo
=============================
[GYRO] Sensor online (10 Hz, priority 4)
[PROC] Processor online (priority 3)
[TELEM] Telemetry online (priority 2)
[TEMP] Sensor online (1 Hz, priority 1)
[TELEM] #000 GYRO: 300 at tick 100
[TELEM] #001 GYRO: 300 at tick 200
...
[TELEM] #009 GYRO: 300 at tick 1000
[TELEM] #010 TEMP: 1991 at tick 1001
[TELEM] #011 GYRO: 300 at tick 1100
```
That’s priority-based preemption in action. The gyro runs at priority 4, the temp sensor at priority 1. The temp reading only gets through when the gyro isn’t occupying the CPU.
The priority inversion exercise was the most instructive. I created three tasks where a low-priority task holds a mutex that a high-priority task needs, and a medium-priority task starves both of them. The fix: priority inheritance, where FreeRTOS temporarily boosts the low-priority task so it can release the mutex faster.
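In FreeRTOS the fix is nearly free: mutexes created with `xSemaphoreCreateMutex()` implement priority inheritance automatically. A minimal sketch of the low-priority side, with `slow_bus_access()` as a hypothetical long critical section:

```c
#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

static SemaphoreHandle_t bus_mutex;  /* from xSemaphoreCreateMutex() */

extern void slow_bus_access(void);   /* hypothetical long critical section */

static void low_task(void *arg) {    /* created at priority 1 */
    (void)arg;
    for (;;) {
        xSemaphoreTake(bus_mutex, portMAX_DELAY);
        /* If the high-priority task blocks on this mutex now, the kernel
           temporarily raises low_task's priority to match it, so the
           medium-priority task can no longer preempt and starve us. */
        slow_bus_access();
        xSemaphoreGive(bus_mutex);   /* priority drops back to 1 here */
        vTaskDelay(pdMS_TO_TICKS(10));
    }
}
```

Swap the mutex for a plain binary semaphore (which has no inheritance) and the starvation comes back, which is a nice way to see the mechanism in isolation.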
This is the bug that nearly killed the Mars Pathfinder mission in 1997. Building it myself made the textbook explanation click.
## Phase 3: Delay-Tolerant Networking
TCP/IP doesn’t work in space. Round-trip times to Mars range from 6 to 44 minutes. Links drop for hours when planets occlude the signal. TCP’s assumption of a continuous, low-latency connection falls apart completely.
NASA JPL’s answer is DTN, or Delay-Tolerant Networking. The Bundle Protocol stores data locally and forwards it hop by hop when links become available. It’s designed for exactly the conditions that break the internet.
I built NASA’s ION implementation in Docker and ran progressively harder tests:
- Basic connectivity. Two nodes exchanging bundles.
- Degraded links. `tc netem` adding 500 ms latency, 25% packet loss, complete outages.
- CFDP file transfer. Reliable file delivery with integrity checks.
- Contact-graph routing. Bundles queued during link gaps, delivered when windows open.
The intermittent link test tells the whole DTN story in a few lines of output:
```
Test: intermittent link (send during outage, deliver on recovery)...
qdisc: qdisc netem root refcnt 2 limit 1000 loss 100%
confirmed: bundle queued (link is down)
restoring link...
PASS: bundle held during outage
PASS: bundle delivered after link recovery
```
That’s store-and-forward in action. The bundle is sent while the link is completely dead, 100% packet loss. It sits in the local node’s queue. The moment we clear the netem rule and restore connectivity, LTP retransmits and the bundle arrives at the other end. TCP would have given up long ago.
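The outage itself is just two `tc` commands on the sending container, roughly like this (assuming the interface is `eth0`; the project's interface name may differ):

```shell
# Kill the link: drop every packet leaving eth0
tc qdisc add dev eth0 root netem loss 100%

# ...send a bundle here; it queues locally in ION...

# Restore the link: remove the netem rule entirely
tc qdisc del dev eth0 root
```

Because netem operates at the kernel's queueing layer, neither ION nor the application sees anything but a dead link and then a live one.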
## Phase 4: Integration
The final phase connects everything. FreeRTOS firmware generates telemetry in QEMU. A Python bridge script reads the UART output and injects it as DTN bundles. The bundles traverse a delayed network to reach a ground station.
The firmware boots and immediately starts streaming:
```
# Spacecraft Telemetry Firmware v1.0
# ===================================
# GYRO sensor online (10 Hz, priority 4)
# Processor online (priority 3)
# Telemetry online (priority 2)
# TEMP sensor online (1 Hz, priority 1)
# BATT sensor online (0.5 Hz, priority 1)
# SUN sensor online (2 Hz, priority 1)
$TELEM,0000,GYRO,300,100
$TELEM,0001,GYRO,300,200
...
$TELEM,0005,SUN,801,501
...
$TELEM,0011,TEMP,1991,1001
$TELEM,0012,SUN,801,1001
```
Four sensors at four different rates. You can see the 10 Hz gyro producing most of the readings, with the 2 Hz sun sensor, 1 Hz temperature, and 0.5 Hz battery interleaved. The bridge batches these into DTN bundles every 2 seconds.
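Each record is trivially machine-parseable on the ground side. A sketch of a `$TELEM` line parser in C, with field meanings inferred from the output above (the project's actual parser lives in the Python bridge):

```c
#include <stdio.h>

/* One $TELEM,<seq>,<sensor>,<value>,<tick> record; field meanings
   are inferred from the firmware output, not from the project docs. */
typedef struct {
    int  seq;
    char sensor[8];
    int  value;
    int  tick;
} telem_record;

/* Returns 1 on a well-formed telemetry line, 0 otherwise
   (comment lines starting with '#' simply fail the match). */
int parse_telem(const char *line, telem_record *out) {
    return sscanf(line, "$TELEM,%d,%7[^,],%d,%d",
                  &out->seq, out->sensor, &out->value, &out->tick) == 4;
}
```

Keeping the wire format this dumb is deliberate: a fixed five-field CSV line survives truncation, reordering, and partial delivery far better than anything stateful.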
The Mars-delay test proves the whole pipeline works under realistic conditions:
```
Running Mars-delay end-to-end tests...
PASS: tc netem delay 5s applied on spacecraft
PASS: bridge exited cleanly
PASS: bridge read telemetry lines
PASS: bridge sent >= 1 bundle (got 12)
PASS: ground station received files (got 8)
PASS: received telemetry lines (got 224)
PASS: telemetry CSV format valid (5 fields)
INFO: delay observable (8 delivered at bridge exit < 12 sent)
7 passed, 0 failed
```
12 bundles sent, but only 8 delivered by the time the bridge exited. The rest were still in flight through the 5-second delay. The latency is enforced by the kernel at the packet level, not faked with sleeps in application code.
The Mars-delay simulation uses two synchronized mechanisms:
- tc netem adds 5 seconds of actual network latency to both containers
- ION range tables set the same 5-second one-way light time so LTP retransmission timers are correct
Both must agree. If the actual delay is 5 seconds but ION thinks it’s 1 second, LTP retransmits aggressively and floods the link. Getting this right taught me more about protocol design than any textbook chapter on reliability.
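In ION, the OWLT lives in the contact plan, configured through ionadmin. A sketch of the relevant lines, assuming node numbers 1 and 2 and a 24-hour window (the project's actual `.rc` files may use different numbers and spans):

```
# ionadmin range commands: declare a 5-second one-way light time
# between nodes 1 and 2, matching the 5 s tc netem delay
# (syntax: a range <fromTime> <untilTime> <fromNode> <toNode> <owltSeconds>)
a range +0 +86400 1 2 5
a range +0 +86400 2 1 5
```

LTP derives its retransmission timers from these ranges, which is why they have to track whatever delay netem is actually imposing.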
## Working with Claude Code
This project would have taken me months without Claude. Not because the code is complex (most files are under 300 lines) but because the learning curve for each domain is steep.
**What worked well:**
- Cross-domain knowledge. The project spans C, Python, ARM assembly, Docker, ION DTN, FreeRTOS, QEMU, tc netem, and LTP. No single person knows all of these well. Claude could context-switch between them.
- Test-driven milestones. Every phase has automated tests. Claude helped write assertions that verify real behavior, not just “it compiles.”
- PR-based workflow. Each milestone is a separate branch and PR with CI. Claude Code’s worktree support kept this clean. Isolated branches, no accidental cross-contamination.
**What required care:**
- Embedded C is unforgiving. Off-by-one errors in linker scripts don’t give you a stack trace. They give you a hard fault or silent corruption. Claude’s suggestions needed careful verification against hardware documentation.
- ION DTN documentation is sparse. Claude’s training data doesn’t include deep ION internals. I relied on the ION source code and configuration examples more than on Claude’s explanations of ION-specific behavior.
## The Numbers
| Metric | Value |
|---|---|
| Total tests | 50+ assertions across 7 test suites |
| Languages | C, Python, Bash |
| Hardware | None (QEMU + Docker) |
| Lines of C | ~1,200 (firmware + bare metal) |
| Lines of Python | ~1,500 (tests + bridge + DTN scripts) |
| ION DTN config | ~350 lines across 6 .rc files |
## Try It
Everything is open source and runs on any Linux machine with QEMU and Docker:
```shell
sudo apt install gcc-arm-none-eabi qemu-system-arm build-essential
git clone https://github.com/granda/spacecraft-computing-sim
cd spacecraft-computing-sim

make -C bare-metal run                # bare metal UART output
make -C freertos run                  # FreeRTOS sensor pipeline
make -C dtn test                      # two-node DTN network
make -C integration test-bridge       # full telemetry pipeline
make -C integration test-mars-delay   # Mars-distance delays
```
No physical hardware. No cloud services. Just a laptop and curiosity about how spacecraft computers work.