Thompson’s paper shatters the illusion of trust in compiled code.

**Threat model change**: A compromised compiler (e.g., one carrying a self-replicating "Trojan horse" that re-inserts itself whenever the compiler is rebuilt) is undetectable via source code review alone. The attacker can make the compiler inject malicious logic into *any* program it compiles, including security tools—rendering source-level verification useless. Even "clean" source code is insecure if the build chain is untrusted.

**Defense strategy**:

1. **Bootstrap from a minimal, verified compiler**: For critical systems, start from a compiler small enough to be written and hand-audited in assembly, with no room for self-replicating logic. Manually inspect and recompile it on an air-gapped machine.
2. **Multi-arch cross-compilation**: Build the same source with a second, independently bootstrapped toolchain hosted on a different architecture (e.g., x86 and RISC-V), targeting the same output. Provided the build is reproducible (fixed flags, paths, and timestamps), the binaries should match bit-for-bit; a mismatch indicates tampering in at least one chain (a comparison sketch appears below).
3. **Binary integrity enforcement**: Record cryptographic hashes of expected binaries in a manifest (signed with, e.g., TPM-protected keys) and verify them before execution. For critical systems, use *independent* tools (e.g., a separately verified disassembler) to cross-check binaries (a verification sketch also appears below).
4. **No trust in the compiler chain**: Assume every tool *downstream* of the bootstrap compiler is compromised. Use interpreted languages (e.g., Python) for high-assurance tasks where possible, but verify the interpreter’s binary through the same chain.

Paranoid? Yes. Rational? Absolutely: if the compiler is the enemy, you can only trust what you *build yourself* from a verified foundation. Every step of the build process must be audited in a physically isolated, non-networked environment. No shortcuts.
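
As a rough illustration of the cross-check in point 2, here is a minimal Python sketch. The file paths, and the assumption that both toolchains produce a reproducible build of the same target, are hypothetical; the script only compares SHA-256 digests and reports whether the two independently built binaries agree.

```python
import hashlib
import sys
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def compare_builds(build_a: Path, build_b: Path) -> bool:
    """Compare two independently produced binaries via their digests.

    Assumes both toolchains target the same architecture and the build is
    reproducible (fixed timestamps, paths, and flags); otherwise the digests
    will differ even without tampering.
    """
    digest_a, digest_b = sha256_of(build_a), sha256_of(build_b)
    print(f"{build_a}: {digest_a}")
    print(f"{build_b}: {digest_b}")
    return digest_a == digest_b


if __name__ == "__main__":
    # Hypothetical inputs: the same compiler source built by two independently
    # bootstrapped toolchains (e.g., one hosted on x86, one on RISC-V).
    a, b = Path(sys.argv[1]), Path(sys.argv[2])
    if compare_builds(a, b):
        print("MATCH: no evidence of toolchain tampering for this build.")
    else:
        print("MISMATCH: distrust both toolchains until the cause is found.")
        sys.exit(1)
```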
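
And a minimal sketch of the pre-execution check from point 3, assuming a hypothetical manifest (`EXPECTED_HASHES`) and binary path. In practice the manifest itself would be produced on the audited, air-gapped build machine and signed (e.g., with a TPM-protected key); this sketch only shows the refuse-to-run-on-mismatch logic.

```python
import hashlib
import subprocess
import sys
from pathlib import Path

# Hypothetical manifest mapping a binary path to its expected SHA-256 digest,
# recorded at build time on the audited machine.
EXPECTED_HASHES = {
    "/usr/local/bin/trusted-tool": "<sha256 digest recorded at build time>",
}


def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def run_verified(binary: str, *args: str) -> int:
    """Refuse to execute a binary whose digest does not match the manifest."""
    expected = EXPECTED_HASHES.get(binary)
    if expected is None:
        raise PermissionError(f"{binary} is not in the manifest; refusing to run.")
    actual = sha256_of(Path(binary))
    if actual != expected:
        raise PermissionError(
            f"digest mismatch for {binary}: expected {expected}, got {actual}"
        )
    # Only reached when the digest matches the manifest entry.
    return subprocess.run([binary, *args], check=False).returncode


if __name__ == "__main__":
    sys.exit(run_verified("/usr/local/bin/trusted-tool", "--version"))
```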