In class, we talked about Snowden and the NSA. An important topic I forgot to bring up is the ethics of the NSA hoarding known security vulnerabilities. Let’s say some NSA internal security expert discovers a remotely exploitable vulnerability in Windows that allows them to take over a Windows machine. What are they supposed to do with it? They could use it as part of their attack machinery, enabling all the other things they do once they’ve broken into a computer. Alternatively, they could contact Microsoft and say “hey, this is bad, fix it.”
The core question, from a utilitarian perspective at any rate, is whether the world (or perhaps just the U.S., if you prefer) is better off when the vulnerability is fixed or when it is exploited. For example (a toy expected-utility sketch follows the list):
- If the NSA could find it, then others can as well. Perhaps the vulnerability is already being exploited by somebody else.
- Once the NSA exploits the vulnerability in the wild, others will see it, reverse engineer the attack, and be able to exploit it themselves.
- If the NSA just tells Microsoft to go fix it, then the NSA’s “offensive” mission can’t benefit from the vulnerability. However, every Microsoft user benefits from having it fixed.
- Conversely, everybody loses some utility when there’s a vulnerability, even if they’re not the target of an NSA-driven exploit for that vulnerability, and even if they’re not a Windows user. If you’re interacting with somebody who has a vulnerable machine, it hurts you; for example, a breach of your bank’s Windows servers exposes your account data even if you never touch Windows yourself.
- The downside risk to Windows users, in the aggregate, is proportional to how many Windows machines they’re running. There are lots of Windows machines in the U.S. and, for example, not so many in North Korea, so the NSA has more incentive to push Microsoft to fix things than the North Korean equivalent agency would.
- The NSA has two missions: an offensive one (“signals intelligence”) and a defensive one (“information assurance”). You can imagine how putting these two missions under the same roof might lead to disagreements, since each side of the organization has quite different incentives when it comes to “doing their job”.
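To make the utilitarian calculus concrete, here’s a minimal sketch in Python. Everything in it is an assumption: the probabilities, the utility weights, the machine counts, and the simple linear model itself are all invented for illustration. The point is only to show how the factors above (rediscovery, reverse engineering, the domestic/foreign machine ratio) enter the trade-off.

```python
# Toy expected-utility model of "patch vs. exploit". Every number here is
# invented for illustration; what matters is the structure of the trade-off,
# not the values.

def expected_utility(disclose: bool,
                     p_rediscovery: float,    # chance somebody else finds the bug
                     p_reverse_eng: float,    # chance that use in the wild leaks it
                     intel_value: float,      # payoff from offensive (SIGINT) use
                     harm_per_machine: float, # expected harm per vulnerable machine
                     domestic_machines: float,
                     foreign_machines: float) -> float:
    if disclose:
        # Microsoft patches the bug: the offensive payoff is forfeited, and
        # only a small window of exposure remains while the patch rolls out.
        return -0.1 * harm_per_machine * (domestic_machines + foreign_machines)
    # Stockpile and exploit: collect the intelligence payoff, but risk that
    # the bug is rediscovered, or reverse-engineered from attacks in the
    # wild, and turned against domestic machines too.
    p_hostile_use = p_rediscovery + (1 - p_rediscovery) * p_reverse_eng
    return intel_value - p_hostile_use * harm_per_machine * domestic_machines

# Hypothetical inputs, reflecting the proportionality point above: far more
# Windows machines at home than in an adversary's country.
args = dict(p_rediscovery=0.3, p_reverse_eng=0.5, intel_value=100.0,
            harm_per_machine=0.001, domestic_machines=200e6,
            foreign_machines=50e6)
print("disclose:", expected_utility(True, **args))   # -25000.0
print("exploit: ", expected_utility(False, **args))  # -129900.0
```

Under these particular made-up numbers, disclosure wins, but notice how sensitive the answer is: shrink the domestic machine count or the leak probabilities and the balance tips the other way, which is exactly why the question is hard.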
Exercise for the reader: try to think these same issues through for some of the other recent NSA disclosures, such as the allegation that the NSA deliberately weakened a popular standard for securely generating random numbers (the Dual_EC_DRBG generator), or the allegations that they intercept electronics shipments and tamper with the hardware.
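To see why a weakened random-number generator makes such a potent backdoor, here’s a deliberately cartoonish sketch. This is not Dual_EC_DRBG; the update rule and every constant are made up. It just exhibits the key property: the generator’s output reveals its internal state to anyone holding a designer-chosen trapdoor value, so that party can predict every subsequent “random” output.

```python
# A deliberately broken toy PRNG, illustrating the *shape* of a trapdoored
# generator (NOT Dual_EC_DRBG; this is a cartoon). The designer chose
# TRAPDOOR; anyone who knows it can recover the internal state from a
# single output and predict everything that follows.

TRAPDOOR = 0x5DEECE66D  # the designer's secret constant (made up)

def step(state: int) -> tuple[int, int]:
    """Advance the state and emit one 'random' output."""
    new_state = (state * 6364136223846793005 + 1442695040888963407) % 2**64
    output = new_state ^ TRAPDOOR  # fatal flaw: output determines the state
    return new_state, output

# The victim generates some "random" numbers.
state = 0xDEADBEEF
state, out1 = step(state)
state, out2 = step(state)

# An attacker who knows TRAPDOOR recovers the state from out1 alone...
recovered = out1 ^ TRAPDOOR
# ...and predicts the victim's next output without ever seeing it.
_, predicted = step(recovered)
assert predicted == out2
print("attacker predicted the next output:", hex(predicted))
```

In the actual Dual_EC_DRBG allegation, the trapdoor is a hidden relationship between two elliptic-curve points baked into the standard; anyone who knows it can reportedly recover the generator’s state from a small amount of output, which is exactly the property the toy above exhibits.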
What you’ll inevitably find is that a utilitarian framework like this makes it impossible to say “they should always disclose a vulnerability” or “they should never disclose a vulnerability”. However, you’ll find that it’s useful to compare two different attack modes (e.g., broad-spectrum surveillance vs. targeted attacks) and talk about the relative merits and downsides of weaponizing a vulnerability versus patching it, as in the continuation of the sketch below.
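Picking up the toy model from earlier (this snippet reuses the hypothetical `expected_utility` function and `args` from that sketch), we can vary just the leak probability to compare the two attack modes. Under these invented numbers, a quiet targeted campaign weaponizes far more cheaply than broad-spectrum use, even though disclosure still wins for these particular values.

```python
# Continuing the toy model above (reuses expected_utility and args): hold
# everything fixed except the chance that using the exploit leaks it. A
# targeted attack rarely exposes the exploit; broad-spectrum use almost
# always does.
for label, p in [("targeted", 0.05), ("broad-spectrum", 0.9)]:
    ev = expected_utility(False, **dict(args, p_reverse_eng=p))
    print(f"{label} exploit: {ev}")  # -66900.0 vs. -185900.0
```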