
There’s a crushing monotony to stories about how the National Security Agency has been bending and breaking every rule to crack open your mail. Each new revelation that hits the news — that the agency has tapped into data warehouses belonging to Google and Yahoo, systematically undermined commercial encryption with backdoors, surreptitiously engineered weaknesses in encryption standards — seems like another confirmation that the NSA is trying to batter down every technological barrier that might prevent it from reading your e-mails and listening in on your phone calls.

The steady drip-drip-drip of new violations obscures the most interesting — and saddest — part of the whole NSA story. The agency wasn’t always out to steal your secrets. Twenty years ago, the agency was trying to protect them from outsiders.

Sometime in the 1990s or early 2000s, most likely in the late Clinton administration, there began a quiet but dramatic shift in doctrine. Over the span of a few years, the NSA decided that American citizens’ computers would have to be targeted. And as targets, we citizens could not be trusted with strong encryption.

The NSA is trying so hard to undermine commercial encryption nowadays that it’s hard to imagine that the agency ever had a different attitude. Back when the Internet was new, however, forward-thinking NSA analysts realized that electronic commerce was coming, and without a strong, cryptographically secure infrastructure, banks and stores and other entities attempting to do business on the Internet would be vulnerable to pirates — or, worse yet, foreign powers — who could collect sensitive information or disrupt commerce. So the NSA tried its damnedest to build cryptographic defenses for Americans to use.

Case in point is the development of the Data Encryption Standard. In the 1970s, the US decided it needed a new, fast, secure, standardized algorithm for encrypting blocks of sensitive digital data. The National Bureau of Standards sent out a request for proposals, and eventually, an algorithm designed by cryptographers at IBM won out.

The innards of the algorithm resemble nothing so much as a gigantic Cuisinart, dicing up and scrambling large chunks of data and reassembling them in a new order. That scrambling and reassembling would be easily reversible, and the encryption therefore useless, were it not for a set of mathematical widgets in the middle of the algorithm known as S-boxes.

Each of the eight S-boxes contained a set of numbers that gave precise instructions on how to substitute data in an irreversible way. Irreversible, that is, without the secret key. Beneath the enormous amount of juggling and jumbling of bits and bytes, the quality of IBM’s algorithm hinged almost entirely on whether those S-boxes were constructed properly. Unfortunately, S-box design was more an art than a science, especially in the 1970s.
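To make those widgets concrete, here is a minimal sketch in Python of a single S-box lookup. The table is S-box S1 from the published DES standard (FIPS 46), and so is the lookup rule: the outer two bits of the 6-bit input select a row, the inner four bits select a column.

```python
# A minimal sketch of a single DES S-box lookup. The table is S-box S1
# from the published standard (FIPS 46); the lookup rule (outer two
# bits pick the row, inner four bits pick the column) is likewise from
# the standard.
S1 = [
    [14,  4, 13,  1,  2, 15, 11,  8,  3, 10,  6, 12,  5,  9,  0,  7],
    [ 0, 15,  7,  4, 14,  2, 13,  1, 10,  6, 12, 11,  9,  5,  3,  8],
    [ 4,  1, 14,  8, 13,  6,  2, 11, 15, 12,  9,  7,  3, 10,  5,  0],
    [15, 12,  8,  2,  4,  9,  1,  7,  5, 11,  3, 14, 10,  0,  6, 13],
]

def sbox_lookup(six_bits: int) -> int:
    """Map a 6-bit value (0..63) to a 4-bit value via S-box S1."""
    row = ((six_bits >> 5) & 1) << 1 | (six_bits & 1)  # outer two bits
    col = (six_bits >> 1) & 0b1111                     # inner four bits
    return S1[row][col]

print(sbox_lookup(0b011011))  # row 01, column 1101 -> S1[1][13] == 5
```

Note that each S-box squeezes six bits down to four, so on its own it destroys information; it is the keyed structure of the full cipher wrapped around the S-boxes that lets the intended recipient undo the whole transformation.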

The National Bureau of Standards decided to ask the NSA to evaluate the security of IBM’s proposed algorithm. The NSA gave the green light, but only after making one mysterious change: it tweaked the numbers in the algorithm’s S-boxes and refused to explain why. This caused a bit of consternation in the cryptographic community; some believed the agency had introduced a flaw into the S-boxes that would allow the NSA to decrypt messages, if need be. Nevertheless, the new algorithm, dubbed DES, was adopted in 1977.

It took more than a decade for outside cryptographers to figure out why the NSA had tweaked those S-boxes. In the late 1980s, Eli Biham and Adi Shamir (the S in RSA) figured out a new way of attacking cryptographic systems: feed very similar — but not identical — blocks of data into the algorithm and compare how the outputs differ. This technique became known as differential cryptanalysis. It turns out that the NSA-chosen S-boxes are particularly resistant to it. By tweaking the S-boxes to defend against an attack that outsiders had not yet discovered, the NSA demonstrated that it had been trying to strengthen domestic cryptography rather than weaken it.
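To see the attack’s core bookkeeping in miniature, here is a self-contained Python toy. It treats the first row of S1 as a standalone 4-bit S-box (an illustration of the technique, not a break of DES) and tabulates, for each fixed input difference, how the output differences cluster.

```python
# A self-contained toy of the central tally in differential
# cryptanalysis. The 4-bit S-box below is the first row of DES's S1,
# treated here as a standalone substitution for illustration.
from collections import Counter

SBOX = [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7]

def output_differences(d_in: int) -> Counter:
    """How output XOR-differences distribute for a fixed input difference."""
    return Counter(SBOX[x] ^ SBOX[x ^ d_in] for x in range(16))

# A strongly clustered row hands the cryptanalyst a lever: pairs of
# plaintexts with that input difference leak information about the key.
# Good S-box design keeps every row of this table as flat as possible.
for d_in in range(1, 16):
    print(f"input difference {d_in:2}: {dict(output_differences(d_in))}")
```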

That was the culture of the NSA, at least during the short time that I was there (from 1992 through 1993). The agency’s mission was not just to crack enemy codes, but to ensure that your own were secure. And your own didn’t just mean your mil-spec equipment, but the everyday codes that invisibly made Citibank and AmEx and NYSE — not to mention power plants, water works, and transport systems — function as securely as possible in the digital wilds.

Of course, the two halves of NSA’s mission were in constant tension. If you made your own cryptosystems stronger, adversaries could use your algorithms and techniques to protect their communications from agency eavesdroppers. Strengthening encryption necessarily made intelligence-gathering harder. And in the early 1990s, it became clear that the spread of fast, cheap, secure encryption algorithms would make NSA’s eavesdropping mission much more difficult. So, how could the agency ensure secure communications at home while denying them to potential adversaries?

The answer the NSA came up with was the infamous “Clipper chip.” In the early 1990s, the Clinton administration floated the idea of a sealed microchip that would encrypt data securely using a (then-classified) algorithm known as SKIPJACK. Like DES, SKIPJACK was a secure block cipher, but a backdoor had been built into the chip itself, allowing the government to decrypt the communications.

The backdoor was supposed to be secure: each chip carried a unique unit key, and that key was split into pieces held in escrow by separate government agencies. So, even though the encryption was deliberately weakened, the weakness was narrow and (theoretically) could be exercised only by authorized government personnel. And if the government could convince Americans to adopt the Clipper chip and other similar law-enforcement-accessible encryption schemes, the two halves of the NSA’s mission would no longer be in conflict. It could promote strong (if government-accessible) encryption at home without any worries that the technology would be attractive to foreign adversaries.
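The mechanics of that escrow arrangement are easy to sketch. The Python toy below is illustrative rather than the actual Clipper LEAF protocol: it splits a unit key into two XOR shares, one per escrow agent, such that a single share on its own reveals nothing about the key.

```python
# A toy sketch of split-key escrow in the spirit of the Clipper design.
# The structure here is illustrative, not the real LEAF protocol: the
# chip's unit key is split into two XOR shares, one per escrow agent,
# and a single share by itself is statistically useless.
import secrets

def escrow_split(unit_key: bytes) -> tuple[bytes, bytes]:
    """Split a key into two shares; both are needed to rebuild it."""
    share_a = secrets.token_bytes(len(unit_key))   # uniformly random pad
    share_b = bytes(k ^ a for k, a in zip(unit_key, share_a))
    return share_a, share_b

def escrow_join(share_a: bytes, share_b: bytes) -> bytes:
    """Recombine the two escrowed shares into the original key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

unit_key = secrets.token_bytes(10)     # SKIPJACK used 80-bit (10-byte) keys
a, b = escrow_split(unit_key)          # deposit one share with each agency
assert escrow_join(a, b) == unit_key   # only both together recover the key
```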

The Clipper chip sank like a silicon balloon. Privacy advocates and corporations alike rejected it, and the administration retreated, withdrawing the proposal and eventually declassifying SKIPJACK in 1998. The NSA wouldn’t get a clean solution to its dilemma of fostering strong encryption at home while trying to crack foreign cryptosystems. Instead, the government relied on lame, ineffective export controls to try to keep American encryption algorithms from being used overseas. They didn’t work; encryption simply got stronger and stronger, and not just domestically.

I believe that within a few years after the Clipper chip died — sometime in the late 1990s or early 2000s — the NSA finally cut the Gordian knot. The agency would stop trying to reconcile two fundamentally irreconcilable mission goals. Foreign intelligence, it decided, was much more important than strengthening American commercial and personal encryption. And so the agency began actively undermining the latter to enable the former.

Flash forward to 2007. NIST, the agency formerly known as the National Bureau of Standards, published a standard containing a set of algorithms known as pseudorandom number generators. Many computer programs rely upon such algorithms to produce a stream of random-looking numbers; a poorly designed generator, one whose upcoming outputs an attacker can guess, can undermine the security of a computer system or a whole network. (Imagine, for example, what you could do if you could figure out what Keno number was coming up next on the telescreen.)
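A few lines of Python show why predictability is fatal. The generator below is a classic linear congruential generator, chosen purely for illustration (it is not one of the NIST algorithms); because each output is the generator’s entire internal state, a single observed value lets an attacker replay everything that follows.

```python
# A toy "victim" generator: a textbook linear congruential generator
# (illustrative only; not one of the NIST algorithms). Its fatal
# property is that each output *is* the full internal state.
M, A, C = 2**31, 1103515245, 12345     # classic LCG constants

def lcg_next(state: int) -> int:
    return (A * state + C) % M

# The victim seeds the generator and draws three "random" values...
state = 123456789                      # pretend this seed is secret
draws = []
for _ in range(3):
    state = lcg_next(state)
    draws.append(state)

# ...but an attacker who observed only the first draw can replay the
# algorithm and predict every subsequent value.
predicted = lcg_next(draws[0])
assert predicted == draws[1]
assert lcg_next(predicted) == draws[2]
```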

One of these generators, known as Dual_EC_DRBG, relies upon the properties of some imperfectly understood mathematical objects known as elliptic curves. Built into the algorithm is a set of constants, in effect a pair of points on an elliptic curve, that drives the generation of the random numbers. Nobody knew for certain at the time — NIST didn’t make it explicit how those numbers were chosen — but it was assumed that the NSA helped NIST choose them.

We now know that, yes, the NSA was behind the chosen numbers. We know because mathematicians figured out that whoever generated those numbers had the opportunity to insert a backdoor into the algorithm. And now, thanks to the Edward Snowden revelations, we know that the NSA did, indeed, take that opportunity. The algorithm, published and certified by NIST to be secure, had a gaping hole put there by design. Three decades after the NSA tweaked DES to make it more secure, it did precisely the opposite with Dual_EC_DRBG.
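Here is a deliberately miniature model of that hole. The real generator runs over the NIST P-256 curve and truncates its outputs; this Python sketch uses a tiny textbook curve and publishes whole x-coordinates. But the trapdoor relationship is the one the mathematicians spotted: the two public points satisfy P = d·Q for some secret d, and whoever knows d can turn a single published output into the generator’s next internal state.

```python
# A toy model of the Dual_EC_DRBG trapdoor over a tiny curve. Every
# parameter here is miniature and illustrative; only the P = d*Q
# relationship mirrors the real design.
p, a, b = 11, 1, 6    # toy curve: y^2 = x^3 + x + 6 (mod 11), 13 points

def ec_add(P1, P2):
    """Add two curve points; None stands for the point at infinity."""
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                              # P + (-P) = infinity
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, pt):
    """Scalar multiplication by double-and-add."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

Q = (2, 7)            # public point (on the curve: 7*7 % 11 == (8+2+6) % 11)
d = 5                 # the trapdoor, known only to whoever chose the points
P = ec_mul(d, Q)      # the other public point: P = d*Q

def drbg_step(state):
    """One generator step: update the state, emit one output."""
    state = ec_mul(state, P)[0]          # s <- x(s*P)
    return state, ec_mul(state, Q)[0]    # output t = x(s*Q), published

s, t1 = drbg_step(6)   # the victim's first published output...
s, t2 = drbg_step(s)   # ...and the second

# The attacker sees only t1, lifts it back to a curve point (p % 4 == 3,
# so a modular square root is one exponentiation), and applies the
# trapdoor: x(d * R) = x(s*P), which is the victim's next state.
y = pow((t1**3 + a * t1 + b) % p, (p + 1) // 4, p)
recovered = ec_mul(d, (t1, y))[0]
assert ec_mul(recovered, Q)[0] == t2     # all future output is now predictable
```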

Those leaked memos show that the National Security Agency has been systematically finding flaws — and creating them — in the cryptographic protocols that all internet commerce relies upon: HTTPS, SSH, SSL, PPTP, and many, many more. Nobody in the cryptographic community knows for sure what’s secure and what’s been subtly undermined by the agency. There’s even evidence that the NSA, or an attacker as sophisticated as the NSA, has tried to stick cleverly camouflaged backdoors into open-source software.

The implications are clear. The NSA has abandoned half of its mission; it no longer feels obliged to help Americans keep their communications secure from outside attackers. Just the opposite. The NSA now feels that to fulfill its intelligence-gathering function, it must undermine the cryptographic security of American citizens and corporations. The NSA of the 1970s was trying to protect our digital infrastructure against exactly the kinds of attacks that the NSA of the 2000s is successfully carrying out again and again.

“Attack” is exactly the right word. It’s a term of art; it’s the word that cryptographers and cryptanalysts use to describe an attempt to undermine security. Another cryptographic term that fits right in: adversary.

Perhaps the most disturbing revelation of the Snowden leaks is that in the most literal sense, the National Security Agency now considers every American citizen and every American corporation to be an adversary.

Must we now return the favor?