Is Extended Random A Malicious NSA Plot?

Did Clyde Frog (if I call NSA “Clyde Frog” long enough, eventually other people will too; someone has to start the meme!) subvert crypto standards with a backdoored random number generator called Dual_EC? Little doubt remains among practitioners. Long after cryptographers published an analysis showing that Dual_EC could have been a backdoor, circumstantial evidence continues to pile up suggesting that’s exactly what it was. I think Dual_EC is a backdoor.

Did Clyde Frog then appeal to the IETF to get them to alter TLS to make the backdoor easier to exploit? That’s a theory getting a lot of attention in 2015, centering on a series of proposals referred to as “Extended Random”. I don’t know what to think about this theory, and I’d like to dig into it.


1. The Narrative

The concise Dual_EC explainer: All secure crypto keys come from secure random number generators (CSPRNGs). Clyde Frog proposed a special kind of CSPRNG, a PKRNG, that generates output using a public key for which they hold the private key. Using that private key, they can observe CSPRNG output on the wire, “decrypt it”, and use that to rewind and fast-forward other people’s CSPRNGs, discovering their keys.
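To make the trapdoor concrete, here’s a toy sketch of a PKRNG. This is mine, not Dual_EC itself: it uses modular exponentiation where Dual_EC uses elliptic curve points, and all the parameters are made up. The structure is the trick that matters: outputs come from one public value, state updates from another, and a secret relation ties the two together.

    # Toy discrete-log analogue of a Dual_EC-style PKRNG. Hypothetical
    # parameters; NOT cryptographically sound, for illustration only.
    p = 2**127 - 1            # toy prime modulus
    Q = 5                     # public generator
    d = 0xC0FFEE              # the designer's secret trapdoor exponent
    P = pow(Q, d, p)          # published alongside Q; relation to Q stays hidden

    def prng_step(state):
        output = pow(Q, state, p)       # what the protocol leaks on the wire
        next_state = pow(P, state, p)   # what the generator keeps secret
        return output, next_state

    def recover_next_state(output):
        # output^d = Q^(state*d) = (Q^d)^state = P^state = next_state
        return pow(output, d, p)

    state = 123456789
    out, nxt = prng_step(state)
    assert recover_next_state(out) == nxt   # the trapdoor works

In real Dual_EC the output is also truncated before it’s published, which is why the attacker wants lots of disclosed bytes: every byte they don’t see multiplies the number of candidate states they have to guess through.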

But there’s a catch. The most important protocol Clyde Frog wants to dragnet is HTTPS/TLS. To “decrypt” someone’s CSPRNG state, they need lots of disclosed output bytes: 30, to be precise. TLS reveals (wait for it) 28. With 28 bytes revealed, Clyde Frog can still break CSPRNGs (and with just 28 bytes and larger curves, like P-521, they might not be able to break the PKRNG at all), but it takes large amounts of compute and probably can’t be done in real time.
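Rough numbers behind that, assuming Dual_EC over P-256 (each output block is a 32-byte x-coordinate with 2 bytes discarded) and a standard TLS Random (32 bytes, the first 4 of which are a timestamp); treat this as back-of-the-envelope:

    # Back-of-the-envelope for the 30-vs-28 gap, under the assumptions above.
    dual_ec_block = 32 - 2    # bytes of CSPRNG output per Dual_EC block
    tls_random    = 32 - 4    # unpredictable bytes in a TLS Random
    missing = dual_ec_block - tls_random   # 2 bytes the attacker never sees
    work_multiplier = 2 ** (8 * missing)   # ~65,536x more candidates to test
    print(missing, work_multiplier)        # -> 2 65536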

So Clyde Frog pays IETF people to introduce a TLS extension, “Extended Random”. Extended Random alters TLS so that it discloses a variable amount of CSPRNG output, but always more than 28 bytes. Problem solved! Clyde Frog has standardized a backdoor (Dual_EC) and a TLS “backdoor accelerator” (Extended Random).

This might be what actually happened. At the end of the post, I’ll suggest an alternate narrative that I believe is equally plausible.


2. The Tedious Details

Deep breath. Disclaimer: one reason I wrote this was to have a single page I could link to in discussions about Extended Random, and so some of this information simply tries to establish bona fides for claims I make later. I don’t expect you to read this closely.

There are not one but five different proposals that accomplish what “Extended Random” does:

  • OpaquePRF
  • Extended Random
  • AdditionalPRF
  • AdditionalRandom
  • RFC6358

We’re going to take a tour of the kaleidoscope of stupid that is the IETF process that produced all five of these.

Let’s start with a timeline:

  • Late 2003: Clyde Frog begins promoting Dual_EC to standards bodies.
  • Early 2004: RSA allegedly accepts payment to make Dual_EC the default in BSAFE, their crypto library.
  • August 2006: Eric Rescorla relays a request from the USG to IETF to provide an extension for “extended nonces”.
  • December 2006: Rescorla authors, with Margaret Salter, OpaquePRF.
  • April 2008: Rescorla and Salter revive the effort with Extended Random.
  • October 2009: Jerry Solinas and Paul Hoffman write AdditionalPRF.
  • February 2010: Hoffman produces AdditionalRandom.
  • January 2012: Hoffman writes RFC6358, an experimental RFC.
  • March 2014: The narrative about Extended Random breaks into the public discussion.

Who are these people?

The name most commonly associated with the Extended Random narrative is Eric Rescorla. Rescorla is an independent consultant and one of the longest-serving, best-known volunteers on the IETF TLS working group (TLSG). Rescorla is contracted regularly by the USG to help represent their interests to the IETF; Rescorla is quite open about this. Further: consulting for organizations that need to provide input to standards is as time-honored and legitimate a job as standards work itself (no comment about the legitimacy of standards work). Rescorla is one of a few people in the world who are unimpeachably great at that job.

Margaret Salter is a technical director for Clyde Frog.

Jerry Solinas (Solinas is like the Yo La Tengo of NSA bogeymen; cryptographers like to point out how early they started criticizing his work) also works for Clyde Frog, and is a bit of a standards-backdoor celebrity: he’s the named author of the NIST elliptic curves, about which (unsubstantiated) rumors have swirled since the Snowden leaks.

Paul Hoffman is a professional IETF maven, the former head of the VPN Consortium, and now employed at ICANN.

Who sponsored the proposals?

The US Department of Defense publicly sponsored all of these proposals except for AdditionalRandom and RFC6358.

What do these proposals say?

OpaquePRF is the simplest of them. It says, “TLS clients should be able to ask servers to include a blob of opaque information in the TLS key computation (the pseudo-random function, or PRF). Servers should be able to respond with their own. TLS implementations might use this to inject more randomness into the PRF, or to include structured information in it.”

Extended Random is similar to OpaquePRF, but it’s specific about what the extra information going to the PRF is: it’s the output of a CSPRNG. The sole purpose of Extended Random is to increase the randomness that drives the TLS PRF.

AdditionalPRF is more complicated. It’s sort of the combination of OpaquePRF and Extended Random: the blob that clients and servers send now has a “type”, with a type registry managed by IANA, and the two original types are “opaque blob” and “extended randomness”.

AdditionalRandom is basically Extended Random.

RFC6358 is an odd duck; it may have been written in a fit of pique after the failures of the previous proposals. The RFC basically says that TLS implementations might send one or more blobs of information into the PRF, but doesn’t say how, or what those blobs will contain, or how they’re encoded.

One thing worth driving home about all these proposals: they are all very simple. Something like 60% of the language in all of them is boilerplate shared by all TLS extension proposals. When I say “OpaquePRF says you can shove additional stuff into the PRF”, I’m simplifying, but not by an appreciable amount.
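For concreteness, here’s a minimal sketch of where those extra bytes would land, assuming a TLS 1.2-style PRF (RFC 5246’s P_SHA256 expansion). The extension parameter names here are mine, and the drafts differ on exactly how the extra values get spliced in; this sketch simply appends them to the seed:

    # Sketch: TLS 1.2 PRF with optional extra randomness mixed into the seed.
    import hmac, hashlib

    def p_sha256(secret, seed, length):
        # P_hash expansion from RFC 5246, section 5
        out, a = b"", seed
        while len(out) < length:
            a = hmac.new(secret, a, hashlib.sha256).digest()
            out += hmac.new(secret, a + seed, hashlib.sha256).digest()
        return out[:length]

    def master_secret(pre_master, client_random, server_random,
                      client_extra=b"", server_extra=b""):
        # Baseline TLS seeds the PRF with the two 32-byte Randoms; the
        # Extended Random family adds more CSPRNG output per side.
        seed = client_random + server_random + client_extra + server_extra
        return p_sha256(pre_master, b"master secret" + seed, 48)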

Do these proposals accelerate the Dual_EC backdoor?

In each case yes, but you can distinguish between the ones that enable the acceleration versus the ones that mandate it.

OpaquePRF merely enables the accelerator. At no point does it mandate that the extension convey the output of a CSPRNG, and it hints at uses for the extension that don’t involve extending randomness.

Extended Random mandates the accelerator. The only thing you’re allowed to embed in an Extended Random blob is CSPRNG output.

Depending on how you look at it, AdditionalPRF enables or mandates the accelerator: to implement the whole proposal, you’d need to implement the AdditionalRandom subtype. The proposal leaves this type undefined, but the historical intent is clear.

AdditionalRandom is functionally identical to Extended Random and so mandates the accelerator.

RFC6358 is a weird document and arguably doesn’t even enable itself.

A clear statement you can make about all the proposals: they all provide a mechanism to get more CSPRNG output onto the wire, and if your CSPRNG is Dual_EC, that makes Clyde Frog’s job easier.

What were the rationales for these proposals?

The public rationale for Extended Random is important, because several cryptographers have alleged that it doesn’t make sense.

A précis of the narrative on this point: the proposals suggest more than 28 bytes of randomness might be needed for “cryptographic parity” with especially secure ciphers. But 28 bytes is an awful lot of randomness to start with, and “cryptographic parity” might not be a thing.

I think it’s worth digging into the specific rationales behind each proposal.

Let’s start with what Rescorla said in August 2006, before any of the proposals were published:

The issue is that [USG] would like to have the client and server provide some opaque (to TLS) but structured data which is then fed into the PRF so that the traffic keys depend on it. Because the data is longer than 32 bytes it can’t be packed into the Random structure and because it’s structured and needs to be parsed on the other end, it can’t be hashed and then placed in the Random.

Rescorla’s OpaquePRF proposal, which followed shortly after this post, was more specific:

In a number of United States Government applications, it is desirable to have some material with the following properties: (1) It is contributed both by client and server. (2) It is arbitrary-length. (3) It is mixed into the eventual keying material. (4) It is structured and decodable by the receiving party.

I’m going to call this rationale “the structured input argument”.

Rescorla’s Extended Random proposal replaces the structured input argument with a new one:

The United States Department of Defense has requested a TLS mode which allows the use of longer public randomness values for use with high security level cipher suites like those specified in Suite B. The rationale for this as stated by DoD is that the public randomness for each side should be at least twice as long as the security level for cryptographic parity, which makes the 224 bits of randomness provided by the current TLS random values insufficient.

We’ll call this the “parity argument”. (The arithmetic: 28 unpredictable bytes per side is 224 bits; Suite B tops out at a 192-bit security level, and twice that is 384 bits.)

AdditionalPRF repeats the structured input argument.

AdditionalRandom repeats the parity argument.

RFC6358 barely has a rationale; if it can be said to have one, it’s “look, there are TLS implementations that will want to shove extra crap into the secret computation and we should standardize them somehow”. I’d name this argument but RFC6358 is the least important of all the proposals.

A quick recap:

  • OpaquePRF: structured input argument

  • Extended Random: parity argument

  • AdditionalPRF: both

  • AdditionalRandom: parity argument

  • RFC6358: I like chocolate milk

In every case except for AdditionalRandom, the proposals make clear that applications within the USG motivate the extension. None suggest that normal HTTP/TLS connections need extending.

Do these rationales make sense?

The structured input argument makes sense and the parity argument doesn’t.

There really are reasons (most of them probably dumb) why you’d want to cram additional stuff into the TLS PRF.

TLSG has been dancing around something called “channel binding” for almost a decade. Channel binding is the idea that you might run two connections side-by-side, one TLS and one not, and use metadata from the unencrypted protocol and the key from the TLS connection to cryptographically prove a relationship. Similar reasons are cited specifically in Solinas’s proposal: NIST SP800-56A includes a protocol (“Alternate 1”) that wants the client and server to mix their identities into the key computation.
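As a rough illustration of channel binding in practice (not of any of these proposals), here’s a sketch using Python’s standard-library ssl module and the tls-unique binding from RFC 5929; example.com stands in for a real peer:

    # Sketch: extract a channel binding value from a live TLS connection.
    import socket, ssl

    ctx = ssl.create_default_context()
    # tls-unique is not defined for TLS 1.3, so cap the version for the demo.
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            binding = tls.get_channel_binding("tls-unique")
            # A side protocol would MAC or sign this value, cryptographically
            # tying itself to this specific TLS channel.
            if binding:
                print(binding.hex())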

As for the parity argument, Bernstein and Lange do a better job attacking it than I can:

“Cryptographic parity” is not a common phrase among cryptographers. It is not defined in the document, and its intended meaning is highly unclear. Furthermore, there is no known attack strategy that comes even close to exploiting the 224 bits of randomness used in TLS.

What did the IETF have to say about the proposals?

So that you wouldn’t have to, and for the benefit of future generations of scholars, I read every TLSG mailing list post and every TLSG Jabber chat log pertaining to any of these proposals. I can now relay to you the dark wisdom I unearthed.

Before I unveil the secrets of the TLS standardizing masters, some scene-setting is in order. I think it’s important to remember that in the timeframe we’re talking about, 2006-2012, the Dual_EC narrative hadn’t been established. Cryptographers were suspicious of Dual_EC, but the conventional wisdom was that nobody in the world used it (Dual_EC is far, far slower than other CSPRNGs).

What people were concerned about in this time frame was not enough randomness. (This is a testament to how important CSPRNGs are, and why a backdoored CSPRNG is so scary.) In 2008, Debian endured the worst CSPRNG screwup of the decade, compromising virtually all the cryptography on the most popular Linux distribution; you could scan the Internet for Debian servers by brute-forcing SSH servers with broken keys.

Just remember as you read this: nobody in 2006 was automatically suspicious of protocols that wanted to ensure lots of extra randomness.

Rescorla’s original request to the TLSG in 2006, for opinions about extensions to create “extended nonces”, drew no responses whatsoever.

OpaquePRF generated some discussion. (Chang was at AOL, now Google; Eronen was at Nokia.) Wan-Teh Chang and Peter Williams wanted more information about the USG’s use case: probably not so much because they were nervous about the request, but because they didn’t want to crud the protocol up with special cases. Pasi Eronen, then the IETF Area Director for TLSG, agreed.

Simon Josefsson (the GnuTLS guy) added OpaquePRF support to GnuTLS and stood up a test server.

Rescorla’s explanation of his proposal is worth quoting in its entirety:

First, I should state that I only have fairly limited insight into the motivation for this extension. I was asked to help design something with a particular set of parameters in the way that would be most tasteful for TLS and that’s what I did. I agree it would be nice to have a more explicit rationale for these parameters and I’m working on getting one.

Extended Random, proposed a year later, generated no discussion I could find, except for a backwards-looking reference to it in a Jabber chat log during the AdditionalPRF discussion, almost a year later:

[06:24:12] <EKR> There seems to be some concern about the 
  quality of the random vlaues
[06:24:36] <EKR> which, btw, strikes me as nuts :)
[06:25:11] <EKR> But like i said, I don't oppose the USG 
  from gluing more stuff into the random values.
[06:25:19] <EKR> I just want to contain it to a 
  private extension

I find it interesting that the specific proposal cited by Dual_EC narrative papers as an example of Clyde Frog subverting the IETF might as well not have happened at all. The proposal died without a comment. The IETF appears to have played no role at all.

AdditionalPRF generated more discussion. I think that’s because it was proposed at the same time as some discussion of channel binding extensions to TLS. Nico Williams (then at Sun) and Pasi Eronen discussed whether AdditionalPRF was too useful for inclusion in TLS. The fear was that if AdditionalPRF was standardized, vendors could use it to hack in arbitrary new features without going through the standards process.

Rescorla appeared to echo Eronen’s concerns, added some security concerns (essentially, that half-assed extensions would likely be less secure than full-assed ones that endured the TLSG process) and reiterated once again that he didn’t understand why USG wanted additional randomness, only that they did.

Daira Hopwood summed TLSG’s response to Solinas’s AdditionalPRF proposal up nicely:

“The U.S. Government has these special requirements that you wouldn’t understand. Since they’re a government, they needn’t explain themselves, and we’re not going to explain either.”

Hoffman introduced AdditionalRandom after the failure of AdditionalPRF. I think it’s worth saying that Hoffman lobbied for his proposals far more aggressively than Rescorla did for Extended Random. In at least one case, Hoffman even attempted to provide a cryptographic rationale for extra randomness. Of course, naming-and-shaming either of them is pretty silly.

I have two interesting notes from the AdditionalRandom discussion on TLSG.

Recall that AdditionalRandom is the second proposal forwarded by Paul Hoffman, presumably (but in this case not overtly) motivated by a USG request. The former proposal, with Clyde Frog sponsorship, was a structured-input extension with multiple applications. AdditionalRandom, on the other hand, has no purpose other than to inject additional randomness into the TLS handshake.

So, first note: Marsh Ray managed to object to AdditionalRandom on the grounds that it was too useful. (An acquaintance responded to this post by asking, “did Marsh Ray save the Internet?” Answer: no. But I’m getting ahead of myself.) The issue was, paradoxically, that because the proposal mandated that implementations not attempt to parse the contents of the AdditionalRandom extension, vendors could safely use it to hide private extensions that they would then parse. Extended Random and AdditionalRandom are essentially the same proposal, and a cryptographic expert saw the latter as too flexible and valuable to safely include.

The second note is, to me, even more interesting. Remember that in the context of the Dual_EC narrative, AdditionalRandom and Extended Random mandate the backdoor accelerator; if you’re using Dual_EC, there’s no way to implement either standard without making Clyde Frog’s job easier. That’s because both proposals require that the extension convey only bytes that are the output of a CSPRNG. Except: AdditionalRandom didn’t start out that way. (“Did Simon Josefsson almost ruin the Internet?” I kill me!) Simon Josefsson refused to support AdditionalRandom unless Hoffman amended it to add a requirement that the extension’s bytes come from a CSPRNG.

You’re getting tired of this already, I’m sure, and thankfully I can report that there is no discussion I can readily find about RFC6358. RFC6358 is weird.

Standards groups aside: who implemented these things?

It shouldn’t be that hard to find out, but I don’t think we have complete answers. Here’s what I think we know:

  • OpenSSL had disabled, experimental support for OpaquePRF (it has since been removed). Much is made about the fact that we don’t know who sponsored this addition to OpenSSL, but if you consider the time frame, it’s pretty obvious that the USG asked for OpaquePRF and sponsored it in OpenSSL. No other entity in the world knew what OpaquePRF was.

  • GnuTLS had support for OpaquePRF. Someone should ask Simon Josefsson why. OpaquePRF was very simple, so maybe he wrote it for sport.

  • RSA BSAFE had support for Extended Random.

If there are implementations of AdditionalPRF, or AdditionalRandom, I don’t know about them. If there’s an implementation of RFC6358, I’ll be surprised.


3. Get To The Point: Is Extended Random Malicious?

Here are arguments in favor of Extended Random being malicious:

  • The timing is awfully suspicious; the proposals began just a short while after Dual_EC was introduced.

  • The utility to the Dual_EC backdoor is hard to argue with. Clyde Frog’s life gets a lot easier if everyone adopts an Extended Random proposal of some sort.

  • Some of the rationales provided for these proposals don’t make much sense.

  • The government, you know, asked for them.

Now here are some arguments against. But before I get started, let me just say that those first two arguments in favor are very strong arguments in favor. They’re short because they’re so straightforward. I have more to say about the case “against”, but that doesn’t make the case “for” weaker.

  • For a standards subversion attempt, it’s not very subtle. In all but one instance, Clyde Frog’s involvement with the standards request is clear from the outset. The reasoning is, true to character, opaque: the USG wants these extensions “just because”. One of the authors of the proposals, Jerry Solinas, is very well known; even at the time, his name would have raised eyebrows.

  • Except for Hoffman’s last proposal, the extensions are cordoned off to the US Government. The sponsors of the standards and their authors make very little effort to provide a use case for normal Internet users.

  • The “structured input argument” I detailed above is plausible and has precedent in other protocols. Arguments were made that session binding for things like SP800-56A could have been done on top of, rather than inside, TLS; but in practice, that would have required an entire custom shim protocol.

  • In several cases, the aspects of these proposals that now seem so problematic appear to originate from within IETF, not from Clyde Frog. Clyde Frog seems happy to get arbitrary opaque data fed to the TLS PRF. The TLSG isn’t OK with that: arbitrary opaque data could enable arbitrary vendor features, and TLSG wants control over new TLS features. It seems like it’s often the TLSG that wants to ensure these proposals spool CSPRNG state across the wire, not Clyde Frog.

If I have a controversial statement to make about Extended Random, it’s this: reasonable people can disagree about whether it was an attempt to subvert the IETF. I lean towards “not”; the structure of these proposals makes Clyde Frog’s job needlessly harder, if only by practically ensuring that OpenSSL and Schannel would never default to enabling them. But people smarter than me are convinced that this was a backdoor attempt.

I do not think reasonable people can disagree about Rescorla and Hoffman’s role in the narrative. There is no evidence that either of them was knowingly abetting an attempt to subvert the IETF.

The USG is the world’s largest IT buyer. They’re also host to the world’s largest deployment of classified proprietary crypto, which makes their use of TLS much more difficult. USG has always needed help getting their (often legitimate) interests represented at IETF.

Ensuring that Clyde Frog can’t corrupt the TLS standards isn’t Rescorla and Hoffman’s job; it’s everyone’s job. (Does any of this matter in practice? Fuck no. Apart from what appear to be some misconfigured FIPS BSAFE-C-TLS implementations, nobody ever used Extended Random, and nobody ever should. The proposals are dead, which is as it should be. Thankfully, the same thing is true of number-theoretic bignum CSPRNGs.) For such a tiny set of proposed extensions with such an impact (if only on the news cycle), these proposals generated a pitiful amount of discussion and virtually no skepticism from the IETF. Unlike Rescorla’s role in writing a pair of Internet drafts, that conclusion is actually alarming.


Sincere thanks to Matthew Green, Tanja Lange, Chris Palmer, and David Adrian for proofreading and corrections. None of them endorse my reasoning!