Questions and Answers from "Against DNSSEC"
What follows are questions and arguments I’ve heard since posting Against DNSSEC. I’ve attempted to order these roughly by frequency and, subjectively, importance. I’ll add to this as I notice more questions that merit responses. I’ve given myself much more leeway to be discursive here. “Against DNSSEC” remains my attempt at a single coherent set of arguments against DNSSEC. Don’t read this page unless you’ve read “Against DNSSEC” already.
The CA system is broken and needs to be replaced!
That’s not a question. You know you’re seeing a variant of this argument when someone brings up the 1,482 CA certificates trusted by browsers. The problem with the argument is that for sites in .COM, DNSSEC simply buys you a 1,483rd and 1,484th trusted third party, both of which are controlled by the US Government. The CA system certainly does need to be replaced. But DNSSEC doesn’t solve the problem; DNSSEC makes things worse. It’s like you have a gaping hole in your wall letting insects and marmots into your house, and someone comes by and offers, for $200,000, to sell you another hole in your wall. That deal only makes sense if (a) it’s not your house and (b) you’re in the plaster repair business.
This point is so fundamental to the argument that it bears repeating. The CA system is fatally compromised and untrustworthy. But so is DNSSEC: in DNSSEC, the most important sites on the Internet are cryptographically controlled by NSA and GCHQ. One can quibble about which is worse. That debate is pointless. Both are unacceptable.
The future of HTTPS security is in key pinning and systems built on top of key pinning. DNSSEC doesn’t make key pinning meaningfully easier. Central authorities can’t solve the Internet trust problem. Central authorities are the Internet trust problem.
How can governments be in control of the DNSSEC PKI if an NGO controls the roots?
Governments control the most important TLDs. (People should also be cautious about valorizing NGOs.) And the argument that security-conscious sites could flee to safer TLDs is specious. Google can’t opt out of .COM. Shouldn’t security and privacy be the default, not a pitfall that savvy sites avoid by selecting oddball TLDs? And how do site owners gauge the trustworthiness of any given TLD? The popular .IO TLD, for instance, is the British Indian Ocean Territory; one can reasonably assume it’s the property of GCHQ, the world’s most aggressive signals intelligence agency.
Libya already controls BIT.LY; the USG already controls all the .COM names. How can DNSSEC make things any worse?
By adding TLS certificates to the DNS. Libya controls BIT.LY’s name to IP address mapping through their control of .LY. But Libya has no authority over BIT.LY’s TLS certificates. Until we adopt DNSSEC and DANE.
Under DNSSEC/DANE, CA certificates still get validated. How could a SIGINT agency forge a TLS certificate solely using DNSSEC?
By corrupting one of the hundreds of CAs trusted by browsers. It’s 2015. No meaningful security feature can rely on the trustworthiness of the CA system. But if you’re not O.K. with NSA control of the Internet, relying on the CAs is exactly what DNSSEC proponents ask you to do.
Wouldn’t it be difficult for NSA to subvert keys in the DNS? They’d have to spoof and sign entire zones.
And?
A special ring of Robot Hell is reserved for anti-NSA features that admit to feasible but complicated or costly attacks. NSA’s primary objective isn’t to subvert the Internet. NSA’s primary objective isn’t even to catch terrorists. NSA’s primary objective is to secure more budget for NSA. So long as Internet attacks remain feasible, it’s actually in their best interests for those attacks to be made more complicated. Complexity motivates bigger appropriations and increases their headcount. And the harder it is for organizations other than NSA to launch an attack, the better.
Wouldn’t people pretty much have to notice an NSA attack on this scale?
Why? Look at QUANTUM INSERT. NSA is just as good at targeted attacks as they are at AT&T-backbone-snooping large-scale attacks. Why should a new Internet cryptosystem deployed at enormous expense admit to attacks like QUANTUM INSERT?
If over the next 5 years nothing more is done to shore up Internet security than is already being done, targeted CA-based attacks will become much riskier for NSA and GCHQ because of key pinning. To man-in-the-middle an HTTPS connection, NSA will need to know that the browser they’re targeting hasn’t already cached the correct key fingerprint for the server. If it has, the browser will scream bloody murder and, hopefully, report back to Google or the EFF about the discrepancy. People watching those logs will quickly discover which CAs are signing bogus certificates, and compromised CAs will be evicted from browsers. NSA and GCHQ will have to risk burning an entire CA every time they launch this attack. If we do nothing new at a protocol level, every Chrome and Firefox installation on the Internet will become part of a global anti-surveillance surveillance system.
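To make the mechanics concrete, here’s a rough Python sketch of an HPKP-style pin check (illustrative only, not any browser’s code; the hostname and pin value are made up, and it assumes the third-party cryptography package is installed):

```python
# Illustrative sketch of HPKP-style public key pinning: fetch a server's leaf
# certificate, hash its SubjectPublicKeyInfo, and compare against pins cached
# from earlier visits. Hostname and pin value below are hypothetical.
# Requires the third-party "cryptography" package.
import base64
import hashlib
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Pins the client remembered from a previous, presumed-honest connection.
CACHED_PINS = {
    "example.com": {"sha256/47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="},  # made-up pin
}

def spki_pin(host: str, port: int = 443) -> str:
    """Return base64(SHA-256(SubjectPublicKeyInfo)) for the cert the server presents."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    return "sha256/" + base64.b64encode(hashlib.sha256(spki).digest()).decode()

def check(host: str) -> None:
    seen = spki_pin(host)
    if host in CACHED_PINS and seen not in CACHED_PINS[host]:
        # A real browser would refuse the connection and report the mismatch.
        print(f"PIN MISMATCH for {host}: {seen} -- possible MITM, report it")
    else:
        print(f"{host}: pin {seen} accepted")

check("example.com")
```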
What happens when the same story is repeated in a DNSSEC/DANE world? .COM is discovered to have signed bogus material for Facebook. Now what? Browsers can’t talk to .COM anymore?
What’s the alternative to DNSSEC?
Do nothing. The DNS does not urgently need to be secured.
All effective security on the Internet assumes that DNS lookups are unsafe. If this bothers people from a design perspective, they should consider all the other protocol interactions in TCP/IP that aren’t secure: BGP4 advertisements, IP source addresses, ARP lookups. Clearly there is some point in the TCP/IP stack where we must draw a line and say “security and privacy are built above this layer”. The argument against DNSSEC simply says the line should be drawn somewhere higher than the DNS.
Can’t end-systems validate DNSSEC records themselves rather than trusting servers?
Sure they can. Everyone can also just run their own caching server. They don’t, though, because the protocol was designed with the expectation that they wouldn’t (this squares with the overall design of the DNS, in which stub resolvers cooperate to reduce traffic to DNS authority servers by relying on caching servers). DNSSEC deployment guides go so far as to recommend against deployment of DNSSEC validation on end-systems. So significant is the inclination against extending DNSSEC all the way to desktops that an additional protocol extension (TSIG) was designed in part to secure the last hop between end-systems and their caching servers instead.
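For illustration, here’s roughly what the end-system side looks like today, sketched with the third-party dnspython package: the stub sets the DO bit and then simply trusts the upstream resolver’s AD flag, never checking a signature itself. The resolver address and the name being looked up are arbitrary examples.

```python
# Rough sketch with dnspython: send a query with the DO (DNSSEC OK) bit to an
# upstream recursive resolver, then inspect the AD (Authenticated Data) flag
# in the response. Note that the end-system is still just *trusting* the
# resolver's assertion; it never checks a signature itself. The resolver IP
# and name below are arbitrary examples.
import dns.flags
import dns.message
import dns.query

RESOLVER = "8.8.8.8"   # example recursive resolver
NAME = "ietf.org."     # example DNSSEC-signed zone

query = dns.message.make_query(NAME, "A", want_dnssec=True)  # EDNS0 + DO bit
response = dns.query.udp(query, RESOLVER, timeout=5)

if response.flags & dns.flags.AD:
    print(f"{NAME}: resolver claims the answer validated (AD flag set)")
else:
    print(f"{NAME}: no AD flag -- zone unsigned, or resolver not validating")
```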
Browser vendors seem to be punting on DNSSEC. Google Chromium added support and later withdrew it. Mozilla Firefox had a pilot project to implement DANE which was subsequently shelved. Apple mDNSResponder had nascent (unused) support for DNSSEC, but its replacement (discoveryd) does not. Applications without explicit DNSSEC code are stuck with the lowest common denominator of DNSSEC support. The most important applications have opted not to pursue DNSSEC.
How can DNSSEC be expensive to deploy if it’s already been deployed?
For almost a decade, the biggest challenge to DNSSEC adoption was that the roots and TLDs weren’t signed. Even if you deployed DNSSEC on your own zones and signed all your records, you would obtain no meaningful security, because the TLDs and roots wouldn’t vouch for your signatures. That problem was solved, apparently in 2010, and so DNSSEC is now “deployed”.
Of course, that’s not really what “deployment” means. DNSSEC is deployed when end-systems uniformly honor DNSSEC signatures and pass along errors when signatures fail to validate, and when some critical mass of important domains are DNSSEC-signed. A massive amount of time, money, and energy remains to be spent on those deployment problems.
How can DNSSEC be hard to deploy? Isn’t it easier than TLS?
There’s a site that tracks DNSSEC outages. The most important DNS zones on the Internet don’t seem to be able to get it right. What makes us believe that the IT department of, say, the country’s 13th biggest insurance firm will do better?
Can’t DNSSEC support Elliptic Curve as well as RSA?
DNSSEC has little-used support for ECDSA using the NIST P-256 curve. The roots and TLDs, upon which the security of the rest of the DNSSEC hierarchy depends, don’t use it. And according to APNIC, in a post cautioning operators not to use ECC DNSSEC, fully 1/3rd of DNSSEC-validating resolvers can’t handle ECDSA signatures.
DNSSEC’s P-256 ECDSA is technologically inferior to modern signing schemes.
The NIST P-curves are most probably not backdoored, despite their reliance on a magic number generated at NSA. But their curve structure is old and error-prone. They are difficult to implement in “constant time”, which is needed to stop attackers from learning secret keys by measuring how long operations take. They use a form that requires careful checking of parameters and special cases to ensure security. They’re also slow. Modern curves are reinforced against these problems at an algorithmic level.
ECDSA, derived from work by NSA in the 1990s, is also an outmoded signature scheme. There is a terrible security trap in the ECDSA algorithm. Every ECDSA signature must be accompanied by an unbiased random number, called a nonce. If any bits of those nonces are predictable, attackers can collect signatures and use them to recover the signing key. This vulnerability famously broke the Playstation 3. Researchers have published attacks that are effective against single-bit nonce leaks. Desktop computers can recover ECDSA keys from single-digit numbers of biased bits. (This attack is so fucking cool you almost want to see ECDSA DNSSEC happen just to have more targets for it.) DNSSEC’s ECC inherits this trap. Modern signature schemes, like Deterministic DSA and Ed25519, don’t.
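To make the trap concrete, here’s a self-contained Python sketch (textbook affine P-256 arithmetic, nothing production-grade) of the extreme case: a single signature whose nonce the attacker knows gives up the private key outright. The published attacks extend this algebra, via lattices, to nonces that leak only a few bits.

```python
# Sketch (plain Python, textbook affine arithmetic, not production code) of
# why ECDSA nonce leakage is fatal: given one signature (r, s) over a hash h
# whose nonce k is known, the private key d falls out algebraically as
# d = (s*k - h) * r^-1 mod n. Uses the NIST P-256 domain parameters.
import hashlib
import secrets

# NIST P-256 domain parameters.
p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
a = p - 3
n = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551
G = (0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296,
     0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5)

def add(P, Q):
    # Affine point addition; None represents the point at infinity.
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        lam = (3 * P[0] * P[0] + a) * pow(2 * P[1] % p, -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow((Q[0] - P[0]) % p, -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def sign(d, h, k):
    # Textbook ECDSA: r = x(kG) mod n, s = k^-1 * (h + r*d) mod n.
    r = mul(k, G)[0] % n
    s = pow(k, -1, n) * (h + r * d) % n
    return r, s

# The victim signs a record with a nonce the attacker somehow learns or predicts.
d = secrets.randbelow(n - 1) + 1    # victim's private key
h = int.from_bytes(hashlib.sha256(b"www.example.com. IN A 192.0.2.1").digest(), "big") % n
k = secrets.randbelow(n - 1) + 1    # the "random" per-signature nonce
r, s = sign(d, h, k)

# Key recovery from a single signature plus its nonce.
recovered = (s * k - h) * pow(r, -1, n) % n
assert recovered == d
print("recovered the private key from one signature and its nonce")
```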
Couldn’t DNSSEC simply add support for deterministic signatures on strong curves?
Theoretically yes, but in practice no. Any signature scheme used in the DNSSEC hierarchy needs to be supported by every DNSSEC-validating resolver. The more people deploy DNSSEC now, the harder it gets to add new cryptography to it. It is murderously hard to push new cryptography out to already-deployed protocols. Blatant, well-understood vulnerabilities in TLS have been left unfixed for years while standards groups bickered about how to fix them. And it is much easier to roll out TLS changes than DNSSEC changes.
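For contrast, here’s what “deterministic” buys you, sketched with the PyNaCl library’s Ed25519 signer: the same key and message always produce the same signature, so there is no per-signature randomness to bias or leak.

```python
# Sketch of a deterministic signature scheme, using PyNaCl's Ed25519 signer:
# signing the same message twice with the same key yields byte-identical
# signatures, so there is no per-signature nonce for an attacker to bias
# or recover.
from nacl.signing import SigningKey

sk = SigningKey.generate()
record = b"www.example.com. IN A 192.0.2.1"

sig1 = sk.sign(record).signature
sig2 = sk.sign(record).signature

assert sig1 == sig2
print("Ed25519 is deterministic; signature prefix:", sig1.hex()[:32])
```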
What’s so important about secret hostnames? Is revealing hostnames really that big a problem?
People who aren’t in the business of securing enterprise networks have a hard time understanding this one. I’ll try to provide three major concerns and ask the reader to trust me that there are more.
Hostnames can reveal confidential information, such as the existence of a particular client or customer, or of an upcoming product or feature. Almost every big company is a party to numerous contracts forbidding the disclosure of that information.
Hostnames reveal to attackers the existence of testing, development, and staging servers. Yes, attackers can usually find these servers anyways. But doing so requires effort. Since every network in the world is breakable given enough effort, security must be measured by the cost imposed on attackers to break it. Publishing full zone contents reduces attacker cost and thus security.
Regulations, or (more importantly) the commercial audit checklists required for certification under regulations, can require zone contents to be kept secure. Publishing them can push an organization out of compliance. There are workarounds to that problem, but they cost money.
The big problem with DNSSEC publishing hostnames is the way it violates expectations. Big networks are almost universally managed to avoid publishing lists of hostnames. Even if it were straightforward not to encode confidential information in zones, that policy is simply not the norm today. Network security managers are routinely surprised to learn that DNSSEC reveals that confidential information. The security operations community is not prepared for the impact of widespread DNSSEC deployment.
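For readers who haven’t seen the mechanism: zones signed with plain NSEC chain each name to the next name in the zone, so the full list of hostnames can be read out with ordinary queries (NSEC3 hashes the names instead, though it has problems of its own). A rough dnspython sketch, against a hypothetical zone:

```python
# Rough sketch of NSEC zone walking with dnspython: each NSEC record names the
# next owner name in the zone, so following the chain enumerates every host.
# The zone below is a hypothetical example; this only works for zones signed
# with plain NSEC (NSEC3 hashes names instead).
import dns.name
import dns.resolver

ZONE = dns.name.from_text("internal-example.com.")  # hypothetical signed zone

def walk(zone, limit=1000):
    current = zone
    for _ in range(limit):
        answer = dns.resolver.resolve(current, "NSEC")
        next_name = answer[0].next   # the NSEC rdata's "next name" field
        yield next_name
        if next_name == zone:        # chain wrapped back to the apex: done
            return
        current = next_name

for hostname in walk(ZONE):
    print(hostname)
```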
In Europe, many TLDs are DNSSEC-signed by default; how then can it be considered expensive to deploy?
Anything can be made easier to deploy if you trust someone else to do it for you. It’s unreasonable for most organizations to trust hosting providers and registrars with their cryptographic security. The deployment costs of DNSSEC must therefore reflect the expense of organizations installing and managing their own DNSSEC-signed zones.
Why didn’t you mention DDoS amplification?
A popular argument against DNSSEC is that it amplifies denial-of-service attacks. Here’s how: attackers forge requests that appear to originate from their victims. The responses, generated unwittingly by DNS servers and sent to the victim, dwarf the requests by a factor of tens to hundreds. That gives attackers leverage: by aiming their flooding tools at DNS servers, they can invest a small amount of traffic and trick those servers into flooding victims with gigantic amounts of traffic. This attack is extremely effective in practice.
DNSSEC probably does make DDoS floods easier. But the DNS protocol as currently deployed has similar problems. There are DNS query types (like ANY queries) that don’t involve cryptography and do drastically amplify attacker traffic. It’s true that those query types can easily be filtered, unlike DNSSEC, which breaks when filtered. But in practice, Internet-wide filtering of DNS traffic is unlikely to happen.
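For a ballpark sense of the leverage involved, here’s a dnspython sketch that compares the wire size of a tiny DNSKEY query (DO bit set) with the signed response it elicits; the zone and resolver are arbitrary examples, and an attacker would of course send the query with a spoofed source address.

```python
# Ballpark illustration of DNSSEC response amplification with dnspython:
# compare the wire size of a small DNSKEY query (EDNS0, DO bit, large UDP
# buffer advertised) against the wire size of the signed response. The zone
# and resolver are arbitrary examples; an attacker would send such queries
# with a spoofed source address so the large responses land on the victim.
import dns.flags
import dns.message
import dns.query

RESOLVER = "8.8.8.8"   # example resolver
ZONE = "com."          # example signed zone

query = dns.message.make_query(ZONE, "DNSKEY", want_dnssec=True)
query.use_edns(0, dns.flags.DO, 4096)   # keep DO set, advertise a 4096-byte buffer

response = dns.query.udp(query, RESOLVER, timeout=5)

q_bytes = len(query.to_wire())
r_bytes = len(response.to_wire())
print(f"query: {q_bytes} bytes, response: {r_bytes} bytes, "
      f"amplification ~{r_bytes / q_bytes:.1f}x")
```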
Denial of service attacks are an Internet fact of life. Mitigate one kind of DoS, and attackers quickly find 3 more. The solution to DoS attacks is attribution, not prevention. It would be dishonest of me to pretend that I believed DoS attacks were a valid reason to halt deployment.
What about DNSCurve?
DNSCurve is like the opposite of DNSSEC. DNSSEC starts at the roots and works its way down to the branches, never quite reaching end systems. DNSCurve starts with the end systems and works its way back to the roots. DNSSEC protects DNS resource records. DNSCurve protects entire DNS protocol transactions. DNSSEC relies on SHA-1 and RSA. DNSCurve uses elliptic curve cryptography. DNSSEC was designed by a standards group initiated by a US DoD-funded project in the 1990s. DNSCurve was designed by one of the Internet’s best-known cryptographers just a few years ago. DNSSEC only functions if a critical mass of users adopts it. DNSCurve works today, even if only one person uses it. DNSCurve is better than DNSSEC.
I am not an advocate of DNSCurve, though. “Do nothing” is also a viable option; it’s what we’ve been doing for 20+ years. The burden of proof should be on anyone suggesting we change a core Internet protocol.