
Monday, June 17, 2013

How to 'backdoor' an encryption app

Over the past week or so there's been a huge burst of interest in encryption software. Applications like Silent Circle and RedPhone have seen a major uptick in new installs. CryptoCat alone has seen a zillion new installs, prompting several infosec researchers to nearly die of irritation.

From my perspective this is a fantastic glass of lemonade, if one made from particularly bitter lemons. It seems all we ever needed to get encryption into the mainstream was... ubiquitous NSA surveillance. Who knew?

Since I've written about encryption software before on this blog, I received several calls this week from reporters who want to know what these apps do. Sooner or later each interview runs into the same question: what happens when somebody plans a crime using one of these? Shouldn't law enforcement have some way to listen in?

This is not a theoretical matter. The FBI has been floating a very real proposal that would either mandate wiretap backdoors in these systems, or alternatively impose fines on providers that fail to cough up user data. This legislation goes by the name 'CALEA II', after the CALEA act which governs traditional (POTS) phone wiretapping.

Personally I'm strongly against these measures, particularly the ones that target client software. Mandating wiretap capability jeopardizes users' legitimate privacy needs and will seriously hinder technical progress in this area. Such 'backdoors' may be compromised by the very same criminals we're trying to stop. Moreover, smart/serious criminals will easily bypass them.

To me, a more interesting question is how such 'backdoors' would even work. This isn't something you can really discuss in an interview, which is why I decided to blog about them. The answers range from 'dead stupid' to 'diabolically technical', with the best answers sitting somewhere in the middle. Even if many of these are pretty obvious from a technical perspective, we can't really have a debate until they've all been spelled out.

And so: in the rest of this post I'm going to discuss five of the most likely ways to add backdoors to end-to-end encryption systems.

1. Don't use end-to-end encryption in the first place (just say you do.)

There's no need to kick down the door when you already have the keys. Similarly there's no reason to add a 'backdoor' when you already have the plaintext. Unfortunately this is the case for a shocking number of popular chat systems -- ranging from Google Talk (er, 'Hangouts') to your typical commercial Voice-over-IP system. The same statement also applies to at least some components of more robust systems: for example, Skype text messages.

Many of these systems use encryption at some level, but typically only to protect communications from the end user to the company's servers. Once there, the data is available to capture or log to your heart's content.
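
To make the point concrete, here's a minimal sketch (all names hypothetical, not any real provider's code) of what a client-to-server-encrypted chat relay looks like from the provider's side. TLS protects the wire, but it terminates at the server, which is then free to log everything:

    import logging

    logging.basicConfig(filename="chat.log", level=logging.INFO)

    def relay_message(sender, recipient, conn_in, conn_out):
        # TLS protected the wire, but it terminated at this server:
        # recv() hands us fully decrypted plaintext.
        plaintext = conn_in.recv(4096)

        # Nothing in a client-to-server design prevents this line.
        logging.info("%s -> %s: %r", sender, recipient, plaintext)

        conn_out.sendall(plaintext)  # re-encrypted on the hop to the recipient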

2. Own the directory service (or be the Certificate Authority).

Fortunately an increasing number of applications really do encrypt voice and text messages end-to-end -- meaning that the data is encrypted all the way from sender directly to the recipient. This cuts the service out of the equation (mostly), which is nice. But unfortunately it's only half the story.

The problem here is that encrypting things is generally the easy bit. The hard part is distributing the keys (key signing parties, anyone?). Many 'end-to-end' systems -- notably Skype*, Apple's iMessage and Wickr -- try to make your life easier by providing a convenient 'key lookup service', or else by acting as trusted certificate authorities to sign your keys. Some will even store your secret keys.**

This certainly does make life easier, both for you and for the company, should it decide to eavesdrop on you. Since the service controls key distribution, it can just as easily send you its own public key -- or a public key belonging to the FBI. This approach makes it ridiculously easy for providers to run a Man-in-the-Middle (MITM) attack and intercept any data they want.
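
To see why this is so powerful, here's a toy sketch of a key lookup service (a hypothetical API, not any real service's code). An honest operator returns the key the user registered; a compelled or malicious one can swap in anything:

    user_keys = {}        # username -> public key uploaded at registration
    wiretap_targets = {}  # username -> interception key (the FBI's, say)

    def lookup_public_key(username):
        if username in wiretap_targets:
            # The caller can't distinguish this key from the real one
            # without an out-of-band check like fingerprint verification.
            return wiretap_targets[username]
        return user_keys[username]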

This is always the best way to distinguish serious encryption systems from their lesser cousins. When a company tells you they're encrypting end-to-end, just ask them: how are you distributing keys? If they can't answer -- or worse, they blabber about 'military grade encryption' -- you might want to find another service.

3. Metadata is the new data.

The best encryption systems push key distribution offline, or even better, perform a true end-to-end key exchange that only involves the parties to the communication. The latter applies to several protocols -- notably OTR and ZRTP -- used by apps like Silent Circle, RedPhone and CryptoCat.

You still have to worry about the possibility that an attacker might substitute her own key material into the connection (an MITM attack). So the best of these systems add a verification phase in which the parties check a key fingerprint -- preferably in person, but possibly by reading it over a voice connection (you know what your friend's voice sounds like, don't you?). Some programs will even convert the fingerprint into a short 'authentication string' that you can read to your friend.
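
As a rough illustration, here's a toy version of both ideas -- this is not OTR's or ZRTP's actual construction, just a sketch of the shape. A hex fingerprint is computed over the public key, and a short authentication string is derived from the key-exchange transcript and mapped to words you can read aloud:

    import hashlib

    # Toy 8-word list; real systems use far larger lists for more entropy.
    WORDLIST = ["alpha", "bravo", "charlie", "delta",
                "echo", "foxtrot", "golf", "hotel"]

    def fingerprint(public_key_bytes):
        # First 160 bits of a SHA-256 hash, grouped for easy comparison.
        digest = hashlib.sha256(public_key_bytes).hexdigest()
        return " ".join(digest[i:i + 8] for i in range(0, 40, 8))

    def short_auth_string(transcript_bytes):
        # Hash the key-exchange transcript and map a few bytes to words
        # that both parties read to each other over the call.
        h = hashlib.sha256(transcript_bytes).digest()
        return "-".join(WORDLIST[b % len(WORDLIST)] for b in h[:3])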

From a cryptographic perspective the design of these systems is quite good. But you don't need to attack the software to get useful information out of them. That's because while encryption may hide what you say, it doesn't necessarily hide who you're talking to.

The problem here is that someone needs to move your (encrypted) data from point A to point B. Typically this work is done by a server operated by the company that wrote the app. While the server may not be able to eavesdrop on you, it can easily log the details (including IP addresses) of each call. This is essentially the same data the NSA collects from phone carriers.
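
A sketch of what that logging looks like on an honest-but-curious relay (hypothetical names throughout; the payload stays opaque, but the metadata doesn't):

    import logging
    import time

    logging.basicConfig(filename="calls.log", level=logging.INFO)

    def forward(dst_ip, ciphertext):
        pass  # hand the packet to the network layer (stubbed out here)

    def route_packet(src_ip, dst_ip, ciphertext):
        # The relay can't read the call itself...
        logging.info("call: %s -> %s at %.0f, %d bytes",
                     src_ip, dst_ip, time.time(), len(ciphertext))
        # ...but who called whom, when, and from where now sits in calls.log.
        forward(dst_ip, ciphertext)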

Particularly when it comes to VoIP (where anonymity services like Tor just aren't very effective), this is a big problem. Some companies are out ahead of it: Silent Circle (a company whose founders have threatened to chew off their own limbs rather than comply with surveillance orders) doesn't log any IP addresses. One hopes the other services are as careful.

But even this isn't perfect: just because you choose not to collect doesn't mean you can't. If the government shows up with a National Security Letter compelling your compliance -- or just hacks your servers -- that information will be obtained.

4. Escrow your keys.

If you want to add real eavesdropping backdoors to a properly-designed encryption protocol you have to take things to a whole different level. Generally this requires that you modify the encryption software itself.

If you're doing this above board, you'd refer to it as 'key escrow'. A simple technique is just to add an extra field to the wire protocol. Each time your clients agree on a session key, you have one of the parties encrypt that key under the public key of a third party (say, the encryption service, or a law enforcement agency). The encrypted key gets shipped along with the rest of the handshake data. PGP used to provide this as an optional feature, and the US government unsuccessfully tried to mandate an escrow-capable system called Clipper.***
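
Here's roughly what that extra field might look like, as a sketch (assuming the pyca/cryptography library; the function and field names are made up for illustration):

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def add_escrow_field(session_key, handshake_fields, escrow_public_pem):
        # Encrypt the freshly negotiated session key to the escrow
        # agent's RSA key (the provider's, or a law enforcement agency's).
        escrow_pub = serialization.load_pem_public_key(escrow_public_pem)
        escrowed = escrow_pub.encrypt(
            session_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))
        # The encrypted key rides along with the normal handshake data.
        return dict(handshake_fields, escrowed_session_key=escrowed)

Anyone holding the escrow private key can now recover the session key for any recorded conversation.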

In theory key escrow features don't weaken the system. In practice this is debatable. The security of every connection now depends on the security of your master 'escrow' secret key. And experience tells us that wiretapping systems are surprisingly vulnerable. In 2009, for example, a group of Chinese hackers were able to breach the servers used to manage Google's law enforcement surveillance infrastructure -- giving them access to confidential data on every target the US government was surveilling.

One hopes that law enforcement escrow keys would be better secured. But they probably won't be.

5. Compromise, Update, Exfiltrate.

But what if your software doesn't have escrow functionality? Then it's time to change the software.

The simplest way to add an eavesdropping function is just to issue a software update. Ship a trustworthy client, ask your users to enable automatic updates, then deliver a new version when you need to. This gets even easier now that some operating systems are adding automatic background app updates.

If updates aren't an option, there are always software vulnerabilities. If you're the one developing the software, you have some extra capabilities here. All you need to do is keep track of a few minor vulnerabilities in your server-client communication protocol -- which may be secured by SSL and thus protected from third-party exploits. These can be weaknesses as minor as an uninitialized memory structure or a 'wild read' that can be used to scan for key material.

Or better yet, put your vulnerabilities in at the level of the crypto implementation itself. It's terrifyingly easy to break crypto code -- for example, the difference between a working random number generator and a badly broken one can be a single line of code, or even a couple of instructions. Re-use some counters in your AES implementation, or (better yet) implement ECDSA without a proper random nonce. You can even exfiltrate your keys using a subliminal channel.
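
To give a flavor of how little it takes: if an ECDSA implementation ever signs two different messages with the same nonce, anyone who sees both signatures can solve for the private key with a few lines of modular arithmetic. A sketch (plain integer math, Python 3.8+; n is the curve group order, z1 and z2 the message hashes, (r, s1) and (r, s2) the two signatures):

    def recover_private_key(n, r, s1, s2, z1, z2):
        # With a repeated nonce, both signatures share the same r.
        # Since s = k^-1 * (z + r*d) mod n, subtracting cancels d...
        k = (z1 - z2) * pow(s1 - s2, -1, n) % n
        # ...and once the nonce k is known, the private key d falls out.
        d = (s1 * k - z1) * pow(r, -1, n) % n
        return d

This is exactly the failure that exposed Sony's PS3 firmware signing key, which gives you a sense of how unforgiving these implementations are.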

Or just write a simple exploit like the normal kids do.

Unfortunately there's very little we can do about things like this. Probably the best defense is to use open source code, disable software updates until others have reviewed them, and then pray you're never the target of a National Security Letter. Because if you are -- none of this crap is going to save you.

Conclusion

I hope nobody comes away with the wrong idea about any of this. I wouldn't seriously recommend that anyone add backdoors to a piece of encryption software. In fact, this is just about the worst idea in the world.

That said, encryption software is likely to be a victim of its own success. Either we'll stay in the technical ghetto, with only a few boring nerds adopting the technology. Or the world will catch on. And then the pressure will come. At that point the authors of these applications are going to face some tough choices. I don't envy them one bit.

Notes:

* See this wildly out-of-date security analysis (still available on Skype's site) for a description of how this system worked circa 2005.

** A few systems (notably Hushmail back in the 90s) will store your secret keys encrypted under a password. This shouldn't inspire a lot of confidence, since passwords are notoriously easy to crack. Moreover, if the system has a 'password recovery' service (such as Apple's iForgot) you can more or less guarantee that even this kind of encryption isn't happening.

*** The story of Clipper (and how it failed) is a wonderful one. Go read Matt Blaze's paper.

Posted by Matthew Green at 10:55 AM
