Does encryption still work?

By Chris Walker

Posted 2015.03.11

In light of ongoing revelations about the global surveillance efforts of the US National Security Agency (NSA) and its intelligence allies, you might be wondering if any of this really matters anymore. Much of the recent media coverage appears to suggest that, not only are They recording everything we do and say on the Internet, but now They've defeated our encryption. The sky is falling; all is lost; etc. It is certainly the case that we have learned a great deal from this episode, and that toolkits like Security in-a-Box must be updated to reflect some of those lessons. (Watch this space!) And, of course, it is also true that bombshells remain to be dropped; questions to be answered; suspicions to be confirmed or denied. However, to the best of our knowledge—where "our" knowledge, in this case, is largely that of the security community's best and brightest researchers, cryptologists and tool developers—nothing has come to light that should lead us to abandon hope. While perfect security is (and will remain) unachievable, with the right tools and tactics, we can still protect our digital privacy and security in meaningful ways.

In the words of Edward Snowden himself, "Properly implemented strong crypto systems are one of the few things that you can rely on..." One of the primary goals of Security in-a-Box is to help demystify those two facets of reliability: to identify security tools that the community considers "strong," and to guide you through whichever aspects of "properly implemented" fall on your shoulders, as a user of those tools. Over the coming months, we expect to learn a little about how we've done so far, and we hope to learn a lot about how we can do better.

What has changed?

On September 6th, The Guardian, the New York Times and ProPublica jointly reported on a classified NSA program called BULLRUN that is dedicated to subverting various forms of encryption meant to protect the privacy of online communications. Crucially, efforts by the NSA to ensure its own ability to bypass these protections have the effect of weakening them against other attackers, as well. It is virtually impossible to install "back doors" into security devices, protocols, software or standards without compromising those systems more generally. (Nor does the NSA appear to be trying all that hard to avoid doing so. One particularly blunt instrument in the $250 million BULLRUN arsenal involved blatant manipulation of standards-setting initiatives, with the express goal of weakening any recommendations produced.)

The second half of Snowden's statement, mentioned above, is less optimistic: "Properly implemented strong crypto systems are one of the few things that you can rely on. Unfortunately, endpoint security is so terrifically weak that NSA can frequently find ways around it." Or, as Bruce Schneier puts it, "If the NSA wants in to your computer, it's in. Period." So, what does this mean for users of Security in-a-Box? Admittedly, it means you're better off not being a personal target of the most well-resourced espionage organization in the world. But, even if you are—or if it turns out that other nefarious organizations and individuals have similar capabilities—the advice in this toolkit can make their lives much more difficult.

What remains?

We do not yet have specifics on what systems, standards, services and tools the NSA and its allies have attempted to undermine, but security researchers who have weighed in on the topic remain confident in the core technology behind most of the secure communication software recommended in this toolkit. One example is Pretty Good Privacy (also known as PGP or GPG), the encryption scheme implemented by gpg4usb, by the Enigmail add-on to the Thunderbird email client and by the APG application for Android. Another good example is Off-the-Record (OTR), the instant messaging (IM) encryption used by Pidgin and Adium, and by TextSecure for text messaging (SMS). Along with ZRTP—the Voice over IP (VoIP) encryption used by a number of tools that do not yet appear in Security in-a-Box—these technologies have several things in common:

  • They are Free and Open Source Software (FOSS), and they are based on open standards.

  • They rely on end-to-end encryption, which means your content is scrambled when it leaves your computer or smartphone, and it stays that way until it reaches the person with whom you are communicating. Unlike other forms of encryption (HTTPS-protected webmail, for example), the tools listed above prevent your service provider from understanding the data you send and receive. This, in turn, protects you from anyone who might be monitoring or pressuring that provider. (A minimal sketch of this idea appears just after this list.)

  • They have been around for a while, and are well-respected by the digital security community.

  • Unfortunately, they are not as common as they should be, nor as easy to use as they could be. And, they do not work unless the person with whom you wish to communicate also uses them.
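
The "end-to-end" property is easier to see in code than in prose. Below is a minimal sketch in Python, using the third-party cryptography package purely for illustration; none of the tools above are actually built this way, and the names and message are made up. The point is simply that encryption happens before anything leaves the sender's machine, and only the recipient's private key can undo it.

```python
# End-to-end encryption, simplified: the sender encrypts with the recipient's
# public key, so anything an intermediary stores is unreadable ciphertext.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The recipient generates a key pair and shares only the public half.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The sender encrypts before the message ever leaves their machine.
ciphertext = recipient_public.encrypt(b"meet at noon", oaep)

# A service provider relaying `ciphertext` cannot read it;
# only the holder of the private key can.
plaintext = recipient_private.decrypt(ciphertext, oaep)
assert plaintext == b"meet at noon"
```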

What does that mean for me?

At risk of getting a bit too technical a bit too soon, the following is a brief list of recommendations in light of what we have learned over the past few months. This will make more sense if you've already read through Security in-a-Box (or another guide like it), but even if you are new to this material, you might want to keep these suggestions handy as you begin to learn more about digital security.

1) At least one organization out there probably is recording everything you do or say on the Internet. But, that doesn't mean they can read and understand all of it! Use FOSS, end-to-end encryption tools to protect the content of your sensitive communications. You can read more about this below.

2) Even though attackers probably won't be able to read your end-to-end encrypted communication, we now know that at least one organization is trying really hard. To ensure that they, and others like them, are not successful, you should use strong keys. (For GPG, use at least a 2048-bit RSA key.) If you have an older, weaker (1024-bit and/or DSA) GPG key, now is probably a good time to create a new one; there is a brief key-generation sketch after this list. And, in general, ask around or search the Web for information about other "details" you might not understand at first. As mentioned in the toolkit, for example, "authenticating" your encrypted email and instant messaging contacts is no less important than installing the software itself and exchanging keys. As cryptologists like to say, "the math works," but you have to do your part, as well.

3) Against a truly determined attacker with significant resources, the OTR-based messaging tools described in this resource—Pidgin and TextSecure—are in some ways safer than GPG-encrypted email. Specifically, this is because every message you send or receive using OTR is automatically encrypted with a new key. So, even if somebody records all of your (unreadable) messages over the course of several years, and then somehow gets their hands on your secret key, they still won't be able to decrypt all of that content. This is not true with GPG. In case you're curious, this property is called "perfect forward secrecy" (PFS), and it holds true for ZRTP-encrypted VoIP tools, as well, including Jitsi used with the ostel.co service. (A simplified sketch of how per-session keys work appears after this list.)

4) Be aware that hiding basic data about your Internet and mobile phone "traffic" (also called "meta-data") is much more difficult than hiding the actual content of your communications. Examples of meta-data include: who you talk to, from where, when, for how long and through what channels. If you need to protect this sort of information from a well-connected adversary, use the Tor Browser anonymity tool (in addition to the other software recommended here), and make sure you study up on how to use it properly. Or, better yet, restart your PC with the Tails operating system (and learn how to use the Linux versions of those same tools). This will increase the likelihood that your Internet traffic is successfully anonymized by the Tor network.

5) Be careful with commercial software and services. Some of these resources, like Gmail, almost certainly offer strong privacy protections against outside attackers, but many of the companies who develop and operate them have shown a willingness to install monitoring code, back doors and other "features" that weaken the security of their own software. Naturally, this is most relevant if you believe that these companies—or governments with direct influence over them—are likely to cooperate with those who oppose your online and offline activities. Keep in mind, however, that built-in surveillance mechanisms like this also represent fundamentally bad security design. (This warning applies to commercial operating systems, as well, by the way, including Microsoft Windows, Apple's Mac OS X and all major smartphone platforms.)

6) Make sure you are using the latest stable version of all security tools. And, while you're at it, learn how to "verify" the software you download (using checksums or digital signatures) to ensure that it's authentic. This won't protect you from a back door installed by the developer—voluntarily or under pressure from an organization like the NSA—but it will keep you safe from many other attacks. (A short checksum example appears below, alongside the other sketches.)
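
To make point 2 above more concrete, here is a short sketch of generating a new RSA key of adequate length. It assumes GnuPG is installed and uses the third-party python-gnupg wrapper purely for illustration; the name, email address, passphrase and home directory are placeholders, and you can accomplish the same thing with gpg's built-in key generation on the command line.

```python
# Generate a new, stronger GPG key pair (sketch; placeholder identity).
import gnupg

gpg = gnupg.GPG(gnupghome="/home/alice/.gnupg")  # hypothetical path

# Ask for an RSA key of at least 2048 bits (4096 if you can afford it).
key_settings = gpg.gen_key_input(
    key_type="RSA",
    key_length=4096,
    name_real="Alice Example",                      # placeholder identity
    name_email="alice@example.org",                 # placeholder address
    passphrase="use-a-long-unique-passphrase",      # placeholder passphrase
)
new_key = gpg.gen_key(key_settings)

# The fingerprint is what you compare, over a trusted channel, when
# "authenticating" keys with your contacts.
print("New key fingerprint:", new_key.fingerprint)
```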
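
To illustrate point 3, the sketch below shows the idea behind perfect forward secrecy: both parties derive each session's key from throw-away ("ephemeral") key pairs, then delete them. It uses the third-party cryptography package and X25519 key agreement as a stand-in; it is a simplified picture of the principle, not the actual OTR or ZRTP protocol.

```python
# Perfect forward secrecy, simplified: per-session keys come from ephemeral
# key pairs that are discarded after use, so recorded ciphertext cannot be
# decrypted later, even if a long-term key eventually leaks.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def derive_session_key(shared_secret):
    """Turn a raw Diffie-Hellman result into a 256-bit symmetric key."""
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"per-session key").derive(shared_secret)

# Each side generates a throw-away key pair and exchanges only the public half.
alice_ephemeral = X25519PrivateKey.generate()
bob_ephemeral = X25519PrivateKey.generate()

alice_key = derive_session_key(alice_ephemeral.exchange(bob_ephemeral.public_key()))
bob_key = derive_session_key(bob_ephemeral.exchange(alice_ephemeral.public_key()))
assert alice_key == bob_key  # both ends now hold the same one-time key

# The next conversation repeats the exchange with brand-new ephemeral pairs,
# and the old private halves are deleted; that deletion is what makes
# previously recorded traffic unrecoverable.
```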
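
Finally, for point 6, verifying a download can be as simple as comparing a file's checksum against the one published by the developers. This sketch uses only Python's standard library; the filename and expected value are placeholders for whatever the project's download page lists (ideally a page or file that is itself GPG-signed).

```python
# Verify a downloaded installer against a published SHA-256 checksum.
import hashlib

EXPECTED_SHA256 = "replace-with-the-published-checksum"  # placeholder value

def sha256_of(path, chunk_size=1024 * 1024):
    """Hash a file in chunks so large downloads need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of("pidgin-installer.exe")  # placeholder filename
if actual == EXPECTED_SHA256:
    print("Checksum matches: the file was not corrupted or replaced in transit.")
else:
    print("Checksum mismatch: do NOT install this file.")
```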

Troubles all the way down?

At this point, you might be looking back at what we said about the NSA's BULLRUN program and thinking, "But even if I use trusted, open-source, end-to-end encryption software based on open, transparent standards—and even if I use it properly—how does that help me if the standards themselves have been poisoned by the NSA?" First of all, if it comes to light that the NSA has somehow compromised the most fundamental building blocks upon which these particular encryption tools were built, and nobody has noticed, then yes, that will be a sad day. Lucky for us, that does not appear to be the case.

Encryption schemes like GPG, OTR and ZRTP are themselves open standards, as are the underlying protocols and algorithms from which they were constructed: AES symmetric encryption, RSA asymmetric encryption and signing, DSA signing, Diffie-Hellman key agreement and SHA hash algorithms, among others. These building blocks—which are also used in secure data storage tools like VeraCrypt and KeePassXC—have all been around for quite some time. And, unlike some other standards, none have yet shown evidence of NSA sabotage.
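
To give a sense of what one of these building blocks looks like in practice, here is a tiny sketch of AES in its authenticated AES-GCM mode, again using the third-party cryptography package. Real tools like GPG, OTR, VeraCrypt and KeePassXC combine primitives like this with key management, authentication and much else; the sketch shows only the raw primitive, with a made-up message.

```python
# AES-GCM: the symmetric-encryption building block, in isolation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # a random 256-bit key
nonce = os.urandom(12)                     # must never be reused with the same key

ciphertext = AESGCM(key).encrypt(nonce, b"diary entry", None)
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
assert plaintext == b"diary entry"
```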

That could change, of course, as the NSA was certainly involved at one level or another in the selection of many cryptographic standards, including several of those listed above. Regardless, the openness of these standards appears to be serving its intended purpose, and is clearly preferable to the alternative. Trusted cryptologists can at least dig into the specifications, searching for intentional flaws as well as the accidental ones they've been watching out for all along, rather than just crossing their fingers and hoping for the best. Furthermore, the NSA's credibility as a defender of Internet security is pretty much shot these days, and one can hope that their role in the establishment of future standards will be severely curtailed for a good long while.