16 December 2015

Everything you need to know about encryption: Hint, you’re already using it.

December 8 

A lock icon, signifying an encrypted Internet connection, is seen on an Internet Explorer browser in a file photo illustration.
Congratulations! If you are reading this on The Washington Post website right now, you're using encryption — or at least your browser is. The little lock that probably shows up in your browser's URL bar signals that our site uses HTTPS, a protocol that creates a sort of encrypted digital tunnel between you and our website. 

That encrypted tunnel helps protect you from governments, Internet service providers, your employers, or even the nefarious hackers who might want to spy on or even hijack your Web browsing while they lurk on the WiFi at your local coffee shop. 

When a site has HTTPS turned on, someone trying to get a peek at your online activity can typically see only what site you're visiting, not the actual page you're on or what information you might share on a site. So right now, for instance, someone with access to the network you're connecting through could see that you're reading The Post, but not that you're reading this specific article about encryption. Neat, right? 
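
To put that in concrete terms, here's a hypothetical sketch of what a network observer can and can't see when you load a page over HTTPS (the URL below is invented for illustration):

```python
# Hypothetical example: what someone watching the network sees when you
# load a page over HTTPS. The URL here is made up.
full_url = "https://www.washingtonpost.com/news/the-switch/your-article"

visible_to_observer = "www.washingtonpost.com"     # the site, revealed by DNS
                                                   # lookups and the TLS handshake
hidden_by_https = "/news/the-switch/your-article"  # the page path, plus anything
                                                   # you type into the site
```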

Major e-mail providers, social networks, and all sorts of e-commerce such as online shopping and banking rely on encryption to help keep users' data safe, often without visitors even realizing it because the encryption is just baked into how users experience the Internet. 

But over the past year, the U.S. government has been mired in a debate over encryption, one that some intelligence and law enforcement officials have tried to rekindle in the wake of the recent attacks in Paris and San Bernardino, Calif. 

In a televised address on Sunday, President Obama even alluded to the issue, saying he "will urge high-tech and law enforcement leaders to make it harder for terrorists to use technology to escape from justice." And now, the chairman of the House Homeland Security Committee is calling for a commission on encryption and security threats. 

So let's take a step back and talk about this technology and why it's in the spotlight. 

Where does encryption come from and how does it work? 

The concepts behind modern encryption are really old. In fact, Julius Caesar used a form of encryption known now as a "Caesar cipher" to protect the privacy of his messages. In a Caesar cipher, each letter gets replaced by another letter some fixed number of steps away in the alphabet. For example, if the cipher is used to shift one position to the right, A becomes B, and B becomes C. So a message like "Hi Mom" becomes "Ij Npn" — basically nonsense to someone who doesn't understand your super-secret code. 
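
To make that concrete, here's a toy Python version of a Caesar cipher. The function name is ours, and it's strictly for illustration; nobody should use it to protect anything:

```python
def caesar_encrypt(message: str, shift: int) -> str:
    # Replace each letter with the one `shift` positions later in the
    # alphabet, wrapping from Z back around to A; leave spaces and
    # punctuation alone.
    out = []
    for c in message:
        if c.isalpha():
            base = ord("A") if c.isupper() else ord("a")
            out.append(chr((ord(c) - base + shift) % 26 + base))
        else:
            out.append(c)
    return "".join(out)

print(caesar_encrypt("Hi Mom", 1))  # prints "Ij Npn"
```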

Computers have made coding information through these sorts of systems much faster — and more complex. But at their most basic level, modern forms of digital encryption are math. 

Okay, more specifically, they use mathematical algorithms to scramble data. That scrambling is tied to a digital "key" that unlocks it. If you don't have the key, the encrypted data will look like gibberish, such as that "Ij Npn" message we made using the Caesar cipher. As a general rule, the longer and more complex the key, the harder it is for someone to "break" or decode whatever you've encrypted. 
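
You can watch a modern key at work in a few lines of Python. This sketch uses the Fernet recipe from the third-party cryptography library; the message and variable names are just for illustration:

```python
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()              # a random 256-bit key
ciphertext = Fernet(key).encrypt(b"Hi Mom")
print(ciphertext)                        # unreadable bytes, like "Ij Npn" but far stronger

print(Fernet(key).decrypt(ciphertext))   # b'Hi Mom': right key, data unlocks

wrong_key = Fernet.generate_key()
try:
    Fernet(wrong_key).decrypt(ciphertext)
except InvalidToken:
    print("wrong key: the data stays scrambled")
```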

Okay, but how is encryption actually used? 

There are two basic ways people use digital encryption. The first is to lock up data "at rest," or when you're trying to protect information that's stored somewhere, such as on your computer or your smartphone. You can think of it sort of like a combination safe for your data. In most cases, you use a password or passcode to unlock it. This kind of protection is especially useful if a device gets lost or stolen because it means that whoever gets a hold of it won't be able to dig through whatever personal information might be stored on it. 
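
In rough terms, that combination-safe model works by stretching your passcode into a full encryption key. Here's a simplified Python sketch using the same third-party cryptography library; real devices use dedicated hardware and different algorithms, so treat this as an illustration of the idea, not how any particular phone works:

```python
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passcode(passcode: bytes, salt: bytes) -> bytes:
    # Stretch a short passcode into a full 256-bit key; the many
    # iterations deliberately slow down guessing attacks.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(passcode))

salt = os.urandom(16)                  # random value stored alongside the data
key = key_from_passcode(b"1234", salt)
locked = Fernet(key).encrypt(b"photos, messages, contacts...")
# Whoever steals the device holds only `salt` and `locked`;
# without the passcode, neither is enough to recover the data.
```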

The second is to secure data "in transit," or when you're trying to protect information as it travels across the Internet. Here, you can think of encryption as sort of a decoder ring: The two sides of a digital conversation exchange keys that let them understand what each side is saying but prevent others from being able to understand it. 
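
Here's roughly what that exchange looks like in code, sketched with the X25519 Diffie-Hellman primitive from the Python cryptography library (the "alice" and "bob" names are the traditional stand-ins for the two ends of a conversation):

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Each side generates a private key and shares only the public half.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Combining your own private key with the other side's public key
# yields the same shared secret on both ends: the "decoder ring."
alice_secret = alice_private.exchange(bob_private.public_key())
bob_secret = bob_private.exchange(alice_private.public_key())
assert alice_secret == bob_secret

# An eavesdropper sees only the public keys, which aren't enough
# to reconstruct the shared secret.
```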

The most secure form of this type of "in transit" encryption is called "end-to-end." It means that only the sender and the recipient of a message can unlock it — so, basically, only each "end" of the conversation holds the keys. Some of the most advanced forms of "end-to-end" use something called "perfect forward secrecy," which works by having each new message serve as a kind of key for the next message. That means that even if someone manages to break into one message, they can't necessarily unlock the whole conversation. 
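
A toy way to picture that chaining in code: derive each message's key from the previous one with a one-way hash. This is only a sketch of the idea; real apps such as Signal use far more elaborate "ratchet" designs that also mix in fresh key exchanges:

```python
import hashlib

def next_message_key(current_key: bytes) -> bytes:
    # A hash only runs one way: someone who steals the key for message 5
    # can't work backwards to the keys that protected messages 1 through 4,
    # as long as each side deletes old keys after using them.
    return hashlib.sha256(current_key).digest()

key = b"initial shared secret from the key exchange"  # illustrative only
for message_number in range(1, 4):
    key = next_message_key(key)  # a fresh key for every message
```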

HTTPS is actually a form of end-to-end encryption, but instead of connecting two people, it connects one person to the server that hosts a website. And that means that the company behind a site can unlock the data. 

For instance, Gmail uses HTTPS by default when you connect to its e-mail system from the Web. That means that there's automatically "end-to-end" encryption between users and Google, but not directly between one Gmail user and another Gmail user. That's why Google can unlock your e-mails so it can do things like use them to help decide what advertisements to show you and let you easily search through your archives. (It's worth noting, though, that Google is working on a browser extension to help Gmail users send e-mails that are encrypted end-to-end.) 

Similarly, when you buy something online, HTTPS protects your credit card data as it crosses the Web but lets the company or whatever payment processor it uses read it so you are actually able to pay for things. 

So why is the government freaking out about encryption? 

It's not, really. Or at least, not all of it is freaking out about all kinds of encryption. In fact, Obama himself and many of the officials who have expressed concern about some types of encryption have voiced support for encryption generally and acknowledge that it plays a key role in keeping data safe from hackers. The federal government is actually in the midst of rolling out HTTPS across all of its websites because of the privacy and security benefits it offers. 

But some law enforcement and intelligence officials are worried about big tech companies rolling out products that automatically use the strongest forms of encryption to protect users' data. 

This trend was already underway before former National Security Agency contractor Edward Snowden's revelations about the extent of the government's surveillance powers but has since gained steam — with an added boost from a wave of data breaches at major companies. The end result of that push is that some tech companies are replacing systems in which they previously could unlock data with ones in which they can't because they decided to stop holding on to the keys. 

Apple is probably the most notable example here. It has used end-to-end encryption for products such as iMessage and FaceTime for a while now. And in fall 2014, it announced that iOS would automatically encrypt data stored on smartphones and other mobile devices with a key that's based on a user's passcode or password and stored locally on their device. That means that if users have turned off automatic backups to Apple's servers, the company doesn't have a way to access their data. 

That's a problem for some law enforcement officials because it means that the company is just unable to unlock data stored on a phone, even if it's served with a court order. Those officials warn that these protections let criminals and terrorists "go dark" and hide their communications from investigators. And in some cases, they have pushed for companies to roll back these new protections or redesign them to build in ways for the government to access encrypted information from their products. 

Encryption experts typically call that kind of an access point a "backdoor." 

A backdoor? 

Yes. Although it's worth noting that government officials who want this access generally reject that term. Instead, they say that they are asking for a "front door" — a system by which they are able to unlock encrypted information if they have received judicial approval. 

But experts, tech companies and privacy advocates say that no matter what you call it, an access point like that weakens encryption. They argue that the approach has a lot of problems: One is that building in that sort of access introduces more opportunities for someone to mess up the code and include bugs that leave systems vulnerable to hackers. 

Another more basic problem, they say, is that it's really hard to control who will have access to a door once you build one. Hackers, they warn, will target the door and almost inevitably find a way to break in. And if the United States passes a law requiring such a door, how are companies that operate globally supposed to react when another country such as China or Russia demands the same thing? 

That question actually speaks to a really basic tension inside the U.S. government over encryption: Some parts of the government actually help fund the development of strong, end-to-end encrypted products because they help protect journalists and activists in countries that crack down on the Internet through censorship and surveillance. But other parts are worried that those same tools make it harder for them to track terrorists and criminals. 

Since Apple announced its iOS changes last year, the U.S. government has debated this issue at length. But internal documents from that debate suggest that officials couldn't figure out a way to build the kind of access some wanted without creating other security risks that their own experts believed would outweigh the benefits. So, in October, the administration said it didn't plan to push for legislation that would require companies such as Apple to be able to unlock encrypted data for the government — at least at that point. 

But then the Nov. 13 Paris attacks — claimed by the Islamic State militant group — revived that conversation. 

Wait, how was encryption involved in the Paris attacks? 

That's not totally clear, but in the immediate aftermath, some officials pointed specifically to increasingly popular end-to-end encrypted messaging apps as a possible planning ground for the attacks. 

There are actually some signs that the attackers didn't use encryption. For instance, investigators found a phone apparently belonging to one of the attackers that included an unencrypted text message thought to be a signal to launch the attacks, and they used information from the phone to help track down the alleged mastermind of the plot, Abdelhamid Abaaoud, according to local reports.

That's not to say that the Islamic State doesn't use encryption or doesn't know about it — and there are other signs that some of the attackers may have used the tools. A New York Times report last month said that officials believe Abaaoud started to use encryption after a planned attack in Belgium was foiled by phone taps. Earlier this year, Abaaoud gave one would-be attacker a USB stick with an encryption key on it and told him to await instructions via e-mail, according to the report. 

Numerous reports have also linked Islamic State supporters to Telegram, a messaging app that allows for end-to-end encrypted "secret" chats, as well as public channels. The developers have since cracked down on jihadists' use of public channels to spread propaganda. 

But this isn't really new: Terrorist groups, including al-Qaeda, were reported to be using encryption tools as far back as the 1990s, long before the Snowden revelations helped set off the current expansion of consumer end-to-end encrypted messaging products. 

Can the government stop terrorists from using encryption? 

Well, no. The most the government can probably do is bar companies from offering the most secure forms of encryption to their users. But encryption isn't just one product. Just like the math it's based on, it's really more of a concept or an idea than a specific technical tool. 

And it's pretty impossible to outlaw ideas. 

In fact, the U.S. already tried something like that in the 1990s, during the policy debates now known as the "crypto wars" — an effort that failed to stop the spread of encryption and ended up creating a bunch of security problems that still haunt the Internet. 

Even if the government stopped big tech companies from offering end-to-end encryption, the tech would still be available. For one, the U.S. government has little authority to stop corporations outside its borders from offering the same capabilities. But perhaps more importantly, many other popular encrypted communication apps and tools are the result of open-source projects that rely on volunteer developers all around the world to make them better, so there's no one person or company that the government can get to shut them down. 

And because open-source projects make their code available to the public, there's nothing to stop Islamic State supporters from using their own servers to set up their own versions of end-to-end messaging apps. And if they don't want to go that far, terrorists could revert to older, more cumbersome open-source products that are already out there and provide end-to-end encryption for things such as e-mail. 

That's why many privacy advocates argue that stopping companies from offering the strongest forms of encryption would probably only hurt the privacy of everyday users: Someone who is doing something they know is high-risk, such as planning a crime or a terrorist attack, is probably willing to deal with going through some extra steps to communicate securely. Your grandparents might not, even if they're talking about something that could be sensitive down the line, such as personal health information. 

So how can the government get around encryption to help fight crime and terrorism? 

The good news is that the government has options. Remember how I said that someone spying on your connection with this site could still see that you're looking at The Post even if they can't see which page? Even if you can't decode the content of an encrypted message, someone watching online traffic or providing a communication service can still generally see who is on each side of a conversation and when they're talking. If analyzed correctly, that kind of information, dubbed "metadata," can help law enforcement get a good idea of what a target's social web looks like. 
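
To make "metadata" concrete, here's a hypothetical sketch of what such a record might look like. Every field and value below is invented for illustration:

```python
# Hypothetical illustration: what a network observer might log about
# a single end-to-end encrypted message without ever reading it.
observed = {
    "from": "203.0.113.7",           # who sent it
    "to": "198.51.100.9",            # who received it
    "when": "2015-12-08T14:32:05Z",  # when it was sent
    "size_bytes": 2048,              # roughly how long it is
}
# What the observer can't see:
# content = <unreadable ciphertext>
```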

If the government gets access to encrypted data stored on a device, in some cases it will be able to "brute force" its way through the protections. That means throwing a bunch of computing power at the information to break the encryption. One way to do that is by trying all the possible keys — and it's something that the government is in a position to do because it has a lot more resources than the average hacker. 
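
The Caesar cipher from earlier is an extreme illustration of brute force: it has only 25 possible keys, so an attacker can simply try them all. Modern ciphers are another story entirely (a 128-bit key allows about 3.4 × 10^38 possibilities), but the toy sketch below shows the idea:

```python
def caesar_decrypt(ciphertext: str, shift: int) -> str:
    # Undo a Caesar cipher by shifting each letter back `shift` places.
    out = []
    for c in ciphertext:
        if c.isalpha():
            base = ord("A") if c.isupper() else ord("a")
            out.append(chr((ord(c) - base - shift) % 26 + base))
        else:
            out.append(c)
    return "".join(out)

# "Brute force": try every possible key and look for readable output.
for shift in range(1, 26):
    print(shift, caesar_decrypt("Ij Npn", shift))  # shift 1 prints "Hi Mom"
```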

But the government can also resort to plain old hacking: Although it doesn't talk about it much, different agencies have teams that try to figure out how to infect targets' computers with viruses and malware. And if you can infect someone's whole phone or computer, you can generally find ways around encryption, such as watching as someone enters the password to unlock it or grabbing information before it gets encrypted. 

These aren't perfect solutions — they may take longer to unlock data or not work at all in some circumstances — but they are alternatives to approaches such as backdoors, which many experts say could leave everyone less secure. 

What now? 

Who knows, really. It's not clear what the U.S. government's next steps will be on this, and some European countries are debating many of the same issues. 

But this issue clearly isn't going away anytime soon — and, hopefully, this guide has at least helped you understand how the technology actually works and what's at stake in those debates. 

Andrea Peterson covers technology policy for The Washington Post, with an emphasis on cybersecurity, consumer privacy, transparency, surveillance and open government.
