One of the things you quickly learn when you work for a data security company is that data security doesn't work the way normal people think it does. For example, "normal people," apparently, think that they can somehow get off the leaked Ashley Madison list, the latest data breach story du jour:

Now that the hackers of Ashley Madison have released the full 9.7 gigabytes of information, some former patrons (and current victims/penitents) are searching for hackers that will scrub their info from the list.

Which is crazy. And laughable. And not doable. The sitcom NewsRadio explained it very well back in the late 1990s:

https://www.youtube.com/watch?v=0T-CreVC_6Y

Like he said: it's like getting pee out of the pool. Most people can probably appreciate the folly of even searching for a "solution" to this problem.

"once information has been sufficiently socialised and redistributed (which the Ashley Madison data has certainly been), the exposure is irretrievable"

But for those who don't get it, and don't understand what the above means (the quote is from this article), it basically means you're screwed because the hacked data isn't found in a central repository. Many people have the information now: security researchers. Journalists. Bloggers. The honestly curious. Hackers with some kind of agenda. Your girlfriend majoring in comp sci.

Sure, a hacker for hire could delete your specific entry from one list. But that leaves a million other lists on other people's computers. Do you really think that a guy you've paid $2,000 is going to be able to (and want to) track down all these founts of dismay?
This lunacy of unachievable expectations, however, is not relegated to "normal" people. For example, in the course of this business, I have fielded more than a handful of inquiries from callers looking for "NSA-proof encryption." Such encryption exists…but also doesn't exist.

Let me explain. As the Snowden disclosures have shown over the past couple of years, modern encryption algorithms like AES are, for all practical purposes, NSA-proof; that is, even the NSA has problems cracking them. Because of that, the NSA finds weak points to exploit outside of the encryption itself: the inherent weaknesses of passwords; man-in-the-middle attacks; the injection of customized malware; and other means of procuring the data they need.

So, in this context, what exactly is "NSA-proof encryption"? This is my counter-question to the callers, and the often condescending response from the phone's receiver is, "We don't want the NSA to be able to get our data in any way, shape, or form." As if it could mean anything else.

Now, as far as I can tell, these callers weren't engaged in illegal activities, so chances are the NSA wasn't even looking to get their data. But let's say that's not the case. Do these callers really believe that a full disk encryption solution for their laptops will stop the NSA, or any intelligence agency worth its salt, from acquiring their data when they have so many other tools at their disposal for extracting it? Including the possible use of physical pain?

I tell the callers that we use AES-256, that the disk encryption solution is FIPS 140-2 validated and certified, answer any questions they have, and let the chips fall where they may. If they ask pointedly whether we're "NSA-proof," I answer in the negative. In every single instance, I was given a perfunctory but not unfriendly thanks and never heard from them again.

The kicker: every single one of these people called in inquiring about the AlertBoot partnership program.
They were people working in the data security sector. They supposedly knew about data security. They were not "normal" people. They knew better (or, at least, they should have known better).

I personally think of these instances as dodging particularly pernicious bullets. But the observation remains: if so-called professionals fail to understand the limits of the security tools they use, does the general populace stand a chance? Perhaps facepalming shouldn't be the immediate response to finding out that people are looking to extricate themselves from the Ashley Madison fiasco.

But then, the last ten years have shown us that no company or organization is immune to the ravages of hacking. If top-tier banks and security companies experience data breaches because the best they can do is curb attempts at stealing data, why would anyone believe that a peccadillo-peddling dot-com would succeed at stopping hackers?
Last week, an Indiana medical firm saw a massive medical data breach that extended throughout the entire U.S. Per online reports, possibly 4 million people in more than 230 hospitals and other healthcare organizations were affected by the breach, which occurred in May of this year. Hackers stole protected health information that included:
"patients’ names, mailing addresses, email addresses and dates of birth … additional information stolen included Social Security Numbers, lab results, dictated reports, and medical conditions."
It's the type of data that sells at a premium in online black markets that, admittedly, are just flooded with such information (and that premium shows how much more in demand detailed medical info happens to be). Needless to say, the company that got hacked – Medical Informatics Engineering (MIE), providers of the NoMoreClipBoard EHR system – went into full damage-control mode, as did its clients.
Despite the disastrous results that MIE is seeing, it appears that the company had been as proactive as possible when it comes to data security. For one, they uncovered the breach internally, which contrasts with the many companies that become aware of a data breach only when a third party (like the FBI) gets in touch with them.

Also, forensic analysis shows that the breach took place as early as May 7 and was discovered on May 26. While two and a half weeks is an eternity in internet time, it's also not a bad performance for overworked IT staff (which is not to say that it couldn't be better).
Of course, if data encryption had been used to protect the information, retrieving useful information would have been harder for whoever hacked MIE. But encryption was probably not a viable option for the company. The thing to understand about encryption is that it protects data when that data is not being used. (If that's news to you, just give it some thought: encryption works by scrambling information. In order for a legitimate user to work with encrypted data, it has to be unscrambled first; that is, at that moment the information is not encrypted.)

Now, seeing how medical organizations may need to access patient info at any hour of the day, MIE would have had no option but to ensure that medical information is always accessible. Ergo, it cannot be encrypted, at least not in live databases, which is what the hacker or hackers targeted. The story is different for data going into semi-permanent storage, obviously.
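The point that encrypted data must be unscrambled before anyone can work with it can be sketched in a few lines. This is a toy XOR cipher purely for illustration – real disk encryption uses AES, and nothing below should be used to protect actual data:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR stream cipher - NOT real encryption, but it shows the
    principle: the same operation scrambles and unscrambles, and the
    data is unreadable only while it sits in scrambled form."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(16)
record = b"Patient: J. Doe, SSN 000-00-0000"

at_rest = xor_cipher(record, key)   # what sits on disk: gibberish
assert at_rest != record

in_use = xor_cipher(at_rest, key)   # decrypted so software can use it
assert in_use == record             # ...and at that moment it is exposed
```

A live database is, in effect, permanently stuck in the `in_use` state, which is why encryption couldn't have saved the records the hackers pulled out of it.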
Despite what appears to be a terrible flaw in cryptographic security, the truth is that encryption is an excellent way to protect data. After all, there's a lot of data out there that's "not being used": when you're not interacting with your smartphone, for example, the contents of your mobile device are data that's not being used. The same goes for when you're transporting your laptop to and from work. (Seriously, you're not one of those types that uses one of those steering wheel trays while driving, right?)

The list of devices that hold data that's not being used (at least a good chunk of the time) is huge: smartphones, external hard disks, small USB flash drives, laptops, backup tapes, tablet computers, data discs, etc. For such devices, encryption is not only an appropriate method for protecting the data, it's considered one of the best (and in some circles, the best). It's just a matter of knowing when to use it.
Last week, FBI Director James Comey told senators that encryption was making it harder for the FBI to do its job. To back his words, he brought up examples of instances where the agency couldn't access electronic information despite having the legal right to do so. And while you won't find many denying that this is the case – encryption software, after all, is meant to make it hard to access information, regardless of who's looking – you'll find plenty of detractors to the director's stance that backdoors to encryption are useful.
This is not the first time (nor the last, we presume) that Comey has brought up the issue of backdoors for encrypted data. He also talked about the issue in October of last year. Furthermore, he said the FBI wasn't looking for a backdoor, but a "front door":

There is a misconception that building a lawful intercept solution into a system requires a so-called “back door,” one that foreign adversaries and hackers may try to exploit. But that isn’t true. We aren’t seeking a back-door approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law. We are completely comfortable with court orders and legal process—front doors that provide the evidence and information we need to investigate crime and prevent terrorist attacks.

The Electronic Frontier Foundation (EFF) provided a response to the above. While one can appreciate the FBI's insistence that they're not trying to do anything nefarious by requiring an encryption back or front door (or whatever you want to call it), the issue is a matter of the technical weaknesses that a backdoor presents, not the hidden motives it may represent.
Comey's insistence that companies should provide some kind of backdoor to encrypted devices, allowing the FBI and other law enforcement agencies to easily access legally obtainable evidence, is almost as laughable as asking the above question about guns.

Indeed, and I'm going off on a tangent here, but wouldn't it be much more beneficial to law enforcement if guns only killed the bad guys? Think about it; there'd be positive cascading effects: agents of law enforcement wouldn't get hurt or killed. Secure in the knowledge that they won't be shot (because they're the good guys), accusations of police brutality or excessive force would greatly decrease. Accidental deaths attributed to gunshot wounds would also decrease. Drive-by shootings of innocent bystanders would fall to zero with such a weapon. Etc., etc., etc.

The fact that the FBI is not actively looking for guns that shoot only the bad guys shows us that they don't live in a fantasy world. But, apparently, there's something magical about encryption (firstlook.org). They just can't imagine a world where encryption cannot possibly be like this magic gun that only shoots the bad guys:

Comey's problem is the nearly universal agreement among cryptographers, technologists and security experts that there is no way to give the government access to encrypted communications without poking an exploitable hole that would put confidential data, as well as entities like banks and power grids, at risk. But while speaking at Senate Judiciary and Senate Intelligence Committee hearings on Wednesday, Comey repeatedly refused to accept that as reality. "A whole lot of good people have said it's too hard … maybe that's so," he said to the Intelligence Committee. "But my reaction to that is: I'm not sure they've really tried."

Too hard? Maybe that's so? Try impossible. But let's assume that the director is correct, and that the proper incentive would make people try harder. Does it make sense that people haven't tried?
The encryption software market is currently worth billions of dollars and is expected to be worth $5 billion before 2020. This figure doesn't really do it justice, since many encryption solutions and technologies are provided for free or for very little money, relatively speaking. To say that the $5 billion figure is a discounted one is an understatement. If a company were to offer, in this situation, an encryption solution that provides a backdoor without being weaker than its no-backdoor peers, what would a reasonable person expect to happen?

Of course, such a thing is fantasy: the presence of a backdoor by definition means you've weakened the encryption. After all, what's to prevent a rogue FBI agent from causing problems using the very same backdoor? Or a foreign agent from infiltrating the FBI for the same purpose, per the movie "The Departed" or its Asian original, "Infernal Affairs"?

But suspend your disbelief for a moment. Pretend that a gun that only shoots bad guys is possible. That unicorns prance in your backyard with your kids. That a particular encryption solution with a backdoor works just as well and as securely as one without. One where the backdoor doesn't represent a potential data breach at all. I mean, really strain your brain.

Doesn't logic tell you that it would be a heck of a payday for the company that provides this particular encryption solution? I would imagine that a very sizable part of the $5 billion market would become this company's without any overt marketing. Why? Because everybody could use a backdoor, not just the government.

There are many situations, far-fetched or otherwise, where a backdoor (that, again, does not pose a security risk) would come in handy. What if you forget your password and don't have a copy of the encryption key? It happens more often than you think.
Or an employee unexpectedly quits and immediately hightails it to a temple in Nepal without letting you know his computer's password – the same computer where a very important contract is stored? What if you have a government employee who's involved in a crime, and evidence of that crime is stored, encrypted, on a government computer, and the employee in question is not cooperating?

I imagine that governments alone would opt to use this magic encryption technology over the others, just as the US federal government requires FIPS 140-2 validated solutions on government computers. Why wouldn't they? After all, there are benefits, and the backdoor of our imaginary encryption solution does not pose a security threat.

Does a huge slice of $5 billion not sound like a huge incentive to you? It does to me. So why do we not have this technology? I imagine it's because it's impossible to have encryption with a "secure" backdoor, just as it's impossible to develop the aforementioned gun that only kills bad guys.
Apparently, 2015 is the year when everything old is new again: the encryption wars are back and gaining momentum; TV shows and movies that were laid to rest are rising from their graves; and classic data breaches are rearing their heads as well.

For example, the site databreaches.net notes that Human Resource Advantage sent an unencrypted USB stick with sensitive data through the mail. This is the sort of breach notification that reached epic volumes six, seven years ago. Since then, less insipid data security issues have dominated the net, airwaves, and other media. And yet, here we are.
One of the notable things about this latest data breach is how databreaches.net covers it. If you read the short blog post out loud, you can taste the exasperation as the words make their way out of your mouth. Understandable, when you consider that this sort of data breach shouldn't be happening anymore. In an era when laptop manufacturers (I'm looking at you, Apple) are basically doing away with data ports because information is mostly shared wirelessly, this type of data breach stands out like a hipster with a lumberjack beard at a CPA conference. You really have to go out of your way for something like this to happen.

One could make the argument that the information was sent in this manner precisely because the current wireless interconnectedness is full of security holes. But then, where is the device encryption? The argument falls flat for lack of cryptographic security – a basic requirement when it comes to data security. If the companies at the center of this breach truly took "the security of the information in their control very seriously," they certainly wouldn't be in this debacle.

(It should be noted, though, that there is a limit to what companies can do. Their work is cut out for them if an employee decides to secretly go rogue.)
Which brings me to the title of this blog post. The FTC has censured plenty of companies that make bold, misleading claims regarding their data security practices. Usually, the companies claim on their websites that they take information security and data protection very seriously. Once a data breach hits them, the FTC investigates; if it finds that the claims don't match up with the companies' actual security operations, the end result is (usually) the company paying a large fine without admitting that it's at fault.

Why is the FTC so rabid about data security claims? The argument goes something like this: consumers were reassured by upfront data privacy promises, leading them to purchase or sign up for a service. Hindsight showed that people were intentionally misled. This is no different from making false claims about the effectiveness of snake oil – and it's the FTC's job to pursue merchants who deceive.

It seems to me, though, that claims about "taking the security of personal data very seriously" found in breach notification letters can also be quite misleading. Oftentimes, the notification letter's description of the incident implies that the very opposite is true.

The empty reassurances, of course, don't really reassure anyone. They certainly have not impeded the affected from filing lawsuits, probably to the chagrin (or joy?) of the lawyers who are handling these matters. But the level of disingenuousness is indistinguishable from what the FTC takes exception to when the reassurance is made up front.
Related Articles and Sites:
http://www.databreaches.net/lets-send-an-unencrypted-thumb-drive-via-mail-what-can-possibly-go-wrong-right/
Over at firstlook.org, The Intercept has an article on creating passphrases (not passwords) that are strong yet memorizable. The trick lies in the number of elements (that is, how many words are used in the passphrase) and randomness. Indeed, the principle is no different from how encryption works to secure data. For example, AlertBoot's managed laptop encryption relies on AES-256 encryption to secure a laptop's sensitive data.
First, get yourself a die – that six-sided cube with dots or numbers used at a craps table. You only need one (hence die and not dice). Then grab a copy of the Diceware word list. Each word is preceded by a five-digit number.

Roll your die five times to get a word. Do this for a total of seven words (so, 35 rolls). Then chain these words together for a super-duper secure passphrase. Why is this so secure that "not even the NSA can crack it"? Again, the answer lies in the number of elements and randomness.
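The procedure above is simple enough to sketch in Python. The word list here is synthetic (the real Diceware list maps each of the 7,776 possible five-roll sequences to an English word), and `secrets.randbelow` stands in for the physical die:

```python
import itertools
import secrets

# Placeholder word list: every five-roll sequence from 11111 to 66666
# mapped to a synthetic "word". The real Diceware list maps these same
# 7,776 keys to short English words.
WORDLIST = {
    "".join(rolls): "word" + "".join(rolls)
    for rolls in itertools.product("123456", repeat=5)
}

def roll_word() -> str:
    """Roll one die five times and look up the resulting word."""
    key = "".join(str(secrets.randbelow(6) + 1) for _ in range(5))
    return WORDLIST[key]

def diceware_passphrase(n_words: int = 7) -> str:
    """Chain n_words randomly chosen words into a passphrase."""
    return " ".join(roll_word() for _ in range(n_words))

print(diceware_passphrase())  # e.g. "word35261 word11452 ..." (7 words)
```

Note the use of the `secrets` module rather than `random`: the former draws from the operating system's cryptographically secure randomness source, which matters for exactly the reasons discussed below.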
The Diceware word list contains 7,776 words. If you only used one word as the password, there's a 1 in 7,776 chance that it can be guessed at random. With a fast enough computer, one can go through the entire list of words in a matter of seconds (this act of going through the entire set of possibilities is known as "brute forcing").

When two words are used, the set of possibilities increases to over 60 million (7,776 x 7,776 – also known as 7,776²). This offers better security, but computers can go through trillions of guesses per second, so it's still not secure enough.

It turns out that 7,776⁷ (that raised 7 is where the seven words come into play) is a huge number. Even at a brute-force rate of a trillion tries per second, it would take 27 million years to exhaust the list of possibilities. If someone were to get lucky and manage to find the passphrase within the early stages (say, the 10% mark), that still represents 2.7 million years. The 1% mark? 270,000 years.

Cool. So what's the deal with the die? Can't you just pick any seven words?
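A quick back-of-the-envelope check of these figures, assuming a trillion guesses per second: exhausting the entire space of 7,776⁷ passphrases takes roughly 55 million years, and on average an attacker hits the right one about halfway through, which is where the 27-million-year figure lands:

```python
# Check the arithmetic: a 7,776-word list, seven words per passphrase,
# brute-forced at one trillion guesses per second.
WORDS_IN_LIST = 7776
PASSPHRASE_WORDS = 7
GUESSES_PER_SECOND = 10 ** 12
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

combinations = WORDS_IN_LIST ** PASSPHRASE_WORDS       # about 1.7e27
years_to_exhaust = combinations / GUESSES_PER_SECOND / SECONDS_PER_YEAR
years_on_average = years_to_exhaust / 2                # expected hit: halfway

print(f"{combinations:.2e} possible passphrases")
print(f"~{years_on_average / 1e6:.0f} million years to find one on average")
```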
Nope. Because when you pick "random" words, they're usually not random. They tend to be words you know. And words you know are probably those that most people know and use. This tends to limit the set of words (for example, you probably wouldn't select "zootropic" off the top of your head). Furthermore, chances are you'll arrange them in a linguistically logical way so you can memorize the passphrase more easily. Again, the effect is to limit the passphrase set.

Of course, using the Diceware method above doesn't provide failsafe randomness, either. Say you roll five numbers and look up the word…and it's a word you don't like / can't memorize / have never seen before / is against your religion / whatever, so you roll again until you find a word more suitable for your awesome passphrase. Such an act also artificially limits the set of words.

People in the business of hacking passwords don't rely on brute-force methods alone. Rather, they try to get into your head and have a stab at what you may have decided to choose as a password or passphrase. That's why the names of family members, dates of birth of loved ones, your personal heroes, the name of your first pet, etc. are generally considered valuable clues: these and other pieces of personal information are generally used as the basis for a password.

Only true randomness protects you from yourself. Which, incidentally, is the basis of modern encryption.
If you're not a gamer or interested in computer games, you may not be familiar with Twitch, a site that streams live feeds of people playing (and commenting on) titles like League of Legends or Counter-Strike. The site is extremely popular – techcrunch.com notes that it's the "fourth largest site… in terms of peak traffic" – and, thus, it shouldn't surprise anyone that it's a target for hackers. It looks like the hackers finally had their day: the team at Twitch notified users that it was forced to reset all passwords because of a data infiltration.

They also noted that all passwords were "cryptographically protected"… so what's the deal with the passwords being reset? After all, isn't encryption supposed to be nearly impossible to break?
When it comes to encryption, though, encryption is not encryption is not encryption. That is, there are all sorts of cryptographic solutions, each meant to do one thing (and not another). For example, a common misunderstanding that we at AlertBoot run into concerns how laptop disk encryption works.

A sizable minority are under the impression that disk encryption allows files to be sent over the internet securely. Or that, since the laptop is encrypted, data copied to a backup disk will also be encrypted automatically. This couldn't be further from the truth, and acting on it is an excellent way to increase the risk of a data breach. Disk encryption works by literally encrypting the hard disk of a computer…and nothing more.
Technically, files on an encrypted disk are not encrypted. As I noted above, it's the disk that's encrypted. The files just happen to be protected because they sit on an encrypted storage medium. This is why, if those same files are copied to an (unencrypted) external hard drive or sent as attachments via email, they'll be sent and received as plain, unencrypted files.

File encryption would resolve the problem but introduce its own: each new file would require encryption. Accessing already encrypted files would require that a password be entered each time you try to open them. Data security blind spots, like temporary files, would become a problem. So, each type of cryptographic solution has its pros and cons.
When it comes to passwords, something known as a cryptographic hash is used. Technically, this is not encryption. It is a process in which plain text is converted into gibberish…but cannot be converted back. It's ideal for passwords because it ensures that only the user, and no one else (not even system administrators), knows the password.

So, why did Twitch reset these passwords? Because there is still a way to figure out hashed passwords. Essentially, you hash a list of common passwords and see what you get. Because the hash algorithm will always return the same output for a given input, it's a matter of comparing the stolen hashes to known input-output pairs.

Granted, the hackers won't be able to figure out each and every password, but the sheer size of Twitch's user base guarantees that they will uncover enough of them to cause damage.
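The dictionary attack described above fits in a few lines of Python. SHA-256 here is just a stand-in for whatever hash Twitch actually used (which they didn't disclose), and the passwords are made up:

```python
import hashlib

def hash_password(password: str) -> str:
    """One-way hash: easy to compute, infeasible to invert directly.
    (Production systems should use a salted, slow hash like bcrypt or
    scrypt; plain SHA-256 is used here only to illustrate the idea.)"""
    return hashlib.sha256(password.encode()).hexdigest()

# What the attacker steals: hashes, not the passwords themselves.
stolen_hashes = {hash_password(p) for p in ["hunter2", "xK#9!vQz"]}

# The attack: hash a list of common passwords and compare the outputs.
common_passwords = ["123456", "password", "hunter2", "letmein"]
cracked = [p for p in common_passwords if hash_password(p) in stolen_hashes]

print(cracked)  # → ['hunter2']: the common password falls, the random one survives
```

This is also why salting and deliberately slow hash functions exist: they make precomputing and comparing these input-output pairs vastly more expensive for the attacker.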