AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size who want a scalable and easy-to-deploy solution. Centrally managed through a web based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, USB drive and hard disk encryption managed services.
  • Encryption: Backdoors That "Work" Don't Exist Because They Are A Fantasy

    Last week, FBI Director James Comey told senators that encryption was making it harder for the FBI to do its job.  To back his words, he brought up instances where the agency couldn't access electronic information despite having the legal right to do so.  And while you won't find many who deny that this is the case – encryption software, after all, is meant to make it hard to access information, regardless of who's looking to access it – you'll find plenty of detractors of the director's stance that backdoors to encryption are useful.

    Santayana Must Be Facepalming Himself in His Grave

    This is not the first time (nor the last, we presume) that Comey has brought up the issue of backdoors for encrypted data.  In October of last year, Comey also talked about the issue.  Furthermore, he said the FBI wasn't looking for a backdoor, but a "front door":
    There is a misconception that building a lawful intercept solution into a system requires a so-called “back door,” one that foreign adversaries and hackers may try to exploit.

    But that isn’t true. We aren’t seeking a back-door approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law. We are completely comfortable with court orders and legal process—front doors that provide the evidence and information we need to investigate crime and prevent terrorist attacks.
    The Electronic Frontier Foundation (EFF) provided a response to the above.

    While one can appreciate the FBI's insistence that they're not trying to do anything nefarious by requiring an encryption back or front door (or whatever you want to call it), the issue is a matter of the technical weaknesses that a backdoor presents and not the hidden motives it may represent.

    Why Can't They Make a Gun that Only Kills Bad Guys?

    Comey's insistence that companies should provide some kind of backdoor to encrypted devices, allowing the FBI and other law enforcement agencies to easily access legally-obtainable evidence, is almost as laughable as asking the above question on guns.

    Indeed, and I'm going off on a tangent here, but wouldn't it be much more beneficial to law enforcement if guns only killed the bad guys?  Think about it, there'd be positive cascading effects: agents of law enforcement wouldn't get hurt or killed.  Officers, secure in the knowledge that they can't be shot (because they're the good guys), would face far fewer accusations of police brutality or excessive force.  Accidental deaths attributed to gunshot wounds would also decrease.  Drive-by shootings of innocent bystanders would fall to zero with such a weapon.  Etc., etc., etc.

    The fact that the FBI is not actively looking for guns that shoot only the bad guys shows us that they don't live in a fantasy world.  But, apparently, there's something magical about encryption (firstlook.org).  They just can't accept that encryption cannot be made to work like this magic gun that only shoots the bad guys:
    Comey's problem is the nearly universal agreement among cryptographers, technologists and security experts that there is no way to give the government access to encrypted communications without poking an exploitable hole that would put confidential data, as well as entities like banks and power grids, at risk.

    But while speaking at Senate Judiciary and Senate Intelligence Committee hearings on Wednesday, Comey repeatedly refused to accept that as reality.

    "A whole lot of good people have said it's too hard … maybe that's so," he said to the Intelligence Committee. "But my reaction to that is: I'm not sure they've really tried."
    Too hard?  Maybe that's so?  Try impossible.

    But let's assume that the director is correct, and that the proper incentive would make people try harder.  Does it make sense that people haven't tried?

    Would You Leave Billions of Dollars on the Table?

    The encryption software market is currently worth billions of dollars and is expected to be worth $5 billion before 2020.  This figure doesn't really do it justice since many encryption solutions and technologies are provided for free or for very little money, relatively speaking.  To say that the $5 billion figure is a discounted one is an understatement.  If a company were to offer, in this situation, an encryption solution that provides a backdoor without being weaker than its no-backdoor peers, what would a reasonable person expect to happen?

    Of course, such a thing is fantasy: the presence of a backdoor, by definition, means you've just weakened the encryption.  After all, what's to prevent a rogue FBI agent from causing problems using the very same backdoor?  Or a foreign agent who infiltrates the FBI for the same purpose, per the movie "The Departed" or its Asian original, "Infernal Affairs"?

    But suspend your disbelief for a moment.  Pretend that a gun that only shoots bad guys is possible. That unicorns prance in your backyard with your kids.  That a particular encryption with a backdoor works just as well and as securely as one without a backdoor.  One where the backdoor doesn't represent a potential data breach at all.  I mean, really strain your brain.

    Doesn't logic tell you that it would be a heck of a payday for the company that provides this particular encryption solution?  I would imagine that a very sizable part of the $5 billion market would become this company's without any overt marketing.  Why?  Because everybody could use a backdoor, not just the government.

    There are many situations, far-fetched or otherwise, where a backdoor (that, again, does not pose a security risk) would come in handy.  What if you forget your password and don't have a copy of the encryption key?  It happens more often than you think.  Or an employee unexpectedly quits and immediately hightails it to a temple in Nepal without letting you know his computer's password – the same computer where a very important contract is stored?  What if you have a government employee who's involved in a crime, and evidence of his crime is stored encrypted on a government computer, and the employee in question is not cooperating?

    I imagine that governments alone would mandate this magic encryption technology over the others for their own use, just like the US federal government requires FIPS 140-2 validated solutions on government computers.  Why wouldn't they?  After all, there are benefits, and the backdoor of our imaginary encryption solution does not pose a security threat.

    Does a huge slice of $5 billion not sound like a huge incentive to you?  It does to me.  So why do we not have this technology?

    I imagine it's because it's impossible to have encryption with a "secure" backdoor, just like it's impossible to develop the aforementioned gun that only kills bad guys.


    Related Articles and Sites:
    http://www.npr.org/sections/thetwo-way/2015/07/08/421251662/fbi-director-says-agents-need-access-to-encrypted-data-to-preserve-public-safety
    https://www.eff.org/deeplinks/2014/10/eff-response-fbi-director-comeys-speech-encryption
    http://www.globalresearch.ca/fbi-director-comey-demands-backdoor-access-to-encrypted-data/5462254
     
  • Maybe FTC Should Take To Task Breached Companies Claiming To Take "Security Seriously"

    Apparently, 2015 is the year when everything old is new again: the encryption wars are back and picking up speed; TV shows and movies that were laid to rest are rising from their graves; and classic data breaches are rearing their heads as well.

    For example, the site databreaches.net notes that Human Resource Advantage sent an unencrypted USB stick with sensitive data through the mail.  This is the sort of breach notification that reached some epic volumes six, seven years ago.  Since then, less insipid data security issues have dominated the net, airwaves, and other media.

    And, yet, here we are.

    Excoriation

    One of the notable things about this latest data breach is how databreaches.net covers it.  If you read the short blog post out loud, you can taste the exasperation as the words make their way out of your mouth.

    Understandable, when you consider that this sort of data breach shouldn't be happening anymore.  In an era when laptop manufacturers (I'm looking at you, Apple) are basically doing away with data ports because information is mostly shared wirelessly, this type of data breach stands out like a hipster with a lumberjack beard at a CPA conference.  You really have to go out of your way for something like this to happen.

    One could make the argument that the information was sent in this manner precisely because the current wireless interconnectedness is full of security holes.  But then, where is the device encryption?  The argument falls flat because of the lack of cryptographic security – a basic requirement when it comes to data security.

    If the companies at the center of this breach truly took "the security of the information in their control very seriously," they certainly wouldn't be in this debacle.

    (It should be noted, though, that there is a limit to what companies can do.  Their work is cut out for them if an employee decides to secretly go rogue).

    FTC Goes After Companies for Misleading Consumers

    Which brings me to the title of this blog post.  The FTC has censured plenty of companies that make bold, misleading claims regarding their data security practices.  Usually, the companies claim on their websites that they take information security and data protection very seriously.

    Once a data breach hits them, the FTC investigates; if it finds that the claims don't match up with the companies' actual security operations, the end result is (usually) the company paying a large fine without admitting that they're at fault.

    Why is the FTC so rabid about data security claims?  The argument goes something like this: Consumers were reassured by upfront data privacy promises, leading them to purchase or sign up for service.  Hindsight showed that people were intentionally misled.  This is no different from making false claims on the effectiveness of snake oil – and it's the FTC's job to pursue merchants who deceive.

    It seems to me, though, that claims about "taking the security of personal data very seriously" found in breach notification letters can also be quite misleading.  Oftentimes, the notification letter's description of the incident implies that it's very much the opposite.

    The empty reassurances, of course, don't really reassure anyone.  They certainly have not stopped the affected from filing lawsuits, probably to the chagrin (or joy?) of the lawyers who are handling these matters.  But the level of disingenuousness is indistinguishable from what the FTC takes exception to when the reassurance is made up front.

     

    Related Articles and Sites:
    http://www.databreaches.net/lets-send-an-unencrypted-thumb-drive-via-mail-what-can-possibly-go-wrong-right/

     
  • Data Encryption: Creating Passphrases That You Can Memorize While Thwarting Would-Be Monitors

    Over at firstlook.org, The Intercept has an article on creating passphrases (not passwords) that are strong and memorizable.  The trick lies in the number of elements (that is, how many words are used in the passphrase) and randomness.  Indeed, the principle is no different from how encryption works to secure data.  For example, AlertBoot's managed laptop encryption relies on AES-256 encryption to secure a laptop's sensitive data.

    Creating a Passphrase that Can't be Easily Cracked

    First, get yourself a die, that six-sided cube with dots or numbers that's used at a craps table.  You only need one (hence die and not dice).  Then grab a copy of the Diceware word list.  Each word is preceded by a 5-digit number.

    Roll your die five times, writing down each result to form a five-digit number, and look up the matching word.  Do this for a total of seven words (so, 35 rolls).  Then, chain these words together for a super-duper secure passphrase.
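    For the programmatically inclined, here is a minimal sketch of the same process in Python.  Treat it purely as an illustration: the article recommends a physical die, while this sketch substitutes the operating system's secure random number generator (the secrets module) for real rolls, and it assumes you've saved the word list to a file named diceware.wordlist.asc (the filename is just a placeholder for whatever copy you downloaded).

    ```python
    import secrets

    def load_wordlist(path="diceware.wordlist.asc"):
        """Map five-digit roll sequences (e.g. '11111') to Diceware words."""
        words = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                # Word-list lines look like "11111  a"; skip everything else.
                if len(parts) == 2 and len(parts[0]) == 5 and parts[0].isdigit():
                    words[parts[0]] = parts[1]
        return words

    def diceware_passphrase(words, num_words=7):
        phrase = []
        for _ in range(num_words):
            # Five simulated die rolls -> a lookup key like "35162"
            key = "".join(str(secrets.randbelow(6) + 1) for _ in range(5))
            phrase.append(words[key])
        return " ".join(phrase)

    print(diceware_passphrase(load_wordlist()))
    ```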

    Why is this so secure that "not even the NSA can crack it"?  Again, the answer lies in the number of elements and randomness.

    7 Elements

    The Diceware word list contains 7,776 words.  If you only used one word as the password, there's a 1 in 7,776 chance that it can be guessed at random.  With a fast enough computer, one can go through the entire list of words in a matter of seconds (this act of going through the entire set of possibilities is known as "brute forcing").

    When two words are used, the set of possibilities increases to over 60 million (7,776 x 7,776, or 7,776²).  This offers better security but computers can go through trillions of these per second, so it's not actually secure enough.

    It turns out that 7,776⁷ (that raised 7 is where the seven words come into play) is a huge number – roughly 1.7 x 10²⁷ possibilities.  Even at a brute force rate of a trillion tries per second, it would take tens of millions of years to exhaust the list, and about 27 million years, on average, to hit the right passphrase.  Even if someone were to get lucky and find it within the first 10% of the search, that still represents millions of years.  The first 1%?  Hundreds of thousands of years.
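    To make the arithmetic concrete, here's a quick back-of-the-envelope calculation – a sketch using the same trillion-guesses-per-second assumption as the article:

    ```python
    WORDS = 7776              # size of the Diceware word list (6 ** 5)
    PHRASE_LEN = 7            # words per passphrase
    GUESSES_PER_SEC = 1e12    # one trillion guesses per second
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    keyspace = WORDS ** PHRASE_LEN                 # about 1.7e27 possibilities
    years_all = keyspace / GUESSES_PER_SEC / SECONDS_PER_YEAR
    print(f"Years to try every passphrase: {years_all:,.0f}")      # ~54 million
    print(f"Years to find it on average:   {years_all / 2:,.0f}")  # ~27 million
    ```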

    Cool.  So what's the deal with the die?  Can't you just pick any seven words?

    Random

    Nope.  Because when you pick random words, they're usually not random.  They tend to be words you know.  And words you know are probably those that most people know and use.  This tends to limit the set of words (for example, you probably wouldn't select "zootropic" off the top of your head).  Furthermore, chances are you'll arrange them in a linguistically logical way so you can memorize the passphrase more easily.  Again, the effect is to limit the passphrase set.

    Of course, using the Diceware method above doesn't guarantee randomness either.  For example, you roll five numbers and look up the word… and it's a word you don't like / can't memorize / have never seen before / is against your religion / whatever, so you roll again until you find a word that seems more suitable for your awesome passphrase.

    Such an act also artificially limits the set of words.  People in the business of hacking passwords don't rely on brute force methods alone.  Rather, they try to get into your head and have a stab at what you may have decided to choose as a password or passphrase.  That's why the names of family members, dates of birth of loved ones, your personal heroes, the name of your first pet, etc. are generally considered to be valuable clues: this and other personal information is generally used as the basis for a password.

    Only true randomness protects you from yourself.  Which, incidentally, is the basis of modern encryption.


    Related Articles and Sites:
    https://firstlook.org/theintercept/2015/03/26/passphrases-can-memorize-attackers-cant-guess/
     
  • Data Encryption: Game Livestreaming Site "Twitch" Resets Encrypted Passwords

    If you're not a gamer or interested in computer games, you may not be familiar with Twitch, a site that streams live feeds of people playing (and commenting on) titles like League of Legends or Counter-Strike.  However, the site is extremely popular – techcrunch.com notes that it's the "fourth largest site… in terms of peak traffic" – and, thus, it shouldn't surprise anyone that it's a target for hackers.  It looks like the hackers finally had their day: the team at Twitch notified users that they were forced to reset all passwords because of a data infiltration.

    They also noted that all passwords were "cryptographically protected"… so what's the deal with the password being reset?  After all, isn't encryption supposed to be nearly impossible to break?

    A Rose is a Rose is a Rose…

    When it comes to encryption, though, encryption is not encryption is not encryption.  That is, there are all sorts of cryptographic solutions, each meant to do one thing (and not another).  For example, a common misunderstanding that we at AlertBoot run into is how laptop disk encryption works.

    A sizable minority are under the impression that disk encryption allows files to be sent over the internet securely.  Or that, since the laptop is encrypted, data copied to a backup disk will also be encrypted automatically.  This couldn't be further from the truth, and is an excellent way to increase the risks of a data breach.  Disk encryption works by literally encrypting the hard disk of a computer…and nothing more.

    Not All Encryption Works the Same

    Technically, files on an encrypted disk are not encrypted.  As I noted above, it's the disk that's encrypted.  The files just happen to be protected because they're in an encrypted storage medium.  This is why if the same files are copied to an (unencrypted) external hard drive or sent as an attachment via email, they'll be sent and received as plain, unencrypted files.

    File encryption would resolve the problem but introduce its own: each new file would require encryption.  Accessing already encrypted files would require that a password be entered each time you try to open them.  Data security blind spots like temporary files would become a problem.

    So, each type of cryptographic solution has its pros and cons.

    Password Encryption = Hash

    When it comes to passwords, something known as a cryptographic hash is used.  Technically, this is not encryption.  It is a process where plain text is converted into gibberish… but it cannot be converted back.  It's ideal for passwords because it ensures that only the user and no one else (not even system administrators) knows the password.

    So, why did Twitch reset these passwords?  Because there is still a way to figure out hashed passwords.  Essentially, you hash a list of common passwords and see what you get.  Because the hash algorithm will always return the same output for a given input, it's a matter of comparing the stolen hashes to a list of known input-output pairs.
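    Here's a minimal sketch of that comparison in Python.  It's purely illustrative and rests on assumptions Twitch's disclosure doesn't confirm – a plain, unsalted SHA-256 hash and a made-up account name – but it shows why lists of common passwords make unsalted hashes recoverable:

    ```python
    import hashlib

    def sha256_hex(password: str) -> str:
        return hashlib.sha256(password.encode("utf-8")).hexdigest()

    # A (hypothetical) stolen dump mapping password hashes to account names.
    stolen = {
        "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8": "user_1017",
    }

    # The attacker hashes a list of common passwords and compares.
    common_passwords = ["123456", "password", "letmein", "qwerty"]
    for candidate in common_passwords:
        if sha256_hex(candidate) in stolen:
            print(f"{stolen[sha256_hex(candidate)]} used the password '{candidate}'")
    ```

    This, incidentally, is why real-world systems add a per-user salt and use deliberately slow hashing schemes (bcrypt, scrypt, PBKDF2): both make this kind of list comparison far more expensive for an attacker.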

    Granted, the hackers won't be able to figure out each and every single password, but the sheer size of Twitch's user base guarantees that the hackers will uncover enough of them to cause damage.
    Related Articles and Sites:
    http://techcrunch.com/2015/03/23/twitch-passwords-data-breach-hack/
     
  • HIPAA Data Breach: Medical Office Alerts Patients That Nothing Happened

    I've just run across a data breach notification that is a first of its kind: a data breach where the affected organization tells its clients (technically, patients) that nothing happened.  It's like the Seinfeld show of data breaches.  The breach notification letter is about nothing.  Absolutely nothing.  Yet, there is something there.

    All kidding aside, this situation is a novel reason for deploying HIPAA encryption software in medical environments.

    Break-in But No Disturbance

    According to ktvz.com, Mosaic Medical in Oregon has notified 2,207 patients of a "possible" breach of medical information.  In January of this year, Mosaic discovered traces of a break-in at their Health Information Technology department.  Indeed, the organization said:
    There was nothing stolen from the office, and there was no breach of our electronic medical records system.  There is no evidence that anything in the office was disturbed.
    Why the breach notification letter?  The problem lies with their non-digital (i.e., paper) documents:
    we cannot say with certainty that no medical records were accessed.  The personal information that was possibly accessed was on paper documents within the office and included health information, medical insurance information, phone number, and e-mail addresses.
    Of course, there is always the possibility that these medical records were not accessed – it could be that the guy doing the B&E got cold feet as he (or she or they) was crossing the threshold from vandalism to outright burglary.

    Another Reason Why You're Better Off With Encryption Than Without

    The above highlights an interesting situation.  Forget for a moment that a medical office tends to have paper documents with sensitive data on them (for one, incoming patients have to fill out forms).  Let's imagine a situation where all data is computerized and that "to suffer from a data breach" means an unauthorized third party accessed medical data.

    Under the current HIPAA/HITECH regulations, covered entities and their business associates are to assume that a potential data breach ("potential" because it's not known whether data was actually accessed – for instance, when a laptop is lost) did in fact result in a data breach, and thus is a data breach, unless it can be proven otherwise.

    In this light, one can easily see why the use of disk encryption software provides safe harbor from HIPAA/HITECH when dealing with lost or stolen laptops: knowing that the odds of brute-forcing a way into an encrypted laptop are minimal, one can assume that the contents of the device are safe if encrypted.  There are, of course, caveats: if the password was written on a post-it note stuck to the laptop, or if the person who absconded with the laptop is the user (think of ex-employees, for example).

    With Mosaic above, even if they operated a fully digital office without a trace of paper, they'd still have to notify their 2,000-odd patients of a potential data breach if they didn't use computer encryption software.  The reason being that it's not really possible to figure out whether a computer has been accessed or not: sure, you can set up a system to log all such instances.  At the same time, erasing such logs and cleaning up any digital traces is not exactly rocket science.

    Related Articles and Sites:
    http://www.databreaches.net/or-mosaic-medical-notifies-patients-of-breach/
    http://www.ktvz.com/news/Mosaic-Medical-reports-possible-personal-info-breach/31763114
     
  • HIPAA Laptop Encryption: Amedisys Notifies Of Possible Data Breach Of Encrypted Devices

    Is the use of encryption a silver bullet for HIPAA covered entities that are looking to gain safe harbor from the notification policies found under the HITECH Breach Notification Rule?  Generally, yes.  There is a caveat, however, as Amedisys's recent breach notification shows: you must be able to prove that the encrypted data remains secure after the data breach.  Otherwise, what's the use of using HIPAA-grade encryption software for laptops?

    Inventory Check Raises Issues

    What happened at Amedisys?  On March 2nd, the hospice care provider revealed that they were unable to account for 142 encrypted computers and laptops.  Which is unusual for a number of reasons:
    1. That's a lot of devices.  Was the company not doing regular checks, say every 12 months or so?  It's hard for that many devices to go missing unless audits were pretty rare.
    2. These devices were encrypted.  Although there's always room for mistakes and paranoia, if the company determined that all of these were encrypted when they went missing, there's really no reason to notify anyone about it.  (It should be noted that Amedisys issued a press release, which means they elected to notify basically everyone with an internet connection.)
    3. While the ratio of missing desktop to laptop computers was not given, it's hard to imagine that it took an inventory check to see whether desktop computers were missing.  Even if only one were missing, it tends to raise alarms in a way missing laptops do not.
    The company further stated that the following personal information could have been stored on these devices: "name, address, Social Security number, date of birth, Medicare and insurance ID numbers, medical records and other personally identifiable data."

    Amedisys revealed that a total of 6,909 patients were affected.  Where did these laptops and desktops go?

    Personnel Changes

    As it turns out, these devices were "assigned to Amedisys clinicians and other team members who left the company between 2011 and 2014."  And that's a problem in many ways.

    The last time I checked, computing hardware still costs a bit of money.  That these devices were essentially given away when people left employment means either that Amedisys had poor controls or is (was?) a very generous company.  (On second thought, it could also mean the devices were so subpar technologically that management decided giving them away would be cheaper than collecting them back.)

    Also, the software that is installed on these unaccounted-for machines can be costly.  For example, the cost of AlertBoot's full disk encryption is on a "per machine" basis, regardless of how many logins are tied to each one of those machines.  Let's say that Amedisys was using AlertBoot.  Unless the licenses are retrieved from missing devices, the company would be footing the bill for machines they no longer had control over.  Admittedly, we cannot exclude the possibility that the company was using free software like the recently-deceased TrueCrypt, which would allow such actions to be impact-free from a financial perspective.

    The biggest problem, though, and the one that touches on the HIPAA encryption caveat I mentioned at the top of this post, is that the information of patients can be breached despite the use of strong encryption: the clinicians and other team members have the passwords and can access the data.

    Attack from Within

    One of the rising problems of medical data breaches centers around employees: while most can be trusted, there is that small faction with a bent towards malfeasance.  If we can claim that around 2% of employees engage in activities like stealing medical IDs for resale on digital black markets, and that each missing Amedisys device represents one person, then about 3 people could have made use of the fact that they conveniently hold the passwords to encrypted data.

    (One way of preventing such scenarios from occurring, assuming that devices cannot be collected, is to trigger a remote wipe of the data – if the encryption solution has such a capability built in, the way AlertBoot does.)
    Related Articles and Sites:
    http://www.databreaches.net/amedisys-notifies-6909-patients-after-failure-to-locate-142-devices-during-inventory/
     