
AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size that want a scalable and easy-to-deploy solution. Centrally managed through a web-based console, AlertBoot offers managed services for mobile device management, mobile antivirus, remote wipe & lock, device auditing, and USB drive and hard disk encryption.
  • Canada Data Breach Law To Be Updated, May Impose Fines

    Canada has introduced a bill that, if passed, would amend the Personal Information Protection and Electronic Documents Act by making data breach notifications mandatory.  It appears no safe harbor is being offered specifically for the use of security solutions such as AlertBoot's managed disk encryption software.  Nevertheless, some safety may be found in language that, ironically, could also hamstring the well-meaning legislation.

    Good Things about the (Upcoming) Law

    The site canadianlawyermag.com notes that Canada's neighbor to the south treats fines as "an established punishment for data breaches," and that the same could soon be true in the land of maple syrup and hockey.  Bill S-4, introduced in April of this year, would be the vehicle for doing so.
    It would require organizations to notify both affected individuals and the privacy commissioner in the event of a breach of security involving personal information, and to keep a record of every breach.  Breaches could also incur fines of up to $100,000.
    A former interim privacy commissioner is quoted as saying that the bill gets many things right:
    • Mandatory breach notification.
    • Requiring the notification "as soon as possible" but not within a specified timeline is "well-thought-through."
    • "the notification would occur 'only in cases of significant harm,' which includes 'physical and moral' harm" (that's a direct quote from canadianlawyermag.com).
    I've got to disagree a bit here.  The first bullet makes sense: if you don't make it mandatory, odds are that the companies that need to fess up the most will not do so.  However, the other two?

    Specified Timeline

    A "specified timeline," as it was put, is definitely a good idea.  While it's true that companies need time to determine what happened – and the duration of an investigation is bound to differ from case to case – it's also true that there is such a thing as taking too long.  Forensic investigations can take six months or more, sometimes over a year.  The problem with ASAP is that it means different things to different people, and a breached entity can always argue, in a logical and reasonable manner, why it couldn't notify people sooner.

    The point of the notification is to ensure that affected individuals can protect themselves, meaning that they should be alerted while they can still do something about it.  HIPAA, the US Health Insurance Portability and Accountability Act, sets a 60-calendar-day limit for exactly this reason.

    Risk of Harm Threshold

    Another idea that's being flagged as a good thing – but is not, in my opinion – is the "significant harm" clause.  It reads suspiciously like HIPAA's own "risk of harm" threshold, which essentially put the breached entities in charge of deciding whether a data breach would result in harm to individuals.

    The position has been compared to putting the fox in charge of the chicken coop.  The wolves in charge of the sheep.  Children in charge of the candy store.  You get the idea…it's a bad idea.

    Of course, a perfectly logical reason is given for why there should be a significant harm clause: constant notification over something that usually turns out to be nothing (the loss of a USB drive, which happens often enough but usually leads to no harm, is mentioned) will just serve to unnecessarily scare people.  In the long run, it could lead to a "breach notification tolerance" where people ignore such things.

    This could very well be, but it still doesn't resolve the conflict of interest.  Furthermore, what if the breached organization makes the wrong call (no nefarious intentions involved)?  Victimized individuals wouldn't know why, when, or how they became data breach victims.  Worse, they may associate their troubles with another organization that did notify people but wasn't the root cause of their problems.

    There's a reason why laws are being re-written, updated, and modified to remove harm thresholds.

    Related Articles and Sites:
    http://www.canadianlawyermag.com/5334/Data-breach-disclosure-law-could-bring-fines.html
    http://www.databreaches.net/data-breach-disclosure-law-could-bring-fines-in-canada/
     
  • HIPAA Encryption: Why Use NIST-Validated Encryption?

    The site gawker.com has some click-bait titled "Public NYC Taxicab Database Lets You See How Celebrities Tip."  Despite the gossipy nature of the title, the article goes into the specifics of how this was possible despite the information being "protected."  It's not a giant leap from there to why one must ensure the use of strong encryption for HIPAA-protected data.

    NYC Releases Data to Researcher

    A data analyst was the recipient of an "enormous database of every cab ride taken in New York City in 2013" (legally obtained from NYC officials).  The city did make attempts to anonymize the data via a method known as "hashing".

    Hashing is known as a "one-way algorithm": it's easy to take A and convert it to B, but it's practically impossible to figure out where B came from.  For example, say I have the number 12, I drop it into a machine, and it comes out as 71.  What is the relation between the two?  Who knows?  There's no way to tell exactly how 12 became 71.  Perhaps the machine added 59.  Or maybe it multiplied by 5 and added 11.  Or maybe the formula subtracts the input from 53, which leaves 41; then divides by 3, rounding down if a decimal is involved (which gives 13); then multiplies by 5 (that leaves 65); and finally adds the first digit of the result (the 6 of the 65), which gives 71.
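
    To make that determinism concrete, here is the last toy formula above written out as a few lines of Python.  This is purely an illustration of the same input always producing the same output – it is not a real hash function:

        def toy_hash(n):
            # The toy formula from the example above -- an illustration
            # of determinism, not a real hash.
            step = 53 - n                    # 53 - 12 = 41
            step = step // 3                 # 41 // 3 = 13 (round down)
            step = step * 5                  # 13 * 5 = 65
            first_digit = int(str(step)[0])  # the 6 of the 65
            return step + first_digit        # 65 + 6 = 71

        print(toy_hash(12))  # prints 71, every single time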

    Obviously, a real formula is made extremely complicated to prevent easy analysis.  The point is, there's no way to know what the exact formula is, and the output B depends entirely on what you've entered as the input A.  There's a way around this supposed complication, however.

    Since a hash creates a unique output for a given input (in our example above, 12 will always lead to 71), you just feed it everything, note what comes out, and thereby create a database linking inputs to outputs.  Of course, the help of a computer is needed to create such a database.  You can distribute the workload across multiple computers, run it 24/7, and soon enough you've got a huge database that you can reference.

    One of the oldest and most studied hashes is MD5.
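
    Here is a minimal sketch in Python of that lookup-table attack, using MD5 and a deliberately tiny candidate space (a real attacker would enumerate a vastly larger one across many machines):

        import hashlib

        # Hash every candidate input and record which input produced
        # which output, building a reverse-lookup database.
        lookup = {}
        for n in range(100000):
            candidate = str(n)
            digest = hashlib.md5(candidate.encode()).hexdigest()
            lookup[digest] = candidate

        # "Reversing" a hash is now just a dictionary lookup.
        mystery = hashlib.md5(b"12").hexdigest()
        print(lookup.get(mystery))  # prints '12'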

    All Your Passwords is Belong to Us

    MD5 is notoriously weak for a number of reasons.  As I already noted, it's old and heavily researched.  The former means that advances in hardware have aided in its defeat: faster computing means that running a list of candidates through MD5 is also accelerated, spitting out results faster and faster.  The latter means that people have humongous precomputed databases.  If the input to MD5 is, say, shorter than 20 characters, there's a very good chance its output has been documented somewhere.

    Indeed, MD5 is the reason why security researchers were apoplectic over certain online data breaches in the past: the breached companies said that the passwords were protected but failed to mention that they had used MD5.  You may as well proudly declare that your bank vault's combination has been set to 0-0-0-0-0.

    This is not to say that MD5 is useless.  There's a technique known as "salting," where random characters are added to the input to create (hopefully) undocumented outputs.  For example, if a user enters 12 as the input, perhaps $12j9wK (the salt) is added to the beginning.  Technically, the input becomes $12j9wK12, which would create an output other than 71.
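
    As a quick sketch (still in Python, and still using MD5 purely for illustration), salting looks something like this:

        import hashlib
        import os

        def hash_with_salt(value, salt):
            # Prepend the salt to the input before hashing, as in the
            # $12j9wK example above.
            return hashlib.md5((salt + value).encode()).hexdigest()

        # The fixed salt from the example in the text.
        print(hash_with_salt("12", "$12j9wK"))

        # In practice, the salt should be random (and stored alongside
        # the hash) so that precomputed tables for unsalted MD5 are useless.
        random_salt = os.urandom(8).hex()
        print(random_salt, hash_with_salt("12", random_salt))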

    But this can be defeated as well.  In the NYC taxicab database story, the researcher knew something about the inputs.  The database included NYC taxi medallion numbers, which follow a known format, as well as other predictable data.  This knowledge could obviously be used to reverse engineer the original data if it was hashed without salting.  But one could hit upon the original data even if a salt had been used – assuming a weak salt was used (remember, MD5 has been researched heavily and for a long time).
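
    To illustrate why a small, known input format guts this kind of protection, here is a sketch that assumes – purely for the sake of the example – that a medallion number is one digit, one letter, and two digits.  That gives only 10 × 26 × 10 × 10 = 26,000 possibilities:

        import hashlib
        import string

        # Enumerate every possible medallion number under the assumed
        # "digit, letter, digit, digit" format and hash each one.
        lookup = {}
        for d1 in string.digits:
            for letter in string.ascii_uppercase:
                for d2 in string.digits:
                    for d3 in string.digits:
                        medallion = d1 + letter + d2 + d3
                        digest = hashlib.md5(medallion.encode()).hexdigest()
                        lookup[digest] = medallion

        # Every hashed medallion in a released dataset can now be
        # looked up instantly.
        print(len(lookup))  # 26,000 candidates -- seconds of work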

    What Does this Mean for HIPAA Encryption?

    Basically, encryption algorithms are beholden to problems similar to – and beyond – the ones MD5 has shown over the years.  This is why HIPAA defers to the National Institute of Standards and Technology (NIST) when it comes to choosing a particular data security solution.  Although NIST cannot recommend a specific product, it does provide a list of features, parameters, and other requirements that an appropriate encryption solution must possess.  Furthermore, it will issue a certificate of validation for any solution it examines that meets those requirements.
    Choosing a validated solution means that you're using strong encryption.  Using something else could lead to problems down the line (for example, it could turn out that the solution had vulnerabilities or that it never worked in the first place).


    Related Articles and Sites:
    http://gawker.com/the-public-nyc-taxicab-database-that-accidentally-track-1646724546
     
  • HIPAA Encryption: Medical Record Theft From Shed Affects 40,000 In Jersey City

    Tens of thousands of patients were affected by a medical data breach in New Jersey.  Patient health records, collected between 1982 and 2009, were stolen from the shed of a Jersey City doctor.  The story would be unremarkable except for the number of people involved and the fact that the information was stored on paper.  This is one of those instances where HIPAA encryption software wouldn't have helped.

    But it does raise a question: are medical professionals kidding themselves when they proclaim that a data "container" of any kind (be it a laptop, a USB drive, a tub full of discarded x-rays, or a box full of files) was stolen for its resale value, and not for the data in it?

    Terrible Security

    According to nj.com, the records were stolen from the grounds of a doctor's practice.  On those grounds was a storage shed.  In that shed were stored the medical records of patients.  The thing securing this PHI?  Two latches that "had been cut with an unknown cutting tool."  (I'd bet good money the tool was a bolt cutter.  They're cheap and pretty portable.  It's why bike thieves use them.)

    The breach involved SSNs, dates of birth, addresses, and medical histories.  Despite knowing what type of information was breached, the doctor was unable to name any of the patients when filing a police report.

    There is no easy and foolproof way of securing paper documents.  This is especially true for single practitioners who cannot afford the professional services of storage experts like Iron Mountain (which, despite its name and line of business, has been at the center of a number of data breaches over the years).  What can an individual do?  Storing something in a shed sounds like a solution, although not an ideal one.

    Had it been me, I'd probably have done something similar but used a storage facility with security personnel and closed-circuit cameras.  It's not perfect, but it's better than a random shed.  Plus, there is a certain degree of safety in anonymity: a shed owned by a doctor obviously stores the doctor's stuff, but which unit among tens or hundreds of identical others belongs to him?  The latter makes a crime of opportunity much harder.

    Was It for the Data?

    While the contents of the shed were not listed, it stands to reason that it contained things easier to flip on the street than a bunch of documents.  Yet there's no mention of anything else being taken.  The patient files were all that was taken.

    And with medical data theft (and its illegal sale) growing at astronomical rates, the only conclusion to be drawn here is that the break-in is inextricably tied to data theft.

    Things become more nebulous, however, if theft involves something with tangible value, such as a laptop full of medical data.  Was the medical data the target or the laptop?  Many medical organizations will say it's the latter.  But there's no way to be sure.

    Furthermore, that's not the right question to ask.  The right question is, "seeing how a laptop full of PHI was stolen, what are the chances that the PHI will be breached?"  With incidents like the above, it wouldn't be an outlier to find that thieves check the contents of the laptop to see if they've landed a bigger fish than it first appeared.

    Related Articles and Sites:
    http://www.phiprivacy.net/nj-medical-records-of-40000-patients-stolen-from-shed-behind-doctors-office/
    http://www.nj.com/hudson/index.ssf/2014/10/medical_records_of_40000_patients_stolen_from_jersey_city_doctors_office_police_say.html
    http://www.nj.com/hudson/index.ssf/2014/10/medical_records_of_40000_patients_stolen.html
     
  • HIPAA Laptop Encryption Used In Stolen South Carolina Dept. Of Mental Health Computer

    I don't think I've ever come across a story where a laptop stolen from a car turns out to be protected according to HIPAA standards.  Well, there's always a first time for everything: according to thestate.com, laptop encryption was used on a stolen computer that belonged to the South Carolina Department of Mental Health.  And while the department has earned its Teflon shield, the same cannot be said of the employee.

    Parked, Unlocked Car

    If there was ever a reason to use encryption software, the following story is it.  An employee with the above-mentioned department lost her state-issued laptop, cellular phone, keys, and a Wi-Fi hotspot device, among other things.  And while she's a victim of theft, she cannot avoid blame, either.

    For you see, the items were stolen from a car.  A car that was apparently parked unlocked.  Regardless of how safe the neighborhood, doing so is just asking for trouble.

    Of course, it's cases like this that make the benefits of encryption salient: despite knowing that leaving your car parked and unlocked is a bad idea, people do it all the time.  The fact that there are valuables inside does not seem to matter.

    In light of such actions, does it surprise anyone that employee education on data security procedures and best practices doesn't really work?

    Chicken and Egg

    There is room for debate here, though: couldn't it be that the employee acted a bit irresponsibly precisely because she knew the laptop was encrypted?  You know, like how people started driving faster once seat belts became mandatory?  And then once again when airbags did?

    My guess is that the answer is "no" in this particular circumstance, since it wasn't only the laptop that was stolen.  A bunch of other stuff, presumably personal in nature, was stolen as well – and my guess is that these were not protected as well as the laptop had been.

    Why is This News?

    You got me.  Under HIPAA, the conditions are met for safe harbor from the Breach Notification Rule.  Furthermore, South Carolina is one of those states where the use of encryption provides safe harbor from the state breach notification law.  (And let's not forget that encryption provides real protection.  It's not just a box you tick to "prove" you're meeting some abstract bureaucratic requirement.)

    My personal guess is that an intrepid reporter found a case filed with the police department and got wind of the data breach that way.  Had it not been for the theft of personal items, chances are the public would never have heard of the incident.
     
    Related Articles and Sites:
    http://www.thestate.com/2014/10/17/3751592_sc-mental-health-workers-laptop.html?rh=1
    http://www.phiprivacy.net/sc-mental-health-workers-laptop-stolen-in-rock-hill/
    http://www.perkinscoie.com/en/news-insights/security-breach-notification-chart-south-carolina.html
     
  • Data Encryption: TD Bank Settles For $850,000 After 1.5-Yr Long Investigation Over Lost Data Tapes

    In 2012, TD Bank reported the loss of backup tapes.  With over 250,000 people affected, it was one of the top data breaches for that year.  The lack of data encryption on the tapes – plus the fact that this happened in Massachusetts, which is considered to have one of the most onerous data security and notification laws in the country – meant that the financial institution had to report the breach.

    The site databreaches.net is reporting that the bank has announced a multi-state settlement of $850,000.  This is on top of whatever monies were used to deal with the immediate aftermath of the data breach (notifying clients, setting up call centers for people demanding to know more, carrying out forensic investigations, etc).

    All in all, there's nothing new here when it comes to the details of the resolution.  You've got the promises to do better, to educate employees, to upgrade security, etc.  However, some things did catch my eye.

    9 AGs Involved, 1.5 Years

    Finding closure to the data breach took a year and a half: the breach was reported in October 2012 (although the data breach itself took place 8 months earlier).  In addition, nine state Attorneys General were involved – which presumably means residents of nine states were affected.

    Although there's no way to tell how often the bank had to accommodate demands for access and information, I imagine it couldn't have been easy.  It must have been costly, diverting manpower and man-hours to something that doesn't contribute to the bottom line at all.  (Data security solutions like encryption software are also expenditures without a positive ROI, but, like insurance, they at least offer an upside should something go wrong.)  To have done so for nine different AGs must have been something else.

    Backup Tapes will be Encrypted

    As part of the settlement, the bank has agreed that "no backup tapes will be transported unless they are encrypted and all security protocols are complied with."

    I'm surprised that this is even a thing…but then again, not so much.  Most reasonable people would of course take steps to ensure that similar incidents do not recur.  In TD's case, since it's impossible to guarantee that backup tapes won't go missing again (that's outside the bank's control), the best it can do is encrypt the sensitive data on the tapes (which is within its control).

    On the other hand, I've witnessed instances where organizations repeatedly fall victim to breaches that are similar in nature: an unencrypted laptop stolen from a car; an unencrypted laptop stolen from a van; an unencrypted laptop stolen from a home during a burglary; etc.  Despite the superficial differences, it's really just one kind of data breach – and one that can easily be combated by the same action: encrypt your laptops.

    Related Articles and Sites:
    http://www.databreaches.net/nys-attorney-general-schneiderman-announces-multi-state-settlement-with-td-bank-over-lost-backup-tapes/
     
  • Encryption Law: EU Ministers Say Encryption Must Provide Safe Harbor From Publicizing Data Breach

    Details are beginning to emerge for the European Union's data protection laws.  As in many laws before it, the use of encryption software is being encouraged in order to safeguard the private information of people who fall under the EU's jurisdiction.  A number of other details are being considered as well, such as the use of pseudonymized data to satisfy data security requirements.

    Personal Data Breach Notification

    According to out-law.com, the EU committee has agreed that breach notifications must be made to regulators within 72 hours of discovery – assuming that the breach "'may result in physical, material or moral damage' to individuals."  Whether this language will constitute a loophole that undermines the notification requirement remains to be seen.  (For example, the person overseeing the breach response may conclude that the risks are negligible – not because that happens to be the case, but because going public would be bad for business.)

    A second, similar problem that I see is the use of the term "undue delay":

    organisations would also face a new obligation to inform consumers "whose rights and freedoms could be severely affected" by a personal data breach of such an incident "without undue delay".

    What exactly is an undue delay?  Is it a day?  A week?  A month?  A year?  The use of such linguistic loopholes tends to create havoc for those the law intends to protect.  It's the reason why, in the US, the set of federal rules governing patient information (aka HIPAA) requires that a notification be sent as soon as possible, but no later than 60 calendar days after discovery.  Pass this mark and you're in breach of the law.

    Likewise, the final version of HIPAA struck out the "harm threshold" requirement, which read much like the "may result in physical, material or moral damage to individuals" language above.  The reason?  It became quite obvious that such a clause invited abuse.

    Encryption Software Use Encouraged

    The committee also agreed that the use of data encryption would be grounds for safe harbor from the reporting requirements, provided an organization has applied

    "appropriate technological protection measures" to protect the data that has been lost or stolen from being accessed by people not authorised to see it.

    "Such technological protection measures should include those that render the data unintelligible to any person who is not authorised to access it, in particular by encrypting the personal data."

    This is a good move, seeing how experts agree that the use of encryption provides more than adequate data security.  However, there is some room for improvement here, as not all encryption is created equal.  Perhaps the law should specify the use of strong encryption, or do what the HIPAA regulators have done and defer the definition of encryption to a body that oversees such matters (NIST, in HIPAA's case).

    Likewise, the acceptance of pseudonymization as a data security measure is quite puzzling, as experts are increasingly finding that this form of data protection doesn't really work, especially when combined with Big Data initiatives.  Laws tend to lag behind technology, but there's no reason why they have to when the evidence is well within reach.

    There is, of course, still plenty of time left for arriving at a final set of rules.  One hopes the EU will play it smart and not repeat the mistakes others have made before it.

    Related Articles and Sites:
    http://www.databreaches.net/businesses-should-not-need-to-publicize-personal-data-breaches-if-data-is-encrypted-say-eu-ministers/
    http://www.out-law.com/en/articles/2014/october/businesses-should-not-need-to-publicise-personal-data-breaches-if-data-is-encrypted-say-eu-ministers/
     