AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size who want a scalable and easy-to-deploy solution. Centrally managed through a web based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, USB drive and hard disk encryption managed services.

November 2013 - Posts

  • HIPAA Business Associate Full Disk Encryption: Management And Verification

    The HIPAA Final Omnibus Rule incorporated a number of changes that look small on paper but have huge ramifications, such as the striking of the "harm threshold" rule and requiring business associates to know and comply with HIPAA rules.  In essence, the changes strengthened the existing HIPAA legislation, closing any loopholes and ambiguities that were open for interpretation and further solidifying the focus on patient data security.

    The importance of these changes is easily proved by stories like the one below.

    Milwaukee Files HIPAA Complaint Against Dynacare

    According to news reports, the City Attorney for Milwaukee has filed a statement with the Office for Civil Rights at the Department of Health and Human Services, bringing public attention to the fact that 9,000 employees of the city were involved in a HIPAA data breach.  The blame has been assigned to Dynacare Laboratories.

    Specifically, an unencrypted USB disk with personal data of Milwaukee employees, and their spouses or domestic partners, was lost when a Dynacare employee's car was stolen.

    The thing is, Milwaukee never signed a contract with Dynacare.  Rather, Dynacare is a subcontractor to Froedtert Community Health/Workforce Health.  In other words, Dynacare is a business associate (and BAs account for a not insignificant number of breaches).

    What Can Covered Entities Do to Check on Business Associates?

    As HIPAA rules currently stand, and based on some of the conclusions the Office for Civil Rights at the Department of Health and Human Services has arrived at, it behooves covered entities (CE) to be aware of how their business associates (BA) are approaching ePHI concerns.

    For example, it might not be enough to formulate an agreement that forces the BA to use encryption software and protect ePHI on laptops, smartphones, and other electronic devices.  CEs should perform due diligence to see that BAs have stuck to the agreement.  But doing so is a tall order.

    After all, who has the extra time and resources to see whether a business partner has really encrypted its laptops?  Furthermore, would the BAs send in their laptops to be checked?  Or would personnel from the CE's office go to the BA's venue to check that laptops are indeed encrypted?  What if the associate is not in the US?  Remote sessions could work, but we've seen exacting companies that want to slave a hard disk to check whether the encryption is really working or not.

    Off-topic: interestingly enough, this "slave and check" move is not paranoid at all.  I've come across one instance (which I assume is a rarity solely based on the craziness of the story) where someone was looking for an FDE solution that wouldn't prompt the endpoint user for any type of "password" at all, be it an actual password, a token, a biometric scan, what have you.  These people were looking to comply with the letter of the law (FDE must be installed) but not with the spirit of it (can't be bothered to really protect anything because passwords are an inconvenience).

    So, what can a CE or a BA do when it comes to checking or proving that disk encryption is deployed and used in the workplace, and thus ensuring safe harbor from the HIPAA Breach Notification Rule should something go awry?

    Use a Report from an Independent 3rd Party

    Well, you could run a report.  Take AlertBoot for instance.

    Reporting is central to the cloud-based endpoint security solution because the deployment and installation of FDE on laptops and MDM on smartphones is done remotely.  The lack of direct physical oversight (not that it's really necessary with AlertBoot) means a substitute method is needed for ensuring that the deployment and installation processes completed correctly.

    Hence, a dedicated reporting engine was designed and incorporated into the AlertBoot process.  Fully customizable, the report engine's output can be tailored to show any information that's necessary.  And because it's accessed over the internet, a BA can easily show a CE (or the OCR) that it is complying with any contractual or legal responsibilities.
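
    As a rough sketch of what such a compliance report might boil down to — the device records, field names, and status values below are hypothetical illustrations, not AlertBoot's actual schema:

```python
# Hypothetical device inventory, as a management console might export it
devices = [
    {"id": "LT-001", "user": "alice", "fde_status": "encrypted"},
    {"id": "LT-002", "user": "bob",   "fde_status": "encrypting"},
    {"id": "LT-003", "user": "carol", "fde_status": "encrypted"},
]

def compliance_report(devices):
    """Summarize how many endpoints report a completed encryption state."""
    encrypted = [d for d in devices if d["fde_status"] == "encrypted"]
    pct = 100.0 * len(encrypted) / len(devices)
    lines = [f"{d['id']:8} {d['user']:8} {d['fde_status']}" for d in devices]
    lines.append(f"Compliance: {len(encrypted)}/{len(devices)} ({pct:.0f}%)")
    return "\n".join(lines)

print(compliance_report(devices))
```

    A report like this, generated on demand from the central console, is the kind of artifact a BA could hand over when asked to demonstrate compliance.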


  • HIPAA Security Breach: Encryption Is Nothing Without Proof

    If you're somehow involved in the medical sector, chances are that you or your business comes under the purview of HIPAA.  And while there are many aspects to it, when it comes to ePHI, the odds are that it's been drilled into your head: encryption, encryption, encryption.

    PHI data encryption is the only guaranteed way to obtain safe harbor from the Breach Notification Rule, which obligates HIPAA covered-entities and business associates to report any instances where patient information is breached (or, technically, is believed to be breached).

    Except that it's not really a guarantee.  The use of encryption software is merely the first step.  There's a little technicality that trips up a lot of people: you have to be able to provide documentation (i.e., proof) that the PHI was encrypted at the time of the breach, whether it involved a stolen laptop, a misplaced smartphone, missing backup tapes, whatever.

    An Example: UC San Francisco Laptop Theft

    Take for instance, the following statement from UC San Francisco (UCSF) regarding the theft of a laptop computer that stored ePHI (my emphases):
    UC San Francisco is alerting some individuals to the theft of a physician’s personal laptop computer that contained personal and health information.

    While the physician believed the laptop was encrypted, this could not be confirmed. As a result, the individuals involved are being notified.

    The security of protected health information at UCSF is of utmost importance. While there is no evidence at this time....

    (You can read the rest here).
    Basically, this is UCSF saying that they weren't able to take advantage of safe harbor because they weren't able to prove that the doctor's laptop was encrypted at the time of the theft.

    The use of "believed" in "the physician believed the laptop was encrypted" can be interpreted in multiple ways, but ultimately it stands that, if UCSF had been able to prove the use of encryption, it wouldn't be sending out notification letters, regardless of what the doctor thought, believed, imagined, etc.

    So, how does one prove that a laptop was protected with encryption when it got stolen?

    HIPAA Breach Prevention: The Status Log

    The easiest way is to do what AlertBoot FDE and MDM do: provide an audit log that shows the encryption status whenever a device connects to our central server.  Among other things, laptops and smartphones protected with AlertBoot connect with management servers every 24 hours (or when prompted by force).  This ensures that the latest security policies are pushed out to all endpoints, so that clients are not blindsided.

    Since devices report their statuses every 24 hours, and there's a list of such entries going back to the day the device was first protected, it doesn't take much to extrapolate and conclude that a device was encrypted at the time something went awry.
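
    That extrapolation can be sketched in a few lines — the log format and the two-day grace window below are illustrative assumptions, not AlertBoot's actual implementation:

```python
from datetime import date, timedelta

# Hypothetical check-in log: one (date, status) entry per daily report
checkins = [(date(2013, 10, 1) + timedelta(days=i), "encrypted") for i in range(20)]

def encrypted_at(log, incident_date, max_gap_days=2):
    """Return True if the last report on or before the incident said
    'encrypted' and was recent enough (devices report every 24 hours)."""
    prior = [entry for entry in log if entry[0] <= incident_date]
    if not prior:
        return False
    last_date, status = max(prior)
    return status == "encrypted" and (incident_date - last_date).days <= max_gap_days

print(encrypted_at(checkins, date(2013, 10, 12)))  # a theft on Oct. 12
```

    The point is that a continuous run of "encrypted" check-ins right up to the incident date is documentary proof, not just an employee's recollection.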

    This contrasts with standalone encryption software.  Yes, they are NIST-validated.  And, yes, they're effective at securing data.  And, yes, people who use these solutions can pat themselves on the back for looking out for their clients'/patients' digital welfare.

    But, no, there's no failsafe, foolproof way to prove that encryption was used.  And that matters when it comes to HIPAA regulations and fines.



  • UK Encryption and MDM: Poll Shows British Want Harsher Penalties For Companies That Breach Data

    According to news reports, a recent poll of 1,000 Britons has revealed dissatisfaction when it comes to news of corporate data breaches.  While the poll does not show the general public's opinion on whether safe harbor should be extended to companies that made use of data encryption solutions like AlertBoot (most legislation around the world does make an exception for properly protected data), it does make a number of obvious yet surprising revelations.

    Penalize Organizations, Reveal Data Breaches

    The poll has shown that:
    • People don't believe enough is being done to penalize organizations that suffer a data loss.
    • Legislation that forces organizations to go public when suffering data breaches was supported by two-thirds of the respondents.  (Current legislation requires only that affected people be notified.)
    • A little over half admit that "they would think twice about doing business" with organizations that suffered a data breach.
    • Nearly half also admitted that it would be impossible to prevent hackers from compromising their data.
    • Only 16% believe that government organizations are doing enough to protect data from threats.
    • Healthcare providers and financial services institutions were viewed positively when it comes to data protection; social media and gaming websites were viewed most negatively.

    Wisdom of the Crowds

    The British public may be on to something.  In the USA, there are a number of competing regulations and legislation that govern data security, especially when it comes to sensitive personal data.  Of the numerous federal, state, professional, and other laws and regulations, HIPAA is one of the most rigorous and open to the public.

    HIPAA's governing body – the Department of Health and Human Services (HHS) – is authorized to assess monetary penalties of up to $1.5 million (and already has, in a number of instances).  Furthermore, they're required to publicly list any data breach incidents that involve more than 500 people.  HIPAA covered organizations are also required to alert public media about a data breach if it involves more than 500 patients, regardless of whether they're able to directly get in touch with all of those affected.  (HIPAA has a separate but similar rule where public notice must be given if the affected entity is unable to reach all individuals.)

    And what's the result of all this?  Well, I can only testify to what we've experienced over here at AlertBoot, but I can confirm that, of all the different sectors we've contacted, those in the medical sector – especially those who are covered by HIPAA – tend to sign up for our services.

    While this could be because those in the medical sector are more sensitive when it comes to protecting "client" (i.e., patient) data, "to comply with HIPAA" is the number one reason given for signing up with AlertBoot.  Indeed, we saw an uptick once the deadline for the final omnibus rule approached (and passed).

    Based on the above, I can see why people, not just in the UK but anywhere in the world, would be interested in harsher penalties for organizations that play fast and loose with data, and would think that being more open and transparent about data breaches is a good thing.
  • FDE Security: Why Online Password Breaches Can Be Bad For Disk Encryption

    Earlier this week, I blogged on the massive Adobe password data breach.  Today, I've come across another password breach that impacted a considerable number of people: over 800,000 people were affected when MacRumors was attacked by hackers.  The site was following standard practice when it got attacked (salted, hashed passwords).

    I forgot to add last time that such breaches are dangerous, at least indirectly, for people who use full disk encryption like AlertBoot.

    MacRumors Hashed Passwords, Used "Individualized" Salts

    Long story short: someone figured out the password that belonged to a MacRumors moderator.  Once inside the site, the hacker made off with the password list for 860,106 people.  Thankfully, the passwords were not only hashed but also salted.  In fact, one news story notes that the salts were individualized.

    On the other hand, a separate story noted that the salts were 3 bytes long – that is, three characters long – and thus couldn't truly provide individualized salts for over 860,000 users.  A salt is a string of characters that is added to the password prior to hashing, so that the hashed result comes out completely different.

    For example, let's say that two users are using the same password, "thispassword."  Without a salt, hashing them would result in the same output (let's say, "298fj2nfs8wh23h8whewhw").  If the same salt were used for both ("thispassword" is salted with "321" and becomes "thispassword321"), then the result would be something else – "39009Q907T094JWEJWOIW2nsjsa" – but still identical for both users.  However, if separate salts were used ("321" and "322", respectively), then the two hashed passwords would end up completely different from each other.
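
    The effect is easy to demonstrate in a few lines of Python.  SHA-256 stands in here for whatever hash function a given site actually uses; the passwords and salts are the toy values from the example above:

```python
import hashlib

def hash_pw(password, salt=""):
    # SHA-256 as a stand-in for the site's real hash function
    return hashlib.sha256((password + salt).encode()).hexdigest()

# No salt: two users with "thispassword" get identical hashes
assert hash_pw("thispassword") == hash_pw("thispassword")

# One shared salt: the hash changes, but the two users still collide
assert hash_pw("thispassword", "321") == hash_pw("thispassword", "321")
assert hash_pw("thispassword", "321") != hash_pw("thispassword")

# Per-user salts: the same password now yields different hashes
assert hash_pw("thispassword", "321") != hash_pw("thispassword", "322")
```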

    So, separate salts are a good thing.  However, if they're only three characters long, it really doesn't take a hacker too long to guess and hit upon the right one, thanks to the power of today's computers: it's just a matter of trying all (popular) password and salt combos until you find one that works.
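
    To see just how small a three-character alphanumeric salt space is – 62³, or 238,328 possibilities – here's a toy brute-force sketch.  The use of MD5 and the sample password are purely illustrative, not a claim about MacRumors' actual setup:

```python
import hashlib
import itertools
import string

ALPHABET = string.ascii_letters + string.digits  # 62 chars -> 62**3 = 238,328 salts

def crack(target_hash, candidate_passwords):
    """Try every 3-character salt against each candidate password."""
    for pw in candidate_passwords:
        for combo in itertools.product(ALPHABET, repeat=3):
            salt = "".join(combo)
            if hashlib.md5((pw + salt).encode()).hexdigest() == target_hash:
                return pw, salt
    return None

# Build a demo target: "thispassword" salted with "321"
target = hashlib.md5(b"thispassword321").hexdigest()
print(crack(target, ["letmein", "thispassword"]))
```

    Even on modest hardware, exhausting the entire salt space for a candidate password takes well under a second, which is why short salts add so little protection against dictionary attacks.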

    True best practice would have been to use salts that are truly individualized; however, that also means that MacRumors would have needed at least 860,000 separate salts, which is problematic in and of itself.  (In the area of encryption software, where keys are truly individualized, encryption key management is a big deal.  A HUGE deal, which is why a managed disk encryption service like AlertBoot is a godsend to many companies.)

    Online Spilling into Semi-Offline World

    As I noted in the introduction, password data breaches for websites can be a problem for FDE users.  How so?

    Because people reuse passwords.  Just like there are many people out there who reuse their IDs and passwords across different portals, there are people who will use the same password anywhere they can, including as the password to their encrypted computers.

    Is the leaching of online passwords into the "offline" world – seeing as an FDE password is technically used offline – a realistic concern?  I think it is, for a couple of reasons.

    First, chances are that, if you have a large enough database, nearly all the passwords that can be imagined by a person will be reflected in it.  While the potential for password variation is infinite, there is a limit to how many random characters the human brain can hold (or type into the password field: even if it were possible for me to memorize 120 consecutive random characters easily, this does not guarantee that I won't make a typo every time I attempt to type them into the password field, causing me endless frustration and eventually leading me to use a shorter password).

    This means anyone looking to break into an encrypted system would do well to start running all the passwords in said database first, and then trying to figure out other ways to guess at the password.

    Which brings us to my second reason why breaching online passwords is bad news for FDE.  A database of online passwords can reveal not only which passwords, but also what types of passwords, are popular.  Such a database can be used by people who are looking to break into encrypted systems as an initial point of attack.  How long are passwords, in general?  When is a password essentially too long?  When forced to use an alphanumeric system, how many people add a number to the end of the password as opposed to starting with a number?  Are passwords generally words or combinations of words?

    In other words, it gives hackers a rough map of where they should begin.  And that's not a good thing.
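
    The kind of pattern statistics a hacker might pull from a leaked list can be sketched like this – the sample passwords are made up for illustration, not drawn from any real breach:

```python
import re
from collections import Counter

# Toy sample standing in for a leaked password list (made up for illustration)
leaked = ["password1", "letmein", "1qwerty", "dragon99", "monkey", "abc123"]

stats = Counter()
for pw in leaked:
    if re.search(r"\d$", pw):       # password ends with a digit
        stats["ends_with_digit"] += 1
    if re.match(r"\d", pw):         # password starts with a digit
        stats["starts_with_digit"] += 1

avg_len = sum(len(pw) for pw in leaked) / len(leaked)
print(f"average length: {avg_len:.1f}")
print(f"end in a digit: {stats['ends_with_digit']} of {len(leaked)}")
print(f"start with a digit: {stats['starts_with_digit']} of {len(leaked)}")
```

    Scaled up to hundreds of thousands of real entries, summaries like these tell an attacker where to concentrate a guessing campaign.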


  • HIPAA Encryption: FDE Not Multifunctional Enough?

    The other day I offered a short analysis of 29 new entries to the HIPAA Wall of Shame.  Among them, one entry specifically was so large that it overwhelmed the analysis (and led me to ponder whether it should be classified as an outlier or not).  It was also one of those instances where whole disk encryption like AlertBoot would have prevented a data breach.

    So, why wasn't FDE used?  As a news story shows, there were other security measures in place.  Of course, that didn't prevent the theft of the two laptops at the center of the data breach that affected 729,000 patients.  But this led me to the question: could FDE be given short shrift because it's not multifunctional enough?

    Laptops Stolen from AHMC Hospitals

    Two laptops were stolen from AHMC hospitals, causing the PHI breach of 729,000 people.  The laptops were stolen on October 12.  So far, things sound very familiar to the point of banality (not that data breaches should ever reach that state...but that's the times we live in).

    Except, AHMC had plenty of security in place.  According to one news report (my emphases):
    The thieves swiped the laptops from a video-monitored sixth-floor office on a medical campus that officials said is "gated and patrolled by security."
    Gary Hopkins, a spokesman for AHMC, said the hospital group called Alhambra police as soon as the theft was discovered Oct. 14. Security video showed that the theft occurred Oct. 12.
    AHMC Healthcare had already asked an auditing firm to perform a security risk assessment and it was following the recommendations, officials said. Administrators will now expedite a policy of encrypting all laptops, they said.
    So, based on the above, it sounds like encryption software was being used, just not on all laptop computers.  Furthermore, there were physical restrictions ("gated" and "security" personnel) and non-physical ones as well (security cameras).  While AHMC was not following best practices, one could argue that they were following HIPAA protocols.  After all, HIPAA doesn't require perfect security.

    Physical Security is not Data Security

    Consider what was in place at the time of the data breach:
    • Video monitoring, which we can assume was not actually being monitored in real time – perhaps we should refer to it as video "monitoring."
    • Security guards making their rounds.
    • Doors.

    What's a common factor among these?  They're "multifunctional."

    The video monitoring can be used to track people who are looking to steal laptops, but also people looking to break into medicine cabinets or monitoring people's behavior in general.  Security guards can prevent thieves from coming in or going out, but can also be summoned in case of other disturbances.  Doors are multifunctional in many ways, including granting privacy, tamping down ambient noise, securing assets, etc.  The point is, it wouldn't be hard to convince management to open up the company's purses when it comes to traditional security.

    Full disk encryption is anything but, however.  It's a tool that does one thing, and one thing only: it ensures that unauthorized people do not access a device's contents, especially in the event said device is lost or stolen.  Plus, it doesn't work to protect data if you have to send something via email (you'd either need email encryption or file encryption), and – depending on the encryption solution – data copied off of an encrypted laptop poses a risk, since such data usually falls outside of the solution's reach (but not with AlertBoot: USB memory devices are automatically protected, assuming the setting is turned on).

    So, making the case for laptop disk encryption, when other options seem to provide more bang for the buck, can be a hard task.  On the other hand, the right encryption solution is guaranteed to prevent a $1.5 million HIPAA fine if (or when) the time comes.  Video cameras, doors, and security guards don't offer that.

  • HIPAA Data Protection: Laptops And Desktops Still Account For Half Of Breaches

    According to recent reports, the addition of twenty-nine data breaches to the HHS "Wall of Shame" shows that laptop and desktop computers still account for approximately half of all medical data breaches.  This indicates two things: (1) too many covered entities are not using HIPAA-level encryption, and (2) computers still account for most data breaches, regardless of what your fears over hacking may be.

    Computers Account for Exactly Half or Slightly More than Half

    The loss and theft of computers – be they laptops or desktops, but definitely biased towards laptops – accounts for half of the twenty-nine data breaches...or slightly more than half.  It really depends on whether you want to consider one particular instance an outlier.

    Of the 29 HIPAA data breaches, all except one affected between approximately 500 and 10,000 people.  The outlier is a laptop breach that affected approximately 729,000 people.

    With the outlier:
    • 818,448 total patients affected.
    • 29 cases total.  Of these, 13 are attributed to laptops, 2 to desktop computers.
    • 761,624 patients affected due to laptops.  Another 1,466 attributed to "computers" which is interpreted as desktop computers.
    Without the outlier:
    • 89,448 total patients affected.
    • 28 cases total.  Of these, 12 are attributed to laptops, 2 to desktop computers.
    • 32,624 patients affected due to laptops.  Another 1,466 attributed to "computers" which is interpreted as desktop computers.
    Regardless of which dataset you go with, at least half of the 29 incidents are tied to the loss of computers.  These are incidents that can be easily prevented by using security software like AlertBoot's managed disk encryption.  Why are HIPAA covered entities (and their business associates) still avoiding laptop disk encryption?  I don't have an answer for that.  But, if I may, I must point out that masochists do exist in the world.

    In terms of people affected, the theft of endpoints accounts for either 93% or 36% of those affected.  Again, it depends on whether you want to include the outlier.  If you don't, well, 36% is not a figure one sneezes at.  Nearly 40% of people are being affected by, once again, something that can be prevented with an easy remedy?  Unconscionable.

    In an effort to present a little bit more data, these are the mean and median for both instances:
    • Mean, with outlier: 28,222.3
    • Median, with outlier: 2,812
    • Mean, without outlier: 3,194.6
    • Median, without outlier: 2,811
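
    The means above follow directly from the reported totals; a quick sanity check:

```python
# Figures as reported for the 29 new Wall of Shame entries
total_affected = 818_448
case_count = 29
outlier = 729_000  # the single laptop breach affecting ~729,000 people

mean_with = total_affected / case_count
mean_without = (total_affected - outlier) / (case_count - 1)

print(round(mean_with, 1))     # mean including the outlier
print(round(mean_without, 1))  # mean with the outlier removed
```

    (The medians can't be recomputed from the totals alone; they require the full per-incident list.)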

    Top Ten

    If you take the top ten data breaches above, laptops account for four out of the ten HIPAA data breaches.  In terms of people affected, 94.8% of people were affected because of a laptop theft.  If you prefer not to include the outlier, the figure falls to 28.5% (but becomes a list of top nine breaches).  It's less impressive, but not insignificant.

    In a sense, it's kind of like a fractal: you keep making smaller and smaller slices, and yet those ratios just keep maintaining themselves.  It's uncanny.


    One thing that troubles me is whether the one laptop that affected over 700,000 people should be treated as an outlier.  On the one hand, it's way out of proportion.  On the other, it's not a theoretical mistake of some sort.  That data breach happened.  Plus, a list of data breaches that go back to 2009 shows that at least 9 other incidents are even bigger.

    And when you consider that an analysis of the data shows that data breaches appear to follow a power law, it doesn't make sense to treat that particular data point as an outlier.