

AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size who want a scalable and easy-to-deploy solution. Centrally managed through a web based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, USB drive and hard disk encryption managed services.
  • Convicted Terrorist Steals Hard Drive in Brussels Main Justice Offices

    According to the Associated Press, a man who was convicted (and released from prison) for terrorist activities is suspected in the theft of an external hard drive from a forensic doctor's office. The story is full of puzzling questions, including why the Belgian government is not doing an adequate job of securing their data.

    Motives Unknown

    Illias Khayari was convicted in Brussels, in 2016, for participating in terrorist activities (he confessed to decapitating a person in Syria as part of said activities, but somehow that didn't factor into his sentence of five years in prison). To the layperson, that already seems like a comparatively insignificant amount of time. Even worse, he was released after serving only half of it (what, how, and why!?); we know because he was arrested earlier this month for (allegedly) stealing a hard drive, among other items. The hard drive, located in the "forensic doctor's office at the city's [Brussels] main justice offices," contained "autopsy reports about the victims of the suicide bombings in Brussels in 2016." Khayari has denied the charges. Naturally, he didn't give a motive as to why he picked that particular location to burgle. The nature of his past conviction and the hard drive's contents would seem to provide a reason; however, it could also be a huge coincidence. The city of Brussels seems pretty confident it's the latter:
    Brussels prosecutor's office spokesman Denis Goeman said the hard drive was a backup of an office computer so it contained several files relating to different cases. He said there was no label on the drive that might have identified it as containing material related to the March 22, 2016 attacks on the Brussels metro and airport that killed 32 people. No files or evidence was lost and no cases have been compromised by the Jan. 3 theft. "It is therefore premature to say that this is a theft concerning precise documents, on the contrary," Goeman said. The suspect has denied the charges.
    It most certainly is premature. But would it be erroneous? Of all the different places Khayari could break into, he chooses that place? And steals that hard drive? The other items that were stolen in tandem could just be to make it appear as if the drive was not the target. The lack of a label on the drive is insignificant; why would you assume that a storage device in a forensic doctor's office holds something other than forensic evidence?

    Encryption in Government Facilities

    Belgium is part of the European Union. Indeed, the EU headquarters is located in its capital city, Brussels. As such, it would be weird if its government didn't follow certain privacy and information security directives and laws that have been passed over the years. And while the EU does not require the use of encryption, it does strongly encourage it. One imagines that its use would be especially encouraged where it involves sensitive data surrounding an ongoing case. Was encryption used on the stolen hard drive which, incidentally, is yet to be recovered? Based on the words and actions of the prosecutor's office, one would have to guess a careful "no." Those who could be potentially affected by the disk theft have been notified, a move that is not required if the information had been secured via encryption.



  • Scathing Government Report Concludes 2017 Equifax Breach Entirely Preventable

    This week, the US government published a report on the massive data breach Equifax experienced last year.  The overall conclusion shared by the House Oversight and Government Reform Committee is that the data breach – the largest one to date in US history and for the foreseeable future – was entirely preventable.  However, as one reads through the entire report, it becomes apparent that it wasn't preventable at all.

    Or rather, it was preventable the way cardiac arrests, skin cancer, and adult-onset diabetes (now known as type 2 diabetes) are preventable: by making sure that you're doing what needs to be done, all the time.  Eating right.  Exercising regularly.  Applying sunscreen.  The majority don't do most of the above, and millions around the world suffer the consequences every year.

    Likewise, Equifax fell well short in what they had to do to maintain a healthy and secure data environment. To say that the one incident was preventable is to give Equifax too much credit.  The company had set itself up for a data breach.

    It's a wonder a massive information security incident didn't occur sooner.



    At the heart of Equifax's data breach was a critical Apache Struts vulnerability which was disclosed publicly along with a security patch.  Obviously, hackers could and would take advantage of this vulnerability ASAP, and as often as possible, since there was a limited window of opportunity to exploit it: once patched, the window would close permanently.

    Equifax failed to apply it.

    Not that they didn't try. Equifax gave itself a 48-hour deadline to patch the weakness.  They scanned their network to see if the vulnerability was present but couldn't find it.  An unsurprising development, seeing how Equifax had no idea what they had, where they had it, and possibly how they had gotten it, the result of years-long acquisition binges that created a complex and fractured computing environment.
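
    Inventory is the crux here: checking whether a known-vulnerable Struts version is in use is trivial if, unlike Equifax, you actually know which applications you run and what they bundle. A toy sketch in Python (the version ranges are those from the public CVE-2017-5638 advisory; the application inventory is hypothetical):

```python
# Toy sketch: flag Apache Struts versions vulnerable to CVE-2017-5638.
# The affected ranges (2.3.5-2.3.31 and 2.5-2.5.10) come from the public
# advisory; the inventory below is made-up example data.

def parse_version(v):
    """Turn a version string like '2.3.31' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# Inclusive (low, high) ranges of vulnerable Struts releases.
VULNERABLE_RANGES = [
    ((2, 3, 5), (2, 3, 31)),
    ((2, 5, 0), (2, 5, 10)),
]

def is_vulnerable(version):
    v = parse_version(version)
    # Pad to three components so '2.5' compares as (2, 5, 0).
    v = v + (0,) * (3 - len(v))
    return any(low <= v <= high for low, high in VULNERABLE_RANGES)

# Hypothetical application inventory: app name -> bundled Struts version.
inventory = {
    "customer-portal": "2.3.30",
    "claims-api": "2.5.13",
    "legacy-disputes": "2.5.10",
}

flagged = sorted(app for app, ver in inventory.items() if is_vulnerable(ver))
print(flagged)  # the apps a scan should have flagged
```

    The version check is the easy part; the hard part, as Equifax demonstrated, is keeping the inventory dictionary complete and accurate across a sprawling environment.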

    Of course, this leads to the question of how they ultimately did learn of the breach.  The answer lies in expired security certificates.

    Equifax had allowed 300 security certificates to expire (bad).  More shockingly, they knew that these needed to be renewed and sat on it, in certain cases for over a year (terrible).  Once renewed, the company's IT department saw that something was very, very wrong.  Had those security certificates been active when the hackers exploited the Struts weakness, Equifax would have been aware of the breach immediately.  This is undisputed. 
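
    Catching a lapsed certificate before it blinds your monitoring is, again, an inventory problem. A minimal sketch of an expiry audit in Python (the certificate names and dates are hypothetical; a real audit would pull notAfter timestamps from the certificates themselves):

```python
# Toy sketch: flag expired certificates in an inventory of expiry dates.
from datetime import datetime

def parse_not_after(not_after):
    # OpenSSL-style 'notAfter' timestamp, e.g. 'Jan 31 00:00:00 2016 GMT'
    return datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")

def expired_certs(inventory, now):
    """Return (name, days_expired) for every cert already past expiry."""
    report = []
    for name, not_after in inventory.items():
        expiry = parse_not_after(not_after)
        if expiry < now:
            report.append((name, (now - expiry).days))
    return sorted(report)

# Hypothetical inventory: certificate name -> notAfter timestamp.
inventory = {
    "ssl-inspection-gateway": "Jan 31 00:00:00 2016 GMT",
    "internal-portal": "Dec 31 00:00:00 2017 GMT",
}
now = datetime(2017, 7, 29)  # roughly when Equifax noticed the intrusion
for name, days in expired_certs(inventory, now):
    print(f"{name}: expired {days} days ago")
```

    A report like this, run daily, turns "we sat on 300 expired certificates for over a year" into a five-minute ticket.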

    (Also undisputed is that the breach wouldn't have happened if they had applied the patch….but as already explained, they couldn't find the vulnerable Struts application.)

    So, it seems that, based on this, the report's authors concluded that the incident was preventable.  All it would have taken was applying a free patch to a vulnerability the company couldn't locate; a vulnerability that would only have been unearthed, indirectly, via the security certificates, had they not been allowed to expire; and only after the network had already been breached by hackers who were siphoning data away. Otherwise, nothing would have been flagged.

    How is that "preventable"? Under the circumstances, being breached is an active element of discovering that you can be and have been breached.  Unless, of course, what they meant was that the hackers could have been stopped if Equifax had a nominally "normal" infrastructure with an adequate (not even "good" or "stellar") approach to data security.

    But wouldn't that be true for pretty much all data breaches we've read about in the past ten years?


    Plus Ça Change, Plus C'est La Même Chose

    In September, there was a Congressional hearing that looked into Equifax's data breach.  Much of what's in the report echoes the hearing, although there are instances where the report further illuminates what was disclosed previously.  In a number of instances, the report even seems to contradict what was said in September. For example, you'd have to really stretch the truth to describe the breach as "human error" after reading this report.

    Despite all that's been revealed, Equifax has not really been held accountable for its actions, or lack thereof.  Certainly, civil lawsuits have been filed.  And, for a short while, its stock price was hammered.  But, aside from the circus show in September, the government hasn't really done anything to the company. Of course, this does not mean that changes are not on their way, the Equifax bill being one such example and Democrats in the Senate calling for an information fiduciary law being another.

    And yet, attempts to pass such bills have a long history of dying in Congress, so don't hold your breath.



  • HIPAA Notifications Now Due Within 30 Days Of A Breach If You're In Colorado

    According to recent reports, any HIPAA-covered entities that do business in Colorado will now have 30 days to notify Coloradans (or Coloradoans, if you prefer) of a data breach involving personal information, not the customary 60 calendar days under HIPAA. The reason? A bill on data security that went into effect in September.
    As usual, the use of encryption provides safe harbor. Indeed, the bill – HB18-1128 – goes out of its way to define data breaches as unauthorized access to "unencrypted" personal information. Furthermore, it notes that cryptographically protected information is still subject to the reporting requirements if the encryption key is compromised; that "encryption" is whatever the security community generally holds to be as such; and that a breached entity does not need to provide notification if it determines that "misuse of information… has not occurred and is not reasonably likely to occur."
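
    The notification logic described above can be sketched in a few lines of code. This is a simplification for illustration only; the statute's actual text has more nuance, and the parameter names are made up:

```python
# Toy sketch of the HB18-1128 notification logic as summarized above.

def notification_required(encrypted, key_compromised, misuse_reasonably_likely):
    """Return True if a breached entity must notify affected residents."""
    # Encryption provides safe harbor only while the key is uncompromised.
    if encrypted and not key_compromised:
        return False
    # Even for unencrypted data, no notice is owed if the entity determines
    # misuse "has not occurred and is not reasonably likely to occur."
    if not misuse_reasonably_likely:
        return False
    return True

print(notification_required(encrypted=True,  key_compromised=False, misuse_reasonably_likely=True))   # safe harbor
print(notification_required(encrypted=True,  key_compromised=True,  misuse_reasonably_likely=True))   # key lost too
print(notification_required(encrypted=False, key_compromised=False, misuse_reasonably_likely=False))  # misuse "unlikely"
```

    Note how the third case encodes exactly the loophole criticized below: the breached entity itself decides the misuse_reasonably_likely flag.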
    In the past, variations of that last part were heavily criticized. Naturally, it's in the breached entity's interest to declare that there won't be misuse of the hacked information, ergo no need to inform anyone about it. In 2018, however, it'd be a laughable position to take.  

    Surprising Development? Not Really

    Colorado's "encroachment" on HIPAA can take one aback, but that would be merely a knee-jerk reaction to unfamiliar news: to date, state privacy and information security laws have left HIPAA-covered entities alone. There's absolutely no reason it has to stay that way, though. After all, it wouldn't be the first time that a state decided to pass laws that are more severe than federal ones.
    Furthermore, think about the purpose of notifications. Supposedly, it's so (potentially) affected individuals can get a start on protecting themselves. If the past ten years have shown us anything, it's that receiving a notification 30 days after the discovery of a data breach can already be too late. In that light, waiting 60 days could be disastrous.
    It's a wonder that HIPAA hasn't updated its rules to reflect reality. HIPAA was, arguably, a trailblazer when it came to protecting personal information, with its no-nonsense approach and enforcement of the rules. That last one was a biggie: When Massachusetts General Hospital (MGH) was fined $1 million in 2011 – the largest amount at that time for a data breach – the medical sector not only took notice, they went into action. At minimum, entities started to encrypt their laptops; those paying attention did far, far more.
    At the time, HIPAA's 60-day deadline was seen as revolutionary by some (if memory serves, existing data breach laws didn't codify a deadline for alerting people). Of course, companies being what they are, covered entities ended up doing what most people feared they would: they put off sending notifications for as long as possible, like mailing letters on the 59th day.
    Not everyone did this, and HIPAA specifically prohibited the practice. A handful were fined as a result of purposefully delaying the inevitable. But waiting until the last possible moment to send notifications appears to be the ongoing behavior, regardless. The same thing happens for non-HIPAA data breaches, except that most states have set a 30-day limit, so companies send theirs on the 29th day.

    Update Those BA Docs!

    Unsurprisingly, Colorado's law also affects business associates to HIPAA-covered entities. All hospitals, clinics, private practitioners, and others in the medical sector should immediately update legal documents that establish obligations between themselves and BAs.
    Remember, a covered entity's data breach is the covered entity's responsibility, and a BA's data breach is also the covered entity's responsibility.  



  • Leading Self-Encrypting Drives Compromised, Patched

    Earlier this week, security researchers revealed that certain SEDs (self-encrypting drives) sold by some of the leading brands in the consumer data storage industry had flaws in their full disk encryption.

    Bad Implementation

    One of the easiest ways to protect one's data is to use full disk encryption (FDE). As the name implies, FDE encrypts the entire content of a disk. This approach to protecting files ensures that nothing is overlooked: temp files, cached files, files erroneously saved to a folder that was not designated to be encrypted, etc.
    There is a downside to full disk encryption: it can slow down the read and write speeds of a disk drive, be it the traditional hard-disk drive or the faster solid-state drive (SSD). In order for a computer user to work with the encrypted data, it must be decrypted first. This extra step can represent a slowdown of 10% to 20%. Not the best news if you invested in SSDs for the bump up in read/write speeds.
    The downside mentioned above, however, is mostly true when software-based FDE is used; that is, you used a software program to encrypt the disk, like Microsoft's BitLocker. For SEDs, the "self-encrypting" portion of their name comes from the fact that an independent chip for encrypting and decrypting data is built into the storage device. That means there is no impact on read and write performance. It does mean, however, that you've got a new point of failure when it comes to data security: if the chip is not secure enough, it could lead to a data breach.
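
    To illustrate why encrypting an entire disk is workable at all, here is a deliberately insecure toy in Python: each sector gets its own keystream derived from the disk key and the sector number, so any sector can be read or rewritten independently of the rest of the disk. Real SEDs and software FDE use AES (typically in a tweaked mode like XTS); the XOR construction below is for illustration only and must never be used for real data.

```python
# Toy per-sector "encryption": NOT secure, for illustration only.
import hashlib

SECTOR_SIZE = 32  # real disks use 512 or 4096 bytes

def keystream(key, sector_index, length):
    """Derive a per-sector keystream from the disk key and sector number."""
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + sector_index.to_bytes(8, "big")
                                 + counter.to_bytes(8, "big")).digest()
        counter += 1
    return stream[:length]

def crypt_sector(key, sector_index, data):
    """XOR with the per-sector keystream; the same call also decrypts."""
    ks = keystream(key, sector_index, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"example-disk-key"
plaintext = b"patient record #1234, sector 7__"
assert len(plaintext) == SECTOR_SIZE
ct = crypt_sector(key, 7, plaintext)
assert crypt_sector(key, 7, ct) == plaintext   # round-trips
assert crypt_sector(key, 8, ct) != plaintext   # wrong sector number fails
```

    The per-sector key derivation is the important idea: it lets the drive update sector 7 without re-encrypting sectors 0 through 6, which is what makes full disk encryption practical at disk speeds.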
    The researchers were able to extract the encrypted information by modifying how these chips behave. It was hard, time-consuming work but they figured out how to bypass the encryption entirely. In certain instances, they found that the data wasn't encrypted at all due to a misconfiguration. You can read the details in the researchers' paper.
    If you read the paper, you'll notice that the data hack is not for the faint of heart. While certain security professionals have decried the incompetence in how the SEDs' encryption was implemented (and, truth be told, they are right; some of these workarounds are very Wile E. Coyote), finding these flaws would have been nearly impossible for mere mortals, non-professionals, and amateur hackers.
    Indeed, it's quite telling that it took academic researchers to shine the light on the issue.

    BitLocker "Affected" As Well

    Oddly enough, BitLocker, arguably the most deployed full disk encryption program in the world today, was affected by the SED snafu. How, you may ask, seeing that BitLocker is software-based while the security issue affects hardware-based encryption?
    By default, BitLocker hands the reins over to the SED when disk encryption is turned on, assuming the drive being encrypted is an SED. On the surface, deferring to the SED's encryption makes sense. People don't care how their data is encrypted as long as it is encrypted, and foregoing software-based encryption means there is no performance hit. It appears to be a win-win.
    (There is a group policy setting to override this behavior. Security professionals recommend that this setting be used going forward. Being security professionals, it makes sense they'd place more weight on security than performance.)
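
    On a Windows machine, you can check which method BitLocker actually used with the built-in manage-bde tool (a sketch; the exact output wording varies by Windows version):

```shell
# Show BitLocker status for the C: volume. If the "Encryption Method"
# line reads "Hardware Encryption", BitLocker has delegated protection
# to the drive's own SED implementation.
manage-bde -status C:
```

    The group policy in question ("Configure use of hardware-based encryption for fixed data drives", set to Disabled) forces software encryption, but note that it only applies to volumes encrypted after the change; already-encrypted drives must be decrypted and re-encrypted.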

    Trade-Off: Speed vs. Transparency

    Relying on hardware-based encryption, however, means that you're relying on Samsung, Crucial, and other hardware manufacturers to implement encryption correctly. Have they? There isn't an easy way to know because they're not transparent about the design and implementation. The revealed vulnerabilities could be all there is to it… or could represent the tip of the iceberg.
    Hence the recommendation by the pros that software-based encryption be used: any solution that is worth its salt will ask NIST to validate it. Sure, the process is long and expensive; however, the ensuing uptick in business more than makes up for it. While NIST's stamp of approval does not guarantee perfect security (possibly not even adequate security), it does remove the possibility of terrible security implementations like the ones witnessed this week. And even if it's not validated, the ones that are transparent allow for examination. If something is glaringly wrong, it will be found and noted by researchers.
    All of this being said, reports confirm that the companies involved have either come out with firmware patches for the vulnerabilities in question or are working on them. Apply those as soon as possible, and rest easy (or easier) knowing that your data will be safer for it.
  • Anthem, Yahoo To Shell Out Additional Money Over Data Breaches

    This week saw additional headaches for two US companies involved in major data breaches (we're talking top ten in US history to date). Yahoo, now a part of Verizon, has agreed to settle a lawsuit for $50 million. In addition, Anthem, Inc. – the Indiana-based BlueCross BlueShield insurance company – has agreed to settle HIPAA violations by paying a $16 million monetary penalty to the US Department of Health and Human Services (HHS).
    Earlier this year, Yahoo's "other arm" – now known as Altaba, a separate entity from Verizon – settled with the SEC for $35 million. Likewise, just a couple of months ago, Anthem settled a lawsuit for $115 million.
    The final tally so far for the Yahoo breach: $85 million in settlements, over $30 million in lawyer fees (for the plaintiffs), and a $350 million haircut when Verizon acquired the company. That's a total of $465+ million.
    For Anthem: a total of $165 million.
    And let's not forget that these figures do not include what each company paid for their own defense (the numbers certainly must be in the millions).
    Conclusion: data breaches now suck for both breachees and breachers. It wasn't always like that.

    Historical Inflection Point?

    Ten years ago, a lawsuit centered around a data breach would have been tossed from court. Today, that hardly seems to be the case… although exceptions do exist, like Equifax. (Still, it's only been a little over one year since that particular data breach. Yahoo and Anthem's travails took years to be resolved, and with Equifax's data breach being in the top five information security incidents of all time, it's still too early to tell whether the credit-reporting agency will join the two companies' dubious circle of honor).
    It may, perhaps, be too early to declare that the days of conveniently ignoring data security, in the belief that there will be little to no blowback when it happens, are really over. Still, there are many signs that this is a watershed year, including:
    • People are leaving social media platforms or decreasing their use, mostly due to privacy and data security concerns.
    • Over the course of ten years, pretty much everyone has been affected by a data breach. Chances are that everyone knows someone who has been affected quite negatively. Even judges who in the past couldn't see what the big deal was. Nothing like hitting close to home to understand what's what.
    • Greater and greater fines are being imposed for data breaches, a direct result of continuing and ever-expanding information security incidents.
    • The EU passed this year some of the strongest privacy laws yet.  



  • Google and Google+ : Data Breach or Not?

    This week's revelation that Google covered up a data breach connected to Google+, the much-unused Facebook competitor, has spilled a lot of digital ink. Unsurprisingly, most of it is unsympathetic to Google. One exception was an article noting that "the breach that killed Google+ wasn't a breach at all."
    And, on the face of it, it's true. As far as Google knows, the "data breach" (in reality a bug that could have allowed a data breach) was never exploited. Its logs show nothing. And, in order to make use of this exploit, a person had to request permission for access to an API (Application Programming Interface), which only 432 people did. In the end, Google estimates that 500,000 people could have been affected… if there was an actual data breach. We're talking theoretical potential here, not post facto possibility.
    And the most damning indication that this is not a data breach is the fact that the data that could have been exposed by the API bug wouldn't have actually triggered a breach notification. If you look through the list of data that could have been compromised, you'll see that it wouldn't qualify as sensitive or personal information under US data breach notification laws. Full names, email addresses, a profile pic? Unauthorized access to these does not merit a notification under any of the 50 US state laws dealing with data breaches.
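
    As a toy illustration of that test (the category list below is a simplification for illustration; real state laws differ in their details):

```python
# Toy sketch: do the exposed fields trigger a US-style breach notification?
# The sensitive-field list is a simplification; actual statutes vary by state.

SENSITIVE_FIELDS = {
    "ssn",
    "drivers_license_number",
    "financial_account_number",
    "medical_information",
}

def notification_triggered(exposed_fields):
    """A notification is owed only if some sensitive field was exposed."""
    return bool(SENSITIVE_FIELDS & set(exposed_fields))

# The Google+ bug exposed profile data, not statutory "personal information".
google_plus_exposure = {"full_name", "email_address", "profile_photo"}
print(notification_triggered(google_plus_exposure))  # False
print(notification_triggered({"full_name", "ssn"}))  # True
```
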
    Again, on the face of it, it looks like no big deal. If you dig into the details, however, you'll see some problems. First off, Google can't really know what happened because they only keep two weeks' worth of logs; the bug, on the other hand, went unfixed for over two years. Who knows what happened prior to the patch, outside of the two-week period?
    Second, the fact that fewer than 450 people applied for the API is little comfort when you realize that the Facebook Cambridge Analytica situation only required one renegade API user. (Perhaps we could take some comfort in the fact that "only" 500,000 people could have been affected, but we don't really know where that figure came from. Is it based on the severely curtailed log data? Or the total connections that the API requesters currently have? What if they dropped connections over the years, thus depressing the figure?)
    Still, despite the above, it looks like this data breach is not really a data breach. Facebook said the same self-serving thing about the Cambridge Analytica situation…but, in that case, data was exploited in an unauthorized manner.  

    EU is not US

    As noted above, the data that was accessible via the bug is not covered under data breach laws, at least not in the US. The US does not have an all-encompassing federal law; it's all done at the state level. And it was only earlier this year that the 50th and final state succumbed to the times and passed a data breach notification law (thank you, Alabama). Under these laws, what counts as a "reportable" data breach is strictly defined. When you look at the data at the center of the breach, it's obvious that it doesn't really pass the "personal information" test: without more "substantial" information like SSNs, driver's license ID numbers, financial data, etc., Google is in the clear.
    In comparison, the EU has stronger privacy laws, but the bug was found before these laws were strengthened quite recently. Still, a case could be made that the potential breach required public notification within the framework of the older European laws. For example, the UK (before Brexit) gave this example on what constitutes "personal data" as part of the EU's Data Protection Directive:
    Information may be recorded about the operation of a piece of machinery (say, a biscuit-making machine). If the information is recorded to monitor the efficiency of the machine, it is unlikely to be personal data…. However, if the information is recorded to monitor the productivity of the employee who operates the machine (and his annual bonus depends on achieving a certain level of productivity), the information about the operation of the machine will be personal data about the individual employee who operates it. [section 7.2, personal_data_flowchart_v1_with_preface001.pdf]
    As you can see, within the EU, there is a gray area as to what personal data is. It could be that Google is not out of the woods yet, legally speaking.

    Bugs Are Identified and Fixed All the Time

    As one article notes,
    There is a real case against disclosing this kind of bug, although it’s not quite as convincing in retrospect. All systems have vulnerabilities, so the only good security strategy is to be constantly finding and fixing them. As a result, the most secure software will be the one that’s discovering and patching the most bugs, even if that might seem counterintuitive from the outside. Requiring companies to publicly report each bug could be a perverse incentive, punishing the products that do the most to protect their users.
    Quite an accurate point. In addition, it should be noted that this literally is a computer bug and nothing more because it was discovered in-house. If a third party had found the security oversight and reported it to Google, it would have been a data breach: that person, as an unauthorized party, would have had to illegally access the data to identify the bug as such.
    In this particular case… well, you tasked someone with finding bugs and that person did find it. That's not a data breach. That's a company doing things right.  

    Hush, Hush. Sub rosa. Mum's the Word

    But then, why the secrecy surrounding it? Supposedly, there was an internal debate whether to go public with the bug, a debate that included references to Cambridge Analytica. If Google had been in the clear, the discussion would have been unnecessary.
    Or would it? With Facebook's Cambridge Analytica fiasco dominating the headlines at the time, Google couldn't have relished the idea of announcing an incident that, in theory, closely mirrors Facebook's – but has led to a different data security outcome. Thus, it is unsurprising that the bug, and the debate surrounding it, were kept quiet. (It was a cover-up, some say. But again, Google didn't technically have a data breach. Whether to go public truly was their prerogative.)
    Now that the world knows of it, though, it has led to the same outcome: a global scandal; governments in the EU and US looking into the situation; another tech giant's propriety and priorities questioned (arguably, a tech giant that was held in higher esteem than FB); the alienation and angering of one's user base.
    Going public with the bug could be seen as a "damned if you do, damned if you don't" sort of situation. But, when considering what Google's been up to lately, you've got to wonder what is really driving the company in Mountain View.  