
AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size who want a scalable and easy-to-deploy solution. Centrally managed through a web based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, USB drive and hard disk encryption managed services.
  • Cost of a Data Breach: Target Says 2013 Breach Cost $252 Million So Far

    Target, the Minneapolis-based retailer, has released its latest financial figures, revealing that it has booked $162 million in expenses for the 2013 hack of its customer data.  This has led media sources to report that the hack has cost the company $162 million.  But the cost is actually much higher than that.

    Cost vs. "Cost"

    Quoting techcrunch.com on the latest Q4 earnings that Target published:
    The figure, revealed in the company’s Q4 earnings published today, includes $4 million in Q4, and $191 million in gross expenses for 2014, as well as $61 million gross for 2013. Target says that the gross number was offset in part by insurance receivables of $46 million for 2014 and $44 million for 2013.
    If you add up all these numbers (taking care not to double-count the $4 million from Q4, since it is already part of the 2014 figure), you do indeed get $162 million.  It's a whopper.  A bigger whopper is the true cost of the data breach, before the insurance offsets are subtracted: adding back the $90 million in receivables gives a figure of $252 million.
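
    To make the arithmetic explicit, here's a quick sketch in Python (all figures in millions of dollars, taken straight from the quote above):

        # Figures (in millions of USD) from Target's Q4 earnings, as quoted above.
        gross_2014 = 191       # gross expenses booked for 2014 (includes Q4's $4M)
        gross_2013 = 61        # gross expenses booked for 2013
        insurance_2014 = 46    # insurance receivables, 2014
        insurance_2013 = 44    # insurance receivables, 2013

        gross_total = gross_2014 + gross_2013              # 252: what the breach cost
        insurance_total = insurance_2014 + insurance_2013  # 90: covered by insurers
        net_total = gross_total - insurance_total          # 162: what it cost Target

        print(f"Gross: ${gross_total}M, net of insurance: ${net_total}M")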

    It may be semantics, but there's a difference between what the breach cost Target and what the breach cost.  I mean, if insurance had covered everything, would one go around saying that the hack didn't cost anything?  Obviously, that doesn't make any sense.  More importantly, it downplays the level of damage that the hack did to the company.

    Expect More.  Pay More

    The above figures are not the end of the story however:
    This is also not including whatever expenses Target may incur as a result of class action lawsuits filed after the breach, or wider damage to its reputation with customers
    The latter, to be honest, is probably a non-issue in the long run.  The same Q4 filing shows that the company beat analysts' revenue estimates, and cnbc.com reports that the company's comparable same-store sales rose in the last quarter.

    Such revelations are in line with past data breaches: even the grandfather of all retailer data breach debacles, the TJX data breach of 2007, showed that being the victim of a hack had a negligible effect on sales.  Coincidentally, that breach also cost the company around $250 million.

    Related Articles and Sites:
    http://techcrunch.com/2015/02/25/target-says-credit-card-data-breach-cost-it-162m-in-2013-14/
    http://www.databreaches.net/target-says-credit-card-data-breach-cost-it-162m-in-2013-14/
    http://www.cnbc.com/id/102451887
     
  • Connecticut Data Encryption: Senator To Propose Required Encryption For Insurance Companies

    It looks like Connecticut could be following in the footsteps of New Jersey: according to stamford.dailyvoice.com, state senators are considering legislation that would require insurance companies to encrypt any sensitive personal data.  If the proposal passes, Connecticut would become the second state I know of to make it mandatory for insurance companies to use data encryption.  New Jersey recently approved a bill that did the same for insurance companies in the Garden State, going as far as requiring the encryption of data on desktop computers.

    Anthem Breach Aftermath

    One of the largest data breaches to hit the US was made public in January: a breach of Anthem's database affected approximately 80 million members.  Over 1 million of them were residents of Connecticut, and enough of them contacted the state to merit considering legislation aimed specifically at the insurance sector.

    Over the past month, Anthem's data breach has received so much attention that I'm surprised the issue wasn't broached sooner.  With the exception of a handful of laws, current federal and state statutes are seriously lacking when it comes to data security.  Most merely recommend the use of encryption, even as they promise dire consequences in the event of a data mishap.  But a recommendation does not carry the same sense of urgency as a compulsory obligation.  No surprise, then, that many organizations treat the recommendation as optional.  Of course, they're not actually supposed to approach it that way, but why wouldn't they?  They're not obligated to do anything, and there's so much else to do (or so the real-world reasoning goes).

    Encryption not a Silver Bullet

    Of course, encryption is not a cure-all for data ills.  As knowledgeable people have pointed out after the Anthem data breach, there is very little the insurer could have done to protect its data with encryption, because the company's database is in use all the time.

    For example, technologies like disk encryption only protect information when a device is in "off" mode, be it a laptop, a portable hard drive, or a data server.  The analogy of a safe is not out of place: think of the encryption as the vault and the sensitive data as the money.  As long as the money is being used, it can't be in the vault, and hence it remains unprotected.  Put the money in the vault and it's protected… but it can't be used.  Likewise, data that is in use cannot be protected.
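
    A minimal sketch of that at-rest vs. in-use distinction, using Python's third-party cryptography package (an illustration of the general principle, not Anthem's actual setup):

        from cryptography.fernet import Fernet

        key = Fernet.generate_key()        # the combination to the "vault"
        vault = Fernet(key)

        record = b"SSN: 123-45-6789"       # hypothetical sensitive record
        stored = vault.encrypt(record)     # at rest: protected, but unusable as-is

        # To actually use the record (say, to answer a database query), it must
        # come out of the "vault" -- and it is unprotected while it is in use.
        in_use = vault.decrypt(stored)
        assert in_use == record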

    The Connecticut senators appear to be aware of this shortfall:
    That is why we are introducing this necessary, commonsense legislation to encrypt personal information. If we cannot prevent hackers from getting in, we can at least thwart their efforts by limiting what information they get and rendering it useless.
    It's becoming clearer and clearer that this is the kind of thinking we need.  The method of passing indirect, passive-aggressive legislation has run its course and sadly proven that it doesn't work.

    Related Articles and Sites:
    http://stamford.dailyvoice.com/politics/sens-leone-duff-propose-encrypting-personal-data
     
  • Data Breach Law: Wyoming Updates Laws On Data Privacy

    Wyoming has approved two Senate bills that update the state's data privacy laws.  Senate Files 35 and 36 expand on the definition of what constitutes a breach of personal information, and what steps organizations must take when a data breach takes place.  Missing from the update: a safe harbor clause that would protect organizations if data encryption is used to safeguard the data.

    Tokens and Security Questions are PII, Too (According to the Law)

    According to trib.com, Senate File 36 amended the definition of "personal identifying information" (PII) to include:
    birth or marriage certificates, health and medical insurance information, and "security tokens" like passwords or security questions such as "What is your mother's maiden name?" if they are linked to an account log-in or similar security procedure.
    It's a somewhat surprising development, not because the loss or theft of such data should be left out of the legal definition of PII, but because the definition is so specific.  One thing I've learned about legislators over the past five years is that they hate being too specific in data security definitions, because things in the tech world grow old and useless sooner rather than later.  The inclusion of security questions as PII makes sense, for example, but so will all the other security devices, mechanisms, and protocols developed in the future.  It's often simpler and more effective to create a catch-all clause to account for these.

    Toll-Free Numbers are Not Enough

    Also, the approved bills put a further onus on companies to alert people of a data breach.  Previously, a company only needed to set up a toll-free number that people could call to get more information about a breach.  Now,
    companies would have to provide information about the types of data that was breached, a description of how the breach happened, when it happened, what actions the company has taken to protect against future breaches and whether notification of the breach was delayed because of a law enforcement investigation.

    Related Articles and Sites:
    http://trib.com/news/state-and-regional/govt-and-politics/wyoming-senate-committee-tackles-data-privacy-bills/article_24f040a5-99a5-563a-a9d4-96bf685404cc.html
    http://www.databreaches.net/wyoming-house-committee-approves-data-definition-breach-notification-bills/
     
  • Smartphone Security: Phone Theft Drops In Cities As Kill Switches Take Hold

    Reuters is reporting the unsurprising news that London, New York City, and San Francisco are seeing dramatic drops in smartphone thefts now that the implementation of kill switches on devices has become mandatory.  The ability to encrypt the contents of these devices has existed for years (via smartphone encryption, whether turned on by default or enabled by the user).  The capability to render a smartphone useless from a distance is new-ish.  Of the two, however, the latter was always better poised to curb thefts.  The surprise is that it took this long for the kill switch to take center stage.

    Kill Switches and Smartphones: Made for Each Other

    Why is the kill switch having such an incredible effect?  Because it's attacking the primary reason why smartphones are being stolen: so they can be given or sold to people who want a smartphone (usually at a discounted price, possibly all the way down to zero dollars).

    The kill switch, once triggered, renders the smartphone useless.  This means that the thief can end up with an electronic brick in his hands despite his efforts.  I write "can end up" because the kill switch only works if the device is connected to the internet.  The kill signal must reach the smartphone for it to be effective, after all.  And, because of the nature of the smartphone and the apps on it, the odds are that the signal will be received.  Even if a stolen device is turned off immediately after it's stolen, it will have to be turned on when the time comes to sell it, putting the thief at the mercy of the kill signal.
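
    As a toy sketch of that flow (purely conceptual, not any vendor's or carrier's actual protocol), the kill order simply waits server-side until the device shows itself:

        # Toy model: the kill order sits server-side until the device connects.
        # All names here are illustrative, not a real API.
        KILL_LIST = {"IMEI-0001"}          # identifiers of devices reported stolen

        class Phone:
            def __init__(self, imei):
                self.imei = imei
                self.bricked = False

            def connect(self):
                # A powered-off phone never reaches this point; the moment it
                # goes online (to be resold, activated, etc.), the signal lands.
                if self.imei in KILL_LIST:
                    self.bricked = True

        stolen = Phone("IMEI-0001")
        stolen.connect()
        print(stolen.bricked)              # True: useless to the thief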

    The kill switch is an elegant solution that's singularly adapted to smartphones.  It doesn't work as well on laptops, for example, because a laptop doesn't have to be connected to the internet in order to be useful; there's a realistic possibility that the kill signal won't reach it.  Plus, the internet represents the only viable way to reach a laptop with a kill signal, whereas smartphones have cellular networks as well as the internet.

    It also helps that each smartphone comes with a unique global identifier which, as of right now, cannot be spoofed.  Laptops also come with such identifiers (such as the MAC address) but these can be spoofed with the right software.

    Data Security vs. Asset Protection

    All of this being said, a kill switch is not really a data security tool.   It's an asset security tool, like a laptop cable that discourages a thief from stealing a computer.  An example of a data security tool would be mobile encryption, which prevents access to the device's contents unless the correct password, finger swipe, biometrics, etc. are provided.

    Encryption doesn't pose a problem for thieves who steal smartphones (generally speaking) because, if they can sell a working device, they're happy.  That doesn't preclude thieves from finding it even better to steal a device that's not encrypted: any personal data they can monetize long after selling the physical good must be gold in their books.  Encryption's inability to act as an asset protection tool, however, doesn't make it less important or less effective.

    Another important aspect of the kill switch is that it can be coupled with encryption to remotely delete data.  More specifically, the data is erased by deleting a device's encryption key, which is used to unlock the encrypted data.  Since the key is gone, the encrypted data remains locked forever.
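
    A quick sketch of that key-deletion idea (sometimes called a crypto-erase), again using Python's cryptography package purely for illustration:

        from cryptography.fernet import Fernet

        key = Fernet.generate_key()        # the only way to unlock the data
        ciphertext = Fernet(key).encrypt(b"contacts, emails, photos...")

        # Remote wipe: instead of overwriting gigabytes of storage, destroy
        # the (tiny) key. On a real device the key lives in secure storage;
        # here, deleting the variable stands in for that step.
        del key

        # Any later attempt to decrypt needs the key -- which no longer
        # exists, so the ciphertext stays locked forever.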

    Related Articles and Sites:
    http://uk.reuters.com/article/2015/02/11/uk-usa-smartphone-killswitch-idUKKBN0LF09320150211

     
  • Encryption vs. Cyberinsurance: One's Risk Management, The Other's Risk Transfer

    The Anthem data breach is turning out to be big, and not only in terms of the number of people affected.  According to pymnts.com, quoting ft.com, Lloyd's of London has stated that cyber attacks are "now too big for private insurance companies to handle" after details of Anthem's hack were revealed.  This is another development that should make people take a long, hard look at using encryption software to secure sensitive data.

    Risk Management

    As breaches of personal and other sensitive information started to grow exponentially, and data security professionals kept pointing out that tools like disk encryption were meant to manage risk (not eliminate it), some people started to misinterpret the advice they were given.

    It was uncommon, yet not rare, to find people thinking along the lines of: well, if it's meant to manage risk, maybe we don't need these security tools; we'll just manage the risk in a different way.  And, presto, you had companies that signed up for cyberinsurance at the expense of using proper data security tools and drafting enforceable, well-thought-out computer usage policies.

    There are advantages to this short-sighted approach: huge savings on anything remotely related to technical issues, including IT labor; instant coverage, as opposed to the weeks or months (or years!) it could take to plan and implement a technical approach; less oversight and monitoring to worry about; etc.  The savings in time, energy, and money are astronomical.

    The problem is, this is a different kind of risk management: while the use of data security solutions represents a reduction in risk, the use of cyberinsurance represents a transfer of risk.
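
    One way to see the difference is with back-of-the-envelope expected-loss arithmetic; the numbers below are purely hypothetical:

        # Hypothetical numbers, for illustration only.
        p_breach = 0.10                  # annual probability of a damaging breach
        loss = 5_000_000                 # cost in dollars if one occurs

        baseline = p_breach * loss                 # $500,000 expected loss

        # Risk reduction (e.g., encryption): the event itself becomes less
        # likely or less damaging. Assume encryption cuts the probability to 2%.
        reduced = 0.02 * loss                      # $100,000 expected loss

        # Risk transfer (cyberinsurance): the breach is just as likely and just
        # as damaging -- the insurer simply absorbs, say, 80% of the bill.
        retained = p_breach * loss * (1 - 0.80)    # $100,000 retained by company
        # Note: the total expected loss is still the $500,000 baseline;
        # it has moved, not shrunk.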

    Transfer vs. Reduction

    From the point of view of a company looking to manage the risk of a data breach, perhaps it doesn't matter that it's transferring the risk as opposed to reducing it.  After all, on the surface it achieves the same thing: it clears away the risk.

    But there is the issue of permanence: as pymnts.com showed, insurance companies are increasingly unwilling to keep insuring against data breaches.  So, in the long run, companies may need to look into implementing data security tools after all (although that may not hold in the very long run; technology has a way of finding solutions to its own vexing problems, especially ones that don't originate in the natural world).

    Plus, legal protections don't extend to signing up for insurance.  And people are not less likely to sue you because you signed up for insurance (in fact, they may be more likely to bring legal claims against you).

    Last but not least, there is no guarantee that you'll be able to cash in on your insurance: insurance companies have gone to court over payments, asserting on technicalities that certain things aren't covered.

    Meanwhile, reducing risk is a win-win all around: legal protections abound in the form of safe harbor clauses in legislation; it wouldn't be hard to convince the courts that the loss of encrypted data does not represent a data breach, because the data is protected; and most people are quite aware that encryption offers real protection.  Plus, as opposed to merely transferring the (specifically, financial) risk, the threat of a data breach is actually reduced.

    Related Articles and Sites:
    http://www.pymnts.com/news/2015/we-cant-cover-cyberattacks-says-lloyds-of-london-insurer/
    http://www.databreaches.net/big-cyberattacks-crippling-private-cyberinsurance-firms/
     
  • HIPAA Encryption: Anthem Didn't Encrypt Data Stolen In Massive Hack

    An article at wsj.com points out that Anthem Inc., the health insurer that recently announced a massive data breach potentially affecting 80 million people, did not use health data encryption to secure the data that was stolen.  It also points out that applying encryption can be a "balancing act between protecting the information and making it useful."

    80 Million People Affected

    The details of the breach are as follows: Anthem Inc., which in years past was known as Wellpoint, found last week that hackers – potentially backed by the Chinese government – broke into the health insurer's online database.  The extent of the damage is as yet unknown, although the company has admitted that all of its business units were affected.  The company boasts 80 million members.

    The stolen information includes names, addresses, phone numbers, dates of birth, and Social Security numbers.  Financial information such as credit card numbers was spared.  The article points out that this could be "the largest computer data breach disclosed by a health-care company," meaning that it will also be the largest breach listed on the HIPAA "Wall of Shame."  Currently, the top spot is held by Science Applications International Corporation (SAIC), thanks to the 4.9 million military members who were affected when it experienced its own massive data breach in 2011.

    It looks like Anthem will blow SAIC out of the water.  Interestingly enough, the company has had a run-in with HHS before, over HIPAA data security violations: in July 2013, it settled with the department for $1.7 million, back when it was still known as Wellpoint (well, technically, Anthem and Wellpoint merged).

    Slowly Tilting Toward Encryption

    There's a reason why HHS does not require the use of encryption anywhere and everywhere sensitive personal data is stored: sometimes, it just might not be possible.  Consider, for example, an MRI machine.  The gigantic magnetic cocoon is only part of the machine; a computer that collects and processes the data is another part.  Whether that computer can be encrypted is not really up to individual hospitals and clinics, but to the manufacturers.  Likewise, there are myriad reasons why a particular database might not be encryptable (although, in this day and age, the odds of the reason being a technical one are remote).

    However, it seems that HIPAA covered entities will have to bite the bullet and find ways to ensure that all of their patient data is encrypted.  Forking over $1 million or more on a periodic basis, inviting the wrath of clients (and their lawsuits), having HHS/OCR oversee their operations for months on end after an incident, and dealing with the consequences for years (the breach that resulted in the 2013 Wellpoint settlement goes back to June 2010) are really not worth the trouble of forgoing encryption, or of failing to choose hardware that can be properly protected.

    Related Articles and Sites:
    http://www.wsj.com/articles/investigators-eye-china-in-anthem-hack-1423167560

     

     