Of course, that doesn’t mean you can’t use both (or that using both isn’t recommended). But encrypting an entire laptop’s hard drive holds one ace over individual file encryption: convenient, one-stop security.
Once a computer’s entire disk is encrypted, the only thing the end user has to do is remember the username and password required to access the laptop. The process is identical to typing a username and password to get into Windows, if the login prompt is set up. But unlike the Windows prompt, encryption provides security, whereas the Windows prompt provides only the sense of security. You can think of encryption as a Jumble word puzzle with a shot of testosterone (strong enough that even the US government uses it to safeguard its own documents, when it decides to use encryption, which is not as often as you’d think, based on the news), whereas the ordinary Windows username and password is the silvery gray layer on instant-win lottery tickets that is easily scratched off with a fingernail. The latter does an admirable job of hiding information, but nobody in the world would claim it to be “secure.” Would you trust such a gimmick with your bank account information?
So why encrypt the entire disk instead of individual files? In the old days, whole-disk encryption had its downsides: there was a performance hit, for example, and you’d notice that your computer was slower overall. In those same old days, though, a computer weighed over thirty pounds, including the CRT monitor with its burned-in screen, keyboard, mouse, and CPU with its chassis. Technology has progressed, and whole-disk encryption is now a completely transparent addition to any computer, be it a desktop or a laptop (or a smartphone, for that matter). Well, it is if you’re using AlertBoot. And “transparent” in this case means that the user doesn’t notice it at all: not when it gets installed, not while it’s encrypting, not when the computer is fully protected. We can thank advances in computer speed and technology for that.
However, advances in technology have brought problems as well. When computers operate, they use swap files, temporary files, and browser cookies. Users save information on a temporary basis but forget to delete it afterwards. Files may be deleted, yet they are easily recovered with the correct (and easily affordable) software. If one were to use file encryption and encrypt files one by one, that person might give up because there would be so many different files to encrypt. It becomes especially problematic when a person has to hunt down files that are created automatically, such as temporary files that may contain sensitive information. Instead of creating a complex set of policies to ensure that all bases are covered, whole-disk encryption is a simpler (and, due to its simplicity, a more effective) solution.
Plus, if and when a computer gets lost, stolen, or misplaced, the aftermath is easier to deal with. The public is alerted that an encrypted laptop was lost. That’s it. In fact, under the California data breach laws celebrated (to a degree) by consumer and privacy advocates, stolen devices that are encrypted do not require a public announcement. And the time not spent on PR efforts can be spent on finding ways to prevent a repeat of whatever scenario led to the potential data breach in the first place.
With file encryption, however, there would always be the question of “well, did our complex set of encryption policies protect the correct files?” That question is hard to answer; it takes a lot of manpower, with IT personnel going through backups and audit reports. And while everyone waits for the results, there will be plenty of speculation, recrimination, finger-pointing, and other public issues for the affected company to deal with.
Computer Security Day falls on November 30th of each year, namely, tomorrow. There are many things you can do to ensure that your laptop and desktop computers remain safe, ranging from running antivirus software and changing passwords to making sure liquids are not near your computer or electrical outlets.
Of course, some of them enhance your computer security in greater ways than others. Changing your passwords happens to be one of them. About two-thirds of people never change their passwords, and they are setting themselves up for a potential data breach, especially if their passwords are really short. The reasoning lies in very simple math.
Assuming the password in question is ****, meaning four characters constitute the password, how hard is it to crack? Well, if characters are not case-sensitive (i.e., “A” is the same as “a”) and special characters such as #$%^& are not used, there are 36 total options for each position (26 from the English alphabet and 10 from the digits 0 through 9). So exhausting all possible combinations, starting from aaaa, aaab, aaac, and so on and so forth, would take 36^4 = 1,679,616 tries. That’s a lot of tries, and in the day and age of Charles Babbage it might have been a deterrent, but a modern computer runs through them in the blink of an eye. If the password were only one character long, all possible “combinations” would be exhausted in 36 tries, which is next to nothing.
So, the longer the password, the better your chances of being protected. However, hackers also know that nobody wants to use something like “qzudsnasj234jans,” since it would be impossible to remember. Most people go for words or short sentences such as “god” or “iamgod” or “iamgodsgifttowomankind.” That last one is long, and I’ll say again that longer passwords are generally better because they take longer to crack. HOWEVER, simple words and sentences are discouraged as well, because hackers also run programs that try every word found in the English dictionary. So if one uses “supercalifragilisticexpialidocious” as a password, your data will be breached sooner rather than later. When you consider that the English language has approximately 988,000 words of varying lengths (per the Global Language Monitor; nobody really knows for sure), a long, simple word found in a dictionary is no more secure (in fact, less) than a random 4-character password with its 1,679,616 combinations, as pointed out above.
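The search-space arithmetic can be sketched in a few lines (26 case-insensitive letters plus 10 digits gives 36 symbols per position; the word count is the Global Language Monitor’s rough estimate):

```python
# Compare brute-force search spaces: a short random password vs. a
# dictionary attack that only has to try each known word once.

def brute_force_tries(alphabet_size: int, length: int) -> int:
    """Number of guesses needed to exhaust every combination."""
    return alphabet_size ** length

# Case-insensitive letters (26) plus digits (10) = 36 symbols per position.
random_4_chars = brute_force_tries(36, 4)

# Rough estimate of English words (Global Language Monitor figure).
english_words = 988_000

print(random_4_chars)                  # 1679616
print(english_words < random_4_chars)  # True: the dictionary is the smaller search
```

In other words, even the longest dictionary word sits in a smaller search space than four truly random characters.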
So, passwords ought to be a combination of letters and numbers. If you’re bilingual, you can use different combinations such as a long English word followed by numbers, followed by a word in a second language.
For encryption services such as AlertBoot, the password is the key to unlocking the contents of your digital devices. As such, passwords ought to be as secure as possible and changed regularly. You can also set up conditions for your passwords: whether palindromes can be used; whether anagrams of the username are allowed; whether already-used passwords can be reused; how often passwords must be changed; etc.
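AlertBoot’s actual policy engine isn’t shown here, so the following is just an illustrative sketch of the kinds of checks described: rejecting palindromes, anagrams of the username, and reused passwords. All names are invented for the example.

```python
# Hypothetical password-policy checks (not AlertBoot's real API).

def is_palindrome(pw: str) -> bool:
    """True if the password reads the same forwards and backwards."""
    return pw.lower() == pw.lower()[::-1]

def is_anagram_of(pw: str, username: str) -> bool:
    """True if the password is just the username's letters rearranged."""
    return sorted(pw.lower()) == sorted(username.lower())

def violates_policy(pw: str, username: str, previous: set) -> bool:
    """Reject palindromes, username anagrams, and reused passwords."""
    return is_palindrome(pw) or is_anagram_of(pw, username) or pw in previous

print(violates_policy("racecar", "alice", set()))    # True: palindrome
print(violates_policy("cilea99", "alice99", set()))  # True: anagram of username
```

A real product would layer more rules on top (length, character classes, rotation schedules), but the shape of the checks is the same.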
That last statistic is actually quite significant, apparently. It’s a metric for measuring contractual customers or subscribers that leave a company. I assume it would be a better metric for financial service companies or companies that provide encryption services like AlertBoot, than a grocery store or a retailer where the concept of a “subscription” doesn’t really make sense. I’m sure, however, that there must be some data-mining techniques for figuring out the churn rate for those industries as well.
From a simple numerical standpoint, 66 basis points look like something you can sneeze at, to hijack an oft-used expression. On the other hand, companies such as Wal-Mart dominate their industries thanks to differences measured in basis points (a basis point being 1/100th of 1%). When you operate on volume, like the megaretailers do, the difference between 2.65% and 2.60% is astronomical; I know this because stock prices get hit like crazy over small fluctuations in gross profit figures. I can only imagine what a difference of 66 basis points means when it comes to customer retention rates.
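The basis-point arithmetic above is simple enough to sketch:

```python
# A basis point (bp) is 1/100th of 1%, i.e. 0.0001 as a fraction.

def bp_to_fraction(bp: float) -> float:
    """Convert basis points to a plain decimal fraction."""
    return bp / 10_000

print(bp_to_fraction(66))  # 0.0066, i.e. 0.66%

# The gross-margin gap mentioned above, 2.65% vs 2.60%, is only 5 bp:
print(round((0.0265 - 0.0260) * 10_000))  # 5
```

Five basis points moving a megaretailer’s stock puts 66 basis points of churn in perspective.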
This makes me wonder about TJX’s position that their customers don’t feel inconvenienced by the massive data breach they had announced earlier this year. In court, they pointed to their ever-increasing revenue numbers as indirect proof that their customers had shrugged off the incident. Otherwise, their revenue numbers would be down, right? The thing is—and I’m not accusing TJX of any financial hanky-panky (heck, I haven’t even taken a look at their 10-Qs and 10-Ks in a while)—there are ways to affect revenue figures.
The most “celebrated” of these would be what Lucent did in its heyday, before the Internet bubble popped. Called channel-stuffing, it’s the practice of recording sales even when there’s a good chance that payment may not be received or the product may be returned. Channel-stuffing is done strictly to inflate revenue numbers. Another company engaged in such practices was Sunbeam, under celebrity CEO Al “Chainsaw” Dunlap, and it ultimately resulted in a Chapter 11 filing. However, there are other ways revenue numbers can go up without resorting to questionable accounting. If prices go up across the board due to inflation, revenue must go up as well, naturally. Or if people start purchasing big-ticket items. Or if fewer people start buying more from your stores, which can happen if customers hate the store but the competition has pulled out of the market. Where else are you going to shop? So, in more ways than one, TJX was lucky: either their customers are really forgiving or fortune decided to smile upon them.
It’s not rocket science that breaches in customer data will leave a terrible impression on one’s clientele, and depending on the level of egregiousness, a significant number of those customers will leave. Companies would be better served by ensuring the security of their customer data than by hoping that they’ll also be visited by Lady Fortuna. I wonder what TJX’s metrics showed in terms of customer churn.
For those who don’t know, this has been a very busy year for Formula One, not in terms of racing, but in terms of data theft. Over the past Thanksgiving weekend I had seen a blurb in a small Brazilian article about McLaren accusing Renault of stealing secrets (Formula 1 racing in Brazil…I never realized that it was big until F1 driver Ayrton Senna got a funeral fit for a king back in 1994, when he died on the tracks from a head-on collision). Anyhow, it turns out that theft is not what they’re accused of. Rather, Renault was accused of possessing stolen secrets. Secrets that belonged to McLaren-Mercedes. Ironically enough, McLaren was accused of possessing stolen Ferrari designs and technology earlier this year. What goes around, comes around.
What’s funny is that in both cases the data was delivered to the recipient team on floppy disks. We’re talking about the 3.5-inch squarish plastic things that couldn’t go near a magnet. There’s some talk in the blogosphere (and blog comment-o-sphere) that this doesn’t sound right: eleven floppy disks were used in the McLaren-Renault account. For those who don’t know, F1 teams possess some of the latest, greatest technology. I’ve even seen a picture of a gigantic treadmill built for a full-scale F1 race car, as if it needed to go on a diet and lose some weight by burning rubber on some (other) rubber. You have to imagine that guys who can build and operate a treadmill designed for cars have plenty of moolah. So the floppy disk part sounds curious, apocryphal at best. A holographic data storage module from Star Trek sounds more reasonable for these guys.
Except that floppy disks make sense in some respects. Everyone involved in F1 happens to be very competitive. The racers, yes, but also everyone else, including the engineers. Let’s face it—if F1 engineers were the hippie-kind that wanted to bring harmony and peace to the world via machines, they wouldn’t be in the business of designing cars that require wings to stay on the ground; cost millions of dollars per car; and have the fuel efficiency of Hummers dragging an M1 Abrams Tank (this last one’s probably a misstatement on my part, actually. Frequent pit stops for refueling means lost time, so my guess is that fuel efficiency can’t be that bad). No, they’d be designing low-maintenance, easy-to-operate water pumps for distribution in Africa. The point is, these particular engineers jealously guard their secrets. They’d be prime candidates for AlertBoot, just in case someone decides to steal their servers—with cages and all. But I get the feeling that they already have something similar to it protecting their computers. Plus, because data and design leakage is always a worry, I’m betting that they’ve got software monitoring and controlling what devices can be connected to the different ports found on computers, just in case someone decides to steal designs using a 10 GB USB thumbdrive.
In fact, similar functions can be found in AlertBoot as well, where one can specify which devices can be connected to the computer. It’s known as Port Control, and it can be tailored to fit different user profiles: for example, the waterboy cannot connect his USB drive to company machines; the owner can, but only to his office computer; the engineers can, too, on their own computers as well as each other’s, but computer number three is off-limits to all USB drives.
Where do the floppy disks come in? Well, if McLaren’s software does not support blocking floppy disk drives, then floppies pretty much become the only way to steal data: how do you protect against something your software cannot recognize? Port Control in AlertBoot differs in this respect, since you can whitelist or blacklist (or both) what gets access; that way, anything that doesn’t fit the profile cannot be used.
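The key property of combined whitelist/blacklist filtering is that an unrecognized device fails closed. Here’s a minimal sketch of that idea (device names and structure are invented for illustration, not AlertBoot’s actual implementation):

```python
# Hypothetical whitelist/blacklist device filter in the spirit of
# the Port Control feature described above.

ALLOWED = {"usb_keyboard", "usb_mouse"}    # whitelist: explicitly permitted
BLOCKED = {"floppy_drive", "usb_storage"}  # blacklist: explicitly denied

def device_permitted(device_type: str) -> bool:
    """Deny anything blacklisted; then deny anything not whitelisted.

    A device the software has never seen before fails the whitelist
    check, so "unrecognized" does not mean "allowed".
    """
    if device_type in BLOCKED:
        return False
    return device_type in ALLOWED

print(device_permitted("usb_keyboard"))  # True
print(device_permitted("floppy_drive"))  # False: blacklisted
print(device_permitted("zip_drive"))     # False: unknown, not whitelisted
```

Contrast that with a blacklist-only filter, where a device type the vendor never anticipated (a floppy drive, say) sails straight through.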
Incidentally, hardware is also the reason why the government of Ohio was so, shall we say, cavalier about the implications of lost government tapes earlier this year: without the correct hardware, the tape is useless. It’s like finding a Betamax tape when all you’ve got in your arsenal is DVD players. You’d either have to raid Sony’s museum of failures or build your own Beta VCR in order to see the contents, which is no easy task.
Or at least, it certainly feels like it. In addition to last week’s UK government public relations fiasco with the two lost CDs, and the other post I had regarding a break-in at an Indian government military research lab, where three computers were stolen, there are reports from Canada that a consultant for the Provincial Public Health Laboratory (PHL) of Newfoundland and Labrador took home a laptop containing patient information, creating a data breach.
From the reports, it doesn’t sound as if the laptop was stolen from the consultant. Rather, and this is freaky as hell, a security researcher called up the consultant at home to let him know that the researcher was able to access the data via the consultant’s internet connection. Can you imagine? You’re minding your own business, doing some government work, when a guy calls to let you know, “I can see your data.” It feels like the fourth sequel to I Know What You Did Last Summer; quick, someone get in touch with Jennifer Love Hewitt’s agent.
What’s surprising to me about this is that a security researcher, of all people, found the data breach and called the guy to alert him. What were the chances? Of course, the way they paint Canadian politeness and conscientiousness, you’d believe that Canadian hackers (or crackers, if you prefer) would do the same….
Anyway, the Canadian government has promised a full investigation. The data exposed included test results for HIV, hepatitis, and other infectious diseases; Medical Care Plan numbers; age; sex; and physicians’ names. I guess it’s more than enough information for carrying out medical insurance scams. Or for calling up someone and blackmailing them.
And, of course, Health Minister Ross Wiseman had to assuage fears that the PHL’s internal security practices might resemble those of the UK government. He said this incident is an isolated situation and doesn’t reflect the integrity of the systems in the laboratory or of the company that provides IT services to the PHL. The consultant in question breached policies, and, obviously, a computer taken outside the security zone (the lab premises, in this case) can’t be protected any more than the village idiot who runs outside the fortress walls during a siege (my words, not the Health Minister’s).
I agree with Mr. Wiseman. I also agree with the countless other people who point out, time and time again when similar data breaches occur, that policies always fail (which is only logical; the data breach happened because the policy failed). Now, that doesn’t mean policies don’t work, nor that they’re unnecessary. Quite the contrary: policies work most of the time, and most people follow them. But when all it takes is one instance of someone ignoring, or being ignorant of, an organization’s data security policies, “most of the time” is not good enough; sooner or later it will result in a (big) problem, at least in this day and age.
If organizations are going to rely on policies to create a data-safe environment, as opposed to just using unenforceable policies to cover their butts later on, perhaps what they should include is a policy that prevents individual employees (i.e., the end users of the computers) from breaking said policies; it seems like 80% of the problems originate from renegade employee acts. Policies such as “all computers must be encrypted” would be easy to implement, and helpful, with a service such as AlertBoot. You can deny employees the ability to uninstall encryption (which would create a security breach) while still allowing decryption (so that workers can actually use their computers while the data remains protected). That way, even if employees take their work laptops home, breaking one set of company policies, the information on those laptops is still protected thanks to another set of policies, should unforeseen circumstances arise.
So...any takers on whether there will be an Australian government data breach this week?
Computer security experts always say that there is no perfect security solution. My guess is that any security or safety expert, from OSHA inspectors to Secret Service agents, would say the same. Knowing this, the best one can do is minimize the risk of security breaches, data breaches, and thefts. Whether by design or by luck, the DMSRDE wasn’t subject to a particularly alarming data breach, but the potential that it could have been worse must be worrisome. Furthermore, the reviewing and planning committees must have been under the impression that what they had in place was enough for the circumstances. They’ll probably want to review their inventory as well as their current security policies and practices, and even consider encryption, such as that provided by AlertBoot, to further minimize any potential risks they find.