The InfoSec Blog
25Apr/08

Are these “Top 10” dumb things or not?

At "10 dumb things users do that can mess up their computers" Debra Littlejohn Shinder brings up some interesting common failings. Lets look at her list, because I have a different take.

#1: Plug into the wall without surge protection
#2: Surf the Internet without a firewall
#3: Neglect to run or update antivirus and anti-spyware programs
#4: Install and uninstall lots of programs, especially betas
#5: Keep disks full and fragmented
#6: Open all attachments
#7: Click on everything
#8: Share and share alike
#9: Pick the wrong passwords
#10: Ignore the need for a backup and recovery plan

Well, they seem interesting, but ...
The big "but" gets back to one of my favourite phrases:

Context Is Everything

Very simply, in my own context most of this is meaningless. It may well be meaningless in yours as well.

Let's first look at the stated and unstated context, which should have been made clear up front.

The author mentions Windows XP a couple of times without making it clear which version, and makes only a passing reference to other versions of Windows. There is no mention of any other operating systems: Mac OS X, Linux, BSD, the OLPC, or even embedded systems in PDAs. I can surf the net with my trusty old Newton. More on that in a moment.

She also fails to mention the context in which the computer is being used. Is this a home personal system, a home office system, a small business, or a larger commercial enterprise with its own IT and InfoSec departments? This matters not only from the point of view of meeting these points but also because of the legal ramifications.

Many of us in InfoSec use the terms "diligence" and "care". We usually omit the word "due" so as to avoid the legal meaning and the gunnysack of baggage that gets dragged in. 'Diligence' means a constant and earnest effort and application. 'Care' means the effort is serious and devoted. Neither of these terms is used in the article. However, one would reasonably expect them to be part of the approach in a business of any kind, or even in a home setting where personal assets need to be protected and perhaps children cared for. The author fails to mention this too.

Plug into the wall without surge protection.

I'd rate this as 'necessary but not sufficient' for a number of reasons.
First and foremost, the author does not make it clear that a UPS and a surge protector are not the same thing. Yes, many UPSs include surge protection, but think about these two things for a moment.

  1. You can have surge protection but still lose data when the power fails.
    This isn't just about the work that you've done since the last 'save', although losing that can be serious. That loss of power may occur at a critical point for the hardware, causing corruption of the file system (disk drive, networked or USB). It is almost certainly going to cause a loss of your train of thought, and that may be very serious.
  2. Surge protection wears out.
    Most people are unaware that surge protectors have a limited life, and it's not measured in time but in how much energy (aka surges) they have to absorb. So one day your surge protector isn't going to protect you any more. FINIS. Game over. The surge gets through and your machine is toasted.
    How do you know when your protector has used up its surge capacity? Generally you don't, though some newer ones do have an indicator.
    What can you do about it? Not a lot, except buy a new one.

That's why I like using a high-end laptop as a workstation. The power-brick and the battery do protect against surges, and the battery acts as a UPS. Sort of.

But please note that not all UPSs are created equal. It's not just about battery power. I'll save that for another article.

Surf the Internet without a firewall.

While this is good advice in general, the specifics are the killer.

My firewall is a separate machine, an old HP Vesta P1 with 256Meg of RAM, a 30Meg disk and a CD reader. If you feel so inclined you could probably pick up something like this from the Salvation Army for about $10.
I run the IP-COP firewall on it. I've run other firewalls, including the Mandriva MDF with its sophisticated GUI. I loved playing with Shorewall, which is one of the most flexible open source firewalls I've met. But IP-COP is small, fast and reliable. It has plugins for caching and for handling Dynamic DNS, as well as many other functions if you choose to install them.

Why have I chosen to run a separate firewall rather than the software or modem-based approach that the author of the article suggests? There are many reasons, but prime among them is the principle of Separation of Duties. I'm a firm believer in the idea that each thing should do just one thing and do it well, and the idea of a 'security appliance', or of running the firewall on the host (i.e. the target), doesn't appeal to me.

Perhaps there should be a "solely" in there.

Neglect to run or update antivirus and anti-spyware programs

This is another "Context is Everything" situation.

At home, even though I have an 'always on' broadband connection, I have a Linux-based firewall and all my servers and laptops run Linux. It's not that Linux is guaranteed 100% protection against all forms of malware, but at least it's not the highly vulnerable situation of Windows that necessitates running AV software.

And let's face it, as Rob Rosenberger at VMyths points out, AV software is getting less and less effective while each cycle of malware is more capable, more aggressive and more insidious.

But it's not just me and it's not just Linux. I have a number of high-profile clients who put AV software on their corporate laptops and workstations ... but it is disabled. It's there, I'm forced to conclude, to satisfy the auditors. However, these organizations don't suffer from malware attacks for other reasons, most notably that they have strict control over outside access. For the most part, there is none. Internal users are not allowed to use the Internet except under special conditions. Incoming and outgoing mail is aggressively filtered.

We're beginning to see this kind of access control with products from Ironport (Cisco) and Proofpoint. These are "appliances" that bring such control within reach of smaller sites. In all probability most users of these products aren't going to use their full capability and will still want another layer of protection against malware.

Sadly, the most effective layer is also the weakest and the most easily subverted: user awareness and discipline. Don't open unexpected attachments, don't download and run strange programs, don't visit dubious sites. See below.

Please don't think that I'm saying having a firewall is an excuse for not keeping your software well maintained. There are many reasons for keeping up to date quite apart from making the software attack-proof. The mantra "If it ain't broke, don't fix it" is not a reasonable stance with something as complex as software. It may be broken in ways that you don't see or haven't seen yet. This is quite different from choosing not to apply a change because you've analyzed it and determined that it is not appropriate.

And let's not forget that a firewall has plenty of limitations - most are designed to protect the internal network from the outside world and assume that the internal network is trustworthy. Hence it's no use at all if an internal machine is infected by some other means.

Install and uninstall lots of programs, especially betas

I was at IT360 and heard David Rice, the author of "Geekonomics", speak on software quality. One point he made was that the large software vendors treat all users as the "beta testers" for their products. He says:

"Software buyers are literally crash test dummies for an industry that is remarkably insulated against liability, accountability, and responsibility for any harm, damages or loss that should occur because of manufacturing defects or weaknesses that allow cyber attackers to break into and hijack our computer systems."

So while this point may be a good one, we are all on the roundabout and can't get off.

Keep disks full and fragmented

This is a meaningless and unhelpful generalization.

Firstly, I see an amazing amount of nonsense published about de-fragmentation. It warrants a posting and discussion in its own right, but please, don't buy into this myth.

The second thing is that I DO keep a disk full and never run de-fragmentation on it. But then I have my hard drives partitioned. One contains the operating system, just what is needed to boot; another contains the system and libraries. These are pretty full, and apart from the upgrades and occasional patches (which are less frequent and less extensive with Linux than Windows) there is very little "churn" on these partitions. I can leave them almost full. This includes auxiliary programs, plus on-line documentation ("manual pages") and things like icons, wallpaper, themes and so on.

Next up is the temporary partition - /tmp in Linux parlance. It's the scratch workspace. It is cleaned out on every reboot and by a script that runs every night, but most programs clean up their temporary files after themselves anyway. This partition looks empty most of the time. There's no point de-fragmenting it and no point backing it up.

Another few partitions deal with what can be termed "archives". These may be PDFs of interest or archived e-mail. Backup of these is important, but they are in effect 'incremental' storage: there is no 'churn', just growth, so de-fragmentation is completely irrelevant.

So what's left? Partitions that deal with "current stuff": development, writing and so forth. These are on fast drives, aggressively backed up, and use journaled file systems for integrity.
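
To make the point concrete, here's a minimal sketch (in Python, purely for illustration) of how I think about that layout. The mount points and role labels below are hypothetical examples, not my actual configuration; the idea is just that each partition has a known churn pattern, and that pattern tells you whether fragmentation or backup could ever matter for it.

    #!/usr/bin/env python3
    # Toy sketch: report fullness for a set of hypothetical mount points,
    # each labelled by how much it "churns". Static partitions can sit
    # near-full; only the active ones change enough to be worth backing
    # up nightly, or worth worrying about at all.
    import shutil

    LAYOUT = {
        "/":        "static",    # boot and base system, rarely changes
        "/usr":     "static",    # programs, libraries, docs, themes
        "/tmp":     "scratch",   # wiped on reboot, never backed up
        "/archive": "archive",   # grows but never churns
        "/home":    "active",    # current work, aggressively backed up
    }

    for mount, role in LAYOUT.items():
        try:
            usage = shutil.disk_usage(mount)
        except (FileNotFoundError, PermissionError):
            continue  # this example mount point doesn't exist here
        print(f"{mount:10s} {role:8s} {100 * usage.used / usage.total:5.1f}% used")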

But overall I simply don't do ANY de-fragmentation. I think it's a waste of time for a number of reasons.

The first is that it simply makes no sense in any of the contexts above. The second is that given high-speed disks with fast head movement and good allocation strategies in the first place, it's not going to help.

The third and most significant is that since I use volume management software it can't possibly help.

I use LVM on all my Linux platforms to manage disk allocation. If you read up on it you'll see that a contiguous logical volume may not correspond to a contiguous physical allocation on the disk. Since LVM subsumes RAID as well, it may not even be on a single physical drive.

Now, after reading all that, speculate about how I do backups 🙂

Open all attachments

Good advice at last! Sadly, human nature seems perverse. People seem to get sucked into reading attachments and visiting dubious web sites (see below), and admonitions don't seem enough to change their behaviour.

Perhaps evolution has failed us; perhaps we need a Darwinian imperative so that people foolish enough to do this can no longer contribute to the gene (or is it meme?) pool.

Click on everything

More good advice, more efforts to overcome human stupidity.

Share and share alike

"Context is everything"

Oh dear. This doesn't make sense any more. To be effective in business you do need to share data. I don't need to go into detail, but I will mention that most businesses need a web site to share information with customers, prospects and investors.

There are now many web-based businesses built on sharing: Flickr, Facebook, LinkedIn and the like.

And let's not forget that the whole "Open Source" model is about sharing.

Pick the wrong passwords

There are two things I object to here.
The first is the hang-up with passwords. They are, to coin a phrase, "so twentieth century".

The problem isn't dreaming up passwords - we get nonsense like this:

"Help users create complex passwords that are easy to remember"

Let's face it, there's no real problem dreaming up passwords.
Certainly not for me. At school I had to learn by heart poems and passages from famous works, chunks of Shakespeare and that kind of thing. I can always pull out something, take first letters, mangle them however.
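
As a toy illustration of that first-letters trick, here is a little Python sketch. The line, the substitutions and the casing rule are all examples I'm inventing for this post, not a scheme I'm recommending you adopt verbatim:

    # Toy example: turn a memorized line into a password by taking the
    # first letter of each word, then mangling case and a few characters.
    def mnemonic_password(line: str) -> str:
        letters = [word[0] for word in line.split()]
        subs = {"o": "0", "i": "1", "e": "3", "s": "5"}
        out = []
        for i, ch in enumerate(letters):
            ch = ch.upper() if i % 2 else ch.lower()
            out.append(subs.get(ch.lower(), ch))
        return "".join(out)

    print(mnemonic_password("Shall I compare thee to a summer's day"))
    # prints "51cTtA5D": easy for me to reconstruct, meaningless to anyone else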

But the real problem, whether you have this repertoire or whether you use password-generator software, is remembering them. Oh, and forgetting them when you have to change them. Oh, and knowing which one applies where.

This is the point that Rick Smith makes in his book, "Authentication", and it is why people write down passwords, or use passwords that are essentially mnemonics, or use the same password in many situations.

Twenty years ago I only had to deal with a few passwords; now I have to deal with hundreds. Almost every web site I visit demands that I log in.

We have reached a point where using 'strong' password technology is becoming a liability and using passwords is in and of itself an increasing risk. The likelihood that a new employee will re-use a password he's used on a public web site for his corporate login is high. The load on his memory is just too great. This is why there is a market for software that remembers your passwords. But how portable is it? USB drives, you say? I seem to lose USB drives with alarming frequency.

So, how happy are you with doing financial transactions over the Internet using just a password as authentication, even if it is over an SSL connection? I'm not very happy. This is a subject that deserves a long blog article in its own right, but let's just point out that banks in Canada and the US have chosen not to use the more secure "two factor" and "one-time pad" authentication systems that are normal for European and Scandinavian banks, and so have put their customers at risk. Not all the risks have to do with the Internet connection.

Some banks have moved to what they call "two factor" authentication. Well, it certainly isn't what the security industry calls "two factor". At best it might be called 'two passwords': instead of asking for just your password they will ask for the password and then one of a set of previously agreed questions like "what was the colour of your first car". It gives the illusion of security, but it's just a double password. Compare it to having a lock on your screen door and your front door. If the thief comes in by breaking a window or by stealing your keys (or the book you have your passwords written down in, since you have so many of them!) then this doesn't help.

Real "Two-Factor" authentication has two different things. A password is "something you know". The colour of your first car is also something you know. Its also something other people can know.

A real second factor would be "something you have", like the bank client card that you use with your personal identification number (PIN), which is "something you know". Both have to be used together. Someone might know - or guess - your PIN without you knowing about it, but if you lose possession of the card you do know about it.

Another factor is "something you are" - biometrics. Recognition of your fingerprint or iris along with a password.

Of course these more secure methods require more technology, which is why most web sites fall back to the only thing they are sure you have - a keyboard.
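
For what it's worth, here is a toy sketch of what the server side of genuine two-factor checking looks like: a password hash for "something you know" and a counter-based code from a shared-secret token for "something you have". This is a simplified, HOTP-style illustration with made-up names and values, not any bank's actual protocol:

    import hashlib, hmac

    def token_code(secret: bytes, counter: int) -> str:
        # "Something you have": a token that shares a secret with the
        # server and derives a short code from a moving counter.
        digest = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha1).digest()
        return str(int.from_bytes(digest[-4:], "big") % 1_000_000).zfill(6)

    def login(password: str, code: str, stored_hash: str, secret: bytes, counter: int) -> bool:
        knows = hashlib.sha256(password.encode()).hexdigest() == stored_hash  # something you know
        has = hmac.compare_digest(code, token_code(secret, counter))          # something you have
        return knows and has  # both factors must pass together

    SECRET = b"example-shared-secret"                       # hypothetical records for one user
    STORED = hashlib.sha256(b"correct horse").hexdigest()
    print(login("correct horse", token_code(SECRET, 42), STORED, SECRET, 42))  # True
    print(login("correct horse", "000000", STORED, SECRET, 42))                # almost certainly False

Knowing the colour of your first car adds nothing here because it lives in the same column as the password; the token code only counts as a second factor because it depends on a physical object the attacker would have to steal.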

Rick Smith's book is ...
"Authentication: From Passwords to Public Keys" ISBN 0201615991

See his home page at http://www.smat.us/crypto/index.html
He refers there to ..

A companion site, The Center for Password Sanity, examines the
fundamental flaws one finds in typical password security policies
and recommends more sane approaches.
http://www.smat.us/sanity/index.html

See also 'The Strong password dilemma' at http://www.smat.us/sanity/pwdilemma.html

And not least of all the cartoon at http://www.smat.us/sanity/index.html

Seriously: go read Rick Smith's book.

There is a lot of nonsense out there about passwords and a lot of it is
promulgated by auditors and security-wannabes.

Ignore the need for a backup and recovery plan

As you can see above, I've made things easy for backups.

One reason for this is that the real problem is not having a backup and recovery plan; it is the doing of it, making it a habit, a regular part of operations.

That is one reason most larger organizations use centralized services, so that the IT department takes care of backups. It's a major incentive for "thin clients", where there is no storage at the workstation that needs to be backed up.

It's also one reason that I partition my drives: so I can identify what is 'static' and what is 'dynamic'.
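
A minimal sketch of that habit, again in Python and with hypothetical paths: archive the partitions that churn, give the static ones an occasional image, and ignore the scratch space entirely.

    import datetime, pathlib, tarfile

    DYNAMIC = ["/home", "/archive"]        # hypothetical: the areas that change or grow
    DEST = pathlib.Path("/backup")         # hypothetical: a second drive, NFS mount, whatever

    DEST.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    for src in DYNAMIC:
        target = DEST / f"{pathlib.Path(src).name}-{stamp}.tar.gz"
        with tarfile.open(target, "w:gz") as tar:
            tar.add(src, arcname=pathlib.Path(src).name)
        print(f"archived {src} -> {target}")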

One of my great complaints about Microsoft Windows is that everything is on the C: drive. I very strongly recommend partitioning your drives. Having a D: drive and remapping your desktop and local storage there makes things so much easier. It also helps to have a separate partition for the swap area and for temporary files. Sadly, while this is possible and is documented (search Google for details), it's not straightforward. Which is sad, because it is a very simple and effective way of dealing with many problems. Not the least of which is that you can re-install Windows without overwriting all your data.

Posted by Anton Aylward
