And much like that.
A number of people outside InfoSec have pointed this out to me and I thought I'd pass it along with a couple of observations.
"If you think that technology can solve your problems then you
don't understand technology and you don't understand your problems."
It was this comment to the posting that caught my attention:
Some of us idiots used to think that any devs who weren't aware of buffer overflow before the Morris worm would be aware of it after the Morris worm. But in fact, your posting almost points out why many devs remain blissfully unaware:
"we developers were trained to focus on and typically only ever focused
on how legitimate users will use the product"
Close. Developers who want to have good jobs have to get trained to focus on how their managers pretend the product will be used. Anyone who thinks as far out as actual end users will get canned for not being
a team member. Anyone who thinks even further out about actual end misusers will be sued for being a hacker. But yeah, you explained it.
Long time readers will know that the Morris worm is my poster-boy for complaining that modern schools don't teach defensive programming.
It seems I'm not alone.
We can all see what went wrong here.
1. He should have gone by car, not by train.
2. He should have had the documents on his laptop.
3. The laptop should have been tethered in the trunk of said car.
4. The documents should have been clearly labelled
"*Not* about the F-35".
5. His laptop should have had its patches and AV up to date.
Just one question.
What's with this "hit by"?
That headline is trying to make out that the documents were the guilty party, and actively so.
Well, perhaps that's not the fault of the journalist; perhaps that's the stance the politician is taking 🙂
Some of us security droids find this frightening.
My colleague Miriam Britt managed to sum up the reasons why one should have separation quite succinctly and forcefully. With her permission I have copied her reasoning here, and I hope many people will either reference this or copy it to their own blogs. This kind of straightforward statement needs wide exposure.
Some of us security types were discussing policy, login notices and the like.
Someone commented on a badly written policy about the use of corporate e-mail and discussion about the company.
... I recently worked at a place that had a weak and overly specific email policy.
One day management realized there were other areas where "contraband communication" could take place: internet groups, blogs, forums, IM, Blackberries, etc. If the policy hadn't been written to deal specifically with "email", or had been more general about the level of technology, it would have saved us some hassle.
As it was, our policy development and approval process was too slow and cumbersome.
This is a generic issue and not limited to e-mail, IM, etc.
Long ago, in a policy development workshop that I was running, we thrashed out how to express ACCESS CONTROL so that it was perfectly generic, applied to everything from the parking lot to the executive washroom, and was in language everyone from the Board of Directors to the janitor could understand. Of course it applied to computer/network access, and its wording matched the requirements of the 'restricted access' logon notices.
I've been told the lawyers didn't like it, but the reasons seemed to boil down to the fact that the language was so straightforward and unambiguous that there wouldn't be enough billable hours if it came to a court case.
If you structure your policy management properly so there is a succinct POLICY STATEMENT and ancillary sections that address
- Consequences of Non-compliance
- Roles and Responsibilities
- Who/When/Where/Why Does this Apply?
- Guidelines for Interpretation
- Relevant Standards (Internal and External)
then it's a very effective and efficient way to work.
This is because
a) You don't need a lot of policies if they are "general"
b) It makes them easy to learn and remember
c) You don't have to keep going back to the board to get picayune changes approved
The definition of 'information security' seems limited to access control, which is very disappointing. The definition for 'computer security' is more comprehensive. Nevertheless, to a security professional both of these definitions are lacking.
What screams out to me, and this is very obviously my bias, is the lack of any mention of INTEGRITY in these definitions. As I keep pointing out, if you don't have integrity, any other efforts at security, be it information security, or "Gates, Guards, Guns and Dogs" physical security, be it backup and disaster recovery, be it access control, be it 1024-bit SSL, are all going to be pointless.
It's not until we follow a few links from the encyclopaedia entry, to Donn Parker's six fundamental and orthogonal attributes of security, that there is any mention of 'integrity'. Even so, that definition has only a link to 'data integrity'. There is a separate definition for 'message integrity'. While these specific items are important, they are details. What is lacking is a general definition of "Integrity". Once again, Fred Cohen's seminal 1997 article on the importance of integrity comes to mind.