In my very first job we were told, repeatedly, to document everything and keep our personal journals up to date: not just what we did, but the reasoning behind those decisions. This was so that if anything happened to us, no knowledge about the work, the project, and what had been tried and considered would be lost if, perhaps, we were 'hit by a bus on the way to work'.
At that point whoever was saying this looked toward a certain office or certain place in the parking lot. One of the Project managers drove a VW bus and was most definitely not a good driver!
So the phrase 'document everything in case you're hit by a bus' entered into the work culture, even after that individual had left.
And for the rest of us it entered into our personal culture and practices.
Oh, and the WHY is very important. How often have you looked at something that seems strange and worried about changing it in case there was some special reason for it being like that which you did not know of?
Unless things get documented ... well, a well-meaning 'kid' might 'clean it out', ignorant of the special reason it was like that!
So here we have what appear to be undocumented controls.
Perhaps they are just controls that were added and someone forgot to mention; perhaps the paperwork for these 'exceptions' is filed somewhere else or is referred to by the easily overlooked footnote or mentioned in the missing appendix.
It has been pointed out to me that having to document everything, including the reasons for taking one decision rather than another, "slows down work". Well that's been said of security, too, hasn't it? I've had this requirement referred to in various unsavoury terms and had those terms associated with me personally for insisting on them. I've had people 'caught out', doing one thing and saying another.
But I've also had the documentation save us from mistakes and rework.
These days, with electronic tools, smartphones, tablets, networking, and things like wikis as shared searchable resources, it's a lot easier.
Sadly, I still find places where key documents such as the Policy Manuals are really still "3-ring binder" state of the art: PDF files in some obscure location, with no mechanism for commenting, feedback, or updating.
Up to date and accurate documentation is always a good practice!
And what surprises me is that when I've implemented those tools I get a 'deer in the headlights' reaction from staff and managers much younger than myself. Don't believe what you read about 'millennials' being better able to deal with e-tools than us Greybeards.
My digital camera uses EXIF to convey a vast amount of contextual information and imprint it on each photo: date, time, the camera model, shutter speed, aperture, flash. I have GPS in the camera, so it can record the location and elevation too. The EXIF format also allows for vendor-specific information and is extensible and customizable.
Unless and until we have an 'EXIF' for IoT, it's going to be lame and useless.
What is plugged in to that socket? A fan, a PC, a refrigerator, a charger for your cell phone? What's the rating of the device? How is it used? What functions other than on/off can be controlled?
Lame lame lame lame.
At the very least, this will apply 'many eyes' to some of the SSL code, and so long as the pruning isn't wholesale slash-and-burn, cutting it back may prove efficacious for two reasons.
Less code can be simpler code, with a decreased likelihood of bugs arising from complexity and interaction.
Getting rid of the special cases such as VMS and Windows also reduces the complexity.
POSIX I'm not sure about; in many ways POSIX has become a dinosaur. Quite a number of Linux authors have observed that if you stop being anal about POSIX you can get code that works, and a simple #ifdef can take care of portability. In the 90% case there isn't a lot of divergence between the flavours, and in the 99% case the #ifdef can take care of what divergence there is.
Whether SSH fits into the 90% or the 99% I don't know. The APIs for 'random' and 'crypto' are in the grey areas where implementations differ but also one where POSIX seems to be the most anal and 'lowest common denominator'. I suspect that this is one where the #ifdef route will allow more effective implementations.
We shall see what emerges, but on the whole the BSD team have a reputation for good security practices so I'm hopeful about the quality.
I'd be interested to see their testing approach.
An article on LinkedIn entitled "The Truth about Practices" started a discussion thread with some of my colleagues.
The most pertinent comment came from Alan Rocker:
I'm not sure whether to quote "Up the Organisation", ("If you must have a policy manual, reprint the Ten Commandments"), or "Catch-22" (about the nice "tidy bomb pattern" that unfortunately failed to hit the target), in support of the article. Industry-wide metrics can nevertheless be useful, though it's fatal to confuse a speedometer and a motor.
However, not everyone in the group agreed with our skepticism or with the observations of the article's author.
And Anton, aren't the controls you advocate so passionately 'best practices'?
NOT. Make that *N*O*T*!*!*! Even allowing for the lowercase!
I get criticised occasionally for long and detailed posts that some readers complain treat them like beginners, but sadly, if I don't write them, I get comments such as this in reply:
Data Loss is something you prevent; you enforce controls to prevent data leakage. DLP can be a programme, but one I find very difficult to support with a policy.
Does one have visions of chasing escaping data over the net with a three-ring binder labelled "Policy"?
Let me try again.
Policy comes first.
Without policy giving direction, purpose and justification, and supplying the basis for measurement, quality and applicability (never mind issues such as configuration), you are working on an ad-hoc basis.
In many of the InfoSec forums I subscribe to, people regularly ask the "How long is a piece of string?" question:
How extensive a risk assessment is required?
It's a perfectly valid question we all have faced, along with the "where do I begin" class of questions.
The ISO-27001 standard lays down some necessities, such as your asset register, but it doesn't tell you the level of detail necessary. You can choose to say "desktop PCs" as a class without addressing each one, or even each different model. You can say "data centre" without having to enumerate every single component therein.
Last month, this question came up in a discussion forum I'm involved with:
Another challenge to which I want to get an answer is: do developers always need admin rights to perform their testing? Is there not a way to give them privileged access and yet have them get their work done? I am afraid that if admin rights are given, they would download software at will and introduce malicious code into the organization.
The short answer is "no".
The long answer leads to "no" in a roundabout manner.
Unless your developers are developing admin software they should not need admin rights to test it.
This kind of question keeps coming up; many people are unclear about the Statement of Applicability in ISO-27001.
The SoA should outline the measures to be taken in order to reduce risks such as those mentioned in Annex A of the standard. These are based on 'Controls'.
But if you are using closed-source products such as those from Microsoft, are you giving up control? Things like validation checks and integrity controls are 'internal'.
Well, it's a bit of a word-play.
- The SoA contains exclusions for controls that are not applicable because the organization doesn't deal with those problems (e.g. e-commerce).
- The SoA contains exclusions for controls where a threat exists (and risks arise) but nothing can be done about it (e.g. A.12.2, Correct processing in applications) and no measures can be taken to reduce the risks.
In the latter case, a record must be present in the risk assessment stating that the risk (even if it is above the minimum accepted risk level) is accepted.
The key to the SoA is SCOPE.
Call me a dinosaur (that's OK, since it's the weekend and I'm dressed down to work in the garden) but ...
On the ISO27000 Forum list, someone asked:
That's a very ingenious way of looking at it!
One way of formulating the risk statement is from the control objective mentioned in the standard.
Is there any other way out?
Ingenious aside, I'd be very careful with an approach like this.
Risks and controls are not, and should not be, 1:1.
The Navy's premier institution for developing senior strategic and operational leaders started issuing students Apple iPad tablet computers equipped with GoodReader software in August 2010, unaware that the mobile app was developed and maintained by a Russian company, Good.iWare, until Nextgov reported it in February.
OK, so it's not news, and OK, I've posted about this before, but ...
So the question here is: Why should software produced in the country where there are more evil-minded programmers be superior to software produced in Russia?
You do do backups, don't you? Backing up to DVD is easy, but what software should you use?
So to have great (subjectively speaking) protection, your layered protection and controls have to be "bubbled" rather than linear (designed to slow down or impede a direct attack).
I have doubts about "defence in depth" analogies with the military that many people in InfoSec use.
Read what they are really talking about in those military examples: it's "ablation", that is, burning up resources, whether land (the traditional defence of the Russian Empire), manpower (which the northern states used in the US Civil War) or resources (the USA in WW2). They try to slow a direct, linear attack, hopefully to a standstill.
As the Blitzkrieg showed in dealing with the Maginot Line, if you "go around it" the defence isn't a lot of use.
Through the ages of war and politics and empire-hood and nation-hood and tribalism we've seen many threats and attacks and subversions used.
The reality is that many InfoSec defences are more like umbrellas: they assume that the attack is coming from a particular direction in a particular form. What's needed is an all-enclosing "bubble" rather than something linear like the 'defence in depth' model. But that gets back to the problem of the perimeter.
Many wifi enabled devices are really "spies inside the defensive perimeter".
There was a scare a while ago that various networking equipment was made by companies or fabricators in places that were, or might be, inimical or economic competitors, and as such might have subversive code hidden in it. No doubt this will come around again when journalists have nothing better to write about or the State Department needs to wave a big stick and scare the public -- its way of showing that it's "doing something".
But how can we tell? The reality is that "security specialists" are finding errors - never mind deliberately malicious code - in all manner of devices: pacemakers, insulin pumps, automobile throttle controllers. Will they find "errors" that allow subversion in mainstream IT devices like home wifi routers (aka the next generation of spambots) or home PC software (that's a no-brainer, isn't it!), never mind commercial databases?
I dedicate this to the memory of Ken Thompson
The hack to make the HP printers burn was interesting but, let's face it, a printer today is a special-purpose computer, and a computer almost always has a flaw which can be exploited.
In his book on UI design "The Inmates are Running the Asylum", Alan Cooper makes the point that just about everything these days - cameras, cars, phones, hearing aids, pacemakers, aircraft, traffic lights - has computers running it, and so what we interface with is the computer, not the natural mechanics of the device, any more.
Applying this observation makes this a very scary world. More like Skynet in the Terminator movies now that cars have Navi*Star and that in some countries the SmartStreets traffic systems have the traffic lights telling each other about their traffic flow. Cameras already have wifi so they can upload to the 'Net-of-a-Thousand-Lies.
Some printers have many more functions: fax, repro, and scanning as well as printing a document. And look at firewalls. Look at all the additional functions being poured into them because of the "excess computing facility" - DNS, Squid-like caching, authentication ...
I recently bought a LinkSys for VoIP, and got the simplest one I could find. I saw models that were also wifi routers, printer servers and more all bundled onto the "gateway" with the "firewall" function. And the firewall was a lot less capable than in my old SMC Barricade-9 home router.
I'm dreading what the home market will look like come IPv6.
I recall the Chinese curse: yes, we live in "interesting security issue" times!
But in the long run of things the HP printer hack isn't that serious. After all, how many printers are exposed to the Internet? We have to ask, "How likely is that?"
Too many places (and people) put undue emphasis on Risk Analysis and ask "show me the numbers" questions. As if everyone who has been hacked (a) even knows about it and (b) is willing to admit to the details.
No, I agree with Donn Parker: there are many things we can do that are in the realm of "common sense" once you stop and think about it. Many protective controls are "umbrellas". It's about how you configure your already-paid-for-and-installed firewall (you did install it, didn't you? It's not sitting in its box in the wiring closet), or about spending the money you would have spent anyway on the model that has better control and protection. You do this with your car - air-bags, ABS and so on - so why not with IT equipment? The "baseline" is more often about proper decisions and proper configuration than about "throwing money at it" the way governments and government agencies do.
What framework would you use to provide for quantitative or qualitative risk analysis at both the micro and macro level? I'm asking about a true risk assessment framework not merely a checklist.
Yes, this is a bit of a META-question. But then it's Sunday, a day for contemplation.
When does something like this stop being a checklist and become a framework?
COBIT is very clearly a framework, but not for risk analysis and even the section on risk analysis fits in to a business model rather than a technology model.
ISO-27K is arguably more technology (or at least InfoSec) focused than COBIT, but again risk analysis is only part of what it's about. ISO-27K calls itself a standard, but in reality it's a framework.
The message that these two frameworks send about risk analysis is
Context is Everything
(You expected me to say that, didn't you?)
I'm not sure any RA method works at layer 8 or above. We all know that managers can read our reports and recommendations and ignore them. Or perhaps not read them, since being aware of the risk makes them liable.
Ah. Good point.
On LinkedIn there was a thread asking why banks seem to ignore risk analysis ... presumably because their doing so has brought us to the international financial crisis we're in (though I don't think it's that simple).
The trouble is that RA is a bit of a 'hypothetical' exercise.
The documentation required and/or needed by ISO-2700x is a perennial source of dispute in the various forums I subscribe to.
Of course management has to define matters such as scope and applicability and the policies, but how much of the detail of getting there needs to be recorded? How much of the justification for the decisions?
Yes, you could have reviews and summaries of all meetings and email exchanges ..
But that is not required by, and has nothing to do with, the standard or its requirements.
The standard does NOT require a management review meeting.