In my very first job we were told, repeatedly, to document everything and keep our personal journals up to date. Not just with what we did, but with the reasoning behind those decisions. This was so that, if anything happened to us, knowledge about the work, the project, and what had been tried and thought about would not be lost; if, perhaps, we were 'hit by a bus on the way to work'.
At that point, whoever was saying this looked toward a certain office or a certain spot in the parking lot. One of the project managers drove a VW bus and was most definitely not a good driver!
So the phrase 'document everything in case you're hit by a bus' entered into the work culture, even after that individual had left.
And for the rest of us it entered into our personal culture and practices.
Oh, and the WHY is very important. How often have you looked at something that seems strange and worried about changing it, in case there was some special reason, which you did not know of, for it being like that?
Unless things get documented .... heck, a well-meaning 'kid' might 'clean it out', ignorant of the special reason it was like that!
So here we have what appear to be undocumented controls.
Perhaps they are just controls that were added and someone forgot to mention; perhaps the paperwork for these 'exceptions' is filed somewhere else or is referred to by the easily overlooked footnote or mentioned in the missing appendix.
It has been pointed out to me that having to document everything, including the reasons for taking one decision rather than another, "slows down work". Well that's been said of security, too, hasn't it? I've had this requirement referred to in various unsavoury terms and had those terms associated with me personally for insisting on them. I've had people 'caught out', doing one thing and saying another.
But I've also seen that documentation save us from mistakes and rework.
These days, with electronic tools, smartphones, tablets, networking, and things like wikis as shared searchable resources, it's a lot easier.
Sadly, I still find places where key documents such as the Policy Manuals and more are still "3-ring binder" state of the art: PDF files in some obscure location, with no mechanism for commenting, feedback, or updates.
Up to date and accurate documentation is always a good practice!
And what surprises me is that when I've implemented those I get a 'deer in the headlights' reaction from staff and managers much younger than myself. Don't believe what you read about 'millennials' being better able to deal with e-tools than us Greybeards.
At the very least, this will apply 'many eyes' to some of the SSL code, and so long as the ssh pruning isn't wholesale slash-and-burn, cutting it back may prove efficacious for two reasons.
Less code can be simpler code, with decreased likelihood of there being a bug due to complexity and interaction.
Getting rid of the special cases such as VMS and Windows also reduces the complexity.
POSIX I'm not sure about; in many ways POSIX has become a dinosaur. Quite a number of Linux authors have observed that if you stop being anal about POSIX you can get code that works, and a simple #ifdef can take care of portability. In the 90% case there isn't a lot of divergence between the flavours, and in the 99% case the #ifdef can take care of that.
Whether SSH fits into the 90% or the 99% I don't know. The APIs for 'random' and 'crypto' are in the grey areas where implementations differ but also one where POSIX seems to be the most anal and 'lowest common denominator'. I suspect that this is one where the #ifdef route will allow more effective implementations.
We shall see what emerges, but on the whole the BSD team have a reputation for good security practices so I'm hopeful about the quality.
I'd be interested to see their testing approach.
Perhaps that's cynical and pessimistic and a headline grabber, but then that's what makes news.
What I'm afraid of is that things like this set a low threshold of expectation, that people will think they don't need to be better than the herd.
So I need to compile a list of ALL assets, information or otherwise.
That leads to tables and chairs and powerbars.
OK so you can't work without those, but that's not what I meant.
Physical assets are only relevant in so far as they are part of information processing. You should not start from those; you should start from the information and look at how the business processes make use of it. Don't confuse your DR/BC plan with your core ISMS statements. ISO Standard 22301 addresses that.
This is, ultimately, about the business processes.
I often explain that Information Security focuses on Information Assets.
Some day, on the corporate balance sheet, there will be an entry
which reads, "Information"; for in most cases the information is
more valuable than the hardware which processes it.
-- Adm. Grace Murray Hopper, USN Ret.
Some people see this as a binary absolute: they think that there's no need to assess the risks to the physical assets, or that somehow this is automatically considered when assessing the risk to information.
The thing is there are differing types of information and differing types of containers for them.
On the ISO27000 forum, one user gave a long description of his information-gathering process but expressed frustration over what to do with it all (the assets, the threats and so forth) and how to turn it into a risk assessment.
It was easy for the more experienced of us to see what he was missing.
He was missing something very important: a RISK MODEL.
The model determines what you look for and how it is relevant.
Java 7 Update 10 and earlier contain an unspecified vulnerability
that can allow a remote, unauthenticated attacker to execute arbitrary
code on a vulnerable system.
By convincing a user to visit a specially crafted HTML document,
a remote attacker may be able to execute arbitrary code on a vulnerable system.
Well, yes .... but.
In many of the InfoSec forums I subscribe to, people regularly ask the "how long is a piece of string" question:
How extensive a risk assessment is required?
It's a perfectly valid question we all have faced, along with the "where do I begin" class of questions.
The ISO-27001 standard lays down some necessities, such as your asset register, but it doesn't tell you the level of detail necessary. You can choose to say "desktop PCs" as a class without addressing each one, or even each different model. You can say "data centre" without having to enumerate every single component therein.
From the "left hand doesn't know what the right hand is doing" department:
Ngair Teow Hin, CEO of SecureAge, noted that smaller companies
tend to be "hard-pressed" to invest or focus on IT-related resources
such as security tools due to the lack of capital. This financial
situation is further worsened by the tightening global and local
economic climates, which has forced SMBs to focus on surviving
above everything else, he added.
Well, let's leave the vested interests of security sales aside for a moment.
I recently read an article about the "IT Doesn't Matter" thesis that basically said part of that case was that staying at the bleeding edge of IT did not give enough of a competitive advantage. Considering that most small (and many large) companies don't fully utilise their resources, don't fully understand the capabilities of the technology they have, and don't follow good practices (never mind good security), this is all a moot point.
Let us leave aside the poor blog layout, with Dejan's picture 'above the fold' taking up too much screen real estate. In actuality he's not that ego-driven.
What's important in this article is the issue of making OBJECTIVES clear, communicating them (i.e. putting them in your Statement of Applicability, what ISO27K calls the SoA), and keeping them up to date.
Dejan Kosutic uses ISO27K to make the point that there are high level objectives, what might be called strategy, and the low level objectives. Call that the tactical or the operational level. Differentiating between the two is important. They should not be confused. The high level, the POLICY OBJECTIVES should be the driver.
Yes there may be a lot of fiddly-bits of technology and the need for the geeks to operate it at the lower level. And if you don't get the lower level right to an adequate degree, you are not meeting the higher objectives.
Last month, this question came up in a discussion forum I'm involved with:
Another challenge to which i want to get an answer to is, do developers
always need Admin rights to perform their testing? Is there not a way to
give them privilege access and yet have them get their work done. I am
afraid that if Admin rights are given, they would download software's at
the free will and introduce malicious code in the organization.
The short answer is "no".
The long answer leads to "no" in a roundabout manner.
Unless your developers are developing admin software they should not need admin rights to test it.
This kind of question keeps coming up; many people are unclear about the Statement of Applicability in ISO-27000.
The SoA should outline the measures to be taken in order to reduce risks such as those mentioned in Annex A of the standard. These are based on 'Controls'.
But if you are using closed-source products such as those from Microsoft, are you giving up control? Things like validation checks and integrity controls are 'internal'.
Well, it's a bit of a word-play.
- The SoA contains exclusions for controls that are not applicable because the organization doesn't deal with those problems (e.g. ecommerce).
- The SoA contains exclusions for controls where a threat exists (and risks arise) but cannot be helped (e.g. A.12.2 Correct processing in applications) and no measures can be taken to reduce these risks.
In the latter case, a record must be present in the risk assessment stating that the risk (even if it is above the minimum accepted risk level) is accepted.
The key to the SOA is SCOPE.
On the ISO27000 Forum list, someone asked:
That's a very ingenious way of looking at it!
One way of formulating the risk statement is from the control
objective mentioned in the standard.
Is there any other way out ?
Ingenious aside, I'd be very careful with an approach like this.
Risks and controls are not, and should not be, 1:1.
The Navy's premier institution for developing senior strategic and
operational leaders started issuing students Apple iPad tablet
computers equipped with GoodReader software in August 2010,
unaware that the mobile app was developed and maintained by
a Russian company, Good.iWare, until Nextgov reported it in February.
OK, so it's not news, and OK, I've posted about this before, but ...
So the question here is: Why should software produced in the country where there are more evil-minded programmers be superior to software produced in Russia?