In theory, consumers and businesses could punish Symantec for these
oversights by contracting with other security vendors. In practice, there’s
no guarantee that products from other vendors are well-secured, either
— and there is no clear way to determine how secure a given security
product actually is.
Too many firms take an “appliance” or “product” (aka “technology”) approach to security. There’s a saying that has been attributed to many security specialists over the years, and it is quite true:
If you think technology can solve your security problems,
then you don’t understand the problems and you don’t
understand the technology.
It’s still true today.
The ‘appliance’ attitude is often accompanied by an unwillingness to do a proper risk analysis and apply the organizational changes needed to make the InfoSec structure self-reliant and, where necessary, self-healing;
that is, to institute a proper ISMS, which is often quite a lot of initial effort and then ongoing effort besides. That brings to mind another old quotation:
The biggest problem a security consultant has is getting
managers to perform regular risk assessments. They don’t
want to hear that it’s an ongoing process. The attitude
was “why bother if I can’t just check it once and be
done with it”.
Not just the risk analysis but the risk management too, and treat both as an ongoing cycle. As I say, a proper ISMS is needed – of which ISO27001/2 is a good example – rather than an ‘appliance’ or a piece of OTS software such as those mentioned in the article, which often run in a ‘fire and forget’ mode and are installed by a netadmin or hostadmin who has little to no real, meaningful security understanding.
Security is a process not a product
is quite true but understates the case. “Process” means commitment from the Board and management, which in turn means there is budget to implement the ongoing organizational changes needed as the security profile shifts with changes in technology and threats — as indeed the recent moves to BYOD and ‘Cloud’ have shown — and budget for the risk management processes, the people and the training.
Companies that are not willing to deal with this are going to suffer.
Breaches and hacks may, up to now, have been an embarrassment and an inconvenience: perhaps the cost of sending out notification letters, a short blip in stock value. But consumer awareness is growing, and in
the e-commerce world consumers are coming to expect many basic quality and security baseline features. And that too is an evolving issue. Sites like PayPal and eBay devote a lot of energy not simply to security but to the whole process of evolving security: being aware of evolving threats, methods and vulnerabilities.
But it’s also easy to do it all wrong, to go through the motions with no real results.
We can see that in the way the US Government is dealing with InfoSec and, in doing so, generating the artificial ‘skills gap’ of InfoSec specialists. What they are demanding is low-level operatives, in effect ‘enhanced’ sysadmins and netadmins who are trained in using the appliances and configuring Windows devices and servers. This is ‘tactical’ work. What they are avoiding is the strategic work: addressing organizational and structural issues, doing proper risk analysis and management, the heavy ‘paperwork’ of implementing ISO27000 or ISO31000. One reason is that it is going to be disruptive, a matter of “drag them kicking and screaming out of the 19th century”.
We can point quite clearly to various US government departments, since they are high profile, well publicized in the media and reports, and quite recidivist, but there is no shortage of other organizations, commercial, NGO and governmental, throughout the world that have implemented just enough ‘security’ to say “well, that doesn’t apply to me”. All too often that ‘just enough’ takes the form of appliances and OTS software for otherwise poorly configured Windows systems, run by under-staffed, under-trained people (under-trained because the function is under-budgeted and managed by people who don’t understand Risk Management). And there’s a lot of “Denial” going on.
This is why I like dealing with first- and second-tier banks and the large insurance companies that have been around for a long time. They’ve been doing Risk Analysis and Management in the meat-world for a long time, and segueing that into Cyberspace is no big deal for them. Their main issue is that they have to be a bit un-conservative to deal with rapidly advancing technology.
But as the real world shows, even they aren’t completely immune.
So any organization saying “I’m all right”, “I don’t need to do these things” and “I’m OK with my appliances and OTS software” is deluding itself.
In my very first job we were told, repeatedly told, to document everything and keep our personal journals up to date. Not just with what we did, but the reasoning behind those decisions. This was so that no knowledge about the work, the project, and what had been tried and thought about would be lost if anything happened to us, if, perhaps, we were ‘hit by a bus on the way to work’.
At that point whoever was saying this looked toward a certain office or certain place in the parking lot. One of the Project managers drove a VW bus and was most definitely not a good driver!
So the phrase ‘document everything in case you’re hit by a bus’ entered into the work culture, even after that individual had left.
And for the rest of us it entered into our personal culture and practices.
Oh, and the WHY is very important. How often have you looked at something that seems strange and worried about changing it, in case there was some special reason for it being like that which you did not know of?
Unless things get documented… Heck, a well-meaning ‘kid’ might ‘clean it out’, ignorant of the special reason it was like that!
So here we have what appear to be undocumented controls.
Perhaps they are just controls that were added and someone forgot to mention; perhaps the paperwork for these ‘exceptions’ is filed somewhere else, or is referred to in an easily overlooked footnote, or mentioned in a missing appendix.
It has been pointed out to me that having to document everything, including the reasons for taking one decision rather than another, “slows down work”. Well that’s been said of security, too, hasn’t it? I’ve had this requirement referred to in various unsavoury terms and had those terms associated with me personally for insisting on them. I’ve had people ‘caught out’, doing one thing and saying another.
But I’ve also had the documentation save me from mistakes and rework.
These days, with electronic tools, smartphones, tablets, networking, and things like wikis as shared searchable resources, it’s a lot easier.
Sadly, I still find places where key documents such as the Policy Manuals are really still at the “3-ring binder” state of the art: PDF files in some obscure location, with no mechanism for commenting, feedback or updating.
Up to date and accurate documentation is always a good practice!
And what surprises me is that when I’ve implemented those, I get a ‘deer in the headlights’ reaction from staff and managers much younger than myself. Don’t believe what you read about ‘millennials’ being better able to deal with e-tools than us Greybeards.
So I need to compile a list of ALL assets, information or otherwise,
That leads to tables and chairs and powerbars.
OK so you can’t work without those, but that’s not what I meant.
Physical assets are only relevant in so far as they are part of information processing. You should not start from those; you should start from the information and look at how the business processes make use of it. Don’t confuse your DR/BC plan with your core ISMS statements. ISO Standard 22301 addresses that.
This is, ultimately, about the business processes.
I often explain that Information Security focuses on Information Assets.
Some day, on the corporate balance sheet, there will be an entry
which reads, “Information”; for in most cases the information is
more valuable than the hardware which processes it.
— Adm. Grace Murray Hopper, USN Ret.
Some people see this as a binary absolute: they think there’s no need to assess the risks to the physical assets, or that somehow this is automatically considered when assessing the risk to information.
The thing is, there are differing types of information and differing types of containers for them.
I get criticised occasionally for long and detailed posts that some readers complain treat them like beginners, but sadly, if I don’t, I get comments such as this in reply:
Data Loss is something you prevent; you enforce controls to prevent data
leakage, DLP can be a programme, but , I find very difficult to support
with a policy.
Does one have visions of chasing escaping data over the net with a three-ring binder labelled “Policy”?
Let me try again.
Policy comes first.
Without policy giving direction, purpose and justification, and supplying the basis for measurement, quality and applicability (never mind issues such as configuration), you are working on an ad hoc basis.
On the ISO27000 forum one user gave a long description of his information-gathering process but expressed frustration over what to do with it all, the assets, the threats and so forth, and how to turn it into a risk assessment.
It was easy for the more experienced of us to see what he was missing.
He was missing something very important: a RISK MODEL.
The model determines what you look for and how it is relevant.
In many of the InfoSec forums I subscribe to, people regularly ask the “How long is a piece of string?” question:
How extensive a risk assessment is required?
It’s a perfectly valid question we all have faced, along with the “where do I begin” class of questions.
The ISO-27001 standard lays down some necessities, such as your asset register, but it doesn’t tell you the level of detail necessary. You can choose to say “desktop PCs” as a class without addressing each one, or even each different model. You can say “data centre” without having to enumerate every single component therein.
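To illustrate that choice of granularity, here is a minimal sketch of a class-level asset register. The field names and entries are my own invention; ISO 27001 does not prescribe any schema:

```python
from dataclasses import dataclass

# Illustrative only: ISO 27001 does not prescribe a register format.
@dataclass
class AssetClass:
    name: str        # a class of assets, e.g. "Desktop PCs", not each machine
    owner: str       # who is accountable for the class
    scope_note: str  # what the class is taken to cover

register = [
    AssetClass("Desktop PCs", "IT Operations", "All staff desktops, any model"),
    AssetClass("Data centre", "Facilities", "The data centre as a single unit"),
    AssetClass("Customer records", "Sales Director", "CRM data, all formats"),
]

# The register stays short because the detail lives in the scope notes.
for a in register:
    print(f"{a.name} (owner: {a.owner})")
```

The point is that one row can stand in for thousands of physical items; the risk assessment then works at the class level.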
How do you know WHAT assets are to be included in the ISO-27K Asset Inventory?
This question, and variants of the “What are assets [for ISO27K]?” question, comes up often and has seen much discussion on the various InfoSec forums I subscribe to.
Perhaps some ITIL influence is needed. Or perhaps not, since that might be too reductionist.
The important thing to note here is that the POV of the accountants/book-keepers is not the same as the ISO27K one. To them, an asset is something that was purchased and either depreciates in value, according to the rules of the tax authority you operate under, or appreciates in value (perhaps) according to the market, such as land and buildings.
Here in Canada, computer hardware and software depreciate PDQ under this scheme, so the essential software on which your company depends is deemed worthless by the accountants. Their view is that depreciable assets should be replaced when they reach the end of their accounting life. Your departmental budget may say otherwise.
Many of the ISO27K Assets are things the accountants don’t see: data, processes, relationships, know-how, documentation.
Let us leave aside the poor blog layout, with Dejan’s picture ‘above the fold’ taking up too much screen real estate. In actuality he’s not that ego-driven.
What’s important in this article is the issue of making OBJECTIVES clear, communicating them (i.e. putting them in what ISO27K calls the SoA, the Statement of Applicability) and keeping them up to date.
Dejan Kosutic uses ISO27K to make the point that there are high-level objectives, what might be called strategy, and low-level objectives, call that the tactical or operational level. Differentiating between the two is important; they should not be confused. The high level, the POLICY OBJECTIVES, should be the driver.
Yes, there may be a lot of fiddly bits of technology, and a need for the geeks to operate it, at the lower level. But if you don’t get the lower level right to an adequate degree, you are not meeting the higher objectives.
This kind of question keeps coming up; many people are unclear about the Statement of Applicability in ISO-27000.
The SoA should outline the measures to be taken in order to reduce risks such as those mentioned in Annex A of the standard. These are based on ‘Controls’.
But if you are using closed-source products such as those from Microsoft, are you giving up control? Things like validation checks and integrity controls are ‘internal’.
Well, it’s a bit of a word-play.
- The SoA contains exclusions for controls that are not applicable because the organization doesn’t deal with those problems (e.g. e-commerce).
- The SoA contains exclusions for controls where threats and risks do arise but cannot be helped (e.g. A.12.2 Correct processing in applications) and no measures can be taken to reduce those risks.
For the latter, a record must be present in the risk assessment stating that the risk (even if it is above the minimum accepted risk level) is accepted.
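Those two kinds of entry can be pictured as records. This is only a sketch: the field names and the acceptance-record reference are hypothetical, and the control titles follow the older Annex A wording used above:

```python
# Sketch of SoA entries; field names and record IDs are invented.
soa = [
    {"control": "E-commerce services",
     "applicable": False,
     "justification": "The organization does no e-commerce"},
    {"control": "A.12.2 Correct processing in applications",
     "applicable": True,
     "treatable": False,  # risk arises but no measure can reduce it
     "justification": "Closed-source application; controls are internal",
     "risk_acceptance_record": "RA-0042"},  # hypothetical record reference
]

# Every applicable-but-untreatable control must point at a recorded
# risk acceptance, even if the risk is above the accepted level.
for entry in soa:
    if entry["applicable"] and not entry.get("treatable", True):
        assert "risk_acceptance_record" in entry, entry["control"]
```

The check at the end captures the requirement in the text: an exclusion of the second kind is only defensible if the acceptance is on record.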
The key to the SOA is SCOPE.
Call me a dinosaur (that’s OK, since it’s the weekend and I’m dressed down to work in the garden) but …
On the ISO27000 Forum list, someone asked:
That’s a very ingenious way of looking at it!
One way of formulating the risk statement is from the control
objective mentioned in the standard.
Is there any other way out ?
Ingenious aside, I’d be very careful with an approach like this.
Risks and controls are not, and should not be, 1:1.
What framework would you use to provide for quantitative or qualitative risk analysis at both the micro and macro level? I’m asking about a true risk assessment framework not merely a checklist.
Yes, this is a bit of a META-question. But then it’s Sunday, a day for contemplation.
When does something like these stop being a check-list and become a framework?
COBIT is very clearly a framework, but not for risk analysis, and even its section on risk analysis fits into a business model rather than a technology model.
ISO-27K is arguably more technology (or at least InfoSec) focused than COBIT, but again risk analysis is only part of what it’s about. ISO-27K calls itself a standard, but in reality it’s a framework.
The message that these two frameworks send about risk analysis is
Context is Everything
(You expected me to say that, didn’t you?)
I’m not sure any RA method works at layer 8 or above. We all know that managers can read our reports and recommendations and ignore them. Or perhaps not read them, since being aware of the risk makes them liable.
Ah. Good point.
On LinkedIn there was a thread asking why banks seem to ignore risk analysis … presumably because their doing so has brought us to the international financial crisis we’re in (though I don’t think it’s that simple).
The trouble is that RA is a bit of a ‘hypothetical’ exercise.
The documentation required and/or needed by ISO-2700x is a perennial source of dispute in the various forums I subscribe to.
Of course management has to define matters such as scope and applicability and the policies, but how much of the detail of getting there needs to be recorded? How much of the justification for the decisions?
Yes, you could have reviews and summaries of all meetings and email exchanges ..
But that has nothing to do with the standard or its requirements.
The standard does NOT require a management review meeting.
People keep asking questions like
If the risk equation I use is Impact * Probability, when it comes to calculating the residual risk value do I still need to consider the impact of loss of confidentiality, integrity and availability of the asset afterwards? My understanding is that the probability value may decrease after applying some controls to mitigate the risk, but how does the impact change?
Personally I don’t like the use of the generalization “Impact”. It hides details, and it hides where the control is being applied. Assets are often affected by more than one threat or more than one vulnerability. You really need to recalculate the whole thing over again after the controls have been applied – don’t try for short cuts.
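As a sketch of why the full recalculation matters (the threats, scales and control effects below are all invented for illustration), notice how one control changes the impact for some threats and the probability for others:

```python
# Invented example: recompute each (threat, impact, probability) after
# controls, instead of discounting a single Impact * Probability figure.

risks = [
    # (threat, impact on a 1-5 scale, probability per year)
    ("malware on desktops", 4, 0.6),
    ("disk failure",        3, 0.3),
    ("phishing of staff",   5, 0.5),
]

def apply_controls(threat, impact, prob):
    """Assumed control effects: backups cut impact, AV cuts probability."""
    if threat in ("malware on desktops", "disk failure"):
        impact -= 2        # backups reduce the damage done (assumption)
    if threat == "malware on desktops":
        prob *= 0.5        # AV halves the likelihood (assumption)
    return max(impact, 0), prob

residual = {}
for threat, impact, prob in risks:
    new_impact, new_prob = apply_controls(threat, impact, prob)
    residual[threat] = new_impact * new_prob

# Phishing is untouched by either control, so its risk is unchanged.
```

Both factors moved, and moved differently per threat; a single discounted “Impact” number could not show that.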
I’d further suggest looking at
I discuss this kind of over-simplification at
- Planning means planning for success and for not-success (herdingcats.typepad.com)
Some people seem to be making life difficult for themselves with risk models such as “Impact * Probability” and, as such, have led themselves into all manner of imponderables … since this model hides essential details.
I discuss the CLASSICAL risk equation in my blog
There is a good reason, no, make that MANY good reasons, for separating out the threat, the vulnerability and the asset rather than just using “impact”.
Any asset is going to be affected by many threats and many vulnerabilities.
Any control will almost certainly address many assets and in all likelihood deal with many threats and vulnerabilities.
Any reasonable approach will try to optimise this: make the controls more effective and efficient by having them cover as many assets, threats or vulnerabilities as possible.
As such, the CLASSICAL risk equation can then be viewed as addressing residual risk – the probability AFTER applying the controls.
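To sketch that view (all assets, factors and numbers below are invented for illustration): give each (asset, threat, vulnerability) triple its own factors, and let one control lower the vulnerability factor across every triple it covers:

```python
# Invented example: risk per (asset, threat, vulnerability) triple,
# rather than one opaque "impact" value.

triples = [
    # (asset, threat, vulnerability, asset_value, threat_likelihood, vuln_severity)
    ("web server",  "defacement", "unpatched CMS",  5, 0.4, 0.8),
    ("web server",  "data theft", "unpatched CMS",  5, 0.2, 0.8),
    ("mail server", "data theft", "weak passwords", 4, 0.3, 0.6),
]

def severity_after_patching(vuln, severity):
    # One control (patching) covers every triple with this vulnerability;
    # the residual severity of 0.1 is an assumption for the sketch.
    return 0.1 if vuln == "unpatched CMS" else severity

def risk(value, likelihood, severity):
    return value * likelihood * severity

before = sum(risk(v, l, s) for _, _, _, v, l, s in triples)
after = sum(risk(v, l, severity_after_patching(vu, s))
            for _, _, vu, v, l, s in triples)
```

Because the factors are kept separate, it is visible that one control moved two of the three triples at once, and `after` is the residual risk with the controls in place; a single “impact” figure would hide that coverage.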