In theory, consumers and businesses could punish Symantec for these
oversights by contracting with other security vendors. In practice, there’s
no guarantee that products from other vendors are well-secured, either
— and there is no clear way to determine how secure a given security
product actually is.
Too many firms take an “appliance” or “product” (aka “technology”) approach to security. There’s a saying that’s been attributed to many security specialists over the years but remains quite true:
If you think technology can solve your security problems,
then you don’t understand the problems and you don’t
understand the technology.
So I need to compile a list of ALL assets, information or otherwise?
That leads to tables and chairs and power bars.
OK so you can’t work without those, but that’s not what I meant.
Physical assets are only relevant in so far as they are part of information processing. You should not start from those; you should start from the information and look at how the business processes make use of it. Don’t confuse your DR/BC plan with your core ISMS statements; ISO 22301 addresses that.
On the ISO27k forum one user gave a long description of his information-gathering process but expressed frustration over what to do with it all, the assets, the threats and so forth, and how to turn it into a risk assessment.
It was easy for the more experienced of us to see what he was missing.
He was missing something very important — a RISK MODEL
The model determines what you look for and how it is relevant.
It’s a perfectly valid question we all have faced, along with the “where do I begin” class of questions.
The ISO-27001 standard lays down some necessities, such as your asset register, but it doesn’t prescribe the level of detail. You can choose to say “desktop PCs” as a class without addressing each machine, or even each model. You can say “data centre” without having to enumerate every single component therein.
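To make the class-level idea concrete, here is a minimal sketch of what such a register might look like. The field names and entries are my own illustration, not anything mandated by the standard.

```python
# A minimal sketch of a class-level asset register: assets are recorded
# as classes ("desktop PCs", "data centre") rather than as individual items.
# All field names and entries here are illustrative, not prescribed by ISO 27001.

asset_register = [
    {"asset_class": "desktop PCs", "owner": "IT Operations",
     "information_role": "processing", "classification": "internal"},
    {"asset_class": "data centre", "owner": "Facilities",
     "information_role": "storage/processing", "classification": "critical"},
    {"asset_class": "customer database", "owner": "Sales Director",
     "information_role": "information asset", "classification": "confidential"},
]

# One register entry per class keeps the register reviewable; instance-level
# detail (serial numbers, models) belongs in an inventory or CMDB instead.
for entry in asset_register:
    print(f'{entry["asset_class"]}: owner={entry["owner"]}, '
          f'classification={entry["classification"]}')
```

The point of the sketch is the granularity choice: three entries cover what an instance-level register would need thousands of lines to say.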
There are many holes in this, but I think they miss some important points.
The first is leaving it to IT HR to recruit for InfoSec.
That is because many people think InfoSec is an IT function as opposed to an organizational function. This goes in cycles: 20 years ago there was the debate “Should InfoSec report to IT?” The overall decision was no; InfoSec might need to ‘pull the plug’ on IT to protect the organization.
Second there is the vast amount of technology claiming to do InfoSec.
It is all network (and hence IT) as opposed to business fulfilment. This has now spread to “Governance”. You can buy governance software. What does this do for the ethical outlook of the executive, the board and management? How is Governance tied to risk management and accountability and visibility by this software?
Technology won’t solve your problems when technology *is* your problem.
InfoSec is about protecting the organization’s information assets: those assets can be people, processes or information. Yes, technology may support that, just as technology puts a roof over your head (physical security) and gives you somewhere to store the information. Once this was typewriters, hand-cranked calculators and filing cabinets, and copying was done with carbon paper. The technology may have changed but most of the fundamental principles have not. In particular the ones to do with attitudes and people are the same now as they were 50 or 100 years ago.
This kind of question keeps coming up; many people are unclear about the Statement of Applicability in ISO-27001.
The SoA should outline the measures to be taken in order to reduce risks such as those mentioned in Annex A of the standard. These are based on ‘Controls’.
But if you are using closed-source products such as those from Microsoft, are you giving up control? Things like validation checks and integrity controls are ‘internal’.
Well, it’s a bit of a word-play.
The SoA contains exclusions for controls that are not applicable because the organization doesn’t face those problems (e.g. e-commerce).
The SoA also contains exclusions for controls where threats exist (and risks arise) but cannot be helped (e.g. A.12.2 Correct processing in applications) and no measures can be taken to reduce those risks.
In that case a record must be present in the risk assessment, stating that the risk (even if it is above the minimum accepted risk level) is accepted.
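A sketch of how those three cases might be recorded, with invented field names (the standard prescribes no particular format):

```python
from dataclasses import dataclass

# Illustrative shape of a Statement of Applicability record. The field names
# are assumptions, not mandated by ISO 27001: each Annex A control is either
# applied, excluded as not applicable, or excluded with the residual risk
# explicitly accepted and recorded in the risk assessment.

@dataclass
class SoAEntry:
    control_id: str
    title: str
    applicable: bool
    justification: str
    risk_accepted: bool = False   # True only for the "cannot be helped" case

soa = [
    SoAEntry("A.10.9", "Electronic commerce services", applicable=False,
             justification="Organization does no e-commerce"),
    SoAEntry("A.12.2", "Correct processing in applications", applicable=False,
             justification="No feasible measure reduces this risk",
             risk_accepted=True),
]

# Every excluded-but-risky control must carry an acceptance record.
for e in soa:
    if not e.applicable and e.risk_accepted:
        print(f"{e.control_id}: risk formally accepted - record in RA")
```

The `risk_accepted` flag is the word-play made explicit: both entries are “exclusions”, but only the second one obliges you to write an acceptance into the risk assessment.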
What framework would you use to provide for quantitative or qualitative risk analysis at both the micro and macro level? I’m asking about a true risk assessment framework not merely a checklist.
Yes, this is a bit of a META-question. But then it’s Sunday, a day for contemplation.
When does something like this stop being a checklist and become a framework?
COBIT is very clearly a framework, but not for risk analysis and even the section on risk analysis fits in to a business model rather than a technology model.
ISO-27K is arguably more technology (or at least InfoSec) focused than COBIT, but again risk analysis is only part of what it’s about. ISO-27K calls itself a standard but in reality it’s a framework.
The message that these two frameworks send about risk analysis is
Context is Everything
(You expected me to say that, didn’t you?)
I’m not sure any RA method works at layer 8 or above. We all know that managers can read our reports and recommendations and ignore them. Or perhaps not read them, since being aware of the risk makes them liable.
Ah. Good point.
On LinkedIn there was a thread asking why banks seem to ignore risk analysis … presumably because their doing so has brought us to the international financial crisis we’re in (though I don’t think it’s that simple).
Like many ways of presenting facts, not least about risk, reducing complex and multifaceted information to a single figure does a disservice to those affected. The classical risk equation is another example of this: summing many hundreds of fluctuating variables into one figure.
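A toy calculation makes the point: two utterly different risk profiles can collapse to the same single figure. The numbers are invented.

```python
# Toy illustration (numbers invented): two very different risk profiles
# collapse to the same single "risk" figure once you sum probability x loss.

profile_a = [(0.10, 1_000)] * 10          # ten frequent, small losses
profile_b = [(0.0001, 10_000_000)]        # one rare, catastrophic loss

ale = lambda profile: sum(p * loss for p, loss in profile)

print(round(ale(profile_a)))   # 1000
print(round(ale(profile_b)))   # 1000 - identical figure, wildly different reality
```

Anyone shown only the final figure cannot tell whether they face a steady drip of small incidents or a rare catastrophe, yet those demand completely different responses.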
Perhaps the saddest expression of this kind of approach to numerology is the stock market. We accept that the bulk of the economy is based on small companies, but the stock exchanges have their “Top 100” or “Top 50”, which are all large companies. Perhaps they do have an effect on the economy the same way that a herd of elephants might, but the biomass of this planet is mostly made up, like our economy, of small things.
The financial loss of internet fraud is non-trivial but not exactly bleeding us to death. Life goes on anyway and we work around it. But it adds up. Extrapolated over a couple of hundred years it would have the same financial value as a World Killer Asteroid Impact that wiped out all of human civilization. (And most of human life.)
A ridiculously dramatic example, yes, but this kind of reduction to a one-dimensional scale such as “dollar value” leads to such absurdities. Judges in court cases often put dollar values on human life. What value would you put on your child’s life?
We know, based on past statistics, the probability that a US president will be assassinated (four in 200+ years; more if you allow for failed attempts). With that probability we can calculate the ALE and hence what the presidential guard’s budget should be capped at.
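Working that absurdity through with the classic ALE = ARO × SLE formula (the dollar “value” of the loss is, of course, an invented figure):

```python
# Working the absurd example through. The event counts are the rough ones
# from the text; the single-loss "value" is entirely invented.

events = 4
years = 230
aro = events / years            # annualized rate of occurrence, ~0.017

sle = 100_000_000               # invented dollar "value" of the loss

ale = aro * sle                 # classic Annualized Loss Expectancy
print(f"ARO = {aro:.4f}, ALE = ${ale:,.0f} per year")
```

By this logic the guard’s budget should be capped at under two million dollars a year, which is exactly the kind of conclusion that shows why the one-figure reduction breaks down for rare, catastrophic events.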
Skip over his ranting about the definition of “hackers”.
This is the meat:
We wrote the OSSTMM 3 to address these things. We knew that penetration
testing the way it continued to be marginalized would eventually hurt
security. Yes, the OSSTMM isn’t practical for some because it doesn’t
match the commercial industry security of today. But that’s because the security model today is crazy! And you don’t test crazy with tests
designed to prove crazy. So any penetration testing standard, baseline,
framework, or methodology that focuses on finding and exploiting
vulnerabilities is only perpetuating the one-trick pony problem.
Furthermore it’s also perpetuating security through patchity, a process
that’s so labor intensive to assure homeostasis that nobody could
maintain it indefinitely which is the exact definition of a loser in the
cat and mouse game. So you can be sure it also doesn’t scale at all with
complexity or size.
I’ve been outspoken against Pen Testing for many years, to my clients, at conferences and in my blog. I’m sure I’ve upset many people, but I do believe that the model plays up to the Hollywood idea of an Uberhacker, produces a whack-a-mole attitude and is an example of avoidance behaviour: avoiding proper testing and risk management such as incident response and good facilities management.
I’ve seen too many “pen testers” and demos of pen testing that are just plain … STUPID. Unprofessional, unreasonable and pandering to the ignorance of managers.
In the long run the “drama-response” of the classical pen-test approach is unproductive. It teaches management the wrong thing – to respond to drama rather than to set up a good system of governance based on policy, professional staffing, adequate funding and operations based on accepted good principles such as change management.
And worse, it:
- shows how little faith your management have in the professional capabilities of their own staff, who are the people who should know the system best, and of the auditors, who are trained not only in assessing the system but in assessing the business impact of the risks associated with a vulnerability;
- has no guarantees about what collateral damage the outsider had to do to gain root;
- says nothing about things that are of more importance than any vulnerability, such as your Incident Response procedures;
- indicates that your management doesn’t understand or make use of a proper development-test-deployment life-cycle.
Yes, classical hacker-driven pen testing is more dramatic, in the same way that Hollywood movies are more dramatic. And about as realistic!
A colleague in InfoSec made the following observation:
My point – RA is a nice to have, but it is superfluous. It looks nice
but does NOTHING without the bases being covered. What we need
is a baseline that everyone accepts as necessary (call it the house
odds if you like…)
Most of us in the profession have met the case where a Risk Analysis would be nice to have but is superfluous because the baseline controls that were needed were obvious and ‘generally accepted’, which makes me wonder why any of us support the fallacy of RA.
On one list I subscribe to, I saw this outrageous statement:
ISO 27001 requires that you take account of all the relevant threats
(and vulnerabilities) to every asset – that means that you have to
consider whether every threat from your list is related to each of …
I certainly hope not!
Unless you have a rule as to where to stop, those lists – the vectors that you are going to multiply – are going to become indefinitely large, if not infinite. It’s a problem in set theory to do with enumerability.
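A quick illustration of the blow-up, with invented list sizes:

```python
# Toy counts (invented): even modest lists explode once you cross-multiply
# threats x vulnerabilities x assets without a stopping rule.

threats = [f"threat_{i}" for i in range(50)]
vulns   = [f"vuln_{i}" for i in range(40)]
assets  = [f"asset_{i}" for i in range(300)]

combinations_to_assess = len(threats) * len(vulns) * len(assets)
print(combinations_to_assess)   # 600000 combinations before scoring a single one
```

And that is with three short lists; every list you refine, and every axis you add (vectors, impacts, scenarios), multiplies the total again.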
I never like to see the term ‘impact’.
It’s not a metric.
I discuss how length, temperature, weight, are metrics whereas speed, acceleration, entropy are derived values. In the same sense, ‘impact’ is a derived value – “the cost of the harm to an asset”. The value of an asset can be treated as a primary metric, but how much it is “impacted” is a derived value.
This is the same kind of sloppy thinking, the same failure to identify tangible metrics, as we see when people treat ‘risk’ as if it were something tangible, never mind a metric!
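To make the distinction concrete, here is a tiny sketch (figures invented): the asset’s value is the primary metric, and the ‘impact’ is derived from it.

```python
# Sketch of the distinction: an asset's value can be measured directly;
# "impact" is derived from it, the way speed is derived from length and time.
# All figures are invented for illustration.

asset_value = 250_000          # primary metric: what the asset is worth
degradation = 0.30             # fraction of the asset's value harmed

impact = asset_value * degradation   # derived value, not a measurement
print(round(impact))
```

The derivation is trivial, but that is exactly the point: ‘impact’ only exists once you have measured something else and chosen a formula, so treating it as a primary metric hides both the measurement and the choice.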
I note the line that so many of us in the InfoSec business have encountered and complained about …
As we’ve seen during the last few years, “risk” has turned out to be a dead end too. The numbers mean nothing. Even if you could somehow measure risk, it’s easy enough for managers to accept a higher level of risk than the security manager.
But so many ‘authorities’ – ISO-2700x, ISACA’s COBIT, ValIT and RiskIT as well as its Professional Practices – all focus on Risk Analysis.
We’ve recently seen mention of NIST 800-30.
There on page 9 is a nine-step (why not 12-step?) program for what they call “Risk Assessment”. Actually it isn’t; it involves controls and results. It makes it look sooooo simple! But as many practitioners have pointed out, in many ways it’s not like that in reality. Many of us question whether it’s doable.
There are numerous assumptions and estimations in the risk
assessment process, so all calculated values have quite wide margins
of error. Worse still, there are almost certainly risks or impacts
that we have failed to recognise or assess, in other words we need to
allow for contingency.
Oh, it’s worse than that!
The problem is that the potential perpetrators are the ones that determine “the most significant risks” of which you speak, in both frequency (when they decide to strike) and impact (how much damage they will do and what they will do with the results of their attacks), not the person performing the risk analysis.
We are debating how to value an asset: book value, replacement value, or the value of the process of using it. Well, that doesn’t matter; it’s the value to the perpetrator of the attack that counts. What you value and defend might be of no interest to him (or her). Obtaining the desired asset may result in collateral damage.
So long as you focus on a Risk Analysis model rather than a comprehensive plan of diligence and security management, you are going to get caught out by these false assumptions.
Face it: the Risk Analysis approach means you have no idea who and where the potential perpetrators are, rational or irrational; when and how they may strike (with a tank, an army, or with false data entry).
But act and calculate as if you do.
You have no idea of the perpetrator’s motives, resources or timing, but the Risk Analysis approach presumes that you do.
I’m sorry, this doesn’t make sense, and hence arguing about how to calculate the value of an asset doesn’t make sense in this context. It’s like arguing over how many angels can dance on a pinhead when there’s war and famine going on outside.