So I need to compile a list of ALL assets, information or otherwise,
That leads to tables and chairs and powerbars.
OK so you can't work without those, but that's not what I meant.
Physical assets are only relevant insofar as they are part of information processing. You should not start from those; you should start from the information and look at how the business processes make use of it. And don't confuse your DR/BC plan with your core ISMS statements; ISO 22301 addresses that.
This is, ultimately, about the business processes.
How do you know WHAT assets are to be included in the ISO-27K Asset Inventory?
This question, and variants such as "What are assets [for ISO27K]?", come up often and have seen much discussion on the various InfoSec forums I subscribe to.
Perhaps some ITIL influence is needed. Or perhaps not, since that might be too reductionist.
The important thing to note here is that the POV of the accountants/book-keepers is not the same as the ISO27K one. To them, an asset is something that was purchased and either depreciates in value, according to the rules of the tax authority you operate under, or appreciates in value (perhaps) according to the market, such as land and buildings.
Here in Canada, computer hardware and software depreciate PDQ under this scheme, so that the essential software on which your company depends is deemed worthless by the accountants. Their view is that depreciable assets should be replaced when they reach the end of their accounting life. Your departmental budget may say otherwise.
Many of the ISO27K Assets are things the accountants don't see: data, processes, relationships, know-how, documentation.
Like many ways of presenting facts, not least of all about risk, reducing complex and multifaceted information to a single figure does a disservice to those affected. The classical risk equation is another example of this: summing many hundreds of fluctuating variables to one figure.
Perhaps the saddest expression of this kind of approach to numerology is the stock market. We accept that the bulk of the economy is based on small companies, but the stock exchanges have their "Top 100" or "Top 50", which are all large companies. Perhaps they do have an effect on the economy the same way that a herd of elephants might, but the biomass of this planet is mostly made up, like our economy, of small things.
The financial loss from internet fraud is non-trivial but not exactly bleeding us to death. Life goes on anyway and we work around it. But it adds up. Extrapolated over a couple of hundred years it would have the same financial value as a World Killer Asteroid Impact that wiped out all of human civilization. (And most of human life.)
A ridiculously dramatic example, yes, but this kind of reduction to a one-dimensional scale such as "dollar value" leads to such absurdities. Judges in court cases often put dollar values on human life. What value would you put on your child's?
We know, based on past statistics, the probability that a US president will be assassinated. (Four in 200+ years; more if you allow for failed attempts). With that probability we can calculate the ALE and hence what the presidential guard cost should be capped at.
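The arithmetic behind that (deliberately absurd) calculation can be sketched with the classic ALE formula, ALE = ARO × SLE. The dollar figure below is an invented placeholder, not an estimate of anything real:

```python
# Annualized Loss Expectancy: ALE = ARO x SLE, the standard
# quantitative-risk formula. All figures here are illustrative
# assumptions, not real data.

events = 4            # assassinations in the historical record
years = 235           # roughly "200+ years" of US presidencies
aro = events / years  # Annualized Rate of Occurrence (events per year)

sle = 100_000_000     # assumed Single Loss Expectancy in dollars (invented)

ale = aro * sle       # the "cap" this logic would put on guard spending
print(f"ARO = {aro:.4f} events/year, ALE = ${ale:,.0f}/year")
```

The absurdity, of course, is in pretending the SLE of a president's life is a knowable dollar figure in the first place.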
Sometimes I wonder why we bother ...
A colleague in InfoSec made the following observation:
My point - RA is a nice to have, but it is superfluous. It looks nice
but does NOTHING without the bases being covered. what we need
is a baseline that everyone accepts as necessary (call it the house
odds if you like...)
Most of us in the profession have met the case where a Risk Analysis would be nice to have but is superfluous, because the baseline controls that were needed were obvious and 'generally accepted', which makes me wonder why any of us support the fallacy of RA.
It gets back to the thing about the Hollywood effect that is Pen Testing. Quite apart from the many downsides it has from a business POV, it is non-logical in the same way that RA is non-logical.
What we had drilled into us when I worked in Internal Audit and when I was preparing for the CISA exam was the following:

RISK is the
PROBABILITY that a
THREAT will exploit a
VULNERABILITY to cause harm to an
ASSET.

R = f(T, V, A)

Why do you think they are called "TVAs"?
More sensibly, the risk is the sum over all the various threat-vulnerability-asset combinations.
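That summation can be sketched as a toy calculation. The TVA triples, probabilities and impact figures below are invented placeholders, not a real methodology:

```python
# Aggregate risk as a sum over threat/vulnerability/asset (TVA)
# combinations: R = sum of P(threat exploits vulnerability) x impact(asset).
# Every entry here is an invented example.

tva_register = [
    # (threat,         vulnerability,       annual probability, impact in $)
    ("phishing",       "untrained staff",   0.60,  50_000),
    ("SQL injection",  "unpatched webapp",  0.20, 250_000),
    ("theft",          "unlocked office",   0.05,  10_000),
]

total_risk = sum(p * impact for _threat, _vuln, p, impact in tva_register)
print(f"Aggregate annual risk: ${total_risk:,.0f}")
```

Which illustrates the earlier complaint: hundreds of fluctuating, guessed-at terms collapse into one tidy, misleading number.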
This isn't just me sounding off. Richard Bejtlich says much the same thing and defends it from various sources. I can't do better than he has.
It seems that to make better cybersecurity-related decisions, a senior FBI official recommends considering a simple algebraic equation rather than solely focusing on threat vectors and actors.
To be honest, I sometimes wonder why people obsess about threat vectors in the first place. There seems to be a belief that the more threats you face, the higher your risk, regardless of your controls and regardless of the classification of the threats.
Look at it this way: what do you have control over?
Why do you think that people like auditors refer to the protective and detective mechanisms as "controls"?
Yes, if you're a 600,000 lb gorilla like Microsoft you can take down one - insignificant - botnet, but the rest of us don't have control over the threat vectors and threat actors.
What do we have control over?
Vulnerabilities, to some extent. We can patch; we can choose to run alternative software; we can mask off access by the threats to the vulnerabilities. We can do things to reduce the "vulnerability surface", such as partitioning our networks, restricting access, and not exposing more than is absolutely necessary to the Internet (why oh why is your SqlServer visible to the net? Why isn't it behind the web server, which in turn is behind a firewall?).
Assets, to a large extent. Document them. Identify who should be using them and implement IAM.
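A minimal sketch of that idea: document each asset and who is authorized to use it, then check access against the register. The asset names, owners and users below are invented examples, and real IAM is of course far richer than a dictionary lookup:

```python
# A toy asset register: each documented asset records an owner and the
# set of authorized users. Checking requests against it is the bare-bones
# version of the "document them and implement IAM" idea.
# All names here are invented.

asset_register = {
    "payroll-db":   {"owner": "finance", "authorized": {"alice", "bob"}},
    "build-server": {"owner": "dev-ops", "authorized": {"carol"}},
}

def access_allowed(user: str, asset: str) -> bool:
    """Allow access only if the asset is documented and the user is authorized."""
    record = asset_register.get(asset)
    return record is not None and user in record["authorized"]

print(access_allowed("alice", "payroll-db"))    # documented, authorized
print(access_allowed("mallory", "payroll-db"))  # documented, not authorized
print(access_allowed("alice", "shadow-IT-box")) # undocumented asset: deny
```

Note the last case: an asset that never made it into the inventory can't have sensible controls applied to it, which is the whole point of the inventory exercise.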
And very important: we have control over RESPONSE.
Did the FBI equation mention response? I suppose you could say that 'awareness' is a part of a response package. Personally I think that response is a very, very important part of this equation, and it's the one you have MOST control over.
And response is - or should be - totally independent of the threats
since it focuses on preserving and recovering the assets.
I think they have it very, very confused and this isn't the most productive, most effective way of going about it. But then the FBI's view of policing is to go after the criminals, and if you consider the criminals to be the threat then that makes sense.
But let's face it, most corporations are not in the business of policing. Neither are home users.
Which is why I focus on the issue of "what you have control over".
- Optimist: The glass is half full
- Pessimist: The glass is half empty
- Cost Accountant: The vessel is too large for its purpose
- Engineer: There is a 100% safety margin.
"Policy: All information stored electronically has value and shall be protected commensurate with its value."

Corollary: "If data has no value, it should not be using storage space."
The trouble with some people is that they make deceptively reasonable comments that don't stand up under critical analysis.
With an ailing economy and a whole lot of cancelled contracts resulting from
that poor economy, a pandemic is a major threat to our most important
asset, people, and it appears as though that vulnerability may have been
activated. It's time to dust off the BCP plan and update it with a pandemic section.
If it takes a pandemic to motivate you to create or review a BCP then
something is seriously wrong, and it has nothing to do with the pandemic.
As one manager said to me a long time ago, "show me the numbers".
The number of confirmed cases rose Monday to 50 in the U.S., the result
of further testing at a New York City school. The WHO has confirmed 26
cases in Mexico, six in Canada and one in Spain. All of the Canadian
cases were mild, and the people have recovered.
The Mexican government suspects the virus was behind at least 149 deaths
in Mexico, the epicentre of the outbreak, with hundreds more cases suspected.
I'm sure just about any doctor - or the 'Net - can supply us with figures on the cases and deaths from 'regular' flu world-wide, as well as the named versions.
Les Bell, another ex-pat Brit who lives in Australia, was discussing the importance of training and reinforcement in such matters as DR/BCP. Les is also a pilot, and so many of his analogies and examples have to do with piloting and aircraft.
Part of our discussion has a much wider scope.
Les had said:
"People under extreme stress may behave unpredictably and have limited capacity for rational thought"
This is the basis of much of pilot training, particularly in simulators, where procedures that are too dangerous to be attempted in a real aircraft can be repeated until drills are automatic.
Don't quote me on this, but I seem to recall reading in an aviation safety-related article that in an emergency, something like 50% of people lose it to the extent that they are completely unable to cope, 25% are capable of functioning with some degree of impairment, and 25% of people are able to complete required tasks correctly. Training by means of drills and rehearsals is able to correct that situation to a considerable extent.
Therefore in BCP/DRP planning, it's important to - as far as possible - simulate an emergency, rather than just story-boarding it, or doing a whiteboard walkthrough. Hence the requirement for fire drills, evacuation drills and the like; repetition conditions the mind to perform the task correctly under stressful conditions.
Most of us don't get the chance to do a full interruption test for our DRP, but the closer we can get, the better.
Training - drill and reinforcement so that you can carry out the actions automatically even when extreme stress has completely blanked out cognitive functions - is an important part of military "boot camp" training, and one reason I find it so comical that CISSP course training gets called "boot camp".
Les is quite right. For a variety of reasons most people "lose it" under extreme stress. This is why military heroes, people who can hang in there, think clearly and make critical decisions, are held in such esteem. Similarly test pilots (and those test pilots who became the early astronauts). Having lightning-fast reactions (racing drivers) and being in top physical condition helps, but there is something more.
Some authorities look to the old American 'gunslingers' and speculate about how the adrenaline rush in such situations is handled by the body and the brain. Typically all that adrenaline pumps up the muscles for "fight or flight", and in such panic or near-panic situations rationality is not the key issue. But if we shift from the evolutionary context to the 'gunslinger', standing still with all that adrenaline means a lot of 'shakes'. Being able to stay calm and not have the shakes leads to being a successful 'gunslinger'. Evolution in action?
There are other forms of stress as well. I've seen sysadmins who have been up for more than 30 hours trying in vain to solve a problem that to me, well rested, is simple and obvious.
The lesson here is two-fold. The first is the point that Les makes. Train and reinforce.
The second is that when the disaster does strike be aware that the stress will load up on fatigue and that stressed and fatigued people do not make good decisions. Rest, shifts, alternates, standard plans and scenarios that can work to relieve the stress are important.
In this article at TechRepublic, Tom Olzak tries to address the issue of insider threat by talking about why your employees might 'go rogue'. I think he completely misses the point by discussing the motivation for spies and convicted traitors. This is a different class of people from those who commit financial fraud and take revenge on employers who they think have wronged them.
Let's be fair: how many of these characteristics would have applied to people like Nick Leeson, Jerome Kerviel, or the other rogue traders such as Yasuo Hamanaka at Sumitomo Corporation of Japan in 1998, Toshihide Iguchi at Daiwa Bank, John Rusnak, a former currency trader at Allied Irish Bank's Allfirst subsidiary, in 2002, Matt Piper of Morgan Stanley, Anthony Elgindy, Thom Calandra and Brian Hunter - never mind the rogue executives at WorldCom, Enron and Parmalat, and the many other corporate and accounting scandals that were motivated by greed?
The list on the blackboard in the cartoon doesn't, I think, apply to the 'rogue traders'. It applies only somewhat to the rogue executives but it does apply more comprehensively to the spies and traitors like Ames & Early.
However, Donn Parker's point that (many) white-collar criminals are led into crime by "intense personal problems" makes more sense, and also applies to people such as Brian Molony at the CIBC. So I don't think this is a very good article. Donn's observation is more general and more useful than Tom's.
More to the point, Tom's article fails to address issues such as senior management ignoring the business controls that are in place because the people concerned were making a profit (aka greed in high places); it doesn't address the value of internal resources where staff can get advice about pressing personal problems; and it doesn't deal with possible channels for ethics complaints and whistle-blowing. So it fails to live up to its title: there is nothing here about prevention, only detection, and a very limited form of detection at that.