Soccer Goal Security – Fair and unfair analysis

http://taosecurity.blogspot.com/2005/08/soccer-goal-security-i-found-this-ad.html

In recent discussion on various forums, and elsewhere in this blog, I’ve raised the point that the way attackers value things and the way defenders value things are not the same; the attacker’s perception of values such as business assets, processes and so forth can be very different from yours. As an extreme example, you may be defending the network and IT assets quite capably while the executives of the company are gambling and snorting away the company’s bank account. I often point to Enron as a poster-boy here – would exemplary IT security have helped?

And this is what is wrong – one of the many things wrong – with relying too heavily on the model of the classical risk equation as a basis for risk analysis. It’s not that the risk equation is wrong; it’s that WE DON’T KNOW.

We do know the value to us – on the inside, from our point of view.
We do not know how the attacker viewed things.
Any equation will suffice if we accept the guesswork of the inputs.
Or as the philosopher Nietzsche said, “Any lie will suffice provided everyone believes in it”.
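
For clarity, by “the classical risk equation” I mean the sort of formulation taught on most InfoSec courses; the CISSP version runs roughly:

    SLE = Asset Value × Exposure Factor    (Single Loss Expectancy)
    ALE = SLE × ARO                        (Annualized Loss Expectancy)

Every term on the right-hand side is an estimate made from the defender’s point of view; the attacker gets no say in any of them.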

At this level I can see the point of any form of RA, ROI or what have you, if its objective is to present a case to management to get the funding to do the security. I don’t think that’s an ethical or honest approach, but I can imagine that in some organizations, ones where FUD often works, it may be necessary. But if the security practitioners who make this case start believing their own lies, then things are in a terrible state.

This isn’t quite the point that Richard Bejtlich is making in this particular blog article, but in other postings he points out that the classical RA methods need more rigour and a more scientific method of justifying their inputs and relationships. The approach under discussion is called FAIR, and a great deal of it rests on the claim that “Risk Assessment is not Guesswork”.
I’m sorry to say that I have to agree with Richard’s analysis of the ‘simple scenario’ which he comments on liberally.
Richard repeatedly brings up the question of how the ‘figures’ and ‘estimates’ and situations are arrived at – his “Says Who?” questions. He also questions the overall absence of hard data and precision.

In one sense that’s fair and in another sense it’s not. This kind of analysis intrinsically lends itself to speculation about situations where there is no data, where all the inputs are guesswork. GIGO.

Dwight D. Eisenhower is supposed to have said “In preparing for battle I have always found that plans are useless, but planning is indispensable.” No doubt the same applies to RA, but it seems to me a ponderous way to begin. There’s another military adage attributed to Robert Heinlein: “Get your first shot off fast. If you miss, it will throw off the other guy’s aim, allowing you to make your second shot count.” From a security POV I take this to mean one should get some protection in there – what others refer to as “Baseline” and “Diligence” – while others are still doing the risk analysis.

One good and very powerful aspect of RA is often abused or completely misused: the “Identification of Assets”. Let’s get one thing clear: the equipment is not an asset.

In his marvelous 1992 novel “Snow Crash”, Neal Stephenson describes a franchising system and makes reference to the “three ring manual”. This manual is the set of operating procedures for the franchise – who does what and how, down to the smallest detail. I mention this in contrast to, for example, some of the businesses that failed after 9/11. These businesses did not have any ‘plant’ – desks, computers, software, even data – that could not be replaced. They failed because their real assets were not documented: the business processes existed solely “in the heads” of the people carrying them out.

The real assets of a company are not the COTS components. This is a mistake that technical people make. The ex-IBM consultant, Gerry Weinberg, the guy who came up with the term “egoless programming“, also pointed out that people with strong technical backgrounds can convert any task into a technical task, thus avoiding work they don’t want to do. Once upon a time I excelled in the technical side of things, but I found that limited my ability to influence change with management.

The business is what the business does. The tools are important, and there may be special proprietary tools (be they custom machine tools or software applications). But unless the processes for using them are documented, having them as ‘physical assets’ is of no use.
So yes, identifying assets is important.

But does your company know – and value – who is key to its operations? Is what that person does documented, or could it be documented, so that it could be done by someone else?

I recall one ‘audit’ I carried out where the machine room operator explained to me what all the equipment was for, how the input tapes were processed and how the end of day reports were generated. After she finished I asked if there was a check-off list for each day, weekends, month end and so on, or a manual detailing the steps she had just explained. She said there wasn’t. So I asked her how she knew what to do. She told me she’d only been there a week – no ‘month end’ yet – but the person who held the job before her had come in one afternoon to explain what had to be done.

I don’t think it takes a lot to identify that the highest risk here has nothing to do with firewalls or patches or IDS.

So we get back to the ‘soccer goal’ picture in the article by Richard Bejtlich that I started with. He puts it in a very straightforward manner – defending against the wrong risks, no doubt because all the suppositions about the attacker’s motivation and methods are incorrect and the assets have not been properly identified.

Even with some way of correctly identifying all of the above, getting meaningful input from subject matter experts and so on, I still see this as a lot of detailed and tedious work.

Which is why I prefer to think in terms of ‘effect’ and the effect of failure.

Let’s look at that soccer match again. Stopping the opposing side scoring goals is great, but that’s not the business. The business is in getting fans to pay to come into the stadium. If you have a winning team, that’s great, but stopping goals isn’t the direct cause of revenue. In fact scoring goals – winning the game by scoring more goals than the other team – isn’t always a formula for business success. The Toronto Maple Leafs, for example, sell out every home game despite their less-than-awesome record over the last few decades. Look at the history of the Green Bay Packers and the Detroit Lions, a rivalry that has spanned 75 years and 150 games and had some of the most memorable moments in the history of professional football – in 1962 it was the Lions who handed the otherwise unbeaten Packers their only defeat of the season. Fans turn up for the entertainment, not the score, and the same holds true for soccer in the countries where it is the national sport. A winning season is great: it makes the fans happy and offers many other opportunities for bringing in revenue. Great players also bring in the fans, but great players don’t always mean winning games – the Knicks have fielded some of the biggest names in the NBA, yet have only placed first in the Eastern Conference a few times and keep losing in the play-offs.
And if the players are assets because they bring in the fans, let’s not forget, they also get traded.

All in all I’m unhappy with everything about the methods of Risk Analysis that I read. They seem speculative and prone to a lot of suppositions. At best they pander to the belief that management needs numbers, figures, dollar values on which to base decisions.

Gerry Weinberg also talks of the “Rutabaga Rule”. The rutabagas take up storefront space at the grocer’s and don’t sell, so get rid of them; then deal with whatever comes next. And so with security: deal with the known stuff first, just as you would put a lock on your front door. When you’ve dealt with all the ‘baseline’ issues for your industry or similar environments, simplified your processes (because complexity leads to complications and errors), applied the Deming or Shewhart cycle (Plan, Do, Check, Act) a few times, made and tested plans for response and recovery from failure – regardless of the threat or vulnerability – and learnt where your real problems with supporting the business processes are, then and only then would I think about the “by the book” RA.

Why?
Because you will be able to deliver effective (and measurable) results faster than going through the RA process.

FMEA as a risk assessment methodology

While this point arose from a discussion on the ISO-27001 mailing list, in other InfoSec/Audit forums I’m known as a strong proponent of Failure Mode Effect Analysis (FMEA), to the point where people naturally associate it with me. However it is clear from my 20+ years in InfoSec in a number of countries (including the UK, where ‘7799 and 2700x come from) that the FMEA approach is not an accepted method of ‘risk assessment’ and is not taught (or examined for) as part of InfoSec, for example as part of the CISSP.
I learnt FMEA (and other techniques we now bundle under 6-sigma, ITIL and so forth, before they were categorized and labelled) in engineering – physical, electrical and aviation. In general, I’d say that InfoSec has a lot to learn from other engineering professions about managing threats, vulnerabilities and failures, and what actually constitutes “risk”. For a start, we have too much of a techie-geek outlook and we are not well educated in statistical methods.
Just giving numbers is meaningless. I will not trust any ‘graph’ that does not have ‘error bars’, does not document the sample size against the total population, and does not show the variance, for example against a random population. These tests are easy to perform, and I’m disappointed that so many numerically justified ‘risk analysis’ models are really just a pile of spreadsheet mumbo-jumbo with no discipline of process behind them.
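
To show what I mean, here is a minimal sketch in Python of the kind of sanity check I’d want behind any such graph – the survey numbers are invented for illustration:

    import math

    # Hypothetical survey: 18 of 60 firms sampled reported an incident,
    # drawn from a population of roughly 5000 firms.
    n, hits, population = 60, 18, 5000
    p = hits / n

    # Standard error and a rough 95% confidence interval for the proportion.
    se = math.sqrt(p * (1 - p) / n)
    low, high = p - 1.96 * se, p + 1.96 * se

    print(f"sample estimate: {p:.2f}")
    print(f"95% CI: {low:.2f} .. {high:.2f} (sample is {n/population:.1%} of population)")

If the bar on the graph says 0.30 but the honest interval is 0.18 to 0.42, a graph without error bars is telling you more about the analyst than about the population.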

Estimating “on a scale of 1-5” the components of the risk equation for many factors, then multiplying out and averaging, is pretty meaningless, yet I’ve seen it done by consultants from TLA companies and accepted by managers. At the very least it ignores cross-interactions, and is essentially just a “your guess is as good as mine” approach.
I have nothing against opinions about risks; what I do object to is trying to make opinions into solid numbers. That “estimate on a scale of 1-5” will have an error bar of size >= 5!
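
A minimal simulation makes the point – three factors, each honestly “somewhere between 1 and 5”, multiplied out (the ranges are made up; the spread is not):

    import random

    random.seed(42)

    # Three risk-equation factors, each 'estimated on a scale of 1-5'.
    # If the honest answer is 'anywhere in that range', look at the product.
    trials = 100_000
    products = sorted(
        random.uniform(1, 5) * random.uniform(1, 5) * random.uniform(1, 5)
        for _ in range(trials)
    )

    print("min:", round(products[0], 1),
          " median:", round(products[trials // 2], 1),
          " max:", round(products[-1], 1))

The product ranges from 1 to 125, with the bulk of it smeared across an order of magnitude – hardly the “solid number” the spreadsheet presents.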

Let’s face it, most people don’t understand statistics. All too often I’ve seen managers ignore the “once in 100 years” MTBF (and ignore any MTTR – see FMEA) because they only plan to be with the company for five years, or ignore it because it happened 25 years ago so they think they have a 75 year breathing space. Yes, I know it sounds apocryphal, but I’ve met it too often.
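
For the record, a “once in 100 years” event with a constant failure rate is memoryless: the quiet 25 years buy you nothing, and even a five-year tenure carries real exposure. A quick check under the standard Poisson assumption:

    import math

    annual_rate = 1 / 100     # 'once in 100 years'
    tenure_years = 5

    # With a constant (memoryless) failure rate, past quiet years are irrelevant.
    p_during_tenure = 1 - math.exp(-annual_rate * tenure_years)
    print(f"P(at least one event in {tenure_years} years) = {p_during_tenure:.1%}")
    # ~4.9% - and it does not matter that the last event was 25 years ago.

Call it one chance in twenty on the manager’s own watch; that is not the same thing as “someone else’s problem”.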

To my mind FMEA is not only easier than TRA, but it focuses the mind on two key issues – survival and recovery (see MTTR) – that TRA doesn’t.
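
For those who haven’t met it, the working core of FMEA is embarrassingly simple: list the failure modes, score each for severity, likelihood of occurrence and difficulty of detection, and rank by the product – the Risk Priority Number. A minimal sketch (the failure modes and scores below are invented):

    # Minimal FMEA worksheet: rank failure modes by Risk Priority Number (RPN).
    # Scores run 1-10; a higher detection score means the failure is harder to spot.
    failure_modes = [
        # (failure mode,                       severity, occurrence, detection)
        ("Backup tapes never test-restored",          9,          6,         8),
        ("Month-end known to one operator only",     10,          4,         9),
        ("Firewall rule drift",                       5,          7,         4),
    ]

    for name, sev, occ, det in sorted(
            failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
        print(f"RPN {sev * occ * det:4d}  {name}")

Notice that nothing in the worksheet asks you to guess an attacker’s motives; it asks what fails, how badly, and whether you would even notice.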


Spam, baseline and ROI calculation

We know that anti-spam (and for some, AV) is a necessary baseline.
(I’ll avoid using the ‘diligence’ words for now.)

But here is a spreadsheet that ‘does the numbers’.

As I’ve said before, the ROI issue isn’t about justifying the project – the normal B-school way of looking at things. It’s about ‘choosing between’. That’s what this spreadsheet purports to be doing. The reality is subtly different.
In doing so, it illustrates all that can go wrong with this approach. Basically you are ‘buying in’ to someone else’s way of looking at the analysis. This is essentially the trick that sales-droids use: they get you to accept their world view, and once you do, accepting that their product is the ‘right one’ naturally follows.

In a way, this is like the conclusion to the Sapir-Whorf hypothesis about linguistic determinism.
“Put simply, the hypothesis argues that the nature of a particular language influences the habitual thought of its speakers. Different patterns of language yield different patterns of thought.”

That Wikipedia article also has some references to the use of language as determinism in fiction that are very pertinent.

Elsewhere I find: “A well-known saying by Alan Perlis states that ‘a language that doesn’t affect the way you think about programming is not worth knowing’.” He’s talking about computer languages, but it applies to natural languages – and of course to tools: software tools like spreadsheets, word processors, UML modelers, and things like Photoshop.
So how is this relevant?

There in the spreadsheet are the four categories:

  • Difficulty
  • Investment
  • Capability
  • Expandability

No-one quite says what they mean.
The Initial/Daily/Ongoing breakdown might appear to clarify, but when you work through it, it can also add to the confusion.

Is the investment in time or money? Well, if time is money, how are we calculating the equivalence?
Right, another sheet, grade of techie vs bill-out rate.
Well, what about things like cost per server, cost per seat, cost per bandwidth?

…. and so on …

How is ‘daily’ different from ‘ongoing’?

…. and so on …

Are these the only categories? Would you break things down more specifically? Would you use other or more categories?
Then there are the weighting values. Do you agree with them? Are you going to accept other people guiding the judgment, making it appear that you are making the decisions, when in reality they have made all the important structural decisions?
The answer from the numbers-people (you know who you are!) is that it is easy enough to add another sheet, insert another row or column, alter the weightings … And indeed it is. For them. They think in those terms – they cannot do otherwise. “When the only tool you have is a hammer you view every problem as a nail”.
But implicit in this is that you accept that culture and its conclusions – that the numbers (and the spreadsheet) are “God”.
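
A toy demonstration of how much those structural decisions matter – the same scorecard, two defensible weightings, two different “winners” (the products, scores and weights are all invented):

    # Same scores, different 'reasonable' weightings -> a different winner.
    # Categories follow the spreadsheet: Difficulty, Investment,
    # Capability, Expandability.
    scores = {
        "Product A": [2, 4, 5, 3],
        "Product B": [4, 5, 3, 4],
    }

    weightings = {
        "vendor's weighting (favours Capability)": [1, 1, 3, 1],
        "CFO's weighting (favours Investment)":    [2, 3, 1, 1],
    }

    for label, w in weightings.items():
        ranked = sorted(scores,
                        key=lambda p: sum(s * wi for s, wi in zip(scores[p], w)),
                        reverse=True)
        print(f"{label} picks: {ranked[0]}")

Whoever sets the weights makes the decision; the person clicking through the spreadsheet merely ratifies it.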

At the close of the First World War, Kipling wrote a poem that poked fun at the established “old wise sayings” that many people used to justify action while avoiding really thinking about matters. I see the obsessive use of tools like spreadsheets and the numbers they give – hidden behind the arbitrary assumptions of the model – as a similar phenomenon.
So lets look at another way.
How would you choose your AV product?

  • Market Dominance
    As in ‘you never get fired for buying IBM’ updated to the modern world.
    Certainly when the Big Name auditors come along to do your compliance audit they will check “AV” off on their list.
  • Magazine Reviews
    There are two approaches here. You can ‘blindly’ accept the ‘Editor’s Choice’, which is really no different from accepting how they formulated their spreadsheet, assigned weightings and values. Alternatively you can read lots of reviews and form your own impression.
  • Advice of Peers
    As in ‘this worked for me‘ or ‘I had lots of problems with that one‘.
    In some ways this is like reading lots of reviews, but with a more personal and ‘real world’ feel to it. You can also seek out someone who has a set-up similar to yours so that the context of the advice is more meaningful.
  • Corporate Standard
    Maybe you don’t have a choice. (I recall a site where ‘policy’ said that all IBM computers had to be in the raised-floor room. It was probably originally justified by insurance, UPS, HVAC issues. However when I audited the site I found a stack of IBM laptops there that had never been used.) Maybe the security is out-sourced or somehow ‘managed’ and details are outside your control. Perhaps there is a “Master Purchasing Agreement” with a specific vendor and if that vendor has a product that is pertinent you get stuck with it.

So how do you make the decisions – not the “decision to” but the “decision which”?

Common Sense

You don’t have to be obsessively conservative or paranoid to avoid a lot of problems and risks. Applying a little common sense will do, as my fellow CISSP, Martin McKeay, points out in his blog entry:

“Use common sense – Anything that sounds too good to be true probably is. Don’t follow the link from an anonymous email promising quick riches or cheap products. Most of those are just attempts to get your money, and some are going to try and install software on your computer or get information from your computer.”

The post may be a few years old, but this advice, along with his other points, is current.


Is Bigger always Better?

No, this isn’t a Small Is Beautiful article. It’s about “Small is Practical”.

Let me begin with an anecdote.

Back in the early 1980s I worked for a UNIX shop as a kernel programmer. I wrote many device drivers for many platforms. It was a true “How I fought with Hardware and Software but kept my sanity” stage of life, and it was very interesting. One of the ‘toys’ was an early VAX-780 with an early version of BSD 4.x. No, really, we had TCP in 4.1c before 4.2. But the hardware of the VAX and the PDP-11, like other hardware I’d worked on, was limited by today’s standards. We had a whopping 4 Megabytes of memory in the PDP-11/44 that the company ran on. It supported 40 users doing development, building compilers and cross-compiling for other platforms. We shifted across to the VAX as it proved its stability, and its performance improved as Bill Joy played software leap-frog with Dave Cutler – but that’s another story.


Realistic Risk Assessment

I found the contents of this very interesting:
http://www.cato.org/pubs/regulation/regv27n3/v27n3-5.pdf

Example:

Accordingly, it would seem to be reasonable for those in charge of our safety to inform the public about how many airliners would have to crash before flying becomes as dangerous as driving the same distance in an automobile. It turns out that someone has made that calculation: University of Michigan transportation researchers Michael Sivak and Michael Flannagan, in an article last year in American Scientist, wrote that they determined there would have to be one set of September 11 crashes a month for the risks to balance out. More generally, they calculate that an American’s chance of being killed in one nonstop airline flight is about one in 13 million (even taking the September 11 crashes into account). To reach that same level of risk when driving on America’s safest roads — rural interstate highways — one would have to travel a mere 11.2 miles.
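
The arithmetic is easy to reproduce. Taking the rural-interstate fatality rate as roughly 0.7 deaths per 100 million vehicle-miles (an approximation on my part, not necessarily their exact input), the equivalence falls out directly:

    # Reproduce the Sivak/Flannagan flight-vs-driving comparison.
    # The fatality rate below is an assumed approximation.
    p_flight = 1 / 13_000_000           # risk of death per nonstop flight
    rate_per_mile = 0.7 / 100_000_000   # rural interstate deaths per vehicle-mile

    equivalent_miles = p_flight / rate_per_mile
    print(f"driving distance as risky as one flight: {equivalent_miles:.1f} miles")
    # ~11 miles, in line with the article's 11.2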

Aw come on! You have to keep the insurance business, auto-mechanics and scrapyards in business. Never mind undertakers. They all contribute positively to the economy. Probably more so than the DHS.


Laws won’t stop cybercriminals, say experts

They won’t?
Tell us something we didn’t know.

(A follow-on to http://www.securityabsurdity.com/failure.php)

Is this any different from the Canukistani Federal Gun Registry Boondoggle?
You expect criminals to register their guns?

“You can’t attack this castle unless you are this high”
It’s back to erecting a pole in your garden for the burglars to run into and knock themselves out on.

http://www.infoworld.com/article/06/05/10/78183_HNlegalsol_1.html

Terrorists and organized criminals are using computer vulnerabilities to line their pockets, but many cybersecurity ideas coming out of the U.S. Congress may not help much, some experts said Wednesday.

Congress tends to make reactive laws, too late, that address style rather than substance and get rolled in with other matters that dilute and weaken them. Look at DHS. Where’s its budget? Where’s its vision?

I’ve read other articles recently to the effect that people who manage technology can no longer remain ignorant of the technologies they manage. Sadly, we’ve had a ‘management’ view that the science of management is independent of what it manages. We are now seeing the end of that paradigm.

Since a rash of data breaches in early 2005, Congress has introduced more than 10 bills related to data breach notification.

TEN!! Can’t they get it right?
Obviously not.
But with a shotgun you don’t have to be precise, do you?

The working model for a data breach bill seems to be the SOX law, which has cost U.S. businesses hundreds of millions of dollars, Kobayashi said. “The model is a sledgehammer,” he said. “What economists hope is Congress steps back and looks at the costs and benefits before they do something like that.”

I’m sorry? Why should they do that?
Yes, it would be nice, even sensible, but what evidence is there from past behaviour that they will do this?

Instead of waiting for Congress to act, businesses should demand more secure IT products, said Ken Silva, chief security officer for security vendor VeriSign Inc. He encouraged technology buyers to join organizations that advocate for more secure products.

Well, let’s skip the ‘self-serving’ bit in that, and just look at “What do you mean by ‘secure’?”. When we’ve solved that we can start on the trivial stuff like “Does God exist?” and “Why do men and women have trouble communicating?”.

“We can’t wait for Congress to solve this problem because it’s not going to solve the problem,” Silva said. “The fact of the matter is extortion is already illegal. Passing a law to make electronic extortion even more illegal looks good on television, but it doesn’t really solve the problem.”

Therein lies the difference between the US and the Canukistani approach. Here in the GWN we have a “Criminal Code”. Instead of whole new bills that are “seen to be doing something”, we insert an extra clause in the Criminal Code to extend scope or definition.

As it says above, extortion is extortion is extortion. Fraud is fraud is fraud. It doesn’t matter what medium or technology is involved.

This is no different from what I preach in my workshops on Developing Policies and Procedures. I try to show that your “Access Control” policy is NOT about passwords; it’s about authorization – be it to the computer, the parking lot or the executive washroom. If you have all your policy as ‘reductionist’ low-level statements, each one addressing a technology rather than a principle, you will be forever revising them.
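
For example (my wording, not taken from any particular standard): a principle-level statement reads “Access to company resources – systems, premises, information – is granted only on documented authorization, and is revoked when no longer required.” The reductionist version is a dozen separate edicts – “passwords must be eight characters”, “badges must be worn”, “VPN access requires a token” – each of which has to be rewritten the day the technology changes.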

But some people never seem to learn from past mistakes. What’s the line in my quotable quote database…

People who won’t quit making the same mistake over and over are what we call conservatives.
– Richard Ford, in his novel Independence Day

(Note the small ‘c’. Ford should have listened to Disraeli.)
However I can find about a dozen more in the quotes database that are appropriate.