The InfoSec Blog

Why I don’t see the need for elaborate Risk Analysis

Posted by Anton Aylward

http://www.informationweek.com/news/showArticle.jhtml?articleID=202101781

Convicted hacker Robert Moore, who is set to go to federal prison this
week, says breaking into 15 telecommunications companies and hundreds of
businesses worldwide was incredibly easy because simple IT mistakes left
gaping technical holes.

"It's so easy. It's so easy a caveman can do it," Moore told
InformationWeek, laughing. "When you've got that many computers at your
fingertips, you'd be surprised how many are insecure."

Even before I took up auditing as a profession, every client I dealt with had glaring errors and omissions in their security arrangements, be they physical, logical or documentary.

Yes, this includes divisions of banks (brokerage firms were the worst).
Most of the horror stories would be familiar to people who read and contribute to security forums and blogs. That, when it comes down to it, is what is really astounding: the omissions are from the 'baseline' of good practice and from obvious issues such as:

- documentation, so as to span the employment of different sysadmins and to communicate within the IT group;
- restricting access to the root password (especially for developers);
- not doing development on the production machine/database;
- backups that reflect the business and not just the convenience of the hardware or the sysadmin;
- documenting (and, one hopes, approving!) changes;
- actually installing and configuring the firewall - which, of course, assumes there is a policy, reflecting the business needs rather than the sysadmin's 'best guess', to determine how it is going to be configured.

And so on and so on.

So it gets to be, if you'll pardon the analogy, like worrying over the diseases of civilization - Alzheimer's, osteoarthritis and osteoporosis, ALS, macular degeneration, the diseases of over-rich diets, senescence in general - when you don't have an adequate diet or clean water to drink.

"Standards" like a ISO-17799/27001, ITIL aren't trying to do anything more than lead people though a process to make them deal with the basic good practices. When they talk of things like Risk Analysis they are trying to get people to think about risk and their risk posture, and that is, all to often, sadly, something most firms don't seem to have got around to.

Judging by what I see people asking - as well as asserting - on other forums about security and risk, most of the IT industry is in a bad way and doesn't even know it. Of course, the dominance in IT departments of the techie-geek-and-proud-of-it who has a dislike for 'suits' means there is an unhealthy obsession with equipment (rather than business processes) as assets, and with identifying and enumerating individual threats and vulnerabilities rather than with their effect - as classes - on the business processes, and how to mitigate or recover from those effects. (In other words, FMEA. You knew I was going to get around to saying that, didn't you? :-) )
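
To make that concrete, here is a minimal sketch, in C, of FMEA-style scoring. The business processes, failure classes and 1-10 scores below are all invented for illustration; the point is that the unit of analysis is a class of effect on a business process, ranked by the classic FMEA Risk Priority Number (severity x occurrence x detectability), not an inventory of boxes and exploits.

#include <stdio.h>

/* FMEA-style scoring: rank classes of failure by their effect on a
 * business process, rather than enumerating individual gadgets.
 * RPN = severity x occurrence x detection, each scored 1-10
 * (10 = worst). All the entries below are hypothetical. */
struct failure_mode {
    const char *business_process;
    const char *effect;     /* a class of effect, not a specific exploit */
    int severity;           /* impact on the business process */
    int occurrence;         /* how often this class of failure happens */
    int detection;          /* 10 = we would never notice it happening */
};

static int rpn(const struct failure_mode *f)
{
    return f->severity * f->occurrence * f->detection;
}

int main(void)
{
    struct failure_mode modes[] = {
        { "order entry", "service interruption",    8, 4, 2 },
        { "order entry", "silent data corruption",  9, 2, 9 },
        { "payroll",     "loss of confidentiality", 7, 3, 6 },
    };
    size_t i, n = sizeof modes / sizeof modes[0];

    for (i = 0; i < n; i++)
        printf("%-12s %-24s RPN=%3d\n", modes[i].business_process,
               modes[i].effect, rpn(&modes[i]));
    return 0;
}

Mitigate the highest RPN first: note that the quiet, hard-to-detect corruption outranks the noisy outage, even though the outage happens more often.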

Let's worry about the baseline before we try to address the esoteric.

You can build a $2,500 supercomputer – but what can you do with it?

Posted by Anton Aylward

http://blogs.zdnet.com/storage/?p=184&tag=nl.e539

Years ago, David Cheriton and others built a distributed OS - Thoth, I think it was called - and the Harmony extension to UNIX. Cheriton went off to build "The V System", in which there was a message-passing microkernel on each CPU and the processes, even the subroutines of the device drivers, were distributed. Essentially all (well, not quite all) subroutine calls were low-cost messages. The result was that the load was always balanced across all available nodes. The dining philosophers problem not only became trivial, but stayed trivial as more philosophers turned up and/or more tables and plates were added or subtracted.
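
To give a flavour of what "subroutine calls as low-cost messages" means, here is a toy sketch in C. This is not Cheriton's actual kernel interface - pick_least_loaded_node(), send_msg() and recv_reply() are hypothetical primitives, and the "nodes" are simulated in-process so the sketch runs on a single machine - but it shows the shape of the idea: the caller writes what looks like an ordinary subroutine call, and where the work actually runs is the kernel's business.

#include <stdio.h>

#define NNODES 4

struct msg { int op; int arg; };

static int node_load[NNODES];     /* outstanding requests per node */
static int node_reply[NNODES];

/* The kernel's load balancing, reduced to its essence. */
static int pick_least_loaded_node(void)
{
    int i, best = 0;
    for (i = 1; i < NNODES; i++)
        if (node_load[i] < node_load[best])
            best = i;
    return best;
}

/* In the real thing these would cross the backplane or the network;
 * here each "node" just services op 1 (square) immediately. */
static void send_msg(int node, const struct msg *m)
{
    node_load[node]++;
    node_reply[node] = m->arg * m->arg;
}

static int recv_reply(int node)
{
    node_load[node]--;
    return node_reply[node];
}

/* The caller sees an ordinary subroutine call; the send/receive
 * pair underneath is what lets the load spread across nodes. */
static int remote_square(int x)
{
    struct msg m = { 1, x };
    int node = pick_least_loaded_node();
    send_msg(node, &m);
    return recv_reply(node);
}

int main(void)
{
    int i;
    for (i = 1; i <= 5; i++)
        printf("square(%d) = %d\n", i, remote_square(i));
    return 0;
}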

We've now got to the point where we desperately need this technology. We've got two, four, sixteen or sixty-four processors on a chip - which is a real high-speed backplane! Stack a few of them with a high-speed switch, like in this article ....

At the end of the article he says "another 10 years you'll be able to have the equivalent of a 5,000 node Google cluster in your den." Heck, using this technique of four boards in a mini-tower case with four CPUs on each board, I can easily get a lot of parallel power on my desktop today.

But the point is that we don't have the software that will spread the processing across it. We still have an architecture where one process lives on one machine and stays there.

Oh, I know about VMware, but that doesn't do the micro-level migration that Cheriton could achieve. Right now, Beowulf clusters are dedicated to specially written applications, like the chess-playing search tree.

Years ago (the 1980s) I wrote RPC-based applications using the SUN XPC protocols. Since then I've seen three (or more)-tier applications, like web front ends talking to database engines via TCP links. I'm now seeing RPC embedded in XML embedded in HTML for web sites. But it's still about a complete process on a machine, and that process is unable to migrate dynamically to an idle machine. Yes, I know about load balancers - that's the same trap.

We need a new programming paradigm to deal with the new hardware.

Or perhaps we need new compilers that will break the program up into such modules. Of course, some programmers will still use a style that fights the compiler.

Let's see .... When the Macintosh first came out it had an overlay scheme borrowed from one of the not-quite-virtual-memory models of the IBM 360 range. The idea was that an application had modules, and a dependency tree for them, so that not all the modules needed to be loaded at once. You could write:

/* each of these lives in its own, separately compiled, overlay module */
void do_Initialization(void);
void do_Process_command_line(void);
void do_Interactive_stuff(void);
void do_shutdown(void);

int main(int argc, char *argv[])
{
    do_Initialization();
    do_Process_command_line();
    do_Interactive_stuff();
    do_shutdown();
    return 0;
}

and compile that as one module. The "do_Initialization()" module, itself compiled in parts, would load and then unload ... and so on. So an 800k program might need only "main()" - at less than 1k - plus the data and some other modules loaded, amounting to perhaps 250k. Great if your machine only had 256k!

But lo! Some application developers (I recall Adobe being one of them!) didn't Get It. They compiled the application into one big module. Perhaps this was deliberate, so that you couldn't run anything alongside it :-)

Of course, the advent of demand-paged virtual memory made all this moot. Overlaying had been a technique to allow for lower-cost hardware - even back in the 360 days, the cost of the additional hardware for instruction interruption and restart was non-trivial. Now it's all just on the chip.

But the approach to distributed programming that Cheriton illustrated in his papers on the V System did require a new paradigm. In the same way that classical SQL (i.e. before cursors) turned the nested "for each" blocks inside out, so too did Cheriton's approach turn the subroutine call inside out.
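
To make the "inside out" analogy concrete, here is a sketch in C (the tables, fields and data are made up). The procedural version nests the "for each" loops and thereby fixes the iteration order; the SQL version, shown in the trailing comment, states the desired result and leaves the iteration to the engine - just as Cheriton's kernel, not the programmer, decides where a subroutine actually runs.

#include <stdio.h>

/* The nested "for each" blocks a programmer writes by hand: */
struct cust  { int id; const char *name; };
struct order { int cust_id; int amount; };

static struct cust  custs[]  = { { 1, "Acme" }, { 2, "Globex" } };
static struct order orders[] = { { 1, 100 }, { 2, 250 }, { 1, 40 } };

int main(void)
{
    size_t i, j;

    /* for each customer ... for each order ... match and print */
    for (i = 0; i < sizeof custs / sizeof custs[0]; i++)
        for (j = 0; j < sizeof orders / sizeof orders[0]; j++)
            if (orders[j].cust_id == custs[i].id)
                printf("%-8s %4d\n", custs[i].name, orders[j].amount);

    /* Classical SQL turns that nesting inside out - you state the
     * result and the engine chooses how, and in what order, to walk
     * the tables:
     *
     *   SELECT c.name, o.amount
     *   FROM   cust c, orders o
     *   WHERE  o.cust_id = c.id;
     */
    return 0;
}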

Certainly this is going to be an area for research if massively multi-node computing is going to end up on the desktop.

Soccer Goal Security – Fair and unfair analysis

Posted by Anton Aylward

http://taosecurity.blogspot.com/2005/08/soccer-goal-security-i-found-this-ad.html

In recent discussions on various forums, and elsewhere in this blog, I've raised the point that the way attackers value things and the way defenders value things are not the same; the attackers' perception of other values, such as business assets, processes and so forth, can be very different from yours. As an extreme example, you may be defending the network and IT assets quite capably while the executives of the company are gambling and snorting away the company's bank account. I often point to Enron as a poster-boy here - would exemplary IT security have helped?

And this is what is wrong - one of the many things wrong - with relying too heavily on the model of the classical risk equation as a basis for risk analysis. It's not that the risk equation is wrong; it's that WE DON'T KNOW.

We do know the value to us - on the inside, from our point of view.
We do not know how the attacker views things.
Any equation will suffice if we accept the guesswork of the inputs.
Or, as the philosopher Nietzsche said, "Any lie will suffice provided everyone believes in it."
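
For the record, one common form of that classical equation is the Annualized Loss Expectancy: ALE = SLE x ARO, the single loss expectancy multiplied by the annualized rate of occurrence. A few lines of C make the point about guesswork visible - the arithmetic is impeccable, and every input is a guess (all the figures below are invented, which is exactly the point):

#include <stdio.h>

/* Classical risk equation: ALE = SLE * ARO.
 * SLE = single loss expectancy (cost of one incident, dollars)
 * ARO = annualized rate of occurrence (incidents per year)
 * Both inputs here are invented - the equation is only as good
 * as the guesses fed into it. */
static double ale(double sle, double aro)
{
    return sle * aro;
}

int main(void)
{
    /* Our view from the inside ... */
    printf("ALE, defender's guess: $%.0f\n", ale(50000.0, 0.5));
    /* ... and the same equation fed the attacker's view of the world. */
    printf("ALE, attacker's view:  $%.0f\n", ale(250000.0, 2.0));
    return 0;
}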

At this level I can see the point of any form of RA, ROI or what have you, if its objective is to present a case to management to get the funding to do the security. I don't think that's an ethical or honest approach, but I can imagine that in some organizations - ones where FUD often works - it may be necessary. But if the security practitioners who make this case start believing their own lies, then things are in a terrible state.

This isn't quite the point that Richard Bejtlich is making in this particular blog article, but in other postings he points out that the classical RA methods need more rigour and a more scientific method of justifying their inputs and relationships. The approach he discusses there is called FAIR, and a great deal of it rests on the premise that "Risk Assessment is not Guesswork".
I'm sorry to say that I have to agree with Richard's analysis of the 'simple scenario' on which he comments liberally.
Richard repeatedly brings up the question of how the 'figures' and 'estimates' and situations are arrived at - his "Says who?" questions. He also questions the overall absence of hard data and precision.

In one sense that's fair, and in another sense it's not. This kind of analysis intrinsically lends itself to speculation about situations where there is no data, where all the inputs are guesswork. GIGO.

Dwight D. Eisenhower is supposed to have said "In preparing for battle I have always found that plans are useless, but planning is indispensable." No doubt the same applies to RA, but it seems to me a ponderous way to begin. There's another military adage attributed to Robert Heinlein: "Get your first shot off fast. If you miss, it will throw off the other guy's aim, allowing you to make your second shot count." From a security POV I take this to mean one should get some protection in there - what others refer to as "Baseline" and "Diligence" - while others are still doing the risk analysis.

One good and very powerful aspect of RA is often abused or completely misused: the "Identification of Assets". Let's get one thing clear: the equipment is not an asset.

In his marvelous 1992 novel "Snow Crash", Neal Stephenson describes a franchising system and makes reference to the "three ring manual". This manual is the set of operating procedures for the franchise - who does what and how, down to the smallest detail. I mention this in contrast to, for example, some of the businesses that failed after 9/11. These businesses did not have any 'plant' - desks, computers, software, even data - that could not be replaced. They failed because their real assets were not documented: the business processes existed solely in the heads of the people carrying them out.

The real assets of a company are not the COTS components. This is a mistake that technical people make. The ex-IBM consultant Gerry Weinberg, the guy who came up with the term "egoless programming", also pointed out that people with strong technical backgrounds can convert any task into a technical task, thus avoiding work they don't want to do. Once upon a time I excelled in the technical side of things, but I found that it limited my ability to influence change with management.

The business is what the business does. The tools are important, and there may be special proprietary tools (be they custom machine tools or software applications). But unless the processes for using them are documented, having them as 'physical assets' is of no use.
So yes, identifying assets is important.

But does your company know - and value - who is key to its operations? Is what that person does documented, or could it be documented, so that it could be done by someone else?

I recall one 'audit' I carried out where the machine-room operator explained to me what all the equipment was for, how the input tapes were processed and how the end-of-day reports were generated. After she finished, I asked if there was a check-off list for each day, for weekends, for month end and so on, or a manual detailing the steps she had just explained. She said there wasn't. So I asked her how she knew what to do. She told me she'd only been there a week - no 'month end' yet - but the person who held the job before her had come in one afternoon to explain what had to be done.

I don't think it takes a lot to see that the highest risk here has nothing to do with firewalls or patches or IDS.

So we get back to the 'soccer goal' picture in the article by Richard Bejtlich that I started with. He puts it in a very straightforward manner: defending against the wrong risks, no doubt because all the suppositions about the attacker's motivation and methods are incorrect and the assets have not been properly identified.

Even given some way of correctly identifying all of the above, of getting meaningful input from subject-matter experts and so on, I still see this as a lot of detailed and tedious work.

Which is why I prefer to think in terms of 'effect' and the effect of failure.

Let's look at that soccer match again. Stopping the opposing side from scoring goals is great, but that's not the business. The business is getting fans to pay to come into the stadium. If you have a winning team, that's great, but stopping goals isn't the direct cause of revenue. In fact, scoring goals - winning the game by scoring more goals than the other team - isn't always a formula for business success. The Toronto Maple Leafs, for example, sell out every home game despite their less-than-awesome record over the last few decades. Look at the history of the Green Bay Packers and the Detroit Lions, a rivalry that has spanned some 75 years and 150 games and has produced some of the most memorable moments in the history of professional football - in 1962 the Lions handed the otherwise-unbeaten Packers their only loss of the season - and through long losing stretches on both sides the fans have kept coming. Fans turn up for the entertainment, not the score, and the same holds true for soccer in the countries where it is the national sport. A winning season is great: it makes the fans happy and offers many other opportunities for bringing in revenue. Great players also bring in the fans, but great players don't always mean winning games - Patrick Ewing was arguably the greatest player the Knicks ever had, yet they rarely finished first in the Eastern Conference during his tenure and kept losing in the play-offs.

And if the players are assets because they bring in the fans, let's not forget: they also get traded.

All in all, I'm unhappy with everything about the methods of Risk Analysis that I read. They seem speculative and prone to a lot of supposition. At best they seem to pander to the belief that management needs numbers, figures, dollar values on which to base decisions.

Gerry Weinberg also talks of the "Rutabaga Rule": the rutabagas take up storefront space at the grocer's and don't sell, so get rid of them - then ask what comes next. And so with security: deal with the known stuff first, just as you would put a lock on your front door. When you've dealt with all the 'baseline' issues for your industry or similar environments, simplified your processes (because complexity leads to complications and errors), applied the Deming or Shewhart cycle (Plan, Do, Check, Act) a few times, and built and tested plans for response and recovery from failure - regardless of the threat or vulnerability - and learnt where your real problems with supporting the business processes are, then and only then would I think about the 'by the book' RA.

Why?
Because you will be able to deliver effective (and measurable) results faster than going through the RA process.