An article on LinkedIn entitled "The Truth about Practices" started a discussion thread with some of my colleagues.
The most pertinent comment came from Alan Rocker:
I'm not sure whether to quote "Up the Organisation", ("If you must have a policy manual, reprint the Ten Commandments"), or "Catch-22" (about the nice "tidy bomb pattern" that unfortunately failed to hit the target), in support of the article. Industry-wide metrics can nevertheless be useful, though it's fatal to confuse a speedometer and a motor.
However, not everyone in the group agreed with our skepticism or with the observations of the article's author.
And Anton, aren't the controls you advocate so passionately best practices?
NOT. Make that *N*O*T*!*!*! Even allowing for the lowercase!
I get criticised occasionally for long and detailed posts that some readers complain treat them like beginners, but sadly, if I don't go into detail, I get comments such as this in reply:
Data Loss is something you prevent; you enforce controls to prevent data leakage. DLP can be a programme but, I find, very difficult to support with a policy.
Does one have visions of chasing escaping data over the net with a three-ring binder labelled "Policy"?
Let me try again.
Policy comes first.
Without policy giving direction, purpose and justification, and supplying the basis for measurement, quality and applicability (never mind issues such as configuration), you are working on an ad hoc basis.
In many of the InfoSec forums I subscribe to, people regularly ask the "How long is a piece of string?" question:
How extensive a risk assessment is required?
It's a perfectly valid question we all have faced, along with the "where do I begin" class of questions.
The ISO-27001 standard lays down some necessities, such as your asset register, but it doesn't tell you the level of detail necessary. You can choose to say "desktop PCs" as a class without addressing each one, or even each different model. You can say "data centre" without having to enumerate every single component therein.
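To make that concrete, here is a minimal sketch in Python of an asset register kept at class granularity. The field names and entries are my own illustration, not anything the standard prescribes:

    # Hypothetical asset register kept at "class" granularity, as
    # ISO-27001 permits: one entry per class of asset, not per box.
    asset_register = [
        {"asset": "Desktop PCs", "owner": "IT Operations", "kind": "hardware",
         "notes": "All staff desktops treated as one class"},
        {"asset": "Data centre", "owner": "Facilities", "kind": "site",
         "notes": "A single asset; components not itemised"},
        {"asset": "Customer database", "owner": "Sales", "kind": "information",
         "notes": "Logical asset, independent of the servers hosting it"},
    ]

    for entry in asset_register:
        print(f"{entry['asset']:18} owned by {entry['owner']} ({entry['kind']})")

The point is the granularity, not the format: three entries can legitimately cover hundreds of boxes.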
Last month, this question came up in a discussion forum I'm involved with:
Another challenge I want an answer to is: do developers always need Admin rights to perform their testing? Is there not a way to give them privileged access and yet have them get their work done? I am afraid that if Admin rights are given, they would download software at will and introduce malicious code into the organization.
The short answer is "no".
The long answer leads to "no" in a roundabout manner.
Unless your developers are developing admin software, they should not need admin rights to test it.
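As an aside - and this is my own sketch, not something from that discussion - you can even have the test harness enforce the point, refusing to run with admin rights so that any accidental dependence on them surfaces immediately:

    # Hypothetical guard for a POSIX test harness: refuse to run the
    # suite as root so code that silently needs admin rights fails
    # fast in development rather than in production.
    import os
    import sys

    def assert_unprivileged() -> None:
        if os.name == "posix" and os.geteuid() == 0:
            sys.exit("Refusing to run tests as root: "
                     "the product must work for ordinary users.")

    if __name__ == "__main__":
        assert_unprivileged()
        print("Running tests as an ordinary user ...")
        # ... hand off to the real test suite here ...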
This kind of question keeps coming up; many people are unclear about the Statement of Applicability in ISO-27001.
The SoA should outline the measures to be taken in order to reduce risks such as those mentioned in Annex A of the standard. These are based on 'Controls'.
But if you are using closed-source products such as those from Microsoft, are you giving up control? Things like validation checks and integrity controls are 'internal'.
Well, it's a bit of word-play.
- The SoA contains exclusions for controls that are not applicable because the organization doesn't deal with those problems (e.g. e-commerce).
- The SoA contains exclusions for controls where a threat exists (and risks arise) but cannot be helped (e.g. A.12.2, Correct processing in applications) and no measures can be taken to reduce the risk.
In the latter case, a record must be present in the risk assessment stating that the risk (even if it is above the minimum accepted risk level) is accepted.
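To illustrate, such a record might look like the following Python sketch. The field names are mine; the standard prescribes no particular format, only that applicability, justification and acceptance be recorded:

    # Hypothetical shape of an SoA entry with its risk-acceptance record.
    from dataclasses import dataclass

    @dataclass
    class SoAEntry:
        control: str         # Annex A reference, e.g. "A.12.2"
        applicable: bool     # is the control in scope?
        justification: str   # why it is included or excluded
        risk_accepted: bool  # has management accepted the residual risk?

    entries = [
        SoAEntry("A.10.9", False, "No e-commerce within scope", False),
        SoAEntry("A.12.2", False, "No feasible measure; residual risk accepted", True),
    ]

    for e in entries:
        status = "included" if e.applicable else "excluded"
        print(f"{e.control}: {status} - {e.justification}")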
The key to the SoA is SCOPE.
Call me a dinosaur (that's OK, since it's the weekend and I'm dressed down to work in the garden) but ...
On the ISO27000 Forum list, someone asked:
That's a very ingenious way of looking at it!
One way of formulating the risk statement is from the control objective mentioned in the standard. Is there any other way out?
Ingenuity aside, I'd be very careful with an approach like this.
Risks and controls are not, and should not be, 1:1.
The Navy's premier institution for developing senior strategic and operational leaders started issuing students Apple iPad tablet computers equipped with GoodReader software in August 2010, unaware that the mobile app was developed and maintained by a Russian company, Good.iWare, until Nextgov reported it in February.
OK, so it's not news, and OK, I've posted about this before, but ...
So the question here is: why should software produced in a country that has plenty of evil-minded programmers of its own be superior to software produced in Russia?
You do do backups, don't you? Backing up to DVD is easy, but what software should you use?
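For what it's worth, here is a minimal Python sketch of one approach - the paths are placeholders, and the burning itself is left to whatever tool you prefer (growisofs, K3b, Brasero): archive the tree, then split the archive into parts that each fit on a single-layer DVD.

    # Sketch: archive a directory tree, then split the archive into
    # DVD-sized parts ready for burning. Paths are placeholders.
    import tarfile

    DVD_BYTES = 4_400_000_000   # stay safely under a 4.7 GB single-layer DVD
    BLOCK = 1 << 20             # copy in 1 MiB blocks
    SOURCE = "/home/me/documents"
    ARCHIVE = "backup.tar.gz"

    with tarfile.open(ARCHIVE, "w:gz") as tar:
        tar.add(SOURCE)

    with open(ARCHIVE, "rb") as src:
        part, written = 0, 0
        dst = open(f"{ARCHIVE}.{part:03d}", "wb")
        while block := src.read(BLOCK):
            if written + len(block) > DVD_BYTES:
                dst.close()
                part += 1
                dst = open(f"{ARCHIVE}.{part:03d}", "wb")
                written = 0
            dst.write(block)
            written += len(block)
        dst.close()

Restore by concatenating the parts and untarring (cat backup.tar.gz.* | tar xzf -). And do test the restore: a backup you have never restored is a hope, not a backup.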
So to have good (if subjective) protection, your layered protections and controls have to be "bubbled" as opposed to linear (which merely slows down or impedes a direct attack).
I have doubts about "defence in depth" analogies with the military that many people in InfoSec use.
Read what they are really talking about in those military examples: it's "ablation", that is, burning up resources - land (the traditional defence of the Russian Empire), manpower (the northern states in the US Civil War) or matériel (the USA in WW2). They try to slow down a direct, linear attack, hopefully to a standstill.
As the Blitzkrieg showed in dealing with the Maginot Line, if you "go around it" the defence isn't a lot of use.
Through the ages of war and politics and empire-hood and nation-hood and tribalism we've seen many threats and attacks and subversions used.
The reality is that many InfoSec defences are more like umbrellas: they assume that the attack is coming from a particular direction in a particular form. What's needed is an all-enclosing "bubble" rather than something linear like the 'defence in depth' model. But that gets back to the problem of the perimeter.
Many wifi-enabled devices are really "spies inside the defensive perimeter".
There was a scare a while ago that various networking equipment was made by companies or fabricators in places that were, or might be, inimical to us or economic competitors, and as such might have subversive code hidden in it. No doubt this will come around again when journalists have nothing better to write about or the State Department needs to wave a big stick and scare the public -- its way of showing that it's "doing something".
But how can we tell? The reality is that "security specialists" are finding errors - never mind deliberately malicious code - in all manner of devices: pacemakers, insulin pumps, automobile throttle controllers. Will they find "errors" that allow subversion in mainstream IT devices like home wifi routers (aka the next generation of spambots) and home PC software (that's a no-brainer, isn't it!), never mind commercial databases?
I dedicate this to Ken Thompson and his "Reflections on Trusting Trust".
The hack to make HP printers burn was interesting but, let's face it, a printer today is a special-purpose computer, and a computer almost always has a flaw that can be exploited.
In his book on UI design, "The Inmates are Running the Asylum", Alan Cooper makes the point that just about everything these days - cameras, cars, phones, hearing aids, pacemakers, aircraft, traffic lights - has a computer running it, so what we interface with is the computer, not the natural mechanics of the device, any more.
Applying this observation makes this a very scary world - more like Skynet in the Terminator movies, now that cars have Navi*Star and in some countries the SmartStreets traffic systems have the traffic lights telling each other about their traffic flow. Cameras already have wifi so they can upload to the 'Net-of-a-Thousand-Lies.
Some printers have many more functions: fax, repro and scanning as well as printing a document. And look at firewalls - look at all the additional functions being poured into them because of the "excess computing facility": DNS, Squid-like caching, authentication ...
I recently bought a LinkSys box for VoIP, and got the simplest one I could find. I saw models that were also wifi routers, print servers and more, all bundled onto the "gateway" along with the "firewall" function. And the firewall was a lot less capable than the one in my old SMC Barricade-9 home router.
I'm dreading what the home market will have come IPv6.
I recall the Chinese curse: yes, we live in "interesting security issue" times!
But in the long run of things the HP printer hack isn't that serious. After all, how many printers are exposed to the Internet? We have to ask "how likely is that?"
Too many places (and people) put undue emphasis on Risk Analysis and ask "show me the numbers" questions. As if everyone who has been hacked (a) even knows about it and (b) is willing to admit to the details.
No, I agree with Donn Parker: there are many things we can do that are in the realm of "common sense" once you stop and think about it. Many protective controls are "umbrellas". It's about how you configure the firewall you've already paid for and installed (you did install it, didn't you - it's not sitting in its box in the wiring closet?), or about spending the money you would have spent anyway on the model with better control and protection. You do this with your car - air-bags, ABS and so on - so why not with IT equipment? The "baseline" is more often about proper decisions and proper configuration than about "throwing money at it" the way governments and government agencies do.
What framework would you use to provide for quantitative or qualitative risk analysis at both the micro and macro level? I'm asking about a true risk assessment framework not merely a checklist.
Yes, this is a bit of a META-question. But then it's Sunday, a day for contemplation ...
When do things like these stop being checklists and become frameworks?
COBIT is very clearly a framework, but not one for risk analysis, and even its section on risk analysis fits into a business model rather than a technology model.
ISO-27K is arguably more technology (or at least InfoSec) focused than COBIT, but again risk analysis is only part of what it's about. ISO-27K calls itself a standard, but in reality it's a framework.
The message that these two frameworks send about risk analysis is
Context is Everything
(You expected me to say that, didn't you?)
I'm not sure any RA method works at layer 8 or above. We all know that managers can read our reports and recommendations and ignore them. Or perhaps not read them, since being aware of the risk makes them liable.
Ah. Good point.
On LinkedIn there was a thread asking why banks seem to ignore risk analysis ... presumably because their doing so has brought us to the international financial crisis we're in (though I don't think it's that simple).
The trouble is that RA is a bit of a 'hypothetical' exercise.
The documentation required and/or needed by ISO-2700x is a perennial source of dispute in the various forums I subscribe to.
Of course management has to define matters such as scope and applicability and the policies, but how much of the detail of getting there needs to be recorded? How much of the justification for the decisions?
Yes, you could have reviews and summaries of all meetings and email exchanges ...
But that is not required and has nothing to do with the standard or its requirements.
The standard does NOT require a management review meeting.
What's interesting here is that this isn't preaching "The Cloud" and only mentions VDI in one paragraph (2 in the one-line expanded version).
Also interesting is the real message: "Microsoft has lost it".
Peter Drucker, the management guru, pointed out that the very last buggy-whip manufacturer in the age of automobiles was very efficient in its processes - it *HAD* to be to have survived that long. (One could say the same about sharks!)
"Keeping desktop systems in good working order is still a labour of Sysiphus .."
Indeed. But the Linux desktop and Mac OS X seem to avoid most of the problems that plague Microsoft.
A prediction, however.
The problem with DOS/Windows was that the end user was the admin and could fiddle with everything, including downloading and installing new code. We are moving that self-same problem onto smartphones and tablets. Android may be based on Linux, but it's the same 'end user in control' model that we had with Windows. It's going to be a malware circus.
In case you're not aware, ISECOM (the Institute for Security and Open Methodologies) has published OSSTMM 3, the Open Source Security Testing Methodology Manual - http://www.isecom.org/osstmm/
There's an interesting segue to this at
Skip over his ranting about the definition of "hackers".
This is the meat:
We wrote the OSSTMM 3 to address these things. We knew that penetration testing, the way it continued to be marginalized, would eventually hurt security. Yes, the OSSTMM isn't practical for some because it doesn't match the commercial industry security of today. But that's because the security model today is crazy! And you don't test crazy with tests designed to prove crazy. So any penetration testing standard, baseline, framework, or methodology that focuses on finding and exploiting vulnerabilities is only perpetuating the one-trick pony problem. Furthermore it's also perpetuating security through patchity, a process that's so labor intensive to assure homeostasis that nobody could maintain it indefinitely, which is the exact definition of a loser in the cat and mouse game. So you can be sure it also doesn't scale at all with complexity or size.
I've been outspoken against pen testing for many years - to my clients, at conferences and in my blog. I'm sure I've upset many people, but I do believe that the model plays up to the Hollywood idea of an Uberhacker, produces a whack-a-mole attitude and is an example of avoidance behaviour: avoiding proper testing and risk management such as incident response and good facilities management.
I've seen too many "pen testers" and demos of pen testing that are just plain ... STUPID. Unprofessional, unreasonable and pandering to the ignorance of managers.
In the long run the "drama-response" of the classical pen-test approach is unproductive. It teaches management the wrong thing - to respond to drama rather than to set up a good system of governance based on policy, professional staffing, adequate funding and operations based on accepted good principles such as change management.
And worse, it
- shows how little faith your management have in the professional capabilities of their own staff, who are the people who should know the system best, and of the auditors who are trained not only in assessing the system but assessing the business impact of the risks associated with a vulnerability
- gives no guarantee about what collateral damage the outsider did in gaining root
- says nothing about things that are of more importance than any vulnerability, such as your Incident Response procedures
- indicates that your management doesn't understand or make use of a proper development-test-deployment life-cycle
Yes, classical hacker-driven pen testing is more dramatic, in the same way that Hollywood movies are more dramatic. And about as realistic!
"Crazy" is a good description of that approach.
Apparently (ISC)2 did this survey ... which means they asked the likes of us ...
Faced with an attack surface that seems to be growing at an overwhelming rate, many security professionals are beginning to wonder whether their jobs are too much for them, according to a study published last week.
Right. If you view this from a technical, bottom-up POV, then yes.
Conducted by Frost & Sullivan, the 2011 (ISC)2 Global Information Security Workforce Study (GISWS) says new threats stemming from mobile devices, the cloud, social networking, and insecure applications have led to "information security professionals being stretched thin, and like a series of small leaks in a dam, the current overworked workforce may be showing signs of strain."
Patching madness, all the hands-on ... yes, I can see that even the octopoid whiz-kids are going to feel like the proverbial one-armed paper-hanger.
Which tells me they are doing it wrong!
Two decades ago a significant part of my job was installing and configuring firewalls and putting in AV. But the only firewall I've touched in the last decade is the one under my desk at home, and that was when I was installing a new desk. Being a Linux user, I don't bother with AV here.
"Hands on"? Well yes, I installed a new server on my LAN yesterday.
No, I think I'll scrub it; I don't like Ubuntu after all. I'm putting in Asterisk. That means re-doing my VLAN and the firewall rules.
So yes, I do "hands on". Sometimes.
At client sites I do proper security work. Configuring firewalls and installing Windows patches is no longer "security work"; the IT department does that. It has evolved into the job of the network admin and the Windows/host admin. They do the hands-on; we work with the policy and translate it into what has to be done.
Application vulnerabilities ranked as the No. 1 threat to organizations among 72 percent of respondents, while only 20 percent said they are involved in secure software development.
Which illustrates my point.
I can code; many of us came to security via paths that involved being coders and system and network admins. I was a good coder, but as a coder I had little "leverage" to "Get Things Done Right". If I were "involved" in secure software development I would not have as much leverage as I would in a 'hands-off' role, working with management to set up an environment for producing secure software through training and orientation, policy, tools, testing and so forth. BTDT.
There simply are not enough of us - and never will be - to make security work "bottom up" the way the US government seems to be trying. We can only succeed "top down", by convincing the board and management that it matters, by building a "culture of security".
This is not news. I'm not saying anything new or revolutionary, no matter how many "geeks" I may upset by saying that Policy and Culture and Management matter "more". But if you are one of those people who are overworked, think about this:
Wouldn't your job be easier if the upper echelons of your organizations, the managers, VPs and Directors, were committed to InfoSec, took it seriously, allocated budget and resources, and worked strategically instead of only waking up in response to some incident, and even then just "patching over" instead of doing things properly?
Information Security should be Business Driven, not Technology Driven.
Or devolved, depending on how you look at it.