My fellow CISSP and author Walter Jon Williams observed that:
"Paranoia is not a part of any mindset. It is an illness."
Ah, Walter the literalist!
Yes, I agree with what you say, but look at it this way:
"We're paid to be paranoid" doesn't mean we're ill.
It's a job.
Now if your job is an obsession, one you take home with you, one that interferes with your family life, one you can't let go of, then it's an illness, whatever it is.
"We're paid to be paranoid"
It's a job. You don't pay us Information Security Professionals to be Pollyannas, to have a relaxed attitude.
Many of us come from a military or law enforcement background, some having served at the sharp edge of confrontations. The sharp edge isn't always the "mud and guns"; sometimes it's watching a screen or sifting
through intelligence reports or forensics or after action reports or ...
But if you don't have (a) a suspicious mind and (b) 20-20 peripheral vision about threats and contingencies and (c) a complete lack of silo-ization, then you can't be doing a good job in those roles.
Perhaps there are "pen testers" who know everything about breaking into a network. Ranum and others have written on why such people are not really "security professionals": part of that is their silo mind-set.
We see similar rants about "jumped-up system administrators".
Many of us here are engineers or have an engineering background or education. Engineers, I've found, often operate on the expectation that things *will* go wrong, stuff *will* break, it *won't* perform to manufacturers' specs. Not all of that is experience; a good part is education, since they are taught how to build indefinitely reliable stuff
out of unreliable parts - given the budget and opportunity. And if engineers are sceptical about anything, it's budget.
So when it comes down to a quick description of this "suspicious" mindset, one that is not confined to a narrow silo but covers all the domains of the CBK and possibly more (perhaps you too read Risks Digest and GrandPaRob's book reviews), one that would qualify you for various TLA organizations which we choose to discuss only in unfavourable terms, _what_ word or phrase are you going to use?
I agree, Walt, the definition of 'PPD' in DSM-IV is unpleasant and not one that I would like to be applied to me:
Paranoid Personality Disorder
A pervasive distrust and suspiciousness of others such that their motives are interpreted as malevolent, beginning by early adulthood and present in a variety of contexts, as indicated by four (or more) of the following:
- Suspects, *without sufficient basis*, that others are exploiting, harming, or deceiving him or her.
- Is preoccupied with *unjustified doubts* about the loyalty or trustworthiness of friends or associates.
- Is reluctant to confide in others because of *unwarranted* fear that the information will be used maliciously against him or her.
- Reads hidden demeaning or threatening meanings into benign remarks or events.
- *Persistently bears grudges* (i.e., is unforgiving of insults, injuries, or slights).
- Perceives attacks on his or her character or reputation that are not apparent to others and is quick to react angrily or to counter-attack.
- Has recurrent suspicions, *without justification*, regarding fidelity of spouse or sexual partner.
I'd note on reading the above that if that definition were to be applied to a nation state or its security apparatus then many countries of the Western World and quite a few of the ones in the Eastern World can be clinically diagnosed as being 'paranoid'.
That page I reference goes on to define 'Schizoid Personality Disorder'.
The 'solitary' and the 'religious' parts seem contradictory, but one wonders.
The point I think is key to what you say, Walter: we need a better way, and yes, what we are doing is Risk Analysis. I think that Risk - the probabilistic aspect - is important and differentiates us from the think-tank prophets of doom,
even though the latter grab headlines and produce responses from politicians - viz. Global Warming and many such in the past.
Good managers understand risk.
Perhaps this is why the ISO-31000 people talk of 'risk' in terms of uncertainty and allow for an upside. They see a risk of winning a lottery.
The "paid to be paranoid" view is important. A lot of the time in my career I've been paid not to be paranoid but to find controls and opportunities. Perhaps this is the ISO-31000 aspect.
That being said, I think the ISO-31000 people have twisted the language a fair bit and become obsessional in their own way. You don't insure against success. Much of our culture is really about controls and safety nets. That doesn't - shouldn't - destroy hope and progress.
In my DatabaseOfDotSigQuotes is this:
If a better system is thine, impart it;
if not, make use of mine.
The phrase "Paid to be paranoid" is succinct and catchy -- and no, that doesn't mean your objections are wrong, Walt. But not everyone lives & dies by DSM-IV. And yes, I agree with the rest of your post about what we should be projecting as an image.
But can you or anyone else come up with something better, something still succinct and still catchy?
I'd be glad to hear it and make use of it.
So I need to compile a list of ALL assets, information or otherwise,
That leads to tables and chairs and powerbars.
OK so you can't work without those, but that's not what I meant.
Physical assets are only relevant insofar as they are part of information processing. You should not start from those; you should start from the information and look at how the business processes make use of it. Don't confuse your DR/BC plan with your core ISMS statements. ISO Standard 22301 addresses that.
This is, ultimately, about the business processes.
I often explain that Information Security focuses on Information Assets.
Some day, on the corporate balance sheet, there will be an entry
which reads, "Information"; for in most cases the information is
more valuable than the hardware which processes it.
-- Adm. Grace Murray Hopper, USN Ret.
Some people see this as a binary absolute - they think that there's no need to assess the risks to the physical assets, or that somehow this is automatically considered when assessing the risk to information.
The thing is there are differing types of information and differing types of containers for them.
On the ISO27000 forum one user gave a long description of his information gathering process but expressed frustration over what to do with it all - the assets, the threats and so forth - and trying to make it into a risk assessment.
It was easy for the more experienced of us to see what he was missing.
He was missing something very important -- a RISK MODEL.
The model determines what you look for and how it is relevant.
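To make that concrete, here is a minimal sketch of a qualitative risk model. The 1-5 scales, the band thresholds, and the register entries are all my own illustrative assumptions, not values from any standard:

```python
# A minimal qualitative risk model: a risk is scored from the likelihood
# of a threat acting on an asset and the impact if it succeeds.
# Scales and thresholds here are illustrative choices, not ISO values.

def risk_score(likelihood: int, impact: int) -> int:
    """Score a risk on a simple likelihood x impact scale (1-5 each)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be 1..5")
    return likelihood * impact

def classify(score: int) -> str:
    """Band the raw score so it can drive treatment decisions."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Without a model like this, an asset list and a threat list are just
# inventories; the model is what relates them to each other.
register = [
    {"asset": "customer database", "threat": "SQL injection",  "likelihood": 4, "impact": 5},
    {"asset": "office printer",    "threat": "firmware hack",  "likelihood": 2, "impact": 2},
]

for entry in register:
    score = risk_score(entry["likelihood"], entry["impact"])
    print(entry["asset"], "->", classify(score))
```

The point is not the arithmetic but the structure: once a model exists, every gathered asset and threat has a place to go.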
Java 7 Update 10 and earlier contain an unspecified vulnerability
that can allow a remote, unauthenticated attacker to execute arbitrary
code on a vulnerable system.
By convincing a user to visit a specially crafted HTML document,
a remote attacker may be able to execute arbitrary code on a vulnerable system.
Well, yes .... but.
In many of the InfoSec forums I subscribe to, people regularly ask the "How long is a piece of string?" question:
How extensive a risk assessment is required?
It's a perfectly valid question we all have faced, along with the "where do I begin" class of questions.
The ISO-27001 standard lays down some necessities, such as your asset register, but it doesn't tell you the detail necessary. You can choose to say "desktop PCs" as a class without addressing each one, or even addressing the different models. You can say "data centre" without having to enumerate every single component therein.
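A class-level register can be sketched very simply. The classes, fields and owners below are my own illustration; ISO-27001 requires an asset inventory with owners but does not dictate this granularity or format:

```python
# Sketch of an asset register kept at class level rather than item level.
# Fields and entries are illustrative, not a prescribed ISO-27001 layout.

asset_register = [
    {"class": "desktop PCs", "owner": "IT Ops",     "count": 120, "notes": "standard build"},
    {"class": "laptops",     "owner": "IT Ops",     "count": 45,  "notes": "disk encryption enforced"},
    {"class": "data centre", "owner": "Facilities", "count": 1,   "notes": "covers all racks and cabling"},
]

def owners(register):
    """Each asset class still needs a single accountable owner."""
    return {entry["class"]: entry["owner"] for entry in register}

print(owners(asset_register))
```

The register stays small enough to maintain, and the risk assessment can still drill into a class later if one member turns out to be special.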
From the left hand doesn't know what the right hand is doing department:
Ngair Teow Hin, CEO of SecureAge, noted that smaller companies
tend to be "hard-pressed" to invest or focus on IT-related resources
such as security tools due to the lack of capital. This financial
situation is further worsened by the tightening global and local
economic climates, which has forced SMBs to focus on surviving
above everything else, he added.
Well, let's leave the vested interests of security sales aside for a moment.
I read recently an article about the "IT Doesn't matter" thread that basically said part of that case was that staying at the bleeding edge of IT did not give enough of a competitive advantage. Considering that most small (and many large) companies don't fully utilise their resources, don't fully understand the capabilities of the technology they have, don't follow good practices (never mind good security), this is all a moot point.
Let us leave aside the poor blog layout, with Dejan's picture 'above the fold' taking up too much screen real estate. In actuality he's not that ego-driven.
What's important in this article is the issue of making OBJECTIVES clear and communicating them (i.e. putting them in your Statement of Objectives, what ISO27K calls the SoA) and keeping them up to date.
Dejan Kosutic uses ISO27K to make the point that there are high level objectives, what might be called strategy, and the low level objectives. Call that the tactical or the operational level. Differentiating between the two is important. They should not be confused. The high level, the POLICY OBJECTIVES should be the driver.
Yes, there may be a lot of fiddly bits of technology and the need for the geeks to operate it at the lower level. But if you don't get the lower level right to an adequate degree, you are not meeting the higher objectives.
Last month, this question came up in a discussion forum I'm involved with:
Another challenge to which I want to get an answer is: do developers
always need Admin rights to perform their testing? Is there not a way to
give them privileged access and yet have them get their work done? I am
afraid that if Admin rights are given, they would download software at
free will and introduce malicious code into the organization.
The short answer is "no".
The long answer leads to "no" in a roundabout manner.
Unless your developers are developing admin software they should not need admin rights to test it.
This kind of question keeps coming up; many people are unclear about the Statement of Applicability in ISO-27000.
The SoA should outline the measures to be taken in order to reduce risks such as those mentioned in Annex A of the standard. These are based on 'Controls'.
But if you are using closed-source products such as those from Microsoft, are you giving up control? Things like validation checks and integrity controls are 'internal'.
Well, it's a bit of a word-play.
- The SoA contains exclusions on controls that are not applicable because the organization doesn't deal with these problems (e.g. e-commerce).
- The SoA contains exclusions on controls where a threat exists (and risks arise) but cannot be helped (e.g. A.12.2 Correct processing in applications) and no measures can be taken to reduce these risks.
With this, a record must be present in the risk assessments, stating that the risk (even if it is above the minimum accepted risk level) is accepted.
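Those two kinds of SoA entry can be recorded in a simple structure so every exclusion or accepted risk traces back to the risk assessment. The field names and statuses below are my own sketch, not a prescribed ISO-27001 format:

```python
# Sketch of SoA records covering both cases: a control excluded as not
# applicable, and a control whose risk is formally accepted.
# Field names and values are illustrative only.

soa = [
    {"control": "A.10.9 E-commerce services",
     "applicable": False,
     "justification": "organization does no e-commerce"},
    {"control": "A.12.2 Correct processing in applications",
     "applicable": True, "implemented": False, "risk_accepted": True,
     "justification": "no feasible measure; residual risk formally accepted"},
]

def accepted_risks(entries):
    """Controls with accepted risk must also appear in the risk assessment record."""
    return [e["control"] for e in entries if e.get("risk_accepted")]

print(accepted_risks(soa))
```

An auditor can then walk from each SoA line either to a justification for exclusion or to an acceptance entry in the risk register.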
The key to the SOA is SCOPE.
On the ISO27000 Forum list, someone asked:
That's a very ingenious way of looking at it!
One way of formulating the risk statement is from the control
objective mentioned in the standard.
Is there any other way out ?
Ingenious aside, I'd be very careful with an approach like this.
Risks and controls are not, and should not be, 1:1.
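The relationship is many-to-many, which is exactly what deriving one risk statement per control objective throws away. A small sketch (the risk and control names are invented for illustration):

```python
# Risks and controls are many-to-many: one risk is usually treated by
# several controls, and one control usually mitigates several risks.
# All names below are illustrative.

risk_to_controls = {
    "laptop theft":        ["full-disk encryption", "asset tracking", "clear-desk policy"],
    "data leakage":        ["full-disk encryption", "DLP gateway", "access control"],
    "unauthorized access": ["access control", "logging and review"],
}

def controls_covering(risk_map):
    """Invert the map: which risks does each control help treat?"""
    inverted = {}
    for risk, controls in risk_map.items():
        for control in controls:
            inverted.setdefault(control, []).append(risk)
    return inverted

# "full-disk encryption" treats both laptop theft and data leakage;
# a 1:1 control-objective-to-risk mapping cannot express that.
print(controls_covering(risk_to_controls)["full-disk encryption"])
```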
The Navy's premier institution for developing senior strategic and
operational leaders started issuing students Apple iPad tablet
computers equipped with GoodReader software in August 2010,
unaware that the mobile app was developed and maintained by
a Russian company, Good.iWare, until Nextgov reported it in February.
OK, so it's not news, and OK, I've posted about this before, but ...
So the question here is: Why should software produced in the country where there are more evil-minded programmers be superior to software produced in Russia?
So to have great (subjective) protection, your layered protections and controls have to be "bubbled" as opposed to linear (designed to slow down or impede a direct attack).
I have doubts about "defence in depth" analogies with the military that many people in InfoSec use.
Read what they are really talking about in those military examples: it's "ablation" - burning up resources, like land (the traditional defence the Russian Empire used), manpower (which the northern states used in the US Civil War) and resources (the USA in WW2). They try to slow down a direct, linear attack, hopefully to a standstill.
As the Blitzkrieg showed in dealing with the Maginot Line, if you "go around it" the defence isn't a lot of use.
Through the ages of war and politics and empire-hood and nation-hood and tribalism we've seen many threats and attacks and subversions used.
The reality is that many InfoSec defences are more like umbrellas: they assume that the attack is coming from a particular direction in a particular form. What's needed is more like an all-enclosing "bubble" rather than something linear in the 'defence in depth' model. But that gets back to the problem of the perimeter.
Many wifi enabled devices are really "spies inside the defensive perimeter".
There was a scare a while ago that various networking equipment was made by companies or fabricators in places that were, or might be, inimical or economic competitors, and as such might have subversive code hidden in it. No doubt this will come around again when journalists have nothing better to write about or the State Department needs to wave a big stick and scare the public -- its way of showing that it's "doing something".
But how can we tell? The reality is that "security specialists" are finding errors - never mind deliberately malicious code - in all manner of devices: pacemakers, insulin pumps, automobile throttle controllers. Will they find "errors" that allow subversion in mainstream IT devices like home wifi routers (aka the next generation of spambots) and home PC software (that's a no-brainer, isn't it!), never mind commercial databases?
I dedicate this to Ken Thompson.
The hack to make the HP printers burn was interesting, but let's face it, a printer today is a special purpose computer, and a computer almost always has a flaw which can be exploited.
In his book on UI design "The Inmates are Running the Asylum", Alan Cooper makes the point that just about everything these days, cameras, cars, phones, hearing aids, pacemakers, aircraft, traffic lights ... have computers running them and so what we interface with is the computer not the natural mechanics of the device any more.
Applying this observation makes this a very scary world. More like Skynet in the Terminator movies, now that cars have Navi*Star and, in some countries, the SmartStreets traffic systems have the traffic lights telling each other about their traffic flow. Cameras already have wifi so they can upload to the 'Net-of-a-Thousand-Lies.
Some printers have many more functions: fax, repro and scanning as well as printing a document. And look at firewalls. Look at all the additional functions being
poured into them because of the "excess computing facility" - DNS, Squid-like caching, authentication ...
I recently bought a LinkSys for VoIP, and got the simplest one I could find. I saw models that were also wifi routers, printer servers and more, all bundled onto the "gateway" with the "firewall" function. And the firewall was a lot less capable than in my old SMC Barricade-9 home router.
I'm dreading what the home market will have come IPv6.
I recall the Chinese curse: yes we live in "interesting security issue" times!
But in the long run of things the HP Printer Hack isn't that serious. After all, how many printers are exposed to the Internet? We have to ask "how likely is that?".
Too many places (and people) put undue emphasis on Risk Analysis and ask "show me the numbers" questions. As if everyone who has been hacked (a) even knows about it and (b) is willing to admit to the details.
No, I agree with Donn Parker; there are many things we can do that are in the realm of "common sense" once you stop and think about it. Many protective controls are "umbrellas". It's about how you configure your already paid-for-and-installed firewall (you did install it, didn't you? It's not still sitting in the box in the wiring closet?), or about spending the money you would have spent anyway on the model that has better control/protection. You do this with your car - air-bags, ABS and so on - so why not with IT equipment? The "Baseline" is more often about proper decisions and proper configuration than "throwing money at it" the way governments and government agencies do.
What framework would you use to provide for quantitative or qualitative risk analysis at both the micro and macro level? I'm asking about a true risk assessment framework not merely a checklist.
Yes, this is a bit of a META-question. But then it's Sunday, a day for contemplation...
When does something like these stop being a check-list and become a framework?
COBIT is very clearly a framework, but not for risk analysis and even the section on risk analysis fits in to a business model rather than a technology model.
ISO-27K is arguably more technology (or at least InfoSec) focused than COBIT, but again risk analysis is only part of what it's about. ISO-27K calls itself a standard but in reality it's a framework.
The message that these two frameworks send about risk analysis is
Context is Everything
(You expected me to say that, didn't you?)
I'm not sure any RA method works at layer 8 or above. We all know that managers can read our reports and recommendations and ignore them. Or perhaps not read them, since being aware of the risk makes them liable.
Ah. Good point.
On LinkedIn there was a thread asking why banks seem to ignore risk analysis - presumably because their doing so has brought us to the international financial crisis we're in (though I don't think it's that simple).
The trouble is that RA is a bit of a 'hypothetical' exercise.
McAfee has released a new study on malware in cars:
Now you may think that this is scaremongering on the part of McAfee because their traditional market is drying up. Not so; this is actually a threat we have been aware of for nearly half a century: