The InfoSec Blog

Brexit: What’s Next for Privacy, Policing, Surveillance?

Posted by Anton Aylward

Now we're getting over the "how could they do THAT!" shock stage and starting to think about what the operational, rather than just the financial, implications are.

Confusion over Physical Assets, Information Assets – Part Two

Posted by Anton Aylward

So I need to compile a list of ALL assets, information or otherwise,

That leads to tables and chairs and powerbars.

OK so you can't work without those, but that's not what I meant.

Physical assets are only relevant insofar as they are part of information processing. You should not start from those; you should start from the information and look at how the business processes make use of it.  Don't confuse your DR/BC plan with your core ISMS statements.  ISO Standard 22301 addresses that.

This is, ultimately, about the business processes.

The #1 Reason Leadership Development Fails

Posted by Anton Aylward
I wouldn't have thought, based on the title, that I'd be blogging about this, but then again one can get fed up with purely InfoSec blogs, ranting and raving about technology, techniques and ISO27000 and risk and all that.

But this does relate somewhat to security as awareness training, sort of ...

My problem with training per se is that it presumes the need for indoctrination on systems, processes and techniques. Moreover, training assumes that said systems, processes and techniques are the right way to do things. When a trainer refers to something as “best practices” you can with great certitude rest assured that’s not the case. Training focuses on best practices, while development focuses on next practices. Training is often a rote, one directional, one dimensional, one size fits all, authoritarian process that imposes static, outdated information on people. The majority of training takes place within a monologue (lecture/presentation) rather than a dialog. Perhaps worst of all, training usually occurs within a vacuum driven by past experience, not by future needs.

Marketing Is Dead – Harvard Business Review

Posted by Anton Aylward

Of course you have to have a catchy title, but what this really says is

... in today's increasingly social media-infused environment,
traditional marketing and sales not only doesn't work so well, it
doesn't make sense. Think about it: an organization hires people —
employees, agencies, consultants, partners — who don't come from the
buyer's world and whose interests aren't necessarily aligned with his,
and expects them to persuade the buyer to spend his hard-earned money on
something. Huh? When you try to extend traditional marketing logic into
the world of social media, it simply doesn't work.

Yes but there are assumptions there.
Marketing WHAT to WHOM?

As opposed to just selling.

See also:

Which makes the point that book publishers have come adrift as far as
marketing in the Internet world goes.


How to build an asset inventory for 27001

Posted by Anton Aylward

How do you know WHAT assets are to be included in the ISO-27K Asset Inventory?


This question, and variants like "What are assets [for ISO27K]?", come up often and have seen much discussion on the various InfoSec forums I subscribe to.

Perhaps some ITIL influence is needed.  Or perhaps not, since that might be too reductionist.

The important thing to note here is that the POV of the accountants/book-keepers is not the same as the ISO27K one. To them, an asset is something that was purchased and either depreciates in value, according to the rules of the tax authority you operate under, or appreciates in value (perhaps) according to the market, such as land and buildings.

Here in Canada, computer hardware and software depreciates PDQ under this scheme, so that the essential software on which your company depends is deemed worthless by the accountants. Their view is that depreciable assets should be replaced when they reach the end of their accounting-life. Your departmental budget may say different.

Many of the ISO27K Assets are things the accountants don't see: data, processes, relationships, know-how, documentation.
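To make the distinction concrete, here is a minimal sketch (field names are my own invention, not prescribed by ISO 27001) of an asset register that includes the intangibles the accountants' books never show:

```python
from dataclasses import dataclass

# Hypothetical asset register entry; the fields are illustrative only.
@dataclass
class Asset:
    name: str
    kind: str            # "data", "process", "know-how", "documentation", "hardware", ...
    owner: str           # the accountable role, not necessarily whoever purchased it
    classification: str  # e.g. "public", "internal", "confidential"
    book_value: float = 0.0  # what the accountants see; often zero for what matters most

register = [
    Asset("Customer database", "data", "Sales Director", "confidential"),
    Asset("Order fulfilment process", "process", "Operations Manager", "internal"),
    Asset("File server", "hardware", "IT Manager", "internal", book_value=0.0),  # fully depreciated
]

# Most of the register is invisible to a depreciation-based view of "assets":
invisible_to_accounting = [
    a.name for a in register
    if a.kind in ("data", "process", "know-how", "documentation")
]
print(invisible_to_accounting)  # ['Customer database', 'Order fulfilment process']
```

The point of the `kind` field is that most entries worth protecting were never "purchased" at all.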

Why Info Sec Positions Go Unfilled

Posted by Anton Aylward

There are many holes in this, but I think it misses some important points.

First is setting IT HR to look for InfoSec.
That is because many people think InfoSec is an IT function as opposed to an organizational function. This goes in cycles: 20 years ago there was the debate: "Should InfoSec report to IT?" The overall decision was no; InfoSec might need to 'pull the plug' on IT to protect the organization.

Second there is the vast amount of technology claiming to do InfoSec.
It is all network (and hence IT) as opposed to business fulfilment. This has now spread to "Governance". You can buy governance software. What does this do for the ethical outlook of the executive, the board and management? How is Governance tied to risk management and accountability and visibility by this software?

Technology won't solve your problems when technology *is* your problem.

InfoSec is about protecting the organization's information assets: those assets can be people, processes or information.  Yes technology may support that just as technology puts a roof over your head (physical security) and somewhere to store the information.  Once this was typewriters, and hand-cranked calculators and filing cabinets, and copying was with carbon paper.  The technology may have changed but most of the fundamental principles have not.  In particular the ones to do with attitudes and people are the same now as they were 50 or 100 years ago.



Using ALE … inappropriately

Posted by Anton Aylward

Like many forms of presenting facts, not least of all about risk, reducing complex and multifaceted information to a single figure does a disservice to those affected. The classical risk equation is another example of this: summing many hundreds of fluctuating variables to one figure.

Perhaps the saddest expression of this kind of approach to numerology is the stock market. We accept that the bulk of the economy is based on small companies but the stock exchanges have their "Top 100" or "Top 50" which are all large companies. Perhaps they do have an effect on the economy the same way that herd of elephants might, but the biomass of this planet is mostly made up, like our economy, of small things.

Treating big things like small things leads to another flaw in the ALE model (which is in turn part of the fallacy of quantitative risk assessment).

The financial loss of internet fraud is non-trivial but not exactly bleeding us to death. Life goes on anyway and we work around it. But it adds up. Extrapolated over a couple of hundred years it would have the same financial value as a World Killer Asteroid Impact that wiped out all of human civilization. (And most of human life.)

A ridiculously dramatic example, yes, but this kind of reduction to a one-dimensional scale such as "dollar value" leads to such absurdities. Judges in court cases often put dollar values on human life. What value would you put on your child's?

We know, based on past statistics, the probability that a US president will be assassinated. (Four in 200+ years; more if you allow for failed attempts). With that probability we can calculate the ALE and hence what the presidential guard cost should be capped at.
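The arithmetic behind that calculation is the standard formula ALE = SLE × ARO; the single-loss figure below is deliberately invented, to show how mechanically the reduction to one number proceeds:

```python
# ALE = SLE * ARO: the standard Annualized Loss Expectancy formula.
# The single-loss figure is notional (made up for illustration).
sle = 1_000_000_000   # notional single-loss cost of an assassination
aro = 4 / 230         # four assassinations in roughly 230 years

ale = sle * aro
print(f"ALE: ${ale:,.0f} per year")  # about $17 million per year
```

By this logic the presidential guard budget should be capped near that figure, which is exactly the absurdity the model produces.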

Right? NO!

Compliance? What Compliance?

Posted by Anton Aylward


Sometimes I wonder why we bother ...

The Securities and Exchange Commission doesn't just enforce the rules
that govern Wall Street. When asked, it often grants individual
companies exemptions from the rules


What drives the RA? Need or Fashion?

Posted by Anton Aylward

A colleague in InfoSec made the following observation:

My point - RA is a nice to have, but it is superfluous. It looks nice
but does NOTHING without the bases being covered. What we need
is a baseline that everyone accepts as necessary (call it the house
odds if you like...)

Most of us in the profession have met the case where a Risk Analysis would be nice to have but is superfluous, because the baseline controls that were needed were obvious and 'generally accepted'. Which makes me wonder why any of us support the fallacy of RA.

It gets back to the thing about the Hollywood effect that is Pen Testing. Quite apart from the many downsides it has from a business POV, it is non-logical in the same way that RA is non-logical.

The Classical Risk Equation

Posted by Anton Aylward

What we had drilled into us when I worked in Internal Audit and when I was preparing for the CISA exam was the following

RISK is the likelihood that a
THREAT will exploit a
VULNERABILITY to cause harm to an
ASSET

R = f(T, V, A)

Why do you think they are called "TVAs"?

More sensibly, the risk is the sum over all the various threat-vulnerability-asset combinations.
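A minimal sketch of that summation, assuming a simple multiplicative form for f; the weights are invented purely for illustration:

```python
# R = f(T, V, A) summed over the TVA combinations, taking f to be
# a simple product (one common simplification, not the only choice).
tvas = [
    # (threat likelihood, vulnerability exposure, asset value)
    (0.50, 0.25, 100_000),  # e.g. phishing vs. untrained staff vs. customer data
    (0.25, 0.50, 40_000),   # e.g. flood vs. ground-floor server room vs. hardware
]

risk = sum(t * v * a for t, v, a in tvas)
print(risk)  # 17500.0
```

Each term is one TVA; the aggregate is what the single-figure presentations collapse into, and what gets lost in the collapsing.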

This isn't just me sounding off. Richard Bejtlich says much the same thing and defends it from various sources. I can't do better than he has.

The FBI risk equation

Posted by Anton Aylward

It seems that to make better cybersecurity-related decisions a senior FBI official recommends considering a simple algebraic equation:

risk = threat x vulnerability x consequence

rather than solely focusing on threat vectors and actors.

To be honest, I sometimes wonder why people obsess about threat vectors in the first place.  There seems to be a belief that the more threats you face, the higher your risk, regardless of your controls and regardless of the classification of the threats.

Look at it this way: what do you have control over?

Why do you think that people like auditors refer to the protective and detective mechanisms as "controls"?

Yes, if you're a 600,000 lb gorilla like Microsoft you can take down one - insignificant - botnet, but the rest of us don't have control over the  threat vectors and threat actors.

What do we have control over?

Vulnerabilities, to some extent. We can patch; we can choose to run alternative software; we can mask off access by the threats to the vulnerabilities. We can do things to reduce the "vulnerability surface" such as partitioning our networks, restricting access, and not exposing more than is absolutely necessary to the Internet (why oh why is your SQL Server visible to the net? why isn't it behind the web server, which in turn is behind a firewall?).
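Plugging illustrative (entirely invented) numbers into the quoted equation makes the point: doubling the threats you face matters less than shrinking the vulnerability term, which is the factor we actually control:

```python
# risk = threat * vulnerability * consequence (the quoted FBI equation).
# All figures are invented for illustration.
def risk(threat, vulnerability, consequence):
    return threat * vulnerability * consequence

baseline     = risk(threat=10, vulnerability=0.5,   consequence=1_000)  # 5000.0
more_threats = risk(threat=20, vulnerability=0.5,   consequence=1_000)  # 10000.0
patched      = risk(threat=20, vulnerability=0.125, consequence=1_000)  # 2500.0

# Twice the threats but a quarter the vulnerability: risk goes DOWN.
print(baseline, more_threats, patched)
```

So even on the FBI's own arithmetic, counting threat vectors tells you little; the controllable terms dominate.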

Assets, to a large extent. Document them. Identify who should be using them and implement IAM.

And very importantly: we have control over RESPONSE.

Did the FBI equation mention response? I suppose you could say that 'awareness' is a part of a response package. Personally I think that response is a very, very important part of this equation, and it's the one you have MOST control over.

And response is - or should be - totally independent of the threats
since it focuses on preserving and recovering the assets.

I think they have it very, very confused and this isn't the most productive, most effective way of going about it.  But then the FBI's view of policing is to go after the criminals, and if you consider the criminals to be the threat then that makes sense.

But let's face it, most corporations are not in the business of policing.  Neither are home users.

Which is why I focus on the issue of "what you have control over".


Throwing in the towel

Posted by Anton Aylward

I was saddened to hear of an InfoSec colleague who met with overwhelming frustration at work:

After two years of dealing with such nonsense, I was forced to resign
within two months of discovering a serious security issue which possibly
jeopardized overseas operations. I have since found out that they are
selling the company and didn't want any who knew the problems around.

Thank you.
Speaking as an auditor who occasionally does "due diligence" with respect to take-overs, you've just shown another use for LinkedIn - contacting ex-employees to find out about such problems.

Certainly a lot of employees leaving or being fired in the couple of years before the pending acquisition is a red flag, eh?

About creating Corporate IT Security Policies

Posted by Anton Aylward

As I've said before, you should not ask yourself what policies to write but what you need to control.  If you begin with a list of policies, you need to adapt reality to the list. The risk is that you create a false sense of control of security.

The threat-risk approach is 'technical', and as we've discussed many times, the list of threats cannot be fully enumerated, so this is a ridiculous approach.

Basing policy on risk is also a fruitless approach as it means you are not going to face some important points about policy.

Policy is for people. It's not technical; it's about social behaviour and expectations.
Policy can be an enabler, but if you think only about risk you will only see the negatives; your policies will all be of the form "Don't do that".
Policies should tell people what they should do, what is expected of them, and give them guidance.

Policies also have to address the legal and regulatory landscape. As such they may also address issues of ethics, which again is not going to be addressed by a threat-risk approach.

All in all, if you follow Mark's advice you may write policies that seem OK, but when it comes to following them it will be like the song from the 70s by the Five Man Electrical Band:

Sign, sign, everywhere a sign
Blocking out the scenery, breaking my mind
Do this, don't do that, can't you read the sign?

and people will feel put upon and that the company is playing Big Brother. You will have heavy-handed rules that are resented and not clearly understood by all employees.

Policies are there to control the behaviour of people in the corporate setting. Think in terms of people and behaviour, not in terms of threats and risks.
Policies are to guide and control behaviour of people, not of machines and software.

Think of policies as having these kinds of objectives and you will be on a firm footing:

  • Shift attitudes and change perspectives
  • Demonstrate management support
  • Assure consistency of controls
  • Establish a basis for disciplinary action
  • Avoid liability for negligence
  • Establish a baseline against which to measure performance and improvement
  • Coordinate activities

and of course something important to all of us toiling in InfoSec

  • Establish a basis for budget and staffing to implement and enforce the policies

Policies need to be created from the point of view of management, not as a set of techie/geek rules, which the threat/risk approach would lead to.

Not least of all because, as I'm sure Donn Parker will point out, managers don't want to hear all that bad stuff about threats; they want policies that encourage staff to contribute to the profitability of the organization.


The Glass Half Full

Posted by Anton Aylward

  • Optimist: The glass is half full
  • Pessimist: The glass is half empty
  • Cost Accountant: The vessel is too large for its purpose
  • Engineer: There is a 100% safety margin.

Policy: "All information stored electronically has value and shall be protected commensurate with its value."
Corollary: "If data has no value, it should not be using storage space."


One In Two Security Pros Unhappy In Their Jobs

Posted by Anton Aylward

Well? Are you?

You'd think most professionals in a hot industry like IT security would
feel content and challenged technically and creatively in their jobs --
but not so much. According to the results of a new survey that will go
public next week at Defcon in Las Vegas, half of security pros aren't
satisfied with their current jobs, and 57 percent say their jobs are
neither challenging nor fully tapping their skills.

Like most reports on surveys, this is journalism at its worst.

The Need for Social Engineering in InfoSec

Posted by antonaylward


When I took my undergraduate Engineering degree the attitude of my professors was that if we had chosen engineering as our career then a few things were going on.

First, technology is changing, so teach fundamentals and principles and show how to apply them but don't get hung up on specific technologies. (Who would have guessed then that the RF theory work on transmission lines would have an impact on writing software for PCB layout and even chip design!)

Second, that if we stayed in engineering, then within three to five years we would have "managerial" responsibilities so we better know about "managerial" things such as budgeting, logistics/supply-chain,
writing proposals and reports.

I mention this to make the point that being a CISSP is not about being a techie-geek. Knowing all there is about crypto, pen testing, or any vendor or product is inherently self-limiting. You have put a cap on the authority and influence you have.

To be effective in InfoSec you need to be able to do that "social engineering" - as a recent article says,

"... the application of social science to the solution of social
problems," he said. "In other words, it's getting people to do
what you want by using certain sociological principles."

What you want is for your managers to implement certain strategies that
you believe are for the good of the company and society (see our code of
ethics and associated guidelines). This means you need communication skills.

I realise many people reading this are in fact managers, but they too have to
report to higher authorities. Some here have MBAs. Management is more than the technical skill of an MBA course - that's another form of geekiness. (I know of one very good technical guy who saw Dilbert's Principle being applied in his firm and went and got an MBA. The trouble is that he never had any 'people skills' and the MBA course didn't supply them!)

So we get back to a parallel thread - "Trust".

Occasionally I run a workshop, "Why people don't follow Policies and what you can do about it". It's for technical managers, those who have to enforce many policies, not least of all InfoSec ones, and manage those who are carrying out the associated Procedures. It's always a difficult workshop since it's about seeing the patterns in behaviour, something technical managers are quite capable of, but have never been taught before.

It's my belief that InfoSec is meaningless unless it deals with the social and psychological issues. Right now we treat the term "social engineering" the way we do "risk", as something that has *only* a negative meaning. That has to stop. Management don't see "risk" as being bad, and as far as threats go, we know that People are the source of them all! First and foremost, InfoSec practitioners need to be able to deal with People. Technology is for geeks. If you want to bring about change you have to deal with people.

"Social Engineering" - in the broadest and positive sense - is every bit as key as any other of the domains of the CBK. Its omission just shows how technology-centred the profession is, despite the threats and despite what needs to be done by practitioners to fulfil their roles.


Swine Flu Issues – insufficient discrimination

Posted by antonaylward

The trouble with some people is that they make deceptively reasonable comments that don't stand up under critical analysis.

 With an ailing economy and a whole lot of cancelled contracts resulting from
that poor economy. Pandemic planning is a major threat to our most important
asset people and it appears as though that vulnerability may have been
activated. Its time to dust off the BCP plan and update it with a Pandemic
Mitigation strategy.

If it takes a pandemic to motivate you to create or review a BCP then
something is seriously wrong, and it has nothing to do with the pandemic.

As one manager said to me a long time ago, "show me the numbers".
I read:

The number of confirmed cases rose Monday to 50 in the U.S., the result
of further testing at a New York City school. The WHO has confirmed 26
cases in Mexico, six in Canada and one in Spain. All of the Canadian
cases were mild, and the people have recovered.

The Mexican government suspects the virus was behind at least 149 deaths
in Mexico, the epicentre of the outbreak, with hundreds more cases

I'm sure just about any doctor - or the 'Net - can supply us with figures on the cases and deaths from 'regular' flu world-wide, as well as the named versions.

Benchmarked: Ubuntu vs Vista vs Windows 7

Posted by Anton Aylward


Interestingly, even if not that relevant.

And, of course, there's the most important proviso of all: it is very, very likely that a few tweaks to any of these operating systems could have made a big difference to these results, but we're not too
interested in that - these results reflect what you get when you install a plain vanilla OS, like most users do.

That's Nonsense! Everyone Tweaks.
All the OS are set up to make tweaking easy!
Temptingly so.


People under extreme stress may behave unpredictably and have limited capacity for rational thought

Posted by Anton Aylward

Les Bell, another ex-pat Brit who lives in Australia was discussing the importance of training and reinforcement in such matters as DR/BCP.  Les is also a pilot and so many of his analogies and examples have to do with piloting and aircraft.

Part of our discussion has a much wider scope.
Les had said:

"People under extreme stress may behave unpredictably and have limited capacity for rational thought"

This is the basis of much of pilot training, particularly in simulators, where procedures that are too dangerous to be attempted in a real aircraft can be repeated until drills are automatic.

Don't quote me on this, but I seem to recall reading in an aviation safety-related article that in an emergency, something like 50% of people lose it to the extent that they are completely unable to cope, 25% are capable of functioning with some degree of impairment, and 25% of people are able to complete required tasks correctly. Training by means of drills and rehearsals is able to correct that situation to a considerable extent.

Therefore in BCP/DRP planning, it's important to - as far as possible - simulate an emergency, rather than just story-boarding it, or doing a whiteboard walkthrough. Hence the requirement for fire drills, evacuation drills and the like; repetition conditions the mind to perform the task correctly under stressful conditions.

Most of us don't get the chance to do a full interruption test for our DRP, but the closer we can get, the better.

Training - drill and reinforcement so that you can carry out the actions automatically even when extreme stress has completely blanked out cognitive functions - is an important part of military "boot camp" training and one reason I find it so comical that CISSP course training gets called "boot camp".

Les is quite right.  For a variety of reasons most people "lose it" under extreme stress.  This is why military heroes, people who can hang in there and think clearly and make critical decisions, are held in such esteem.  Similarly test pilots (and those test pilots who became the early astronauts).  Having lightning-fast reactions (racing drivers) and being in top physical condition helps, but there is something more.

Some authorities look to the old American 'gunslingers' and speculate about how the adrenaline rush in such situations is handled by the body and the brain.  Typically all that adrenaline pumps up the muscles for "fight or flight", and in such panic or near-panic situations rationality is not the key issue.  But if we shift from the evolutionary context to the 'gunslinger', standing still means that there is a lot of 'shakes'.  Being able to stay calm and not have the shakes leads to being a successful 'gunslinger'.  Evolution in action?

There are other forms of stress as well.  I've seen sysadmins who have been up for more than 30 hours trying in vain to solve a problem that to me, well rested, is simple and obvious.

The lesson here is two-fold.   The first is the point that Les makes.  Train and reinforce.
The second is that when the disaster does strike, be aware that the stress will compound with fatigue, and that stressed and fatigued people do not make good decisions.  Rest, shifts, alternates, and standard plans and scenarios that can work to relieve the stress are important.

Going Rogue

Posted by Anton Aylward

In this article at TechRepublic, Tom Olzak tries to address the issue of insider threat by talking about why your employees might 'go rogue'.  I think he completely misses the point by discussing the motivation for spies and convicted traitors. This is a different class of people from those who commit financial fraud and take revenge on employers who they think have wronged them.

Let's be fair: how many of these characteristics would have applied to people like Nick Leeson, Jérôme Kerviel, the rogue traders such as Yasuo Hamanaka at Sumitomo Corporation of Japan, John Rusnak at Allfirst (a subsidiary of Allied Irish Banks) in 2002, Toshihide Iguchi at Daiwa Bank, Matt Piper of Morgan Stanley, Anthony Elgindy, Thom Calandra and Brian Hunter - never mind the rogue executives at WorldCom, Enron and Parmalat and the many other corporate and accounting scandals that were motivated by greed.

The list on the blackboard in the cartoon doesn't, I think, apply to the 'rogue traders'. It applies only somewhat to the rogue executives but it does apply more comprehensively to the spies and traitors like Ames & Early.

However, Donn Parker's point that (many) white-collar criminals are led into crime by "intense personal problems" makes more sense and also applies to people such as Brian Molony at the CIBC. So I don't think this is a very good article. Donn's observation is more general and more useful than Tom's.

More to the point, Tom's article fails to address issues such as senior management ignoring the business controls that are in place because the people concerned were making a profit (aka greed in high places); it doesn't address the value of internal resources where staff can get advice about pressing personal problems; and it doesn't deal with possible channels for ethics complaints and whistle-blowing. So it fails to live up to its title: there is nothing here about prevention - only detection, and a very limited form of detection at that.
