The InfoSec Blog

Nobody wants to pay for security, including security companies

Posted by Anton Aylward

https://www.linkedin.com/pulse/nobody-wants-pay-security-including-companies-beno%C3%AEt-h-dicaire

In theory, consumers and businesses could punish Symantec for these
oversights by contracting with other security vendors. In practice, there’s
no guarantee that products from other vendors are well-secured,
either
— and there is no clear way to determine how secure a given security
product actually is.

Too many firms take an "appliance" or "product" (aka "technology") approach to security. There's a saying, attributed to many security specialists over the years, that remains quite true:

If you think technology can solve your security problems,
then you don't understand the problems and you don't
understand the technology.

It's still true today.

Cyber risk in the business

Posted by Anton Aylward

https://normanmarks.wordpress.com/2015/06/05/cyber-risk-and-the-boardroom/

The relevant take-away:

Cyber risk should not be managed separately from enterprise or business risk. Cyber may be only one of several sources of risk to a new initiative, and the total risk to that initiative needs to be understood.

Cyber-related risk should be assessed and evaluated based on its effect on the business, not based on some calculated value for the information asset.

Cyber, Ciber or Syber?

Posted by Anton Aylward

Occasionally, people do ask:

What exactly do you mean by “cyber security”?
Or “cyber” for that matter. Please explain.

"Steersman Security"?

It seems to be one of those Humpty-Dumpty words that the media, the government and others use with -- what's the current politically correct phrase for what, 50 years ago, one would have called 'gay abandon'? -- because it's currently "in".

I see it used to mean "computer" and "network" in the specific and "computers" and "networks" in the general, as well as specific functions such as e-banking, & other e-commerce, "Big Data", SCADA, POTS and its replacements.

I see it used in place of "Information" in contexts like "Information Security" becoming, as above, "Cyber Security". But you don't know that it means that.

Are we here to protect the data? Or just the network? Or just the computer?

Until a few years ago "Cyber" still did mean "steersman", even if that was automated rather than a human presence. No-one would call the POTUS a "Cyber-man" in the sense of being a steersman for the republic.

Perhaps we should start a movement to ban the use of "Cyber-" from use by the media.

Perhaps we might try to get some establishments to stop abusing the term.
I doubt very much we could do that with media such as SCMagazine, but perhaps we could get the Estate of the late Norbert Wiener to threaten some high-profile entities like the State Department for the misuse of the term?

 

Review: “Penetration Testing with Perl” by Douglas Berdeaux

Posted by Anton Aylward

Penetration Testing with Perl

Douglas Berdeaux has written an excellent book, excellent from quite a number of points of view, some of which I will address. Packt Publishing have done a great service making this and other titles available at their web site. It is one of many technical books there that have extensive source code and are good 'instructors'.

Penetration Testing with Perl is available as both a PDF file and as an e-book in Mobi and epub formats.

It is one of over 2000 instructional books and videos available at the Packt web site.

I read a lot on my tablet but most of the ebooks I read are "linear text" (think: 'novels', 'news'). A book like this is heavily annotated by differentiating fonts and type and layout. How well your ebook reader renders that might vary. None of the ones I used were as satisfactory as the PDF. For all its failings, if you want a page that looks "just so" whatever it is read on, then PDF still wins out. For many, this won't matter since the source code can be downloaded in a separate ZIP file.

Of course you may be like me and prefer to learn by entering the code by hand so as to develop the learned physical habit which you can then carry forward. You may also prefer to have a hard copy version of the book rather than use a 'split screen' mode.

This is not a book about learning to code in Perl, or learning about the basics of TCP/IP. Berdeaux himself says in the introduction:

This book is written for people who are already familiar with
basic Perl programming and who have the desire to advance this
knowledge by applying it to information security and penetration
testing. With each chapter, I encourage you to branch off into
tangents, expanding upon the lessons and modifying the code to
pursue your own creative ideas.

I found this to be an excellent 'source book' for ideas and worked through many variations of the example code. This book is a beginning, not an end point.

Another Java bug: Disable the Java setting in your browser

Posted by Anton Aylward

http://www.kb.cert.org/vuls/id/625617

Java 7 Update 10 and earlier contain an unspecified vulnerability
that can allow a remote, unauthenticated attacker to execute arbitrary
code on a vulnerable system.
By convincing a user to visit a specially crafted HTML document,
a remote attacker may be able to execute arbitrary code on a vulnerable
system.

Well, yes .... but.


Are we fighting a losing battle?
The New York Times is saying out loud what many of us (see Vmyths.com and Rob Rosenberger) have known in our hearts for a long time: AV products don't work.

Learning to Counter Threats – Skills or Ethics?

Posted by Anton Aylward

Fellow CISSP Cragin Shelton made this very pertinent observation and gave me permission to quote him.

The long thread about the appropriateness of learning how to lie (con, `social engineer,' etc.) by practising lying (conning, `social engineering', etc.) is logically identical to innumerable arguments about whether "good guys" (e.g. cops and security folk) should teach, learn, and practice

  •  writing viruses,
  •  picking locks,
  •  penetrating firewall-protected networks,
  •  cracking safes,
  •  initiating and exploiting buffer overflows, or
  •  engaging in any other practice that is useful to and used by the bad guys.

We can't build defenses unless we fully understand the offenses. University professors teaching how to write viruses have had to explain this problem over and over.

Declaring that learning such techniques is a priori a breach of ethics is short-sighted. This discussion should not be about whether white hats should learn by doing. It should be about how to design and carry out responsible learning experiences and exercises. It should be about developing and promoting the culture of responsible, ethical practice. We need to know why, when, how, and who should learn these skills.

We must not pretend that preventing our white hatted, good guy, ethical, patriotic, well-intentioned protégés from learning these skills will somehow ensure that the unethical, immoral, low breed, teen-vandal, criminal, terrorist crowds will eschew such knowledge.

I have grave reservations about teaching such subjects.

Steve Wozniak: Cloud Computing Will Cause ‘Horrible Problems In The Next Five Years’

Posted by antonaylward

http://www.businessinsider.com/steve-wozniak-cloud-computing-will-cause-horrible-problems-in-the-next-five-years-2012-8

Perhaps The Woz isn't the influence he once was, and certainly not on Wall Street and the consumer market place.

Woz and I at dinner

The unbounded RAH-RAH-RAH for the "Cloud" is a lot like the DotComBoom in many ways. No doubt we will see a Crash rationalization.

 


The 19 most maddening security questions | Security – InfoWorld

Posted by Anton Aylward

http://www.infoworld.com/d/security/the-19-most-maddening-security-questions-187983

An interesting list, since it covers issues of public structural security.

I recall reading that the greatest contribution to the health of individuals came about from good public sanitation and clean water - that is, civic changes (presumably enabled by legislation) that affected the public in a structural manner.

What would be on your list?

A poster for drinking water security from the EPA (Photo credit: Wikipedia)


Using ALE … inappropriately

Posted by Anton Aylward

Like many forms of presenting facts, not least of all about risk, reducing complex and multifaceted information to a single figure does a dis-service to those affected. The classical risk equation is another example of this: summing many hundreds of fluctuating variables into one figure.

Perhaps the saddest expression of this kind of approach to numerology is the stock market. We accept that the bulk of the economy is based on small companies, but the stock exchanges have their "Top 100" or "Top 50", which are all large companies. Perhaps they do have an effect on the economy the same way that a herd of elephants might, but the biomass of this planet is mostly made up, like our economy, of small things.

Treating big things like small things leads to another flaw in the ALE model (which is in turn part of the fallacy of quantitative risk assessment).

The financial loss of internet fraud is non-trivial but not exactly bleeding us to death. Life goes on anyway and we work around it. But it adds up. Extrapolated over a couple of hundred years it would have the same financial value as a World Killer Asteroid Impact that wiped out all of human civilization. (And most of human life.)

A ridiculously dramatic example, yes, but this kind of reduction to a one-dimensional scale such as "dollar value" leads to such absurdities. Judges in court cases often put dollar values on human life. What value would you put on your child's?

We know, based on past statistics, the probability that a US president will be assassinated (four in 200+ years; more if you allow for failed attempts). With that probability we can calculate the ALE and hence what the presidential guard cost should be capped at.

Right? NO!
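The arithmetic behind that absurdity is easy to sketch. A minimal illustration in Python, with an entirely hypothetical dollar figure for the loss (ALE is just the annualized rate of occurrence times the single loss expectancy):

```python
def ale(annual_rate_of_occurrence, single_loss_expectancy):
    """Annualized Loss Expectancy: ALE = ARO * SLE."""
    return annual_rate_of_occurrence * single_loss_expectancy

# Four assassinations in roughly 230 years of US history:
aro = 4 / 230

# A purely hypothetical dollar figure for the "loss" -- which is
# exactly the problem: reducing such an event to one dollar value.
sle = 100_000_000

# The model would "cap" the guard budget at this expected annual loss.
print(round(ale(aro, sle)))
```

The number that comes out is tidy and defensible-looking, which is precisely what makes this use of the model so misleading.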

In praise of OSSTMM

Posted by Anton Aylward

In case you're not aware, ISECOM (Institute for Security and Open Methodologies) has OSSTMM3 - The Open Source Security Testing Methodology Manual - http://www.isecom.org/osstmm/

There's an interesting segue to this at
https://www.infosecisland.com/blogview/14651-How-to-Pen-Test-Crazy.html

Skip over his ranting about the definition of "hackers".

This is the meat:

We wrote the OSSTMM 3 to address these things. We knew that penetration
testing the way it continued to be marginalized would eventually hurt
security. Yes, the OSSTMM isn't practical for some because it doesn't
match the commercial industry security of today. But that's because the
security model today is crazy! And you don't test crazy with tests
designed to prove crazy. So any penetration testing standard, baseline,
framework, or methodology that focuses on finding and exploiting
vulnerabilities is only perpetuating the one-trick pony problem.
Furthermore it's also perpetuating security through patchity, a process
that's so labor intensive to assure homeostasis that nobody could
maintain it indefinitely which is the exact definition of a loser in the
cat and mouse game. So you can be sure it also doesn't scale at all with
complexity or size.

I've been outspoken against Pen Testing for many years, to my clients, at conferences and in my Blog. I'm sure I've upset many people, but I do believe that the model plays up to the Hollywood idea of an Uberhacker, produces a whack-a-mole attitude and is an example of avoidance behaviour: avoiding proper testing and risk management such as incident response and good facilities management.

I've seen too many "pen testers" and demos of pen testing that are just plain ... STUPID. Unprofessional, unreasonable and pandering to the ignorance of managers.

In the long run the "drama-response" of the classical pen-test approach is unproductive. It teaches management the wrong thing - to respond to drama rather than to set up a good system of governance based on policy, professional staffing, adequate funding and operations based on accepted good principles such as change management.

And worse, it

  • shows how little faith your management have in the professional capabilities of their own staff, who are the people who should know the system best, and of the auditors who are trained not only in assessing the system but assessing the business impact of the risks associated with a vulnerability
  • has no guarantees about what collateral damage the outsider had to do to gain root
  • says nothing about things that are of more importance than any vulnerability, such as your Incident Response procedures
  • indicates that your management doesn't understand or make use of a proper development-test-deployment life-cycle

Yes, classical hacker-driven pen testing is more dramatic, in the same way that Hollywood movies are more dramatic. And about as realistic!

"Crazy" is a good description of that approach.

IAM – Basics – Policy

Posted by Anton Aylward

If there's one thing that upsets me when I see articles and postings to forums about policy, it's mention of a "Password Policy". I have to step away from the keyboard, go outside and take some deep breaths to calm down.

I get upset because policy is important and developing -- and more importantly communicating -- policy has been an important part of my career and the professional service I offer. Policies need to be easy to understand and follow and need to be based on business needs.

If you begin with a list of policies, you end up adapting the reality of your business - the operations - to the list. You are creating a false sense of security. You need to address what you need to control, and that is Identity and Access.

Let's face it: passwords, as Rick Smith points out in his book "Authentication", are not only awkward, they are passé - even Microsoft thinks so. More to the point, using passwords can be bad for your financial health.

They should be used with care and not as a default.

And they should most certainly NOT be entombed in a corporate policy statement.

You don’t need a Firewall Security Policy

Posted by Anton Aylward

A member of a discussion list I subscribe to asked for a Firewall Policy template.

As usual, I was alarmed enough by this to want to comment and drag it back to the discussion on "assets".

I don't think you should have a "Firewall Security policy".
This is why.

A great book on firewalls once described the firewall as

The network's response to poor host security

You can occasionally see articles on host-centric security drifting by ...

A firewall is a "network PERIMETER protection device".

Do you have a well-defined perimeter to which you can apply enforcement policies, or is your 'perimeter', like that of so many businesses these days, a vague and nebulous concept that is weakly defined? One thing that is "in" these days is "de-perimeterization". See "The Jericho Forum".

The firewall model is inherently one of a 'hard outer shell and soft vulnerable centre'. As I said, it's based on the idea of poor host security. Good host security will mean that the hosts don't have any unnecessary open ports. Scan your network. If there are no open ports, why do you need a firewall?

Oh, right: port 80. And all the hundreds of services behind it.
In effect those are your 'open ports'. Yes, there are firewalls that claim to do 'deep packet inspection'.  Check what they actually do.
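The "scan your network" step is trivial to script. A minimal sketch of a TCP connect scan in Python - only ever point this at hosts you are authorized to test, and note that real tools like nmap do far more (SYN scans, service detection, UDP):

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

if __name__ == "__main__":
    # Check a few well-known ports on the local machine.
    print(open_ports("127.0.0.1", [22, 80, 443, 8080]))
```

If that list comes back empty across your hosts, ask yourself what the firewall is actually protecting.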

There are other uses for a firewall? Well, some people use it as a NAT device. Some people use it to control outbound connections - "data leakage". What they are really saying is that they haven't built their information architecture in a robust and secure manner. Back to the 'poor host security'. Perhaps you should be doing this sort of thing in your switch or router with ACLs. Partition your network.

So why did I start by saying "assets"?
Some people think that the assets are the hardware.
Focusing on the hardware as opposed to the services, the information and the processes leads you to think in terms of things like 'firewalls' rather than in abstracts like "perimeters" and "access controls".

By addressing a "Firewall policy" you are focusing on equipment rather than fundamentals.


The Classical Risk Equation

Posted by Anton Aylward

What we had drilled into us when I worked in Internal Audit and when I was preparing for the CISA exam was the following


RISK is the
PROBABILITY that a
THREAT will exploit a
VULNERABILITY to cause harm to an
ASSET

R = f(T, V, A)

Why do you think they are called "TVAs"?

More sensibly, the risk is the sum over all the various threat-vulnerability-asset combinations.

This isn't just me sounding off. Richard Bejtlich says much the same thing and defends it from various sources. I can't do better than he has.
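That summation can be sketched directly. A toy illustration in Python - the probabilities and asset values below are invented for the example, and in practice each of them is itself uncertain and fluctuating, which is the whole point of the caveat above:

```python
def total_risk(exposures):
    """Sum expected loss over threat/vulnerability/asset combinations.

    Each exposure is a tuple (p_threat, p_exploit, asset_value):
    the probability the threat occurs, the probability it exploits
    the vulnerability, and the harm to the asset if it does.
    """
    return sum(p_t * p_v * value for p_t, p_v, value in exposures)

# A tiny hypothetical TVA register:
tva_register = [
    (0.30, 0.10, 50_000),   # e.g. phishing against an unpatched mail client
    (0.05, 0.50, 200_000),  # e.g. SQL injection against a customer database
]
print(total_risk(tva_register))  # 6500.0
```

One function, one number out - which is exactly why reducing hundreds of such terms to a single figure hides so much.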

A Security Policy needs to be abstract not specific

Posted by Anton Aylward

The Information Security triad: CIA (Image via Wikipedia)

There's much I don't like about many of the published security policies and the ones I see in use at many sites I visit and audit. But let's pick on the ones that deal with passwords.

Firstly, the concept of passwords is limiting.
Are you going to add a "pass-card policy" and an "iris scan policy" and a "fingerprint policy"?

Of course not. It's all "Authentication".

And it doesn't matter where or how or even WHAT you are accessing - policy applies. So the policy has to be general.

The workshops I've run on policy writing open with an explanation of what makes good and bad policy and use this point as an illustration. Good policy is general and isn't going to need to be revised as business needs or technology - and hence risk and how it's addressed - change.

Access to corporate Information System resources
will be restricted to authorized users in accordance
with their roles. Users will uniquely identify
themselves and will be accountable for the actions
carried out under this identification.

Simple language, very general.
You could say it even applies to the parking lot at the data centre.

It doesn't address passwords or swipe cards or fingerprints directly for a simple reason.

THEY ARE NOT POLICY ISSUES.

Let me say that again.
Specific controls and specific control technology are not policy issues.

They are standards.
Refer to them. Refer to NIST, refer to the Microsoft documents.
They are not policy.

The _general_ example I gave above is POLICY.

Can you see the difference?

Now read that paragraph again.

Does it say anything about HOW you access corporate IS resources?

No.
So it doesn't matter if you do it at the computer at your desk in the office; from your laptop when working at home over the VPN; from the airport using your smartphone over the Internet. It doesn't matter if the 'resource' is a parking lot, the email server or in 'The Cloud' somewhere.

You don't need separate policies for all of them.

I picked on 'password policy' because it's easy to illustrate how a specific like this is wrong-minded and can easily be invalidated by a shift in technology. But the principle applies to the whole of the proposed document.

Why does this matter?

A minimalist approach has much to recommend it.

Quite apart from making the document shorter and hence easier to communicate, it eliminates redundancy and with it the opportunity for sections that talk about what is essentially the same thing but end up being contradictory.

The example I gave avoids there being questions like

Does remote access use passwords or certificates?

because it's NOT a policy issue. A 'remote access policy' might or might not talk about passwords, about SSH, Kerberos or X.509, depending on the bias of a technical writer. In which case it's about standards, not policy, and about access controls, not policy.

Implementation details - controls - must not be embedded in policy.

There's a lot more potential for conflict in the document structure as it's laid out at the moment.

Why do I talk about it?
Let's leave the policy document aside for a moment and think of our jobs as Information Security specialists. Part of our role is thinking about what can go wrong: the weaknesses in the configuration and management of information systems, communication and storage. We think about threats and vulnerabilities.

Now apply that same approach to the document, the one you are calling a "policy manual". Don't take a bottom-up approach, such as arguing over the length of a password or how often it should be changed. That isn't policy. At best it's a standard, and a highly context-sensitive one at that!

Identify what is in common and make it a policy.

I gave the example above of access control.
It doesn't matter whether it's access to the workstation, the server, that CRM database, the "pipe" out to the Internet, or the Citrix array inbound over the 'Net from home or an Internet café.

It is all access to corporate IS resources. It should have one and only one policy. It should not be spread over a number of policies with ifs and buts and different technologies and phases of the moon.

Remember: you have to write policy that can be followed and can be enforced. If users (or sysadmins for that matter) have to remember lots of different circumstances and special conditions then they are less likely to conform. "Oh, I forgot"; "Oh, I was confused"; "Oh, I didn't think it applied here"; "Oh, I didn't think it applied to me".

That's a start.

Yes, I've picked on "access", but I could equally well have picked on "virus" or "email" or "mobile".


The FBI risk equation

Posted by Anton Aylward

It seems that to make better cybersecurity-related decisions a senior FBI official recommends considering a simple algebraic equation:

risk = threat x vulnerability x consequence

rather than solely focusing on threat vectors and actors.

To be honest, I sometimes wonder why people obsess about threat vectors in the first place. There seems to be a belief that the more threats you face, the higher your risk, regardless of your controls and regardless of the classification of the threats.

Look at it this way: what do you have control over?

Why do you think that people like auditors refer to the protective and detective mechanisms as "controls"?

Yes, if you're a 600,000 lb gorilla like Microsoft you can take down one - insignificant - botnet, but the rest of us don't have control over the  threat vectors and threat actors.

What do we have control over?

Vulnerabilities, to some extent. We can patch; we can choose to run alternative software; we can mask off access by the threats to the vulnerabilities. We can do things to reduce the "vulnerability surface", such as partitioning our networks, restricting access, not exposing more than is absolutely necessary to the Internet (why oh why is your SqlServer visible to the 'Net? Why isn't it behind the web server, which in turn is behind a firewall?).

Assets, to a large extent. Document them. Identify who should be using them and implement IAM.

And, very important: we have control over RESPONSE.

Did the FBI equation mention response? I suppose you could say that 'awareness' is a part of a response package. Personally, I think that response is a very, very important part of this equation, and it's the one you have MOST control over.

And response is - or should be - totally independent of the threats
since it focuses on preserving and recovering the assets.
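One way to make that concrete: a toy sketch of the FBI product with a response term bolted on. The `response_effectiveness` factor is my own illustrative assumption, not part of the FBI's equation, and all the numbers are invented:

```python
def fbi_risk(threat, vulnerability, consequence):
    """The FBI formulation: risk = threat x vulnerability x consequence."""
    return threat * vulnerability * consequence

def residual_risk(threat, vulnerability, consequence, response_effectiveness):
    """Same product, but discounting the consequence by how well you can
    respond and recover. An illustrative extension, not the FBI's model."""
    return threat * vulnerability * consequence * (1 - response_effectiveness)

print(fbi_risk(0.4, 0.5, 100_000))             # 20000.0
print(residual_risk(0.4, 0.5, 100_000, 0.75))  # 5000.0
```

The point of the sketch: the threat and vulnerability factors are largely outside your control, but the response factor is entirely yours.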

I think they have it very, very confused, and this isn't the most productive, most effective way of going about it. But then the FBI's view of policing is to go after the criminals, and if you consider the criminals to be the threat then that makes sense.

But let's face it, most corporations are not in the business of policing. Neither are home users.

Which is why I focus on the issue of "what you have control over".


Throwing in the towel

Posted by Anton Aylward

I was saddened to hear of an InfoSec colleague who met with overwhelming frustration at work:

After two years of dealing with such nonsense, I was forced to resign
within two months of discovering a serious security issue which possibly
jeopardized overseas operations. I have since found out that they are
selling the company and didn't want any who knew the problems around.

Hmm.
Thank you.
Speaking as an auditor who occasionally does "due diligence" with respect to take-overs, you've just shown another use for LinkedIn - contacting ex-employees to find out about such problems.

Certainly a lot of employees leaving or being fired in the couple of years before a pending acquisition is a red flag, eh?

How much would you give up your laptop for?

Posted by Anton Aylward

http://tech.yahoo.com/blogs/null/154866;_ylt=Av2YyMlmiE8ERpzUwD020zUWLpA5

Remember all those journalists doing the "give up your password for a chocolate bar" articles?


Well, this seems a lot more realistic - giving up your laptop.

Not just the hardware, but everything on it!

Frightening!


The Cost of patching

Posted by Anton Aylward

I saw this assertion go by and it stood out:

The bigger cost would be the cost of not patching. Such items as downtime will affect more staff/users than patching will.

That's not a fair statement. There is much more to the discussion than whether to patch or not to patch, or "stuff this for a lark, let's convert to Mac or Linux".

The issue so far has been black and white.
There is a black and white difference between devices that face the internet and those that are not accessible to or from the 'Net.

But what about the "grey"? Not all patches have the same criticality, even for 'Net-facing devices.

And there's more to security - even of the Internet-facing devices - than patching software.

How Many Deaths?

Posted by antonaylward

Here http://thecipblog.com/?author=3 I found this quote:

“In order to be designated ‘critical information infrastructure’, how many deaths would the failure of a network have to cause?" asks Matthew Holt, the author of this blog article.

He raises a good point. He asks if "death of people" would be a legitimate category of criteria to use when determining the level of criticality of an ICT system. His answer is "yes", and the number is "one". Well, OK, death is death and irreversible, but there are many other failure modes that are not death and may be too much trouble to reverse. I suppose one example of a "worst-case scenario" would be a take-over of your nation by a foreign totalitarian oppressive regime. Or an attempt that leaves you in a war zone or one of the refugee camps that litter the Third World.

About creating Corporate IT Security Policies

Posted by Anton Aylward

As I've said before, you should not ask yourself what policies to write but what you need to control. If you begin with a list of policies, you need to adapt the reality to the list. The risk is that you create a false sense of control of security.

The threat-risk approach is 'technical', and as we've discussed many times, the list of threats cannot be fully enumerated, so this is a ridiculous approach.

Basing policy on risk is also a fruitless approach as it means you are not going to face some important points about policy.

Policy is for people. It's not technical; it's about social behaviour and expectations.
Policy can be an enabler, but if you think only about risk you will only see the negatives; your policies will all be of the form "Don't do that".
Policies should tell people what they should do, what is expected of them, and give them guidance.

Policies also have to address the legal and regulatory landscape. As such they may also address issues of ethics, which again is not going to be addressed by a threat-risk approach.

All in all, if you follow Mark's advice you may write policies that seem OK, but when it comes to following them it will be like the song from the 70s by the Five Man Electrical Band:

Sign, sign, everywhere a sign
Blocking out the scenery breaking my mind
Do this, don't do that, can't you read the sign

and people will feel put upon and that the company is playing Big Brother. You will have heavy-handed rules that are resented and not clearly understood by all employees.

Policies are there to control the behaviour of people in the corporate setting. Think in terms of people and behaviour, not in terms of threats and risks.
Policies are to guide and control behaviour of people, not of machines and software.

Think of policies as having these kinds of objectives and you will be on a firm footing:

  • Shift attitudes and change perspectives
  • Demonstrate management support
  • Assure consistency of controls
  • Establish a basis for disciplinary action
  • Avoid liability for negligence
  • Establish a baseline against which to measure performance and improvement
  • Coordinate activities

and of course something important to all of us toiling in InfoSec

  • Establish a basis for budget and staffing to implement and enforce the policies

Policies need to be created from the point of view of management, not as a set of techie/geek rules, which the threat/risk approach would lead to.

Not least of all because, as I'm sure Donn Parker will point out, managers don't want to hear all that bad stuff about threats; they want policies that encourage staff to contribute to the profitability of the
company.
