The InfoSec Blog

14 antivirus apps found to have security problems

Posted by Anton Aylward

Let us pass over the "All A are B" illogic in this and consider what we've known all along. AV doesn't really work; it never did.
Signature-based AV, the whole "I'm better than you because I have more signatures in my database" approach to AV and AV marketing that so bedazzled the journalists ("Metrics? You want metrics? We can give you metrics! How many do you want? One million? Two million!"), is a losing game. Even setting aside polymorphism and other evasion techniques, the boundary between what actually works and what merely works for marketing blurs.
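To see why the signature-counting game is losing, here is a minimal sketch of how signature scanning works and how trivially it is defeated. The signature bytes and the one-byte XOR "packing" are hypothetical illustrations, not taken from any real AV product or malware sample:

```python
# Minimal sketch of signature-based scanning (hypothetical signatures).
# A "signature" is just a byte pattern; the scanner flags any file containing it.
SIGNATURES = {
    "Example.Trojan.V1": b"MALICIOUS_PAYLOAD_V1",
}

def scan(data: bytes) -> list[str]:
    """Return the names of all signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

original = b"...header..." + b"MALICIOUS_PAYLOAD_V1" + b"...code..."
print(scan(original))   # the known sample is detected

# A one-byte XOR "packing" of the whole body, the crudest form of
# polymorphism, leaves behaviour intact (real malware unpacks itself at
# run time) but changes every byte the scanner is looking for.
packed = bytes(b ^ 0x5A for b in original)
print(scan(packed))     # nothing matches: the signature is now useless
```

Every repack needs a new signature, which is why databases balloon into the millions while detection of anything novel stays poor.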

So then we have the attacks on the 'human firewall', or whatever the buzzword is that appears in this month's geek-Vogue magazines, whatever the latest fashion is. What's that? Oh right, the malware writers are migrating to Android, the industry commentators say. Well, they've tried convincing us that Linux and MacOS were under attack and vulnerable, despite the evidence. Yes, vendors tried convincing Linux and Apple users to buy AV products, arguing that because Linux and MacOS ran on the same chips as Microsoft's products they were just as vulnerable as Microsoft; they gave up dunning the journalists and advertising when they found that the supposed market wasn't convinced and didn't buy.

That large software products are buggy surprises no one. There are methods for producing high-quality code, as NASA has shown on its deep-space projects, but they are incompatible with the attitudes of commercial software vendors. They require a discipline that seems absent from many younger coders, the kind that so many commercial firms hire on the basis of cost and who are driven by 'lines of code per day' metrics, feature-driven popularity, and 'first to market' imperatives.

So when I read about, for example, RSA getting hacked by means of social engineering, I'm not surprised. Neither am I surprised when I hear that so many point of sales terminals are, if not already infected, then vulnerable.

But then all too many organizations take a 'risk-based' approach that just is not right. The resistance US firms showed to implementing chip-and-PIN credit card technology while the rest of the world adopted it is a case in point. "It was too expensive" held sway, until it was more expensive not to have implemented it.


An “11th Domain” book.

Posted by Anton Aylward

Gary Hinson makes the point here that Rebecca Herold makes elsewhere:
Awareness training is important.

I go slightly further and think that a key part of a security practitioner's professional knowledge should be human psychology and sociology, how behaviour is influenced. I believe we need to know this from two aspects:

First, we need to understand how our principals are influenced by non-technical and non-business matters, the behavioural persuasion techniques used on them (and us) by vendor salesmen and the media. Many workers complain that their managers and executives seem to go off at a tangent and ignore "the facts". We speak of decisions driven by articles
in "glossy airline magazines" and by often-distorted cultural myths. "What Would the Captain Do?", or Han Solo or Rambo, might figure more than "What Would Warren Buffett Do?" or "What Does Peter Drucker Say About A Situation Like This?". We can only be thankful that most of the time most managers and executives are more rational than this, but even so ...

Learning to Counter Threats – Skills or Ethics?

Posted by Anton Aylward

Fellow CISSP Cragin Shelton made this very pertinent observation and gave me permission to quote him.

The long thread about the appropriateness of learning how to lie (con, `social engineer,' etc.) by practising lying (conning, `social engineering', etc.) is logically identical to innumerable arguments about whether "good guys" (e.g. cops and security folk) should teach, learn, and practice

  •  writing viruses,
  •  picking locks,
  •  penetrating firewall-protected networks,
  •  cracking safes,
  •  initiating and exploiting buffer overflows, or
  •  engaging in any other practice that is useful to and used by the bad guys.

We can't build defenses unless we fully understand the offenses. University professors teaching how to write viruses have had to explain this problem over and over.

Declaring that learning such techniques is a priori a breach of ethics is short-sighted. This discussion should not be about whether white hats should learn by doing. It should be about how to design and carry out responsible learning experiences and exercises. It should be about developing and promoting the culture of responsible, ethical practice. We need to know why, when, how, and who should learn these skills.

We must not pretend that preventing our white hatted, good guy, ethical, patriotic, well-intentioned protégés from learning these skills will somehow ensure that the unethical, immoral, low breed, teen-vandal, criminal, terrorist crowds will eschew such knowledge.

I have grave reservations about teaching such subjects.

Social Engineering and sufficiency of awareness training

Posted by Anton Aylward

Someone asked:

If you have good information security awareness amongst
the employees, then it should not be a problem what kind of
attempts are made by social engineers to glean information
from your employees.

Yes, but as RSA demonstrated, it is a moving target.

You need to have it as a continuous process, educate new hires and educate on new techniques and variations that may be employed by the 'social engineers'. Fight psychology with psychology!

Text vs HTML: what is more secure?

Posted by Anton Aylward

There are "good" mailing lists and "not so good" mailing lists from the point of view of security.

Try posting HTML mail to a "good" list and one of two things will happen.

  1. If you have a mailer that includes a plain-text part, then the list
    software will discard the HTML, forward the plain text to the list
    with a message reading

    [Non-text portions of this message have been removed]

    I'm sure you've seen that message in posts on yahoogroups and similar.

  2. If you have a mailer that doesn't include a plain-text part,
    then one of two things may happen:

    1. The plain-text version is displayed, but since it is null, the text
      that appears is empty; you still get

      [Non-text portions of this message have been removed]

      I'm sure you've seen that too.

    2. The list software does its best to convert the HTML to plain text by
      stripping off the HTML tags. This works, but may
      produce some odd results. However, you still get

      [Non-text portions of this message have been removed]
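The tag-stripping in case 2 above can be sketched roughly as follows. This is a naive illustration of the approach, not the actual code of any real list package, and it shows why the results can be odd: everything the tags expressed (links, emphasis, tables) is simply thrown away:

```python
import re
from html import unescape

def html_to_text(html: str) -> str:
    """Naively convert an HTML mail body to plain text by stripping tags."""
    # Turn the most common block/line-break tags into newlines first,
    # so the text doesn't all run together on one line.
    text = re.sub(r"<br\s*/?>|</p>", "\n", html, flags=re.I)
    # Then drop every remaining tag wholesale.
    text = re.sub(r"<[^>]+>", "", text)
    # Finally decode entities like &amp; back into characters.
    return unescape(text).strip()

body = "<p>Hello &amp; welcome,</p><p>see the <b>attached</b> notes.</p>"
print(html_to_text(body))
```

A real converter has to make many more judgment calls (what to do with links, lists, and images), which is exactly where the "odd results" come from.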

Throwing in the towel

Posted by Anton Aylward

I was saddened to hear of an InfoSec colleague who met with overwhelming frustration at work:

After two years of dealing with such nonsense, I was forced to resign
within two months of discovering a serious security issue which possibly
jeopardized overseas operations. I have since found out that they are
selling the company and didn't want anyone who knew about the problems around.

Thank you.
Speaking as an auditor who occasionally does "due diligence" with respect to take-overs, you've just shown another use for LinkedIn - contacting ex-employees to find out about such problems.

Certainly a lot of employees leaving or being fired in the couple of years before a pending acquisition is a red flag, eh?