In theory, consumers and businesses could punish Symantec for these
oversights by contracting with other security vendors. In practice, there’s
no guarantee that products from other vendors are well-secured, either
— and there is no clear way to determine how secure a given security
product actually is.
Too many firms take an “appliance” or “product” (aka “technology”) approach to security. There’s a saying that has been attributed to many security specialists over the years, and it remains quite true:
If you think technology can solve your security problems,
then you don’t understand the problems and you don’t
understand the technology.
It’s still true today.
The ‘appliance’ attitude is often accompanied by an unwillingness to do a proper risk analysis and to make the organizational changes that render the InfoSec structure self-reliant and, where necessary, self-healing,
that is, to institute a proper ISMS. That takes quite a lot of initial effort and then ongoing effort, which brings to mind another old quotation:
The biggest problem a security consultant has is getting
managers to perform regular risk assessments. They don’t
want to hear that it’s an ongoing process. The attitude
was “why bother if I can’t just check it once and be
done with it”.
Not just the risk analysis but the risk management too, with both treated as an ongoing cycle. As I say, a proper ISMS is needed – of which ISO 27001/27002 is a good example – rather than an ‘appliance’ or piece of OTS software such as those mentioned in the article, which often run in ‘fire and forget’ mode and are installed by a netadmin or hostadmin who has little to no real, meaningful security understanding.
Security is a process not a product
is quite true but understates the case. “Process” means commitment from the Board and management, which in turn means there is budget to implement the ongoing organizational changes needed to deal with
shifts in the security profile as technology and threats change — as indeed the recent moves to BYOD and ‘Cloud’ have shown — along with the risk management processes, the people and the training.
Companies that are not willing to deal with this are going to suffer.
Breaches and hacks may have, up to now, been an embarrassment and inconvenience, perhaps the cost of sending out notification letters, a short blip in stock value. But consumer awareness is growing, and in
the e-commerce world consumers are coming to expect many basic quality and security baseline features. And that too is an evolving issue. Sites like PayPal and eBay devote a lot of energy not simply to security but to the whole process of evolving security: staying aware of evolving threats, methods and vulnerabilities.
But it’s also easy to do it all wrong, to go through the motions with no real results.
We can see that in the way the US Government is dealing with InfoSec and, in doing so, generating the artificial ‘skills gap’ of InfoSec specialists. What they are demanding is low-level operatives, in effect ‘enhanced’ sysadmins and netadmins who are trained in using the appliances and configuring Windows devices and servers. This is ‘tactical’ work. What they are avoiding is the strategic work: addressing organizational and structural issues, doing proper risk analysis and management, the heavy ‘paperwork’ of implementing ISO 27000 or ISO 31000. One reason for this is that it is going to be disruptive, “drag them kicking and screaming out of the 19th century”.
We can point quite clearly to various US government departments, since they are high profile, well publicized in the media and reports, and quite recidivist, but there is no shortage of other organizations, commercial, NGO and governmental, throughout the world that have implemented just enough “security” to say “well, that doesn’t apply to me”. All too often that ‘just enough’ is in the form of appliances and OTS software for otherwise poorly configured Windows systems, run by under-staffed, under-trained people (because it’s all under-budgeted and managed by people who don’t understand Risk Management). And there’s a lot of “Denial” going on.
This is why I like dealing with first and second tier banks and the large insurance companies that have been around for a long time. They’ve been doing Risk Analysis and Management in the meat-world for a long time, and segueing that into Cyberspace is no big deal for them. Their main issue is that they have to be a bit un-conservative to deal with rapidly advancing technology.
But as the real world shows, even they aren’t completely immune.
So any organization saying “I’m all right”, “I don’t need to do these things” or “I’m OK with my appliances and OTS software” is deluding itself.
I wonder what they consider to be a hack? The wording in the article is loose enough that if someone pinged one of their servers it would be considered a hack. Perhaps they even count Google’s spider indexing as a probe into their network. It makes me wonder how many ‘real’ hack attempts are made and how many succeed. All in all, it sounds like a funding bid!
Marcus Ranum once commented about firewall logging that an umbrella that notified you about every raindrop it repulsed would soon get annoying. I suspect the same thing is going on here. Are these ‘repulsed’ probes really ‘need to know’? Are they worth the rotating rust it takes to store that they happened?
Oh, right, Big Data.
Oh, right, “precursor probes”.
Can we live without this?
At the very least, this will apply ‘many eyes’ to some of the SSL code, and so long as the pruning isn’t wholesale slash-and-burn, cutting the code back may prove efficacious for two reasons.
Less code can be simpler code, with decreased likelihood of there being a bug due to complexity and interaction.
Getting rid of the special cases such as VMS and Windows also reduces the complexity.
POSIX I’m not sure about; in many ways POSIX has become a dinosaur. Quite a number of Linux authors have observed that if you stop being anal about POSIX you can get code that works, and a simple #ifdef can take care of portability. In the 90% case there isn’t a lot of divergence between the flavours, and in the 99% case the #ifdef can take care of that.
Whether SSH fits into the 90% or the 99% I don’t know. The APIs for ‘random’ and ‘crypto’ are in the grey areas where implementations differ but also one where POSIX seems to be the most anal and ‘lowest common denominator’. I suspect that this is one where the #ifdef route will allow more effective implementations.
We shall see what emerges, but on the whole the BSD team have a reputation for good security practices, so I’m hopeful about the quality.
I’d be interested to see their testing approach.
The latest intelligence on Al-Qaeda, a high profile Child Protection
report and plans for policing the London 2012 Olympics; three very
different documents with two things in common: firstly, they all
contained highly confidential information and secondly, they were all
left on a train.
Or maybe “Strangers on a Train”.
Our latest research reveals that two thirds of Europe’s office commuters
have no qualms about peering across to see what the person sitting next
to them is working on; and more than one in ten (14 per cent) has
spotted confidential or highly sensitive information.
An article on LinkedIn entitled “The Truth about Practices” started a discussion thread with some of my colleagues.
The most pertinent comment came from Alan Rocker:
I'm not sure whether to quote "Up the Organisation", ("If you must have a policy manual, reprint the Ten Commandments"), or "Catch-22" (about the nice "tidy bomb pattern" that unfortunately failed to hit the target), in support of the article. Industry-wide metrics can nevertheless be useful, though it's fatal to confuse a speedometer and a motor.
However, not everyone in the group agreed with our skepticism or with the observations of the author of the article.
And Anton, aren’t the controls you advocate so passionately best practices?
NOT. Make that *N*O*T*!*!*! Even allowing for the lowercase!
I get criticised occasionally for long and detailed posts that some readers complain treat them like beginners, but sadly, if I don’t, I get comments such as this in reply:
Data Loss is something you prevent; you enforce controls to prevent data
leakage, DLP can be a programme, but I find it very difficult to support
with a policy.
Does one have visions of chasing escaping data over the net with a three-ring binder labelled “Policy”?
Let me try again.
Policy comes first.
Without policy giving direction, purpose and justification, and supplying the basis for measurement, quality and applicability (never mind issues such as configuration), you are working on an ad hoc basis.
Java 7 Update 10 and earlier contain an unspecified vulnerability
that can allow a remote, unauthenticated attacker to execute arbitrary
code on a vulnerable system.
By convincing a user to visit a specially crafted HTML document,
a remote attacker may be able to execute arbitrary code on a vulnerable system.
Well, yes …. but.
From the “left hand doesn’t know what the right hand is doing” department:
Ngair Teow Hin, CEO of SecureAge, noted that smaller companies
tend to be “hard-pressed” to invest or focus on IT-related resources
such as security tools due to the lack of capital. This financial
situation is further worsened by the tightening global and local
economic climates, which has forced SMBs to focus on surviving
above everything else, he added.
Well, let’s leave the vested interests of security sales aside for a moment.
I recently read an article about the “IT Doesn’t Matter” thread that basically said part of that case was that staying at the bleeding edge of IT did not give enough of a competitive advantage. Considering that most small (and many large) companies don’t fully utilise their resources, don’t fully understand the capabilities of the technology they have, and don’t follow good practices (never mind good security), this is all a moot point.
Last month, this question came up in a discussion forum I’m involved with:
Another challenge to which i want to get an answer to is, do developers
always need Admin rights to perform their testing? Is there not a way to
give them privilege access and yet have them get their work done. I am
afraid that if Admin rights are given, they would download software’s at
the free will and introduce malicious code in the organization.
The short answer is “no”.
The long answer leads to “no” in a roundabout manner.
Unless your developers are developing admin software they should not need admin rights to test it.
Call me a dinosaur (that’s OK, since it’s the weekend and I’m dressed down to work in the garden) but …
And this doesn’t actually stop them from making use of ‘insider information’; they just have to declare it within 30 days.
No, wait, sorry … you mean that the legislators are saying that legislators shouldn’t do something that is illegal anyway? Or that, if they do something that is already illegal, it is OK as long as they declare it within 30 days? …
It gets worse:
I’d like to claim the system is rigged so ‘the rich get richer’, but if I did that some people who claim they are right wing would accuse me of being left wing. Indeed, this tells me that their political outlook has not progressed since 20 June 1789. This one-dimensional view fails to describe the rich variety of political attitudes in Washington, never mind the rest of the USA and points elsewhere on the physical compass.
Just those two show we need more than 4 axes to describe a political stance. But as I mentioned in a previous post, journalists are simple-minded and expect the rest of the world to be as limited in outlook and understanding.
Try this test:
How does this all relate to InfoSec, you ask.
Well, part of that Political Compass is a view of ‘how authoritarian’.
And that gets back to issues we have to deal with such as Policy and Enforcement, Do We Let Employees have Access to the Internet, and the like.
Hans Eysenck pointed out that the right wing (e.g. Fascism and Nazism) had a lot in common with the left wing (communism). Both are repressive, undemocratic and anti-Semitic. So on these issues, at least, the left-right distinction is meaningless.
How many more simplistic distinctions, such as those foisted on us by journalists, are equally meaningless?
Some while ago my Australian fellow ex-pat Les Bell, who apart from being a CISSP is also a pilot, pointed out to me that the method of ‘root cause analysis’ is no longer used in analysing plane crashes. The reality is that “it’s not just one thing”; it’s many factors. We all know that applies in most areas of life.
I suspect most people know that too; it’s not restricted to the digerati.
There is the old ditty that explains how because of a nail an empire was lost, but no-one is proposing that we fix the failing of the “American Empire” by manufacturing more nails.
Except possibly Journalists.
This isn’t news. Signature-based (and hence subscription-based, and hence that whole business model) AV is a wrong-headed approach. As Rob Rosenberger points out at Vmyths.com, we are addicted to the update-cycle model, and its business premise is very like that of drug pushers.
What’s that you say? Other types of AV? Like what?
Well, you could have a front-end engine that checks all downloads and all email and all email attachments and all URL responses by emulating what would happen when they run on any PC or in any browser or any other piece of software such as any of the PDF readers you use, or any of the graphical display software you use or any of the word processors you use
or any of the spreadsheet programs you use or any music players you use … and so on.
Many people in the industry – myself included – have proposed an alternative whereby each machine has a unique cryptographic ID and the legally and properly installed libraries are all signed with that ID, and the program loader/kernel will only load and execute correctly signed code.
Yes, Microsoft tried something similar with ActiveX, but that was signed by the vendor – which can be a good thing – and used PKI, which can also be a good thing. But both can be a problem as well: go google for details. A local signature has advantages and its own problems.
The local signature makes things unique to each machine so there is no “master key” out there. If your private key is compromised then do what you’d do with PGP – cancel the old one, generate a new one and sign all your software with the new one.
No technical measure can overcome human frailty in this regard.
Like many forms of presenting facts, not least of all about risk, reducing complex and multifaceted information to a single figure does a disservice to those affected. The classical risk equation is another example of this: summing many hundreds of fluctuating variables into one figure.
Perhaps the saddest expression of this kind of approach to numerology is the stock market. We accept that the bulk of the economy is based on small companies, but the stock exchanges have their “Top 100” or “Top 50”, which are all large companies. Perhaps they do have an effect on the economy the same way a herd of elephants might, but the biomass of this planet, like our economy, is mostly made up of small things.
The financial loss of internet fraud is non-trivial but not exactly bleeding us to death. Life goes on anyway and we work around it. But it adds up. Extrapolated over a couple of hundred years it would have the same financial value as a World Killer Asteroid Impact that wiped out all of human civilization. (And most of human life.)
A ridiculously dramatic example, yes, but this kind of reduction to a one-dimensional scale such as “dollar value” leads to such absurdities. Judges in court cases often put dollar values on human life. What value would you put on your child’s?
We know, based on past statistics, the probability that a US president will be assassinated. (Four in 200+ years; more if you allow for failed attempts). With that probability we can calculate the ALE and hence what the presidential guard cost should be capped at.