In theory, consumers and businesses could punish Symantec for these
oversights by contracting with other security vendors. In practice, there’s
no guarantee that products from other vendors are well-secured, either
— and there is no clear way to determine how secure a given security
product actually is.
Too many firms take an "appliance" or "product" (aka "technology") approach to security. There's a saying that has been attributed to many security specialists over the years:
If you think technology can solve your security problems,
then you don’t understand the problems and you don’t
understand the technology.
It's still true today.
The 'appliance' attitude is often accompanied by an unwillingness to do a proper risk analysis and make the organizational changes needed to render the InfoSec structure self-reliant and, where necessary, self-healing;
that is, to institute a proper ISMS. That is often quite a lot of initial effort and then ongoing effort, which brings to mind another old quotation:
The biggest problem a security consultant has is getting
managers to perform regular risk assessments. They don’t
want to hear that it's an ongoing process. The attitude
was “why bother if I can’t just check it once and be
done with it”.
Not just the risk analysis but the risk management, and treat both as an ongoing cycle. As I say, a proper ISMS is needed – of which ISO27001/2 is a good example – rather than an 'appliance' or a piece of OTS software such as those mentioned in the article, which often run in 'fire and forget' mode and are installed by a netadmin or hostadmin who has little to no real, meaningful security understanding.
Security is a process, not a product
is quite true but understates the case. "Process" means commitment from the Board and management, which in turn means there is budget to implement the ongoing organizational changes needed to deal with
changes in the security profile as technology and threats change (as the recent shifts to BYOD and 'Cloud' have shown), along with the risk management processes, the people and the training.
Companies that are not willing to deal with this are going to suffer.
Breaches and hacks may have, up to now, been an embarrassment and an inconvenience, perhaps the cost of sending out notification letters and a short blip in stock value. But consumer awareness is growing, and in
the e-commerce world consumers are coming to expect a baseline of quality and security features. And that too is an evolving issue. Sites like PayPal and eBay devote a lot of energy not simply to security but to the whole process of evolving security: being aware of evolving threats and methods and vulnerabilities.
But it's also easy to do it all wrong, to go through the motions with no real results.
We can see that in the way the US Government is dealing with InfoSec, and in doing so generating the artificial 'skills gap' of InfoSec specialists. What they are doing is demanding low-level operatives, in effect 'enhanced' sysadmins and netadmins who are trained in using the appliances and configuring Windows devices and servers. This is 'tactical' work. What they are avoiding is the strategic work: addressing organizational and structural issues, doing proper risk analysis and management, the heavy 'paperwork' of implementing ISO27000 or ISO31000. One reason for this is that it is going to be disruptive, "drag them kicking and screaming out of the 19th century".
We can point quite clearly at various US government departments since they are high profile, well publicized in the media and reports, and quite recidivist, but there is no shortage of other organizations, commercial, NGO and governmental, throughout the world that have implemented just enough security to say "well, that doesn't apply to me". All too often that 'just enough' is in the form of appliances and OTS software for otherwise poorly configured Windows systems, run by under-staffed, under-trained people (because the function is under-budgeted and managed by people who don't understand Risk Management). And there's a lot of "Denial" going on.
This is why I like dealing with first- and second-tier banks and the large insurance companies that have been around for a long time. They've been doing Risk Analysis and Risk Management in the meat-world for a long time, and segueing that into Cyberspace is no big deal for them. Their main issue is that they have to be a bit un-conservative to deal with rapidly advancing technology.
But as the real world shows, even they aren’t completely immune.
So any organization saying "I'm all right", "I don't need to do these things" and "I'm OK with my appliances and OTS software" is deluding itself.
The take-aways that are relevant:
Cyber risk should not be managed separately from enterprise or business risk. Cyber may be only one of several sources of risk to a new initiative, and the total risk to that initiative needs to be understood.
Cyber-related risk should be assessed and evaluated based on its effect on the business, not based on some calculated value for the information asset.
There are a few other things in there too, but those are the leading ones; what the techie geeks who are attracted to InfoSec need to learn is expressed well in those two points. It's not about the technology, it's about the business. It's why I hate the term "Cyber-". Information security risks existed in the days of typewriters, carbon copies and filing cabinets. Security risks existed in the days of hand-written messages and horseback couriers.
Why do I say this?
Back in my banking days one officer at the bank said
The bank *IS* the computer
I saw his point but ultimately the bank is its dealings with people.
If people lose confidence in the bank, it will fail.
It has happened in the past; it can happen again, and all the
“Cyber-security” in the world won’t help.
For whatever value of “Mobile” is applicable in context, yes.
A lot of what I see is students in the library with their laptops or large tablets with keyboards, with paper and books beside them. Perhaps if students had multi-screen displays like the one in the movie "Swordfish" AND there were more books on-line at low cost and multi-access (which isn't how many libraries work, sadly), then the marketers' dream of students with ebooks rather than a knapsack of books would happen. As it is, with only one viewer, books and papers are still needed.
I'm seeing, or being told, the same by office workers: a single screen, even a big screen, is not adequate. Real work and study require parallel access even if the work-flow isn't massively parallel.
My own work, such as it is, gets by because my desktop, although only one physical screen, has 6 logical screens. Even so, I have a stack of papers and many books to hand, as well as my phone and tablet.
Unless we get some sort of virtual projected-into-the-air display, we are going to need a form of HUD glasses that lets us do the MIT "Put-That-There" screen so that it doesn't interfere with others, but still lets us look through into the real world at our books and papers. Believe me, the books and papers aren't going to go away in the foreseeable future, whatever the improvements in display technology.
Interviewing is a much better method than self-certifications and a checklist, if time and resources allow.
In the ISO-27001 forum, my friend and colleague Gary Hinson has repeatedly pointed out, and I fully support him in this, that downloading check-lists from the 'Net and adopting question lists from there is adopting a solution to someone else's
problem. If that.
Each business has both generic problems (governments, sunspots, meteor strikes, floods and other apocalyptic threats and Acts of God) and ones specific to its way of working and configuration. Acts of God are best covered by prayer and insurance.
Gary recommends "open-ended questions" during the interview rather than ones that require a yes/no answer. That's good, but I see problems even there. I prefer to ask "Tell me about your job" rather than "Tell me how your job … can be made more efficient".
My second point is that risk management will *ALWAYS* fail if the risk analysis is inadequate. How much of the RA should be done by interviewing people like the sysadmins I don't know, but I have my doubts. I look to the Challenger disaster. I started in the aviation business, where we refined FMEA – Failure Mode and Effects Analysis. Some people think of this in terms of "impact", but really it's more than that; it's also causal analysis. As Les Bell, a friend who is also a pilot and interested in aviation matters, has pointed out to me, "Root Cause Analysis" is no longer adequate: failure comes about because of a number of circumstances, and it may not even be a single failure – the 'tree' fans both ways!
Yes, FMEA can't be done blindly, but failure modes that pertain to the business – which is what really counts – and the fan-in/fan-out trees can be worked out even without the technical details. Rating the "risk" is what requires the drill-down.
Which gets back to Donn Parker's point in a number of his books, though he never states it this way. The FMEA tree can be heavily pruned using diligence, as he says: standards, compliance, contracts, audits, good practices, available products. The only things he leaves out are Policy and Training. Policy gives direction and is essential to any purpose, to the choice of standards and products, and to identifying what training is needed.
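To make the "fans both ways" point concrete, here is a toy Perl sketch. The causes, failure modes and "covered by diligence" entries are invented purely for illustration, not taken from Parker's books:

    #!/usr/bin/perl
    # Toy FMEA fan-in/fan-out sketch. The causes, failure modes and
    # 'diligence' coverage below are invented for illustration only.
    use strict;
    use warnings;

    # Each cause fans OUT to several failure modes; read the other way,
    # several causes fan IN to one failure mode -- a graph, not a simple tree.
    my %contributes_to = (
        'stale OS patches'     => [ 'malware infection', 'data breach'    ],
        'weak admin passwords' => [ 'data breach',       'service outage' ],
        'no config backups'    => [ 'service outage' ],
    );

    # Parker-style diligence: branches already handled by standards,
    # audits, contracts and the like can be pruned before the drill-down.
    my %covered = ( 'no config backups' => 'backup regime audited' );

    for my $cause (sort keys %contributes_to) {
        if ( my $why = $covered{$cause} ) {
            print "PRUNE   $cause ($why)\n";
        }
        else {
            print "ASSESS  $cause -> ",
                  join( ', ', @{ $contributes_to{$cause} } ), "\n";
        }
    }

The branches left marked ASSESS are where the drill-down, and the interviews, come in.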
All in all, the article at https://blog.anitian.com/flawed-it-risk-management/ takes a lot of words to say a few simple concepts.
I have my doubts about many things, and the arguments here and in the comments section loom large.
Yes, I can see that business sees no need for an 'arms race' escalation of desktops once the basics are there. A few people, gamers, developers, might want personal workstations that they can load up with memory and high-performance graphics engines, but for the rest of us, it's ho-hum. That Intel and AMD are producing chips with more cores, more cache, integrated graphics and more, well, Moore's Law applies to transistor density, doesn't it, and they have to do something to soak up all those extra transistors on the chips.
As for smaller packaging, what do these people think smart phones and tablets and watches are?
Gimme a break!
My phone has more computing power than was used by the Manhattan project to develop the first nuclear bomb.
These are interesting, but the real application of chip density is going to have to be in other things serving the desktop.
And for #1 and #3, Windows will become, if not an impediment, then irrelevant.
It's possible a very stripped-down Linux can serve for #1 and #3, but somewhere along the line I suspect people might wake up and adopt a proper RTOS such as QNX, much in the same way that Linux has come to dominate #2. It is, however, possible that Microsoft will, now that Gates and Ballmer are out of the scene, adopt something Linux-like or
work with Linux so as to stay relevant in new markets. The Windows tablet isn't the success they hoped for, and the buyout of Nokia seemed more to take Nokia out of the market than to become an asset for Microsoft to enter the phone market and compete with Apple and Samsung. Many big firms that do have lots of Windows workstations are turning to running
Samba on Big Iron because (a) it's cheaper than a huge array of Windows servers, which present reliability and administrative overhead, and (b) it's scalable. Linux isn't the 'rough beast' that Ballmer made it out to be, and Microsoft's 'centre cannot hold' the way it has in the past.
Embedding such devices in something edible only means it will end up in the stomach of the targeted user. Perhaps that is intentional, but I suspect not. Better to put the device in the base of the coffee cup.
I wonder what they consider to be a hack? The wording in the article is loose enough to mean that if someone pinged one of their servers it would be considered a hack. Perhaps they even count Google's spider indexing as a probe into their network. It makes me wonder how many 'real' hack attempts are made and how many succeed. All in all, it sounds like a funding bid!
Marcus Ranum once commented about firewall logging that an umbrella that notified you about every raindrop it repulsed would soon get annoying. I suspect the same thing is going on here. Are these 'repulsed' probes really 'need to know'? Are they worth the rotating rust it takes to store that they happened?
Oh, right, Big Data.
Oh, right, "precursor probes".
Can we live without this?
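If the raindrops must be kept at all, a summary is kinder to the rotating rust than a line per drop. A Perl sketch; the log format here is an assumption (iptables-style lines carrying DROP and SRC=), so adjust the pattern to whatever your firewall actually emits:

    #!/usr/bin/perl
    # Summarize repulsed raindrops instead of storing each one.
    # ASSUMPTION: iptables-style log lines containing "DROP" and
    # "SRC=<ip>"; adapt the regex to your firewall's actual format.
    use strict;
    use warnings;

    my %drops;
    while (my $line = <>) {
        next unless $line =~ /\bDROP\b/ and $line =~ /SRC=([0-9.]+)/;
        $drops{$1}++;
    }

    # Report only the persistent sources -- candidate 'precursor
    # probes' -- not every stray packet. The threshold is arbitrary.
    for my $src (sort { $drops{$b} <=> $drops{$a} } keys %drops) {
        next if $drops{$src} < 10;
        print "$src dropped $drops{$src} times\n";
    }

Run over the kernel log, it reports only the squalls, not the drizzle.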
Douglas Berdeaux has written an excellent book, excellent from quite a number of points of view, some of which I will address. Packt Publishing have done a great service making this and others available at their web site.
It is one of over 2000 instructional books and videos at the Packt site, and one of many technical books there with extensive source code that serve as good 'instructors'.
I read a lot on my tablet, but most of the ebooks I read are "linear text" (think: 'novels', 'news'). A book like this is heavily annotated by differentiating fonts, type and layout, and how well your ebook reader renders that might vary. None of the ones I used were as satisfactory as the PDF. For all its failings, if you want a page that looks "just so" whatever it is read on, then PDF still wins out. For many this won't matter, since the source code can be downloaded in a separate ZIP file.
Of course you may be like me and prefer to learn by entering the code by hand so as to develop the learned physical habit which you can then carry forward. You may also prefer to have a hard copy version of the book rather than use a ‘split screen’ mode.
This is not a book about learning to code in Perl, or learning the basics of TCP/IP. Berdeaux himself says in the introduction:
This book is written for people who are already familiar with
basic Perl programming and who have the desire to advance this
knowledge by applying it to information security and penetration
testing. With each chapter, I encourage you to branch off into
tangents, expanding upon the lessons and modifying the code to
pursue your own creative ideas.
I found this to be an excellent 'source book' for ideas and worked through many variations of the example code. This book is a beginning, not an end point.
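For flavour, here is the sort of small variation the early chapters invite. This is my own sketch, not code from the book; it uses only the core IO::Socket::INET module, the port list is arbitrary, and of course you should only probe hosts you are authorized to test:

    #!/usr/bin/perl
    # Minimal TCP connect-and-banner-grab sketch -- my own variation
    # in the spirit of the book's examples, not code from the book.
    use strict;
    use warnings;
    use IO::Socket::INET;

    my $host  = shift @ARGV or die "usage: $0 host\n";
    my @ports = (21, 22, 25, 80, 110);     # arbitrary sample of ports

    for my $port (@ports) {
        my $sock = IO::Socket::INET->new(
            PeerAddr => $host,
            PeerPort => $port,
            Proto    => 'tcp',
            Timeout  => 2,
        );
        if (!$sock) {
            print "$host:$port closed or filtered\n";
            next;
        }
        my $banner = '';
        eval {
            local $SIG{ALRM} = sub { die "timeout\n" };
            alarm 2;              # don't hang on services that stay silent
            $banner = <$sock> // '';
            alarm 0;
        };
        $banner =~ s/\r?\n\z//;
        print "$host:$port open", ( length $banner ? " -- $banner" : '' ), "\n";
        close $sock;
    }

From there the book's later chapters give you the ideas to branch off into raw sockets, sniffing and the rest.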
In my very first job we were told, repeatedly told, to document everything and keep our personal journals up to date. Not just with what we did but the reasoning behind those decisions. This was so that if anything happened to us, no knowledge about the work, the project, what had been tried and thought about, would be lost if, perhaps, we were 'hit by a bus on the way to work'.
At that point whoever was saying this looked toward a certain office or certain place in the parking lot. One of the Project managers drove a VW bus and was most definitely not a good driver!
So the phrase ‘document everything in case you’re hit by a bus’ entered into the work culture, even after that individual had left.
And for the rest of us it entered into our personal culture and practices.
Oh, and the WHY is very important. How often have you looked at something that seems strange and worried about changing it in case there was some special reason for it being like that which you did not know of?
Unless things get documented…. Heck, a well-meaning 'kid' might 'clean it out', ignorant of the special reason it was like that!
So here we have what appear to be undocumented controls.
Perhaps they are just controls that were added and someone forgot to mention; perhaps the paperwork for these ‘exceptions’ is filed somewhere else or is referred to by the easily overlooked footnote or mentioned in the missing appendix.
It has been pointed out to me that having to document everything, including the reasons for taking one decision rather than another, “slows down work”. Well that’s been said of security, too, hasn’t it? I’ve had this requirement referred to in various unsavoury terms and had those terms associated with me personally for insisting on them. I’ve had people ‘caught out’, doing one thing and saying another.
But I've also had the documentation save us from mistakes and rework.
These days, with electronic tools, smartphones, tablets, networking, and things like wikis as shared searchable resources, it's a lot easier.
Sadly, I still find places where key documents such as the Policy Manuals are really still at the "3-ring binder" state of the art: PDF files in some obscure location, with no mechanism for commenting or feedback and no way for them to be updated.
Up to date and accurate documentation is always a good practice!
And what surprises me is that when I've implemented those, I get a 'deer in the headlights' reaction from staff and managers much younger than myself. Don't believe what you read about 'millennials' being better able to deal with e-tools than us greybeards.
My digital camera uses exif to convey a vast amount of contextual information and imprint it on each photo: date, time, the camera, shutter, aperture, flash. I have GPS in the camera, so it can record the location and elevation too. The exif protocol also allows for vendor-specific information and is extensible and customizable.
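For instance, a minimal Perl sketch with the CPAN Image::ExifTool module ('photo.jpg' being a placeholder for any image off the camera):

    #!/usr/bin/perl
    # Dump the self-describing metadata a photo carries, via the CPAN
    # Image::ExifTool module. 'photo.jpg' is a placeholder filename.
    use strict;
    use warnings;
    use Image::ExifTool;

    my $tool = Image::ExifTool->new;
    my $info = $tool->ImageInfo('photo.jpg');

    for my $tag (sort keys %$info) {
        next if ref $info->{$tag};    # skip binary blobs (thumbnails etc.)
        printf "%-28s %s\n", $tag, $info->{$tag};
    }

The device tells you, in one standard, extensible vocabulary, what it is and what it did.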
Unless and until we have an 'exif' for IoT, it's going to be lame and useless.
What is plugged in to that socket? A fan, a PC, a refrigerator, a charger for your cell phone? What’s the rating of the device? How is it used? What functions other than on/off can be controlled?
Lame lame lame lame.
At the very least, this will apply 'many eyes' to some of the SSL code, and so long as the pruning isn't wholesale slash-and-burn, cutting it back may prove efficacious for two reasons.
Less code can be simpler code, with decreased likelihood of there being a bug due to complexity and interaction.
Getting rid of the special cases such as VMS and Windows also reduces the complexity.
POSIX I'm not sure about; in many ways POSIX has become a dinosaur. Quite a number of Linux authors have observed that if you stop being anal about POSIX you can get code that works, and a simple #ifdef can take care of portability. In the 90% case there isn't a lot of divergence between the flavours, and in the 99% case the #ifdef can take care of that.
Whether SSH fits into the 90% or the 99% I don't know. The APIs for 'random' and 'crypto' are in the grey area where implementations differ, but also where POSIX seems to be at its most anal and 'lowest common denominator'. I suspect this is a case where the #ifdef route will allow more effective implementations.
We shall see what emerges, but on the whole the BSD team have a reputation for good security practices so I’m hopeful about the quality.
I’d be interested to see their testing approach.
He makes the case that once you put a computer in something it stops being that something and becomes a computer.
Camera + computer => computer