My friend Alan Rocker and I often discuss ideas about technology and tradeoffs. Alan asked about SSDs for Linux:
> I haven't been following hardware developments very closely for a while, so I
> find it hard to judge the arguments. What's important?
Ultimately what's important is the management software, the layer above the drivers, off to one side. That applies regardless of the media and means that the view the applications take of storage is preserved regardless of changes in the physical media.
> The first question is, what areas are currently the bottlenecks and
> constraints, at what orders of magnitude?
The simple answer is 'channels'.
What's the saying? "Those who forget history are doomed to repeat it."
Weren't we doing this with routers and ... well if not firewalls as such then certainly filtering rules in the routers, way back in the 1980s?
I recall attending a luncheon put on by Dell about "Software Defined Networking". Basically, it was having routers that were 'agile' enough to change routing and implement tactical policy based on load, demand and new devices or devices making processing demands.
Again, we were doing that in the 1980s. I worked with ANS as they cut over the academic internet to the commercial internet with their "CO+RE" pseudo-product. Basically, they had been supporting the academic internet and were now selling commercial services using the same backbones, trunks and "outlets" (sometimes known as 'points of presence'). This 'policy based routing' was carried out by custom-built routers; they were IBM AIX desktop boxes -- the kind I'd used to implement an Oracle-based time management/billing system at Public Works Ottawa a few years earlier -- along with some custom-built T3 interface cards.
In theory, consumers and businesses could punish Symantec for these
oversights by contracting with other security vendors. In practice, there’s
no guarantee that products from other vendors are well-secured, either
— and there is no clear way to determine how secure a given security
product actually is.
Too many firms take an "appliance" or "product" (aka "technology") approach to security. There's a saying that's been attributed to many security specialists over the years but is quite true:
If you think technology can solve your security problems,
then you don't understand the problems and you don't
understand the technology.
It's still true today.
The take-away that is relevant:
Cyber risk should not be managed separately from enterprise or business risk. Cyber may be only one of several sources of risk to a new initiative, and the total risk to that initiative needs to be understood.
Cyber-related risk should be assessed and evaluated based on its effect on the business, not based on some calculated value for the information asset.
For whatever value of "Mobile" is applicable in context, yes.
A lot of what I see is students in the library with their laptops or large tablets with keyboards, with paper and books beside them. Perhaps if students had multi-screen displays like the one in the movie "Swordfish" AND there were more books on-line at low cost and multi-access (which isn't how many libraries work, sadly), then the marketer's dream of students with ebooks rather than a knapsack of books would happen. As it is, with only one viewer, books and papers are still needed.
Interviewing is a much better method than self-certifications and a checklist, if time and resources allow.
In the ISO-27001 forum, my friend and colleague Gary Hinson has repeatedly pointed out, and I fully support him in this, that downloading check-lists from the 'Net and adopting question lists from there is adopting a solution to someone else's problem. If that.
Each business has both generic problems (governments, sunspots, meteor strikes, floods & other apocalyptic threats and Acts of God) and ones specific to its way of working and configuration. Acts of God are best covered by prayer and insurance.
Gary recommends "open ended questions" during the interview rather than ones that require a yes/no answer. That's good, but I see a problem even there: I prefer to ask "Tell me about your job" rather than "Tell me how your job ... can be made more efficient".
My second point is that risk management will *ALWAYS* fail if the risk analysis is inadequate. How much of the RA should be done by interviewing people like the sysadmins I don't know, but I have my doubts. I look to the Challenger disaster. I started in the aviation business, where we refined FMEA -- Failure Mode Effect Analysis. Some people think of this in terms of "impact", but really it's more than that; it's also causal analysis. As Les Bell, a friend who is also a pilot and interested in aviation matters, has pointed out to me, "Root Cause Analysis" is no longer adequate: failure comes about because of a number of circumstances, and it may not even be a single failure - the 'tree' fans both ways!
Yes, FMEA can't be done blindly, but failure modes that pertain to the business -- which is what really counts -- and the fan-in/fan-out trees can be worked out even without the technical details. Rating the "risk" is what requires the drill-down.
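The 'tree fans both ways' idea can be sketched in a few lines. This is a minimal, hypothetical fault-tree evaluation -- the events (UPS, generator, disk, backup) and their True/False states are invented for illustration, not taken from any real FMEA:

```python
# A toy fault tree: OR gates for fan-in of alternative causes,
# AND gates for circumstances that must combine to cause a failure.

def or_gate(*branches):
    """The failure occurs if ANY contributing event occurs."""
    return any(branches)

def and_gate(*branches):
    """The failure needs ALL of these circumstances together."""
    return all(branches)

# Hypothetical basic events
ups_failed = True
generator_failed = True
disk_failed = False
backup_stale = True

# Power loss needs BOTH the UPS and the generator to fail;
# data loss needs a disk failure AND a stale backup.
power_loss = and_gate(ups_failed, generator_failed)
data_loss = and_gate(disk_failed, backup_stale)

# Top event: the business service is down if either branch fires.
service_down = or_gate(power_loss, data_loss)
print(service_down)  # True: the power branch fired, the data branch didn't
```

The point of the sketch is that no single "root cause" explains the top event: it took two circumstances combining (fan-in) to fire one branch, and either branch alone suffices (fan-out from the business's point of view).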
Which gets back to Donn Parker's point in a number of his books, though he never states it this way. The FMEA tree can be heavily pruned using diligence, as he says: standards, compliance, contracts, audits, good practices, available products. The only things he leaves out are Policy and Training. Policy gives direction and is essential to any purpose, to the choice of standards and products, and to identifying what training is needed.
All in all, the article at https://blog.anitian.com/flawed-it-risk-management/ takes a lot of words to say a few simple concepts.
I have my doubts about many things and the arguments here and in the comments section loom large.
Yes, I can see that business sees no need for an 'arms race' escalation of desktops once the basics are there. A few people, gamers, developers, might want personal workstations that they can load up with memory and high-performance graphics engines, but for the rest of us, it's ho-hum. That Intel and AMD are producing chips with more cores, more cache, integrated graphics and more -- well, Moore's Law applies to transistor density, doesn't it, and they have to do something to soak up all those extra transistors on the chips.
As for smaller packaging, what do these people think smart phones and tablets and watches are?
Gimme a break!
My phone has more computing power than was used by the Manhattan project to develop the first nuclear bomb.
These are interesting, but the real application of chip density is going to have to be doing other things serving the desktop.
And for #1 & #3, Windows will become, if not an impediment, then irrelevant.
It's possible a very stripped-down Linux can serve for #1 & #3, but somewhere along the line I suspect people might wake up and adopt a proper RTOS such as QNX, much in the same way that Linux has come to dominate #2. It is, however, possible that Microsoft will, now that Gates and Ballmer are out of the scene, adopt something Linux-like or work with Linux so as to stay relevant in new markets. The Windows tablet isn't the success they hoped for, and the buyout of Nokia seemed more to take Nokia out of the market than to become an asset for Microsoft to enter the phone market and compete with Apple and Samsung. Many big firms that do have lots of Windows workstations are turning to running SAMBA on Big Iron because (a) it's cheaper than a huge array of Windows Servers, which present reliability and administrative overhead, and (b) it's scalable. Linux isn't the 'rough beast' that Ballmer made out, and Microsoft's 'center cannot hold' the way it has in the past.
Embedding such devices in something edible only means it will end up in the stomach of the targeted user. Perhaps that is intentional, but I suspect not. Better to put the device in the base of the coffee cup.
I wonder what they consider to be a hack? The wording in the article is loose enough to mean that if someone pinged one of their servers it would be considered a hack. Perhaps they even count Google's spider indexing as a probe into their network. It makes me wonder how many 'real' hack attempts are made and how many succeed. All in all, it sounds like a funding bid!
Marcus Ranum once commented about firewall logging that an umbrella that notified you about every raindrop it repulsed would soon get annoying. I suspect the same thing is going on here. Are these 'repulsed' probes really 'need to know'? Are they worth the rotating rust it takes to store that they happened?
Oh, right, Big Data.
Oh, right, "precursor probes".
Can we live without this?
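Ranum's umbrella can be put in code. This is a toy filter, not any real firewall's log format: the event type names, the noise set, and the sample log entries are all invented for illustration, and the only claim is the principle -- drop the raindrops, keep what might need action:

```python
# Event types we treat as routine 'repulsed raindrops' (hypothetical names).
NOISE = {"probe_blocked", "ping_dropped", "scan_repulsed"}

def worth_storing(event):
    """Keep an event only if it isn't routine repelled noise."""
    return event["type"] not in NOISE

# A hypothetical slice of a firewall log.
log = [
    {"type": "probe_blocked", "src": "198.51.100.7"},
    {"type": "login_failure", "src": "203.0.113.9"},
    {"type": "ping_dropped", "src": "192.0.2.4"},
    {"type": "policy_change", "src": "console"},
]

kept = [e for e in log if worth_storing(e)]
print([e["type"] for e in kept])  # ['login_failure', 'policy_change']
```

The design question the sketch raises is exactly the one above: whether the "precursor probe" argument justifies keeping the NOISE set empty and storing everything, or whether a summary count of repulsed probes would serve Big Data just as well.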