I think that title expresses the problem very well. Continue reading Can We Secure the ‘Internet of Other People’s Things’?
Let us pass over the “All A are B” illogic in this and consider what we’ve known all along. AV doesn’t really work; it never did.
Signature-based AV, the whole “I’m better than you cos I have more signatures in my database” approach to AV and AV marketing that so bedazzled the journalists (“Metrics? You want metrics? We can give you metrics! How many do you want? One million? Two million!”), is a losing game. Polymorphic malware and its kin sidestep signature matching entirely. The boundary between what actually works and what works for marketing blurs.
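To see why it’s a losing game, here is a toy illustration (the “signature database” and “payload” are invented for the sketch, taken from no real product): a naive scanner matches byte signatures, and a one-line XOR re-encoding of the same payload defeats it.

```python
# Toy illustration: why byte-signature matching is a losing game.
# The signature and payload below are made up purely for this sketch.

SIGNATURES = {b"EVIL_PAYLOAD_V1"}  # what a signature-based scanner "knows"

def scan(blob: bytes) -> bool:
    """Return True if any known signature appears in the blob."""
    return any(sig in blob for sig in SIGNATURES)

def polymorph(payload: bytes, key: int = 0x5A) -> bytes:
    """Trivial 'polymorphism': XOR-encode the payload with a one-byte key.
    A real dropper would prepend a tiny decoder stub; either way the bytes
    on disk no longer contain the signature."""
    return bytes(b ^ key for b in payload)

original = b"...EVIL_PAYLOAD_V1..."
mutated = polymorph(original)

print(scan(original))  # True  - the signature matches
print(scan(mutated))   # False - same behaviour, unrecognisable bytes
```

Adding the mutated bytes to the database just restarts the race: the next key, or a per-copy key, produces yet another “new” sample, which is where the million-signature metrics come from.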
So then we have the attacks on the ‘human firewall’, or whatever the buzzword is in this month’s geek-Vogue magazines, whatever the latest fashion is. What’s that? Oh right, the malware writers are migrating to Android, the industry commentators say. Well, they’ve tried convincing us that Linux and MacOS were under attack and vulnerable, despite the evidence. Those same vendor-driven campaigns – yes, vendors tried convincing Linux and Apple users to buy AV products, arguing that because Linux and MacOS ran on the same chip as Microsoft they were just as vulnerable as Microsoft – gave up dunning the journalists and advertising when they found that the supposed market wasn’t convinced and didn’t buy.
That large software production is buggy surprises no-one. There are methods for producing high-quality code, as NASA has shown on its deep-space projects, but they are incompatible with the attitudes of commercial software vendors. They require a discipline that seems absent from many younger coders, the kind that so many commercial firms hire on the basis of cost and who are driven by ‘lines of code per day’ metrics, feature-driven popularity and the ‘first to market’ imperatives.
So when I read about, for example, RSA getting hacked by means of social engineering, I’m not surprised. Neither am I surprised when I hear that so many point-of-sale terminals are, if not already infected, then vulnerable.
But then all too many organizations take a ‘risk-based’ approach that just is not right. The resistance US firms put up to implementing chip-and-PIN credit card technology while the rest of the world adopted it is a case in point. “It was too expensive” – until it was more expensive not to have implemented it.
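The arithmetic behind that kind of ‘risk-based’ decision is plain expected-loss reasoning; the figures below are invented purely to show the shape of it, not taken from any real deployment.

```python
# Back-of-the-envelope expected-loss comparison (all figures invented).
# A control is "too expensive" only while the expected breach cost is lower.

rollout_cost = 2_000_000       # one-off cost of deploying chip-and-PIN
annual_breach_prob = 0.30      # assumed chance of a card-fraud incident/year
breach_cost = 10_000_000       # assumed cleanup, fines, reputation per incident
years = 3                      # planning horizon

expected_loss = annual_breach_prob * breach_cost * years

print(expected_loss)                 # 9000000.0
print(expected_loss > rollout_cost)  # True: cheaper to implement after all
```

The “too expensive” verdict usually comes from comparing the rollout cost against a single year, or against an optimistic breach probability; stretch the horizon or let the incident actually happen and the sign flips.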
At the very least, this will apply ‘many eyes’ to some of the SSL code, and so long as the pruning isn’t wholesale slash-and-burn, cutting the code back may prove efficacious for two reasons.
Less code can be simpler code, with decreased likelihood of there being a bug due to complexity and interaction.
Getting rid of the special cases such as VMS and Windows also reduces the complexity. Continue reading OpenBSD forks, prunes, fixes OpenSSL
I get criticised occasionally for long and detailed posts; some readers complain that I treat them like beginners. But sadly, if I don’t, I get comments such as this in reply:
Data Loss is something you prevent; you enforce controls to prevent data leakage. DLP can be a programme, but I find it very difficult to support with a policy.
Does one have visions of chasing escaping data over the net with a three-ring binder labelled “Policy”?
Let me try again.
Policy comes first.
Without policy giving direction, purpose and justification, and supplying the basis for measurement, quality and applicability (never mind issues such as configuration), you are working on an ad-hoc basis. Continue reading Does ISO 27001 compliance need a data leakage prevention policy?
On the one hand, I recall a book titled “In Search of Stupidity“, which I strongly recommend reading; it’s about the hi-tech years that this article covers and takes a different view of how “quality” affected market share.
On the gripping hand, I also lived through the years that book describes and can add detail. One detail is this: MS-Word was crap. Most offices and secretaries preferred WordPerfect, but MS-Word outsold WP by aggressive marketing – nothing else. The quality of MS-Word was the pits and it’s still full of bugs. Each release formatted historic documents in a different way, which is a no-no in the legal (and other) professions. Its handling of nested indents in style sheets is a mess, so much so that many industries, such as MILSPEC contractors, simply don’t use style sheets.
I’m dubious about his claim that Linux has fewer add-on products.
Heinlein has a comment about democracy being like adding zeros.
If you look at those supposed products for Windows you’ll find many of them are “me-too” duplicates. We haven’t reached that stage yet with portable devices, but we are getting there. When we do, yes, you have one market leader; when people are spoilt for choice like that, a review or a friend’s recommendation can tip the balance, and that too can propagate. This has little to do with ‘quality’ and a lot to do with a cross between humans’ ‘herd instinct‘ and the way crystals form in a super-saturated medium.
The use of third-party code in applications represents a big security
risk for companies, according to a study from security vendor Veracode.
but they go on in such a way as to make me wonder what they mean by ‘third party’. Some of what they discuss seems to come from the primary supplier. Now if the primary supplier contracted out work, how are you to know?
Companies often use code libraries that have been developed from either
open-source projects or outsourcing organizations that have been
contracted to create applications…
I wouldn’t be so quick to disparage open source projects. Some of them have demonstrated much better code quality, much better reliability and security than commercial products from first-tier vendors.
“Variable quality“? Well yes, but that goes for the products from first tier vendors. “Ship at the end of the month regardless”. Yes, I’ve seen that. “Release to satisfy the investors/wall street”. I’ve seen that too. Open Source doesn’t have those constraints. Continue reading Third-party code putting companies at risk
A colleague who had the opportunity to restructure the role of his InfoSec department asked for advice about defining the roles and duties and how to make his department more effective.
Being very conservative in some ways, I recommended a traditional Separation of Duties. It begins with what might be described, for lack of a better term, as “the separation of InfoSec and IT“.
In the limiting case:
– IT “makes it so”
– Audit makes sure that they did.
In other words, InfoSec addresses the areas you’ve expressed concerns about responsibility for by setting policy, standards and requirements (and perhaps compliance). IT is responsible for the implementation: the hardware, the software, its installation and maintenance.
It can be an easy sell if you approach it properly.
You: See that firewall?
IT: Yea, what about it?
You: It’s on the network, right?
IT: Yea, where the f*** do you think it should be? You bu**ers are always interfering.
You: And you guys take care of the network and stuff on the network?
IT: When you bu**ers don’t interfere.
You: Well we’re not going to. It’s yours. We won’t touch it. We won’t go off and buy stuff and put it on your network.
IT: Are you serious? Can I have that in writing?
You: Yes. I’ll copy you on the roles & responsibilities and separation of duties documents. As well as the policies, compliance and audit requirements.
Smile when you say that, but don’t make it a predatory smile.
Yes, that makes it sound easy, but reality never is, is it?
That’s why people buy books that offer the same kind of advice.
If you really want to work it through, try the books by The Harvard Negotiation Project:
* Getting to Yes: Negotiating Agreement Without Giving In
* Difficult Conversations: How to Discuss what Matters Most
* Getting Past No: Negotiating Your Way from Confrontation to Cooperation
* Getting Ready to Negotiate
Consultants, that is people with no formal authority in the hierarchy, may also appreciate
(Another time I’ll discuss the stupid idea some people in the recruiting profession have that because ‘consultants’ don’t occupy a line-management role they have no management skills or experience.)
The technical staff in the IT department may be perturbed in a number of ways. They might feel that their ‘freedoms’ are being removed and they are being ‘policed’. Make it clear to them that YOU are not policing them. AUDIT is policing them. That is the correct corporate role for audit.
InfoSec is writing the specs – the policy, the requirements – and doing it in cooperation not only with IT but also with the other stakeholders in the business, to make sure that the IT department is serving the needs of the business and not just collecting expensive “Toys For Boys“.
This is no different from a software or hardware development situation or, for that matter, the original set-up and procurement that went into IT.
Someone did a needs analysis (even if it was only guesswork and estimates on a paper napkin), wrote up a requirement and handed it over to the people with a Picard-like order to “make it so”.
I appreciate that this ‘formal’ approach is being deprecated by ‘agile’ methodologies where the techies work without any formal management structure, without specifications or formal requirements, writing and running their own tests, all in the name of “Web 2.0”.
However, the original idea was to set up a formal system of division of responsibility and duties, and to play to strengths and specializations.
Many people think that by fitting in with a formal system they are giving up ‘freedom’. They don’t see the power of having all that organization (and buying power) behind them, of having defined roles that offload the detailed housekeeping that slows them down. They think only in terms of the Marxist cant about oppressive ‘production lines’ that dumb the worker down into an automaton.
This is short-sighted, and they’d know it if they stopped and thought about it.
Let’s look at an example from IT. The compiler – a tool that takes a high-level requirement and specification and converts it to fiddly assembly code – is one they accept. But some of us are old enough to remember the arguments against compilers: that they couldn’t produce code of the same quality as a good assembly programmer.
Perhaps that was true 30 years ago, but it’s not now. Now compilers are ‘expert systems’ for code generation on very complex CPUs, instruction streams and branches. Programmers recognise this and accept it, often without thinking very deeply about it: they just code in the HLL and the compiler “makes it so” that it runs on the machine. But even 30 years ago a compiler could produce assembly code to match a ‘good’ assembly programmer – at 10 to 100 times the speed – and, in the hands of a middling programmer who understood the subject matter of the application better than he did the hardware, did a very good job of delivering the application program.
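A crude flavour of that ‘expert system’ rule application can be sketched in a few lines. This is a hypothetical three-instruction mini-IR, nowhere near a real code generator: peephole rules mechanically rewrite expensive instructions into cheaper ones, the way a seasoned assembly programmer once did by hand.

```python
# Toy peephole optimizer: rewrite rules a compiler applies mechanically.
# The instruction set ("mul", "add", "shl", "nop") is a made-up mini-IR.

def _is_power_of_two(n):
    return isinstance(n, int) and n > 1 and n & (n - 1) == 0

def peephole(instr):
    """Apply strength-reduction rules to one (op, dst, src) instruction."""
    op, dst, src = instr
    if op == "mul" and _is_power_of_two(src):
        return ("shl", dst, src.bit_length() - 1)   # x * 8  ->  x << 3
    if op == "mul" and src == 1:
        return ("nop", dst, 0)                      # x * 1  ->  nothing
    return instr                                     # no rule applies

program = [("mul", "r0", 8), ("add", "r1", 3), ("mul", "r2", 1)]
optimized = [peephole(i) for i in program]
print(optimized)
# [('shl', 'r0', 3), ('add', 'r1', 3), ('nop', 'r2', 0)]
```

A real compiler applies hundreds of such rules, plus register allocation and instruction scheduling, per target CPU; the programmer never sees any of it. That is exactly the offloading of specialized knowledge the formal division of duties provides.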
This is a classic example of abstracting and encapsulating specialized knowledge and division of labour.
I have no doubt that today’s programmers would be upset if you took away their compilers.
What I am suggesting in this separation of duties between InfoSec, IT and Audit is no different from a doctor writing a prescription and the patient taking it to an apothecary to be filled. The apothecary isn’t doing the diagnosis or needs analysis, but he still plays an essential role.
The “you” in InfoSec has to understand business needs, regulations and compliance issues. The “them” in IT have to understand the details of the technology they work with. Each has a role to play.
It’s when people start interfering with these responsibilities, this ‘separation of duties’, that things get upset.
Related: “Paid to be paranoid” (infosecblog.antonaylward.com)