My digital camera uses EXIF to convey a vast amount of contextual information and imprint it on each photo: date, time, the camera model, shutter speed, aperture, flash. The camera has GPS, so it can record location and elevation too. The EXIF protocol also allows for vendor-specific information; it is extensible and customizable.
Unless and until we have an 'EXIF' for IoT, it's going to be lame and useless.
What is plugged in to that socket? A fan, a PC, a refrigerator, a charger for your cell phone? What's the rating of the device? How is it used? What functions other than on/off can be controlled?
Lame lame lame lame.
At the very least, this will apply 'many eyes' to some of the SSL code, and so long as the pruning isn't wholesale slash-and-burn, cutting it back may prove efficacious for two reasons.
Less code can be simpler code, with decreased likelihood of there being a bug due to complexity and interaction.
Getting rid of the special cases such as VMS and Windows also reduces the complexity.
POSIX I'm not sure about; in many ways POSIX has become a dinosaur. Quite a number of Linux authors have observed that if you stop being anal about POSIX you can get code that works, and a simple #ifdef can take care of portability. In the 90% case there isn't a lot of divergence between the flavours, and in the 99% case the #ifdef can take care of that.
Whether SSH fits into the 90% or the 99% I don't know. The APIs for 'random' and 'crypto' are grey areas where implementations differ, but also ones where POSIX seems to be at its most anal and 'lowest common denominator'. I suspect this is a place where the #ifdef route will allow more effective implementations.
We shall see what emerges, but on the whole the BSD team have a reputation for good security practices so I'm hopeful about the quality.
I'd be interested to see their testing approach.
He makes the case that once you put a computer in something it stops being that something and becomes a computer.
Camera + computer => computer
The latest intelligence on Al-Qaeda, a high profile Child Protection
report and plans for policing the London 2012 Olympics; three very
different documents with two things in common: firstly, they all
contained highly confidential information and secondly, they were all
left on a train.
Or maybe "Strangers on a Train"
Our latest research reveals that two thirds of Europe’s office commuters
have no qualms about peering across to see what the person sitting next
to them is working on; and more than one in ten (14 per cent) has
spotted confidential or highly sensitive information.
Perhaps that's cynical and pessimistic and a headline grabber, but then that's what makes news.
What I'm afraid of is that things like this set a low threshold of expectation, that people will think they don't need to be better than the herd.
Based on the demonstrated persistence of their enemies, I have a lot of respect for what Israeli security achieves.
Back to Verb vs Noun.
His point about baggage claim is interesting. It strikes me that this is the kind of location serious terrorists, the ones who operated in Europe through the last century, might attack: not just dramatic, but it shows how ineffectual airport security really is. And what will the TSA do about such an attack? Inconvenience passengers further.
So what's the best file system to use for archiving and data storage rather than the normal usage?
Won't that depend on ...
a) the nature of the archive files
If this is simply to be a 'mirror' of the regular file system, a 1:1 file mapping, then there is no need for the specific optimizations there would be if, for example, each snapshot were a single large file, a TAR or CPIO image say. You then have to look at what you are archiving: small files, large files ... Archiving mail as mbox is going to be different from archiving as maildir. For example, the latter is going to consume a LOT of inodes, and that affects how one would format an ext2, ext3 or ext4 file system, but is not relevant on a ReiserFS or BtrFS.
b) the demand for retrieval from the archive
This is actually a can of worms. You might not think so at first, but I've seen businesses put out of service because their 'backup' indexing was inadequate when the time came to retrieve a specific archive file of a specific date, as opposed to restoring the whole backup. You need to be driven by your business methods here, and that in turn will determine your indexing and retrieval, which will determine your storage format.
It's business driven, not technology driven. Why else would you be archiving?
Now while (b) is pretty much an 'absolute', (a) can end up being flexible. You HAVE to have a clear way of retrieval, otherwise your archive is just a 'junk room' into which your file system overflows.
That (a) can be flexible also means that the optimization curve is not clearly peaked. Why else would you be asking this question? What's the worst situation if you choose ReiserFS rather than extN? The size of the file system? The number of inodes?
But if your indexing is broken or inadequate, you've got a business problem.
An article on LinkedIn entitled "The Truth about Practices" started a discussion thread with some of my colleagues.
The most pertinent comment came from Alan Rocker:
I'm not sure whether to quote "Up the Organisation", ("If you must have a policy manual, reprint the Ten Commandments"), or "Catch-22" (about the nice "tidy bomb pattern" that unfortunately failed to hit the target), in support of the article. Industry-wide metrics can nevertheless be useful, though it's fatal to confuse a speedometer and a motor.
However, not everyone in the group agreed with our skepticism and the observations of the author of the article.
And Anton, aren't the controls you advocate so passionately best practices?
NOT. Make that *N*O*T*!*!*! Even allowing for the lowercase!
So I need to compile a list of ALL assets, information or otherwise,
That leads to tables and chairs and powerbars.
OK so you can't work without those, but that's not what I meant.
Physical assets are only relevant in so far as they are part of information processing. You should not start from those; you should start from the information and look at how the business processes make use of it. Don't confuse your DR/BC plan with your core ISMS statements. ISO Standard 22301 addresses that.
This is, ultimately, about the business processes.
I often explain that Information Security focuses on Information Assets.
Some day, on the corporate balance sheet, there will be an entry
which reads, "Information"; for in most cases the information is
more valuable than the hardware which processes it.
-- Adm. Grace Murray Hopper, USN Ret.
Some people see this as a binary absolute - they think that there's no need to assess the risks to the physical assets, or that somehow this is automatically considered when assessing the risk to information.
The thing is there are differing types of information and differing types of containers for them.
I get criticised occasionally for long and detailed posts that some readers complain treat them like beginners, but sadly if I don't I get comments such as this in reply:
Data Loss is something you prevent; you enforce controls to prevent data
leakage. DLP can be a programme, but one I find very difficult to support
with a policy.
Does one have visions of chasing escaping data over the net with a three-ring binder labelled "Policy"?
Let me try again.
Policy comes first.
Without policy giving direction, purpose and justification, and supplying the basis for measurement, quality and applicability (never mind issues such as configuration), you are working on an ad-hoc basis.
On the ISO27k forum one user gave a long description of his information-gathering process but expressed frustration over what to do with it all, the assets, the threats and so forth, and how to turn it into a risk assessment.
It was easy for the more experienced of us to see what he was missing.
He was missing something very important: a RISK MODEL.
The model determines what you look for and how it is relevant.
Java 7 Update 10 and earlier contain an unspecified vulnerability
that can allow a remote, unauthenticated attacker to execute arbitrary
code on a vulnerable system.
By convincing a user to visit a specially crafted HTML document,
a remote attacker may be able to execute arbitrary code on a vulnerable system.
Well, yes .... but.