You betcha it's not!
There are GOOD practices for deploying SNMP.
The BEST practice is to avoid v2.
If you must do SNMP, then use v3, if you are feeling geekish.
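If you do go down the v3 road, here is a rough sketch of what that looks like with net-snmp (one common implementation, my assumption, not the only choice); the user name and both passphrases are placeholders:

```
# snmpd.conf fragment -- an SNMPv3 user with authentication AND encryption.
# SHA for authentication, AES for privacy; avoid the older MD5/DES pair.
createUser monitor SHA "auth-passphrase" AES "priv-passphrase"

# Read-only access, and require the authPriv security level,
# so unauthenticated or unencrypted queries are refused.
rouser monitor priv
```

The point of v3 over v2c is exactly those two passphrases: v2c "security" is a community string sent in the clear.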
However my personal view is DON’T DO IT.
Why? Because I’m not an obsessive-compulsive.
Marcus Ranum once commented on such logging in his firewalls list and observed that an umbrella that told us about every raindrop that hit it would soon get its voice-box disabled. I have the same attitude to cars that talk to me.
“But …” you say.
Once at an interview for a project management position I was asked if I was willing to work overtime when the project over-ran. I said no, because I don't let over-runs happen. (It's risk and scope management.)
Well, actually, rather than ask me 'why/how is that', the interviewers started arguing with one another. This told me a lot about the working environment.
So when people ask me about SNMP I ask why, and the answers I've had to date don't make me very happy about the 'working environment'. I've not had people tell me "it's about risk management", such as advanced warning of disk failures. No, it's about collecting information on raindrops for various management dashboards.
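For contrast, here is a minimal sketch of what the risk-management version of monitoring might look like: scanning the system log for the handful of messages that actually precede a disk failure, and ignoring the raindrops. The pattern strings and the sample log lines are my own assumptions; real kernels and drivers word these differently, so tune the list for your environment.

```python
import re

# Hypothetical patterns that often show up before a disk dies.
# These exact strings are assumptions -- adjust for your own syslog.
DISK_WARNING_PATTERNS = [
    re.compile(r"I/O error", re.IGNORECASE),
    re.compile(r"SMART.*(fail|warn)", re.IGNORECASE),
    re.compile(r"media error", re.IGNORECASE),
]

def disk_warnings(log_lines):
    """Return only the lines that look like early disk-failure warnings."""
    return [line for line in log_lines
            if any(p.search(line) for p in DISK_WARNING_PATTERNS)]

# A made-up syslog excerpt: two disk warnings and one raindrop.
sample = [
    "May  1 03:12:07 host kernel: sd 0:0:0:0: [sda] I/O error, dev sda",
    "May  1 03:12:08 host smartd[812]: Device /dev/sda, SMART Prefailure warning",
    "May  1 03:12:09 host sshd[901]: Accepted publickey for alice",
]
for line in disk_warnings(sample):
    print(line)
```

Three lines in, two lines out, and no agent, no MIB, no dashboard required.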
A marketing manager I once knew took me for a drive in his top-end Porsche. It had sensors for just about everything, like the pressure and temperature in each tire (this was America so they were 'tires' not 'tyres'), and much more. Now how is this 'better' than all the cars that don't? When I get into my car I, as a now unconscious reflex, check for things like dings, low tires, spreading pools of liquid, suspension droop, what have you. I admit that in the case of an aeroplane you can't "stop, get out and look".
So when it comes to your network, which applies? Is it a Porsche and you're being obsessive, a plane and you need to monitor what you can't see, or is basic logging enough?
In this I agree with Padgett Peterson – you should design your network to be robust and resilient. Segmentation is cheap; local servers are cheap; redundancy is cheap. This is not the 1950s, when we had to do careful resource planning because every byte of memory cost hundreds of dollars. Most of us, individuals and firms, have closets full of old machines that could be used as firewalls, ticket servers, backups, whatever.
Complexity? Partitioning things out so that 'each thing does just one thing and does it well', and eliminating common points of failure, is the opposite of complexity.
Of course this runs against many modern trends like 'overloading', which has been dressed up in fancy terms like 'virtualization'.
But then of course perhaps your management is more concerned about ‘fashion’ and coloured ‘dashboards’, in which case that probably trumps such mundane matters as security, reliability, flexibility, and effectiveness.
 I get told of such things by the system logs.
The “Other Anton” has some great presentations on logging which I strongly recommend.
A lot of virtualization has more to do with the inability of versions of Windows to multi-task as well as UNIX and Linux have always been able to.