3:35 PM -- Sometimes, what's good for one application just isn't good for another.
I've been in the game for a long time and get to see a lot of interesting tactics to prevent exploitation. Recently, I've been involved in auditing a software application, and one of the interesting aspects is how security theory varies from one technology to another.
With Websites, for instance, one of the reasons blocking by IP address has proven to be overkill is that attackers often come from addresses that many other people share (through network address translation). So you can't simply block an IP address, yet you feel you must block by one, because that's the address the attack is coming from.
Now look at desktop software. If software running on your desktop detects that someone is attempting something malicious, it may be OK to block the request completely -- unlike with a firewall. Who cares if it breaks a single instance of a single Web page? That's far better than having your local machine compromised. Rather than attempting to handle the error and limp along, let your application fail in a safe way. When the system starts behaving sanely again, your application can resume normal operation.
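The fail-safe idea above can be sketched in a few lines. This is a minimal illustration, not anything from the audit being described: the `SuspiciousRequest` exception, the path-traversal check, and the `serve` loop are all hypothetical names invented for the example.

```python
# Sketch of "fail closed" handling in a desktop app: on anything that
# looks malicious, refuse the request outright instead of trying to
# sanitize it and continue. All names here are illustrative assumptions.

class SuspiciousRequest(Exception):
    """Raised when input looks malicious."""

def handle(request: str) -> str:
    # Hypothetical validation: reject path-traversal-looking input.
    if ".." in request or request.startswith("/"):
        raise SuspiciousRequest(request)
    return f"served {request}"

def serve(requests):
    results = []
    for r in requests:
        try:
            results.append(handle(r))
        except SuspiciousRequest:
            # Fail safe: block this one request completely. Breaking a
            # single page beats compromising the local machine.
            results.append("blocked")
    return results

print(serve(["index.html", "../etc/passwd"]))
# → ['served index.html', 'blocked']
```

The point of the design is that the `except` branch does not attempt repair or recovery; it simply drops the suspect request and lets the rest of the application carry on.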
While the network can cause major outages if it fails, the worst a local process can do is force you to reboot. A denial-of-service situation on the desktop affects one user temporarily, whereas on the network it can affect tens of thousands of users, as in the case of AOL's proxies.
One is acceptable; the other can be a significant detriment to your business. Different security people work on different systems and with different paradigms, so remember that one security professional's advice on blocking exploits does not fit all circumstances, especially if they aren't working in the same industry you are.