I couldn't help but find the reasoning for this very lacking and misleading.
I engaged in this practice before I was hired by TippingPoint. If
you're an individual or organization with enough visibility (which I
probably was not, but TippingPoint is), it can really turn heads to name
vendors whose products have had vulnerabilities go unremedied for
extended periods of time. The sheer number of vulnerabilities acquired
and the severity of the issues that ZDI deals with mean that vendors who
end up with vulnerability reports consistently lagging in queues at
TippingPoint may have a PR problem on their hands. The researchers who
contribute to the ZDI are aware of this, and as a result, this type of
"pipeline" information was widely requested of us.
We also gain from having the TippingPoint name associated with the
publicity that the public reports generate. The calculation is not one of:
"If we don't buy TippingPoint, we'll be compromised."
This is a very misleading argument, and I don't believe it to be responsible or wise. If they were serious about holding a vendor accountable, it would make far more sense to tell the vendor that after some period of time, they will release the vulnerability details. While that can be painful for the vendor, sometimes it is needed as a way to hold their feet to the fire. Claiming to know about a secret vulnerability that customers just so happen to be immune to if they purchase your product is a bit questionable. There is also the issue of TippingPoint assigning a severity to the issues. I would like to believe they will be objective when assigning a severity, but security researchers tend to rate an issue as more severe than it really is in an attempt to generate press. I can't see a purpose in this behavior outside of deliberate fearmongering, or letting the marketing department drive your security team.
I am a fan of releasing full details about any given vulnerability, but I also believe that a sensible embargo date is warranted in many instances. The goal of any vendor should be to keep their customers as safe as possible. The goal of an organization such as TippingPoint is a bit murky.
TippingPoint sells a service intended to keep their customers safe from zero-day vulnerabilities (unknown issues for which no fix exists). The obvious way to accomplish this is to be the organization that is discovering the new vulnerabilities. One of the ways TippingPoint does this is to solicit vulnerabilities from third parties for money (I have no problem with this; if that's how TippingPoint wants to spend their money, that's fine with me). Their hope is that by being the organization discovering various vulnerabilities, there will be more incentive to purchase their Intrusion Prevention System. If they waited to release information regarding an issue until after the vendor fixed it, there would be less incentive to purchase their product. It's a bit like offering to sell water to a thirsty man versus one who just had a drink.
It is important to note that I'm not against what TippingPoint is doing. I have a great deal of respect for vulnerability researchers; I work with them every day. What I have a problem with is TippingPoint pretending to take the moral high ground in an effort to frighten people into purchasing their product. One of the goals I see in the open source world right now, with technologies such as SELinux, Exec-Shield, and gcc/glibc hardening, is that the computer itself should be the IPS, not some fancy overpriced box sitting outside the firewall.
I admit I'm growing tired of this argument and wish they had a different one. First we should get a few things straight:
Hard things aren't always bad. Easy things aren't always good.
My favorite example is a bank vault versus a screen door. A bank vault is hard to install and hard to open (legally or otherwise), but it is very secure. A screen door is easy to install and easy to open, but fairly insecure. If you had a big pile of money, would you rather keep it safe behind a screen door or in a bank vault? While I won't speak for everyone, I personally prefer to entrust my money to a bank.
Another amusing analogy might be to claim that BASIC is a "better" language than C since BASIC is easier than C.
AppArmor comes with a tool that can investigate what a running program is doing and create a policy based on what it observes. In theory this sounds great, but it's not without problems. The example policy is for Thunderbird (a mail client, for those of you who don't know).
The important bit here is this:
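A fragment along these lines gives the idea (this is illustrative only; the binary path and surrounding rules are my assumptions, not the actual tool output):

```
# Illustrative AppArmor profile fragment -- not the real generated policy.
/usr/lib/thunderbird/thunderbird {
  #include <abstractions/base>

  # Learning mode observed a privilege transition and recorded it,
  # so the generated policy lets the mail client assume root.
  capability setuid,
  capability setgid,

  /usr/lib/thunderbird/** r,
  owner @{HOME}/.thunderbird/** rw,
}
```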
That means that for some reason AppArmor has decided that a client-side mail client needs the ability to switch to the superuser (root). It's likely the result of a bug in Thunderbird. I don't disagree that tools can help make things easier, but it's rather misleading to claim your tools are something they are not. Whenever a policy is tool-generated in this manner, it will require a knowledgeable engineer to interpret the outcome.
I won't say SELinux is perfect, but I think it's the correct approach for a hard problem. I would much rather see a comparison of SELinux versus AppArmor based on technical merit instead of marketing gibberish.
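One concrete place such a comparison could start is how the two models express access: SELinux decisions follow labels attached to objects, while AppArmor decisions follow pathnames. A minimal sketch, with made-up type names on the SELinux side:

```
# SELinux (label-based): the file's label governs access no matter
# where the file is moved or linked. Type names are illustrative.
allow thunderbird_t user_home_t:file { read write };

# AppArmor (path-based): the pathname governs access; the same data
# reached through a different path is not covered by this rule.
owner @{HOME}/.thunderbird/** rw,
```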