Bug bounties: Bad dog! Have a treat!

Bug bounty programs are probably very cost-effective for software vendors, but they reward bad behavior

Last month at the FIRST conference (the Forum of Incident Response and Security Teams) in Bangkok, Microsoft announced that it's joining the "bug bounty" crowd. Some might say, "Finally!" but I'm not convinced it's a positive move. No doubt, there are strong arguments both for and against bug bounty programs, but in the long run, I'm not a fan.

Bug bounty programs are used by quite a few software organizations to encourage their customers and the general public to report security vulnerabilities directly to them. Most such programs encourage (or require) a responsible disclosure process, but in the end the vulnerability and its remediation are published to the world. So what's the big deal? Let's consider the pros and cons a bit.

In favor of bug bounties

Bug bounty programs are in essence an extension of security testing programs. They are an ad hoc form of outsourcing a company's security testing -- to a community of people who likely aren't under NDA and, as non-employees, have absolutely no fiduciary responsibility to the software company, I should add. From the company's standpoint, they are relatively cost-effective, since the vendor ends up paying only for actual vulnerabilities, not for time spent trying (and failing) to find the bugs.

In Microsoft's case, it will pay up to $100,000 for new techniques that bypass the exploit mitigations built into the latest version of Windows, plus up to an additional $50,000 for defensive ideas that address such a bypass. (Microsoft also offers up to $11,000 for critical vulnerabilities found in the Internet Explorer 11 Preview.) I suspect these amounts are far less than Microsoft would pay its own employees for the equivalent vulnerability exploration and mitigation development work. Any vulnerability explorer will no doubt agree that far more time is spent searching and failing than searching and succeeding. Bug bounty sponsors have found a way to make all that searching-and-failing time cost-free to the software companies.

I'm confident that companies with bug bounty programs have weighed these costs carefully. Having held off on a public program for so long, Microsoft seems likely to have put a great deal of thought into justifying its decision. Of course, I don't have access to the actual factors that influenced these decisions, but it stands to reason that the cost structure is beneficial for the sponsoring companies.

Against bug bounties

Part of what makes me dislike bug bounty programs is that they reward bad behavior. I can't help but think that the bug finders are in essence holding a metaphorical gun to the heads of the software companies, saying, "Pay up, or I'm going to publish this vulnerability to the world." Perhaps that explains Microsoft's reluctance until now to embrace bug bounties. Let me explain why I think bug bounty programs are a doggie treat for bad pooches.

Long ago and far away, I spent quite a bit of time working with software development organizations, reporting vulnerabilities and helping test and announce their mitigations. Back then, there was a strong movement toward full disclosure of software vulnerabilities. It was believed that reporting vulnerabilities to one's local computer security incident response team (CSIRT) was an effective form of responsible disclosure. The CSIRT acted as a middleman, oftentimes protecting the privacy of the original reporter as it helped coordinate and communicate with the software developer.

At some point, a few people started reporting vulnerabilities directly to the software development organization involved. But this still constituted responsible disclosure. In either scenario, disclosure of the vulnerability -- and, oftentimes, example exploit code -- was inevitable. Responsible disclosure drove both parties to agree on a timeline, with everyone recognizing that the vulnerability would eventually be published. Responsible disclosure rectified the bad blood that existed in the early days of vulnerability handling, when software vendors would threaten CSIRTs with lawsuits if they published vulnerability information. (I personally experienced this.)

But then that bad behavior I mentioned started to creep into the process. Some vulnerability reporters, frustrated by not getting a prompt response from the developer organization, would threaten to go public. The horrified software vendor would then agree to pay the reporter to keep the information private for a defined period of time.

Most parents will tell you that if you give a crying child candy to get him to be quiet, the crying may stop, but the child has learned an effective way to get candy whenever he wants it. The parent has rewarded bad behavior, and will pay a price down the road.

I recently discovered and reported a software vulnerability in an iOS app developed by a major financial services corporation. I found a case where sensitive customer information was being stored locally on the device without the benefit of any real data protection; an attacker with access to the device could compromise the user's financial data in minutes.
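To make concrete what "without the benefit of any real data protection" means in a case like this, here is a minimal Swift sketch contrasting the insecure pattern with two safer alternatives iOS provides: file-protection classes and the Keychain. The function names and the stored value are hypothetical illustrations, not the institution's actual code.

```swift
import Foundation
import Security

// Insecure pattern (illustrative only): writes sensitive data to the app's
// Documents directory with no file-protection class, so it is readable by
// anyone who can read the file system.
func saveAccountNumberInsecurely(_ accountNumber: String) throws {
    let url = FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("account.txt")
    try accountNumber.data(using: .utf8)!.write(to: url, options: .atomic)
}

// Better: ask iOS to keep the file encrypted until the device is unlocked.
func saveAccountNumberWithFileProtection(_ accountNumber: String) throws {
    let url = FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("account.txt")
    try accountNumber.data(using: .utf8)!
        .write(to: url, options: [.atomic, .completeFileProtection])
}

// Better still for small secrets: store them in the Keychain, restricted to
// this device and accessible only while it is unlocked.
func saveAccountNumberInKeychain(_ accountNumber: String) -> OSStatus {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: "accountNumber",
        kSecAttrAccessible as String: kSecAttrAccessibleWhenUnlockedThisDeviceOnly,
        kSecValueData as String: accountNumber.data(using: .utf8)!
    ]
    SecItemDelete(query as CFDictionary) // replace any existing item
    return SecItemAdd(query as CFDictionary, nil)
}
```

On a passcode-protected device, the Keychain variant is generally the right home for small secrets such as tokens or account numbers, since the items remain encrypted even if the file system is copied off the device.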

I reported the vulnerability to the financial institution; I thought that was the right thing to do. I made sure the company took the problem seriously. I didn't threaten it with irresponsible disclosure, and I provided it with all the information it would need to verify the problem. It did, and it promised a fix. In less than two months, the fix was provided, and the software was updated on Apple's App Store.

I didn't want or seek any form of remuneration or recognition in doing this. I was grateful that the company took the problem seriously and fixed it. Nowhere in the publication of the software update did my name appear. I'm fine with that.

I can't help but think that's a more positive way of handling software vulnerabilities, on both sides of the equation.

But if a software organization shouldn't rely on the public to report vulnerabilities, what should it do? I'd suggest spending more time and money on security testing internally. Build and test it right in the first place. That's not foolproof, I know, and some vulnerabilities won't be caught before the software is released. But when that happens and someone else finds the vulnerability, we all deserve a better system than having to pay that person to disclose the problem. That is a flawed system, folks -- though I know that's not a popular opinion.

With more than 20 years in the information security field, Kenneth van Wyk has worked at Carnegie Mellon University's CERT/CC, the U.S. Department of Defense, Para-Protect and others. He has published two books on information security and is working on a third. He is the president and principal consultant at KRvW Associates LLC in Alexandria, Va.
