This month in Canadian politics, we've been inundated with debates and arguments about what is right or wrong for our country. Within the security industry there remains an ongoing debate about how to handle the disclosure of security flaws, particularly when releasing proof of concepts (POCs).
I think, given some recent events on the AntiOnline website, it may be worthwhile to address the debate and, more importantly, how companies should react in these situations. At some point, your company will encounter an undocumented vulnerability in your products or in those you rely upon, and you will need to know how to address it.
First, let's look at how flaws or bugs are disclosed to the security community at large. In essence, there are three methods in use today. There are variations on some of them, but it really boils down to this: controlled or moderated disclosure with no POC, full disclosure with POC, and "full disclosure, but".
Disclosure Under Control
Controlled or moderated disclosure is the method presently being employed by the SecurityFocus BugTraq lists. The BugTraq lists only disclose details on bugs that have been approved for release by the software vendor, usually after a patch is in circulation or has been announced. Microsoft is a huge supporter of this methodology for alerting the IT community at large.
There are advantages and disadvantages to this arrangement, of course. On the plus side, bug information released in this manner tends to be of a higher level and there is often no POC to speak of. This eliminates attacks by script kiddies, the most prevalent form of attacker out there. This can also serve to control or lessen the impact of a virus that may be waiting in the wings.
The downside, however, is that many vendors don't release this information until two or three months after the fact. Withholding details does protect users to a degree, but while many exploit and virus writers simply reverse engineer patches, others discover flaws independently and will gladly exploit one before it becomes public knowledge. This means some enterprise networks can be vulnerable to a major attack without knowing it. The further risk is that the enterprise not only suffers an attack but has private customer data exposed and, in turn, faces a lawsuit over privacy violations (or worse).
Opening a Can of Worms
On the other end of the spectrum is full disclosure. The mailing list Full Disclosure lives up to that point of view. There is limited moderation, if any, and pretty much any topic is fair game. It's not unusual to see POCs released on this list even before the manufacturer is aware of the issue.
Few organizations like this kind of thing, except for the media. The disadvantage is obvious: when a POC is released, the software developer looks unprepared or uncaring. It also puts the developer on edge against the person who found the flaw. This kind of situation rarely leads to a good resolution, if any.
Additionally, releasing the POC to the population at large opens the door for script kiddies to find it and try it. This can result in virus outbreaks and severe compromises by attackers with little to no clue. A serious coder will purposely introduce flaws into a released POC to reduce this effect, but that's no guarantee the script kiddies won't figure it out.
But there is an advantage, which is the other edge of full disclosure's double-edged sword. Because it's full disclosure, nothing is hidden from any administrator, nor does an administrator have to wait for the manufacturer to release the details and a patch. An administrator can create an interim fix until an official release is out, so full disclosure may give an alert administrator a heads-up before the flaw really becomes an issue (e.g., blocking ports, looking for specific traffic).
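As a concrete illustration of "looking for specific traffic," an alert administrator could scan web server logs for the request pattern a published POC uses, long before an official patch exists. This is only a minimal sketch: the signature, the log format, and the function names here are illustrative assumptions, not taken from any real advisory.

```python
import re

# Hypothetical signature derived from a published POC. The pattern and
# the log format below are illustrative assumptions, not a real advisory.
EXPLOIT_SIGNATURE = re.compile(r"GET /cgi-bin/\S*\.\./\.\./", re.IGNORECASE)

def suspicious_lines(log_lines):
    """Return the log lines matching the hypothetical exploit pattern."""
    return [line for line in log_lines if EXPLOIT_SIGNATURE.search(line)]

sample_log = [
    '10.0.0.5 - - "GET /index.html HTTP/1.0" 200',
    '10.0.0.9 - - "GET /cgi-bin/../../etc/passwd HTTP/1.0" 404',
]

for line in suspicious_lines(sample_log):
    print("possible exploit attempt:", line)
```

In practice this kind of check would run against live access logs or an IDS feed; the point is simply that a public POC gives defenders, not just attackers, something concrete to watch for.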
But like politics, the extremes are often unpleasant for so many reasons. Foremost among them is that they become rather unreasonable.
Full Disclosure, but...
This is why moderate or middle-of-the-road policies are often more widely accepted, and it's why I prefer the "full disclosure, but" option. Basically, it is full disclosure BUT the person finding the flaw notifies the developer in advance, waits a certain time period, and only if there is no response releases the information.
For most situations, a time period of four to eight weeks is sufficient. For Web site-related vulnerabilities this could be shortened to a week, since sites can be altered on the fly. Granted, for some sites it may take time to locate the code in question and make changes, and a change in one place can have effects elsewhere in the code, particularly for things like PHP and ASP. In essence, this method takes the advantages of the other two and minimizes their disadvantages.
There is one distinct disadvantage: if a company chooses to ignore a flaw after being notified, the company will look foolish at best and indifferent at worst.
A Bug in Your Code? No Sense Crying about It.
You may not have the option of choosing which method of disclosure is used. So what does a company do if a flaw is found and published before it has been notified?
Your first priority is to fix the issue: put the company and/or the product back into a safe state to ensure minimal damage to clients, data and other resources. Keep and record all steps, all emails and any other related communications for future reference. Ignore any slams or remarks about poor security; all code is written by humans and will be flawed from time to time, no matter how much we try to avoid it.
Once it's fixed, you can deal with the person who released the information. Pay a visit to your lawyers; you may need to involve the FBI, your ISP and others. The communications you documented will be handy for this endeavor and for any further legal proceedings.
Anyone reverse engineering code or performing code audits on Web sites is technically committing a crime under both Canadian and US law. There is a reason we often tell people at AntiOnline that they should have written (on paper) permission before performing a penetration audit. The written contract should spell out the scope and limits of the audit and grant the person(s) conducting it limited protection from legal action. It should also include a non-disclosure (NDA) clause to ensure that sensitive data is respected.
Lastly, perform further checks on existing code and on code soon to be released to minimize the effect. Ensure that all code is properly documented, which makes it easier to find the questionable section(s) if the same situation arises again. You won't find every hole, but if you test properly, in addition to regular QA testing, you'll likely eliminate the more basic and obvious ones.
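One cheap way to make those further checks stick is to turn the original exploit input into a permanent regression test once the flaw is fixed. The sketch below is a hypothetical example, assuming a path-traversal bug of the classic Web-script variety; the `safe_join` helper and the inputs are invented for illustration.

```python
import posixpath

def safe_join(base, user_path):
    """Reject user-supplied paths that escape the base directory.
    A minimal sanitizer of the kind a regression test would target."""
    candidate = posixpath.normpath(posixpath.join(base, user_path))
    if not candidate.startswith(base.rstrip("/") + "/"):
        raise ValueError("path escapes base directory: %r" % user_path)
    return candidate

# Regression checks: once a traversal bug is fixed, keep the exploit
# input in the test suite so the same hole cannot quietly reopen.
assert safe_join("/var/www", "docs/index.html") == "/var/www/docs/index.html"
try:
    safe_join("/var/www", "../../etc/passwd")
    raise AssertionError("traversal input was accepted")
except ValueError:
    pass  # the known-bad input is correctly rejected
```

Keeping the attacker's own input in your test suite is the "test properly" step in miniature: each disclosed flaw permanently raises the bar for the next release.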
As we've said for many years on AntiOnline, "Hackers know the weaknesses in your system. Shouldn't you?"

This article was first published on EnterpriseITPlanet.com.