
Responsible, Coordinated & Ethical Vulnerability Disclosures

Updated April 10, 2017

OTA joins a coalition submitting comments to the National Institute of Standards and Technology (NIST) on Improving Critical Infrastructure Cybersecurity, highlighting the need for vulnerability reporting mechanisms. Read more >

As a participant in NTIA's multi-stakeholder vulnerability working group and an attendee at this week's meeting in DC, I was impressed by the level of collaboration and sharing of best practices. Progress is encouraging.

For background, we annually evaluate and audit several thousand consumer-facing websites and mobile applications for security and privacy best practices. In this process we often find vulnerabilities, exploits, malware, and misconfigured servers. We make every effort to contact these companies, yet despite those efforts we have a low rate of success in engaging them and getting them to take action. We experience the same issues as the research community: at times we are dismissed or threatened with legal action by trade groups representing these companies, causing us to incur significant costs.

This forces an ethical decision on us. Do we name and shame the site and company, while increasing the risk that a vulnerability will be exploited? Traditionally we have chosen to prioritize user security: we have not shared such data publicly, nor have we complied with regulatory requests for it.

The reason I share this is that it is a core issue for us today, and one that recently came up between Microsoft and Google. A core tenet of this working group convened by NTIA has been developing and promoting best practices around responsible and coordinated disclosure, and creating processes and norms for working together. We need trust and collaboration.

What recently occurred in public is acceptable neither to users, nor to society, nor to this working group. We need trust and confidence, while acknowledging the real complexities of addressing a vulnerability. While 7 or 10 days might be adequate for an IoT device misconfigured with default passwords, or for a vulnerability in Flash, it may be unreasonable for a complex operating system or browser with millions of lines of code and extensive testing required.

Unfortunately, today there is no agreement on what a reasonable period is. Most agree that when a vendor is making a good-faith effort, the timeline for fixing a vulnerability can be weeks or even months. Others set a deadline to get things fixed before they go public. To the best of my knowledge, Google previously recommended 60 or 90 days. In this case the clock ran out in 10 days. Not unlike providing notice of changes to privacy policies or terms of use, a change of disclosure policy needs to be communicated and old policies archived.

Assuming Microsoft acted according to its published coordinated vulnerability disclosure program, made good-faith efforts, and kept an open line of communication, what occurred is concerning. On the other hand, if Microsoft remained silent, uncooperative, and dismissive, public disclosure may perhaps be justified, but, as we are now experiencing, only after careful consideration of the impact on public safety.

When dominant, competing market players make such disclosures against each other, the optics and ethics of doing so are of great concern and rightly questioned.

I hope that going forward we can all hit reset and recalibrate our efforts. What has occurred only distracts from our work and plays into the hands of cybercriminals. The competition is cybercrime, not each other.