Recently, I was asked by an event coordinator outside the Information Security industry to present a “live hack” on-stage during the opening day of their multi-day event. Clearly, performing a live hack on stage has a lot of “ooh-aah” factor. My initial response was “Cool! Let’s break some stuff!” Then my common sense kicked in and reminded me that live hacks seldom go well. Even when well-rehearsed, Murphy’s Law will almost ALWAYS sneak up and kick your butt once your lecture begins. It’s just not a good idea. Don’t do it. Nope. Nope. Don’t give in… but, if you MUST… always build a back-up plan where you record a successful session back in your lab. It ALWAYS works in the lab, right? So if (when) things go sideways while you are on stage, you can fail over to the recorded session and replay the attack, which still results in a successful lecture or talk.

Even more important than the live hack itself, however, are the approvals and disclosures that must happen ahead of time for a responsible demo like this. If you are a researcher, or just someone who discovers a bug or vulnerability in a piece of hardware or software that allows unauthenticated bypass, or that simply breaks the product, you need to walk yourself through some logical steps in what is often referred to as “responsible disclosure.”

Responsible disclosure can be interpreted several ways depending on the intended use of the information. For the sake of this discussion, I’ll stick to the basic principles of vulnerability disclosure as defined by a few of the professional societies I’ve grown to know and respect over my *cough cough* many years of service in this industry. As long as coders, developers, researchers, and engineers have been around, there has always been a version 2 of their initial idea. Version 2 of any idea comes from identifying improvements that make the original better, faster, stronger, more secure, or more user-friendly. Those improvements are sometimes identified by internal researchers, but in most instances by external researchers or end users finding a bug in the code. Sometimes it involves identifying a workaround that bypasses a certain security control. Then it’s on to version three, four, or five. But how did the information make it back to the developer? Was it through a social media post, a blog post, or a security talk at a hacker conference? Or was it the result of a series of hacks in the wild, where the vulnerability was exploited and hundreds, thousands, or millions of systems were compromised, holding critical corporate data hostage because the attackers launched a secondary ransomware attack on the back of the vulnerability? Not very responsible, is it? But it happens, every day.

This is particularly applicable to the recent developments in the ever-advancing transportation industry, where Artificial Intelligence (AI) integration, coupled with vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, pushes toward autonomous deployment at a rate struggling to keep pace with the demands of investors and marketing strategists, much less the engineers developing these over-the-horizon technologies. With that, let’s walk through what a “responsible disclosure” process should look like, and point you to some industry references which may prove helpful in your future endeavors.

It’s a fairly straightforward process. When you, as a researcher, identify a vulnerability or weakness in a piece of hardware, a software application, or a web presence, document your findings, ensuring the vulnerability or bug is repeatable and validates your initial assumptions. Logs, working notes, and screenshots are always helpful to the process, confirming your findings to the respective data owner. Compile your research into a report of findings, which is nothing more than a vulnerability report covering their work and yours. Researchers do this extremely well, but there is no specific format. Just ensure you remove the emotion from your report, sticking to the facts and how you achieved your results.

You’ll need to notify the data or application owners who will be responsible for patching or remediating the vulnerability, and give them ample time to fix it. This is where the lines become blurred for some researchers who think it’s much sexier to “out them” via social media or in a talk at one of the hacker conferences. That’s called “full disclosure,” and it is, in my professional opinion, the wrong way to go about it. I don’t care what the circumstances are; the developers need to be given time to remediate. Now, the big question is, “How long should we give them?” Some communities will argue 30, 60, 90, or up to 180 days. I would argue that it depends on the severity of the bug and the complexity of the remediation. In most cases, however, I would suggest 60-90 days is generally acceptable. It’s a subjective number, and it varies.

There may be times when a security vulnerability is so critical and so widespread that “full disclosure” might be warranted. When the developer or vendor is purely negligent, full disclosure gives everyone impacted the ability to apply critical patches or workarounds to remediate the immediate threat. Even if it’s a temporary fix, it’s important enough to be addressed sooner rather than later.

Back to the flow; once the appropriate time has expired, the researcher is free to be a bit more open about their research findings and impact. They are usually working with the vendor throughout the process to ensure the vendor has understood their vulnerability report and salient findings. The vulnerability is assigned an identifier in the Common Vulnerabilities and Exposures (CVE) index maintained by the MITRE Corporation in partnership with US-CERT, and is then published in the National Vulnerability Database (NVD), where analysts at the National Institute of Standards and Technology (NIST) calculate and assign a severity ranking using the Common Vulnerability Scoring System (CVSS). The NVD is where most end users ultimately reference vulnerabilities through CVE lookups. Industry announcements are published, and in some cases, public announcements are released depending on the severity.
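The severity math behind those rankings isn’t a black box, by the way. As a rough illustration, here is a minimal sketch of the CVSS v3.1 base-score calculation, limited to the Scope: Unchanged case for simplicity, using the metric weights published in the FIRST CVSS v3.1 specification:

```python
import math

# CVSS v3.1 base-metric weights (Scope: Unchanged only),
# from the public FIRST CVSS v3.1 specification.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},   # Attack Vector
    "AC": {"L": 0.77, "H": 0.44},                          # Attack Complexity
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},               # Privileges Required
    "UI": {"N": 0.85, "R": 0.62},                          # User Interaction
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},               # C/I/A impact
}

def roundup(x: float) -> float:
    """Round up to one decimal, per the CVSS v3.1 spec's Roundup function."""
    i = int(round(x * 100000))
    if i % 10000 == 0:
        return i / 100000.0
    return (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a) -> float:
    """Compute a CVSS v3.1 base score for a Scope: Unchanged vulnerability."""
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss
    exploitability = (8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                      * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui])
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Example: AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -- a classic "critical" vector
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8
```

A remotely exploitable, no-privileges, no-interaction bug with high impact across the board scores 9.8 (Critical), which is why those are the CVEs that make headlines.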

More and more, you read about companies offering what are known as “bug bounties” to researchers and users who responsibly identify and report their findings back to them ahead of public disclosure or release. It’s a lucrative business, in fact, and there are quite a few researchers out there making a pretty good living doing just that. Bugcrowd maintains a comprehensive list of bug bounty and disclosure programs from across the web, curated by the Bugcrowd researcher community.

Here at GRIDSMART, we take pride in our core values of remaining Simple. Flexible. Transparent. You have the right to know how well your products work without spending more money on studies. GRIDSMART products play nice with other products and technologies in your community. When you purchase the GRIDSMART System, powered by Intel technologies, do what you want with it—with or without us. GRIDSMART lets you see exactly how well it performs. When we fix a bug, we post it for the world to see.

Likewise, we live those same core values within our Information Security & Threat Intelligence Division at GRIDSMART. We work directly with our industry partners to help identify security vulnerabilities in their hardware, software, and code, down to administrative criteria such as configuration controls and policies and procedures, to better secure themselves, their clients, and the industry.

Be responsible and respectful in your particular industry, and recognize how difficult it is to develop and maintain a rigidly secure product. Whether it’s physical or virtual, at some point someone is bound to identify a flaw in your code or build a workaround to your security protocol, if you even have one. If you are a developer, business owner, or distributor of products, be grateful for the researchers who contact you to notify you of their findings. When you shut them down, which happens quite a bit, you run the risk of them taking the full disclosure route, immediately airing your vulnerabilities to the world, which seldom turns out well for anyone except the criminals.

Nobody wants to be blindsided at an industry event or learn for the first time on a public stage that their multi-million (or billion) dollar investment has been compromised. Take time to inspect your product and networks for vulnerabilities and misconfigurations to minimize unintended exposures and compromise. Don’t allow yourselves to be pressured into taking your technology to market before it’s ready.
