For Project Basecamp, I have been hell-bent on exploit code. My pursuit has mostly turned into a demonstration of why ICS-CERT’s definition of “vulnerability” is absolutely wrong, especially where actual controllers are concerned.
For Basecamp, I did find a classic buffer overflow issue with a test system. It is a mid-90s-style C programming error: the developer called strcpy() on unsanitized data from the network. If the input string is too large, memory is overwritten and the device behaves unpredictably (including, unfortunately, damage to the flash memory of my test system).
Exploiting it in any way other than causing a system crash is a bit difficult, because the processor is big-endian and the memory layout in the system doesn’t lend itself to code execution from this particular bug. My few attempts to exploit the vulnerability resulted in seemingly random behavior. I’m sure that it would all make perfect sense if I understood the logic in the CPLD (the device’s memory map is dictated by the CPU’s address lines running to this rather obtuse chip).
So that’s an actual vulnerability that ICS-CERT would categorize and announce. My estimate is that it would take a dedicated person or team at least a few weeks analyzing the system to make a working exploit. They would require nation-state or organized-crime levels of funding to make the exploit reliable, and then quite a bit more effort to test it against the various iterations of hardware that probably exist.
The same device also allows me to download a list of usernames and passwords with no authentication. The usernames and passwords downloaded in this way are in plaintext. No special tools are needed to get the list. Any standard Linux, Windows, or Mac PC has the required software built-in. Exploiting this flaw is easy to do, and it’s safe to try against a live system. I could weaponize it in a week, turning it into a worm that infected control systems the world over.
Yet ICS-CERT will not call this latter issue a vulnerability. It is far, far easier to find. I found this one in less than a day, and I actually wouldn’t need the device to think up the exploit, just the configuration software.
Let’s apply ICS-CERT’s illogic to desktop operating systems. Suppose Microsoft embedded a backdoor account in every copy of its operating system. The hypothetical account would be a secret administrator, and anyone with its credentials could mount your C$ share with read+write permissions. This would not be a vulnerability to ICS-CERT: Microsoft designed it into their system. Yet users should know about it, because they can mitigate: an IDS or IPS module can be written to protect us from the backdoor, or file sharing could be turned off or the service firewalled. The same applies to my username/password download. It’s easy to mitigate if you know about the issue — which is precisely why it should be made known.
US-CERT differs from ICS-CERT here. US-CERT would issue bulletins for this kind of backdoor, and has in the past: a very quick search on US-CERT shows backdoor disclosures specifically for products in 2004, 2005, and even in 2010.
This last KB article from US-CERT is most interesting to me because it’s inching towards ICS: it’s a silly USB battery charger (so it’s controlling a chemical process, albeit a small one) that installs a backdoor on your desktop system. It’s also interesting because an advisory was published even though there was no patch or vendor-supplied mitigation of any kind. The mitigations are: remove the software, or use a firewall/IPS.
I’m glad that AA batteries are not considered ‘industrial,’ because we would never know about that backdoor if they were. ICS-CERT will let us know about the hard-to-exploit issues, but not the easy ones. In effect, they’ll tell us about the bugs that require a nation-state to launch, but not the ones that can be launched by my grandmother — bugs that could be detected safely with a tool like, say, Nessus, blocked by an industrial firewall like Tofino’s, or mitigated through some other simple measure.
Image by bobtheberto