I’ve experienced a lot of cognitive dissonance over the Basecamp disclosure and exploit tools release these last few months. I might as well explain some more of my thinking on why what we’ve done is, in the end, a good idea.
I’ll repeat Dale first: PLCs are vulnerable. EOL.
This next bit is speculation, but I suspect that ICS-CERT caved to vendor requests when it redefined the term ‘vulnerability’ at Weisscon to exclude designed-in issues. The D20, for example, is configured using TFTP. Part of the configuration process requires executing commands on the command shell, and this is implemented via TFTP in a quite interesting way (I’ll just say “the code documents itself” — read the metasploit module d20tftpbd for amusement).
The D20ME is therefore insecure by design, and the vendor’s desktop configuration software uses this protocol to set up the device. ‘Fixing’ it means breaking their software. According to the Weisscon version of ICS-CERT (we’ll call it ICS-CERT 2.0), this isn’t a vulnerability. In a sense, my tools can’t even be defined as ‘exploits’ under the DHS definition — an exploit can’t exist unless there is a vulnerability. My ‘tools’ are just using designed-in ‘features’ of the D20 to let a user retrieve accounts. They can even be used for positive purposes (for example, emergency access if an operator doesn’t know the password).
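To see why a TFTP-based configuration protocol is insecure by design, it helps to look at what a TFTP request actually is: per RFC 1350, a read request is just an opcode and two NUL-terminated strings, with no authentication anywhere in the protocol. A minimal sketch of building one in Python (the filename below is purely illustrative, not the real D20 configuration path):

```python
import struct

def tftp_rrq(filename: str, mode: str = "octet") -> bytes:
    """Build an RFC 1350 TFTP read request (RRQ).

    Wire format: 2-byte opcode (1 = RRQ), filename, NUL, mode, NUL.
    Note there is no field for credentials of any kind.
    """
    return (
        struct.pack("!H", 1)            # opcode 1 = read request
        + filename.encode("ascii") + b"\x00"
        + mode.encode("ascii") + b"\x00"
    )

# Hypothetical file name, for illustration only — anyone on the network
# who can reach UDP/69 can send this datagram and get an answer.
pkt = tftp_rrq("NVRAM/config")
```

Anything the device serves over TFTP — configuration, account databases, whatever — is readable by whoever can put that datagram on the wire, which is the whole point: the ‘feature’ and the ‘vulnerability’ are the same packet.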
DHS was completely wrong on this issue, whatever the motivation. I like the DHS guys; they’re a good group and I hope to work with them some day. I think that they’re stuck in a hard spot: paid by vendors to test their products, unable to release the results of those tests. So they’re in the best position to do anything about crappy security in control systems, because they have access to this expensive equipment and software, but they can’t help put pressure on vendors. They have us researchers beating on them from the north, vendors beating on them from the south, Congress beating on them from the east, and utilities beating on them from the west. This analogy lacks the 3rd dimension — airstrikes — I’m not sure who that is, but I’ll bet that there is someone else. Still, they’re dead wrong on this issue. I feel a bit like Rorschach here, but it’s a view on which there is no compromise. Using these vulnerabilities, I can cause misoperation of a control system. I can’t hammer this issue any longer, so I’ll stop. ICS-CERT appears to have backpedaled, at least: their announcements clearly call the design issues we uncovered in Basecamp ‘vulnerabilities.’
Basecamp was, to me, partly about calling ICS-CERT on this. So that appears to have worked, although there’s still no explicit mention from them that they were wrong. So we keep up pressure.
Publishing tools is kind of mean. I totally agree. We didn’t inform the affected vendors that this was coming. No doubt I’ve lost a few friends at my former employer over the disclosure method. But it’s time to move on.
There is zero doubt in my mind that attackers have been looking at PLCs and RTUs for years, if not a decade. There is zero doubt in my mind that various world governments have research projects identifying these vulnerabilities. I don’t think that any .gov security researcher who has looked at any of these devices would think that the Basecamp disclosures were even interesting. They’d probably say, “Yeah, we figured that out in one hour instead of the 16 that it took these jokers. Go check the file server for PoC,” or something.
The trouble is, utilities and industrial control facilities have no idea how bad the equipment that they buy is. Even the best pieces of equipment — imnsho, things that I beat to death while employed by SEL — have some terrible security practices: plaintext protocols, inconsistent documentation, and the like. In 2012, my smartphone is more featureful and far more secure than the equipment that controls the grid. Sure, smartphones are produced in far greater quantities, which pads their profit margin, but they’re also 10-100x less costly than a typical PLC. Oh, and I don’t pay for a support contract for my smartphone — a few years after I bought it, I can still get firmware updates.
Personally? I think that these tools are going to be used for bad at some point soon. Does it suck? Yes. I’m sure I’ll lose sleep over it. Seriously. Actually, I already do lose sleep over it. Breaking stuff in my basement is a lot of fun. The idea of someone else breaking stuff in a brewery is not a lot of fun to me.
But you know what? If the tools are used for bad now, the attacks will be quite lame. The big, bad, dangerous skr1pt k1dd1ez won’t know what the heck they’re doing. Stuxnet taught us a very important lesson here — that where control systems are concerned, the intelligence-gathering portion of an attack is as important as, if not more important than, the exploit itself.
If we don’t release the tools now, vendors have no incentive to change their ways. Attacks will be way more effective in ten years if the controllers operate the same way that they do now. The Stuxnet cat has been out of the bag for a while, and hacker groups and foreign governments are stepping up their games looking at control systems. In another five or ten years they may have the intelligence required to cause actual harm. The exploits are trivial, so basically attackers will have no impediment to causing real harm. Full stop.
If we suffer another lost decade, we’re screwed. This isn’t a “the sky is falling” street-preacher exercise. It’s just reality. Better to put tools in the hands of script kiddies now, when the tools are less effective, than to let the really bad guys be the only ones that have them, waiting for the right time. If an incident happens, a metric boat-ton of press will result, and something will have to give. That something will be fixing basic security problems.
It’s weird, but I think that we’re at the right moment defensively — we have the right combination of control systems fragility and public awareness. Hopefully this disclosure will have the impact that is needed. I don’t know if the outcome is going to be better controllers via media pressure, client pressure, or a public policy shift. I do get the feeling that this is going to spark a trend toward more secure controllers, though, whatever the secondary catalyst is. More secure controllers exist, that much is for sure — Basecamp only touched upon the controllers that we had on hand. Some really great products exist from my old company, and from talking to other big companies at S-4 I can tell that they have secure controllers out now, and some way better products in the works.
Personally, I hope that pressure comes from end users, and that vendors start being more honest about their product spectrum. Vendors often say that security costs money, and end users won’t pay for it. End users are paying for it already, though, via firewalls, data diodes, and a boatload of work, nail-biting, and ulcers to separate their control networks from their corporate networks as best they can. If a controller costs more money but means that inter-network security can be a little more relaxed and carry less risk, end users save money in the end.
This is also good practice for vendors. I agree, the Basecamp vulnerability disclosure was not ideal for vendors. It’s not the worst that could happen, though. The worst that could happen is a forensics call for incident response (sidenote: many Basecamp systems don’t log the attacks that were discovered). I view Basecamp as a nice wakeup call to vendors — even the good vendors — that there are going to be security incidents and that they had better be prepared to deal with them responsibly and openly. Frankly, I’ll be happy if a vendor takes a more mature approach to the disclosure than Digital Bond has. A vendor could do this by simply testing the issue, acknowledging it, and then providing their customers with information about the vulnerabilities (all of them), temporary mitigations, and a timeline for future patches and security features. Having the discoverer run a validation would be pretty swell, too.
Image by papalars