The idea for mining malware for evidence of targeting automation came out of reading several papers on Stuxnet that discussed the methods used to intercept calls to the S7 PLC. To summarize, Stuxnet replaced the stock Siemens s7otbxdx.dll with a new version that watched the PC-PLC interactions, and either allowed each interaction to go through unmodified or altered it. All in all, I thought this was a rather clever method for one major reason: an attacker didn’t need to write some complex library of commands to fully reprogram the S7, s/he could simply extend the existing functionality of the system.
The approach spoke to me for simplicity and reliability reasons, and I wondered what other approaches might be similar… Would an attacker seek to register a malicious COM/DCOM automation control in place of a valid one? What about sitting on an automation port, watching for automation traffic to meet certain conditions before firing off an ‘open all breakers’ command? Were there data points (literally, things like MW, MVARs, pressure, etc.) that were basically standard in systems that a piece of automation-specific malware could test for and use?
Unfortunately, this is a huge sample space for an individual researcher to go through, so I settled on demonstrating the concept under a specific set of conditions. After a little thought, I settled on various OPC interfaces for a specific legacy generation DCS, the INFI-90 system. I pulled down from the internet a few OPC servers that are used in power generation, specifically those that interface with the INFI-90 DCS, an older ABB/Bailey system that is still in common use today on 1970s-80s era coal-fired units.
The INFI-90 system is unique. REALLY unique. Developed in the 80s, it had a proprietary interface between servers and devices over SCSI, though it could also use a lower-throughput RS232 interface. Originally, this interface used a DEC VAX/VMS, but upgrades and cost cutting eventually put the functionality into a set of drivers called ‘SEMApi’ on Windows 95/98/NT/2000. The interface is low level, there are no standard Windows drivers that can handle it, and interactions with the DCS using modern OPC servers must go through either the proprietary ABB/Bailey SEMApi drivers or another set of custom-built drivers/APIs that still use SCSI or serial.
So, what we have is a crazy interface that you’re not likely to see outside of a power plant, coupled to a technology (OPC) that isn’t used much outside of automation, and locked down to a specific set of vendors who support it. I pulled down three OPC servers I knew were in use in generation. Two use the SEMApi method of interfacing; one is an alternate interface developed by the vendor. I installed the software on a virtual machine, pulled all the EXEs and DLLs out of the installed software, and then ran them through a ‘strings()’ parser. With all the different options available, I settled on looking for DLLs that were mentioned in the strings() data, on the assumption that at some point they would reference a common set of DLLs or interfaces.
I pulled all the referenced DLLs together into a single list and ranked them in order of uniqueness on a 1 to 5 scale. If a DLL was extremely common (like a standard Windows component) it was a 1. If the DLL was very unique, such as being a proprietary DLL, it was ranked a 5, with other DLLs falling somewhere in between. This gave a good idea of which DLLs could be loaded by my OPC programs at runtime, and which might be useful in determining whether a malicious process was interfering with the function of those DLLs.
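A minimal sketch of this triage step, assuming the strings() output has already been collected into one text dump per OPC server. The rank thresholds and the common-DLL list here are illustrative, not the exact scale I used, and `semapi.dll` is just a hypothetical name for a proprietary library:

```python
import re
from collections import Counter

DLL_RE = re.compile(r'\b[\w.-]+\.dll\b', re.IGNORECASE)

# Illustrative set of very common Windows DLLs that would rank a 1.
COMMON_WINDOWS_DLLS = {'kernel32.dll', 'user32.dll', 'advapi32.dll',
                       'ole32.dll', 'msvcrt.dll', 'ws2_32.dll'}

def extract_dlls(strings_text):
    """Pull every DLL name mentioned in a strings() dump (case-folded)."""
    return {m.lower() for m in DLL_RE.findall(strings_text)}

def rank_dlls(per_server_strings):
    """Rank each referenced DLL from 1 (common) to 5 (proprietary/unique).

    per_server_strings: list of strings() dumps, one per OPC server.
    A standard Windows DLL is a 1; a DLL referenced by only one
    product, and not a Windows staple, is a 5.
    """
    counts = Counter()
    for text in per_server_strings:
        counts.update(extract_dlls(text))
    ranks = {}
    for dll, seen_in in counts.items():
        if dll in COMMON_WINDOWS_DLLS:
            ranks[dll] = 1
        elif seen_in == len(per_server_strings):
            ranks[dll] = 2   # shared across every vendor's product
        elif seen_in > 1:
            ranks[dll] = 3   # shared by some products
        else:
            ranks[dll] = 5   # unique to one product: the interesting ones
    return ranks
```

The 5-ranked names that fall out of this are the candidates worth searching malware repositories for.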
While I focused on DLLs in this search, there are other chunks of data that might be useful (people I’ve talked to who routinely look through virus data say “far more useful than DLLs”):
- Registered OCX, COM, and other objects referenced by CLSID – CLSIDs are unique, and serve as a portable method of referencing objects between many systems. Malware will often register and use specific objects by CLSID.
- MD5/SHA Hashes of Important Files – Malware will often use other files, some that it downloads and others that are already resident on the system.
- Common IP traffic – Many of the newer virus searching platforms are using limited dynamic analysis through sandboxing (à la Cuckoo Sandbox), so they can capture network traffic. This isn’t searchable right now, though.
- Simple Hashes of Automation Files – While automation files are not generally infected themselves, they are often submitted in bundles which can contain malware. Looking through files uploaded at the same time might be beneficial.
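For the hash-based indicators above, both digest types are worth recording up front, since older repositories index on MD5 and newer ones on SHA-256. A minimal sketch of collecting them for a list of candidate files:

```python
import hashlib

def file_indicators(paths):
    """Compute MD5 and SHA-256 digests for candidate indicator files.

    Reads in chunks so large automation project files or binaries
    don't have to fit in memory.
    """
    indicators = {}
    for path in paths:
        md5, sha256 = hashlib.md5(), hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                md5.update(chunk)
                sha256.update(chunk)
        indicators[path] = {'md5': md5.hexdigest(),
                            'sha256': sha256.hexdigest()}
    return indicators
```

The resulting digests can be pasted straight into repository search boxes like VirusTotal or malwr.com.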
On a whim while writing this post, I entered all the 5-ranked DLLs (14 in total, of 117) into malwr.com. Malwr.com is a site similar to VirusTotal, but it gave me the capability, without a private account, to search for specific files that samples interacted with after being run in a sandbox. Only one returned a hit, asycfilt.dll, which was a DLL involved in a June 2010 security update.
This goes to a single point: even when you pull together a large amount of data that you think will show something, you might still not find anything. Making searches quick, simple, and easy is a prerequisite for this type of research; my ‘whim’ search is valuable only if anyone can do it quickly.
title image by Jeffrey Beall
Sorry for the delay, but lots of news.
ISASecure has launched the System Security Assurance (SSA) certification — “a system-level cybersecurity certification for industrial automation and control systems (IACS) products.” Very ambitious and something we will write more about in upcoming weeks. The CSSC in Japan is planning on supporting this certification as well.
Version 1.0 of the US Government’s Cybersecurity Framework was issued on Wednesday. Cynthia Brumfield at CSO has a nice summary of the changes.
Related and more newsworthy was DHS’s announcement that same day of the Critical Infrastructure Cyber Community (C³) voluntary program. This is the first major effort to encourage use of the CSF.
Rockwell Automation endorsed use of the Cybersecurity Framework for manufacturers. It will be interesting to see how their insecure by design products would be treated by the CSF. Of course manufacturers can continue to accept the risk for decades.
The UK Government plans to run an exercise on the cyber resilience of critical infrastructure.
Bit9 raised $38M in venture capital and merged with Carbon Black. Bit9 has had some success penetrating the ICS space.
From Iran’s supreme leader Ayatollah Khamenei, ”You are the cyber-war agents and such a war requires Ammar-like insight and Malik Ashtar-like (two Prophet’s Companions in early Islamic history) resistance; get yourselves ready for such war wholeheartedly.”
Yokogawa announced field wireless products that will support WirelessHART, wired HART, and ISA100.11a.
Walt Boyes has some interesting videos of the ARCForum press conferences.
Image by TooFarNorth
Nathan Keltner and Josh Thomas of Atredis dove into hardware hacking with a focus on the Teridian System on Chip (SoC). The Teridian SoC is widely used in the smart meter market and is based on the Harvard Architecture.
Nathan and Josh explain the differences between the von Neumann and Harvard Architectures from an attacker’s perspective, and then give some specific examples of vulns and exploits.
As a bonus, Josh goes into the security issues with NAND.
If you’ll remember from a set of posts last year, I had floated the idea of mining malware for evidence of automation system compromise. The basic premise was to look for the evidence of interactions with control systems by analyzing malware samples graciously sent to me by @VXShare.
The project that I eventually embarked on was… ambitious. It involved parsing out the strings data associated with (initially) the 2 TB of @VXshare malware samples I had immediate access to, and shoving them into a database for searching. The intent was to get an infrastructure set up that would allow ready searches of the entire dataset. From this point, searches could be generated for various ‘indicators’ of an automation system, such as specific IP addresses and ports, usernames/passwords, hostnames, DLL files, extensions, etc.
If you’ve been following along, you’ve already noticed the flaw in my plan that I missed at the beginning; fundamentally, building an infrastructure is neat, but is not the real intent of the research. The real intent is generating the data for searches so that the malware targeting automation can be identified from static code analysis. Unfortunately, I went very far down the infrastructure route before finally turning attention back to generating appropriate searches based on automation programs. Then, life and other work intervened and I eventually shelved the project.
While planning this massive undertaking, I decided the computer systems I had at my disposal would not be enough. With this in mind, I headed to eBay and picked up a used Dell 2950 on my own dime (I’ve always wanted my own powerful server), with 8 cores and 16 GB of RAM. In my mind, this would be enough to do just about anything I needed as far as processing power. One Ubuntu install la… Well, TWO Ubuntu installs later, I was ready to get crackin. Got MySQL installed (all the Big Data folks are furiously shaking their heads now), and linked it up to the Python scripts I put together for pulling information out of the malware samples.
I had built a very basic database design for the incoming information, and hadn’t taken into account the sheer amount of data I would end up processing. I was looking for basically any string in a malware sample, which is great if you assume an executable. But a large chunk of the malware samples are actually PDFs, which means that nearly every byte in the PDF is a string, and some strings can go on for 200, 300 bytes, or even longer, nonsense or not. My basic database design didn’t account for this eventuality, causing it to swiftly balloon in size, or crash out due to length limits being exceeded during import. Because I originally considered the operation to be straightforward, I hadn’t built the scripts to handle these issues gracefully. I would kick off a job before I went to bed, and it would eventually hit a PDF and die, causing me to lose a chunk of work. I tried compensating by increasing the max length of the database field, but eventually found it was a losing proposition; there would always be a PDF with a string just slightly longer that would crash the process. I eventually removed PDFs altogether, removing the capability to search email phishes for use of automation terms.
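In hindsight, the extraction script needed to cap string length and survive pathological inputs instead of dying on them. A minimal sketch of what I should have written, assuming printable-ASCII strings of four or more characters, with an illustrative 256-byte cap:

```python
import string

# Printable ASCII minus whitespace control characters.
PRINTABLE = set(string.printable.encode()) - set(b'\t\n\r\x0b\x0c')
MIN_LEN = 4     # classic strings() minimum
MAX_LEN = 256   # illustrative cap; longer runs are PDF/stream noise

def extract_strings(data, min_len=MIN_LEN, max_len=MAX_LEN):
    """Yield printable-ASCII runs, truncating (not crashing on) long runs."""
    run = bytearray()
    for byte in data:
        if byte in PRINTABLE:
            run.append(byte)
        else:
            if len(run) >= min_len:
                yield run[:max_len].decode('ascii')
            run.clear()
    if len(run) >= min_len:
        yield run[:max_len].decode('ascii')
```

Truncating at the cap, rather than raising an error or relying on a database column limit, is what keeps an overnight job from dying halfway through a batch of PDFs.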
Additionally, even though many jobs would finish successfully, another problem I was encountering was the size of the generated database. I originally thought that strings data from malware samples would be relatively small compared to what I was analyzing, on the order of 2-5% (or around 100 gigabytes max). Well, this was WAY off, as I was getting around 100 gigabytes of processed data after 3 of the 50+ samples. I ended up pulling hard drives from my file server and moving the database onto them. Have you ever moved a MySQL database to a different physical drive? No, of course not, you’re not a database admin, you’re a cyber security professional, why would this be something you’d ever need to do?
After this, the database would chug away on the data with only a little intervention. I’d log into the box remotely and check the status of a job from time to time, but the imports generally just slowly entered the database. Very slowly. As in so slowly, it would take 2 months to completely load the system with all the samples. This is when I encountered another fun problem: searching all these entries was going to take forever due to my lack of normalization in the database.
Eventually, my secondhand 2950 failed me: a power supply failed and helped fry a CPU, bringing the major infrastructure activities to a screeching halt. I then went back to studying the main problem, pulling appropriate information to help identify malware that had an automation bent.
The major advice I have for researchers and engineers conducting similar research is simple:
1. Make sure you stay true to your original intent; don’t go off on tangents.
2. If your project looks like it requires skillsets that are out of your usual work, take a step back and reanalyze what you are attempting to do. There may be a simpler way.
3. Avoid building infrastructure, unless the point of the research is to build infrastructure.
Not everything about the malware mining work was unsuccessful, but I wanted to get the lessons learned out of the way first.
title image by emmajanehw
At S4x14 this year, there was a great talk by Alexander Bolshev about using an Arduino shield to communicate via the HART protocol. Michael Toecker blogged about this talk earlier; read his post for more details. As the talk shows, the Arduino shield is a great example of how we can create a cheap interface to an industrial protocol like HART. Alexander’s talk went into great detail on how a shield could be used as an interface to a HART network. After his talk, Alexander was giving out boards for people to take away with them. Digital Bond was able to get a few of these boards, and we decided to build them to see how everything worked, how much they would cost to build, and how well they would work.
If Alexander hadn’t given out boards to start with, we could have started with the EagleCAD files that he has put up on his GitHub. Once you have the .brd file, you can easily upload it to a few different PCB manufacturers, such as OSHPark. I uploaded the file just to gauge the cost: OSHPark quoted $31.20 for three boards. Once you have the board ready, you need to order the parts. I used the electronic component distributor Digikey and created a full Bill of Materials (BOM); feel free to contact me if you want a copy of the BOM to make your own HRTShields.
Next comes the fun part: putting solder to the board. I printed out a large version of the EagleCAD file to lay out all the appropriate parts in the right spots on the board. I used solder paste to make this project go faster, though you could hand solder all the parts if you are comfortable soldering small surface mount components. Working through the list of parts that needed to be attached took me about 2 hours from start to finish. Using paste, plus solder wick to clean up some areas, the board was completed and ready for testing. Based on this information from Alexander, the board can be tested to make sure it is up and working.
Next, what is needed is an Arduino board and a HART device to test with. Digital Bond would like to ask our followers: if you have a HART lab or a few devices for testing purposes and would like to help us see what the true capabilities of this device are, please reach out to us.
Slides From S4
Image by SwedishRoyalGuard
This was the 7th year that JPCERT put on an ICS Security Conference in Tokyo. The conference hall had a capacity of 300 people, and it was sold out weeks before the event. Of course the price was very appealing — free. Great to see the increased interest having participated in some of the earlier versions with about 50 attendees.
Miyaji-san of JPCERT had a very frank opening session on the state of ICS security. For example, “some Japanese ICS experts said that ICS protocols are implemented robustly but this is only a dream”. He gave a fast paced overview of the vulns (highlighting the DNP3 vulns), JPCERT/IPA efforts, standards, certifications and other items that had occurred over the last 12 months.
There was an interesting tidbit … an anti-virus vendor claimed that 30% of the ICS in Japan had a malware incident at some time. There was no study or detail provided. I don’t doubt the number as much as I doubt that 30% of asset owners would admit it to an outside entity.
Kobayashi-san, formerly of IPA and now with CSSC, was the next speaker. It’s noteworthy that the first two speakers were two of the ICS security pioneers in Japan, having focused on it for 5+ years.
I need to add a couple of days to a Japan trip to go out to Tagajo and see the CSSC lab. The pictures are amazing and probably make INL jealous.
A major thrust of CSSC is developing an ISASecure certification capability in Japan. The agreements are in place with ISCI, so now the effort is to gear up for the testing. Kobayashi-san said “CSSC will have to incorporate Japanese proprietary protocols in the Communication Robustness Testing”. This would exceed the current CRT for ISASecure certification that does not address the ICS protocols.
Loyal readers know I have mixed feelings on ISASecure. The organization and approach are sound, but the bar is so low in the functional security and increasingly in the communications robustness testing areas. I still wonder how a PLC can have an ISASecure sticker on it, in English or Japanese, and still be insecure by design. CSSC is also likely to certify to the System Security Assurance (SSA) standard as soon as ISCI finalizes this effort.
The morning had two more Japanese speakers, and I should note JPCERT kindly provides simultaneous translation English – Japanese and Japanese – English as applicable.
Mu-Chun Chang of Taiwan Power Company went into some detail about their control systems, OPSEC alert. This session was probably quite useful for IT Security types that didn’t understand the components, topology and redundancy in a large SCADA system.
Ralph Langner expanded on his RIPE approach. It is a completely different talk than you typically hear at these events. Even knowing Ralph well and having studied the RIPE paper, I pulled a few nuggets out.
- My favorite was the house of cards analogy. You have built an impressive house of cards, but a gust of wind, a dog bumping into it or a number of other things could cause it to tumble. We could spend time identifying the threats to the house of cards and deal with them. Seal windows, put a leash on the dog, … Or we could focus on dealing with the fragility and add robustness and resiliency to the structure.
- “As an attacker, I’m not interested in attacking your ICS”; the attacker is interested in attacking the process, eg explosion, chem spill
- The translation of a Taguchi on Quality quote to Langner on ICS Security – “ICS Security is evaluated by loss of predictability defined as the amount of functional variation of process control plus all possible negative effects, such as environmental damages and operational costs”
Simeon Simes of AusCERT discussed their procedures for sharing ICS vulnerability information, as well as other information sharing. Likely an appropriate session given it was a JPCERT event. Interesting note that Queensland University and Edith Cowan University provide control system security courses.
The last session focused on a JPCERT survey of 300 ICS owner/operators. This kind of data is helpful and very rare. The survey was not anonymous, which could affect the answers, but only cumulative results were provided. It would be interesting to see it further broken down by sector. 7% of the respondents admitted to having a malware incident in the last year on their ICS. 80% said they never had a cyber security incident on the ICS and believe they will never have an incident. Great end of event session.
Nice job by JPCERT on the event. It was well run, and I appreciated being allowed to attend.
After hearing about PLCpwn, S4 vet Jake Brodsky over on SCADA Perspective wrote “Only problem: If you have physical access to the network of a PLC or to the PLC itself, you own it. End of story. That’s very unlikely to change.”
While the ICS community still is stuck in the mud dealing with insecure by design protocols, applications and devices, the offensive effort on ICS cyber weapons is ramping up. And we need to start thinking about how these ICS cyber weapons will be created, deployed, used, defended against and detected. This was what ICSage on Friday of S4x14 Week was all about.
A large portion of the threat actors to critical infrastructure ICS will simply attack the system as soon as they can get access and launch an attack. For those threat actors, I agree with Jake that the concept behind PLCpwn is a non-factor.
Now imagine you work for a government that is all excited by Stuxnet. The order comes down that you need to develop the capability to take out selected parts of numerous potential adversaries’ critical infrastructure via a cyber attack. But they only want the capability to do it whenever they give the order. They have no immediate plan to use it, and they may never use it. Many weapons that are developed and staged are never used, so why wouldn’t this be true of ICS cyber weapons?
In this case preparation and persistence are key.
Preparation is necessary to develop the attack code not only to compromise the SCADA or DCS, but also to change the process in the desired manner. Maybe only a short term outage is required to be timed with a kinetic attack. Perhaps they want hard to replace equipment in the process to be damaged to take the system out for months. The team responsible for the ICS cyber weapon needs to develop and deploy the weapon to be ready when the order comes.
You also will want your own communication channel to the ICS cyber weapon. If you rely on the adversary’s network to send the GO signal to launch the attack, it may not be available. Pulling all external connections is in many organizations’ plans for responding to an increased threat environment. Your tasking on what needs to be done to the process might change, causing you to modify the attack code. The ICS engineers may change the process in a way that requires the attack to change.
There are many reasons why an attacker wants his own communication channel to the ICS network. This second communication channel is the purpose of the PLCpwn proof of concept. Sure, it can attack the PLC it is inserted in, but it is also a rogue computer under the adversary’s control on your network.
Stephen’s demonstration that it took less than 80 hours and a few thousand dollars gets some attention, but this is a non-issue. A true offensive effort is going to develop a slick board that will be much more difficult to distinguish from the real thing. This is a small amount of money and effort for a well funded and motivated group. It was fortuitous that the NY Times published an article the week of S4x14 on an NSA program for developing its own communication channels to targeted computers.
Now the interesting question is what happens when organizations and governments stumble across one of these deployed attack systems and covert channels?
PLCpwn is a Digital Bond project that Stephen Hilt led and presented at S4x14. It was inspired by the Power Pwn that we had used with a number of clients to help them realize ignoring the physical security perimeter might be a mistake.
PLCs are ideal places to hide attack code and communication channels. They are computers that are treated like black boxes. Software and hardware upgrades are rare after deployment. They are the printers of the ICS world from an attacker’s perspective.
The end example is an attacker could send a text message of GO to the PLCpwn, and it would stop the CPU on the ControlLogix it was in and all other ControlLogix on the subnet. Of course with a covert channel to the ICS network and an attack platform on that network an attacker can deploy, modify and launch attack code at any time.
One other interesting note that Stephen mentions — we had the PLCpwn in the ICS Village with the door off. No one commented on it, not the Rockwell Automation attendees, nor any of the many users in attendance who had ControlLogix. It is not a statement on their competence, but more a reality check that even a crude implementation of the PLCpwn like this one would go unnoticed.
I’ll have another post up later today on why I had Stephen take on this project.
A very brief Friday News and Notes …
Critical Intelligence reports that Shodan is now scanning the default PROFINET port (TCP/34962). Last September Shodan added DNP3 to its scan list.
S4x13 vet Ali Abbassi has released a “very basic Modbus fuzzer” on GitHub.
This actually happened last week, but if you haven’t seen it take a look at the fast-setting concrete going into the Victoria Line signaling equipment building.
NERC CIP lovers (and haters) should read Tom Aldrich’s article on the Technical Meetings on Version 5 as an adjunct to Michael’s post earlier this week.
We encourage passionate disagreement and promotion of new, maybe slightly crazy concepts at S4 through Unsolicited Responses. Attendees can submit their idea for a 5 minute talk, with or without slides, at the event. Some are serious; some are funny.
Normally we don’t release the Unsolicited Response videos, but we wanted to show those who haven’t attended S4 an example. With his permission, this Unsolicited Response from Darren Highfill is related to an answer Eric and I gave at our SCADA Apologist/SCADA Realist argument the day before (at 25:56 in the video).
Of course I don’t have to agree with Darren. I still believe my answer that a start-up/small company will not lead the change to rid us of insecure by design in ICS, but I’d add that the progress is not being impeded or delayed by a lack of technical solution. The challenge of addressing security in high availability, low bandwidth, low power environments has been solved in numerous other fields. We just need to take proven security algorithms, protocols and techniques into the ICS space. It is a matter of will, not technology.
I would add the example of Jim Bidzos at RSA to Darren’s example of Ed Schweitzer. Jim was mostly a solo act in the early RSA years. It took Jim, with the assist of Rivest, Shamir and Adleman, years to explain public key cryptography and convince large vendors it was not voodoo, but like Ed, Jim did it. So I agree wholeheartedly that a small startup with an innovative and incredible solution can succeed in changing even a stodgy ICS community. I just disagree with Darren that it will be the path to getting rid of insecure by design features and protocols.
But as you can see we welcome heartfelt disagreement at S4.