Next week look for our announcement of S4xJapan. Dates are set; venues are booked; and we have a great plan to make this a first-of-its-kind event in Japan. Also, Japanese readers should check out digitalbond.jp. We finally found some quality translators fluent in Japanese, English and most importantly ICSsec. (The site itself needs some work, but our goal is to put up two Japanese articles a week.)
Darren Highfill is leaving his startup UtiliSec and joining PricewaterhouseCoopers (PwC) in Atlanta as a Director for their Advisory Group. PwC is one of many large consulting firms opening up or enlarging an ICSsec service offering. Justin Searle will continue with UtiliSec.
Lots of private comments and emotional feedback on our story on the DNP3 User Group this week. No doubt the DNP3 User Group deserves high marks for adding Secure Authentication to the protocol and other quality output from the technical committee, as a number of readers pointed out. Other readers provided examples where TMW or the board were ignoring or breaking rules to their benefit. None of those transgressions, though, come close to the impact of burying the fact that most deployed DNP3 implementations use a vulnerable protocol stack. Asset owners need to get more involved; TMW needs to resign from one of its two board seats; and conflict of interest needs to be addressed going forward.
Those still clinging to the idea that most RFID cards provide effective physical access control should watch the video of RFIDIer cloning some tags … well maybe only if you like to see how easy it is to do by typing commands and seeing results with CCR on in the background. Can’t wait to get our RFIDIer.
Two stories from the automotive world. First, Apple announced CarPlay, an application/interface that will allow you to connect your iOS devices into the car entertainment system. Ferrari, Mercedes and Volvo are early adopters, and there appeared to be differences in the quality of the integration based on reviews. Second, Computerworld reports “the next wave of cars may use Ethernet“.
CIPC met this past week in St. Louis, with a good agenda of cyber, physical, and compliance items. A bit of background for non-CIP folks: CIPC stands for Critical Infrastructure Protection Committee, an advisory panel to NERC and the ES-ISAC “in the security areas for physical, cyber, operations, and policy matters”. The CIPC meets several times a year to discuss security topics, share information regarding critical infrastructure protection, develop guidelines for NERC members implementing security, and assist in the development of NERC standards (in this case, the CIP standards).
First off, this year appears to be the year of the ES-ISAC. ES-ISAC stands for the Electricity Sector Information Sharing and Analysis Center. Seeing the name, you might guess it was established due to gov’t action. Maybe, possibly, yes. Every odd-numbered CIPC presentation either pointed to the ES-ISAC as a definitive source for some piece of information, or discussed changes to the ES-ISAC’s composition/structure and how those changes benefit members. Its stated purpose is to facilitate information sharing between the gov’t and industry in the area of threats, vulns, and protective measures.
When the 1200 and CIP regulations came out after the 2003 NE Blackout, the ES-ISAC took a hard right to the jaw, and hasn’t really recovered from it. That ‘hard right’ was being operated by NERC, which was now responsible for enforcing the new reliability regulations. I spent a good chunk of 2005-2011 hearing the following question: “Why the heck would I report my vulnerabilities to my auditor??”
With that history in mind, the most important change presented at CIPC is that ES-ISAC is freeing itself from the perceived influence of the regulatory side of NERC. There was already a policy in place that didn’t allow communication from ES-ISAC to Audit/Enforcement, but steps are being taken to wholly separate it out. CIPC presentations indicated that ES-ISAC is going to be a separate corporation from NERC in the near future, and the reporting structure has already been changed. ES-ISAC is now under the CSO, Tim Roxey, and he reports directly to the CEO, Gerry Cauley. This fits with a general theme of the conference: the ES-ISAC is being stood up as the central coordinating point for electric power. Time will tell how the ES-ISAC fares in presenting itself to industry as a separate and valuable entity outside of NERC, but the message was very clear at this meeting.
If this is the position of the ES-ISAC, and of the CIPC in particular, the corrective actions they have taken regarding the relationship with Audit/Enforcement are just the beginning. The bulk of the industry remains jaded from years of frustrating cyber security regulatory compliance. Additionally, ES-ISAC will need to show that it is valuable to the electric power industry in a way that is under-served by less sector-specific organizations like ICS-CERT and SANS. And ES-ISAC will need to do this while utilities gear up for the massive changes in scope that CIPv5 brings.
To do this successfully and in a way that adds value, I think the ES-ISAC needs to move beyond the bare bones coordination, recitation, and disclosure of vulnerabilities that ICS-CERT provides. ES-ISAC needs to provide the same type of raw data, but accompany it with analysis-in-context for electric power participants. At minimum, this analysis must support engineers who struggle with evaluating the risks to their systems, or who might be turning to the CIP regulations as their de facto process. ES-ISAC should provide mitigation guidance specific to the way electric power operates and maintains our SCADA, DCS, and other control systems. In this, ES-ISAC has an advantage: they can communicate specific (and potentially sensitive) context directly to members. A possible disadvantage is that I’m not sure whether charter restrictions might prevent them from developing or presenting that context themselves.
For example, a vulnerability in a PLC would be less of a risk in a normal Control Center environment; Control Centers generally don’t make extensive use of PLCs. But a PLC vulnerability might be extremely important to a Generation plant, which typically uses them in ancillary systems like water treatment, fuel handling, and waste removal. That level of context is missing right now, and it is certainly valuable to members who are concerned about cyber security issues. Without context, it’s easy for owners and operators to get lost in the flood of vulnerability notices that have been coming out, and that will likely continue as more and more cyber security research is conducted on the software and components that help monitor and control infrastructure.
This type of context doesn’t come easy. It requires building relationships with those who have operational experience and those who have computer security and vulnerability experience, and distilling that into concrete guidance. Luckily, unlike many other cyber vulnerability organizations, ES-ISAC has a wealth of operational types to draw from in CIPC participants.
<Post got too long, stay tuned tomorrow for Part 2!>
We lost three S4x14 videos due to technical difficulties at the end of the day on Wednesday. One of them was a great session from Stephen Dunlap and Jonathan Butts of the Air Force Institute of Technology entitled PLC Code Protection. The presentation slides from that session are below.
Most of the presentation actually covers PLC hacking and would be of interest to those who follow Project Basecamp. It was very clever how they used the turning of the keyswitch or command to go from Run Mode to Program Mode and back to Run Mode as a trigger for a logic bomb (slide 12). An engineer would likely think the PLC stopped working because of a coding error or other change they just made while in Program Mode.
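A minimal sketch of that trigger concept, in Python rather than actual PLC code (all names here are illustrative, and real PLC malware would hook the firmware itself): the bomb stays dormant until it observes the Run → Program → Run sequence, so the malicious behavior first appears right after an engineer has made a change.

```python
RUN, PROGRAM = "RUN", "PROGRAM"

class ModeTrigger:
    """Dormant logic bomb armed by a Run -> Program -> Run sequence."""

    def __init__(self):
        self.history = []   # de-duplicated sequence of observed modes
        self.fired = False

    def observe(self, mode):
        # Record only transitions, not repeated samples of the same mode.
        if not self.history or self.history[-1] != mode:
            self.history.append(mode)
        # Fire once the last three transitions match the trigger sequence.
        if not self.fired and self.history[-3:] == [RUN, PROGRAM, RUN]:
            self.fired = True
        return self.fired

trigger = ModeTrigger()
for mode in [RUN, RUN, PROGRAM, RUN]:   # engineer edits logic, returns to Run
    fired = trigger.observe(mode)
```

The effect matches the scenario in the slides: nothing happens until the engineer has just been in Program Mode, so the failure looks like a consequence of their own change.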
They also talk about how to maintain persistence by ignoring firmware updates and reporting the expected firmware version when asked.
It is close to a universal truth that vendors in all industries do not handle their first vulnerability disclosure incident well. We now know the same is true of User Groups with the DNP3 User Group as an example.
The widespread DNP3 implementation vulnerabilities found in master and serial fuzzing by Adam Crain and Chris Sistrunk, see the S4x14 video, were initially and to a large extent treated as not a problem with DNP3. This is a great approach if you are trying to avoid any negative buzz, and technically the identified vulnerabilities were not related to the specification: it was when input was not validated or the protocol was violated that the DNP3 protocol stacks crashed.
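The fuzzing concept itself is simple to sketch. Below is a minimal, hypothetical mutation fuzzer in Python (not the actual tool Crain and Sistrunk used) that corrupts one bit of a known-good frame at a time. The seed payload bytes are illustrative, and a production fuzzer would also recompute DNP3's per-block CRCs so that mutants reach the deeper parsing layers.

```python
import random

SYNC = b"\x05\x64"   # DNP3 link-layer start bytes

def mutate(frame: bytes, rng: random.Random) -> bytes:
    """Return a copy of frame with one random bit flipped after the sync."""
    buf = bytearray(frame)
    i = rng.randrange(len(SYNC), len(buf))   # leave the sync bytes intact
    buf[i] ^= 1 << rng.randrange(8)
    return bytes(buf)

rng = random.Random(0)                            # deterministic for repeatability
seed = SYNC + bytes.fromhex("0a44010000040964")   # illustrative frame body
mutants = [mutate(seed, rng) for _ in range(100)]
# Each mutant would then be sent to the master or outstation under test,
# watching for crashes or hangs.
```

A parser that validates lengths and field values survives this; the tested stacks, by and large, did not.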
However, how can a user group downplay and fail to seriously address an issue that affects almost all of its users? Isn’t the fact that almost all of the tested DNP3 protocol stacks have implementation flaws that can let an attacker crash the master from an outstation an issue that affects users? Eventually there was enough pressure for a helpful response, and the DNP3 User Group promised a “set of recommendations” in a document where 80%+ of the text said this is not a DNP3 problem. The recommendations, AN2013-004, are in a helpful and quite good 24-page document available from the DNP3 User Group.
The response to the DNP3 implementation vulns was and is determined by the DNP3 Board of Directors. The Board of Directors is largely controlled by Triangle MicroWorks (TMW), the big gorilla in DNP3. The majority of DNP3 vendor implementations use the TMW stack. The TMW outstation had some vulns, but the master actually did not. Interestingly, the testing found that most of the vendors that integrated the TMW master station protocol stack had vulns.
It’s unclear how much of the blame should be placed on the package that TMW provides their OEM customers and how much should be placed on the integration team. However it is a huge problem for TMW and its customers. They have a large deployed base of DNP3 Master Stations based on the TMW protocol stack that can be crashed from an outstation, via serial or IP comms.
Now to the politics … the DNP3 User Group elected half of its board this January. Erin Hall of TMW was up for re-election to the Board and was concerned that someone who pushed fuzz testing might get on the board. She sent a letter to the TMW customer base; here is the pull quote:
I am concerned this may be an effort to gather enough votes to move the DNP Users Group toward requiring “fuzz testing” or otherwise modify the protocol in a way that benefits fuzz testing companies, rather than benefitting the industry or the DNP3 protocol. Please read the DNP3 ICS-CERT FAQ from Triangle MicroWorks for an explanation of why I think there are better ways to improve DNP3 Security other than fuzz testing.
It could be very painful for TMW and their OEM customers, and the eventual end users, to learn how pervasive this problem is and the potential impact if an attacker can gain access to an outstation.
The DNP3 User Group board has six or seven members (this was unclear, but it appears there is some emeritus position). Two members are from TMW: Erin Hall, VP of TMW, and Jim Coats, President of TMW. All they need to do is swing one other Board Member (and TMW has some interesting relationships with the others) and TMW controls the DNP3 User Group. While it may be within the bylaws, it can’t be in the best interest of DNP3 users to have TMW dominate the future of DNP3 this way.
In fact, I would argue that TMW has a huge conflict of interest on the fuzzing vuln issue and should have recused itself from voting on any response to this issue.
To come full circle on this article, now that the DNP3 User Group has some experience dealing with vulnerabilities, they will likely do better next time. Vendors face the same issue the TMW members on the DNP3 Board faced, and eventually learn that customers expect honest and helpful information when vulnerabilities are identified.
Industrial Defender announced ASM support for the Schneider Electric (formerly Telvent) OASyS DNA system. There is a really helpful 6-minute video that shows a demo of the integration and is useful for those who haven’t seen what a SIEM can do. There appeared to be a bit more OASyS DNA knowledge in the menus and displays than what you would see in a Tenable Security Center or other competitive implementation. Still, it was mainly ports, installed software, missing security patches, Windows events, … the standard things you would see in these types of tools. I didn’t see any indication they are bringing in the OASyS DNA security events that are logged outside the Windows Event log, but it was only a 6-minute demo.
There also is a less informative, but very slick video on monitoring ports and services from Industrial Defender. It shows the challenges of monitoring for changes, but glosses over the difficulty of determining what is required for operation up front.
The Crookston Times covers an Xcel lawsuit against GE. Xcel “alleges that GE and its affiliates knew about a defect in its turbine blades for decades and had documented earlier failures and improved its design, but never told Xcel about the problem.” A turbine failure at the Sherco 3 unit cost Xcel $200 million. Perhaps the most interesting part of the article is GE’s claim that 271 similar turbines are operating fine, and Xcel was “operated outside recommended inspection and maintenance requirements.”
Mark Ward of the BBC writes that energy firms are increasingly trying to get cyber insurance to cover ICS cyber incidents from Lloyds of London and others. The pull quote “Unfortunately, said Ms Khudari, after such checks were carried out, the majority of applicants were turned away because their cyber-defences were lacking.”
Say hello to Michael Toecker at next week’s CIPC meeting in St. Louis. The CIPC is an industry body that discusses security issues relating to electric infrastructure. The agenda is here.
Wind River announced a new version of VxWorks “to address the new market opportunities created by the Internet of Things (IoT)”. I’m unsure how a more modular architecture or other new features have much to do with the Internet of Things.
Last week there was an entertaining SCADASEC thread on the new SANS/GIAC Global Industrial Cyber Security Professional (GICSP) certification. To get your GICSP you take the 5-day SANS Course ICS410: ICS/SCADA Security Essentials and then get 69% or better on the 3-hour, 115-question GICSP test.
Wow, I am not sure why an industry collaboration involving some of the most respected control system architects, process control engineers, developers, security professionals would provoke so much speculation and emotion from a few people that were not involved. How does something that was guided and developed by a number of major DCS users, DCS/SIS suppliers, and integrators get so much wrong according to simple web reviews? The goal of the GICSP stakeholders was to identify through a rigorous effort the competencies that were important to support ICS security efforts. The exam was developed with an equal mix of essential elements of engineering, safety, and control system and cybersecurity concepts and approaches appropriate for this important environment. I would encourage anyone to look deeper than the website before forming or entrenching an opinion.
What was serendipitous and amusing was that the very same day, in a SANS NewsBites item, Michael politely found the NIST Cybersecurity Framework lacking:
I applauded the President’s action and prioritization of the series of problems we identify with cyber threats and I appreciate that NIST called out the need to address operational technology (specifically automation and ICS) alongside of traditional information technology. At this stage we should have taken the opportunity to explain the real “what” (nature of cyber threats) and the practical “how” to enhance our collective cybersecurity posture. I believe “how” in this context is composed of two major dimensions – what actually works (for the threats that the Executive Order is addressing – those that are directed and structured) and what can be implemented in a prioritized fashion with reasonable effort (achievable competencies and capabilities). There are good elements and concepts in the framework but we are missing an opportunity to explain, prioritize, and define.
Loyal blog readers know that I’m never reticent with criticism of industry efforts. What’s new in ICS is we have a growing number of standards, certifications and frameworks that could result in a seal of approval on a person, vendor product/system or installation. How should we consider and view these certification efforts?
After giving this some thought over the past week, I believe we should determine:
Does the certification accurately portray the skill set / security posture that is being certified?
Is that skill set / security posture of value to the ICS community?
Using the GICSP as an easy example:
Five days of instruction is provided on ICS Security Essentials, and then a test on the material covered in those five days must be passed. So basically it is a certification that an individual has taken and passed a five-day course. No more and no less. SANS/GIAC states, “This certification will be leveraged across industries to ensure a minimum set of knowledge and capabilities that IT, Engineer, and Security professionals should know if they are in a role that could impact the cyber security of an ICS environment.”
On one hand, “a minimum set of knowledge and capabilities” is a low bar that may be appropriate for a five-day course. On the other hand, the title of the certification is Global Industrial Cyber Security Professional, which can easily be construed to mean more than taking a five-day course on the minimum set of knowledge and capabilities.
VERDICT: By the name alone, the GICSP certification is misleading. It tests to a minimum set of knowledge and capabilities, or ICS Security Essentials, and then certifies someone who knows these minimums as a professional.
Is knowledge of the security essentials taught and tested valuable to the ICS community?
VERDICT: Yes. It looks like a helpful course for engineers and IT security types. Is it perfect? No, as many of the SCADASEC comments pointed out. It will improve over time, and all in the ICS community are unlikely to ever agree on what the essentials are.
If the certification was ICS Security Essentials Certified it would be a clear winner.
You must take a two-day ISA course and then pass a test. Both the title of the certificate, with the word Fundamentals, and the description on the website are accurate. It is a harder call whether this certificate is valuable to the community. The training course has value in helping people use the ISA standards, but recognizing the knowledge learned in a two-day class is borderline.
I’ve covered in numerous articles that labeling a PLC that passes EDSA level 1 testing as anything with the word Secure in it is embarrassing and highly misleading. Even the best part of the certification, the communication robustness testing, does not test the control system protocol stack.
Is it of value to the community? Level 1 was of some value 3 to 5 years ago, but not now. Level 2 and 3 certification would definitely be of value and warrant the designation ISASecure. There is much to like about the structure of the certification effort, so hopefully it will get rid of the misleading ISASecure Level 1.
The best news is there are now a number of quality ICS security training options available. Beyond ISA and SANS, there are training courses from Jonathan Pollet/Red Tiger, Joel Langill/SCADAhacker, Justin Searle, DHS/INL and others. Whatever the value of a certificate of completion, it doesn’t negate the value of the training itself.
Bryan Owen and OSIsoft have been supporters of ICS security research for almost a decade now. And Bryan had another interesting and pithy 15 minute session at S4x14.
He covers 15 cyber incidents from around the world that affected their products and company … and the lessons learned. For example he discusses how Microsoft was not advertising patches to systems where the vendor repackaged the Windows Common Control, and how this led to OSIsoft not delivering a required patch in a few different cases.
In the video you see some of the challenges that vendors face in delivering secure applications and systems. #6 has been one of my hot buttons for years with specific ICS directories being excluded from malware detection.
TechCrunch reports that Siemens Venture Capital “is launching a new, $100 million fund to back early-stage startups working in the areas of industrial automation and other digital technologies that can transform manufacturing.” One of their first investments was in security vendor Countertack.
The Q4 ICS-CERT Monitor is back online now. ICS-CERT provided 15 ICS security consulting engagements in Q4, which surely makes them the most active ICS security consulting practice in the world. They also summarize their 2013 incident response data, but it is so flawed it’s not worth your time.
A live demo often leads to a presentation disaster, but this was not enough of a challenge for Eireann. He decided to run a Red Team / Blue Team exercise live on the S4 stage.
The target was a Siemens SCALANCE switch with a known vulnerability. The Red Team had exploit code and had practiced with the exploit prior to going on stage. The Blue Team had a patch and the ICS security bulletins from ICS-CERT and the Siemens CERT.
Beyond the Red / Blue exercise, Eireann goes over the CERT bulletins and how that information might be more helpful.
The idea for mining malware for evidence of targeting automation came out of reading several papers on Stuxnet that discussed the methods used to intercept calls to the S7 PLC. To summarize, Stuxnet replaced the Siemens stock s7otbxdx.dll with a new version that watched the PC-PLC interactions, and either allowed an interaction to go through without modification, or modified it. All in all, I thought this was a rather clever method for one major reason: an attacker didn’t need to write a complex library of commands to fully reprogram the S7; s/he could simply extend the existing functionality of the system.
The approach spoke to me because of simplicity and reliability reasons, and I wondered what other approaches might be similar… Would an attacker seek to register a malicious COM/DCOM automation control in place of a valid one? What about sit on an automation port, watching for automation traffic to meet certain conditions before firing off a ‘open all breakers’ command? Were there data points (literally, things like MW, MVARs, pressure, etc) that were basically standard in systems that a piece of automation specific malware could test for and use?
Unfortunately, this is a huge sample space for an individual researcher to go through, so I settled on demonstrating the concept under a specific set of conditions. After a little thought, I chose various OPC interfaces for a specific legacy generation DCS, the INFI-90 system. I pulled down from the internet a few OPC servers that are used in power generation, specifically those that interface with the INFI-90 DCS, an older ABB/Bailey system that is still in common use today on 1970s-80s era coal-fired units.
The INFI-90 system is unique. REALLY unique. Developed in the 80s, it had a proprietary interface between servers and devices over SCSI, though it could also use a lower-throughput RS232 interface. Originally, this interface ran on a DEC VAX/VMS, but upgrades and cost cutting eventually moved the functionality into a set of drivers called ‘SEMApi’ on Windows 95/98/NT/2000. The interface is low level; there are no standard Windows drivers that can handle it, and interactions with the DCS using modern OPC servers must either go through the proprietary ABB/Bailey SEMApi drivers, or another set of custom-built drivers/APIs that still use SCSI or serial.
So, what we have is a crazy interface that you’re not likely to see outside of a power plant, coupled to a technology (OPC) that isn’t used much outside of automation, and locked down to a specific set of vendors who support it. I pulled down three OPC servers I knew were in use in generation. Two use the SEMApi method of interfacing; one is an alternate interface developed by the vendor. I installed the software on a virtual machine, pulled all the EXEs and DLLs out of the installed software, and ran them through a strings() parser. With all the different options available, I settled on looking for DLLs that were mentioned in the strings() data, going under the assumption that at some point they would reference a common set of DLLs or interfaces.
I pulled all the DLLs referenced into a single list, and ranked them for uniqueness on a 1-to-5 scale. If a DLL was extremely common (like a standard Windows component) it was a 1. If the DLL was very unusual, such as a proprietary DLL, it was ranked a 5, with other DLLs falling somewhere in between. This gave a good idea of what DLLs could be loaded by my OPC programs at runtime, and might be useful in determining if a malicious process was interfering with the function of those DLLs.
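A rough Python sketch of that workflow, under the assumption that a simple printable-strings scan is enough to surface DLL references (all names are illustrative). The automated ranking here is only a crude proxy for the manual 1-to-5 scale: it treats a DLL referenced by fewer packages as more unique.

```python
import re
from collections import Counter
from pathlib import Path

# Printable-ASCII runs of 4+ chars (no spaces), a stand-in for strings().
DLL_RE = re.compile(rb"[\x21-\x7e]{4,}")

def dlls_in(data: bytes) -> set:
    """Lower-cased *.dll names found among the printable strings."""
    return {
        s.decode().lower()
        for s in DLL_RE.findall(data)
        if s.lower().endswith(b".dll")
    }

def referenced_dlls(path: Path) -> set:
    return dlls_in(path.read_bytes())

def rank_dlls(packages):
    """packages: one set of DLL names per OPC server package.

    A DLL mentioned by fewer packages scores higher (max 5), a crude
    automated stand-in for the manual common-vs-proprietary ranking.
    """
    counts = Counter()
    for dlls in packages:
        counts.update(dlls)
    return {name: 6 - min(n, 5) for name, n in counts.items()}
```

In practice the manual ranking matters: a proprietary DLL that all three packages happen to share is still far more interesting than kernel32.dll.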
While I focused on DLLs in this search, there are other chunks of data that might be useful (people I’ve talked to who routinely look through virus data say “far more useful than DLLs”):
Registered OCX, COM, and other objects referenced by CLSID – CLSIDs are unique, and serve as a portable method of referencing objects across many systems. Malware will often register and use specific objects by CLSID.
MD5/SHA Hashes of Important Files – Malware will often use other files, some that it downloads and others that are already resident on the system.
Common IP traffic – Many of the newer virus searching platforms use limited dynamic analysis through sandboxing (à la Cuckoo Sandbox), so they can capture network traffic. This isn’t searchable right now though.
Simple Hashes of Automation Files – While not generally infected themselves, they are often submitted in bundles which can contain malware. Looking through files uploaded at the same time might be beneficial.
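For the hash indicators, the collection step is trivial to sketch in Python; the paths and names are illustrative.

```python
import hashlib
from pathlib import Path

def data_hashes(data: bytes) -> dict:
    """MD5 and SHA-256 digests, the formats most indicator services accept."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

def file_hashes(path: Path) -> dict:
    return data_hashes(path.read_bytes())

# Example: hash every DLL extracted from an installed package so the
# digests can be searched on VirusTotal, malwr.com, and similar services.
# indicators = {p.name: file_hashes(p) for p in Path("extracted").glob("*.dll")}
```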
On a whim while writing this post, I entered all the 5-ranked DLLs (14 in total, out of 117) into malwr.com. Malwr.com is similar to VirusTotal, but gave me the ability to search, without a private account, for specific files that samples interacted with after being run in a sandbox. Only one returned a hit: asycfilt.dll, a DLL involved in a June 2010 security update.
This illustrates a single point: even when you pull together a large amount of data that you think will show something, you might still not find anything. Making searches quick, simple, and easy is a prerequisite for this type of research; my ‘whim’ search is valuable only if anyone can do it quickly.