- Patrick Miller, Sr. Compliance Engineer, Security for WECC [basically Mr. CIP at WECC] announced he is leaving WECC for an as-yet-unnamed consulting firm. Patrick was also a key founder of EnergySec before joining WECC and has a record of getting things done. Good luck to Patrick, and congrats to the company that snared him.
- In case you missed it, a replay of the Cyber Shockwave exercise can be seen in multi-part videos on YouTube.
- The ICSJWG agenda is out via email. Three tracks, lots of interesting presentations. Looks good. Will add a link when available.
Archives for February 2010
Two weeks ago I brought up the topic of sending data from control networks to a Security Event Manager (SEM) on the enterprise network. This week I would like to discuss reasons why you would want to send security data from the control network to the enterprise network.
One of the more obvious reasons to send control network security data to an enterprise SEM is centralized management of security data. Collecting all security-related data in one place makes detection of security events easier and can improve response time. It also allows the security team, rather than the operators, to deal with security events. While operators should be concerned with security, it should not be their primary job.
By allowing control network security data to be sent to the enterprise SEM, the security department has a better understanding of security events. If a virus starts attacking systems on the network, the SEM will help pinpoint where the virus came from. Without a centralized repository for security data, it may take days to determine exactly where the virus entered the network. If the virus started in the control network and spread to the enterprise network, the security team could rapidly determine how the virus was delivered, such as email, web or sneakernet, and possibly implement ways to stop future occurrences. Conversely, should the virus start on the corporate side, the security team could assess the damage, if any, to the control network.
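The entry-point question above mostly comes down to finding the earliest matching event across both networks, which a centralized SEM makes trivial. Here is a minimal Python sketch of that idea; the event records, host names and signature string are all hypothetical stand-ins for real SEM data:

```python
from datetime import datetime

# Hypothetical security events as a central SEM might collect them
# from antivirus consoles and sensors on both networks.
events = [
    {"host": "hist01",  "network": "control",    "time": datetime(2010, 2, 10, 9, 42),  "signature": "W32/Example"},
    {"host": "eng-ws3", "network": "control",    "time": datetime(2010, 2, 10, 8, 15),  "signature": "W32/Example"},
    {"host": "mail01",  "network": "enterprise", "time": datetime(2010, 2, 10, 11, 3),  "signature": "W32/Example"},
]

def earliest_infection(events, signature):
    """Return the first host to report a given malware signature."""
    matching = [e for e in events if e["signature"] == signature]
    return min(matching, key=lambda e: e["time"])

patient_zero = earliest_infection(events, "W32/Example")
print(patient_zero["host"], patient_zero["network"])
```

With all the data in one repository, the same one-line query answers "where did it enter?" whether the virus started on the control side or the corporate side.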
The Portaledge Meta Event release is now available to Digital Bond site content subscribers. It is also recommended that all adopters of Portaledge grab the latest releases of the Availability and Enumeration packages that accompany this release.
Portaledge is Digital Bond’s security event manager (SEM) that leverages OSIsoft’s PI ACE engine to monitor for, correlate and aggregate potential security events on a control system network. Portaledge receives input from a variety of the PI IT interfaces and uses that data to trigger security events. Portaledge then correlates the events on commonalities and aggregates the events into event chains. See Digital Bond’s SCADApedia for more information on Portaledge.
The Meta Event capability allows for the correlation of events across event classes providing system administrators a more holistic understanding of what is occurring on their systems.
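Outside of the PI ACE engine, the correlate-and-aggregate idea behind Meta Events can be sketched in a few lines of plain Python. Everything here is hypothetical (the event classes, sources and details are made up) and is only meant to illustrate correlating on a commonality and flagging chains that span more than one event class:

```python
from collections import defaultdict

# Hypothetical events from different Portaledge event classes
# (availability, enumeration, ...), sharing a source host field.
events = [
    {"class": "enumeration",  "source": "10.0.5.17", "detail": "port scan"},
    {"class": "availability", "source": "10.0.5.17", "detail": "historian service restart"},
    {"class": "enumeration",  "source": "10.0.5.99", "detail": "SNMP walk"},
]

def build_event_chains(events):
    """Group events sharing a source into candidate event chains."""
    chains = defaultdict(list)
    for event in events:
        chains[event["source"]].append(event)
    # A chain spanning more than one event class is a candidate
    # meta event worth an administrator's attention.
    return {src: evts for src, evts in chains.items()
            if len({e["class"] for e in evts}) > 1}

meta_events = build_event_chains(events)
```

A lone port scan or a lone service restart is noise; the same source showing up in both classes is the kind of holistic signal the Meta Event capability surfaces.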
We hope to follow up this release soon with packages that will assist in meeting NERC CIP requirements through the deployment and use of Portaledge, and a visualization tool to display security metrics at a glance.
One of the rules we try to live by and inculcate in our clients is “don’t try or promise the impossible”. It is a simple, certainly not brilliant, concept, but it avoids a path doomed to failure, frustration and wasted effort.
An example of failing to follow this rule was the pull quote from Michael McConnell, an EVP of Booz Allen Hamilton’s national security business and a former director of the National Security Agency and of national intelligence. “The government’s role will change to become more active,” he said. “We’re going to morph the Internet from ‘.com’ to ‘.secure.'” He said this during a hearing of the Senate Committee on Commerce, Science and Transportation; see the related article and video on CNET.
This is related to the Advanced Persistent Threat [APT] meme that we have been blogging and talking about since S4. One of my questions at S4 was, if APT is in fact persistent and you cannot remove the threat from your network, what should an organization do? Shortening and paraphrasing the answer from Kris Harms of Mandiant: an organization should focus its cyber security resources on the systems and data that are most important, rather than spreading the effort across a large enterprise and achieving only the expected failure.
It’s all about prioritization and learning to live with reality. We do this with almost every other area of our personal and business lives with different expected levels of reliability and security. Why do we hold to the illusion that cyber security will be different?
PS – An ancillary rule is “new organizations and projects should go for the quick win”. I was amazed that ISCI tackled such a difficult first task, a protocol stack certification that could be tested by multiple vendor solutions. They had money, buy-in from key players, momentum and good PR; all that was needed was some simple-to-write-and-test certification criteria, such as minimal elements and evidence of a Security Development Lifecycle. It could have come out in six months and made ISCI a player. Now, years later, ISCI is still stuck in the mud and its future is uncertain.
When we are working with asset owner clients I often find myself thinking or saying, “If I was responsible for the security of this control system I would …” These are usually related to issues of implementing and maintaining an acceptable security posture over time. Customizing Bandolier Security Audit Files would be high on my list.
I’m not going to go over Bandolier in detail again; Jason covers it regularly and a lot more info is available in the SCADApedia. The Bandolier Security Audit Files we deliver through our subscriber content and through the vendor support systems represent, and audit against, the optimal security settings for each control system component. However, we all know there is some level of customization in control systems, so a number of the audit tests could fail and yet the system could be in exactly the configuration an asset owner would want. Some examples:
- an asset owner loads an approved application from a third party onto a historian or EWS. Bandolier will check the open ports and will list any third party ports as non-compliant with the minimum required port list.
- an asset owner has a different policy for certain security parameters such as password length or log event processing. This different policy could be stronger or weaker, but if it does not match the optimal policy determined by guideline documents, the vendor and Digital Bond, the audit test will fail.
- an asset owner uses non-standard file or directory names for the control system application. This will cause an audit test to have inconclusive results. Not a fail, but not a pass either.
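To make the second bullet concrete, here is roughly what a customized password-length check looks like in the Nessus compliance-check (.audit) syntax that Bandolier files use. The description text and the value of 12 are made up for illustration; an asset owner would substitute the length from their own documented policy:

```
<custom_item>
  type: PASSWORD_POLICY
  description: "Minimum password length meets site policy"
  value_type: POLICY_DWORD
  value_data: 12
  password_policy: MINIMUM_PASSWORD_LENGTH
</custom_item>
```

Changing one value like this keeps the audit honest: the test now passes because the system matches the documented local policy, not because the check was simply deleted.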
So if I was responsible for maintaining and auditably proving security for a control system, I would run the default Bandolier Security Audit Files, look at all failed and inconclusive tests, and determine whether the settings were correct for my control system. For the settings that vary from the Bandolier recommendation and are deemed and documented to be correct, I would customize or delete the specific audit test.
I would repeat this process until I had a customized Bandolier Security Audit File that passed 100% of the audit tests. Some of the vendors who participated in Bandolier are going to offer this customization as a service, and the good news is it is not difficult or time-consuming.
In fact, if you want to learn how to customize a Bandolier Security Audit File, come to the training course on April 5th in San Antonio. This is the day prior to the ICSJWG event for convenience, but the training is not part of ICSJWG [See the course outline – – – register for the course]. The morning module will teach you how to use and customize Bandolier.
- Nate Lawson reverse engineers a smart meter that PG&E partially installed at his house [ht: Matt Franz, @frednecksec]. Actually, he reversed the same radio module in a water meter he was able to buy on eBay.
- The UK Centre for the Protection of National Infrastructure [CPNI] has posted nine control system security guideline documents. Past CPNI documents have been high quality so they are worth a look. Examples include Establish Response Capabilities and Manage Third Party Risk.
- The agenda is now published for the SANS Process Control & SCADA Security Summit, March 29 – 30 in Orlando, FL. Good news: they have picked up on the APT theme introduced to control systems at S4 in January and have the two lead-in presentations focused on APT. The event is pricey at $1945, so I would wait for the agenda of the free ICSJWG event, April 6-7 in San Antonio, to come out before deciding between the two. [Actually, a small gripe about control system security events that open registration before an agenda is published]
- The SANS event does have three related training opportunities. Most interesting is the five-day Ethical Hacking Course for Control Systems taught by Jonathan Pollet and Joe Cummins of Red Tiger Security. There is also a two-day course on “Critical Infrastructure Protection” taught by Marcus Sachs. This course is restricted to US and Canadian citizens and UK/Aus/NZ government employees. Really guys? I have yet to attend one of these closed events where dangerous, radical knowledge was revealed. Finally, the DHS and DOE courses will be available for free.
- A blog post on the Wurldtech site, which aside from the blog is now oddly content-free, hints at something that may be very big for the community. It discusses a group of major European oil and gas end users agreeing on a set of cyber security best practices and then having a certification program for vendors related to these best practices. At the end of the post: “the requirements document and certification program has been finalized and is being piloted with a selected group of five equipment manufacturers.” Over the past couple of years, I have been a skeptic in this blog of many of the Wurldtech initiatives that seemed grandiose and mostly PR, but if you get a few large purchasers together they can have a dramatic impact on vendor efforts and prioritization. From the early days Wurldtech has had strong proponents in oil/gas, so this effort warrants some hopeful watching. [FD: Wurldtech is a past Digital Bond client]
This past Wednesday, SANS and CWE released their 2010 list of the top 25 programming errors. The list contains many errors that are present in control systems, whether developed recently or a few years back. For example, Daniel Peck of Digital Bond wrote a paper showing what can happen when error #8 is introduced into a system. This isn’t to say that all the errors on the list will show up in control systems (e.g. #23 – URL Redirection…), but enough do to make this an interesting read. Now on to some general thoughts regarding the list.
Specifically, 11 out of the 25 errors have a low remediation cost and have been used in 0-day exploits. This means they are easy to exploit because they are easy to find within systems. We’ve recently been discussing several ways to minimize vulnerabilities on the blog (e.g. SDLC, change management), but we have yet to discuss the awareness aspect of the problem. We can scream at the top of our lungs that we need better designed/patched/hardened systems, but we also need to ensure that software engineers and developers are taught about these mistakes and how not to make them.
Testing has always been part of making changes to a control system. When a change is made (e.g. new component, upgrade, patch), we have to know if everything is still going to work. Progressive asset owners have incorporated a security element into their functional testing for a while now. Some would even argue that it’s inherently security-oriented anyway because of its close tie to availability. Regardless of how you want to classify it, security testing has become an integral part of managing change. For some organizations, NERC CIP-007 R1 has been the impetus.
“The Responsible Entity shall ensure that new Cyber Assets and significant changes to existing Cyber Assets within the Electronic Security Perimeter do not adversely affect existing cyber security controls. For purposes of Standard CIP-007, a significant change shall, at a minimum, include implementation of security patches, cumulative service packs, vendor releases, and version upgrades of operating systems, applications, database platforms, or other third-party software or firmware.”
The NERC FAQ for this requirement expands on some examples of what this testing should include – basic port scans, file integrity checking, review of active user accounts, validation of security-related functions, and so on. We recognized early on that there was overlap here with the Bandolier security audit files, which already help with many of those things. Some of the Bandolier vendor participants recognized the testing benefit as well and use the audit files internally to help with patch testing and other validation processes.
Experience has shown that there are additional tweaks to the audit files that can make testing even more relevant. One example is file integrity checking for Linux and Unix systems. Tenable provides a tool known as c2a (“configuration to audit”) that, among other things, will take as input a list of files and directories and generate an audit file that checks the md5 hashes of everything you specify.
Is Nessus the best file integrity checking tool out there? Maybe not. But if you’re going to be using the audit files and port scanning functions anyway, why not let it pull back this information as well? It means you’ll have a central report that can be easily compared pre-change and post-change to verify that the security level hasn’t deteriorated and there were no unexpected changes in existing files. Export the reports and you’ve got some hard documentation for compliance purposes.
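The pre-change/post-change comparison is simple enough to sketch outside of Nessus. This hypothetical Python fragment hashes file contents (simulated in memory here; a real check would read the files from disk) before and after a change and reports anything that differs:

```python
import hashlib

def md5_of(data: bytes) -> str:
    """MD5 hex digest of a blob, standing in for hashing a file on disk."""
    return hashlib.md5(data).hexdigest()

# Baseline captured before the change. File names and contents
# are made up for illustration.
baseline = {
    "scada.exe":  md5_of(b"original binary"),
    "config.ini": md5_of(b"PollRate=5"),
}

# The same files hashed again after the patch or upgrade.
after_change = {
    "scada.exe":  md5_of(b"original binary"),
    "config.ini": md5_of(b"PollRate=1"),  # modified by the change
}

# Anything whose hash moved is an unexpected (or at least
# review-worthy) change that belongs in the test report.
changed = sorted(f for f in baseline if baseline[f] != after_change[f])
print(changed)
```

That diff of hashes is essentially what the c2a-generated audit file gives you, with the advantage that Nessus rolls it into the same report as the port scans and configuration checks.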
Please note the subtle difference between this type of testing and the Bandolier project as a whole where we are going through the work of identifying the optimal security settings and providing a way to audit that over time. For testing purposes you can use a baseline of the current system, even if it is less-than-optimal from a security standpoint. So even if you haven’t been through a server/workstation hardening process or your vendor doesn’t yet participate in Bandolier, audit files can be used for simple validation testing. Example: before the change, the NetDDE service was disabled — is it still disabled after the change? There are tools available, such as the Windows Nessus Policy Creator (WNPC) that make it easy to capture a “current state” configuration.
We’ll cover some of the specific tools in future posts but I first wanted to address the broader possibilities of using Bandolier and Nessus for your test procedures. If you’d like to learn more about what’s possible with the audit files, check out the training we are offering in April. You’ll get hands-on experience using and customizing the Bandolier security audit files.
I have had talks with a number of other vendors about how control system life cycles will have to change, and slowly are changing. For a long time it has been buy and install a SCADA or DCS, change it as little as possible for ten to twenty years, and then completely replace the system. In SCADA it is common to have different lifecycles for the control center [realtime server, historian, HMI, EWS, web portal,…] and the field devices, but in almost all cases it is what IT calls a forklift upgrade. You haul the old stuff out and replace it.
When the community decided to embrace Windows, databases, web servers, JRE, sharing process data with the corporate network, sending scheduling info to the control center, … all with many benefits, we gave up the idea that we could install and forget. We just didn’t know it or admit it. But change is hard for a conservative group like our community so it has been avoided or fought for more than a decade now.
The forward-thinking vendors are already looking at more frequent upgrades, patching, migration of systems and, very importantly, the way control systems are budgeted and paid for. It is not going to be a huge, once-every-decade-or-two expenditure. It is going to be larger annual expenditures for the software, refreshing hardware every three years, and planning for the personnel cost of testing and implementing more service packs and upgrades. It means actually committing to staying on supported versions of everything: the OS, supporting apps and the control system apps.
It is a struggle to convince customers that this is the way forward, but it is starting to happen, slowly. The other options are to stay with a fragile system or find some vendor that is going to commit to a streamlined, completely purpose-built system with all proprietary protocols – – – like the old days.
Digital Bond’s class, Using and Customizing SCADA Security Tools, was a sellout when first offered the day prior to S4 last month. It teaches advanced students how to use and customize the Bandolier Security Audit Files and the SCADA IDS preprocessors, plugins and signatures. The goal is to help asset owners and vendors take full advantage of the results from our Dept of Energy and DHS funded research projects.
This is a technical course and students should have a solid background in networking. Students will not just be using Nessus and Snort. They will be writing new audit checks in a variety of different categories that use the Nessus compliance plugin so they can customize a Bandolier security audit file for their system. Students will be writing Snort rules that leverage the SCADA plugins and preprocessors developed in our Quickdraw project and even seeing the internals of a custom plugin and preprocessor.
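For a taste of the rule-writing portion, here is roughly what a plain Snort rule watching Modbus TCP traffic looks like. In Modbus TCP the function code sits at offset 7, just past the 7-byte MBAP header; the SID, message text and choice of function code 0x06 (write single register) are illustrative assumptions here, and the actual Quickdraw rules and preprocessors are considerably more thorough:

```
alert tcp any any -> any 502 (msg:"Modbus TCP write single register request"; \
    flow:to_server,established; content:"|06|"; offset:7; depth:1; \
    sid:1000001; rev:1;)
```

A rule like this alerts on any write attempt toward a Modbus slave, which on many control networks is rare enough traffic to be worth logging every time.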
We are offering the class again on April 5th, the day prior to ICSJWG. For the ease of the masses attending ICSJWG, the class will be held in the same hotel as ICSJWG, but to be clear, this is not an ICSJWG activity. It is a Digital Bond training class, and the cost of the day-long class is $600.