CRISP (Cyber Security Risk Information Sharing Program) is a US Department of Energy (DoE) program with two related efforts underway to meet its goals.
There can be cases where the Market, in this case energy companies, is not sufficient to support a product or service. The Market may be interested in trying out the new offering, but not at the price required to sustain the business. The government or another entity can step in and subsidize some or all of the product or service. The subsidy should be short term, perhaps 1 to 3 years. If the market does not perceive the value, or the cost does not come down, the offering is not sustainable.
DoE writes “This unique package and specialized low pricing represent a highly compelling enhancement to CRISP cyber security and the protection of energy related critical infrastructure.” The concept is that the energy sector will see the value of this data and pay full price for it after two years.
This isn’t strictly a market failure issue because the offering exists and energy companies can buy it without DoE help. However, it’s a small expenditure for the US Government, and it does not get them into the threat intelligence business. It’s low risk and allows participants and DoE to see if this information has value.
The more troubling aspect is the NERC/ES-ISAC/PNNL effort that forms the main part of CRISP. This could be a 3000-word post on its own, but here is the shorter, bulleted list of the major problems:
Pacific Northwest National Laboratory (PNNL) is performing the analysis of the collected data at $7.5M for one year. Why is PNNL competing with industry? This is a proven, competitive and growing industry that is more talented and experienced than PNNL in this area. The $7.5M covers sensors at 28 companies, about $267K per company for an Internet sensor, plus roughly another $33K per company paid to NERC.
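The per-company arithmetic works out roughly as follows (a quick sanity check, assuming an even split of the contract across the 28 participants):

```python
# Quick sanity check on the CRISP per-company cost figures.
total_pnnl = 7_500_000   # PNNL analysis contract for one year (USD)
companies = 28           # participating companies
nerc_fee = 33_000        # approximate additional per-company fee paid to NERC

per_company_sensor = total_pnnl / companies
per_company_total = per_company_sensor + nerc_fee

print(f"Sensor cost per company: ${per_company_sensor:,.0f}")  # ~$267,857
print(f"Total cost per company:  ${per_company_total:,.0f}")   # ~$300,857
```

Compare that $300K per year per company to commercial managed monitoring services and the pricing disparity is stark.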
There is an indisputable conflict of interest in NERC, the ERO/regulator, pushing an overpriced “security service” to the companies it regulates and can fine for not meeting the CIP regulations. They can talk about Chinese walls and other separation, but this exacerbates the existing conflict of interest in NERC serving as both the regulator and the ES-ISAC.
Moving forward, NERC is considering staffing up the ES-ISAC to take over the PNNL role. So NERC is going to build a threat intelligence analysis company; it goes from bad to worse.
Finally, these sensors are not collecting data from an ICS. “The CRISP ISD is a network device which uses commercial off the shelf hardware. It’s placed at the transmitting site’s (e.g. utility) network border, just outside the corporate firewall.” The case that PNNL’s or some other organization’s energy expertise is critical might be persuasive if an ICS security perimeter or interior were being monitored, but NERC/PNNL’s relative lack of experience and data feeds will put this offering at a major quality disadvantage to commercial competitors. And that is not even considering the pricing disparity.
CRISP has two differentiators from other commercially available cyber risk monitoring services. The first is the intent and ability to integrate other cyber-related threat information provided through governmental sources with the cyber threat information gathered from the ISDs installed at the participants’ sites. Second is the ability of the program to look across organizations within the electricity subsector, identifying correlations and trends.
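As a rough illustration of that second differentiator, cross-organization correlation can be as simple as flagging indicators seen at multiple participants. This is a minimal sketch with invented participant names and data; CRISP’s actual analytics are not public:

```python
from collections import Counter

# Hypothetical per-participant indicator sightings (e.g. suspicious source
# IPs observed at each site's ISD). Names and data are illustrative only.
sightings = {
    "utility_a": {"203.0.113.7", "198.51.100.22"},
    "utility_b": {"203.0.113.7", "192.0.2.44"},
    "utility_c": {"203.0.113.7", "198.51.100.22", "192.0.2.99"},
}

# Count how many participants observed each indicator.
counts = Counter(ip for ips in sightings.values() for ip in ips)

# An indicator seen at two or more organizations hints at a subsector-wide campaign.
sector_wide = sorted(ip for ip, n in counts.items() if n >= 2)
print(sector_wide)  # ['198.51.100.22', '203.0.113.7']
```

The hard part, of course, is not the set intersection; it is the quality of the sensors and the analysts behind them.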
CRISP, like Cyberstorm, LOGIIC and many others, will undoubtedly be called a success. You can write the press release before the event or project is finished. The criterion for success is that the various organizations come together to participate in the project or event.
You can visualize the press release and presentations already. A list of 28 utility companies, DoE, raw monitored traffic numbers, events, bulletins written and a couple of quotes from senior executives. The Norse Corporation portion of CRISP will be easy to evaluate. Do energy sector companies purchase this service from Norse or their competitors when the subsidy ends?
The criteria for success of the NERC/PNNL effort are more difficult. A larger program is as likely to be due to marketing pressure as to any value of the information. It’s the C-level / Director question … are we part of this CRISP security thing? If it is too late to stop, NERC should be working on, and announcing in early 2015, plans to spin CRISP, and I would argue the ES-ISAC, off to a commercial entity. Then the market would determine whether CRISP is a success.
This past Sunday’s edition of This Week With George Stephanopoulos had a 7-minute segment on critical infrastructure cyber security prompted by the BlackEnergy malware. The lead-in by ABC’s Pierre Thomas was particularly bad and conflated attacks on companies that run the critical infrastructure with attacks on the critical infrastructure itself. They even went back to the 2007 DHS Aurora footage while making it appear as if this were a recent data point.
An important and easy-to-understand point still seems to escape mainstream media reporting. Brand new, top-of-the-line ICS being deployed in the critical infrastructure, not just ten-year-old legacy systems, are insecure by design. If an attacker gets through the perimeter, he will have complete control of most ICS. My hope is the “less security than your ATM/bank cash card” framing will eventually catch on.
One very positive aspect of this segment was Richard Clarke’s comments. He hit a lot of the points I made in my S4x14 ICSage talk, to a much broader audience and in very clear language. Some of the gems:
“half dozen countries that have already placed logic bombs” and he specifically included the US on this list
“you want to have the ability to push a button when the war starts” when talking about pre-staging ICS cyber weapons
“tried this with their potential enemies” again indicating this is already happening
Mr. Clarke also commented that most of these ICS cyber weapons will never be used if they are deployed by nations. However, the risk of a less responsible group with less to lose deploying and using these weapons is his concern. While brief, his comments were literally the best I have seen in the popular media in the last decade.
Congressman James Langevin was also on the program, and he echoed a lot of what Richard Clarke said. However, when he came to discuss solutions, his big answer was for Congress to pass an information sharing law. If DHS and the US Government can’t say out loud the most basic and important information, that these insecure by design systems in the critical infrastructure need to be upgraded or replaced in the near term (I say 3 years), what practical use is an information sharing law?
The CLUSIF (Club de la sécurité de l’information français) has issued “an overview of existing documents, standards, guidelines and best practices” (link is for the document in English). The 24-page document gives an overview of the most popular and useful documents, and some advice on determining which documents might be most helpful to the reader based on a variety of criteria.
Waterfall Security, best known for their Unidirectional Security Gateways, has announced Application Data Control. While technical details are still limited, it appears to add deep packet inspection to their product line.
Perry Pederson of Langner Communications has written a 28-page RIPE Crosswalk document that compares and maps RIPE to NERC CIP, NEI, WIB, NIST CSF, …
We are well into the second tier pricing of S4x15 Week tickets (51-100). The price goes up $100 each tier, so register early to save money. We were happy to add Tim Yardley as a speaker this week, as well as some additional OTDay sessions.
We added a bunch of info to the S4x15 site including the newly designed banner, see below. We are almost through the first 50 tier ticket pricing (42 sold).
“DHS ICS-CERT” and FBI announced, a bit clumsily, that they will be touring 13 cities across the US and providing “a series of SECRET briefings … for cleared asset owners/operators. … These briefings will provide additional context and information about the BlackEnergy campaign as well as the Havex malware that both targeted industrial control systems.” Sounds like a worthwhile program if they have unique information. I always wonder why these briefings happen after, rather than before, the information is released publicly by researchers and vendors. This is related to an ICS-CERT Alert issued this week.
Some good news on the INL front, they recently added Andy Bochman to the team. I’ve always admired Andy’s writing on Smart Grid security and other ICSsec matters when at IBM and in his own startup. Good luck Andy.
First, DHS needs to stop putting everything they do under the ICS-CERT umbrella. There is a CERT function, and there is a bunch of other non-CERT activity. The naming confuses everyone, and you would almost think that is intentional.
Next, as Reid suggested, they should be very clear about their vulnerability handling processes. Right now it is coordination of what researchers submit and the vendor response. There is no analysis, no evaluation of impact, no validation of the vulnerabilities, and no validation of the fix. If the vendor says it is fixed, the alert or advisory says it is fixed. The vendor is not even asked how they fixed the vuln. The process, the best that we can tell, is simply coordination of messaging from other people’s info. Figure out what boilerplate fits best, pull some info from the vendor announcements, and put out an Alert or Advisory.
You probably surmise from my tone that I think this is inadequate and actually of little use. It is particularly harmful when they measure success based on the number of alerts and advisories issued. My recommendation would be to shut ICS-CERT down and just roll it into US-CERT. The whole purpose of ICS-CERT at its creation was to provide second level support for US-CERT when ICS vulns were found. We did not need to replicate the existing coordination function.
However, I realize some see value from the Alerts and Advisories, so I would count it a success if ICS-CERT was simply forthright about how they handle ICS vulnerabilities and generate Alerts and Advisories. Reid is right that the public has assumptions about what they are doing that are totally wrong.
I’ve been sitting back and watching to see what activity Reid’s S4xJapan talk would generate. When he found the vulnerabilities in Version 2 of CoDeSys, it generated some Advisories that eventually stated the problem was fixed in Version 3, based on vendor-provided information. As we now know, Version 3 has the same vulnerabilities as Version 2.
Yet two weeks later there has been no correction or updated Advisory. This is an issue that affects PLCs and RTUs from over 100 different vendors, and many of these vendors and their customers believe all is well since they are running Version 3 of CoDeSys.
Reid showed the exploits on two Japanese products, one from Hitachi and the other from Sanyo-Denki. The latter is used to control robot arms. There have been no Alerts or Advisories for these specific examples or the hundreds of affected products.
To be clear, I’m not saying ICS-CERT should jump every time a researcher demonstrates a vulnerability. The whole vulnerability issue in ICS is overplayed, given that ICS-CERT does not consider insecure by design a vulnerability.
They should have a clear and public set of procedures for vulnerability handling so the community can understand what they can expect and how they should interpret the Alerts and Advisories.
One of the most thought-provoking sessions at S4xJapan was Wataru Machii of the Nagoya Institute of Technology’s session on Dynamic Zoning in an ICS. One of the great things about S4xJapan is that it provides videos and sessions in the Japanese language. The downside is they are not accessible if you don’t speak or read Japanese.
The basic concept is that the security zones and conduits in an ICS should change dynamically based on the state of the ICS. There are two parts to this. The first is how to set up the zone and conduit states that you will switch between based on ICS state. The second is what triggers the change in state. Machii san had good ideas on both of these questions, but it is an area worth further investigation to identify a methodology that can be applied across sectors and customized by owner/operators.
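One way to picture the first part is a lookup from ICS operating state to the conduit rules in force. This is a minimal sketch of my reading of the concept, not Machii san’s implementation; the states, zones, and rules are invented for illustration:

```python
# Hypothetical mapping of ICS operating state to the active conduit policy.
# A real methodology would derive these from a zone-and-conduit analysis
# of the specific plant; everything here is illustrative.
POLICIES = {
    "normal": {
        ("engineering", "control"): "allow",   # routine maintenance traffic
        ("corporate", "dmz"): "allow",
    },
    "startup": {
        ("engineering", "control"): "allow",
        ("corporate", "dmz"): "deny",          # tighten during a sensitive phase
    },
    "incident": {
        ("engineering", "control"): "deny",    # lock down all conduits
        ("corporate", "dmz"): "deny",
    },
}

def conduit_action(ics_state: str, src_zone: str, dst_zone: str) -> str:
    """Return the conduit rule in force for the current ICS state (default deny)."""
    return POLICIES[ics_state].get((src_zone, dst_zone), "deny")

print(conduit_action("normal", "engineering", "control"))    # allow
print(conduit_action("incident", "engineering", "control"))  # deny
```

The second part, deciding what triggers the transition from "normal" to "incident", is the harder and more interesting question.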
This session was the inspiration for our S4x15 Great Debate: Can Operators Use a Security Display. The control room is often staffed 24×7 by Operators, but they have little security knowledge. The S4xJapan session made me consider if the Security Display could be simple enough that an Operator could trigger a change in the dynamic zone based on the information in a security display.
This is only one possibility.
In the Great Debate we will have attendees submit their single-screen security display and, importantly, explain the defined Operator actions based on the information shown in the display. A handful of attendees will explain their security displays, others will be flashed on the screen for consideration, and there will be skeptics’ voices heard, I’m certain.
By the way, this was one of two sessions at S4xJapan from the Nagoya Institute of Technology. They have an active ICS security program there and seem to be working on research with real world implications.
At S4xJapan in Tokyo I presented on a couple of things; this post is about Havex. During the talk I spoke slowly and plainly, as the conference was being simultaneously translated into Japanese. Altering your speaking style to help translators is a good exercise that everyone should try. It forces you to be concise and use simple language, but be warned: it’s a bit dry.
There have already been some excellent articles and research published on the ICS-relevant aspects of Havex. Regarded as the second major piece of ICS malware, Havex garnered some media attention, which prompted the need for more analysis, writeups, and talks like this. The goal of the talk is to give an overview of what Havex is and what ICS components it has, and then to dive into the codeflow of the downloadable OPC scanning module. At the end of the talk, hopefully the What and How questions are answered, but Who and Why still remain.
After the presentation we had some good discussion about OPC module internals/encryption as well as general ICS malware campaigns. The conference did well to foster that type of communication and I appreciated working with everyone there.
I received my samples from insecure Command & Control servers as well as from professional contacts. Shoutouts to Kyle Wilhoit, Daavid, other Kyle, Kaspersky, and Daniel.
Google is maybe a little TOO helpful in trying to save us from ourselves. In attempting to forward samples, I discovered that Google seems to try basic password attempts on encrypted zip files. Putting the samples in a zip archive with the standard password “infected” was insufficient to get past Google’s virus detection, but changing to “infected1234” worked fine (without changing any file names). Creepy….
For a summary, FTDI (Future Technology Devices International) released a driver update via Windows Update yesterday. The driver update intentionally bricks unauthorized copies of FTDI’s popular USB to Serial converter by overwriting the USB product ID. Reports are that their mechanism for selecting devices was imperfect, resulting in the driver also bricking at least one of FTDI’s own legitimate chipsets, the FT2232H.
In Digital Bond Labs, one of the things that we discuss with vendor clients is hardware security. When considering how to build hardware security, one of the questions we consider is: what can a vendor do to protect their intellectual property from being cloned?
We’ve seen and purchased a lot of cloned hardware for sale on eBay over the years. The equipment mostly comes from the Shenzhen/Guangzhou areas of China, and sells for a small fraction of the normal retail price of legitimate equipment. Everything from specialized PLC programming cables (chiefly for Siemens equipment) to knockoff digital protective relays can be found in this way. Knockoff quality ranges from pretty good and fully functional equipment, to equipment that is well-packaged and turns on, but which does nothing functionally.
FTDI’s response is a bit surprising, especially given where its devices are used. FTDI chipsets are a daily part of life for many engineers who work with critical infrastructure, medical devices, and other big industries. It is basically impossible for an end user to know their supply chain, to know whether every FTDI chip in USB devices they own is legitimate. Cables using FTDI chips often come included with hardware that has a serial port, such as network switches, PLCs, and other embedded devices. The chips are also frequently used internally in devices with USB ports. An end user will have a difficult time telling which of their cables or devices is using a legitimate FTDI chip, and which may contain a clone. It can be difficult for manufacturers of these devices to even know, as supply chains can be surprisingly opaque. There are some good hackers working on detection scripts, although there may always be corner cases that are not properly detected.
Unauthorized hardware cloning is like insecure-by-design software in one way: once cloned hardware is out in the wild, it’s really too late to try and secure it. While FTDI found a way to disable the unauthorized equipment, the blowback from doing so is going to be extreme, precisely because it is the end users that wind up with the broken equipment. I immediately picture a nightmare scenario in which an engineer absolutely needs to reprogram a piece of field equipment in an emergency, and cannot because they didn’t know their chip was not legitimate. Cutting that engineer off is clearly not the right solution. I for one will recommend that my vendor clients discontinue using FTDI’s chipsets in future products in favor of competitors like Prolific. This incident suggests that FTDI’s management is too reactionary and is not thinking in the long term about their own security or the safety of end users. A better idea would have been to implement detection in the driver similar to the detection scripts mentioned above, and simply warn the end user that the driver would stop communicating with the illegitimate chip after a certain date.
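A detect-and-warn approach could be as simple as classifying devices by their USB descriptors instead of rewriting them. As a rough sketch (the helper is hypothetical; the values reflect public reports that affected clones were left with product ID 0x0000 while keeping FTDI’s vendor ID 0x0403):

```python
FTDI_VID = 0x0403     # FTDI's USB vendor ID
ZEROED_PID = 0x0000   # product ID that reports say the driver wrote to clones

def check_device(vid: int, pid: int) -> str:
    """Classify a USB device by its descriptor, warning rather than bricking."""
    if vid != FTDI_VID:
        return "not an FTDI device"
    if pid == ZEROED_PID:
        return "WARNING: product ID has been zeroed (likely an affected clone)"
    return "FTDI device responding normally"

print(check_device(0x0403, 0x6001))  # FTDI device responding normally
print(check_device(0x0403, 0x0000))  # WARNING: product ID has been zeroed ...
```

A warning like this leaves the decision, and the working cable, in the end user’s hands.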
Adding good security to a chipset, or any hardware design, takes some dedication and has a cost associated with it. In this case, FTDI wants it both ways: they want to avoid paying the upfront cost of building their device in a manner that is difficult to clone, yet they also want to stop illegitimate chips. Thankfully, they have relented, but only after seriously harming both their reputation and the legitimacy of software updates.
We have opened the S4x15 website and registration. There still is a lot to add to the site, like the Conference Hotel, ICS Village CTF, Social Events, Area Info, FAQ, … But we have always believed it is important to provide attendees with information on the sessions and speakers so you can make an informed decision.
The agenda looks great and very different from anything you have seen before at an ICSsec event.
The Friday activities, ICSage and Advanced ICSsec Training are still in progress.
Register right away if you want to get one of the first 50 tickets at the same price we have charged every year since 2007. We will be providing event updates on this site. There is a lot to say about the event, but we wanted to get this open for registration.