All the fuss and tension over the security impact of Windows XP reaching its end of life next week is wildly overblown for the ICS community.
Yes there still are a lot of asset owners running Windows XP in their ICS environment. And yes, many of these asset owners are in critical infrastructure sectors. There is also a very high direct correlation between asset owners running critical infrastructure on XP and asset owners who are not applying security patches.
It doesn’t matter if security patches exist or not if you are not going to apply them even as infrequently as annually. The fact that Microsoft is not issuing patches doesn’t change these asset owners’ security posture one bit. In fact, some are secretly happy about this because they now have an excuse for why they can’t patch.
Owner/operators need to come to grips with the fact they are running mission critical IT with ICS applications. Mission critical IT requires care and feeding and periodic upgrades. The days of install and don’t touch for decades have been gone for almost two decades now, ever since the decision was made to move to Windows, Oracle, Ethernet, etc.
The security leaders in the ICS community, both asset owners and vendors, have plans, and have implemented these plans, to address XP and other software obsolescence issues. They are well past the approach of install and don’t touch that leads to lurking fragility.
And it’s not as if the XP end of life snuck up on us.
Let’s talk a bit about Microsoft. It is entirely reasonable for Microsoft to end support for XP. It is a business decision by Microsoft. Owner/operators cannot on one hand point to cost and the bottom line on why they can’t improve security and then ask a vendor to sacrifice their profit.
It was ten years or so ago when Microsoft held the first Manufacturing User Group summit in Redmond. At the time the outcry from the audience was we want a manufacturing specific OS for HMI, EWS, and ICS servers, stripped down with only what was needed in ICS. Microsoft considered this, decided it was bad business, and passed on this new ICS OS. They have gone different directions with Server Core and other embedded solutions.
Over those ten years vendors have continued to develop applications that run on Windows workstation and server OS. Asset owners have bought these ICS applications. All with the full knowledge that Microsoft moves to new OS and eventually drops support for old OS. This is not a new development and should have been planned for a decade ago.
Microsoft provided ample warning of this end of life. Asset owners had years to plan to upgrade their current application to Windows 7, or move to a new application if the vendor is out of business or refuses to offer a version on a supported OS. The asset owner can choose not to, but this is not Microsoft’s problem. Yes it will cost the asset owner time and money, with time usually being the bigger issue, but again they should have a policy that they run supported software, and they had many years of warning this was coming.
Jim Gilsinn and Bryan Singer of Kenexis Consulting Corporation had a quick 12-slide/15-minute session on analyzing ICS protocols. Good information on the what and why of pub/sub in these protocols, as well as some protocol plots showing some of the challenges of analyzing these protocols.
UPDATE – The video has been added. I wrongly assumed this was the lost 15-minute session. Sorry Sean.
Sean McBride of Critical Intelligence goes into some real world examples of success and failure in ICS Vulnerability Analysis. Viewers should be aware there may be a bit of bias to point out shortcomings since this is what Critical Intelligence does for a living, but loyal blog readers and anyone with insight knows the ICS-CERT Alerts and Advisories rarely provide worthwhile analysis.
If you are looking for ICS vulnerability statistical data the first nine slides have very useful charts. The remainder of the presentation goes through some typical and important failures by ICS-CERT and vendor CERTs.
I have some hope that the vendors will learn and get better. I have little hope that ICS-CERT will improve because they have yet to admit they are lacking. The ICS industry doesn’t help by praising the fact that they are putting out so many more Alerts and Advisories than in years past. They could let US-CERT or CERT/CC handle at least 95% of these and truly use their ICS expertise to dive deep in the 5% that matter.
Some of the big names, AT&T, Cisco, GE, IBM and Intel, have created the Industrial Internet Consortium. GE has been pushing the term Industrial Internet and may be the hub of the five founding partners, who by the way hold a majority of permanent seats in the IIC. Others are encouraged to join and come along, but it’s the founding partners’ game. Expect Siemens and a couple of GE’s other big competitors to do something similar if they have not already. BTW, there is a Security Working Committee in the IIC.
Joe Weiss, who I like to call the Paul Revere of the ICS world, cancelled WeissCon 2014 due to his consulting workload. Joe’s event was the first ICSsec event and drew a good crowd of asset owners. I had heard good things about the last two WeissCons, a bit of a revival, so I’m sure this will disappoint many. Joe says it will be back in 2015.
We submitted our BACnet-discover-enumerate.nse for inclusion in Nmap so you wouldn’t need to download and install our script separately. Some minor code changes were required and are in process to meet the Nmap style and format. We will let you know when it happens.
Thomas Brandstetter was the face of Siemens CERT, most famously at BlackHat during the Beresford vulns. About a year ago he left Siemens and founded Limes Security in Austria. You can add Limes Security to the list of ICSsec training options. They have European-based courses for Managers, Engineers and more technical security courses for those who want to assess DCS and SCADA systems.
The US Government Accountability Office (GAO) issued a report entitled: Observations on Key Factors in DHS’s Implementation of Its Partnership Approach. The first bullet in the summary is humorous and sad. GAO points out that they identified information sharing as key in 2003 and problems with DHS information sharing in 2010. And they continue to beat that information sharing drum again. I can’t take US Government information sharing seriously until they say out loud and repeatedly that critical infrastructure ICS applications, devices and protocols are insecure by design and need to be upgraded or replaced now. Most of what ICS-CERT/DHS shares is noise to show they are doing something.
Security consulting firms take note that Trustwave was included in a lawsuit related to the Target breach. “Trustwave scanned Target’s computer systems on Sept. 20, 2013, and told Target that there were no vulnerabilities in Target’s computer systems. Trustwave also provided round-the-clock monitoring services to Target, which monitoring was intended to detect intrusions into Target’s systems and compromises of PII or other sensitive data. In fact, however, the data breach continued for nearly three weeks on Trustwave’s watch.”
Digital Bond has had an internal research project to develop tools that discover and enumerate ICS applications and devices. We call this project Redpoint, and we use the growing list of tools with care on ICS security assessments and other projects for our clients. They often begin as quick and dirty Python scripts, but the goal is to move as many as possible to Nmap scripts and make the most useful scripts generally available.
BACnet is widely used in building automation systems that monitor and control HVAC, lighting, fire detection, building security, … and of course it is insecure by design.
The discovery is more than just port scanning UDP/47808. The script sends a BACnet request to the port. Newer devices will respond with some helpful information; older devices send back a BACnet error message. Either way you know it is a BACnet device.
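To make the idea concrete, here is a rough Python sketch of that kind of probe, not the Redpoint Nmap script itself. The request bytes follow BACnet/IP (Annex J) framing for a ReadProperty request against the wildcard device instance; treat the exact encoding and the response-offset assumptions as illustrative, not a reference implementation.

```python
import socket

BACNET_PORT = 47808  # UDP/47808 (0xBAC0)

# A minimal BACnet/IP ReadProperty request for the object-identifier
# property of the wildcard device instance (4194303). Layout sketched
# from the spec: BVLC header, NPDU, then a confirmed-request APDU.
READ_PROPERTY_REQUEST = bytes([
    0x81, 0x0a, 0x00, 0x11,        # BVLC: BACnet/IP, unicast NPDU, length 17
    0x01, 0x04,                    # NPDU: version 1, reply expected
    0x00, 0x05, 0x01, 0x0c,        # APDU: confirmed ReadProperty request
    0x0c, 0x02, 0x3f, 0xff, 0xff,  # object-identifier: device, 4194303
    0x19, 0x4b,                    # property-identifier: object-identifier (75)
])

def classify_response(data: bytes) -> str:
    """Classify a raw UDP payload received back from port 47808."""
    if len(data) < 4 or data[0] != 0x81:
        return "not BACnet"
    # With the simplest NPDU, the APDU type byte sits at offset 6:
    # 0x30 is a complex-ack, 0x50 is a BACnet error.
    apdu_type = data[6] if len(data) > 6 else None
    if apdu_type == 0x30:
        return "BACnet device (answered ReadProperty)"
    if apdu_type == 0x50:
        return "BACnet device (returned error, likely pre-2004)"
    return "BACnet frame (unclassified APDU)"

def probe(host: str, timeout: float = 2.0) -> str:
    """Send the probe to a host and classify whatever comes back."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(READ_PROPERTY_REQUEST, (host, BACNET_PORT))
        try:
            data, _ = s.recvfrom(1024)
        except socket.timeout:
            return "no response"
    return classify_response(data)
```

Either branch of `classify_response` confirms a BACnet device, which is the point made above: a helpful answer and an error message are equally good fingerprints.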
If the device is an IP BACnet Router you can often join the BACnet network as a foreign device. This slide from BACnet.org gives you some ideas on how helpful that would be in enumerating all of the devices, including serial connected devices, on a BACnet network. Those extensions and other more intrusive capabilities we keep in house.
If it is a device compliant with the BACnet specification post 2004, the script will pull some very helpful information as you see in the second and third examples in the screen shot.
Knowing the Object Identifier and having a BACnet client will usually allow you to issue commands to the BACnet device such as change setpoint, change schedule, or change program based on the capabilities of the BACnet device.
Vendor, Firmware and Software versions would be helpful in identifying default settings, device information and known vulnerabilities, although you really don’t need a vulnerability. We find it most helpful in telling the client what is where when an unknown building automation system is found accessible to everyone on the corporate network.
Where is the discovered device? The object name and location can give you a clue or very specific information if the asset owner or integrator used these fields. Again, take a look at the examples in the screen shot. This can be very helpful in an inventory effort or assessment.
We want to be clear on what this script is not. It is not a discovery of a new protocol or protocol implementation vulnerability. It is using documented features of an insecure by design protocol. The “big hack” we did to create the script was read the specification.
We chose to start the publicly available version of Redpoint with BACnet because building automation systems are so widely deployed on corporate networks, and yes you will find many Internet accessible BACnet devices.
This BACnet script was a team effort, with Michael Toecker digging into the protocol and generating some Python scripts and sample pcaps, and Stephen Hilt writing the parsing code and converting some of the initial Python efforts into an Nmap script.
Stay tuned for additional Redpoint releases, or even better, add your ICS discovery and enumeration tool to Redpoint.
Libicki presents a nuanced argument for why cyber war/fare is significantly less revolutionary than it is often presented, a position also taken by several writers of this parish. I won’t rehearse those arguments here, except to say that Libicki is onto something fundamental here: success in the ‘fifth domain’ is often unpredictable, which makes it a very risky proposition, tactically, operationally and strategically. Says Libicki, ‘Everything appears contingent, in large part, because it is’. Hardly the basis for a grand theory of cyber war, he reasons.
There are two contentions in this paragraph that are worth some thought. The “fifth domain” is cyber, with air, land, sea and space being the other four. Are cyber weapons and cyber offensive and defensive activities in a war less predictable than the other four domains? At this point the answer is yes, but this could be because we do not have the historical data or years of theory and analysis that the other domains have. Will it continue to be less predictable after experience and study?
I don’t have a position on this yet, but the question is interesting and important. I believe it is safe to say that decision makers on war activities are much less likely to rely on or use a weapon or strategy that is unpredictable.
Looking at the ICSsec world we live in, our experience indicates that we could create and use an ICS cyber weapon with very predictable results. There are counterexamples of cyber weapons where the results are less predictable. However, I imagine this would be true of weapons in the other four domains as well.
The other point in the quote worth considering is “cyber war/fare is significantly less revolutionary than it is often presented”. Thomas Rid indirectly takes this approach in his book Cyber War Will Not Take Place, but he is talking about a cyber war, not cyber weapons or cyber warfare. With the level of “cyber” in the weapons in the other four domains, it would seem that it is revolutionary.
One last related point and question … should militaries be creating a Cyber Force or integrating Cyber into the existing Army, Navy and Air Force organizations or both?
More questions and ramblings than answers or firm opinions in this post, but these are important topics and ideas and more of the reason why we added the ICSage: ICS Cyber Weapons day to S4.
Dragos Security founders Matt Luallen and Robert Lee announced their first product: CyberLens. CyberLens enables the passive discovery and identification of cyber assets on a network. I asked, and Robert answered in a twitter discussion, what makes CyberLens different than Tenable’s PVS and other solutions. The challenge products like Sophia and CyberLens face is: are the ICS intelligence advantages enough to warrant selecting a solution that is less complete, less proven and less likely to survive?
On a related note, the kerfuffle between Corey Thuen (Southfork Security) and INL on Sophia must have eased a bit as Corey is the guest presenter at the ICSJWG Webinar I Think, Therefore I Fuzz on March 27th. I couldn’t find a registration link on the ICSJWG site.
Continuing on disclosure, Jake Brodsky over on SCADASEC tells a story of finding a “wide open” FTP server at “a small controls firm that does ICS application software programming”. “It had correspondence regarding various ongoing projects with utility plant upgrades. It had application programs. It had drawings. It had spreadsheets of I/O maps and descriptions.” So they called DHS, who called the firm, and now there is a password on the FTP server. I’m sure loyal readers know that this is not enough. My question … has the firm notified their customers that sensitive data was Internet exposed for years? If not, are Jake, DHS and the firm practicing “responsible” or even “coordinated” disclosure? Don’t answer that; it was to prove a point. Those words have always been subjective and ring hollow to me. And this is more evidence that disclosure is not worth the discussion because whoever finds the vuln will do what they choose to do.
The Japanese government recently held a cyber exercise. According to JapanToday, “Some 50 cyber-defense specialists gathered at an emergency response center in Tokyo, with at least three times that many offsite, to defend against a simulated attack across 21 state ministries and agencies and 10 industry associations.”
We’ve covered some of the main points of the Mining Malware project, but haven’t gotten to the real meat of the discussion: what would a search for automation software look like, and would it even be successful? To demonstrate this, I’m going to start with a small example, and then explain the issues with scaling it up to the amount of malware we currently have in our system. This demonstration is definitely cherry picking; I’m picking a set of conditions to demonstrate how a search could work, and discussing the various issues with scaling.
To start, I pulled down the APT1 dataset from the @VXShare torrents. I figured this would be a good place to start, as the set is reasonably familiar to everyone (I even found an easter egg in it). Secondly, we have to take a step outside of automation at this point, as the APT1 dataset (to our knowledge) didn’t target automation systems specifically; it was designed to gather information from infected systems and transport it to the attackers.
I used a similar process of gathering string data from the VirusShare torrent, which I discussed in a previous post. I parsed plaintext out of the 293 items in the APT1 dataset, and saved them away for search.
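The extraction step itself is nothing exotic. A minimal Python sketch of what “parsing plaintext out of” a set of samples can look like is below; the function names and the 4-character minimum are my choices for illustration, not the tooling used in the earlier post.

```python
import re
from pathlib import Path

# Match runs of 4+ printable ASCII characters, like the Unix `strings` tool.
PRINTABLE_RUN = re.compile(rb"[\x20-\x7e]{4,}")

def extract_strings(data: bytes) -> list[str]:
    """Pull printable ASCII runs out of a binary blob."""
    return [m.group().decode("ascii") for m in PRINTABLE_RUN.finditer(data)]

def index_samples(sample_dir: str) -> dict[str, list[str]]:
    """Build a sample-name -> extracted-strings index, saved for search."""
    index = {}
    for path in Path(sample_dir).iterdir():
        if path.is_file():
            index[path.name] = extract_strings(path.read_bytes())
    return index
```

Running `index_samples` over the 293-file APT1 directory gives you the searchable plaintext set described above.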
So, hypothetical time: Let’s assume you’re an automation company that is concerned about their product being used by malware authors to do bad things. This could be through either direct usage, or simply using the name of the DLL to camouflage malicious software. You come up with a list of DLLs and maybe other unique information used in the product, and use a “hypothetical” service (not software, a subscription service) to watch incoming malware for signs of your product. In this case, I’m going to call this company the “Lotus Notes” company, and I’m going to alert on a small subset of the DLLs “Lotus Notes” uses. The DLLs I’m looking for are lcppn30i.dll and nNotes.dll.
I load up APT1 plaintext, and search for those DLLs, getting a single hit on the nNotes.dll in sample VirusShare_94a59ce0fadf84f6efa10fe7d5ee3a03. Now, as a representative of Lotus Notes, I would want to know why my DLL is showing up in a malware sample, and would pull up the 94a59c sample (in this case, in Virustotal). Looking over the various sections, I can see that this sample is identified as malware by 18/46 antivirus products, and the nNotes.dll is shown in the PE imports to the right. User comments flag it as part of an APT1 compromise, and the original filename of the sample is ‘nsfdump.exe‘ (apparently, a tool to dump out Lotus Notes data).
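The “hypothetical service” in this scenario reduces to a very small search loop over the extracted strings. A sketch follows; the case-insensitive substring matching is my assumption (PE import names vary in case), and the function name is made up for illustration.

```python
def search_watchlist(index: dict, watchlist: list) -> dict:
    """Return {sample_name: [matched terms]} for every sample whose
    extracted strings contain any watch-list term, case-insensitively."""
    watch = [w.lower() for w in watchlist]
    hits = {}
    for sample, strings in index.items():
        blob = "\n".join(strings).lower()
        matched = [w for w in watch if w in blob]
        if matched:
            hits[sample] = matched
    return hits
```

Against the APT1 plaintext, a watch list of `["lcppn30i.dll", "nNotes.dll"]` would surface the single `nNotes.dll` hit described above.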
Sounds impressive, right? Simply put all your details into a search set and let an automated malware sampling system do all the work of finding instances for you? Not so fast. This is a contrived example: I’m working with an extremely small malware sample set, and I’ve cherry picked my DLLs as well. In the real world, I spent about 4 days writing the Malware Mining set of blog posts. In that amount of time, VirusTotal received 445,000 distinct new files, orders and orders of magnitude over my example of 293. I currently don’t have access to the Private API of VirusTotal, or I would search the PE imports for nNotes.dll, but I did some google-fu against the indexed portions of VirusTotal and came up with 176 unique matches, likely a much smaller number than a real VirusTotal search would show.
So what’s the point I’m trying to make? While it’s likely that someone can search the dataset and retrieve matches for specific strings within malware, the amount of positives, or false positives, is going to make detailed analysis an ugly experience. What’s needed is specificity in generating the searches so that as many false positives are left out as possible, and that there are tools available to quickly and easily drill down into already searched datasets to help create more accurate searches.
For example, instead of using DLL names, a good search would attempt to use CLSIDs. CLSIDs are unique, and will be far more accurate when searching the datasets. Using hashes of specific files is far more effective, though the intent is to detect malware making use of files that may not be hashed. I wouldn’t mind seeing a capability that could search a filename and a hash, and let you search for all filenames that did NOT have a specific hash (this would rule out legitimate uses, leaving the illegitimate).
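That filename-minus-known-hash idea is easy to express directly. The sketch below assumes an inventory of (filename, sha256) pairs observed in malware samples; the data shape and helper name are hypothetical.

```python
def suspicious_copies(observed: list, known_hashes: list, filename: str) -> list:
    """From (filename, sha256) pairs observed in malware samples, keep
    the ones claiming a given filename whose hash is NOT a known-good
    hash for that file -- likely camouflage rather than legitimate use."""
    good = {h.lower() for h in known_hashes}
    return [
        (name, digest)
        for name, digest in observed
        if name.lower() == filename.lower() and digest.lower() not in good
    ]
```

A vendor publishing the real hashes of its shipped DLLs could run this over incoming samples and only review the copies that don’t match anything it ever released.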
I provided an initial automation search set on a particular product to an interested third party last year. The individual wished to use the sample to search their malware samples, and I recently got feedback on how the search went. The response was millions of hits, most likely false positives due to the ambiguity in my search set, which didn’t consist of the unique CLSIDs and Hashes. The problem here is a big data problem.
Being able to mine malware data effectively is the difference between simply detecting malware and actually analyzing malware to look for trends, vulnerabilities, and new attacks. Additionally, the malware authors have started catching wise to static analysis techniques, and are branching out in different ways. One of the more interesting methods to counteract this is to run the malware in a sandbox for a small amount of time, an approach recently adopted by VirusTotal, and also shown (in far greater detail) on MalWr.com.
Monzy Merza of Splunk had a S4x14 defensive session. Working with an actual, deployed Building Management System (BMS), Monzy wrote python scripts to export the data from the BMS to Splunk for analysis. He focused solely on what could be detected from info logged by the BMS.
The BMS was known vulnerable in the general sense that BACnet is an insecure protocol and specific sense in that Rios/McCorkle had found vulnerabilities in the Tridium Niagara AX.
Once the data was in Splunk, Monzy showed examples of how anomalies that could be cyber attacks could be detected in the data. The examples are specific to a BMS and should provide hints to anyone looking for attack detection in an ICS.
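To give a sense of what such a detection can look like, here is a simple Python sketch of one anomaly check: setpoint changes logged outside working hours. The event format, field names and hours threshold are entirely hypothetical, not Monzy’s actual Splunk searches.

```python
from datetime import datetime

def after_hours_setpoint_changes(events: list, start_hour: int = 7,
                                 end_hour: int = 19) -> list:
    """Flag BMS setpoint-change events logged outside working hours.

    Each event is assumed to be a dict like:
      {"time": "2014-03-20 02:14:00", "action": "setpoint_change",
       "point": "AHU1_SupplyTemp", "value": 55.0}
    """
    flagged = []
    for ev in events:
        if ev.get("action") != "setpoint_change":
            continue
        ts = datetime.strptime(ev["time"], "%Y-%m-%d %H:%M:%S")
        if not (start_hour <= ts.hour < end_hour):
            flagged.append(ev)  # a 2 AM setpoint change deserves a look
    return flagged
```

The same pattern, filter on an action type and then apply a time or value threshold, covers most of the anomaly classes a BMS log supports.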
The big news of the week is Industrial Defender will be acquired by Lockheed Martin. Terms of the acquisition were not disclosed; it would be very interesting to know how an ICSsec product is valued in the market. Industrial Defender, formerly known as Verano, was one of the earliest entrants into ICS security products and services. Over the past decade they have gone through funding rounds and bounced between strategies of products, managed services, consulting and back again. In recent years they greatly improved their marketing efforts and focused on their Automation Systems Manager (ASM). A major achievement was the partnerships with ABB, Elster, Itron and Schneider Electric around the ASM. While the partnerships are important and valuable, the key to the success of the acquisition will be how well the ASM was developed for expansion and long term support.
The New Zealand National Cyber Security Centre (NCSC) released a set of Voluntary Cyber Security Standards for ICS. They are based on the US NERC CIP standards, and anyone familiar with CIP will recognize the organization, formatting and text. Many would say NERC CIP is a bad set of documents to follow, but the key difference is the NZ documents are voluntary. Originally the CIP standards were going to be voluntary, and they actually do a good job of covering an ICS security program. It would likely be quite effective if an asset owner used NERC CIP or the NZ documents as a means to putting together a security program. It was the shift to using the NERC CIP standards as regulations where the good work started to fall apart, in my opinion.
FERC ordered NERC to develop and submit a standard “to address physical security risks and vulnerabilities related to the reliable operation of the Bulk-Power System” … and NERC needs to do this in the almost impossible time period of 90 days. As usual Tom Aldrich covers the issue well. This is the FERC response to the shooting of the PG&E substation.