The term Protocol Differential Analysis has yet to make Google as an infosec technique. I first heard the term from esSOBi at Indianapolis’ Circle City Con. I first encountered the trick, though, in a research lab a few years before: a colleague there wrote a quick-and-dirty tool to help analyze a very, very bizarre serial protocol.
The problem, briefly explained, is this: as an attacker, we want to find out what interesting packets are in a conversation between a controller and its engineering software. For example, we want to find out what packet represents the ‘Stop CPU’ command in a proprietary protocol. Since the protocol is undocumented, we are left either reverse engineering the master application, which can be extremely time-consuming, or analyzing the protocol stream itself to find the interesting packets.
Protocol analysis is often the easier path. Unfortunately, industrial proprietary protocols are extremely ‘chatty.’ Based upon the classic industrial poll-response model, the protocols may be sending tens or even hundreds of packets per second back and forth between the PC software and the industrial controller. By the time we interact with the software and click the ‘Stop CPU’ button on a graphical interface, we may have thousands of packets to dig through. We want to find the packets that are interesting, but end up wading in a river, looking for the raindrop that holds the key to an attack.
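The diffing trick itself is simple enough to sketch in a few lines of Python. Everything below is hypothetical (the helper name, the captures, the payload bytes); a real tool would also mask variable fields such as sequence numbers and checksums before comparing:

```python
from collections import Counter

def differential_payloads(baseline, action):
    """Return payloads seen in the action capture but never at idle."""
    idle = Counter(baseline)   # routine poll/response chatter
    seen = Counter(action)     # chatter plus the interesting click
    return [p for p in seen if p not in idle]

# Hundreds of identical poll/response payloads captured before touching
# the GUI, then a capture taken while clicking the 'Stop CPU' button.
idle_capture = [b"\x01\x03\x00\x00\x00\x0a"] * 500
action_capture = idle_capture[:100] + [b"\x01\x5a\x00\x29"] + idle_capture[100:]

print(differential_payloads(idle_capture, action_capture))
```

The single payload that survives the diff is the candidate ‘Stop CPU’ packet, pulled out of thousands of routine polls.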
Wurldtech announced the Achilles Industrial Firewall. It was hard to understand why GE purchased Wurldtech for its protocol testing alone, but if GE was also buying this product it begins to make sense. Pricing for the perimeter model starts at $30K and the field model at $6K. This is significantly more than competitor products, not to mention non-industrial firewalls that cost about 1/10 as much. The first release has some deep packet inspection for Modbus, DNP3 and OPC Classic; we are awaiting more details on this.
Mandiant announced an ICS Gap Assessment service. Not a lot of detail, and not a big surprise given they had hired a handful of experts. Still, my guess is this is a sidelight to the main goal of adding ICS expertise to the incident response service that Mandiant is known for. Many of the largest companies in the US and the world own and operate ICS.
This week was the semi-annual, fall meeting of DHS’s ICSJWG in Idaho Falls, ID. There were between 140 and 160 attendees with half attending for the first time. Spy reports say the agenda was solid, but not much new from past events. It’s a reasonable free event for newcomers to ICSsec to attend, and there is probably a place for that.
S4x15 registration will open at noon EDT on October 23rd. Registering early will not only guarantee you a spot at the event, it will also save you some money.
We have kept the price for the two-day S4 event at $995 since the first S4 in 2007. We even added a third day, Operations Technology Day (OTDay), last year and kept the $995 price. This year there is a small price increase … unless you are in the first group of registrants.
We will be selling tickets for the main two-day S4x15 sessions in blocks of 50:
- First 50 tickets will cost $995
- Tickets 51-100 will cost $1,095
- Tickets 101-150 will cost $1,195
- Tickets 151-capacity will cost $1,295
All tickets will include OTDay on Tuesday at no extra charge. The Friday events for S4 Week (ICSage: ICS Cyber Weapons conference or advanced ICS security training) will cost $600 as in past years.
All this may be a bit confusing, but it will be clear on the registration site. The key point: if you are a humble researcher, independent consultant or student, or work for a company where training funds are hard to get, register early. Capacity is 190 attendees, but that includes about 20 speakers who get in for free.
The agenda is both great and novel this year. I’ll write up a blog next Monday on the theme and characterize the talks. If you are tired of the same old talks at other ICS security events, you will love this agenda.
We are full up on the 30 and 45 minute sessions on the S4x15 agenda, except for saving a space for some late breaking, amazing research. We are still looking for two or three 15 minute sessions with very fresh content.
OTDay will have two full tracks this year. One track is already full, with a number of potential sessions being worked on or recruited for the second track. The challenge is we are not allowing vendor sessions at OTDay. Instead we are getting owner/operators to discuss what worked, lessons learned and practical applications of OT. OT is more than security; it is how to deploy and maintain a robust, mission critical system. So vendors, find a good customer; and asset owners, here is a way to attend S4 Week for free.
The agenda for ICSage: ICS Cyber Weapons is about 75% complete. I’m really pleased with how this event has matured in its second year. I actually believe that ICSage sessions will generate the most news in S4 Week. For the last 25% we are hunting for historical, economic, political-theory and other non-technical sessions.
The US Food and Drug Administration (FDA) published Content of Premarket Submissions for Management of Cybersecurity in Medical Devices. We haven’t had time to read it yet, but take a look at Patrick Coyle’s analysis. Pull quote, “Interestingly, in this section the FDA specifically abdicates responsibility for cybersecurity system updates, noting that: ‘The FDA typically will not need to review or approve medical device software changes made solely to strengthen cybersecurity.’”
Oops. Bloomberg reporter Jordan Robertson, who has written good articles on ICSsec, was led astray on ICS honeypot data by ThreatStream. Chattanooga appearing so high on the list should have been a red flag. This is a great cautionary tale with CSO covering the analysis flaws. ThreatStream made matters worse with “The scans were on tcp port 102 and the requests were mostly protocol compliant. Siemens utilizes port 102 … We are not familiar with other services that use this port.” ICCP, other iso-tsap …
Bob Radvanovsky and the Project Shine team have posted a paper showing the results of their search for Internet connected ICS devices. Great work by this volunteer team. It raised awareness, prompting a lot of asset owners to look for and pull these connections. It may also have encouraged John Matherly to add ICS scanning capabilities to Shodan, which now moves so fast that it has integrated and scanned for new ICS devices within days of a Project Redpoint release.
If you want more on Internet connected PLC’s, read Distinguishing Internet-Facing ICS Devices Using PLC Programming Information by Paul Williams at AFIT.
Stephen Hilt’s presentation from DerbyCon on Project Redpoint is up on YouTube.
On October 11th Altamira is running a CTF called Scram Hackathon 2.0. The goal is to cause a nuclear power plant scram (emergency shutdown). (ht: Paul Asadoorian’s Security Weekly)
A near complete agenda is now up for the ICS Cyber Security Conference, Oct 20-23 in Atlanta, GA. Can we call it WeissCon for one more year even though Joe sold the event?
ISA99 Co-Chair Eric Cosman put together all of the work the committee has done on ICS cyber security. Eric wrote “the sum-total of our work to date, weighs in at slightly less than 900 letter sized pages, with a file size of just over 20MB.”
SSI Software and Technology acquired 60% of S21sec. S21sec is one of the largest ICS security consultancies in Spain, and perhaps in Europe. Schneider Electric is also a minority shareholder.
ARC Advisory Group continues to promote anytime, anywhere, any device control of an ICS. The latest is in their work with/for ICONICS mobile app. “While this is largely driven by the new Millennial generation of workers, most stakeholders are beginning to embrace smartphones, tablets, ‘phablets,’ and other mobile devices to access manufacturing processes, information and intelligence at any time from any location with wireless or cellular access.”
ICS-CERT published an advisory on web server vulnerabilities in Schneider Electric PLC’s including Quantums, Momentums, TSX and other Modicon models. It is a near perfect example of what is wrong with DHS and PLC vendors and, in a way, the ICSsec community for letting this decade-long farce continue.
The PLC’s affected by the advisory are insecure by design. The directory traversal vulnerability is just the cherry on top. The best example is that most of these models rely on Modbus function code 90, which we just covered in the new Redpoint script this week.
Reid released three Modicon Metasploit modules in Feb 2012 with the best demonstration of the insecure by design, function code 90 problem being the modicon_stux_transfer. This lets you upload or download logic from the Schneider Electric PLC without authentication, a la Stuxnet, by using features in the PLC. 30 months later there is no fix or even an announced plan to fix the protocol and PLCs.
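To show how little stands between network access and that capability, here is a sketch of a Modbus/TCP frame carrying function code 0x5A (90). The subfunction bytes below are placeholders rather than the real sequences modicon_stux_transfer sends; the point is that no field in the protocol carries any authentication:

```python
import struct

def modbus_tcp_frame(unit_id, function_code, data=b"", transaction_id=1):
    """Build a Modbus/TCP ADU: 7-byte MBAP header followed by the PDU."""
    pdu = struct.pack("B", function_code) + data
    # MBAP: transaction id, protocol id (always 0), remaining length, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

# Function code 0x5A (90): Schneider's proprietary catch-all code.
# The two data bytes are a hypothetical subfunction, not a real one.
frame = modbus_tcp_frame(unit_id=0, function_code=0x5A, data=b"\x00\x00")
print(frame.hex())
```

Sending the real subfunction sequences over a TCP connection to port 502 is all the ‘exploit’ amounts to, which is why this is insecure by design rather than a bug.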
I have never heard or read anything from Schneider on how they plan to address this or other core security problems highlighted in Project Basecamp. They appear to be examples of Reid’s Forever Days™.
Even if you apply the firmware update for the web server vuln, your PLC is still insecure with the Modbus standard and Schneider proprietary function codes and other features. No offense to Billy Rios and the good work he does finding ICS vulns, but a web vulnerability does little to increase the risk when the main control protocol is purposely designed to allow anyone with access to affect the availability and integrity of the product in any way they can think of.
Why is the vendor bothering to address “vulnerabilities” when they provide an attacker with all the features required to attack the device? For marketing reasons? Because they don’t know better? Because DHS says insecure by design is not a vulnerability?
And then there is ICS-CERT … which is much more than a CERT. ICS-CERT is the brand DHS uses to market all of their ICS cyber security activities.
I am still waiting for DHS to state that insecure by design protocols and devices in the critical infrastructure need to be upgraded or replaced. crickets
There is a section in most ICS-CERT Alerts and Advisories where this would fit in perfectly. Right after “ICS-CERT encourages asset owners to take additional defensive measures to protect against this and other cybersecurity risks.”
Instead ICS-CERT/DHS processes all vulnerability reports like a clerk processing paperwork. Everything is handled exactly the same way, with the goal of moving something from the inbox to the outbox. We need a leader, not a clerk, in a position that could have so much influence on the ICS community.
Why is DHS bothering with this type of vulnerability that does not markedly affect risk whether it exists or is patched and not mentioning the serious problems? Because it is how they convince their bosses in the Executive Branch, Congress, reporters and industry leaders who can be fooled that they are doing a good job. DHS shows them stats for number of Alerts and Advisories, number of fly away response trips, CSET interviews, and other busy work.
It is disconcerting when you talk to the leaders that DHS should be rallying to address the problem. They will tell you that DHS is doing a great job, yet they have no idea that most of the critical infrastructure relies solely on an easily penetrated cyber and physical security perimeter for protection, and that once inside that perimeter the critical infrastructure has less security than your ATM card. In fact they will argue with you that the ICS protocol and device insecurity issue cannot be true.
I will admit, reluctantly, that Project Basecamp has failed; a full post on this subject is forthcoming. It has not instigated any significant move to upgrade or replace insecure by design PLC’s in the critical infrastructure. Siemens has made some improvements in the S7-1500, and the GE D20-MX at least now has the processing power that could handle security. SEL continues to introduce new security features into its substation equipment, and some of it can no longer be called insecure by design. These are the exceptions, and this latest Schneider/ICS-CERT fiasco is the final nail in the Project Basecamp coffin.
The funny thing is there was some hysteria, and we received a fair amount of pressure from vendors and others, after Project Basecamp. The main argument of the detractors was: how could we give the attackers such an advantage and not work with the vendors to fix the problem? In reality, I think most, but not all, of the objection was related to a fear that this would lead to exploits that then might force asset owners, vendors and DHS to face the silently accepted risk.
Let’s just keep it all quiet, in the ICS community, and deal with it in the next decade.
Since little has happened the only logical conclusion is the ICS community chooses to continue to accept the risk of relying solely, or at least primarily, on the security perimeter.
This admission of Basecamp failure doesn’t mean that we will stop raising the issue and educating C-level executives and key leaders to the risk they are accepting whenever we can. There are some leaders in the ICS space that are pushing vendors for DNP3 SA and other secure solutions, and some ICSsec people in the vendor organizations pushing the issue. We have even seen some SCADA Apologist conversions. I hope loyal readers will make their voice heard as well.
The folders that ICS applications are installed in are usually configured as exclusions to anti-virus scanning.
In some cases, the almost constant updating of the ICS data files leads to unacceptable performance if subjected to anti-virus protection. In other cases the vendor chose to avoid a potential, yet unseen problem.
To make this problem worse, the permissions on the ICS application folders are typically far from least privilege. Full Control for Everyone is not unusual for a default install. Folder permissions were an area where we spent a lot of time with ICS vendors while developing the Bandolier Security Audit Files. They can be locked down, but rarely are.
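On a POSIX host the spirit of that audit check fits in a few lines; the install path below is hypothetical, and on Windows the equivalent would be reading the folder ACLs (for example with icacls) to spot an ‘Everyone: Full Control’ entry:

```python
import os
import stat

def world_writable(root):
    """Yield files and directories under root that any local user can modify."""
    for dirpath, dirnames, filenames in os.walk(root):
        for entry in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            # S_IWOTH set means 'other' users have write permission
            if os.stat(entry).st_mode & stat.S_IWOTH:
                yield entry

# Hypothetical ICS application install folder
for path in world_writable("/opt/ics_app"):
    print("loose permissions:", path)
```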
We have not yet seen mass market malware seek out the ICS application folders that are typically excluded from anti-virus scanning. However, a directed attacker might put his malware in these folders to avoid detection by a future signature.
Our Stephen Hilt released another Project Redpoint script as part of his DerbyCon presentation on Sunday. Modicon-info.nse identifies PLC’s and other Schneider Electric/Modicon devices on the network and then enumerates each device.
The script pulls information that would be helpful in maintaining an inventory as well as assessing the security status of the device, such as types of Ethernet, CPU modules and memory cards as well as the firmware version.
My favorite part of the script is the Project Information field. Here you will see the name of the Project, the version number of Unity Pro (the engineering workstation software) that was used to load the Project, and often where on the engineering workstation the Project file is stored.
This script is possible due to Schneider Electric’s proprietary and magic function code 90. This is the same function code that Reid used in the modicon_stux_transfer metasploit module to upload and download logic to the PLC a la Stuxnet. You can do almost anything you want to the Schneider PLC’s through this unauthenticated, insecure by design function code 90.
The script begins by using function code 43 to identify whether there is a Modbus device at the targeted IP address. Schneider, unlike many vendors, supports function code 43 and will return some variant of ‘Schneider’ in the response message. I should note that even if the Modbus device being queried does not support function code 43, the response can still confirm it is a Modbus device.
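A rough sketch of that opening probe, assuming the standard Read Device Identification encoding (function 0x2B with MEI type 0x0E, basic category). The sample response bytes are hand-assembled for illustration, not captured from a real Modicon:

```python
import struct

def read_device_id_request(transaction_id=1, unit_id=0xFF):
    """Modbus/TCP request for function 43 / MEI type 14, basic category."""
    pdu = bytes([0x2B, 0x0E, 0x01, 0x00])  # FC, MEI type, read code, object id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

def vendor_name(pdu):
    """Pull object 0x00 (VendorName) from a device identification response PDU."""
    # header: FC, MEI, read code, conformity, more-follows, next id, object count
    count, offset = pdu[6], 7
    for _ in range(count):
        obj_id, obj_len = pdu[offset], pdu[offset + 1]
        if obj_id == 0x00:
            return pdu[offset + 2: offset + 2 + obj_len].decode("ascii", "replace")
        offset += 2 + obj_len
    return None

# Hand-built response PDU carrying a single VendorName object
sample = bytes([0x2B, 0x0E, 0x01, 0x01, 0x00, 0x00, 0x01, 0x00, 0x09]) + b"Schneider"
print(vendor_name(sample))
```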
We had an internal discussion on whether the script should include all identified Modbus devices, but decided to only report on Schneider/Modicon devices since there were already Modbus detection capabilities in Nmap. If we can pull additional useful information there may be a generic Modbus script in the Redpoint rack in the future.
There is a truism in information security, and it is that everything will eventually be found to be vulnerable.
I believe the lesson here should be: plan to patch. It is tragically common in the embedded device space that vendors don’t take this advice. There is still an awful lot of embedded industrial control systems equipment being manufactured today that has no way to even apply an update.
Today’s big news story in the infosec space is the ‘Bash bug’. In a nutshell, the Bash bug is a mistake in how bash parses function definitions passed in environment variables. A lot of embedded industrial control components will end up being affected. Basically, any industrial control system that runs embedded Linux and features a protocol that ends up calling GNU utilities will likely be vulnerable. Primarily the vulnerability will affect webservers that allow configuring and reading interesting information from a device, and protocols such as CoDeSys, which for some vendors’ products may end up calling other applications through a shell.
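The standard probe for the bug is a one-liner; below is a minimal Python wrapper around it. The bash path is an assumption, and the probe should return False on a patched bash:

```python
import os
import subprocess

def bash_is_vulnerable(bash="/bin/bash"):
    """Classic Shellshock probe: smuggle trailing commands after a
    function definition carried in an environment variable."""
    env = dict(os.environ, x="() { :;}; echo SHELLSHOCKED")
    out = subprocess.run([bash, "-c", "echo probe"],
                         env=env, capture_output=True, text=True)
    # Vulnerable bash executes the echo while importing the env variable
    return "SHELLSHOCKED" in out.stdout

print(bash_is_vulnerable())
```

The same trick works wherever an attacker controls an environment variable that reaches bash, which is exactly what CGI does with HTTP headers.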
Legally speaking, any control system vendor which sells a device running GNU software has to provide a notice with the device informing the end user what software is in use (and that the source code to said software is available from the vendor). Unfortunately not all vendors play nice by providing this notice. The only real way to know what is vulnerable is to test it.
Digital Bond Labs has a nice test environment with a variety of equipment in various forms of hackedness. One such device is Wago’s 758-870 series PLC. The product runs Linux and includes a version of bash that is vulnerable. It also runs an embedded webserver that executes cgi scripts (even calling execve() during some webapp command executions), so we will likely find a way to exploit the bash bug on these systems. However, this system already has documented backdoor accounts, and Wago has already decided not to produce firmware updates for this product, so exploiting the bug here really has no point.
I think the lesson we can draw from both the Bash bug and Heartbleed is simply that vendors need to consider security upgrades in their product design. Bugs such as the Bash bug provide a potential way to gain command-line access to some of these embedded systems. This access may be the only thing preventing unauthorized access to, or even unauthorized cloning of, a vendor’s product. Vendors owe it to themselves to protect their intellectual property, and owe it to customers to provide patches when the inevitable happens.
Be sure that whatever product you are rolling out to your control systems environment can at least have an upgrade applied. Worrying about when you can apply a patch is a much better problem than worrying whether your IDS/IPS rule can be evaded because the patch will never come.
The clock is ticking to get your session proposal in for S4x15 Week. Take a look at the full CFP and get it in by October 1.
We don’t just wait for the CFP responses. We actively chase down researchers and topics. So if you see something that is S4-worthy please send us an email.
I’ll take it a step further. If you have any idea for a S4 session, a Great Debate topic, onstage interviewee or proven good practice for OTDay, send us the idea and we will find the right speaker.
One other S4x15 Week note … we will have a slight increase in prices, our first since 2007. So the best way to get in for free is to present a great session. We also will have group pricing and the first 50 registrants will see no price increase. More on registration on October 1 after we finalize the agenda.
David Perera of Politico released a good article yesterday on the difficulty of taking out the electric grid. Unfortunately the headline writers missed the mark with “US Grid Safe From Large Scale Attack, Experts Say”, and it is difficult to make two very different points in one mainstream press article. Let me try with our ICS security focused audience.
Point 1 – Taking Down An ICS Doesn’t Necessarily Cause A Catastrophe
The article did a good job of capturing this point, but it is broader than the electric grid.
- Some ICS will continue to run just fine if large portions of the control systems are lost, particularly servers and workstations.
- There are often safety systems to prevent really bad things from happening. Admittedly the quality of implementation of these safety systems varies a great deal.
- Some of the safety measures cannot be changed over the network or even serial connections.
The skilled offensive cyber adversary / hacker will likely take control of the insecure by design and fragile ICS if he has network access, and he will be able to take all or part of the ICS down. The Operations Group will not be able to use the ICS to monitor and control the physical system. The impact of this will vary by sector and system.
Take down some electric distribution SCADA systems and there will be a delay in knowing about an outage. Take down a pipeline leak detection system, and they will likely shut down the pipeline in a few hours. Take down a gas or electric meter reading SCADA, and they will estimate bills and perhaps send people out for a manual read. Take down a turbine control system and that unit in the plant will likely not generate power until it is fixed. Take down a food manufacturing plant control system and some will run on manual operations, while others will be shut down.
The key point that David’s article captured is that just because the ICS that run generation and transmission in the power grid are insecure by design and fragile, it does not mean that even a skilled hacker or researcher can cause a widespread blackout.
Point 2 – The US Grid & Other Critical Infrastructure Are Definitely Not Safe From The Right Team Of Attackers
With the addition of ICSage: ICS Cyber Weapons to S4 Week we have been thinking a lot about nation state or well funded offensive security teams going after critical infrastructure ICS. We believe it would consist of:
- A “Hacker”. Actually the easiest job, as Dillon Beresford, Project Basecamp and others have demonstrated.
- An Engineer. They need to understand the process or system that is being attacked, and determine what would cause the damage they desire. This could be expensive, hard to replace physical equipment damage that would cause a long term outage. Release of materials harmful to people and the environment. Damage to reputation. Or something subtle like Stuxnet that causes a maintenance or equipment failure issue that is costly, difficult to diagnose and saps confidence in the process.
- An Automation Expert. Once the Engineer has determined what should be done, and the Hacker has provided access to the ICS, the Automation Expert has to write the logic to implement the attack. This could be logic in a PLC, changed displays, database changes, or a variety of other ICS modifications. This is a real challenge since the Automation Expert likely cannot simulate the process completely. This may have been the most impressive aspect of Stuxnet.
I’m seeing a major shift that started at S4x14 and is continuing at S4x15 to the engineering and automation aspects of attacking and defending ICS. S4x13 showed exploit after exploit of vulnerable ICS components. The leading researchers have moved beyond that and are now looking at what to do with the owned ICS and how to defend against the really bad things a skilled attack team would want to do.
What David’s article probably couldn’t tackle is the somewhat conflicting pair of ideas that while a highly skilled hacker or researcher likely couldn’t cause a catastrophic impact to a critical infrastructure ICS, the electric grid and other critical infrastructure are highly vulnerable to a talented and motivated team with the right mix of skills.
The vaunted safety systems often have holes in them, and the people on site can usually tell you how they would cause long term damage to the physical system. Just a couple of examples:
- Safety systems are often implemented in safety PLC’s. These are your typical insecure by design PLC’s with extra redundancy. And there has been a push for years now by some vendors to integrate the control and safety systems. Change the safety logic and it will either stop the process when it shouldn’t or fail to stop the really bad things from happening.
- One of my favorite examples is vibration monitoring. This is often a separate system or application, such as Bently Nevada. It can be configured to trip a turbine or some other physical system if vibration reaches a certain level. Simply change the trip value, set it to a constant value, change the scale, … and it doesn’t provide the proper safety function.
- Or the safety system was designed to stop problems that have been seen from equipment failure or human error, but the designers never considered what an active attacker would do. This is why efforts to apply the ICS safety approach to ICS security have never worked.
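In miniature, with entirely made-up numbers, the vibration monitoring example above looks like this; each one-line tamper leaves the safety check running but useless:

```python
TRIP_MILS = 5.0  # hypothetical trip setpoint, peak-to-peak vibration

def should_trip(reading_mils, trip_setpoint=TRIP_MILS):
    """Trip the turbine when measured vibration reaches the setpoint."""
    return reading_mils >= trip_setpoint

print(should_trip(6.2))                      # healthy system: trips
print(should_trip(6.2, trip_setpoint=50.0))  # attacker raised setpoint: never trips
print(should_trip(1.1))                      # input frozen at a low constant: never trips
```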
All that said, David did his homework and wrote a good article. Perhaps a better title would have been “Hackers Would Have A Very Difficult Time Taking Out US Power Grid”.