Who would have thought a podcast on insurance would be one of my favorite and most interesting episodes of the past few years?
Download (Duration: 53:16 — 48.8MB)
I spoke with Eireann Leverett and Jennifer Copic of the University of Cambridge Centre for Risk Studies. They were two of the researchers who helped Lloyd's put together the paper Business Blackout: The insurance implications of a cyber attack on the US power grid.
While the temptation will be great for loyal blog readers to focus on the scenario for the blackout, that is the least important part of the paper.
In the podcast we talk a lot about what types of insurance would likely cover an incident of the scenario's scale, and what factors would make a claim covered or not covered. All risks cover, silent cover, advanced or affirmative coverage, and other important terms are defined and discussed.
We also delve into how this insurance will be written given the lack of data. This is not the first time Lloyd's and others have dealt with this problem, so it is not insurmountable.
After listening to this episode multiple times I’m more convinced that cyber insurance for ICS / OT is coming. Owner/operators will want to transfer risk once a true risk management program is in place. The cybersecurity framework and other factors, such as C-levels and boards awakening to the risks they are unknowingly accepting, are beginning to drive informed risk management programs. Insurance and reinsurance companies are always looking for new and growing markets. This is important information for mid and top level management at owner/operators.
The Switches Get Stitches crew has been hard at work on quite a few switching projects. Indeed, they released a new exploit tool against GE and GarrettCom switches early this morning, after attempting, according to the team, to get a fix for a denial of service bug for at least a year.
Backdoors, authentication bypass, and lack of firmware signing are all fine and good, but we wonder: what can you really do with this kind of access?
It turns out, quite nasty things.
Most end users who are taking a strong defensive stance on their networks are deploying IDS or NSM. Feeding these systems with data requires configuring various managed switches around the network to mirror traffic from switch ports.
One of the configuration questions that comes up with NSM and IDS is: what to mirror? Most switch manufacturers document their SPAN and mirroring configuration guidelines with a small caveat: if you are mirroring more than one source port to a given destination port, you may end up dropping packets on the mirror (if, for example, all of your sources are saturating a 1Gb link, then the 1Gb mirror port cannot receive data quickly enough to see everything).
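The oversubscription caveat is easy to sanity check before committing to a mirror design. A minimal sketch (the port speeds are illustrative, not tied to any particular switch model):

```python
# Rough worst-case check for SPAN/mirror oversubscription: if the combined
# peak traffic of all mirrored source ports exceeds the destination port's
# line rate, the switch has no choice but to drop mirrored frames.
# Note: mirroring both rx and tx of a full-duplex port doubles its worst case.

def mirror_oversubscribed(source_mbps, dest_mbps):
    """Return (worst_case_mbps, can_drop) for a list of mirrored source rates."""
    worst_case = sum(source_mbps)
    return worst_case, worst_case > dest_mbps

# Three saturated 1Gb access ports mirrored to a single 1Gb SPAN port.
peak, drops = mirror_oversubscribed([1000, 1000, 1000], dest_mbps=1000)
print(peak, drops)  # 3000 Mb/s offered to a 1000 Mb/s port -> the mirror drops
```

The arithmetic is trivial, but it is exactly the trade-off you are making when you decide which ports feed the sensor.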
So when IDS and NSM systems are deployed, we often set them up like this:
The best way to get the most out of any conference is to be a speaker. At S4 you get a chance to present your great research or passionate viewpoint to an audience of advanced ICSsec pros who will get it. They might not agree, but they will get it.
So check out the CFP and send us your best ideas for the Technical Deep Dives on Stage 2 or the less technical, but more entertaining Main Stage. It’s not an academic, refereed process, so just send us your idea and we can talk about whether it would make a great S4 session.
We are also looking for a couple more 1-day classes for the Friday after the conference ends. Let us know if you have a course or a course suggestion.
I tweeted on this OSIsoft self-disclosure last week:
But I want to write a bit more, because not only is this new, or at least extremely rare, it hopefully will be an example that other vendors pick up.
There are a lot of good things about these 56 fixed vulnerabilities. Yes, good news about vulns. First, OSIsoft is looking for and fixing vulnerabilities in their legacy code, and not just when they get a report from a researcher. They have been working on their Security Development Lifecycle (SDL) for years, and presented on its successes and failures at a couple of S4 conferences.
This is great for new projects and products, but OSIsoft and most other ICS vendors are dealing with a lot of legacy code. It is a tough internal sale to convince management it is worth spending money to go through old deployed code and find and fix the bugs that lead to vulns.
OSIsoft is not alone in doing this. There are a number of vendors that have been working on this for 5+ years and are showing some great results. Still not a majority of the vendors, and not in the embedded devices, but more than a handful of vendors writing ICS software for servers and workstations have really stepped up their SDL.
It is challenging enough to convince management to find security problems in deployed code that no one is complaining about, and then to spend the money to fix the bugs that lead to vulns. This is where most ICS vendors stop. They do the dreaded silent fix: they put out a new minor release of the code, talk about a few small known bug fixes and features, but never mention that the release also fixes some important vulnerabilities.
The problem with this is owner/operators cannot make an informed decision about whether to upgrade without this information. I know SCADA lifers will say that almost no one upgrades, but it's a brave new world in ICS, and even if they choose not to upgrade it will be an informed decision whose ramifications they will need to live with.
So OSIsoft announces that there are 56 vulnerabilities fixed in the new version and even provides basic information on the CVSS scores. They rate 21 of the 56 as high, so you may want to carefully consider the upgrade decision for a PI Server, which often communicates between security zones of different trust levels.
The final bit of applause comes from the information on their customer portal, which I cannot disclose here. They do provide a bit more information on the classes of vulnerabilities, but not enough to substantially help an attacker. More importantly, this information has been out for PI customers since the release date, June 30th. Some will disagree, but personally I like the vendor giving customers who follow the support channel a two to three month head start on vulnerabilities and patches that are not yet public.
I’ll end this post with Bryan Owen of OSIsoft’s tweet thanking some of those that helped make this happen:
BlackHat and DefCon are over, and vendors are breathing sighs of relief (or, digging trenches). Let’s look at this week’s top news, according to us.
In the database world, we have two stories (a fail and a win):
– Oracle’s CSO floated a vaguely threatening blog post concerning external researchers searching for bugs in Oracle software. For most software, this is a violation of the End User License Agreement (EULA), although well-respected vendors ignore this violation when it comes to security researchers reporting security issues in their software. This is noteworthy because Oracle has made inroads into certain control systems verticals as the database of choice. Oracle quickly removed the post (which may still be read here) and issued a statement that the CSO’s attitude concerning 3rd-party testing is not in line with Oracle itself. This is hard to swallow. The opinion of a corporate executive certainly has an effect on how a company acts; otherwise the executive is truly not a ‘Chief’.
– As a foil to Oracle’s failure, OSIsoft has released an alert with bug fixes to their PI Historian. Some 56 security issues were identified and fixed in OSIsoft software. OSIsoft currently leads the ICS space in self-reporting security issues and publicizing its internal security efforts.
A handful of vehicle hacking stories follow the Vegas cons:
I’d encourage loyal readers to check out the comments on the recent OT is Mission Critical IT article. Some are better written than my original article and others highlight the problem.
Most IT departments would take “mission critical” to mean do lots of backups and patch aggressively.
This is likely true of the IT departments within Jake's organization and those of many of his peers. Most involved with ICS have probably never come in contact with a high value IT system with stringent availability, integrity and confidentiality requirements. The concept of the help desk being responsible for ICS uptime is terrifying.
In my experience, mission critical IT systems are supported very, very differently from the normal desktop environment. I believe that most of us in OT security could learn a great deal from mission critical IT.
Mission critical systems (there's a clue in the name) have specific requirements for confidentiality, availability and integrity. The idea that a mission critical system such as, say, a financial trading system can be patched aggressively is just crazy. These systems have extremely stringent availability and integrity requirements, and outages cost $MMM's (again, they are mission critical).
There are other gems in the comments, but it is clear that whether you accept that OT is Mission Critical IT depends on your exposure to real high value IT systems. We need to cross-pollinate better.
One area where you see a convergence is the control systems that monitor and control data centers.
The Tripwire team asked a number of people for 100 words on the following questions:
How does the IoT change the dynamics between IT and OT? What practical tips can you provide for working together effectively?
You can read the full set of responses in this linked article, but here was my answer:
The "OT is different than IT" fallacy stems from ICS professionals comparing OT to desktop management.
OT is mission critical IT. The areas where OT differs from highly secure and reliable mission critical IT systems are deficiencies in OT rather than differences in requirements, such as insecure by design protocols, equipment deployed incorrectly, lack of trained staff for deployed technologies, insufficient test environments, run to fail maintenance philosophy …
The looming insecurity of IoT is much more of a concern for end users than traditional ICS as long as you don’t fall for the everything must talk to everything myth.
100 words is quite constraining, so here is a bit more. I’d challenge you to ask HOW? whenever you hear IT is different than OT.
You will hear that OT is often controlling a critical process with potentially large impacts to human safety, critical services and the environment while IT is not. This is the reason driving availability and integrity requirements of the OT and does not represent a difference from how OT will meet these requirements. Many mission critical IT networks have as high or higher availability requirements than OT.
You will hear about patching, upgrades and other IT cyber maintenance activities that are performed without testing or coordination with the system users. This is true in corporate desktop management and non-critical servers. It is not the case for a mission critical IT system where the company can tell you the cost per minute of downtime.
The sad truth is the differences between mission critical IT and OT are due to the acceptance of a lack of availability, integrity and reliability by those running OT.
- No source or data integrity … loyal readers know I’ve been beating the drum that the ICS protocols lack integrity. Access = control or compromise. Mission critical IT uses secure protocols.
- Lack of trained staff for deployed technology … this is also tied to incorrectly deployed equipment. Example: the lead article of the ICS-CERT Monitor highlights the crack team being sent out to analyze “the router and switch configurations and found an error in how the spanning-tree protocol”. For years OT has been deploying Domain Controllers without trained Domain Admins and Cisco infrastructure without even basic CCNA Routing and Switching knowledge. Now sophisticated virtual environments are being deployed with the ICS vendors stating “don’t worry, you won’t need to touch it after it’s installed”. Mission critical IT insists on the right skill sets.
- Lack of a test environment … not all OT fails in this, but it is still uncommon to see a realistic test environment for OT. Mission critical IT will have a test environment and a regression test for functionality before changes are moved to production.
- Recovery … mission critical IT has well designed and tested recovery capability and can state with a high degree of confidence how long it will take to recover. OT relies heavily on redundancy that is of little help in a cyber incident.
- Patches break applications … this actually happens in both OT and mission critical IT, but that is why both need to test thoroughly before deployment. The biggest difference is the mission critical IT vendor expects to need to patch and usually follows the OS and other vendor instructions on how to design apps. I’ll admit this is an area where many ICS vendors have greatly improved. Patch incompatibility is markedly decreased over the past 5 years.
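To make the first bullet concrete: most ICS protocols will act on any well-formed packet, while mission critical IT protects commands with cryptographic source and data integrity. A minimal sketch of the difference using an HMAC (the command format and pre-shared key are hypothetical, purely for illustration):

```python
import hashlib
import hmac

SHARED_KEY = b"example-pre-shared-key"  # hypothetical key for illustration

def sign(command: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can verify source and integrity."""
    return command + hmac.new(SHARED_KEY, command, hashlib.sha256).digest()

def verify(frame: bytes) -> bytes:
    """Return the command only if the tag checks out; reject tampered frames."""
    command, tag = frame[:-32], frame[-32:]
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return command

frame = sign(b"WRITE coil 17 ON")
assert verify(frame) == b"WRITE coil 17 ON"   # legitimate command accepted

tampered = frame[:11] + b"99" + frame[13:]    # attacker changes the coil number
# verify(tampered) now raises ValueError instead of the command being acted on
```

An insecure by design protocol skips the verify step entirely, which is why access equals control.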
I could go on and on, but the most important item is that OT all too often has a run to fail maintenance philosophy. Install it and don't touch it unless it stops working. This results in increasing fragility and lowered reliability that would not be accepted in mission critical IT.
I know that not all mission critical IT operations are perfect, but they are far ahead of OT.
We have worked with clients that rely on IT for OT, others that share the responsibilities, and even some Operations Groups that basically create a mission critical IT group. Any of these will work, but the key is you need the trained team with sufficient time and processes to run your OT like a mission critical IT system. The amount of money spent to run and maintain a mission critical IT system dwarfs what you see for even very critical OT systems. If your OT is so important to the organization it should not be difficult to get the appropriate funding.
Can you think of a single process or control in mission critical IT that would not apply to the IP or Ethernet portion of OT?
A failing grade
When reading CERT advisories in the ICS space I used to skim to the CVSS score as a quick way to assess what the vuln was. I rarely like what I see when I think about the actual vulnerability to which the score is applied.
CVSS, or the Common Vulnerability Scoring System, is meant to provide an abbreviated summary of a vulnerability. Chiefly, it is a means to quickly convey how serious a vulnerability is by showing both how easy a vulnerability is to exploit, as well as what the impacts of exploitation are.
There are two big problems with CVSS in the ICS space. For starters, it doesn’t tell us much about the ICS impact of a vulnerability (whether a bug can cause a loss of view or a loss of control for a control system would be nice to know). More importantly, the scores published in official advisories are often just plain wrong.
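For reference, the CVSSv2 base score these advisories lead with is just a fixed formula over six metrics, none of which capture loss of view or loss of control. A quick implementation of the published v2 base equations shows how little room there is for ICS context:

```python
# CVSSv2 base score from the six base metrics (weights from the v2 spec).
AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.66}    # Conf / Integ / Avail impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

print(cvss2_base("N", "L", "N", "C", "C", "C"))  # 10.0 -- remote, no auth, total compromise
print(cvss2_base("N", "L", "N", "P", "P", "P"))  # 7.5  -- the classic "partial everything"
```

Nothing in those six inputs distinguishes "attacker can read a log file" from "attacker can open a breaker," which is the first problem; the second is that advisories often get even these inputs wrong.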
The latest example of this is the GarrettCom Magnum switch advisory, released only a few weeks ago.
One of the discoverers, Ashish Kamble, has a good technical summary of the vulnerabilities here: https://community.qualys.com/blogs/securitylabs/2015/06/16/device-vulnerabilities-fixed-garrettcom-magnum-series .
There are a few interesting issues, chiefly surrounding how Ashish's writeups conflict with the ICS-CERT advisory:
SHAKACON was a well run and friendly conference with about 300 attendees and high quality talks over 2 days. If you are thinking about it for 2016:
GO – If you live in Hawaii. This is a no brainer. The opportunity to go to Hawaii draws better speakers than you would typically see at a local conference, and how many security conference options are there in Hawaii?
GO – If you want to combine a vacation in Hawaii with some business. Book ahead and you can get some good airline rates.
NO GO – If you are on the mainland, except for maybe California, and are looking for a 2-day security conference. This is not due to any deficiency at SHAKACON, but it is a long trip for a two-day indoor event that could be held anywhere, if you are going to turn around and fly home.
Now for the ICS related highlights from Day 2.
Scott Erven on Medical Devices
He unveiled about 100 default and possibly unchangeable passwords for GE medical devices … sort of. These default passwords were available on the GE site for many years. The image below is the word cloud Scott provided in the session.
Scott reported this to ICS-CERT. GE responded to ICS-CERT that these were default passwords that could be changed, so they are not vulnerabilities.
The problem arises when you read the GE documentation. They have numerous, very strongly worded warnings to never change the default passwords at the risk of permanently breaking the medical device, eliminating the possibility of vendor support and other terrible things. So as you can imagine almost all of the deployments of these 100 medical devices have the default credentials for administration and other roles.
I had two questions for Scott:
- Did ICS-CERT know about the documentation saying not to change the default credentials? Answer: Yes and they chose not to issue an Advisory.
- Did GE commit to modifying processes and updating documentation to recommend changing default credentials? Answer: Unknown. GE did not respond to Scott's requests to meet and discuss their solution.
Craig Smith on Auto Exploitation Techniques
I had to skip out on the second half of this session, but not before hearing about two new tools. Craig began the session by stating that “hacking cars is not really that hard”. It basically is the insecure by design ICS protocol problem combined with a lack of attention to the security of remote access to the car.
Craig announced his CAN of Fingers (c0f) that will listen on the CANBus and provide a make/model/year of car based on the monitored messages. This can then be checked against a database of vehicle profiles and eventually choose what attack or defense code to run.
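c0f is Craig's tool; as a rough illustration of the passive fingerprinting idea (the arbitration IDs and vehicle profiles below are made up for the sketch, not drawn from c0f's database):

```python
# Passive CAN fingerprinting sketch: the set of arbitration IDs broadcast on
# a bus is fairly distinctive per make/model/year, so compare the observed ID
# set against known profiles and pick the closest match.

PROFILES = {  # hypothetical example profiles
    "2012 Example Sedan": {0x101, 0x1A0, 0x2C4, 0x3E9},
    "2014 Example Truck": {0x0C8, 0x1A0, 0x2F0, 0x4B1, 0x5A2},
}

def jaccard(a, b):
    """Similarity of two ID sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def identify(observed_ids):
    """Return (best_profile, similarity) for a set of sniffed CAN IDs."""
    return max(((name, jaccard(observed_ids, ids))
                for name, ids in PROFILES.items()),
               key=lambda t: t[1])

name, score = identify({0x101, 0x1A0, 0x2C4})
print(name, round(score, 2))  # closest to the sedan profile
```

Once the vehicle is identified, a tool can look up which messages that platform understands, which is the "choose what attack or defense code to run" step.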
Craig also announced his F1337 tool that essentially creates a botnet of autos using a Fleet Management application. This video from NBC News shows the tool in action. He was careful to not point to any specific auto manufacturer or fleet management service, but it appeared to be a tool that would work across numerous vendors.
Three sessions at Day 1 of SHAKACON in Honolulu were noteworthy for the ICSsec community.
Charlie Miller and Chris Valasek on Auto Hacking
The big session from this team will be at Blackhat where they will unveil and demo their ability to remotely control cars, most likely through the Bluetooth in the Entertainment System unless they were giving a head fake.
At SHAKACON the pair went into detail on how they moved their research from a real auto to the workbench. This reduced the cost of the hacking rig by more than 75%, and more importantly it reduced the danger of the research.
As they were talking and giving examples, I was a bit surprised they are still alive, or at least uninjured. It was a very old school approach of let's try this and see if it works like we think it will. This led to oversteering into ditches, running the car into the garage wall, loss of brakes and more.
One particularly vivid video showed bleeding the brakes on the bench system, causing brake fluid to squirt out of the braking system. Yes, the command will lead to a loss of brakes.
The work Miller and Valasek have released to date shows impressive reverse engineering, but it also is not surprising to anyone in ICS. If you have direct access to an ICS that is monitoring and controlling a process that is running an insecure by design protocol, ipso facto you will be able to monitor and control the process.
It’s regrettable that all this reverse engineering was necessary to set up the next act of showing the risk of remote exploit, danger of increased connectivity, and risk of insecure by design protocols. The auto sector has the knowledge that Miller and Valasek had to figure out at great effort.
Deviant Ollam on Elevator Hacking
Fantastic session with a knowledgeable speaker presenting information in an entertaining manner and punching the key points.
Of course the actual control system has all the same problems loyal readers are well aware of with insecure protocols and applications, default credentials, Windows XP, …
The interesting part was the keys that override the controls, particularly the Fire key for emergencies. These keys are readily available and the bitting code can be found via search so you can make your own keys.
The main takeaway is to treat elevators as stairwells, since any of the physical security features integrated into the elevator controls are easily overridden.
The good news is the speaker said elevators are “monstrously safe” primarily due to mechanical safety systems not accessible from the control system. Hmm, SIS separation from control.
Hacking Highly Secured Enterprise Environments by Zoltan Balazs, MRG Effitas
I will be using this presentation many times in the next few years.
Scenario: an ICS owner/operator is allowing multiple people regular, everyday remote access to the ICS from the corporate network and Internet. After learning that this is a pathway for remote attacks on the ICS, and hearing the recommendation that it should only be used in emergencies, the owner/operator wants to know what security software and hardware they need to safely allow everyday, regular remote access to the ICS.
Zoltan's session showed an example with VPN, 2-factor authentication, AppLocker, a firewall (and then a "NextGen" firewall) and a bunch of other security controls. He showed busting through all these controls one by one with existing and some newly released tools. It was not handwaving or vague; he provided the detail and real demos, and the tools he used are now released.
I'll post the link to the PowerPoint and video when it is out, but it is a great example that if you use a remote access capability you are providing an attacker with a pathway into your ICS.