The Tripwire team asked a number of people for 100 words on the following questions:
How does the IoT change the dynamics between IT and OT? What practical tips can you provide for working together effectively?
You can read the full set of responses in this linked article, but here was my answer:
The "OT is different from IT" fallacy stems from ICS professionals comparing OT to desktop management.
OT is mission critical IT. The areas where OT differs from highly secure and reliable mission critical IT systems are deficiencies in OT rather than differences in requirements: insecure-by-design protocols, incorrectly deployed equipment, lack of staff trained on the deployed technologies, insufficient test environments, a run-to-fail maintenance philosophy …
The looming insecurity of IoT is much more of a concern for end users than traditional ICS, as long as you don’t fall for the "everything must talk to everything" myth.
100 words is quite constraining, so here is a bit more. I’d challenge you to ask HOW? whenever you hear "OT is different from IT."
You will hear that OT is often controlling a critical process with potentially large impacts to human safety, critical services and the environment, while IT is not. This drives the availability and integrity requirements of the OT; it does not represent a difference in how OT will meet those requirements. Many mission critical IT networks have availability requirements as high or higher than OT.
You will hear about patching, upgrades and other IT cyber maintenance activities that are performed without testing or coordination with the system users. This is true in corporate desktop management and non-critical servers. It is not the case for a mission critical IT system where the company can tell you the cost per minute of downtime.
The sad truth is that the differences between mission critical IT and OT are due to those running OT accepting a lack of availability, integrity and reliability.
- No source or data integrity … loyal readers know I’ve been beating the drum that the ICS protocols lack integrity. Access = control or compromise. Mission critical IT uses secure protocols.
- Lack of trained staff for deployed technology … this is also tied to incorrectly deployed equipment. Example: the lead article of the ICS-CERT Monitor highlights the crack team being sent out to analyze “the router and switch configurations and found an error in how the spanning-tree protocol”. For years OT has been deploying Domain Controllers without trained Domain Admins, and Cisco infrastructure without even basic CCNA Routing and Switching knowledge. Now sophisticated virtual environments are being deployed with the ICS vendors stating “don’t worry, you won’t need to touch it after it’s installed”. Mission critical IT insists on the right skill sets.
- Lack of a test environment … not all OT fails in this, but it is still uncommon to see a realistic test environment for OT. Mission critical IT will have a test environment and a regression test for functionality before changes are moved to production.
- Recovery … mission critical IT has well designed and tested recovery capability and can state with a high degree of confidence how long it will take to recover. OT relies heavily on redundancy that is of little help in a cyber incident.
- Patches break applications … this actually happens in both OT and mission critical IT, which is why both need to test thoroughly before deployment. The biggest difference is the mission critical IT vendor expects to need to patch and usually follows the OS and other vendor instructions on how to design apps. I’ll admit this is an area where many ICS vendors have greatly improved. Patch incompatibility has markedly decreased over the past 5 years.
I could go on and on, but the most important item is that OT all too often has a run-to-fail maintenance philosophy. Install it and don’t touch it unless it stops working. This results in increasing fragility and lowered reliability that would not be accepted in mission critical IT.
I know that not all mission critical IT operations are perfect, but they are far ahead of OT.
We have worked with clients that rely on IT for OT, others that share the responsibilities, and even some Operations Groups that basically create a mission critical IT group. Any of these will work, but the key is you need the trained team with sufficient time and processes to run your OT like a mission critical IT system. The amount of money spent to run and maintain a mission critical IT system dwarfs what you see for even very critical OT systems. If your OT is so important to the organization it should not be difficult to get the appropriate funding.
Can you think of a single process or control in mission critical IT that would not apply to the IP or Ethernet portion of OT?
A failing grade
When reading CERT advisories in the ICS space I used to skim to the CVSS score as a quick way to assess what the vuln was. I rarely like what I see when I think about the actual vulnerability to which the score is applied.
CVSS, or the Common Vulnerability Scoring System, is meant to provide an abbreviated summary of a vulnerability. Chiefly, it is a means to quickly convey how serious a vulnerability is by showing both how easy a vulnerability is to exploit, as well as what the impacts of exploitation are.
There are two big problems with CVSS in the ICS space. For starters, it doesn’t tell us much about the ICS impact of a vulnerability (whether a bug can cause a loss of view or a loss of control for a control system would be nice to know). More importantly, the scores published in official advisories are often just plain wrong.
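To make concrete what a CVSS score does and does not capture, here is a minimal sketch that computes a CVSS v2 base score (the version used in ICS-CERT advisories at the time) from a vector string, using the constants from the CVSS v2 specification. Note that nothing in the vector distinguishes loss of view from loss of control; the code and comments are my illustration, not anything from the advisories themselves.

```python
# CVSS v2 base score from a vector string, using the metric values and
# equation from the CVSS v2 specification.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}     # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}      # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}     # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.66}     # Confidentiality/Integrity/Availability impact

def cvss2_base(vector):
    m = dict(part.split(":") for part in vector.split("/"))
    impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
    exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * AU[m["Au"]]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# Remotely exploitable, no authentication, partial impact across the board:
print(cvss2_base("AV:N/AC:L/Au:N/C:P/I:P/A:P"))  # 7.5
```

The same 7.5 applies whether "partial availability impact" means a web page is slow or an operator loses view of the process, which is exactly the first problem described above.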
The latest example of this would be the Garrettcom Magnum switch advisory, released only a few weeks ago.
One of the discoverers, Ashish Kamble, has a good technical summary of the vulnerabilities here: https://community.qualys.com/blogs/securitylabs/2015/06/16/device-vulnerabilities-fixed-garrettcom-magnum-series.
There are a few interesting issues, chiefly surrounding how Ashish’s writeups conflict with the ICS-CERT advisory:
SHAKACON was a well run and friendly conference with about 300 attendees and high quality talks over 2 days. If you are thinking about it for 2016:
GO – If you live in Hawaii. This is a no brainer. The opportunity to go to Hawaii draws better speakers than you would typically see at a local conference, and how many security conference options are there in Hawaii?
GO – If you want to combine a vacation in Hawaii with some business. Book ahead and you can get some good airline rates.
NO GO – If you are on the mainland, except for maybe California, and are looking for a 2-day security conference. This is not due to any deficiency at SHAKACON; it’s simply a long trip for a two-day indoor event that could be held anywhere, if you are just going to turn around and fly home.
Now for the ICS related highlights from Day 2.
Scott Erven on Medical Devices
He unveiled about 100 default and possibly unchangeable passwords for GE medical devices … sort of. These default passwords were available on the GE site for many years. The image below is the word cloud Scott provided in the session.
Scott reported this to ICS-CERT. GE responded to ICS-CERT that these were default passwords that could be changed, so they are not vulnerabilities.
The problem arises when you read the GE documentation. They have numerous, very strongly worded warnings to never change the default passwords at the risk of permanently breaking the medical device, eliminating the possibility of vendor support and other terrible things. So as you can imagine almost all of the deployments of these 100 medical devices have the default credentials for administration and other roles.
I had two questions for Scott:
- Did ICS-CERT know about the documentation saying not to change the default credentials? Answer: Yes and they chose not to issue an Advisory.
- Did GE commit to modifying processes and updating documentation to recommend changing default credentials? Answer: Unknown. GE did not respond to Scott’s requests to meet and discuss their solution.
Craig Smith on Auto Exploitation Techniques
I had to skip out on the second half of this session, but not before hearing about two new tools. Craig began the session by stating that “hacking cars is not really that hard”. It basically is the insecure by design ICS protocol problem combined with a lack of attention to the security of remote access to the car.
Craig announced his CAN of Fingers (c0f) that will listen on the CANBus and provide a make/model/year of car based on the monitored messages. This can then be checked against a database of vehicle profiles and eventually choose what attack or defense code to run.
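The idea behind this kind of passive fingerprinting can be sketched in a few lines. The code below is my illustration of the concept, not the actual c0f tool: the set of arbitration IDs routinely seen on the bus acts as a signature that can be scored against a database of vehicle profiles. The profile names and IDs are made up for the example.

```python
# Hedged sketch of passive CAN fingerprinting in the spirit of c0f (not the
# actual tool). Sniffed arbitration IDs are compared against known profiles.

# Hypothetical profile database: make/model/year -> IDs routinely on the bus.
PROFILES = {
    "ExampleCar 2014": {0x130, 0x1A0, 0x280, 0x4F0},
    "ExampleTruck 2012": {0x100, 0x200, 0x300},
}

def fingerprint(frames, profiles=PROFILES):
    """frames: iterable of (arbitration_id, data) tuples sniffed off the bus.
    Returns the best-matching profile and its Jaccard similarity score."""
    seen = {fid for fid, _ in frames}
    best, best_score = None, 0.0
    for name, ids in profiles.items():
        score = len(seen & ids) / len(seen | ids)
        if score > best_score:
            best, best_score = name, score
    return best, best_score
```

A real implementation would also use message frequency and payload structure, since many IDs are shared across manufacturers.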
Craig also announced his F1337 tool that essentially creates a botnet of autos using a Fleet Management application. This video from NBC News shows the tool in action. He was careful to not point to any specific auto manufacturer or fleet management service, but it appeared to be a tool that would work across numerous vendors.
Three sessions at Day 1 of SHAKACON in Honolulu were noteworthy for the ICSsec community.
Charlie Miller and Chris Valasek on Auto Hacking
The big session from this team will be at Black Hat, where they will unveil and demo their ability to remotely control cars, most likely through the Bluetooth in the Entertainment System unless they were giving a head fake.
At SHAKACON the pair went into detail on how they moved their research from a real auto to the workbench. It reduced the cost of the hacking rig by more than 75%, and more importantly it reduced the danger of the research.
As they were talking and giving examples, I was a bit surprised they are still alive, or at least uninjured. It was a very old-school approach of “let’s try this and see if it works like we think it will.” This led to oversteering into ditches, running the car into the garage wall, loss of brakes and more.
One particularly vivid video showed bleeding the brakes on the bench system, with brake fluid squirting out of the braking system. Yes, the command will lead to a loss of brakes.
The work Miller and Valasek have released to date shows impressive reverse engineering, but it is not surprising to anyone in ICS. If you have direct access to an ICS that monitors and controls a process over an insecure-by-design protocol, ipso facto you will be able to monitor and control the process.
It’s regrettable that all this reverse engineering was necessary to set up the next act of showing the risk of remote exploit, danger of increased connectivity, and risk of insecure by design protocols. The auto sector has the knowledge that Miller and Valasek had to figure out at great effort.
Deviant Ollam on Elevator Hacking
Fantastic session with a knowledgeable speaker presenting information in an entertaining manner and punching the key points.
Of course the actual control system has all the same problems loyal readers are well aware of with insecure protocols and applications, default credentials, Windows XP, …
The interesting part was the keys that override the controls, particularly the Fire key for emergencies. These keys are readily available and the bitting code can be found via search so you can make your own keys.
The main takeaway is to treat elevators as stairwells, since any of the physical security features integrated into the elevator control are easily overridden.
The good news is the speaker said elevators are “monstrously safe” primarily due to mechanical safety systems not accessible from the control system. Hmm, SIS separation from control.
Hacking Highly Secured Enterprise Environments by Zoltan Balazs, MRG Effitas
I will be using this presentation many times in the next few years.
Scenario: ICS owner/operator is allowing regular, every day, multiple people remote access to the ICS from the corporate network and Internet. After learning of the risk of this being a pathway for remote attacks on the ICS, and the recommendation that this should only be used in emergencies, the owner/operator wants to know what security software and hardware they need to safely allow everyday, regular remote access to the ICS.
Zoltan’s session showed an example with VPN, 2-factor authentication, AppLocker, firewall (and then “NextGen” firewall) and a bunch of other security controls. He showed busting through all these controls one by one with existing and some newly released tools. It was not handwaving or vague: he provided the details, gave real demos, and the tools he used are now released.
I’ll post the link to the PowerPoint and video when they are out, but it is a great example that if you use a remote access capability you are providing an attacker with a pathway into your ICS.
Digital Bond Labs appeared at Black Hat Sessions in Ede, Netherlands. We gave a talk on vulnerability inheritance in PLCs, and also discussed some of the challenges associated with removing vulnerable internet-connected control systems from their wide attack surface.
The conference was a well-run one-day event put on by Madison Gurkha. Ede is a fairly small town, but thanks to being in the Netherlands is easily reached by train (or bicycle). BHS has been increasing in size steadily over the years, and this year’s attendance was just shy of 400 total conference-goers. While the keynote talk was in Dutch and thus incomprehensible to me, there were three good technical talks in English, including a talk by ERPScan. S4 alumni may remember ERPScan as the employer of Alexander Bolshev when he gave his excellent HART security research talk in 2014.
Greetings. Quick post to announce an updated release for the Digital Bond Labs CANBus utilities repository.
This release features the addition of a simple fuzzer to the toolkit. The fuzzer has two modes. The first mode (default with no options) is to send random data to random IDs on the CAN. The --min and --max arguments specify a range of potential IDs to send random data.
The second, and more interesting, mode of the fuzzer is a mutation-based approach working off of a base message buffer. Say you observe a message on ID 0x431 consisting of the data (here shown as a hexadecimal string) "00 02 00 00 82 13 00 01". If you would like to fuzz bytes 4-5 (0x8213) you would issue a command like: "node fuzz.js --min 0x431 --max 0x431 --basebuffer "0002000082130001" --mutateIndexMin 4 --mutateIndexMax 5"
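The mutation mode boils down to a few lines of logic. Here is a hedged Python sketch of the same idea (the actual tool is node.js; this is an illustration, not the repository code): randomize only the bytes between the two mutate indices, inclusive, and leave the rest of the frame intact so the message still parses as the same type.

```python
# Hedged sketch of the mutation-based fuzz mode described above
# (not the actual fuzz.js code).
import random

def mutate(base_hex, mutate_min, mutate_max):
    """Return the base message with bytes mutate_min..mutate_max randomized."""
    buf = bytearray(bytes.fromhex(base_hex))
    for i in range(mutate_min, mutate_max + 1):
        buf[i] = random.randrange(256)
    return bytes(buf)

# Fuzz bytes 4-5 of the observed ID 0x431 message, as in the example command.
frame = mutate("0002000082130001", 4, 5)
assert frame[:4] == bytes.fromhex("00020000")   # leading bytes untouched
assert frame[6:] == bytes.fromhex("0001")       # trailing bytes untouched
```

Each mutated frame would then be sent on ID 0x431; holding the rest of the message constant keeps the receiving ECU parsing it while the targeted field walks through the value space.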
We have opened the S4x16 Call For Presentations on the event website. Since 2007 S4 has been the place to show your ICS Security research to an advanced audience that will get it. In recent years we have added Operations Technology (OT) and ICS Cyber Weapons sessions to the event. But again these sessions are aimed at an audience that knows the basics and doesn’t want to hear SCADASEC 101.
The new venue in South Beach will allow us to produce sessions on two big stages, so we will be hunting harder than ever for quality, fresh and entertaining content.
Here is the short version of the CFP:
- Email your proposed idea for a S4x16 session to firstname.lastname@example.org
- Explain the session in 2 to 3 paragraphs highlighting what is new or novel about the session
- Identify if it is a Technical Deep Dive Session or Main Stage Session
- Identify the time requested for the session (15, 30, 45 or 60 minutes)
Also email us any ideas you may have for speakers or topics we should chase for S4x16. We evaluate submissions as they come in, so sending your session idea in early increases the odds it will be accepted. The CFP closes on September 1st.
There Will Be Cyberwar: How The Move To Network-Centric War Fighting Has Set The Stage For Cyberwar by Richard Stiennon
Read this book if you are looking for a summary of the attacks and cyber incidents that have occurred over the past 20 years in government, military, critical infrastructure and business. It also provides numerous concise examples of security controls that are needed to combat the attacks described in the book.
Don’t read this book if your focus is ICS. There is a bit of information on ICS incidents, but not enough to justify reading for that purpose, and you will find minor problems with the ICS text. Don’t read this book if you are looking primarily for a discussion and analysis of the future of “cyberwar”.
With the exception of the fictional scenario in Chapter 1, most of the book is focused on synopses of past incidents. It does, however, convincingly make the case that weapons systems, communication systems and many other elements required to effectively fight a war are now connected to networks, more reliant on software, and therefore subject to a cyber attack.
Given the title, There Will Be Cyberwar, and in light of Thomas Rid’s Cyber War Will Not Take Place, it is almost mandatory to see if Richard made his case and why the two authors come to diametrically opposed conclusions.
The answer is actually simple. The two authors have very different definitions for cyberwar. Thomas spent a lot of time defining war and then cyberwar in his book, and he made a convincing case why this definition of cyberwar will not be met. Read the book and listen to my podcast with Thomas to understand this point of view.
Richard has a much less stringent definition of what constitutes cyberwar.
Cyberwar is the use of computer and network attacks to further the goals of a war-fighting apparatus.
Richard has made the case clearly in his book that based on this definition cyberwar will happen and incidents have probably already occurred that would meet this definition.
I’ve heard no dispute that cyber weapons will be used in wartime, just a dispute over the term cyberwar.
A more interesting question is will we see a use of cyber weapons in war that is akin to the Battle of Britain / air warfare? I first heard this question from Jason Healey of the Atlantic Council in a panel discussion. The Battle of Britain proved that air power alone could be used to win a major battle. Will we see a major battle fought entirely in the cyber domain?
Richard also describes what would constitute a Cyber Pearl Harbor in the book.
It is not the destruction of the power grid, or the loss of communications from attacks against the Internet and telecom infrastructure, or even the collapse of the stock market that deserves Panetta’s dire warning. Only a crippling military defeat thanks to overwhelming control of the cyber domain deserves to be labeled a Cyber Pearl Harbor.
I believe the last sentence is a better definition of cyberwar, and perhaps a slightly modified version of the earlier definition is better for cyber weapons. In the end most of the disagreement is definitions, and this is less interesting or important than how cyber weapons will be created, deployed and used as well as defended against.
Note: I read the Kindle version on an iPad Mini 3 Kindle app. The formatting is wrong, but not so wrong as to make the book unreadable on that device, and it was still worth the convenience and savings over the print version for me.
S4x15 came on the heels of the attack on Sony. Everyone was discussing how cyber attack attribution can be done and the level of certainty that is possible, so we had a panel to discuss this very issue.
The second part of the panel discussed what the victim does after attributing an attack to a nation or organization: retribution.
The panel included Bill Hagestad of Red Dragon Rising, Jonathan Pollet of Red Tiger, and Tim Yardley of University of Illinois.
There is a ‘talk franchise’ that has started titled ‘Switches Get Stitches.’ Started by Eireann Leverett and Colin Cassidy, it showcases problems in industrial network switch hardware and firmware. Digital Bond Labs offers a humble contribution to the cause: a demonstration of a firmware rootkit for an (admittedly somewhat dated) industrial switch. If you are attending Defcon 23, be sure to check out the ‘official’ SGS talk there.
One of the components in this year’s ICS Village CTF is going to be unique: we have modified a network switch’s firmware. This gives us a lot of interesting leeway: we can now mangle packets, talk to a command and control server, and make a few other interesting flags for participants to find.
Most ICS equipment lacks any kind of firmware protection. Scarier is the fact that some operators, including a very small subset of utility operators, purchase safety-critical equipment from dubious sources such as eBay.
So, let’s take apart a network switch and show just how easy it is to trojan a device!