People Are Not THE Answer

Earlier this year at the SANS SCADA Security Summit, Michael Assante used his position as program chair to ask various speakers and panels whether People, Process or Technology was the most important issue to address to improve ICS security. The answer he wanted was People, and almost everyone gave it to him, while quickly adding that all three are necessary and need improvement.

With the same caveat that all three are necessary and need improvement, I’d answer Technology.

My guess is that most of those answering the question were thinking: is it more important to deploy a Tofino, Waterfall, an ICS IDS, application whitelisting or one of the growing number of other security products, or to better train people on security? I might agree with the People answer if that were the question.

The real hindrance to an organization that is serious about securing its ICS is the “technology” in the ICS itself. As we have proven and harped on with the PLCs and protocols in Project Basecamp, most ICS are insecure by design. Even when source and data integrity security features are built in, a sizable percentage of these products haven’t been through a Security Development Lifecycle (especially threat modeling and fuzz testing). They are easy prey for a moderately skilled attacker who can write an exploit.
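
To make the insecure-by-design point concrete, here is a minimal sketch (Python, with a hypothetical PLC address) of what “no source or data integrity” means in practice: many fielded protocols, Modbus/TCP among them, will act on a write command from anyone who can reach the device, with no authentication or integrity check. This is an illustrative example, not taken from Project Basecamp itself.

```python
import socket
import struct

PLC_IP = "192.0.2.10"   # hypothetical PLC address
PLC_PORT = 502          # standard Modbus/TCP port

def write_single_coil(coil_address: int, turn_on: bool) -> bytes:
    """Send an unauthenticated Modbus 'Write Single Coil' (function 0x05) request."""
    value = 0xFF00 if turn_on else 0x0000
    pdu = struct.pack(">BHH", 0x05, coil_address, value)   # function code, coil address, value
    # MBAP header: transaction id, protocol id (0), length of remaining bytes, unit id
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, 1)
    with socket.create_connection((PLC_IP, PLC_PORT), timeout=5) as conn:
        conn.sendall(mbap + pdu)
        return conn.recv(12)   # on success the device simply echoes the request

# Anyone with network access to the device can flip an output this way;
# no credentials, exploit, or vendor software required.
```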

You can have the most well-trained, professional people in the world, and if the bad guy or malware reaches the ICS it is all over. The underlying process is either stopped or modified, and the degree of damage is limited only by the time and skill of the attacker and by a safety system that is inaccessible from the ICS.

Approaching the issue from a different direction, we have clients that have been diligently working on ICS security for over five years. The insecure-by-design nature and fragility of ICS solutions is the one problem they can’t solve. One client, who has a regularly audited ICS security policy, an incident response plan, an ICS IDS, a great security perimeter, …, was forced to choose among insecure-by-design PLCs in their replacement project. So sad.

An argument can be made that processes are even more important than people. If technology limited which USB drives could be used, and there was a secure process for passing data between zones, would it really matter whether the people understood the security ramifications? It’s not uncommon for processes to be followed in ICS without the people understanding the reasoning behind them: when alarm X goes off, I do Y and Z.
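
As a rough illustration of that kind of process/technology control, here is a minimal sketch (Python, with a hypothetical approved-file manifest) of a transfer step that only admits files whose hashes have been vetted. The operator just runs the procedure; the security reasoning is baked into the tooling.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical manifest: SHA-256 digests of files approved for the control zone.
APPROVED_SHA256 = {
    "<digest of vetted firmware image>",
    "<digest of vetted recipe file>",
}

def transfer_to_control_zone(src: Path, dest_dir: Path) -> bool:
    """Copy a file into the control zone only if its hash is on the approved manifest."""
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    if digest not in APPROVED_SHA256:
        print(f"REJECTED: {src.name} is not on the approved manifest")
        return False
    shutil.copy2(src, dest_dir / src.name)
    return True
```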

Solutions should require the ICS team to think as little about cyber security as possible. Security should be built into the solutions, and vendors should be deploying those solutions in a secure manner. A world where a large portion of the ICS team needs significant security training is a failure.

The only way I’d agree with the People answer is if we are talking about C-level executives and top government and industry officials. Their leadership in demanding ICS security would make the biggest difference, but the thrust of the question meant “people” at the workforce level.

Once we have systems that can be secured the answer may change, but right now the biggest problem is technology.

Image by Verbatim

7 comments to People Are Not THE Answer

  • Apologies in advance for engineering geek commentary.

    The classic activation energy diagram is shown as exothermic. I’d tend to argue that automation follows more of an endothermic curve. A purpose of automation technology is to create a more capable process (higher potential energy) end state.

    Carrying the analogy a bit farther, it’s smart for folks to look for catalysts to reduce the activation energy. Depending on initial state capability (security baseline), you might find investing in catalyst A (compliance) more effective than catalyst B (resiliency).

    To go full circle, people are the answer. Progress in automation technology will eventually fail without people in the loop at the right places.

  • Dale Peterson

    Always an interesting thought and presentation from you, Bryan.

    If you take the broad view of the People category, then of course it is right. People design and manufacture technology; people create requirements, purchase and deploy ICS; people create and enforce processes.

    But if you take that broad of a view, it really isn’t a question. I’d argue the loop you are talking about is the broad view.

    The question is more interesting if you narrow the definition to the people, process or technology at the client site that is deploying and operating the ICS.

  • As clarification about people, consider this snip from Dan Geer at the 2012 Suits and Spooks DC:
    http://www.taiaglobal.com/suits-and-spooks-dc/

    “…because the question on the table is not really whether
    a human is a failsafe or a liability — because the human is going
    to come out of the loop whether we like it or not. We can do nothing
    but turn over an increasing percentage of the tasks of cybersecurity
    to machines. In a sense, they’ve already won.

    The question is under what circumstances that we still control can
    that turning over be a good thing? How can we put a human back
    into the loop such that that human *is* a failsafe.”

  • “Solutions should require the ICS team to think as little about cyber security as possible”

    I’m not sure that’s the attitude we should encourage for control system engineers. The whole plight we’re in can be explained by those engineers advocating that cyber stay out of their way as much as possible (while incorporating more, and more complex, cyber systems on the plant floor every year). This has led to a philosophy that favors default-allow, if not to say default-encourage. One cannot use cyber the way we do today without thinking hard about security and still expect everything to be risk-free at the end of the day.

  • C. Price

    All great points above. I would add that, in the grand scheme of things, some improvements will be made on the technology front, but one should keep in mind that until we reach the shareholders/investors of the companies that manufacture ICS control systems (PLCs), we will continue to ice skate uphill (continue to be vulnerable to cyber attacks in the ICS realm). As more attacks come that actually lead to downtime, the technology will improve. Until then, awareness, continuous monitoring, and the right people coming up with solutions to fight the ongoing battle is the best approach in my opinion. Just a thought.

    Price

  • Engineers will need to think about cyber security, but not likely in terms that IT folks are familiar with (i.e., port scans, firmware dumps, and passwords). Engineers won’t be concerned with system security, but will instead be concerned about the security of the overall process.

    Imagine, for instance, a naval destroyer. It has many, many mechanical and electrical systems in use, such as propulsion, fire control, power generation, radar, and so on.

    I can imagine a case where incoming targets on the radar would be classified as hostile/non-hostile. In cases where a hostile target was seen, I could imagine an automated alert being sent out across a command and control subsystem, which would be forwarded to other systems as appropriate. Two of those systems could be propulsion and steering, directing the ship to change course and speed depending on parameters fed from the C&C system, maybe maneuvering it to a better counter-firing position.

    If hackers were to infiltrate the power generation system, they might be able to trick the propulsion system into thinking it had more power available than was actually available. This could result in the destroyer not getting into position in time to counter the target. This is an inherited trust issue: the engines trust that the power gen system is ready to deliver, and report capability up to C&C accordingly.

    So, what to do? A typical response would be to run dedicated sensors all over the place, ensuring that mission-critical data was replicated in each system. That costs a lot of money, space, and weight, though, and has reliability issues. It could be that use of encryption, authorization, and other cyber security mechanisms could provide the same assurance about whether data received from other systems is trustworthy. Then the system could react based upon the trustworthiness of the input as well, rather than simply the input’s value.
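
    A minimal sketch (Python, with hypothetical key handling and field names) of that last idea: instead of duplicating sensors, the power-generation subsystem authenticates its reports and the propulsion subsystem verifies them before acting. This is only an illustration of message authentication, not a description of any real shipboard system.

    ```python
    import hmac
    import hashlib

    SHARED_KEY = b"demo-key-provisioned-out-of-band"   # hypothetical; real systems need proper key management

    def sign_report(payload: bytes) -> bytes:
        """Power-generation side: append an HMAC-SHA256 tag to the report."""
        return payload + hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

    def verify_report(message: bytes):
        """Propulsion side: return the payload only if the tag verifies; otherwise return None."""
        payload, tag = message[:-32], message[-32:]
        expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
        return payload if hmac.compare_digest(tag, expected) else None

    report = sign_report(b"available_power_kw=3500")        # hypothetical report format
    assert verify_report(report) == b"available_power_kw=3500"
    ```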

  • Dale Peterson

    Ralph – I think there is a disconnect between your comment here and your comment on the Practice, Practice, Practice post. In that entry you comment, “I don’t favor a strategy that puts the cost for testing and documentation in the hands of thousands of end users, worst case assisted by pen testers who are not only unfamiliar with the device under test, but also with the testing tools.”

    My point is exactly that. The owner/operator should not have to deal with basic security issues like protocols without integrity, authentication that can be bypassed, hard-coded credentials, systems deployed in a default/weak state … The first three examples are technology, with the fourth being process, but all are at the vendor or integrator level.

    To take it a step further, until the vendor solves these basic technology issues the owner/operator has to rely on a perimeter and detection.

    Also to clarify, the proponents of the People thrust envision a massive effort to train and certify everyone involved in ICS (operators, engineers, technicians) on security. My contention is that a focus on the insecure-by-design technical issues would have a much greater impact on the security posture of critical infrastructure ICS.

    It can turn into a circular argument. If the C-level executives, the VP of Operations, and perhaps the ICS engineer guru understood security and risk better, they could drive the technology.
