Threatpost and a handful of other news outlets are reporting on a worm actively exploiting the Shellshock bug against unpatched NASes. As an aside, I find it a bit strange that the attackers are only performing clickjacking attacks — a much more obvious attack would be to deploy CryptoLocker or other ransomware, since the current worm is targeting storage devices.
The question becomes: whose job is it to find and patch these kinds of bugs?
I hate to always say ‘the vendors,’ although that is my default response. Vendors, however, often don’t have the personnel to review the code they write themselves, let alone external code. Third-party components are usually open source and often volunteer-driven.
I feel that a group of vendors would be well-served to get together and fund code reviews for commonly-used components. Those vendors could then share the findings with the public or, at their discretion, keep them internal to the group for proper patching. ‘Collaboratition’ is a phrase often used in national labs for this kind of information-sharing — not ideal financially, but oftentimes it is the right thing to do (or the only way to get it to work).
Lightweight web servers seem like a good candidate for review, since so many embedded systems make use of them. We came up with our list of candidate servers based on devices in our lab, then searched for fingerprintable servers on Shodan to get a feel for their popularity overall. Results are rounded to the nearest 10,000 to help anonymize the actual software that we’re looking at:
Server A: 150,000
Server B: 100,000
Server C: 80,000
Server D: 60,000
Server E: 20,000
We then did a cursory code review of every server whose source we could find. This review was just a basic ‘grep’ analysis, looking for unsafe uses of unsafe C functions: blind strcpy() calls, strncpy() calls that take user-supplied lengths, malloc() calls that never check for success, and sprintf() calls that never check the length of their input. Our ‘code quality’ rating is a generalization based on how much of a headache we got reading the code: the bigger the headache, the less maintainable the codebase and the more it will cost to fix.