Stunt Hacking Reply


An abbreviated response to Valsmith's post regarding "Stunt Hacking"

For reference, I've interacted with Valsmith many times over the years and respect his work and opinions greatly. As with any discussion on a topic, some people may hold different views, as I do in response to this post. That does not mean I have any grudge or problem with him; differing opinions that lead to discussion of serious issues are a good thing, as is the case here.

That said, I beg to differ with Valsmith. Allow me to retort.

First off, I agree that "Stunt Hacking" is a thing and has become a problem in some ways. I've railed against talks that are 40 minutes of 'build log' slides culminating in a 'money shot' slide at the end. Lots of flash, but it's one isolated case/vuln/exploit for which the patch was released the day before the talk. It looks cool and gets headlines and click-throughs, but it does not do much to raise the ground level of security.

The assumption in Valsmith's post seems to be that Chris's actions were driven purely by publicity to generate business; in other words, profit. That doesn't always have to be the case. As I hope to elaborate below, sometimes a line of research and its results transcend that motivation and become a matter of altruism: wanting to see things fixed because they are insanely dangerous and scary.

I cannot speak to Chris's motivations, thought processes, or anything along those lines. However, I can speak with authority to my own research into ADS-B and NextGen air traffic control. It is more ground-based than Chris's work, but mine was also mentioned in the FBI/TSA PIN that was sent out. I'm assuming I'm under similar scrutiny to Chris, albeit one or two rings from the center.

When I started looking into ADS-B, it was sheer curiosity. I wanted to know how the plane finder app on my phone worked. I chased the rabbit down the hole and found myself in a very scary version of wonderland, where the protocols between ATC and NextGen-ready planes were unencrypted and unauthenticated, with no evidence of mitigations for obvious attacks. My research took on a simple hypothesis: the system is horribly insecure and its risks unmitigated; prove this wrong. As someone who flies a lot, I wanted to be wrong.
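To make "unencrypted and unauthenticated" concrete, here is a minimal sketch in plain Python (no SDR hardware or libraries needed) that pulls the aircraft identification out of a raw ADS-B extended squitter frame. The sample frame is the widely circulated KLM1023 example message, and the decoder is deliberately simplified (identification messages only, no parity checking), but it shows the point: everything in the frame is plain binary that anyone who can receive 1090 MHz can read.

    # Simplified decoder for an ADS-B (DF17) aircraft identification frame.
    # The 6-bit character set used by the identification message:
    CHARSET = "#ABCDEFGHIJKLMNOPQRSTUVWXYZ##### ###############0123456789######"

    def decode_identification(msg_hex):
        """Return (icao_address, callsign) from a 112-bit identification frame."""
        bits = bin(int(msg_hex, 16))[2:].zfill(len(msg_hex) * 4)
        icao = msg_hex[2:8]                       # 24-bit airframe address, sent in the clear
        me = bits[32:88]                          # 56-bit extended squitter message field
        callsign = "".join(
            CHARSET[int(me[8 + i:8 + i + 6], 2)]  # eight 6-bit characters after the type code
            for i in range(0, 48, 6)
        )
        return icao, callsign.strip()

    # The widely published KLM1023 sample frame, as any cheap SDR dongle would hear it:
    print(decode_identification("8D4840D6202CC371C32CE0576098"))
    # -> ('4840D6', 'KLM1023')

The only integrity mechanism in such a frame is a CRC-style parity field at the end; there is no key, no signature, and nothing tying the message to the aircraft that supposedly sent it.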

Insofar as proving to myself that the risks have been mitigated and that many elements of NextGen are safe and secure, I have so far failed. I could not find convincing evidence that the system is secure and safe. But like any good scientist, I opened my research up to peer review in my DC20 talk on the subject. I say it often in the video of that and other versions of the talk: I want to be wrong.

To date, roughly three years later, I have not found or been presented with any public evidence to convince me the system is safe. Responses have usually been variations of 'trust us' or 'we can't tell you'. As a hacker, of course, that is an answer that will not do. So I keep digging and pushing because, as a frequent flyer, my ass is literally at stake.

I can only speak for myself, but when faced with such things, I could not stand by quietly and not do everything I could to get an answer or force change and an improvement to security. Not as a profit center or market niche, but as a human being who cares what happens to other humans (and my ass).

For myself, this has led to a level of exasperation and frustration: a lack of interest or response from regulators, relevant authorities, manufacturers, and airlines. So, in order to crack this nut and push harder than one independent researcher can alone, the media becomes a powerful tool, but also an unpredictable one.

I won't rehash the problems with media and journalism here; it's too big a topic to fit. But suffice it to say, they often suffer from a cranial-rectal interface, sensationalizing small details while overlooking the bigger picture. It's a double-edged sword of sorts, but when you care enough about an issue and don't want to give it up, it's what you have to do. In Chris's case, there's a lot of sensationalism and 'OMG HAXORS HACKING PLANEZ!' that is not very helpful. However, the message that there may be a problem here is certainly being heard loudly by those who need to hear it. They are going to have to deal with this issue sooner or later, because the media and the public will continue to ask questions.

Chris's work, my own, and that of many other researchers into various aircraft system vulnerabilities has shown that the problems in aircraft systems are to some degree systemic. Pick a system, look at it from the hacker perspective, and you always seem to find some truly facepalm-worthy design decisions that reflect a core problem in this (and many other) industries: conflating two different things. Safety does not mean security.

Safety != Security

For obvious reasons, one would expect the aerospace industry to have a culture and engineering mindset built around safety. After all, 500 mph and six miles in the air is not a natural state for a human being, and gravity is a harsh mistress. Again speaking for myself, in all my research and discussions with air crew, ground crew, technicians, and other industry insiders, it is very clear to me that the idea of security as the infosec industry uses the term is not to be found in aerospace. Their definition is very different from our own.

This is an industry focused on safety. They design for fault tolerance, extreme conditions, failure modes, redundancies, and other such factors: checksums on data to ensure it has not been corrupted, isolation so that the failure of one piece of equipment does not affect another, and so on. These measures almost exclusively address the kinds of failure one can categorize as 'routine': a piece of equipment fails, you know it has failed, and you switch to a backup or an alternate procedure. The problem is that security threats do not have to involve failure, and certainly not the failures that engineers anticipated and built to expect.

To me, security engineering means having to deal with an intelligence (or lack thereof) behind the failure. A security control can 'fail' and allow access where it should not have been granted, but that does not mean the control ceases to function 'normally' for everyone else. An aircraft system that is susceptible to external, intelligent influence can meet the design spec for 'safety' yet fail horribly from a 'security' perspective. An aircraft system that allows 'bad' data to pass to decision makers (pilots, ATC, etc.) but otherwise operates normally and within expected parameters will be trusted as working normally, to whatever end the outside intelligence desires.
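As a generic sketch of that gap (the field names and the key below are purely illustrative, not taken from any real avionics protocol): a checksum such as a CRC, the safety-style integrity check, happily validates data an attacker has rewritten, because anyone can recompute it; a keyed MAC cannot be forged without the secret.

    import hashlib
    import hmac
    import zlib

    KEY = b"shared-secret-key"        # hypothetical key; ADS-B and friends have no equivalent

    def crc_frame(payload):
        """Safety-style integrity: detects accidental corruption."""
        return payload + zlib.crc32(payload).to_bytes(4, "big")

    def crc_ok(frame):
        return zlib.crc32(frame[:-4]).to_bytes(4, "big") == frame[-4:]

    def mac_frame(payload):
        """Security-style integrity: detects tampering, requires the key."""
        return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

    def mac_ok(frame):
        expected = hmac.new(KEY, frame[:-32], hashlib.sha256).digest()
        return hmac.compare_digest(expected, frame[-32:])

    tampered = b"ALT=10000;HDG=270"   # illustrative fields, not a real message format

    # An attacker who rewrites the data simply recomputes the CRC; the receiver's
    # check still passes, and the system keeps operating 'normally' on bad data.
    assert crc_ok(crc_frame(tampered))

    # Without KEY, the attacker cannot produce a valid tag for the tampered data.
    forged = tampered + hmac.new(b"attacker-guess", tampered, hashlib.sha256).digest()
    assert not mac_ok(forged)

Neither half of the sketch is exotic; the difference is simply whether the design ever assumed an intelligent adversary.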

The ADS-B protocols in my research were designed in the late '90s, long before security was at the forefront of our minds the way it is now. It was a time when it was understood that even beginning to experiment with, let alone tamper with, such a system required a massive investment ($100K to millions) just to obtain the equipment. That equipment was also made by only a few vendors, and as such there could be some level of access control over who could obtain it.

Fast forward 20 years, and technological advances have changed the field in ways that no one anticipated at the design stage. SDRs have turned sending and receiving on previously specialized frequencies and protocols into a sub-$1,000 problem, easily within reach of the public. Pushes in aerospace, and many other industries, to use commercial off-the-shelf (COTS) parts and equipment mean the public has easy access to the needed tools and equipment.

To draw a very specific example with regard to Chris Roberts' case: the safety mindset looked at the in-flight entertainment (IFE) system for factors such as duty cycle, MTBF, electrical/fire hazards, weight, etc. Basically, they looked at it and asked, 'Can this catch fire and harm the aircraft?' The closest they came to considering human interaction was 'Can this be vandalized or damaged easily?' or 'Will it stand up to a toddler beating on it all flight with a toy?'

SEB ports under every seat row with minimal to no barriers (maybe a dust cover) against passengers accessing them? Commodity operating systems (often Linux), hardware, and protocols with little to no hardening? There are many examples where it is pretty clear that no one was thinking with a mindset towards security as we in infosec know it; such things would never survive that mindset.

So to the observations on Valsmith's list:

- RE: FAA (et al.) auditing and approval: Was security properly included in those processes? You are assuming there is a process for patches in the first place. This is complicated by the fact that, on an aircraft, almost anything is subject to dozens of regulators in any number of countries, as well as the airlines themselves. If they cannot patch/fix it quickly, why leave such an open attack surface to begin with?

- Airlines (and airplane manufacturers) modifying devices after receipt from the vendor: This doesn't happen as much as you'd think. Most airlines just contract out the IFE system and may add a bit of branding or other tweaks, but it's essentially an off-the-shelf system. You do hit on a great point, though: because of the interconnections between these systems, it's not always apparent where the responsibility to ensure security lies. I'm sure there are a lot of fingers pointing at each other.

- Product vendor issues: Quite true. Due to the safety mindset, the amount of QA and vetting for SAFETY is extreme. A lot depends on the system, of course. The IFE is assumed (note I said assumed) to be a self-contained system, and as such things like software changes are subject to less scrutiny, maybe just the airline's QA team. Avionics and other flight-critical systems, of course, are tested very rigorously for safety by a laundry list of stakeholders from industry and government. Again, they don't appear to be looking at things with a security mindset. How are you going to find problems if you don't know to look for them?

- Aircrews and maintenance crews changing settings: Again, it depends on the system. They are highly likely to document such changes, but that doesn't do much to prevent stupid ones. With the IFE assumed to be closed, the worst case is that passengers can't watch movies on crappy small screens. Avionics, navigation, and similar systems all have settings and config options available. Training goes a long way toward ensuring proper operation, as do technical controls that prevent the systems from being set to wrong, dangerous, or stupid values. However, these systems can be operating perfectly to spec as designed, and if the vulnerability is baked into the basic design, protocols, etc., then it's a much larger issue.

- Safety concerns: Again, safety != security. You can have an incredibly safe system that is insanely insecure.

- End-of-life cycles: As I mentioned, the issues are often systemic; the base protocol, the basic operational method, is the problem. Specific equipment can come and go, but if it keeps adhering to the same flawed standard, it's a lost cause. This is why security controls and mitigations built in from the beginning are so important.

Put yourself in my shoes. You find yourself in a nightmare where you discover what appears to be a systemic and potentially fatal flaw in a mode of travel that was used by 3.3 billion people last year. You keep digging to prove that it was just a bad dream, but the more you dig, the worse it gets.

What would you do? Could you sit on it and not do anything and hope for the best?

Could you ignore that hacker itch to get an answer, the answer that it was safe, in order to assuage the doubts of many others as well as your own?

If something catastrophic did happen, could you live with knowing you hadn't done everything you could to see the problem fixed?

What if you came upon an industry and bureaucracy that, despite your coming to them with documentation and evidence to back up your concerns, would not even answer your calls or emails with more than 'trust us'?

In the face of all of this, would you not feel pushed to cross a line or two to prove your point?

If you could just dismiss this discovery and go on with your life as normal and not feel a hint of guilt or responsibility, then you are far more emotionally dead than I am.

There have been many times in history when compelling evidence of danger was dismissed by those in power as inconsequential, the concerns waved away with 'trust us, we know what we are doing'. I'm reminded of Clair Patterson, the scientist who fought for years to end the use of lead additives in gasoline. The industry dismissed his findings and claimed there was no danger despite the overwhelming evidence. Today we see evidence that the reduction in the use of lead in gasoline and paint has a direct relation to the drop in crime rates. (Watch Cosmos: A Spacetime Odyssey, Episode 7, for the story.)

tl;dr: sometimes you have to pull a 'stunt hack' for altruistic reasons in order to get an issue the attention it deserves. Putting on a big, showy stunt may be the only way to get something solved. It's not always about profit or fame. Sometimes it's about making the world a better place.

Render 9/26/15

