Benjamin Franklin said “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.” Those words are even more true today.
Too often when we hear about security, it is in the context of the NSA and other agencies forcing manufacturers to install “backdoors” on the computers they sell to you so they can spy on you. Unfortunately, introducing any backdoor into a computer system makes it less secure. It is like creating a master key that can open the front door of every home: of course that key is eventually going to fall into the wrong hands.
On the other hand, not enough work is being done on how to make computer systems more secure, which brings us to our current story.
A cybersecurity researcher was able to repeatedly hack into the computer systems of airplanes while he was aboard flights.
Lucky for us, he announced what he was able to carry out rather than doing anything malicious. Unfortunately, the US tends to treat security researchers like criminals. So instead of helping us make our systems more secure, they are effectively muzzled.
IK – as a coder/programmer, you would know more about this than most. But hasn’t this always been the unrecognized (until more recently) wild card in our wild and wonderful digital frontier? Security has long been given short shrift in the interests of progress and profit, as far as I could tell, so long as neither was seriously threatened. But with this and other recent events (Sony, Target, etc.), our digital infrastructure clearly needs an entirely new security paradigm, especially as our infrastructure (cars, drones, jet planes, the electric grid, etc.) becomes AI driven. See Ex Machina. Yikes, we ain’t seen nuttin’ yet!
Beyond that, we sabotage ourselves by neglecting to maintain, let alone upgrade, our nation’s physical infrastructure, which makes us the laughingstock of the industrialized world. The latest Amtrak derailment in Philly, alongside the looming budget cuts for rail coming out of DC, is only the latest example. But I digress.
This kind of crap is exactly what soured me on security as a research focus. Building secure systems is hard, and frankly, we don’t know how to do it. In particular, we don’t know how to do it in a way that is also usable, and that’s a big part of the problem. We’re also just really bad at predicting how systems built of simpler components will interact.
But the incentive structures are completely broken. You don’t get your name mentioned on the news by building a secure system. You don’t get promotion and tenure in the academic world by writing one really exceptional paper (and nothing else) every 3-4 years. You get all those rewards by poking holes in existing systems that were implemented on fundamentally flawed assumptions (buffer overflow, anyone?). If you can manipulate the media enough to spin your attack as the end of civilization, even better. Or you crank out 6-8 marginal papers a year, all looking at variations of the same attack that could potentially happen under completely unrealistic assumptions and threat models.
We are spending far too much money and resources rewarding “hackers” with fame and future fortune (because this guy’s going to be raking in speaker’s fees for quite a while), and not nearly enough on studying and teaching solid engineering principles. And it’s a self-defeating cycle because now there are thousands of Neo-wannabes hearing about this guy and imagining how awesome it would be to be him.
You may have a point. The researcher says that he repeatedly tried to get Boeing to fix the problem, but “nothing came of it”. Boeing had no incentive to fix the problem. So who can blame the researcher for going public? It certainly focused attention on the problem.
Ralph, this article was of particular interest to me because I am part of a group that is looking at “an entirely new security paradigm”. Will the world pay attention to it?
I’m highly skeptical of his claim regarding his contact with Boeing. Boeing has quite a bit of incentive to fix the problem, because its planes can be grounded if they are considered a public safety concern. If, in fact, he did try to disclose the problem to Boeing (most companies have policies for handling responsible disclosure) and nothing came of it, the next step is to contact the FAA and the TSA. It is most certainly not undertaking a proof of concept while in flight.
This type of stunt is atrociously unethical and dangerous on many levels.
Thanks guys, now I’m really freaked out!
Michael, I disagree with you.
First, Boeing has no incentive to fix the problem until something terrible happens. I don’t think a plane has ever been grounded because of a security concern.
Second, if you read anything about this, they had a simulator and he had already conducted a proof of concept on the simulator.
What he did might have been professionally unethical, but I think it might have been morally responsible.
This reminds me of that joke where a farmer is trying to sell his jackass to his neighbor. “This jackass is really, really smart; he’ll do anything you ask of him. Just watch.” But after the jackass ignores his every command, the farmer whacks him over the head with a 2×4 and says, “But first you have to get his attention.”
I think I heard that the airline, instead of finding out whether the guy is correct, has banned him from flying on its airplanes. Where is that 2×4?