Security Holes

Recall the hoo-raw over Apple’s refusal to help the FBI hack an iPhone used by the San Bernardino terrorists, the FBI’s claim that it couldn’t get into the iPhone without Apple’s help, and then the FBI’s successful penetration of the iPhone by hiring a third party to hack it.  Recall further the FBI’s subsequent refusal to tell Apple about the security hole in Apple’s iPhone software (supposedly limited to the generation of iPhone used by the terrorists) that the hack exploited, and the FBI’s associated refusal to tell Apple how the hack itself worked.

The FBI has been in court defending those refusals.

…the Justice Department argued that the information it withheld, if released, could be seized upon by “hostile entities” who could develop their own “countermeasures”….

This is curious logic, indeed.  The FBI is leaving the security hole in place on the assumption, apparently, that hackers can’t already identify and exploit the hole on their own, just as that third party did so promptly after the FBI hired it.  So the hole stays unplugged, on the theory that disclosing it would only hand hackers a countermeasure.

Never mind that any hack—all of them to date, against any software in any milieu, as well as any future hack—is a countermeasure against the software being penetrated.  There’s nothing static about any of this; software security and hacks are in a dynamic arms race.  It’s foolish to leave existing holes in place in the expectation that the arms race will stop.
