The FBI and Backdoors

Recall that the FBI has long wanted government-accessible backdoors into our personal but encrypted communications.  “Trust us,” FBI leadership assures us, “we wouldn’t misuse that access; we’ll only use it for ‘criminal’ investigations, and only with government authorization.”  And in support of that wide-eyed innocence, they’ve claimed they can’t break into over 7,000 cell phones in the pursuit of criminal investigations.  Current FBI Director Christopher Wray even put the number at over 7,700.

However.

On Tuesday, the FBI told PCMag that a programming error resulted in a “significant overcounting” of the encrypted devices. “The FBI is currently conducting an in-depth review of how this over-counting previously occurred,” the agency said in a statement.

PCMag went on to cite the Washington Post as putting the actual number at around 1,200.

Oops, indeed.

According to the agency, starting in April 2016, it began using a new “collection methodology” with how it counted the encrypted devices. But only recently did the FBI become aware of flaws in the methodology, it said, without elaborating.

Right.

“Given the availability of these third-party solutions, we’ve questioned how and why the FBI finds itself thwarted by so many locked phones,” the Electronic Frontier Foundation said in a blog post.

Indeed.  Whether this government agency was being dishonest in its characterization of the encryption “problem,” or was just being incredibly sloppy in using a “collection methodology” that it had so plainly inadequately tested, this incident is one more reason Government cannot be trusted with backdoors into privately encrypted personal correspondence.

AI Surveillance

Police forces around the nation are on the verge of getting Artificial Intelligence assistance in identifying folks of interest to them in real time on our cities’ streets.  The image below and its caption illustrate the thing.

I’m all for assisting the police, especially regarding the subject of that cynically tear-jerking caption.  But this sort of thing needs to be looked at with a very jaundiced eye.  It isn’t too far away from what the People’s Republic of China already is doing in terms of routine surveillance and tracking of everyone.

It’s not that everything the PRC does is bad, but some things are inherently dangerous, no matter who developed them or uses them extensively.  This sort of technology can very easily become a direct assault on our ability to be anonymous in public spaces.

TaeWoo Kim, chief scientist at One Smart Labs, a New York-based startup that is working on such software, said the technology is “creepy and a bit Big Brother-y,” but said it is “purely intended to fight crime, terrorism and track wanted subjects.”

The road to Hell is paved with good intentions. Governments can’t be trusted with such capabilities, and we don’t even need to invoke nefarious intent or “Big Brother-y” conspiracies to see that. Governments will end up misusing, even abusing, this sort of thing just in the ordinary outcome of normal bureaucratic imperatives to justify the bureaucrat’s and his bureaucracy’s existence, to grow, to expand the bureaucracy’s power and budget.

William Bratton, the former commissioner of the NYPD, says that the public was similarly worried about DNA testing when the technology first emerged. The technology has been credited with freeing wrongfully convicted people from prison.

This is a false analogy, though.  DNA testing isn’t used for routine, real-time surveillance of the population or even of small groups or of individuals, and current technology doesn’t allow such use.  AI-based image surveillance technology lends itself to exactly that real-time watching.

Too Much Privacy?

That’s actually a serious question.

The firestorm over Facebook Inc’s handling of personal data raises a question for those pondering a regulatory response: is there such a thing as too much privacy?

And

Law-enforcement agencies rely on access to user data as an important tool for tracking criminals or preventing terrorist attacks. As such, they have long argued additional regulation may be harmful to national security.

Unfortunately, no government can be trusted with citizens’ privacy, as the Star Chamber secret FISA court, the FBI leadership (and not just the current or immediately prior crowd—recall J Edgar Hoover), prior DoJ leadership, the Robert Mueller “investigation,” and much more demonstrate.

If our government wants to learn things, it needs to get back into the HUMINT business rather than relying so much on hacking IT systems.  And get an honest warrant, not just a FISA one.

Statutes, Judges, and DoJ

The Supreme Court last Tuesday heard a case between Microsoft and DoJ concerning whether the emails of an alleged drug dealer must be turned over to the government pursuant to a search warrant to that effect.  The catch is that the emails are stored exclusively on servers in Ireland—nominally beyond the reach of the US’ long arm of the law.

The statute in question is the Stored Communications Act, enacted 30 years ago, before email and similar electronic communications were in widespread use.

Microsoft handed over some account data that was stored in the US but said it shouldn’t have to hand over the emails, which were stored on a server in Ireland.

The Second US Circuit Court of Appeals sided with Microsoft, ruling the 1986 law didn’t apply beyond US territory.

DoJ and the participating States’ Attorneys General argued that the appellate decision, if left intact, would hamper the government’s crime-fighting ability.  That’s likely accurate, but there are two things about that.  One is that the convenience of government is not an excuse for limiting individual liberties either directly or through the companies we own. Some of you have heard that from me before.

The other thing, though, is that extending the statute to reach beyond our borders is a political decision, not a legal one.  Only the political arms of our government—Congress and the President acting together (or with Congress overruling a veto)—can make that decision; only the political arms of our government can extend the Act or write a new one to fill the apparent gap.

There’s this bit of disingenuousness, too, from Solicitor General Noel Francisco:

Microsoft’s employees could prepare that disclosure without leaving their desks in the United States[.]

They could prepare such disclosures without leaving their desks in the US in 1986 when the Act was passed, too.  All they had to do was write letters to the managers of the overseas storage facilities.  Nothing has changed here except that email has replaced gofers and the mail room.  Nor has the status of the material stored overseas changed.

On the other hand, Microsoft and other massive tech companies also are raising red herrings.

Microsoft, Google, and other technology companies say…the case could threaten American dominance in the $250 billion cloud-computing industry, because foreign clients won’t use US firms if their data isn’t protected.

That also may be true, and it’s also not relevant.  That’s a question that’s strictly a business matter and not a legal one.  To the extent government help is useful in filling this business gap, it’s also a political question, and these businesses need to seek their recourse through those political arms of our government.

Finally, there already is an alternate route to getting the emails, as admitted by DoJ in their filings:

There is a diplomatic process, governed by legal assistance treaties, that allows the US to request that foreign law-enforcement counterparts share sought-after data, but it can be slow and ineffective, the department said.

There’s that convenience thing, again.

What does the text of the Act say? That’s what the Justices must apply, not a phantom Act that doesn’t exist but that does represent what Justices or DoJ officials might wish the Act to say.  Article I, Section 1, is quite clear about who gets to write the statutes in our system of government, and extending the reach of an existing statute is law-making that is beyond the reach of any member of the judiciary or of the DoJ.

Warrantless Searches of Cell Phone Data

The Supreme Court has a case before it, Carpenter v US (it heard oral argument Wednesday), concerning the 4th Amendment and the personal data of a defendant in the form of his cell phone location data.  The data were obtained from the cell phone company by police without first getting a search warrant.  There is precedent.

The high court reasoned then [in ’70s cases involving business records that banks and landline phone companies maintain about customer transactions, records the Supreme Court then reasoned police could seize without warrants] that individuals had voluntarily revealed their financial transactions or numbers they dialed to a third party—the bank or phone company—and so had forfeited any privacy interest in that information.

Smith v Maryland is illustrative of that general position.

There is growing criticism of that position.

allowing authorities to compile such granular data about an individual’s life, without a judicial warrant, no longer meets society’s “reasonable expectation of privacy”—the touchstone of the Supreme Court’s approach to constitutional limits on searches and seizures.

The objectors’ heart is in the right place, but their criticism is wide of the mark.  Compiling data—seizing a person’s personal information, which most assuredly includes where he situates himself from time to time—without a court’s order never has met society’s or that individual’s “reasonable expectation of privacy.”

Consumers (the individuals, the particular members of society in question here) have a reasonable—indeed, a loud and vociferously stated—expectation of privacy concerning their personal data, and an equally loud and vociferously asserted ownership of those data held by third parties.  This is clearly demonstrated by the raucous and repeated hoo-raw raised every time a Facebook or a Twitter or a bank or a phone company gets caught using those personal data in ways to which the consumer-owner objects.

This is further and just as clearly established by the even louder hoo-raw raised every time one of those third parties is discovered to have inadequately protected those personal data entrusted to it by being hacked and those personal data stolen, and too often exposed.

The Supreme Court ruled erroneously then, and Carpenter is a good opportunity to correct that error.  The Court should have known at the time that revealing financial transactions or numbers they dialed to a third party was not at all a voluntary action.  The revealing was a mandatory condition of doing business with the bank or phone company, and there was no opportunity to go elsewhere—all the banks and phone companies required that: give up the financial data or the phone numbers, or don’t do business at all.  Take careful note: that the technology of the time—or today—means that [phone numbers] must be revealed to [phone companies] in no way makes the reveal voluntary: it’s still a wholly involuntary privacy exposure.  The data are owned in whole by the consumer; the third party is merely a caretaker, bound to protect the privacy and sanctity of these papers, and effects.

Prosecutors can indict ham sandwiches with their grand juries, and policemen can just as easily get search warrants, but do get the warrant.  Cell phone location data, financial transaction data, et al., all are part of the papers, and effects, of the individual.

Full stop.