In Which Zuckerberg is Right

Attorney General William Barr has taken up ex-FBI Director James Comey’s battle for government backdoors into private citizens’ encrypted private messages.  Apple MFWIC Tim Cook won a similar fight over iPhone passwords and a demand that government be allowed backdoors into those, and Comey’s FBI was shown to have been dissembling about the difficulty of breaking in by the speed with which a contractor the FBI hired broke into an iPhone the FBI had confiscated.

Now Barr has broadened the fight, demanding Facebook give Government backdoors into the encryption Facebook plans to roll out for its messaging services.  He wants Facebook, too, to hold off on that rollout until Government is satisfied it has such backdoors.  Barr’s cynically misleading plaint includes this tearjerker:

Companies cannot operate with impunity where lives and the safety of our children is at stake, and if Mr Zuckerberg really has a credible plan to protect Facebook’s more than two billion users it’s time he let us know what it is[.]

Zuckerberg has been quite clear on what it is.  It’s facilitating private citizens’ ability to encrypt their private messages on Facebook’s platform.  Many of those citizens live in outright tyrannies; others live in so-far free nations whose government officials nonetheless want to be able to pierce the protections of enforceable privacy at will.

The concern that bad guys, terrorists as well as common criminals, will take advantage of such encryption to evade government law enforcement facilities is entirely valid.  Two things about that, though.  First is Ben Franklin’s remark about the relationship between liberty and safety.

The other is for law enforcement to do better with their own IT skills and with their own human policing skills.  Just as the FBI did in cracking that iPhone after Apple refused to give break-in assistance to Government.

In Which Alphabet may be Getting One Thing Right

Alphabet’s Google subsidiary is developing a new Internet protocol, and competitors are worried that the protocol would mak[e] it harder for others to access consumer data. Some thoughts on that below.  Congress is concerned, too, and its “antitrust investigators” are looking into the matter.

The new standard modernizes a fundamental building block of the internet known as the domain name system, or DNS. This software takes a user’s electronic request for a website name such as wsj.com and, much like a telephone book, provides the series of internet protocol address numbers used by computers [to connect the user to the website].
Google and another browser maker, Mozilla Corp, want to encrypt DNS. Doing so could help prevent hackers from spoofing or snooping on the websites that users visit, for example. Such a move could complicate government agencies’ efforts to spy on Internet traffic. But it could prevent service providers who don’t support the new standard from observing user behavior in gathering data.
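The quoted description can be made concrete with a short sketch.  Classic DNS sends the question (“what address is wsj.com?”) in cleartext; the encrypted variant Google and Mozilla are pushing—DNS over HTTPS, or DoH—carries the same question inside an ordinary HTTPS request.  A minimal Python sketch, using Google’s public DoH JSON endpoint as the example carrier; the response body shown is hypothetical illustration, not live data:

```python
import json

def doh_query_url(name: str, rtype: str = "A") -> str:
    """Build the HTTPS URL that carries the DNS question.

    With classic DNS, an eavesdropper sees the query name on the wire;
    here the question rides inside an encrypted HTTPS request.
    """
    return f"https://dns.google/resolve?name={name}&type={rtype}"

def parse_doh_answer(body: str) -> list[str]:
    """Pull the IP addresses out of a DoH JSON response body."""
    reply = json.loads(body)
    return [record["data"] for record in reply.get("Answer", [])]

# A hypothetical response body, shaped like the DoH JSON API's replies:
sample = '{"Status": 0, "Answer": [{"name": "wsj.com.", "type": 1, "data": "203.0.113.10"}]}'

print(doh_query_url("wsj.com"))   # the encrypted carrier URL
print(parse_doh_answer(sample))   # the resolved addresses
```

The point of contention in the article follows directly from this: because the question and answer travel inside HTTPS, an intermediary—a government agency or a service provider—that could formerly read the cleartext DNS traffic no longer can.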

Alphabet, via Google, also runs its own DNS service, Google Public DNS, which lends credence to monopoly abuse concerns.  Alphabet also pointed out, in its proposal, that the new standard would

improve users’ security and privacy and that its browser changes will leave consumers in charge of who shares their Internet surfing data.

My thoughts are these:

  • There’s nothing wrong with Alphabet developing any new Internet navigation protocol, including this one. I’d expect them to be required to license it, though, much like chip makers are required to license their tech.
  • There’s nothing wrong with alter[ing] the internet’s competitive landscape, as the article characterized some of the concerns. Product and tech development and innovation always alter the existing competitive landscape. That’s to the good.
  • They [cable and wireless providers] fear being shut out from much of user data.… That’s a bit of too bad. Those aren’t the providers’ data; they belong to the user. It’s exclusively (or should be) the user’s call whether to share his data with any provider or other vendor.

And this:

Mozilla…will move most consumers—but not corporate users who use providers such as Akamai—to the new standard automatically, even if the change involves switching their DNS service providers.

Users had better be able to override that switch. Otherwise, this may resume the browser wars between Mozilla/Netscape and Microsoft.  To Alphabet’s credit, if it can be believed, its Google subsidiary has no plans to ape Mozilla and compel a change in DNS providers.
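For what it’s worth, Firefox does expose such an override: the network.trr.mode preference in about:config governs its DNS-over-HTTPS (“Trusted Recursive Resolver”) behavior.  The modes below are from Mozilla’s documentation of that preference:

```
network.trr.mode = 0   # off: use the operating system's resolver (the default)
network.trr.mode = 2   # DoH first, fall back to classic DNS on failure
network.trr.mode = 3   # DoH only; no cleartext fallback
network.trr.mode = 5   # off by explicit user choice
```

A user who objects to the automatic switch can set the preference to 0 or 5 and keep his existing DNS provider.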

Given licensing, the only real concern is this:

[T]he new system could harm security by bypassing parental controls and filters that have been developed under the current, unencrypted system.

That’s fairly straightforward to restore, though.

A Chinese Firewall

…erected by the European Court of Justice.  The ruling is a partial victory for Alphabet’s Google subsidiary in a “right to be forgotten” case brought by Google as it appealed a fine imposed by the French watchdog, the National Commission for Computing and Liberties, which wanted Google to delete all references worldwide to personal data an EU citizen wanted “forgotten.”

The ECJ ruled that the EU’s “right” applied only within the EU—the partial victory.  However, it added that

search engine operators such as Google must put in place measures to discourage internet users from going beyond European borders to obtain information.
Dereferencing must “if necessary, be accompanied by measures that effectively prevent or, at the very least, seriously discourage Internet users” from accessing “via a version of this engine and outside the EU, the links that are the subject of the request,” the court added.

And so it begins in Europe, too.

Dishonest Journalism

Kyle Smith is too polite to call it that, but he comes very close in his National Review piece about an interview Robin Pogrebin gave to WMAL back on the 17th.

Some excerpts:

[Pogrebin’s and Kelly’s story [sic]] failed to mention that a woman who, according to a man named Max Stier, had Kavanaugh’s penis pressed into her hand at a campus party by multiple friends of his has said she recalls no such incident. That woman has also declined to talk about the matter with reporters or officials. Why even publish Stier’s claim, which was discounted by Washington Post reporters who heard about it a year ago, that he witnessed such an incident during a Yale party in the 1980s? Because of the narrative, Pogrebin says. “We decided to go with it because obviously it is of a piece with a kind of behavior,” she said on WMAL.

“Behavior” that has already been shown nonexistent, repeatedly.  Of what piece, exactly?  And what incident?  The principal doesn’t remember it, and the principal witness refused to be interviewed.

Even if she were the victim of sexual misconduct, the [New York] Times would ordinarily take steps to protect her identity. Yet she has made no claim along these lines, and Pogrebin and Kelly outed her anyway. Is there no respect for a woman’s privacy?

Not when she needs to be outed in order to tell a tale.

[Emphasis in the original]:

Pogrebin repeatedly refers to the woman as a “victim.” This word choice is instructive about Pogrebin’s thought process. … She has made no claim to be a victim, yet Pogrebin describes her as one anyway. This is a case of a reporter overriding her reporting with her opinion.

And [emphasis in the original]:

If this is true, it means Max Stier was also drunk and his memories also can’t be trusted. (Someone should ask Pogrebin whether she was present at this party about which she knows so much.) By what journalistic standard does a reporter discount what is said by the person with the most direct and relevant experience of a matter—the woman in question at the Yale party—in favor of a drunken bystander? If both the woman and Stier were drunk, why is his memory more credible than hers? If something like this had actually happened to her, wouldn’t she be more likely than anyone else to remember it? Maybe Stier is remembering a different party. Maybe he’s remembering a different guy. Maybe he made it up.

And the kicker:

Of the woman at the party, she says, “Remember that she was incredibly drunk at that party as was everyone. And so I think we’re talking about memory here as really kind of a questionable issue. There are plenty of things that are conceivable that could happen when people are too drunk to remember them.” So the standard here is not whether something is true, it’s whether it’s “conceivable.” If a story is “of a piece with a kind of behavior,” even if such behavior is itself not established, and if a story is “conceivable” when filtered through that confirmation bias, and even if it’s undercut by the person the story supposedly happened to, and even if the person telling the story was “incredibly drunk,” you just go with it anyway.

That’s not gross journalistic malpractice, as Smith put it.  That’s blatantly, deliberately dishonest reporting.

RTWT.

In Which the 9th Gets One Right

Facebook’s use of the output of its facial recognition software—face templates derived from imagery of individuals’ faces—without those individuals’ prior permission can be contested in court, according to the Ninth Circuit.  Facebook had demurred when the case was brought.

On Thursday, the US Court of Appeals for the Ninth Circuit rejected Facebook’s efforts to dismiss the ongoing class-action lawsuit, which could potentially require the company to pay billions in compensation.
The lawsuit dates back to 2015 when three Facebook users living in the state [Illinois] claimed the tech giant had violated the Illinois Biometric Information Privacy Act, which requires companies to obtain consent when collecting their biometric information.

Judge Sandra Ikuta wrote for the court:

We conclude that the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests.

Yewbetcha.  However, the courts, and ultimately the Supreme Court, need to rule decisively that no company gets to steal a man’s personally identifying information—which his face assuredly is in this day of highly accurate facial recognition software—and theft is what it is when the data are taken without permission.

It’s even worse when these data, these facial recognition outputs, are monetized for the benefit of the company in question, and that done behind the individuals’ backs, too.