The Wisconsin Supreme Court is considering exactly that.
The state’s highest court is set to rule on whether such algorithms, known as risk assessments, violate due process and discriminate against men when judges rely on them in sentencing.
No. Even when sentencing a criminal whose crime is substantially similar to other criminals' crimes, the key word is "substantially." No two crimes really are alike, and no two criminals really are identical; even the criminal convicted today is not the same man he was when he was convicted of a substantially similar crime yesterday, because history has happened in the interim. One size cannot fit all, even here; each sentence must be unique.
And that sentence must be handed down by a judge or, in many jurisdictions (and my personal favorite), a jury. It takes a human to assess the man, and it especially takes a human to assess his likelihood of recidivism or rehabilitation. It takes a human, or a collection of us, to assess his potential for redemption and the likelihood that he will achieve it.
Computers have none of the comprehension, conscience, intuition, or moral capacity that are so critical to such judgments. Even a computer's risk assessment must be suspect, as its inputs cannot include everything that a human, or that collection of humans that is a jury, sees when they assess the man's record and look into the eyes of the man standing before them.
Principle aside, the particular tool at issue in the case before the Wisconsin Supreme Court is badly flawed.
…a widely used tool called COMPAS, or Correctional Offender Management Profiling for Alternative Sanctions, a 137-question test that covers criminal and parole history, age, employment status, social life, education level, community ties, drug use, and beliefs.
The assessment includes queries like, “Did a parent figure who raised you ever have a drug or alcohol problem?” and “Do you feel that the things you do are boring or dull?” Scores are generated by comparing an offender’s characteristics to a representative criminal population of the same sex.
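The scoring mechanism described above can be sketched very loosely. COMPAS itself is proprietary and its actual model has never been published, so everything below is a hypothetical illustration of the general idea the excerpt describes: reducing questionnaire answers to a raw score, then ranking that score against a reference population of the same sex to produce a decile.

```python
# Hypothetical illustration only: COMPAS's real scoring model is proprietary
# and not public. This toy sketch shows the general mechanism the article
# describes: percentile-ranking one offender's raw questionnaire score
# against a same-sex reference population.
from bisect import bisect_left

def risk_decile(raw_score: float, reference_scores: list[float]) -> int:
    """Map a raw questionnaire score to a 1-10 risk decile relative to a
    reference population (e.g. previously scored offenders of the same sex)."""
    ref = sorted(reference_scores)
    rank = bisect_left(ref, raw_score)        # how many in the population score lower
    percentile = rank / len(ref)              # fraction of the population below
    return min(int(percentile * 10) + 1, 10)  # bucket into deciles 1..10

# Toy reference population of raw scores, and one offender's raw score:
population = [12, 18, 22, 25, 31, 34, 40, 47, 53, 60]
print(risk_decile(35, population))  # prints 7: scores above 6 of the 10
```

Note that in a scheme like this, the score carries no judgment at all; it is nothing more than a ranking against whoever happens to be in the reference sample.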
Tests, questionnaires, surveys, and the like are extremely easy to game, and any lawyer worthy of his pro bono fee is fully capable of coaching his client to game this one.
Computers shouldn’t sentence humans; humans should sentence humans. And it shouldn’t be done on the basis of input-limited machine-calculated predictions of the future, in any event. It’s tough to make predictions, especially about the future.