One of the things that scares me about the US criminal justice system is that it relies almost entirely on magical thinking and torture to secure convictions. I don’t believe anyone is guilty unless they were literally photographed holding the bloody knife while standing over the body, and even then I’ll need some convincing to rule out potential mitigating circumstances.
Yeah, you can trust a computer if you know what you’re doing, and not otherwise. Societally, though? Fucking hell no.
Poland had some AI-driven nonsense for their benefits claims for a while. Except of course it was always, in the end, a human who said “yeah, do whatever.” Except basically none of them ever overrode it; it got so bad that they just tossed the whole thing.
> As presented in last year’s report, in May 2014, the Ministry of Labor and Social Policy introduced a simple ADM system that profiles unemployed people and assigns them to one of three categories that determine the type of assistance they can obtain from the local labor office. Panoptykon Foundation, and other NGOs critical of the system, have been arguing that the questionnaire used to evaluate the situation of unemployed people, and the system that makes decisions based on it, is discriminatory, lacks transparency, and infringes data protection rights. Once the system makes a decision based on the data, the labor office employee can change the profile selection before approving the decision and ending the process. Yet according to official data, employees modify the system’s selection in less than 1% of cases. This shows that, even if ADM systems are only being used to offer suggestions to humans, they greatly influence the final decision.