
Monday, January 21, 2019

AI security

AI is sending people to jail—and getting it wrong


Now populations that have historically been disproportionately targeted by law enforcement—especially low-income and minority communities—are at risk of being slapped with high recidivism scores. As a result, the algorithm could amplify and perpetuate embedded biases and generate even more bias-tainted data to feed a vicious cycle. Because most risk assessment algorithms are proprietary, it’s also impossible to interrogate their decisions or hold them accountable.
The debate over these tools is still raging. Last July, more than 100 civil rights and community-based organizations, including the ACLU and the NAACP, signed a statement urging against the use of risk assessment. At the same time, more and more jurisdictions and states, including California, have turned to these tools in a hail-Mary effort to fix their overburdened jails and prisons.
"Data-driven risk assessment is a way to sanitize and legitimize oppressive systems," Marbre Stahly-Butts, executive director of Law for Black Lives, said onstage at the conference, which was hosted at the MIT Media Lab. It is a way to draw attention away from the actual problems affecting low-income and minority communities, like defunded schools and inadequate access to health care.
