'Is it fair for a judge to increase a defendant's prison time on the basis of an algorithmic score that predicts the likelihood that he will commit future crimes? Many states now say yes, even when the algorithms they use for this purpose have a high error rate, a secret design, and a demonstrable racial bias. The former federal judge Katherine Forrest, in her short but incisive When Machines Can Be Judge, Jury, and Executioner, says this is both unfair and irrational ...'

Jed S. Rakoff, United States District Judge for the Southern District of New York, in the New York Review of Books

This book explores justice in the age of artificial intelligence. It argues that the AI tools currently used in decisions affecting individual liberty rest on utilitarian frameworks of justice that are inconsistent with the individual fairness reflected in the US Constitution and the Declaration of Independence. It uses AI risk assessment tools and lethal autonomous weapons as examples of how AI influences liberty decisions. The algorithmic design of AI risk assessment tools can and does embed human biases, and the designers and users of these tools have accepted a degree of compromise between accuracy and individual fairness.

Written by a former federal judge who lectures widely and frequently on AI and the justice system, this book is the first comprehensive presentation of the theoretical framework behind AI tools used in the criminal justice system and lethal autonomous weapons used in decision-making. It explains why such tools are at odds with individual fairness, tracing the evolution of the debate over the racial and other biases embedded in them. No other book delves as comprehensively into the theory and practice of AI risk assessment tools.
Book description source: 博客來網路書店 (Books.com.tw)