Police in Durham are ready to go live with an artificial intelligence (AI) system to help them decide whether a suspect should be kept in custody.
The system, which classifies suspects as being at low, medium or high risk of offending, has been tested by the force.
It has been trained on five years of offending history data.
One expert said the system could prove useful, but warned that the risk of it skewing custody decisions needed to be carefully assessed.
Data for the Harm Assessment Risk Tool (Hart) was taken from Durham police records between 2008 and 2012.
The system was then tested during 2013 and results were monitored over the following two years.
Forecasts that a suspect was low risk turned out to be accurate 98 per cent of the time, while forecasts that they were high risk were accurate 88 per cent of the time.
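Those figures are per-class accuracies: of all the suspects Hart forecast as low risk, 98 per cent genuinely were, and likewise 88 per cent for high risk. A minimal sketch of that calculation, using entirely made-up forecasts and outcomes rather than any real police data:

```python
# Illustrative sketch only: how per-class forecast accuracy figures like
# Hart's can be computed. The data below is invented for demonstration.

def forecast_accuracy(predictions, outcomes, risk_class):
    """Share of suspects forecast as `risk_class` whose outcome matched."""
    forecast = [o for p, o in zip(predictions, outcomes) if p == risk_class]
    if not forecast:
        return None  # no forecasts were made for this class
    correct = sum(1 for o in forecast if o == risk_class)
    return correct / len(forecast)

# Hypothetical forecasts vs. observed outcomes over a monitoring period.
preds  = ["low", "low", "low", "low", "high", "high", "high", "medium"]
actual = ["low", "low", "low", "high", "high", "high", "low", "medium"]

print(forecast_accuracy(preds, actual, "low"))   # 3 of 4 low forecasts held up
print(forecast_accuracy(preds, actual, "high"))  # 2 of 3 high forecasts held up
```

Note that each class gets its own accuracy because the cost of an error differs by direction: wrongly forecasting someone as low risk has different consequences from wrongly forecasting them as high risk.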
Prof Cary Coglianese, a political scientist at the University of Pennsylvania who has studied algorithmic decision-making, said: “To some extent, what learning models do is bring out into the foreground hidden and tacit assumptions that have been made all along by human beings.
“These are very tricky [machine learning] models to try and assess the degree to which they are truly discriminatory.”