The rapid growth of artificial intelligence (AI) within policing is unsurprising.
The speed and accuracy with which AI can support police processes make it an attractive tool for delivering an effective and efficient service. AI can transform the way the police investigate crimes, identifying patterns and links in evidence and processing vast amounts of data far more quickly than any human.
However, the application of AI can be contentious. Transparency and fairness must be at the heart of how the criminal justice system (CJS) implements the technology, to ensure proportionate and responsible use that builds public confidence. Failure to be transparent and fair puts innocent civilians at risk of a miscarriage of justice: an unfair outcome in criminal or civil proceedings, such as the conviction and punishment of a person for a crime they did not commit.
How is AI used in policing?
Policing’s use of AI is advancing rapidly. All National Police Chiefs’ Council (NPCC) forces use data analytics, and at least 15 have advanced data analytics capabilities. Most of the NPCC forces’ AI applications focus on organisational effectiveness and workforce planning rather than predictive analytics. Applications include demand management functions, such as the live triage of incoming 999/101 calls, and the automation of data quality assurance tasks. Many forces also use AI in identification algorithms, facial recognition, and safety features within unmanned aerial vehicles.
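To make the triage idea concrete, the sketch below shows a toy, rule-based scorer for incoming call transcripts, written in Python. The keywords, weights, and dispatch thresholds are invented for illustration; no force’s actual triage system works from a list this simple.

```python
# Illustrative sketch only: a toy rule-based triage scorer for incoming call
# transcripts. Real triage systems are far more sophisticated; the keywords,
# weights, and thresholds below are invented for demonstration.

PRIORITY_KEYWORDS = {
    "weapon": 5, "knife": 5, "gun": 5,
    "injury": 4, "bleeding": 4,
    "in progress": 3, "suspect on scene": 3,
    "theft": 2, "vandalism": 1,
}

def triage_score(transcript: str) -> int:
    """Score a call transcript; higher scores suggest a faster response."""
    text = transcript.lower()
    return sum(weight for phrase, weight in PRIORITY_KEYWORDS.items()
               if phrase in text)

def triage_band(score: int) -> str:
    """Map a score onto a dispatch band (thresholds are illustrative)."""
    if score >= 5:
        return "immediate"
    if score >= 3:
        return "priority"
    return "scheduled"

call = "Caller reports a burglary in progress, suspect on scene with a knife"
score = triage_score(call)
print(score, triage_band(score))  # prints: 11 immediate
```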
AI promises to transform the way the police investigate and solve crimes, identifying patterns and links in evidence and sifting through vast amounts of data far more quickly than any human. In forensic science, AI can analyse large volumes of forensic data, including fingerprints, deoxyribonucleic acid (DNA), and tool marks, much faster and more accurately than human examiners. This can lead to the quicker identification of suspects and the earlier exoneration of the innocent.
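As a simplified illustration of how such matching works, the Python sketch below ranks hypothetical database records against a crime-scene sample by the similarity of their feature vectors. Real fingerprint and DNA pipelines use specialised matchers; the vectors and record names here are invented.

```python
# Illustrative sketch only: ranking candidate records against a crime-scene
# sample by similarity of numeric feature vectors. Real forensic pipelines
# use specialised matchers; the vectors below are invented.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical feature vectors extracted from database prints.
database = {
    "record_001": [0.12, 0.80, 0.33, 0.51],
    "record_002": [0.90, 0.10, 0.45, 0.22],
    "record_003": [0.11, 0.79, 0.35, 0.50],
}
scene_sample = [0.10, 0.82, 0.30, 0.52]

# Rank candidates; an examiner would still review the top matches by hand.
ranked = sorted(database.items(),
                key=lambda kv: cosine_similarity(scene_sample, kv[1]),
                reverse=True)
for record_id, vector in ranked:
    print(record_id, round(cosine_similarity(scene_sample, vector), 3))
```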
In crime prevention, AI can analyse historical crime data to predict when and where crimes are most likely to occur. For example, when combined with surveillance footage, it can detect suspicious activity and identify patterns and locations associated with offending. As a result, AI can flag potential crimes ranging from fraud, money laundering, and terrorist financing to murder or theft.
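A minimal version of this kind of hotspot analysis can be sketched as a grid count over historical incident coordinates, as below. Deployed predictive-policing tools are far more elaborate (and more contested); the coordinates and cell size here are invented.

```python
# Illustrative sketch only: a naive grid-count "hotspot" model built from
# historical incident coordinates. Deployed predictive tools are far more
# elaborate; the data below are invented.
from collections import Counter

# (x, y) coordinates of past incidents on some local map grid.
incidents = [(1.2, 3.4), (1.3, 3.6), (1.1, 3.5), (4.8, 0.2), (1.4, 3.3)]

CELL = 1.0  # grid cell size in the same units as the coordinates

def cell_of(x, y):
    """Bucket a coordinate into a grid cell."""
    return (int(x // CELL), int(y // CELL))

counts = Counter(cell_of(x, y) for x, y in incidents)

# Cells with the most past incidents are flagged for attention.
for cell, n in counts.most_common(2):
    print(f"cell {cell}: {n} past incidents")
```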
Moreover, AI can pinpoint gunshots even when no officers are present, allowing the police to respond rapidly to shooting incidents. Acoustic sensors can be installed in public infrastructure and connected to a cloud-based system to identify gunfire accurately. Each sensor records the timing and sound of the shots, producing data that can aid the investigation of an incident, including by estimating the shooter’s location.
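The underlying localisation technique can be illustrated with a time-difference-of-arrival calculation: because sound travels at a known speed, comparing when each sensor heard the shot narrows down where it was fired. The Python sketch below uses invented sensor positions and a coarse grid search; production acoustic systems involve far more signal processing.

```python
# Illustrative sketch only: estimating a gunshot's origin from arrival times
# at fixed sensors (time difference of arrival). Sensor positions and the
# "observed" times below are invented for demonstration.
import math

SPEED_OF_SOUND = 343.0  # metres per second, approximate in air at 20 C

sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]

def arrival_times(source, t0=0.0):
    """Time at which each sensor would hear a shot fired at `source`."""
    return [t0 + math.dist(source, s) / SPEED_OF_SOUND for s in sensors]

# Pretend these readings came back from the sensor network.
observed = arrival_times((62.0, 35.0))

def residual(candidate):
    """Mismatch between observed and predicted time differences
    relative to sensor 0; zero means a perfect fit."""
    predicted = arrival_times(candidate)
    return sum(((observed[i] - observed[0]) - (predicted[i] - predicted[0])) ** 2
               for i in range(1, len(sensors)))

# Coarse grid search over the coverage area for the best-fitting origin.
best = min(((x, y) for x in range(0, 101) for y in range(0, 101)),
           key=residual)
print("estimated shot origin:", best)  # prints: (62, 35)
```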
Risks of incorporating AI
However, there is a risk that some AI systems have no human chain of command to review results and ultimately be held accountable for decisions. As a rule, transparency and fairness must be at the heart of AI implementation in the CJS to ensure its proportionate and responsible use. The world has already seen missteps in law enforcement’s use of AI. For example, there were numerous reports in the US last year of AI-powered facial recognition software misidentifying people, leading to false accusations against the innocent.
In Woodruff versus the City of Detroit [2023], Porcha Woodruff, who was eight months pregnant, was wrongly arrested for carjacking after a facial recognition system falsely identified her, a type of error that occurs more often with people of her ethnicity. Furthermore, in Williams versus the City of Chicago [2022], Michael Williams spent 11 months in jail charged with the first-degree murder of Safarian Herring after evidence from an AI gunshot-detection system was used to implicate him, before the case was dismissed.
Instances such as these highlight the potential risk of using AI in criminal trials when there is little or no human oversight. Ultimately, while AI has the potential to transform the way police officers prevent crime by identifying patterns and links between reported crimes faster and more accurately than any human, it currently poses a substantial risk to innocent civilians in criminal trials and police investigations.