
How AI is changing evidence disclosure in the UK 

Privacy Culture | March 25, 2025

The UK justice system is under pressure. Case backlogs are growing, and police officers spend countless hours reviewing digital evidence. In response, an independent review led by Jonathan Fisher KC has recommended a bold shift: adopt artificial intelligence (AI) tools to help prosecutors and police process evidence more efficiently — and update outdated rules that are slowing things down. 

The idea is simple. Use AI to sift through large volumes of material — emails, texts, documents, CCTV footage — and flag what might be relevant to a case. This could free up an estimated 300,000 hours of police time each year. But while the tech may offer speed, it brings new risks — especially around data privacy management and data governance. 

For data protection officers (DPOs), this is not just a tech challenge. It’s a chance to shape how AI gets used in a high-risk, high-impact area that often involves data about vulnerable data subjects, and to make sure it happens responsibly. 

What’s being proposed 

Fisher’s review is direct. He argues that the current disclosure rules — written in the 1990s — are no longer fit for a world flooded with digital evidence. He recommends a more proportionate approach, supported by AI that can automate the triage of material. It’s not about replacing prosecutors — it’s about reducing the noise so they can focus on what matters. 

But this shift means data will be processed in new ways, often at greater speed and larger scale. Without the right safeguards in place, the risk to individuals, and to justice, increases. 

The data protection questions 

If AI is being used to process personal data, DPOs need to check whether there’s a lawful basis. Is the processing necessary? Is it proportionate? Do we know what the system is doing — and can we explain it? 

It’s easy to assume that AI is just “sorting,” not making decisions. But if it flags something as irrelevant and that material is never reviewed by a human, that is, in effect, an automated decision. And under UK GDPR, that triggers additional duties, including transparency, the right to contest decisions, and human oversight. 

Data minimisation also matters. AI tools often work best when fed with lots of data, but that doesn’t mean they should process everything. DPOs need to check that irrelevant or sensitive personal data isn’t being swept in unnecessarily, including data for which there is no lawful basis to process it at all. 
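
To make that concrete, here is a minimal sketch (in Python, purely for illustration) of a minimisation gate sitting in front of an AI triage tool: material that falls outside the authorised scope never reaches the model. The Item fields, the scope dictionary and the in_scope() rule are hypothetical placeholders, not a description of any real system.

    # Illustrative only: filter material against the scope of the lawful basis
    # BEFORE it is passed to an AI triage tool. Field names and the in_scope()
    # rule are hypothetical; real systems would apply the terms of the specific
    # authorisation (warrant, production order, etc.).
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Item:
        item_id: str
        owner: str          # whose data this is
        created: date
        category: str       # e.g. "message", "image", "financial"

    def in_scope(item: Item, scope: dict) -> bool:
        """Return True only if the item falls within the authorised scope."""
        return (
            item.owner in scope["named_individuals"]
            and scope["date_from"] <= item.created <= scope["date_to"]
            and item.category in scope["categories"]
        )

    def minimise(items: list[Item], scope: dict) -> tuple[list[Item], list[Item]]:
        """Split material into what the AI may process and what must be excluded."""
        allowed = [i for i in items if in_scope(i, scope)]
        excluded = [i for i in items if not in_scope(i, scope)]
        return allowed, excluded

The same thinking applies whatever tool you use: work out what the lawful basis covers first, and filter before the AI ever sees the data.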

Then there’s data retention policy management. How long is the AI keeping the data? Is it being reused to train future models? Is it being shared with vendors? And if so, is that documented and fair? 

Transparency is vital too. Can you explain what the AI does in plain terms? Can you describe how the system was trained, and what controls are in place? That matters for both compliance and public trust. 

Finally, you need accountability. If something goes wrong — a breach, a bad call, a missed file — can you show the risks were assessed and managed? Can you produce a record of how and why the system was used? 

A real-world example 

Let’s say a digital forensics unit is handed 200GB of data from a suspect’s phone. Previously, a team would have spent days combing through every message, image, and video. With AI, the system scans for keywords, identifies patterns, and flags anything that looks relevant to the case. 

It saves time. But now you’re processing personal data from countless individuals who were never suspects or part of the investigation. You’re relying on an algorithm to judge what’s relevant. And unless you build in clear and extensive checks, you may miss something important, or keep something you shouldn’t. 

That’s where DPOs step in. Not to stop the work, but to make sure it’s lawful, fair, and proportionate — and that there's a paper trail to prove it. 
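
As a purely illustrative sketch, and assuming a far simpler keyword approach than any real tool would use, the Python below shows the kind of paper trail that matters: an audit entry is written for every item, including the ones the system sets aside as not relevant, so a human can later check what was never looked at. The keywords, field names and audit format are invented for the example.

    # Illustrative sketch only: a keyword-based triage pass that records an audit
    # entry for EVERY item, including those marked "not relevant". Keywords,
    # fields and the audit format are hypothetical, not any specific product.
    import json
    from datetime import datetime, timezone

    KEYWORDS = {"invoice", "transfer", "meet", "password"}   # example terms only

    def triage(items, audit_path="triage_audit.jsonl"):
        flagged = []
        with open(audit_path, "a") as audit:
            for item in items:   # item = {"id": ..., "text": ...}
                hits = sorted(w for w in KEYWORDS if w in item["text"].lower())
                relevant = bool(hits)
                audit.write(json.dumps({
                    "item_id": item["id"],
                    "decision": "flagged" if relevant else "not_flagged",
                    "matched_terms": hits,
                    "timestamp": datetime.now(timezone.utc).isoformat(),
                    "human_reviewed": False,   # updated when a person checks it
                }) + "\n")
                if relevant:
                    flagged.append(item)
        return flagged

The interesting part is not the matching logic, which real systems handle far more cleverly, but the fact that negative decisions are recorded and reviewable too.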

What DPOs should do now 

Here are a few practical steps to take now, whether or not your organisation has started using AI: 

1. Map what’s already happening 

Are teams trialling AI? Have tools been brought in by IT, procurement, or operations without privacy review? Bring those into your privacy management platform and log them properly. 

2. Run risk assessments such as DPIAs or AI Assessments

Any use of AI in policing or justice is likely to be high risk. Use a privacy impact assessment (PIA) tool that can branch into related assessments, such as AI Assessments, to flag the key concerns (sensitive data, automated decisions, third-party processors) and track outcomes clearly. 

3. Train users 

AI tools are often used by frontline teams, not privacy professionals. They don’t need every legal detail — just enough to know when to pause and escalate. A bit of awareness goes a long way. 

4. Document your governance 

Tools like Horizon help DPOs keep a clear record of how AI is used, what decisions it makes, and how those decisions are checked. That forms part of your data governance solution and supports audit readiness, accountability, and trust. 

5. Be clear on data retention 

If training data is based on real case files, make sure there’s a proper retention schedule. If data is being shared with vendors, bring that into your vendor risk management process. 
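
As a rough illustration of what a retention check can look like in practice, the sketch below compares held material against per-category retention periods. The categories, periods and record fields are made up for the example; real schedules come from your retention policy, not from code.

    # Illustrative only: check held material against a retention schedule.
    # Categories, periods and record fields are invented for the example.
    from datetime import date

    RETENTION_DAYS = {                 # hypothetical schedule
        "case_evidence": 365 * 7,
        "ai_training_extract": 180,
        "vendor_shared_copy": 90,
    }

    def overdue_for_deletion(records, today=None):
        """Return records held longer than their category's retention period."""
        today = today or date.today()
        return [
            r for r in records         # r = {"id": ..., "category": ..., "collected": date}
            if (today - r["collected"]).days > RETENTION_DAYS.get(r["category"], 0)
        ]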

Looking ahead 

AI isn’t just coming — it’s already in policing, healthcare, and civil service operations. The pressure to adopt faster tools is growing. 

DPOs are the game changers: their role is not to stall innovation but to help shape it safely. Tools like AI-driven privacy solutions, automated compliance reporting, and privacy compliance software don’t just tick a box. They protect people, sharpen decision-making, and help organisations build trust at scale. 

The AI disclosure reforms recommended by Jonathan Fisher KC are just one example of how fast things are moving. If you’re a DPO, this is the time to get ahead of the change, not chase it. 


 

Case Study: How Horizon Helps DPOs Keep AI Accountable 

The setup: 

A police force wants to bring in AI to sort through digital evidence. It should speed things up – but the privacy team is under pressure to make sure it’s done properly. 

The problem: 

  • The AI touches personal and sensitive data, including data belonging to vulnerable groups 
  • Nobody’s sure where human oversight ends 
  • DPIAs are slow or missing 
  • No clear map of what data is being used 

What they did: 

They used Horizon to bring order to the chaos. 

  • Live risk alerts flagged high-risk data flows straight away 
  • DPIA templates were set up to trigger the right questions early 
  • ROPA tools helped map the data, who it came from, and how it was used 

The result: 

  • They rolled out the AI without stumbling into legal hot water 
  • Oversight teams had answers ready when questions came 
  • The force could show they’d thought through the risks 
  • The public could see that care had been taken 

The takeaway: 

Horizon didn’t block the AI project. It helped shape it – keeping it quick, careful, and accountable. 
