The ICO has spoken on agentic AI. Here are 6 things privacy teams should focus on
The ICO’s Tech Futures: Agentic AI report lands at a moment many privacy teams will recognise.
Agentic AI is already appearing across organisations, often quietly, sometimes informally, and rarely with the same governance applied to core systems. At the same time, regulators are still working through how existing frameworks apply when systems can plan, act, and adapt with limited human involvement.
The report does not introduce new legal obligations. What it does is clarify where pressure will build first. For privacy leaders who are already stretched, the value of the report is not theoretical insight, but practical direction.
Agentic AI does not change the law. It changes how quickly weak spots are exposed.
What follows are six areas the ICO is clearly signalling privacy teams should focus on now, alongside how these challenges are being addressed in practice.
1. Accountability will be tested operationally, not conceptually
The ICO is clear. AI agents do not become data controllers. They do not hold intent. Responsibility does not move simply because systems act autonomously.
What does change is how difficult accountability becomes to evidence. Agentic systems operate continuously, generate new data, and interact with other systems without explicit prompts. Governance approaches built around static documentation or periodic review struggle to keep pace.
In practice, accountability now depends on ongoing operational oversight. Privacy teams need visibility into what systems are doing, how risks are being surfaced, and whether follow-up actually happens. This is why many organisations are moving away from treating accountability as an annual exercise and instead embedding it into day-to-day privacy operations.
At Privacy Culture, this is typically addressed by combining clear accountability models with ongoing operational support. Some teams retain strategic ownership internally but rely on structured oversight, escalation, and reporting to ensure accountability is demonstrable at any point, not reconstructed after the fact.
2. Automated decision-making will appear before it is recognised
The report repeatedly returns to automated decision-making, not because the law has changed, but because detection has become harder.
As agents shift from supporting human decision-making to acting on behalf of users, legally significant decisions may begin to occur. Processes such as complaint prioritisation, eligibility filtering, or account restrictions may appear operational rather than decision-making until they are challenged.
The legal position remains unchanged. Where automated decision-making applies, individuals must be informed, able to challenge outcomes, and offered meaningful human intervention.
In practice, these risks are rarely identified during design workshops. They surface during operational moments. A DSAR reveals a pattern. A complaint escalates. An incident prompts questions about how an outcome was reached.
This is why consistent DSAR handling and incident management matter. When these processes are run in a structured, repeatable way, they become early warning systems rather than administrative burdens. They allow privacy teams to spot emerging automated decision-making risks before regulators or individuals do.
3. Purpose limitation will be where discipline breaks first
Agentic systems work best when given broad objectives. Data protection law works best when purposes are specific, explicit, and limited. That tension sits at the centre of the ICO’s concern.
The report is clear that organisations must resist defining purposes so widely that almost any processing becomes justifiable after the fact. Each stage of an agent’s lifecycle needs a purpose that can withstand scrutiny, and access to data must be justified rather than assumed.
A familiar pattern is already emerging. An internal agent is introduced to improve efficiency. Initially it accesses a narrow dataset. Over time, access expands to make the agent more useful. When challenged later, it becomes difficult to explain why particular data was required at all.
Teams that manage this well treat DPIAs and records of processing as living tools, not one-off deliverables. As systems evolve, assessments and records are revisited and updated. At Privacy Culture, this is typically supported by linking operational assessments, records of processing, and risk tracking so purpose drift is visible early, rather than discovered under scrutiny.
4. Transparency will fail at pressure points, not on paper
Agentic AI complicates transparency in ways that are easy to underestimate. Systems may generate new uses of data that were not foreseen at deployment. Interactions between agents may be invisible to the humans responsible for oversight.
The ICO highlights the risk of invisible processing, where individuals are unaware that their data is being used or combined by autonomous systems. This directly undermines people’s ability to exercise their rights.
In reality, transparency failures surface under pressure. A DSAR arrives and processing cannot be clearly explained. A regulator asks how a decision was reached. An internal escalation reveals uncertainty about where data flows.
At that point, transparency depends on having a connected view of processing activity and the ability to respond consistently. Privacy Culture typically supports this by helping organisations connect records, assessments, and response processes so explanations reflect reality rather than assumptions or fragmented knowledge.
5. Accuracy and rectification will become operational flashpoints
Accuracy issues can quickly translate into data subject rights issues. Hallucinations are often discussed as a model limitation; the ICO reframes them as a data protection problem.
In agentic systems, inaccurate information may be inferred, stored, reused, and acted upon repeatedly. An incorrect assumption about an individual can shape multiple downstream outcomes before anyone notices.
Accuracy remains a legal requirement. Individuals retain the right to rectification, even when data has been generated rather than collected.
In practice, rectification requests usually arrive through DSARs, at exactly the moment when teams must understand how data was created, where it has travelled, and what it has influenced. Where organisations have consistent DSAR handling and access to experienced review, these situations are challenging but manageable. Where they do not, accuracy quickly becomes a compliance and reputational risk.
This is why Privacy Culture approaches DSARs not just as fulfilment tasks, but as a critical control point for understanding data quality and system behaviour.
6. Culture will determine where agentic risk concentrates
Although technical on the surface, the report quietly reinforces a familiar truth. Privacy outcomes are shaped as much by behaviour as by systems.
Many agentic deployments will not originate from central programmes. They will emerge from teams solving immediate problems. Where privacy awareness is uneven, risk appears in unexpected places.
Organisations that manage this well invest in understanding how privacy is actually experienced across the business. Not what policies say, but how confident people feel making decisions, when they escalate concerns, and where assumptions replace knowledge.
Privacy Culture supports this through culture insight and targeted training, helping teams identify where agentic risk is most likely to surface next and address it before it becomes visible to regulators.
Final reflection
The ICO’s Tech Futures: Agentic AI report is not a warning shot. It is a steady signal about where weak interpretations of existing law will be exposed first.
The principles remain familiar. Accountability, purpose limitation, transparency, accuracy, and rights have not changed. What has changed is how quickly small gaps turn into material risk.
Privacy teams that treat privacy as an operational discipline, underpinned by clear processes, ongoing oversight, and experienced support, are well placed to adapt. Agentic AI does not require a new philosophy. It requires existing principles to be applied with greater discipline and less reliance on assumption.
For many organisations, this future is not approaching. It is already quietly underway.