Agentic AI: What It Is and Why Privacy Teams Need to Prepare Now

Privacy Culture | June 26, 2025

Artificial intelligence is evolving. It is no longer just about making predictions or analysing data. A new kind of system is emerging, one that can take action on its own. These systems are known as agentic AI, and they are now firmly in the sights of the UK Information Commissioner’s Office (ICO).

The ICO has committed to a formal review of agentic AI as part of its 2025 AI and Biometrics Strategy. A public report is due within the year, alongside updates to key regulatory guidance. For privacy professionals, this is the time to take notice.

This article explains what agentic AI is, the benefits it can offer, the risks it creates for privacy, and the practical steps you can take now.

What Is Agentic AI?

Agentic AI refers to artificial intelligence systems that can make plans and take actions independently in order to achieve a goal. Unlike traditional AI, which needs prompts or specific input from a user, agentic systems can act on their own.

These systems can:

  • Plan a sequence of tasks
  • Use external tools and services
  • Adjust their behaviour based on feedback
  • Act without direct human control

In simple terms, it is AI that doesn’t just respond to requests. It decides what to do and then does it.
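To make that concrete, here is a minimal, simplified sketch of the loop an agentic system typically runs: plan a step towards a goal, act through an external tool, observe the result, and adjust the next step. Nothing here reflects any specific product or vendor API; plan_next_step and the tool functions are illustrative placeholders.

```python
# Minimal sketch of an agentic loop: plan, act via a tool, observe, adjust.
# plan_next_step() and the tool functions are illustrative placeholders only.

def plan_next_step(goal, history):
    """Decide the next action based on the goal and what has happened so far."""
    if not history:
        return {"tool": "search_calendar", "args": {"week": "next"}}
    return {"tool": "send_invite", "args": {"slot": history[-1]["result"]}}

def call_tool(action):
    """Call an external tool or service; in reality this step may touch personal data."""
    tools = {
        "search_calendar": lambda args: "Tuesday 10:00",
        "send_invite": lambda args: f"invite sent for {args['slot']}",
    }
    return tools[action["tool"]](action["args"])

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):                # the agent acts without per-step prompts
        action = plan_next_step(goal, history)
        result = call_tool(action)            # side effects happen here
        history.append({"action": action, "result": result})
        if action["tool"] == "send_invite":   # goal reached, stop
            break
    return history

print(run_agent("book a team meeting"))
```

The point of the sketch is the shape of the loop: once started, the system chooses and executes its own steps, which is exactly what makes the data protection questions below harder.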

A few real-world examples:

  • A virtual assistant that schedules meetings, drafts emails, and books travel without constant instructions
  • A trading bot that monitors global news and adjusts investments on the fly
  • An AI tool that writes, tests, and refines software based on a goal

This type of autonomy creates opportunities. But it also raises questions — especially around data protection.

Why Is the ICO Paying Attention?

The ICO’s June 2025 AI and Biometrics Strategy names agentic AI as a key emerging risk. Here’s what they have committed to:

  • A horizon-scanning report that will explore the data protection risks of agentic AI. This is due by mid-2026
  • Updates to the statutory code of practice on automated decision-making. These updates will specifically cover autonomous and agentic systems
  • Public consultation on refreshed guidance around AI profiling and complex decision-making
  • Ongoing collaboration with other UK regulators, through the Digital Regulation Cooperation Forum (DRCF), to ensure a joined-up approach in areas like finance and healthcare

The Commissioner has made it clear that innovation must not come at the expense of privacy, trust or accountability.

What Are the Benefits of Agentic AI?

Done well, agentic AI can provide real value to organisations and their customers.

Task automation

It can complete multi-step processes without constant supervision. This saves time and reduces human error.

Personalisation

It can adapt to individual needs in real time. That is especially useful in sectors like healthcare, retail and education.

Speed and scale

It operates continuously, without rest, and can handle large volumes of data and tasks.

Better decision-making

It can gather new information, adjust its strategy, and respond quickly to changing conditions.

For many businesses, this means faster service, lower costs, and smarter outcomes.

What Are the Risks?

The same features that make agentic AI powerful also make it risky — especially when personal data is involved.

Lack of transparency

When a system takes decisions independently and changes its behaviour over time, it becomes much harder to explain what happened and why.

Unclear accountability

If an autonomous system makes a mistake or breaches someone’s rights, who is responsible? The developer? The organisation? The system itself? These questions are not yet fully resolved.

Scope creep

A system with too much freedom may access or use data in ways that were not originally intended. This could lead to unlawful processing or breach of consent.

Security concerns

Autonomous systems that can trigger actions in the real world must be hardened against misuse, prompt injection, or loss of control.

Legal uncertainty

The ICO and other regulators are only beginning to address these systems. For now, the rules are not always clear, especially in high-risk sectors.

What Privacy Professionals Should Do Now

While we wait for official guidance, there are several actions you can take now.

Know when AI is agentic

Understand whether your organisation is using systems that can act without instruction. Know what tools they can access and what kinds of decisions they can make.

Update your DPIAs

Review your data protection impact assessments. Make sure they cover:

  • The chain of actions the system can take
  • Whether those actions involve processing personal data
  • How the system adapts or learns over time

Build in oversight

Ensure there are human checkpoints where necessary. This is especially important for decisions that affect rights, freedoms or access to services.
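One way to build that in, without assuming any particular framework, is a simple approval gate that pauses the agent before high-impact actions and requires human sign-off. This is only a sketch under that assumption; the action categories and the approval mechanism would be your organisation's own.

```python
# Sketch of a human checkpoint: the agent must pause and obtain approval
# before actions that could affect someone's rights or access to services.
# The set of "high-impact" actions here is illustrative, not definitive.

HIGH_IMPACT_ACTIONS = {"decline_application", "close_account", "share_personal_data"}

def requires_human_review(action_name: str) -> bool:
    return action_name in HIGH_IMPACT_ACTIONS

def execute_with_oversight(action_name: str, payload: dict, approve) -> str:
    """Run an action, routing high-impact ones through a human approver first.

    `approve` is a callable supplied by your organisation, for example a
    case-management hook that returns True only after a person signs off.
    """
    if requires_human_review(action_name):
        if not approve(action_name, payload):
            return f"{action_name} blocked pending human review"
    return f"{action_name} executed"

# Example: a stand-in approver that always escalates to a person
def manual_approver(action_name, payload):
    print(f"Escalating {action_name} for review: {payload}")
    return False  # nothing proceeds until a person approves it

print(execute_with_oversight("send_reminder", {"customer": "A123"}, manual_approver))
print(execute_with_oversight("close_account", {"customer": "A123"}, manual_approver))
```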

Improve auditability

Make sure your systems can log what they did, when they did it, and why. You need enough detail to explain the decision to regulators, customers or auditors.
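As a rough illustration, each autonomous action can be written to a structured, timestamped record that captures what was done, what data it involved, and the stated reason, so the trail can be reconstructed later. The field names below are assumptions for the sketch, not a prescribed schema.

```python
# Sketch of an audit trail for agent actions: what was done, when, and why.
# Field names are illustrative; adapt them to your own governance model.

import json
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice this would be an append-only store, not an in-memory list

def record_action(system: str, action: str, data_categories: list, rationale: str):
    """Append a structured, timestamped record of one autonomous action."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "action": action,
        "data_categories": data_categories,   # e.g. contact details, payment data
        "rationale": rationale,               # why the agent took this step
    }
    AUDIT_LOG.append(entry)
    return entry

record_action(
    system="scheduling-assistant",
    action="send_invite",
    data_categories=["email address", "calendar availability"],
    rationale="Goal: book team meeting; earliest shared free slot selected",
)

print(json.dumps(AUDIT_LOG, indent=2))  # enough detail to explain the decision later
```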

Stay alert to changes

The ICO’s horizon-scanning report and updated guidance will shape expectations. Being ready to respond to these updates will help your organisation stay compliant.

What the ICO Plans to Do

  • Horizon scanning: publish a full report on the risks and implications of agentic AI
  • Code of practice: expand the scope of the statutory code on automated decision-making
  • Guidance updates: revise profiling and ADM guidance to reflect agentic systems
  • Cross-sector collaboration: work with other UK regulators to align approaches
  • Public consultation: invite feedback on new rules and requirements

These steps are already underway and will shape the UK regulatory landscape over the next 12 months.

Final Thought

Agentic AI is more than a technical trend. It is a shift in how machines behave. They no longer wait for instructions. They act.

For privacy teams, this changes the game. It demands new thinking about transparency, control, and responsibility. The ICO has recognised the risks. The question is whether organisations are ready to meet them.

Those who start now — updating governance, revisiting impact assessments, and embedding proper oversight — will be far better placed when the formal rules land.

If you are already exploring agentic AI in your organisation, now is the time to get your privacy framework in order.
