
Rethinking security in the age of Agentic AI.

March 20, 2026
Raphaël Guilley, SVP Strategic Portfolio and Growth.

In our previous articles in this series, we explored the rise of agentic AI and its implications for digital commerce and identity. We outlined how autonomous agents are beginning to act on behalf of users and organizations, executing decisions and transactions with increasing independence.

This evolution introduces a fundamental question: how should trust be established, governed, and enforced in a world where actions are no longer initiated directly by humans?

Security has traditionally been built around a clear assumption that there is a human at the point of interaction. Agentic AI challenges that assumption. Systems are now being designed in which software agents determine what to purchase, who to pay, and when to act, often without real-time human intervention.

As a result, existing security models must evolve. The focus is no longer simply on protecting systems from bad actors, but on enabling trusted autonomy within clearly defined and enforceable boundaries.

The legacy security model: verifying humans.

Traditional digital security frameworks were designed around a relatively simple model. A user authenticates, initiates a transaction, and the system evaluates risk based on credentials, device signals, and behavioral patterns.

Automation has historically been treated as a threat. Bots were associated with fraud, and high-velocity activity was considered suspicious. This led to the widespread adoption of controls such as CAPTCHA, device fingerprinting, and step-up authentication.

These approaches were effective because they aligned with how systems were used. The human was the decision maker, and the system’s role was to verify that identity and detect anomalies.

The shift: enabling trusted agents.

Agentic AI introduces a different paradigm. Systems must now recognize that non-human actors can be legitimate participants.

The question is no longer whether an interaction is automated, but whether the agent is authorized, operating within defined constraints, and acting on behalf of a verified principal.

This requires a shift from identity verification to delegated authority management. Organizations must be able to determine who authorized an agent, what it is permitted to do, under what conditions, and for what duration. Equally important is the ability to revoke that authority immediately if needed.

The concept of Know Your Agent (KYA) is emerging as a natural extension of Know Your Customer (KYC), reflecting the need to establish trust not only in individuals, but also in the systems acting on their behalf.

Identity becomes contextual and delegated.

Digital identity has never been as singular or static as many systems assume. It is inherently contextual and often involves delegation across individuals, organizations, and devices.

Agentic AI makes this explicit. Agents require a form of identity that allows them to be uniquely identified, linked to a principal, and governed through defined permissions. Their actions must be observable and auditable.

As agents begin to execute financial transactions, enter into agreements, or trigger operational processes, they effectively become actors within regulated environments. Without clear identity and accountability, their behavior becomes indistinguishable from malicious activity.

A new attack surface: decision integrity.

Traditional cybersecurity focuses on protecting systems from unauthorized access and malicious code. Agentic systems introduce a different class of risk: manipulation of decision-making.

Threats such as prompt injection, agent impersonation, and data poisoning can lead to outcomes that are logically consistent from the system’s perspective but undesirable or harmful in practice. In these cases, the system has not been compromised in a traditional sense, yet the outcome is still insecure.

This highlights the need to embed security within the decision-making process itself, rather than relying solely on perimeter controls.

Managing excessive autonomy.

One of the most significant risks in agentic systems is excessive autonomy. This occurs when agents are given broad objectives without sufficiently precise constraints.

In these scenarios, agents may take actions that technically fulfill their objectives but exceed the user’s intent. This is not malicious behavior, but rather a consequence of optimizing for outcomes without adequate governance.

Capturing user intent and translating it into enforceable constraints becomes a critical requirement. Security must focus not only on preventing unauthorized actions, but also on ensuring that authorized actions remain aligned with expectations.
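Translating intent into enforceable constraints can be as simple as checking every proposed action against explicit limits before execution. The sketch below is a minimal illustration under assumed names (`IntentConstraints`, `within_intent`): an action can be technically authorized yet still fail the intent check.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class IntentConstraints:
    """User intent translated into enforceable limits (illustrative fields)."""
    max_total_spend: float         # hard ceiling on what the agent may spend
    allowed_categories: frozenset  # e.g. {"groceries"}


def within_intent(constraints: IntentConstraints,
                  amount: float, category: str) -> bool:
    """Authorized *and* aligned: the action must satisfy every constraint,
    not merely fulfill the agent's broad objective."""
    return (amount <= constraints.max_total_spend
            and category in constraints.allowed_categories)


limits = IntentConstraints(max_total_spend=100.0,
                           allowed_categories=frozenset({"groceries"}))
```

A purchase of 80 in "groceries" passes, while the same amount in "electronics" is rejected even though it would technically complete the objective "buy what I need."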

Tokenization evolves into policy enforcement.

Tokenization remains an important control mechanism, particularly in payments. However, in an agent-driven environment, tokens must carry more than just a reference to an underlying asset or credential.

They need to encode delegation scope, policy constraints, time limits, and provenance. Without this context, tokenization simply accelerates transactions without improving control.

This is driving the need for more sophisticated token frameworks that incorporate policy and authorization directly into the transaction layer.

Auditability and accountability.

As agents act on behalf of users, the ability to explain and reconstruct decisions becomes essential. It will no longer be sufficient to attribute actions to a system without understanding how and why those actions were taken.

Organizations must be able to trace decisions back to the original intent, the data inputs considered, the constraints applied, and the logic followed. This level of auditability is necessary to support both regulatory requirements and user trust.
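The four elements above — original intent, data inputs, constraints applied, and logic followed — suggest the minimum shape of an auditable decision record. The sketch below uses assumed names (`DecisionRecord`, `AuditTrail`) purely to illustrate reconstructing a decision from its outcome.

```python
from dataclasses import dataclass


@dataclass
class DecisionRecord:
    """One auditable entry: enough to reconstruct how a decision was made."""
    intent: str        # the original user intent
    inputs: dict       # the data inputs considered
    constraints: list  # the constraints applied at decision time
    rationale: str     # the logic followed
    outcome: str       # what the agent actually did


class AuditTrail:
    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def log(self, record: DecisionRecord) -> None:
        self._records.append(record)

    def trace(self, outcome: str) -> list[DecisionRecord]:
        """Reconstruct every decision that led to a given outcome."""
        return [r for r in self._records if r.outcome == outcome]


trail = AuditTrail()
trail.log(DecisionRecord(
    intent="book travel under 500",
    inputs={"fare": 420},
    constraints=["max_spend <= 500"],
    rationale="cheapest fare in the requested window",
    outcome="ticket-purchased",
))
```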

Without it, accountability becomes unclear, creating risk for both service providers and end users.

The complexity of multi-agent systems.

The challenges become even more pronounced in environments where multiple agents interact. Coordination between agents introduces new risks, including unauthorized participation, misaligned objectives, and cascading errors.

Security in these systems requires a shift toward managing interactions and emergent behavior, rather than focusing solely on individual endpoints. This calls for new approaches that combine elements of system design, governance, and continuous monitoring.

From security to governance.

The most significant transformation is the shift from security as a defensive function to governance as an enabling capability.

Organizations must define who can create agents, how they are certified, what rules they must follow, and how they are monitored and controlled over time. This elevates the role of identity providers, payment networks, and regulators in establishing shared frameworks for trust.

Without deliberate governance, the ecosystem risks fragmentation and inconsistency, with different platforms imposing their own rules and standards.

Conclusion: enabling trust at scale.

The challenge is not to prevent agents from acting, but to enable them to act safely, transparently, and within clearly defined boundaries.

Agentic AI represents a fundamental shift in how decisions are made and executed. Security must evolve accordingly, moving beyond traditional models focused on authentication and fraud detection.

The future of digital trust will depend on our ability to combine identity, policy, and auditability into a cohesive framework that supports both autonomy and control.

In this context, programmable trust becomes a foundational capability. It allows organizations to define, enforce, and verify the conditions under which agents operate, creating a system where autonomy can scale without compromising accountability.

Discover more in our agentic AI commerce blog series:
Chapter I: Agentic AI and payments: when AI gets a wallet and a will of its own.
Chapter II: Agentic commerce: when your wallet gets a brain.
Chapter III: Agentic commerce: issue on Llamas.


Raphaël Guilley, SVP Strategic Portfolio and Growth

Raphaël has over 20 years of experience in the consulting industry, with extensive involvement in managing large-scale international projects across payments, smart mobility and digital identity. His areas of expertise include product management, agile development and product launches.

At Fime, Raphaël leads the global Consulting team under the Consult Hyperion brand, following Fime’s acquisition of the company. He supports a wide range of stakeholders, including payment networks, financial institutions and transport operators, to solve complex challenges, explore new opportunities and expand into new markets. 

Prior to joining Fime, Raphaël was VP of Risk & Compliance Solutions at IPC Systems Inc. He also worked in similar roles for Etrali Trading Solutions and Orange Business Services.
