As Visa and Mastercard Deploy AI Agents, Experts Ask: Who Holds the Receipt?

Visa and Mastercard are introducing artificial intelligence-powered agents that can make purchases directly – as long as users consent to using them. The challenge is walking the tightrope between convenience and control. Privacy and security experts are raising concerns about data access, misuse and liability, even as the payments industry moves toward agentic AI faster than regulation can keep pace.
Branded as Visa’s Agent Interface and Mastercard’s Agent Pay, the two networks’ platforms enable AI assistants to access payment credentials and complete transactions independently.
Visa says consumers retain control over AI-led transactions by setting specific spending limits, merchant categories and preferences. “Visa Intelligent Commerce empowers consumers with control over what a designated AI agent purchases on their behalf,” the company said, adding that users can establish controls ensuring an agent’s payment actions are “aligned with the user’s original authenticated instructions.”
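Visa has not published a schema for these controls, but conceptually they amount to a policy evaluated before each agent-initiated payment. The following TypeScript sketch is hypothetical; the AgentSpendingPolicy type and isPurchaseAllowed function are illustrative names, not part of any Visa API.

```typescript
// Hypothetical sketch of consumer-defined controls for an AI purchasing agent.
// None of these types or names come from Visa's actual platform.

interface AgentSpendingPolicy {
  perTransactionLimitUsd: number;      // hard cap on any single purchase
  monthlyLimitUsd: number;             // rolling budget across all purchases
  allowedMerchantCategories: string[]; // e.g. merchant category codes
  requiresReviewAboveUsd?: number;     // optional: pause for human approval
}

interface ProposedPurchase {
  amountUsd: number;
  merchantCategory: string;
  spentThisMonthUsd: number;
}

function isPurchaseAllowed(policy: AgentSpendingPolicy, p: ProposedPurchase): boolean {
  return (
    p.amountUsd <= policy.perTransactionLimitUsd &&
    p.spentThisMonthUsd + p.amountUsd <= policy.monthlyLimitUsd &&
    policy.allowedMerchantCategories.includes(p.merchantCategory)
  );
}
```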
To safeguard payment data, Visa relies on a privacy-preserving framework built on tokenization. Consumers’ payment details are replaced with tokens that are device-bound and secured with passkeys. The framework encrypts data in motion and supports granular consent management, the company said. Transactions initiated by AI agents will be subject to the same dispute resolution processes and protections consumers expect from card-based transactions, with Visa’s Risk Operations Center monitoring for anomalies.
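Visa has not disclosed the token format either, but the properties it describes – device binding, passkey protection and scoped, revocable consent – map onto a structure like the hypothetical one below. Every field name is an assumption made for illustration.

```typescript
// Hypothetical shape of a device-bound payment token as described:
// the real credential is never exposed, and use is gated by a passkey
// assertion tied to the enrolled device. Illustrative only.

interface DeviceBoundToken {
  tokenValue: string;          // surrogate for the real card number
  boundDeviceId: string;       // token is only honored from this device
  passkeyCredentialId: string; // passkey that must authenticate each use
  consentScope: {
    agentId: string;           // which AI agent may present this token
    expiresAt: string;         // ISO 8601; consent is time-boxed
    revoked: boolean;          // consumer can revoke at any time
  };
}

// A network-side check would reject any presentation that fails these gates.
function tokenUsable(t: DeviceBoundToken, deviceId: string, now: Date): boolean {
  return (
    t.boundDeviceId === deviceId &&
    !t.consentScope.revoked &&
    now < new Date(t.consentScope.expiresAt)
  );
}
```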
Mastercard takes a similar route with what it calls “agentic tokens.” Gene Reda, Mastercard’s vice president of core payments, described these tokens as an evolution of traditional tokenization, with better metadata and biometric authentication. This helps “banks and merchants recognize agentic transactions, giving greater transparency and control,” Reda said.
Mastercard’s approach also focuses on trust in the ecosystem. Only registered and verified providers of AI agents are allowed to use agentic tokens, and transactions can only proceed once agents prove they’ve received proper consumer authorization. “Consumers have complete control over what the agent is allowed to purchase,” Reda told Information Security Media Group. The program includes alert settings, transaction limits and the ability to revoke credentials.
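Mastercard has not published the Agent Pay protocol, but the gating Reda describes – a registered, verified provider plus proof of consumer authorization – reduces to a small set of checks before a token is honored. A minimal, hypothetical sketch:

```typescript
// Hypothetical pre-authorization gate for an agentic token, per the
// conditions described: the agent provider must be registered and verified,
// the agent must present proof of the consumer's authorization, and the
// purchase must stay within consumer-set limits. Names are illustrative,
// not Mastercard's API.

interface AgenticTransactionRequest {
  providerId: string;          // the AI agent's provider
  consentProof: string;        // e.g. a signed consumer-authorization artifact
  amountUsd: number;
  transactionLimitUsd: number; // consumer-set cap
}

const verifiedProviders = new Set(["provider-abc", "provider-xyz"]); // registry stand-in

function mayProceed(
  req: AgenticTransactionRequest,
  proofValid: (proof: string) => boolean
): boolean {
  if (!verifiedProviders.has(req.providerId)) return false; // unregistered provider
  if (!proofValid(req.consentProof)) return false;          // no valid consumer authorization
  return req.amountUsd <= req.transactionLimitUsd;          // within consumer-set limit
}
```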
Still, these protections don’t eliminate risk, particularly around privacy. Visa and Mastercard insist that data shared with AI agents is consent-based and minimal. But legal experts warn that such assurances may fall short in the current regulatory environment.
“There may well be privacy concerns when large organizations deploy AI agents with access to customer data,” a representative from Khaitan Legal Associates told ISMG. “Especially when such access goes beyond what a user has knowingly and specifically consented to.”
Consent, in this context, is complicated. Traditional models assume static, one-time permission. But as AI agents operate dynamically and autonomously, consent must be equally adaptive. “Consent today must be treated not as a checkbox, but as a continuous conversation embedded into the organization’s design and governance,” the spokesperson said.
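Read as engineering rather than law, the difference between the two consent models is concrete: a one-time flag checked at signup versus a scoped, expiring, revocable grant re-evaluated on every agent action. The sketch below illustrates that pattern in TypeScript; it is not any vendor’s implementation.

```typescript
// One-time "checkbox" consent vs. continuous consent, sketched.

// Static model: a single flag captured at signup, then trusted forever.
const consentedAtSignup = true;

// Continuous model: every agent action re-checks a revocable, scoped,
// expiring grant, so consent tracks what the agent is doing *now*.
interface ConsentGrant {
  scope: string;    // e.g. "purchase:groceries"
  expiresAt: Date;
  revoked: boolean;
}

function consentHolds(grants: ConsentGrant[], action: string, now: Date): boolean {
  return grants.some(
    (g) => g.scope === action && !g.revoked && now < g.expiresAt
  );
}
```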
This issue is compounded by the design of AI agents, which often rely on historical behavior and preferences to make decisions. “Will it need to know everything I’ve ever purchased to understand my preferences?” asked Tarun Samtani, a board member at the International Association of Privacy Professionals. “And who else gets access to this information?”
Samtani told ISMG that AI-driven commerce may extend the reach of what he termed the “invisible economy for people’s data.” He raised concerns about whether commercial partnerships could influence the AI’s recommendations, and whether consumers would even be aware of such nudges. “We are potentially creating a new layer in the already problematic invisible economy,” he said.
Visa says it addresses these concerns by sharing “basic spending insights” within a tokenized and consent-based framework. Samtani said consumers need more clarity on what “basic” means: “Does that include where I shop, when I shop, what brands I prefer and how much I typically spend, or does it include connecting my regular shopping accounts like Amazon and Walmart?” Along with that clarity, the Khaitan Legal Associates spokesperson said that “if a trend can be shown without storing every transaction, that should be the path taken. There is a fine line between smart insights and over-collection.”
Both companies say their fraud detection systems are evolving to meet the unique risks associated with AI-led payments. Visa cites its Anomaly Detection Platforms and a dedicated Intelligence and Controls team, while Mastercard uses enhanced metadata and transparency to identify when an AI agent is transacting. Issuers and merchants can then use this information in conjunction with Mastercard’s fraud tools to pinpoint suspicious behavior.
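Neither network has detailed how its fraud models consume the agentic signal, but the underlying idea – tag agent-initiated transactions with identifying metadata so issuers can score them differently – can be sketched as follows. All field names are hypothetical.

```typescript
// Hypothetical: agentic transactions carry extra metadata so issuers'
// fraud tools can distinguish them from manual payments and apply
// agent-specific risk rules. Illustrative names and weights only.

interface TransactionSignal {
  initiatedByAgent: boolean;
  agentId?: string;          // identifies the acting AI agent
  consentReference?: string; // links back to the consumer's authorization
  amountUsd: number;
}

function riskScore(tx: TransactionSignal, typicalSpendUsd: number): number {
  let score = 0;
  if (tx.initiatedByAgent && !tx.consentReference) score += 50; // agent with no consent trail
  if (tx.amountUsd > typicalSpendUsd * 3) score += 30;          // spend anomaly
  if (tx.initiatedByAgent && !tx.agentId) score += 20;          // unidentifiable agent
  return score; // an issuer would threshold this alongside existing fraud signals
}
```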
Who Is Liable?
But in practice, if something goes wrong, who is liable? That remains murky. Khaitan Legal Associates said that laws, including those in India, currently lack AI-specific liability statutes. In the absence of such rules, courts may fall back on traditional tort principles, asking who had control, who could have prevented the harm and where the process broke down. Financial institutions and tech developers could end up sharing responsibility under a model of joint and several liability.
Current consumer protection laws are not designed with autonomous AI agents in mind, Samtani said. “The key question is how much control users will have over these systems. Will users be given meaningful options to set specific parameters and budgets, and review transactions before completion? Or are they essentially approving a black box process with limited understanding of how decisions will be made?”
Consumers still have recourse under existing systems. Mastercard said that its standard chargeback protections apply if users dispute a transaction completed by an AI agent. As Agent Pay matures, additional metadata such as “order intent” could help banks assess whether a purchase was truly authorized by the user.
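“Order intent” is not a published specification, but the dispute use case Mastercard describes suggests comparing what the consumer asked for with what the agent actually bought. A hypothetical sketch of how a bank-side check might use such metadata:

```typescript
// Hypothetical dispute-assessment helper: compare recorded "order intent"
// (what the user asked for) with the executed purchase. All fields are
// assumptions for illustration, not a Mastercard data model.

interface OrderIntent {
  description: string; // e.g. "running shoes under $120"
  maxAmountUsd: number;
}

interface ExecutedPurchase {
  description: string;
  amountUsd: number;
}

// A bank reviewing a chargeback could flag purchases that exceed the
// intent's stated budget as likely unauthorized by the user.
function exceedsIntent(intent: OrderIntent, purchase: ExecutedPurchase): boolean {
  return purchase.amountUsd > intent.maxAmountUsd;
}
```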
But oversight of third-party AI developers remains critical. Both Visa and Mastercard require agents to be vetted and governed under ecosystem-specific rules. Mastercard maintains a registration framework, while Visa is implementing onboarding protocols and compliance standards to ensure agents perform up to the network’s expectations.
Legal experts argue that ecosystem-level rules are only part of the solution. “When AI becomes part of your payments ecosystem, it demands board-level visibility, ongoing monitoring and active compliance oversight,” the Khaitan Legal Associates spokesperson said. Integrating AI into sensitive infrastructure is not just a technology upgrade but a governance challenge.
Regulators are trying to catch up to the rise of AI-led commerce. While few regulations explicitly address automated decision-making or AI accountability, sectoral regulators are exploring their own AI frameworks. But experts say coordinated national strategies are overdue.
“As AI governance is catching up, we still lack harmonized guardrails,” the Khaitan Legal Associates spokesperson said. In fact, recent cases have highlighted “how the absence of well-defined boundaries for AI-led access has triggered both regulatory and public backlash, even for the most established global players.”