AI Fraud Detection in Payments: Banks Turn to Intelligent Defenses
San Diego — Payments firms are turning to artificial intelligence to counter payment fraud that is now being amplified by the same technology.
At Nacha’s Smarter Faster Payments conference in San Diego on Monday, banking and payments leaders said financial institutions intend to use AI to keep pace with fraudsters who have upgraded their tactics.
Matt Vega, chief fraud strategist at Sardine, said the threat landscape has shifted so dramatically that “AI has to confront AI” for effective fraud prevention, adding that lawmakers are considering measures to expand industry use of the technology.
As payment fraud becomes more automated and adaptive, detection systems need to learn and respond at machine speed, not on a manual review cycle.
- Criminals use generative AI for rapid scheme development.
- Cybersecurity teams are adopting AI tools to keep up.
- Those tools can strengthen defenses by correlating signals across channels and customer touchpoints.
- The goal is reducing fraud losses while protecting customer trust.
Vega told attendees during a panel on AI’s impact in payments that he’s seeing a surge in “polymorphic agentic agents,” adaptive AI programs that change behavior to get around the controls organizations put in place, challenging traditional fraud detection.
He cited a recent case in which a financial institution faced a wave of automated onboarding attacks. After the bank tightened verification steps, the attacker pivoted within seconds, probing for a new path through the controls.
Executives said the same toolkits are also enabling newer fraud patterns, including synthetic identities stitched together from real and fabricated data, and deepfake audio or video used to pressure call centers or impersonate account holders during high-risk requests.
Elizabeth Bourgoin, head of banking for Google Cloud, said this month brought a noticeable push among financial institutions to harness AI against rising threats. She noted that fraudsters use the same AI to bypass safeguards, interact with payments processing, and hunt for platform weaknesses.
Google rolled out a cloud-based fraud defense that deploys AI-driven agents to surface vulnerabilities and patch them quickly. Most partner banks are investing in similar countermeasures, Bourgoin said, arguing that institutions must field AI agents to confront AI-powered fraud.
In practice, banks are deploying a mix of models: machine learning classifiers, including decision trees, neural networks and deep learning systems; anomaly detection tuned to spot deviations from normal customer behavior; and agentic AI that can take constrained actions such as escalating cases, triggering step-up verification, or prioritizing alerts for investigators.
Panelists said the advantage over traditional, rule-based approaches is that AI systems can score activity quickly, adapt as patterns shift, scale across large volumes of transactions, and surface novel fraud behaviors that may not match prewritten rules.
They also said modern models can reduce false positives by learning context — such as customer history, device and network signals, merchant patterns, and typical payment timing — which helps distinguish unusual but legitimate activity from truly suspicious behavior.
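The context-aware scoring panelists described can be pictured with a toy scoring function. Everything below — the `CustomerProfile` fields, the weights, and the thresholds — is an illustrative assumption, not any institution's actual model:

```python
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class CustomerProfile:
    amounts: list      # recent transaction amounts for this customer
    usual_hours: set   # hours of day the customer typically transacts

def risk_score(profile: CustomerProfile, amount: float, hour: int,
               new_device: bool) -> float:
    """Blend amount, timing, and device context into a 0-1 risk score.

    Hypothetical weights: amount deviation contributes up to 0.5,
    unusual timing 0.2, and an unrecognized device 0.3.
    """
    # Amount deviation: z-score against the customer's own history,
    # so an amount that is "large" for one customer may be normal for another.
    mu = mean(profile.amounts)
    sigma = pstdev(profile.amounts) or 1.0
    amount_z = abs(amount - mu) / sigma
    score = min(amount_z / 4.0, 1.0) * 0.5

    # Timing context: transacting at an unusual hour adds risk.
    if hour not in profile.usual_hours:
        score += 0.2

    # Device context: an unrecognized device adds risk.
    if new_device:
        score += 0.3

    return min(score, 1.0)
```

A customer whose history is small daytime payments would score low on another routine daytime payment, but high on a large payment at 3 a.m. from a new device, even though neither signal alone is conclusive.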
Building a Regulatory Framework
On Capitol Hill, the House Financial Services Committee is exploring a framework that would make it easier for financial institutions to deploy AI to combat fraud, Vega said. The Office of the Comptroller of the Currency and the Federal Deposit Insurance Corp. are also contributing guidance.
Vega told the audience those bodies are strongly supportive, indicating that even heavily regulated banks can begin piloting agentic AI within existing oversight.
Regulatory clarity on testing, documentation, and accountability can help banks deploy AI more safely while still moving from pilots to production.
After the session, he declined to name which congressional offices Sardine has engaged, but described bipartisan momentum for updating the regulatory approach to enable broader use of AI.
He said the goal is a playbook that accelerates adoption of AI and agentic use cases in financial services. While some bank applications have received regulatory approval, a clear, unified framework is still missing. The legislative effort is early but expected to progress quickly.
Executives said limitations remain, including uneven data quality across channels, the need to explain model decisions to internal reviewers and regulators, and the cost of building the infrastructure and talent needed to run models safely at scale. They also warned that adversarial attacks can deliberately probe models and workflows, forcing institutions to harden systems and continuously validate performance.
Ben Chance, who leads Certos identity and payment risk services at Early Warning Services, wasn’t tracking the legislative specifics but backed formal oversight. He called AI a powerful tool that should be governed in a way that improves the ecosystem and disrupts criminal enterprises.
Adapting to Escalating Fraud
- Recalibrating fraud detection strategies.
- Adopting adaptive, real-time monitoring.
- Accelerating deployment of AI models.
- Responding to industrialized fraud schemes.
Joe Fuqua, head of intelligent automation architecture at Truist, said protecting payment rails and networks depends on building detection and case-handling workflows that can ingest richer signals, automate triage, and route complex cases to the right teams without breaking existing controls.
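One way to picture the triage-and-routing workflow Fuqua described is a dispatcher that maps a model's score plus contextual signals to a handling path. The thresholds, signal names, and routing labels here are hypothetical, not Truist's actual configuration:

```python
def route_alert(score: float, signals: set) -> str:
    """Route a fraud alert based on model score and contextual signals.

    Thresholds and signal names are illustrative; real deployments
    tune them per payment rail and risk appetite.
    """
    if score >= 0.9:
        return "block_and_escalate"    # high confidence: stop payment, open a case
    if score >= 0.6 or "deepfake_suspected" in signals:
        return "step_up_verification"  # challenge the customer before proceeding
    if score >= 0.3:
        return "investigator_queue"    # route to human review
    return "auto_clear"                # let the payment proceed
```

The point of the design is that automation handles the clear cases at both ends while ambiguous or high-stakes cases reach the right human team, without replacing the controls already in place.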
Fuqua noted that larger institutions tend to face heavier integration and validation burdens, and that aligning security, fraud, and operations teams is often as important as the underlying tooling.
Greg Williamson, head of fraud commercialization strategy at Nasdaq Verafin, said criminals now match financial institutions in their use of AI. He spoke on a separate panel titled “Is AI Friend or Foe to Fraud Fighters?”
Williamson’s biggest concern is that adversaries can run extensive experiments against onboarding, authentication, and transaction controls, quickly identifying which combinations of identity signals and behaviors are most likely to slip through.
He warned that while banks may take up to 18 months to roll out a new AI model, criminals iterate, learn, and deploy rapidly.
Featurespace founder Dave Excell said a client’s call center was recently inundated with AI-generated robocalls, marking a new attack type this year.
On the defense side, Excell said Featurespace, acquired by Visa in 2024, applies machine learning to forecast likely customer transactions and flag deviations that may signal fraud. The company also uses AI to assist investigators and refine its software strategies.
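Featurespace's production models are proprietary, but the forecast-then-flag idea can be sketched with a minimal stand-in: forecast the next transaction amount with an exponentially weighted moving average, then flag amounts that deviate far beyond the customer's typical variability.

```python
def ewma_forecast(history: list, alpha: float = 0.3) -> float:
    """Exponentially weighted moving average as a simple next-amount forecast."""
    forecast = history[0]
    for amount in history[1:]:
        forecast = alpha * amount + (1 - alpha) * forecast
    return forecast

def flag_anomaly(history: list, amount: float, tolerance: float = 3.0) -> bool:
    """Flag `amount` if it deviates from the forecast by more than
    `tolerance` times the history's mean absolute deviation."""
    forecast = ewma_forecast(history)
    mad = sum(abs(x - forecast) for x in history) / len(history)
    return abs(amount - forecast) > tolerance * max(mad, 1.0)
```

A production system would model far richer sequences (merchants, geography, timing) rather than amounts alone, but the shape is the same: predict what the customer is likely to do next, then measure how far reality departs from that prediction.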
Executives said enterprises looking to harness AI typically start by choosing a narrow, high-impact use case, then improve data pipelines, define clear decision boundaries for automation, and keep humans in the loop for exceptions and appeals. They said ongoing monitoring for drift, periodic adversarial testing, and coordination with compliance and model-risk teams are critical to sustaining performance after deployment.
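The drift monitoring executives mentioned is often implemented with the population stability index (PSI), a common metric that compares a model's live score distribution to its training-time baseline. This is a generic sketch of the technique, not any vendor's implementation:

```python
import math

def population_stability_index(baseline: list, current: list,
                               edges: list) -> float:
    """Population stability index between two score samples.

    PSI near 0 means the live distribution still matches the baseline;
    values above roughly 0.25 are commonly read as significant drift
    worth investigating.
    """
    def proportions(values):
        counts = [0] * (len(edges) + 1)
        for v in values:
            counts[sum(v > e for e in edges)] += 1  # bin index
        # Floor each bin to avoid log(0) when a bin is empty.
        return [max(c / len(values), 1e-4) for c in counts]

    base = proportions(baseline)
    live = proportions(current)
    return sum((l - b) * math.log(l / b) for b, l in zip(base, live))
```

Run on a schedule against recent scores, a check like this gives model-risk teams an early, quantitative signal that fraud patterns — or the attacker tactics behind them — have shifted enough to warrant retraining.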