
Artificial Intelligence Drives Global Fraud Surge

2026 Global Financial Crime Report: Nasdaq Verafin Sees Artificial Intelligence-Driven Surge

A new analysis from Nasdaq Verafin details how artificial intelligence is accelerating illicit financial activity worldwide, setting the stage for 2026 findings on the scope of fraud and scams.

Briefing: Losses to Fraud Rise as Artificial Intelligence Propels Scams

Nasdaq Verafin’s 2026 report focuses on bank fraud and scams, not the full universe of illicit financial activity. While the study’s estimate puts 2025 bank-fraud-and-scam losses in the hundreds of billions of dollars, broader global illicit flows in 2025—including money laundering, corruption proceeds, tax evasion, and sanctions evasion—are commonly discussed as running into the low trillions of dollars annually.

Category | 2023 Losses ($ Billions) | 2025 Losses ($ Billions) | Percent Change
Con Artists and Fraud Rings (Global) | Not stated | Not stated | 9.2%
Bank Fraud and Scams (Nasdaq Verafin Estimate) | $526.1 | $579.4 | 10.1%
Scam-Related Losses | $52.0 | $62.0 | 19.3%
Impact Absorbed by Banks | $478.1 | $517.4 | 8.2%
Technology-Assisted Schemes (Including Artificial Intelligence-Enabled) | $12.0 | $14.3 | 19.6%
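
For readers who want to check the math, each percent change is simply the growth from the 2023 column to the 2025 column. The minimal Python sketch below reproduces the figures from the rounded dollar amounts; the small gaps on two rows (19.2% computed versus the published 19.3% and 19.6%) suggest the report derived its percentages from unrounded underlying values.

```python
# Reproduce the year-over-year growth figures from the table above.
rows = {
    "Bank Fraud and Scams": (526.1, 579.4),
    "Scam-Related Losses": (52.0, 62.0),
    "Impact Absorbed by Banks": (478.1, 517.4),
    "Technology-Assisted Schemes": (12.0, 14.3),
}

for name, (y2023, y2025) in rows.items():
    change = (y2025 - y2023) / y2023 * 100  # percent change, 2023 -> 2025
    print(f"{name}: {change:+.1f}%")
```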

Colin Parsons, head of fraud product strategy at Nasdaq Verafin, told a Payments Dive virtual panel on Wednesday that artificial intelligence now threads through day-to-day fraud operations.

Examples include voice-cloning for call-center impersonation, deepfake video used to pressure employees into urgent payments, and synthetic identities that blend real and fabricated data to slip past onboarding checks. Criminal groups are also using generative tools to scale personalized phishing lures across email, text, and social platforms, then adapt scripts in real time as victims push back.

Artificial intelligence is becoming a force multiplier on both sides: it can industrialize deception, but it can also industrialize detection when governance, data quality, and accountability keep pace.

Insights: Financial Institutions Confront Artificial Intelligence-Era Schemes

Losses tied to scams and technology-assisted methods rose sharply over the past two years, with banks bearing most of the impact.

Ninety percent of financial-crime professionals surveyed reported more artificial-intelligence-driven attacks in the past two years. The assessment draws on a model synthesizing nearly 500 global studies and estimates, plus a survey of cybersecurity experts.

Parsons noted that newer tools have boosted the polish of scams.

Obvious tells like spelling errors are far less common, and artificial-intelligence-written chat scripts keep growing more persuasive.

At the same time, defenders are deploying artificial intelligence to spot recurring patterns in scams and trigger alerts, including transaction-monitoring models, anomaly detection for unusual payment behavior, network analytics that map mule activity, and identity checks that look for inconsistencies across devices and sessions.
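
The report does not publish model internals, but the anomaly-detection idea can be illustrated with a toy sketch: score a new payment against a customer's own history and alert when it deviates sharply. Everything here, including the threshold and the amounts, is assumed for illustration, not drawn from any vendor's system.

```python
from statistics import mean, stdev

def flag_unusual_payment(history, new_amount, z_threshold=3.0):
    """Flag a payment whose amount deviates sharply from a customer's history.

    Toy illustration only: real transaction monitoring weighs many more
    features (counterparty, device, timing, velocity) than amount alone.
    """
    if len(history) < 2:
        return False  # not enough history to estimate a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > z_threshold

# A customer who usually sends ~$100 suddenly wires $5,000.
past = [90, 110, 95, 105, 100, 98]
print(flag_unusual_payment(past, 5000))  # True: worth an alert
print(flag_unusual_payment(past, 120))   # False: within the normal range
```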

These systems can stop suspicious transfers in flight and prevent funds from leaving, Parsons said, but he cautioned that the technology is not foolproof. Models can be tricked by adversarial tactics, drift as criminal behavior changes, or generate false positives that create friction for legitimate customers—problems that can be amplified when institutions scale automation without strong testing and oversight.
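
The false-positive tension is easy to see in a toy example: lowering a model's alert threshold catches more fraud but also flags more legitimate customers. The risk scores and labels below are invented for illustration only.

```python
# Toy illustration of the detection/false-positive tradeoff described above.
# Each tuple is (risk_score, is_fraud) for a labeled historical transaction.
labeled = [
    (0.95, True), (0.80, True), (0.60, True), (0.40, True),
    (0.70, False), (0.50, False), (0.30, False), (0.20, False),
    (0.10, False), (0.05, False),
]

total_fraud = sum(1 for _, is_fraud in labeled if is_fraud)

for threshold in (0.9, 0.6, 0.3):
    flagged = [(s, f) for s, f in labeled if s >= threshold]
    caught = sum(1 for _, is_fraud in flagged if is_fraud)
    false_pos = sum(1 for _, is_fraud in flagged if not is_fraud)
    print(f"threshold {threshold}: caught {caught}/{total_fraud} fraud, "
          f"{false_pos} legitimate customers flagged")
```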

For compliance teams heading into 2026, the challenge is not only more sophisticated fraud but also tighter expectations around how institutions manage risk. Emerging pressure points include faster payments that compress investigation windows, higher standards for customer verification and beneficial-ownership clarity, and more scrutiny over how artificial-intelligence models are trained, explained, and audited inside anti-financial-crime programs.

In response, institutions are leaning toward continuous monitoring rather than periodic reviews, more automation in triage and case management, and stronger controls around model governance and data lineage. The operational shift also raises staffing and process issues, as investigators need workflows that can handle higher alert volumes without losing decision quality.
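
As a hypothetical sketch of what triage automation can look like (not a description of any vendor's product), alerts can be scored and worked in risk order, so higher volumes do not bury the riskiest cases. The alert IDs and scores here are invented.

```python
import heapq

def triage(alerts):
    """Yield alerts in descending risk order (max-heap via negated scores)."""
    queue = [(-alert["score"], alert["id"]) for alert in alerts]
    heapq.heapify(queue)
    while queue:
        neg_score, alert_id = heapq.heappop(queue)
        yield alert_id, -neg_score

alerts = [
    {"id": "A-101", "score": 0.42},  # unusual login location
    {"id": "A-102", "score": 0.91},  # large transfer to a new payee
    {"id": "A-103", "score": 0.17},  # minor velocity blip
]
for alert_id, score in triage(alerts):
    print(alert_id, score)  # A-102 first, A-103 last
```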

Beyond a single institution’s controls, global collaboration remains a key lever. Cross-border information sharing can help connect fragmented signals—shared beneficiaries, reused infrastructure, mule networks, and repeat scam narratives—before they migrate to new jurisdictions. Common cooperation channels include regulator-to-regulator coordination, financial-intelligence-unit networks, and public-private partnerships that allow banks and authorities to exchange typologies and indicators under defined safeguards.

Collaboration has limits, however. Privacy rules, bank-secrecy constraints, uneven data standards, and geopolitical friction can slow or narrow what participants are able to share, and criminals can exploit gaps between jurisdictions to keep investigations siloed.

Customers face direct risks as schemes scale, including identity theft, account takeover, impersonation that pressures victims into real-time payments, and long-tail harms when stolen data is recycled into new fraud attempts. The fallout can include drained accounts, damaged credit, loss of savings, and the time-consuming work of restoring identity and access.

Basic prevention steps still matter for individuals: use multifactor authentication, treat unsolicited payment requests as high risk, verify contacts through a second channel before sending money, turn on bank alerts, and limit what is shared publicly that can be stitched into convincing impersonations.

Cryptocurrencies also remain part of the landscape for illicit finance, including laundering scam proceeds through rapid hops between assets and services, layering via exchanges and peer-to-peer intermediaries, routing through mixers, and exploiting cross-chain bridges and some decentralized-finance (DeFi) protocols to obscure provenance. They can also be used for small, distributed fundraising in terrorism financing, where speed and reach matter even when the individual transfers are modest.
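
Tracing those rapid hops is essentially a graph problem: start from a flagged address and follow outgoing transfers hop by hop. The toy ledger and address names below are invented; real chain analysis operates over full on-chain transaction data and attribution databases.

```python
from collections import deque

# Toy ledger: each address maps to the addresses it sent funds to.
transfers = {
    "scam_wallet": ["hop1"],
    "hop1": ["hop2a", "hop2b"],
    "hop2a": ["exchange_deposit"],
    "hop2b": ["mixer_in"],
}

def trace(start, ledger, max_hops=4):
    """Breadth-first trace of where funds flow from a flagged address."""
    seen, queue, reached = {start}, deque([(start, 0)]), []
    while queue:
        addr, hops = queue.popleft()
        if hops >= max_hops:
            continue
        for nxt in ledger.get(addr, []):
            if nxt not in seen:
                seen.add(nxt)
                reached.append((nxt, hops + 1))
                queue.append((nxt, hops + 1))
    return reached

print(trace("scam_wallet", transfers))
# [('hop1', 1), ('hop2a', 2), ('hop2b', 2),
#  ('exchange_deposit', 3), ('mixer_in', 3)]
```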

Sanctions and geopolitics continue to shape patterns as well. Sanctions can push illicit actors toward front companies, third-country intermediaries, trade-based manipulation, and payment pathways designed to conceal the true counterparty; enforcement matters often center on disguised ownership, falsified documentation, and routing that masks sanctioned end users.

Human trafficking adds another layer of financial-crime risk because it generates recurring illicit flows—recruitment fees, coercive debt, transport and lodging payments, and cash-out activity—often spread across many small transactions. Detection is difficult when trafficking-related activity is mixed with legitimate commerce, when victims are forced to use their own accounts, or when facilitators rely on cash, prepaid instruments, and money-mule networks.
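
One reason detection is hard is that the signal lives in aggregates rather than in any single payment. A toy sketch of the idea: flag accounts with a burst of small, repetitive transactions. The cutoff and count below are assumed values for illustration, not thresholds from the report.

```python
from collections import Counter

SMALL_AMOUNT = 200   # dollars: "small transaction" cutoff (assumed)
BURST_COUNT = 5      # small payments that trigger review (assumed)

def flag_burst_accounts(payments):
    """payments: list of (account_id, amount). Return accounts to review."""
    small = Counter(acct for acct, amount in payments if amount < SMALL_AMOUNT)
    return [acct for acct, count in small.items() if count >= BURST_COUNT]

payments = [("acct-7", 45)] * 6 + [("acct-9", 1500), ("acct-9", 80)]
print(flag_burst_accounts(payments))  # ['acct-7']
```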

To prepare for the evolving 2026 environment, recommended actions generally converge on faster detection and clearer accountability: institutions can harden onboarding and ongoing monitoring, stress-test controls for faster payment rails, and tighten artificial-intelligence governance so models are explainable and resilient; regulators can promote consistent expectations for data sharing and model oversight while enabling lawful collaboration; and customers can reduce exposure by strengthening account security, slowing down high-pressure requests, and verifying identities before transferring funds.

Beyond digital schemes, artificial intelligence is amplifying check fraud, Parsons added. Using advanced image-editing tools, fraudsters can make far more realistic alterations to checks stolen from the mail.

Security advisers also emphasize that the playbook remains familiar, often leaning on trust-building narratives and high-pressure prompts that steer victims toward quick payments. Longstanding examples include:

  • Romance scams
  • Investment scams

