Global Anti-Scam Summit Insights: Why Fraud Is Now a National Security Issue

Reflections from the Global Anti-Scam Summit (GASA), Washington, D.C. — with perspectives from Amazon, Google, Meta, DOJ, AARP, Capital One, Carnegie Mellon University, NCFTA, VISA, Zelle, FBI, Microsoft and many other leading private and non-profit organizations.

Fraud and scams are no longer a side issue in financial crime—they are reshaping consumer trust, draining local economies, and exposing gaps in how governments, platforms, and financial institutions work together. At the Global Anti-Scam Summit in Washington, D.C., one theme was impossible to miss: fraud has become a national security issue, but no one is truly in charge.
Panel at the Global Anti-Scam Summit (GASA)

From Scattered Efforts to “Strike Forces”

One of the most striking concepts discussed at GASA was the idea of a dedicated Scam Center Strike Force—an approach focused on:

  • Going after the leaders and infrastructure behind large-scale scam operations;
  • Recovering stolen funds where possible;
  • And building truly operational public-private partnerships to share data and intelligence.

This is a shift away from treating scams solely as isolated consumer incidents and toward recognizing them as coordinated, cross-border enterprises with sophisticated supply chains.

The Promise and Pain of Information Sharing

Everyone agrees it’s critical. Almost no one is sure how to do it.

Across panels—from big tech to banks to non-profits—one message was consistent: information sharing is essential if we want to slow the global fraud problem. But in practice, it is constrained by three major barriers:

  • Time and technology: Effective sharing requires infrastructure, data standards, and dedicated teams that many organizations simply don’t have.
  • Legal and regulatory uncertainty: Organizations are unsure what they’re allowed to share, with whom, and under which legal authority.
  • Liability and reputational risk: Many fear that sharing data could pull them into class-action lawsuits or regulatory scrutiny.

As a result, the perceived risks of sharing data often outweigh the perceived benefits. A truly global public-private fraud partnership is still a relatively new—and fragile—experiment.

The global challenge: uneven capacity and corruption

Information sharing is further complicated by the fact that not all countries are equally positioned to participate. Some:

  • Lack sophisticated judicial or law-enforcement capacity;
  • Face internal corruption that undermines trust and cooperation;
  • Or simply don’t yet treat digital fraud as a high-priority national issue.

Fraud is structurally global; enforcement and policy are still stubbornly local.

In the U.S., Fraud Is a National Problem with No National Owner

In the United States, no single entity is responsible for leading the fight against scams. At GASA, speakers noted that:

  • There are roughly a dozen bills in Congress aimed at fraud and exploitation;
  • Very few ever make it to the floor for a vote;
  • And even fewer come with real appropriations attached.

Key takeaway: Without White House–level commitment, there is no durable policy framework or coordination mechanism for fighting scams at scale. Fraud is a national security issue with no one clearly running it.

Several speakers argued that the U.S. needs something comparable to the National Center for Missing and Exploited Children, but focused on fraud and scams. Until then, progress is likely to remain uneven and driven largely state by state.

Still no simple place for victims to report

The United States still lacks a clear, centralized reporting system that is easy for the public to understand and access. This fragmentation:

  • Makes it confusing for victims to know where to turn;
  • Leads to under-reporting and inconsistent data;
  • And ultimately weakens the intelligence picture available to law enforcement and industry.

The Long Tail of Victimization

Victims aren’t just calling to report scams—they’re calling for help to rebuild their lives

AARP alone receives more than 250 calls per day from victims. These are not mere “incident reports”; they are calls from people who have already lost money, often their life savings, and are struggling with the emotional and practical fallout.

Many seek help within about 30 days of the scam, which is faster than we often see with other types of trauma. The need for support is immediate and deeply human: “What do I do now?” “Who can I trust?” “How do I tell my family?”

Fraud erodes local economies and community life

Fraud is not just about individual losses. When people lose their savings:

  • They cut back on local spending;
  • They stop donating to charities and community causes;
  • And they often withdraw from activities where trust is required.

This is the long-tail cost of fraud: our communities lose both financial resources and social capital.

Trust, Responsibility, and the Reality of Low Prosecution

One of the most sobering statistics discussed at GASA was this: only about 0.005% of scammers are prosecuted. In practical terms, most victims will never see their scammer identified, much less brought to justice.

Because they cannot direct their anger or grief toward the actual perpetrator, victims often redirect it toward the organizations connected to the scam:

  • Their bank or credit union;
  • The platform or app used in the scam;
  • Or the technology provider in the middle.

When scams erode trust, it is extremely hard to rebuild. People often feel that their technology providers and financial institutions should have prevented the scam entirely. At the same time, meaningful prevention also requires individuals to be empowered and willing to take more responsibility for their own online and financial behavior.

For banks and credit unions, this creates a structural challenge:

  • Fraud controls (e.g., card fraud, account takeover) are relatively well understood and technically manageable.
  • Scam controls (where a customer is socially engineered into sending money) are far more complex, because the customer is an active participant in the transaction (see the illustrative sketch below).
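
To make the distinction concrete, here is a minimal, purely illustrative Python sketch. The Payment fields, thresholds, and checks are hypothetical assumptions, not any institution's actual controls. It shows why the two problems differ structurally: checks that verify who is transacting pass an authorized scam payment, because the genuine customer is logging in and approving the transfer, so flagging a likely scam has to rely on behavioral context instead.

```python
from dataclasses import dataclass

@dataclass
class Payment:
    """Hypothetical, simplified payment context used only for illustration."""
    device_recognized: bool        # customer is transacting from a known device
    credentials_valid: bool        # login and MFA succeeded
    payee_is_new: bool             # first payment ever sent to this recipient
    amount_vs_typical: float       # amount as a multiple of the customer's usual transfer
    customer_on_active_call: bool  # e.g., staff note a live "advisor" coaching the customer

def classic_fraud_check(payment: Payment) -> bool:
    """Traditional unauthorized-fraud controls ask: is this really the customer?
    An authorized push-payment scam passes, because the real customer is present."""
    return not (payment.device_recognized and payment.credentials_valid)

def scam_risk_check(payment: Payment) -> bool:
    """Scam-oriented controls must weigh behavioral context instead, because the
    customer is a willing participant. Thresholds are illustrative, not real policy."""
    signals = [
        payment.payee_is_new,
        payment.amount_vs_typical > 5.0,
        payment.customer_on_active_call,
    ]
    return sum(signals) >= 2  # enough context to pause and start a human conversation

coached_transfer = Payment(True, True, True, 8.0, True)
print(classic_fraud_check(coached_transfer))  # False: nothing "unauthorized" to block
print(scam_risk_check(coached_transfer))      # True: pattern warrants human intervention
```

This is also why the human-intervention capacity discussed later in this piece matters so much: when behavioral signals like these fire, the most effective control is often a conversation, not a blocked transaction.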

Children, Sextortion, and the Hidden Violence of Scams

They are coming after our kids

One data point that should be front-page news: 35% of people between the ages of 7 and 17 have already been scammed. This is not a fringe issue. It is a mainstream childhood experience.

Sextortion: a rapidly escalating threat

Sextortion was described as one of the most urgent and lethal forms of digital exploitation. According to the figures discussed:

  • Sextortion has taken more U.S. lives than the terrorist group ISIS.
  • There were just 139 known cases in 2021.
  • By 2025, cases had exploded to around 50,000.
Chart: Sextortion cases have exploded in just a few years, from 139 known cases in 2021 to around 50,000 in 2025.

These numbers underscore that scam and extortion dynamics are not just financial issues—they are literally life-and-death problems for young people and families.

Why Human Intervention Still Matters

Despite all the talk about AI, automation, and analytics, one of the most important messages from GASA was simple:

Human intervention is often the only way to break the spell of a scammer.

Whether it is:

  • A romance scam,
  • An investment scam,
  • Or an “authority” scam impersonating law enforcement or government,

victims are frequently under a kind of psychological spell. Logical arguments and warnings alone often aren’t enough. What works is:

  • A human from a non-profit,
  • A trusted representative from a financial institution,
  • Or another person in their life who can calmly interrupt the groomer’s narrative and offer an alternative frame.

Scams Are Evolving. Our Education Model Has to Evolve Faster.

Beyond fear: training decision patterns and risk instincts

One of the clearest educational insights from the summit can be summarized this way:

Because individual scams constantly change, effective education needs to train the underlying decision patterns and risk instincts.

Traditional “fear-based” messaging (“Don’t click links!”, “Never trust strangers!”) is not a sustainable strategy. In fact, speakers noted that fear as a primary tool can:

  • Create pushback and resistance;
  • Lead to shame and silence after victimization;
  • And in some cases, cause people to disengage entirely from learning.

Education must be tailored, not one-size-fits-all

Anti-fraud education works best when it is tailored to specific audiences:

  • Older adults vs. teens and young adults;
  • First-time digital banking customers vs. crypto-curious investors;
  • Small business owners vs. frontline employees at a financial institution.

Each group brings different life experiences, risk perceptions, and digital habits. A single generic tip sheet cannot meet them where they are.

The Confidence Paradox: Why “I Can Spot a Scam” Is So Dangerous

Perhaps the most memorable statistic from GASA was the deep gap between confidence and outcomes:

  • 80% of people in a 5,000-person U.S. sample said they believe they can spot a scam.
  • 70% of the U.S. population has actually lost money to a scam.

In other words, most people think they are better protected than they really are. This is a classic example of overconfidence—and it has serious implications for fraud prevention.

Chart: The confidence gap. In a 5,000-person U.S. sample, 80% say they can spot a scam, yet 70% have lost money to a scam. High confidence does not equal low risk; in some cases, it may increase risk.

The paradox is that education can both reduce risk and accidentally increase it:

  • Education and awareness are essential to preventing scams.
  • But as people become more knowledgeable, they may also become more confident—and in some cases, more willing to take risks.

This is why modern scam education must be designed not just to deliver information, but to shape self-awareness, humility, and healthy doubt—especially when money, love, or fear are on the line.

AI as a Force Multiplier for Scammers

Artificial intelligence now sits at the center of both sides of this struggle. On the attacker side, AI:

  • Makes it easier and faster for scammers to establish trust and credibility;
  • Accelerates grooming and social engineering, which used to be very labor-intensive;
  • Expands the pool of potential scammers: individuals with little technical skill can now execute more sophisticated attacks.

One example highlighted at GASA was a kind of “scam-as-a-service” model, where:

  • Corporate AI tools with fewer guardrails are compromised or misused;
  • Access to those tools is then sold to scammers;
  • And those scammers use AI to generate malware, phishing content, or hyper-personalized scam scripts at scale.

The playing field is no longer defined just by who has technical skills, but by who has access to powerful models and how few guardrails surround them.

Implications for Banks, Credit Unions, and Consumer-Facing Organizations

For financial institutions, platforms, and consumer-facing organizations, the GASA conversations point to several practical implications:

  • Reframe scams as a strategic risk, not just a customer-service issue. Fraud and scams directly affect trust, retention, and long-term brand value.
  • Invest in human intervention capacity. Frontline staff, call centers, and non-profit partners are often the last line of defense before funds leave the system.
  • Build education that trains thinking, not just rules. Help people recognize patterns of social engineering and emotional manipulation, rather than memorizing static lists of “do’s and don’ts.”
  • Tailor messaging to specific audiences. Different segments need different stories, examples, and calls to action.
  • Participate in emerging information-sharing efforts—carefully but proactively. Even in the absence of perfect legal clarity, there is value in shaping the standards and frameworks that will define the next decade.

Where We Go from Here

The Global Anti-Scam Summit made one thing clear: the scam problem is not going away. If anything, AI and global connectivity are accelerating it. But the summit also highlighted the levers we still control:

  • How we design and deliver fraud education;
  • How seriously we treat victims and their long-term needs;
  • How much we are willing to collaborate across sectors and borders;
  • And how quickly we can move from fragmented efforts to coordinated strategy.

Fraud will always adapt. Our challenge now is to build systems—policy, technology, and education—that adapt faster, while keeping the human beings at the center of every scam story firmly in view.
