
Deepfakes and Dating Apps: How to Stop AI-Powered Fraud

The scale and sophistication of scams on dating apps are increasing at an alarming pace for two reasons: more people are running these scams, and generative AI makes it much easier for scammers to look and sound convincing.

In 2025, the United Nations Office on Drugs and Crime reported that organized crime groups have hundreds of large-scale scam compounds. Inside the compounds, the criminal groups hold human trafficking victims hostage, give them playbooks and AI-generated or stolen assets (like images or videos), and force them to work long hours running scams, including catfishing and romance scams.

Dating apps that promise their users genuine connections have to contend with this new reality and the growing challenge of safeguarding profile authenticity and match safety. And that’s where multi-layered identity verification comes in. Verifying the authenticity of profiles creates trust among users and deters fraudsters looking to exploit your platform.

To better understand why a multi-layered approach is important, let’s take a closer look at how people use AI to create or take over dating profiles.

How fraudsters use deepfakes to pass identity verification

Fraudsters increasingly use generative AI to create convincing deepfakes and generate identity documents. They use the images and videos to pass verification checks and create new accounts. When targeting existing users, they might use the person’s profile pictures to create a deepfake before attempting to reset the account’s password.

Either way, they’ll often inject the deepfake video into a virtual camera’s feed during the identity verification process. If you don’t know what to look for, you might think the injection attack is actually a live capture.

You can see real examples of injection attacks and AI-generated documents in Persona’s webinar, Deepfakes and AI-based fraud: Strategies to protect your business. You can also find examples in the annotated transcript of the webinar.

The next evolution of these attacks is the replay attack. Rather than creating a deepfake, fraudsters buy or steal recordings of real individuals taking selfies and inject a replay of the video into verification flows. The webinar and transcript cover those as well.

Read more: 7 expert takes on stopping AI-based fraud, injection attacks, and the latest fraud trends

You need a multi-layered defense to stop these attacks

Layering different types of fraud detection is generally the best way to catch fraudsters, but it’s especially important for stopping bad actors using generative AI. Some common layers that can go into your defense include:

  • Passive signals: The risk signals you can collect without asking your users to submit more information. These include the user’s location, IP address, device fingerprint, and browser fingerprint.
  • Behavioral signals: These risk signals are a type of passive signal that specifically rely on how the user behaves, such as how often they use keyboard shortcuts or hesitate between actions.
  • Active signals: The risk signals that depend on the user taking an action, such as uploading a picture of their ID or a selfie. Active signals can provide valuable insights, but they also introduce new friction for users.
  • AI-specific models: Some fraud prevention solutions build models trained to detect deepfakes. The models are often automatically added to common verification checks, such as a liveness detection model during a selfie check.
  • Profile-to-selfie comparisons: Dating apps have a slight advantage over other organizations because legitimate users often upload selfies to their profiles. If you notice other potentially risky signals, such as someone logging in from a new device and location, you could request a selfie and compare it to the profile’s pictures. A mismatch could indicate that someone is trying to take over the account.
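To make the layering concrete, here is a minimal sketch of how these signals might be combined into a single risk score that drives an allow / step-up / block decision. Every signal name, weight, and threshold below is hypothetical and would need to be tuned against a platform's own fraud data; this is an illustration of the pattern, not any vendor's actual implementation.

```python
# Hypothetical weights per triggered risk signal. Real platforms would
# tune these (or replace them with a trained model) using their own data.
SIGNAL_WEIGHTS = {
    "new_device": 0.2,              # passive: unfamiliar device fingerprint
    "ip_location_mismatch": 0.25,   # passive: IP geolocation far from usual area
    "no_typing_hesitation": 0.15,   # behavioral: form filled with no natural pauses
    "liveness_check_failed": 0.4,   # AI-specific: deepfake/liveness model flagged the capture
    "selfie_profile_mismatch": 0.4, # active: fresh selfie doesn't match profile photos
}

def risk_score(signals: dict[str, bool]) -> float:
    """Sum the weights of every triggered signal, capped at 1.0."""
    score = sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))
    return min(score, 1.0)

def decide(signals: dict[str, bool]) -> str:
    """Route the session based on the combined score (thresholds are illustrative)."""
    score = risk_score(signals)
    if score >= 0.6:
        return "block"
    if score >= 0.3:
        return "step_up"  # e.g. request a fresh selfie to compare against profile photos
    return "allow"

# A login from a new device in an unusual location isn't blocked outright,
# but it does trigger step-up verification.
print(decide({"new_device": True, "ip_location_mismatch": True}))  # step_up
```

The point of the sketch is the routing, not the numbers: passive and behavioral signals alone rarely justify blocking a user, but together they can justify asking for an active signal, which keeps friction low for the majority of legitimate sessions.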

Compare vendors’ offerings carefully, since solutions vary in the signals, models, and tools they provide. Additionally, dating apps often collect data points of their own that can be helpful for fraud prevention. Ideally, you can add these to your fraud platform, or use an orchestration tool to connect everything, to make better-informed decisions.

Next steps: balance security with user experience

The rise of AI-powered fraud doesn’t mean you have to choose between security and smooth onboarding. By implementing risk-based verification flows, you can use real-time signals to stop sophisticated attacks while keeping friction low for legitimate users.

For more information, Persona’s signal strategy ebook is a thorough guide on creating and using a signals-based strategy. It also has a checklist of active, behavioral, and passive risk signals that you can incorporate into your defenses.

Louis DeNicola spent over 12 years running a freelance content business and writing about finances and fraud for clients. He is currently a content marketing manager at Persona, where he focuses on fraud prevention, identity verification, and age assurance. Outside of work, you can often find him at the climbing gym, in the kitchen (cooking or snacking), or relaxing with his wife and cat in Oakland.

Global Dating Insights is part of the Industry Insights Group. Registered in the UK. Company No: 14395769