Match Group Uses AI to Encourage Better User Behavior
Match Group, the company behind Tinder and Hinge, is introducing artificial intelligence to address inappropriate behavior on its platforms. The initiative aims to promote more respectful interactions, particularly among male users, by detecting potentially offensive messages and prompting senders to reconsider before hitting send.
Yoel Roth, Match’s head of trust and safety, explained that the company’s goal is to drive behavioral change in online dating. “A big part of our safety approach is focused on driving behavioural change so that we can make dating experiences safer and more respectful,” Roth said. According to Match, about 20% of users who receive these AI-generated prompts choose to revise their messages.
This move comes at a time when dating apps are seeing a decline in engagement, particularly among Generation Z users. The industry has been grappling with “dating app fatigue”, prompting platforms such as Tinder, Badoo, and Bumble to introduce new features, including community-building tools, to counteract slowing growth. Match has acknowledged that young women, a key demographic, are especially affected by burnout on dating apps.
Beyond improving user behavior, Roth is also focusing on combating romance scams, which have become a growing issue. The U.S. Federal Trade Commission estimated that consumers lost $823 million to such scams last year. While AI-driven fraud, such as deepfakes and automated bots, is a concern, Roth emphasized that the biggest threats still come from organized crime rings using real people to deceive victims.
Despite shifts in U.S. online safety policies, Roth said that Match’s commitment to trust and safety remains unchanged, as do the platform’s core products and policies. “We’re not just doing it because we think it’s the right thing to do morally. We’re doing it because we know it’s the right thing from a business perspective,” he said.