This article is brought to you by Stream’s Content Partnership with Global Dating Insights. Stream helps apps like Feeld, Paired, Trueflutter, and Wink build best-in-class Chat and Activity Feed experiences to help spark meaningful conversations.
The number one piece of advice given to singles looking for a partner is, “Be yourself!” But singles struggle to be themselves when they’re fearful of safety issues, harmful behaviour, and potential scams. That’s why trust & safety measures are such a key part of the online dating world.
In a recent episode of The GDI Podcast, Adnan Al-Khatib, Principal Product Manager – Moderation at Stream, shared some important insights about how dating brands can identify harmful content and bad actors.
Why Do We Need Moderation?
There are multiple reasons why online social platforms invest in content moderation, Adnan shares.
First, content moderation directly boosts user retention and growth rates. When users who join a platform for the right reasons encounter abusive language or scam attempts, they are quickly driven away.
Second, many governments around the world are tightening online safety regulation, and moderation keeps platforms on the right side of new policies. For example, Match Group has identified how the EU's Digital Services Act will require safety and moderation measures from online platforms.
Finally, Adnan shares that the founders of online dating platforms have an ethical responsibility to ensure they’re creating a safe space for individuals to connect. To do this effectively, content moderation is required.
What Can Be Done to Make Platforms Safer?
Adnan explains that the optimal tool for content moderation is something that combines the best traits of human and automated moderators.
Human moderators are great at analyzing a user’s message and the context around it. But with thousands of messages being sent constantly, automation is needed to do the heavy lifting.
Therefore, Adnan’s team has developed moderation tools where human moderators are only required to make a few judgments each day, focusing on new issues they haven’t moderated before. Repetitive decision-making is left to the automation to handle.
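This human-in-the-loop split can be sketched as a simple queue: decisions humans have already made are replayed automatically on repeat content, and only genuinely new cases reach a moderator. The class, the naive fingerprinting, and the verdict labels below are illustrative assumptions, not Stream's actual implementation.

```python
# Hypothetical sketch: automation replays past human decisions on repeated
# content; only unseen cases land in the human review queue.
from dataclasses import dataclass, field

@dataclass
class ModerationQueue:
    # Maps a content "fingerprint" to a previously made human verdict
    # such as "block" or "allow".
    decisions: dict = field(default_factory=dict)
    human_review: list = field(default_factory=list)

    def fingerprint(self, message: str) -> str:
        # A real system would use embeddings or classifiers; normalized
        # text stands in for that here.
        return " ".join(message.lower().split())

    def submit(self, message: str) -> str:
        key = self.fingerprint(message)
        if key in self.decisions:
            return self.decisions[key]      # automation handles the repeat
        self.human_review.append(message)   # new issue -> human queue
        return "pending"

    def record_human_decision(self, message: str, verdict: str) -> None:
        self.decisions[self.fingerprint(message)] = verdict

queue = ModerationQueue()
print(queue.submit("send me a gift card"))   # pending: never seen before
queue.record_human_decision("send me a gift card", "block")
print(queue.submit("Send me a  GIFT card"))  # block: automation replays it
```

The design choice mirrors the point above: the human decides once, and identical cases never reach the queue again.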
Focusing on a message’s intent rather than on specific keywords or phrases is a unique aspect of Stream’s moderation services. This means that even as bad actors change the scripts and talking points they use in fraudulent conversations, Stream can still help identify these harmful interactions.
All content moderators need to do is collect examples of messages or content they want to prohibit or ban. From there, Stream’s moderation tools will flag up content with a similar intent, even if the phrasing is different.
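The idea of flagging by intent rather than exact phrasing can be illustrated with a toy similarity check: moderators supply example messages to prohibit, and new content is flagged when it is close to any example even if worded differently. Stream's production system is far more sophisticated; the bag-of-words cosine similarity, example list, and threshold here are purely illustrative assumptions.

```python
# Conceptual sketch of example-driven, intent-style flagging using
# bag-of-words cosine similarity (not Stream's actual technique).
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Examples a moderation team has collected and wants to prohibit.
PROHIBITED_EXAMPLES = [
    "please wire money to my account",
    "buy me a gift card and send the code",
]

def is_flagged(message: str, threshold: float = 0.5) -> bool:
    vec = vectorize(message)
    return any(cosine(vec, vectorize(ex)) >= threshold
               for ex in PROHIBITED_EXAMPLES)

print(is_flagged("could you wire money to my account please"))  # True
print(is_flagged("want to grab coffee this weekend"))           # False
```

A rephrased scam request still scores close to the stored example, while an ordinary message does not, which is the behavior the intent-based approach is after.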
This process is particularly important as scammers adopt new generative AI technologies (such as ChatGPT). While these technologies can create more convincing content, moderating by analyzing intent will reduce bad actors, regardless of whether they’re human or an advanced chatbot.
Navigating Adoption Challenges
Adnan shares that, in his experience, when online social platforms reach around 100,000 users, they begin to form a team of content moderators. Developing the correct policies requires input from the legal department, the product team, trust & safety experts, and engineers.
When it comes to the tech side, integrating moderation tools into the existing infrastructure for user messaging and posting “can take time and effort,” he admits.
Adopting moderation tools is much easier when a platform already uses Stream’s chat functionality. In these cases, moderation teams are not required to do any integration or development work, as moderation can be switched on immediately.
These teams can go straight to Stream’s website and dashboard to moderate their platforms, taking advantage of the auto-moderation tools discussed previously.
And with the ability to customize the content and intent that they’re trying to prohibit, moderation teams are able to create policies that are flexible and work for their unique dating product.
A safe in-app chat environment is paramount in online dating, so choosing a messaging solution for your dating app that emphasizes safety and security allows your users to comfortably connect and build relationships. Unlock Stream’s enterprise-grade APIs and SDKs with powerful auto-moderation tools to help your users feel safe: they can mute, block, and report unwanted interactions, and you can swiftly identify and remove trolls through the moderation dashboard when needed. You can trust Stream to power your dating chat app and keep users safe in the process. Try it risk-free for 30 days.