Grindr Partners With Spectrum Labs To Develop AI Moderation
Grindr has announced a partnership with Spectrum Labs to develop a better method of content moderation.
The Spectrum Labs and Grindr Trust and Safety teams are working together to train models carefully, and specifically for Grindr’s user base, so that they are as accurate and fair as possible. Their goal is to implement machine learning within an ethical, human-centered framework that is in the community’s best interest and helps make Grindr a safer, more inclusive place for everyone. Grindr said that Spectrum Labs is a “great partner for us – they recently put out a whitepaper on moderation best practices for the LGBTQ+ community – a nice complement to some of our own work.”
Grindr released a statement saying:
There are few things more important than building a positive environment on Grindr. Over the years, we’ve invested heavily in enhancing our safety practices in service of making Grindr a place where our users feel safe and welcome, and we’re taking another big step today.
We’re excited to announce a partnership with Spectrum Labs to implement machine learning models for proactively fighting bad actors on our platform. This will be an ongoing project: first we’re focusing on fighting drugs, solicitation, and underage users; then we’ll move to harassment; eventually we hope to build tools that could encourage friendlier behaviour and safer interactions.
Historically, we have reviewed potentially illicit text on Grindr through two means: reports from our community and keywording. Reports from our community are crucial and give us much-needed insight and context. Keywording allows us to detect content proactively, but it is also limited – there can be a high number of false positives, terms change quickly, and context can be lost.
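To make the limitation described above concrete, the sketch below shows what naive keyword matching looks like in practice; the blocked-term list, sample messages, and the keyword_flag helper are invented for illustration and are not part of Grindr’s or Spectrum Labs’ actual tooling.

```python
# Purely illustrative sketch of keyword-based flagging; the blocked-term list,
# sample messages, and helper are invented and do not describe Grindr's system.
import re

BLOCKED_TERMS = {"party favors", "420"}  # hypothetical slang list; real terms shift quickly

def keyword_flag(message: str) -> bool:
    """Flag a message if any blocked term appears, ignoring context entirely."""
    lowered = message.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", lowered) for term in BLOCKED_TERMS)

# False positive: the term appears, but the context is innocent.
print(keyword_flag("Bringing party favors for my niece's birthday party"))  # True
# Miss: newer slang that a static list does not yet contain.
print(keyword_flag("looking to parTy tonight"))  # False
```

This is exactly the trade-off the statement points to: term lists need constant upkeep, and matching alone cannot see the context that community reports supply.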
Machine learning will help us detect and take action on bad actors and illicit content automatically, which frees up our moderators to concentrate on the nuanced, difficult cases that really need their attention. Our moderators’ mental health is important to us, and this is one step we can take to make their jobs better.
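As a rough picture of the kind of triage the statement hints at (an illustrative assumption on our part, not Grindr’s or Spectrum Labs’ actual pipeline), a classifier score can auto-action only the clear-cut cases and queue the uncertain ones for a human moderator:

```python
# Hypothetical triage of a model's confidence score; the thresholds, labels,
# and triage function are invented for illustration, not Grindr's pipeline.
from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.95   # assumed: very confident, act automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: uncertain, escalate to a moderator

@dataclass
class Decision:
    action: str   # "auto_remove", "human_review", or "allow"
    score: float

def triage(score: float) -> Decision:
    """Automate the obvious cases so moderators only see the nuanced ones."""
    if score >= AUTO_ACTION_THRESHOLD:
        return Decision("auto_remove", score)
    if score >= HUMAN_REVIEW_THRESHOLD:
        return Decision("human_review", score)
    return Decision("allow", score)

print(triage(0.99))  # Decision(action='auto_remove', score=0.99)
print(triage(0.72))  # Decision(action='human_review', score=0.72)
```

Keeping the auto-action bar well above the review bar is one common way to make false automated removals rare while still shrinking the volume that reaches human moderators.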
We know that machine learning isn’t perfect, and that’s why we will still operate our industry-leading ban-appeal process, which gives all of our users the right to have a manual human review of any automated decision.
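The appeal guarantee can be read as a simple rule: any automated decision can be pushed back into a human queue. The sketch below is our own illustration of that rule, with invented record and queue types rather than Grindr’s actual appeal system.

```python
# Hypothetical appeal flow; the BanRecord and AppealQueue types are invented
# to illustrate "human review of any automated decision", not Grindr's code.
from dataclasses import dataclass, field

@dataclass
class BanRecord:
    user_id: str
    automated: bool          # True if a model, not a person, made the call
    status: str = "banned"   # "banned", "under_review", or "reinstated"

@dataclass
class AppealQueue:
    pending: list = field(default_factory=list)

    def appeal(self, record: BanRecord) -> None:
        """Any automated ban that is appealed goes to a moderator for review."""
        if record.automated:
            record.status = "under_review"
            self.pending.append(record)

queue = AppealQueue()
ban = BanRecord(user_id="u123", automated=True)
queue.appeal(ban)
print(ban.status)  # under_review: a human makes the final call
```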
Because it’s important to get this right, we’re going to take our time to implement these models carefully over the next year. This is a big step, and I’m excited for what we’ll be able to do to improve the experience for our users all over the world.