A new post from Payton Iheme, Vice President and Head of Global Public Policy at Bumble, explains how the dating platform is working with other organisations to address unethical uses of AI.
The column highlights that AI chatbots and synthetic media (AI-generated images, text, music and more) are gaining attention. While these innovations are exciting, Iheme warns that the same tools can be put to harmful use.
For example, AI-generated images can enable abuses such as deepfake pornography, in which a person’s likeness is used to create sexually explicit media without their consent.
“If women and folks from underrepresented groups don’t have a seat at the table at the genesis of new technologies, we’re, as the adage goes, on the menu. We must have a voice in the very creation of this emerging media, not just the conversation surrounding its evolution”, Iheme wrote.
To address these issues, Bumble has been working behind the scenes with Partnership on AI, a non-profit coalition that focuses on the ethical use of AI. Bumble joins organisations including the BBC, Adobe, TikTok, OpenAI and Synthesia as a launch partner for the coalition’s new framework for responsible synthetic media.
This isn’t the first time Bumble has helped tackle online misogyny. It recently open-sourced its AI detection tool, which curbs the sharing of unwanted lewd images, and it has worked alongside legislators in the UK and US to criminalise cyberflashing.