News

Facebook Forms Ethics Team to Prevent Bias in A.I. Software

Facebook is currently under pressure to prove that its algorithms are being deployed correctly and responsibly.

The company has revealed that it has formed a special team to develop software designed to ensure that its artificial intelligence systems make decisions as ethically as possible.

Recently, Facebook announced that it will start offering to translate messages which people receive via the Messenger app. Translation systems must first be trained on data, and the ethics team will help ensure that Facebook’s systems are taught to give fair translations.

Isabel Kloumann, a research scientist at Facebook, told CNBC: “We’ll look back and we’ll say, ‘It’s fantastic that we were able to be proactive in getting ahead of these questions and understand before we launch things what is fairness for any given product for a demographic group.’”

Facebook has stopped short of forming a board focused on AI ethics, a step Axon has recently taken.

The moves also align with a broader industry recognition that AI researchers have to make their systems inclusive.

Kloumann added that Facebook doesn’t plan to release the new Fairness Flow software to the public under an open-source license, but that the team could publish academic papers documenting its findings.

Read more here.

Chloe Gay

Chloe is a reporter at Global Dating Insights. Originally from Bracknell, she is studying Communication & Media at Bournemouth University. She enjoys writing, travelling and socialising with her friends and family.
