Facebook has announced that it is ready to block and label misleading videos created by AI, known as deepfakes, according to a company post from January 6.
Monika Bickert, Vice President of Global Policy Management, shared that Facebook has partnered with more than 50 global experts in various fields to improve its detection mechanisms and strengthen them with a dedicated AI system.
As a result of these partnerships, the social media company is updating its policies and commits to removing and labelling misleading video content that has been edited “beyond adjustments for clarity or quality” or is “the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic”.
Moreover, Facebook is taking a further step by labelling misleading content in order to prevent its spread to other platforms:
“If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.”
Bickert highlighted that, while deepfakes “are still rare on the internet, they present a significant challenge” for both the social media industry and society, given that AI could be used to manipulate people. She further shared that Facebook’s enforcement strategy will help identify those behind the creation of misleading content, and reassured readers that these rules do not apply to content created as “parody or satire”.
As Future Time earlier reported, Witness Lab pointed to potential challenges of using current technological solutions against the spread of deepfakes, identifying no fewer than 14 dilemmas for public discussion. Researchers warned that if implemented without precautions, these tools could give certain companies more power and raise even more questions about data storage, access, and ownership, further diminishing people’s ability to control their own data.