Facebook has integrated an AI tool that recognizes cyberbullying on Instagram, giving users a chance to reconsider sharing potentially offensive content, according to a Facebook announcement from December 16.
The AI tool prompts Instagram users when their photos, videos, or comments are likely to be considered offensive. Specifically, it informs users that similar content has previously been reported for bullying, giving them a chance to pause and rethink their content before posting.
The AI integration follows a testing period earlier this year, during which users were notified about potentially offensive comments. After promising results, the feature has now been expanded to all Instagram content, first in selected countries, with a global rollout planned in the coming months.
Ross Ellis, founder and CEO of STOMP Out Bullying, told Forbes that while this might be a good start, more must be done to meaningfully reduce cyberbullying rates, starting with integration across all platforms. She also noted that tech-savvy bullies may find ways to circumvent the rules or move to other apps to reach their targets.
Despite heavy criticism over the amount of bullying content on Facebook itself, the AI feature is for now limited to Instagram. Liza Crenshaw, a Facebook company spokesperson, said that an extension to Facebook is planned for the future.