To adjust to the social distancing required by the Covid-19 pandemic, social media platforms will lean more heavily on artificial intelligence to review content that potentially violates their policies. That means your next YouTube video or snarky tweet might be more likely to get taken down in error.
As they transition their operations to a primarily work-from-home model, platforms are asking users to bear with them while acknowledging that their automated technology will probably make some mistakes. YouTube, Twitter, and Facebook recently said that their AI-powered content moderators may be overly aggressive in flagging questionable content and encouraged users to be vigilant about reporting potential mistakes.
In a blog post on Monday, YouTube told its creators that the platform will turn to machine learning to help with “some of the work normally done by reviewers.” The company warned that the transition will mean that some content will be taken down without human review, and that both users and contributors to the platform might see videos removed from the site that don’t actually violate any of YouTube’s policies.
The company also warned that “unreviewed content may not be available via search, on the homepage, or in recommendations.”
Similarly, Twitter has told users that the platform will increasingly rely on automation and machine learning to remove “abusive and manipulated content.” Still, the company acknowledged that artificial intelligence would be no replacement for human moderators.
“We want to be clear: while we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes,” said the company in a blog post.
To compensate for potential errors, Twitter said it won’t permanently suspend any accounts “based solely on our automated enforcement systems.” YouTube, too, is making adjustments. “We won’t issue strikes on this content except in cases where we have high confidence that it’s violative,” the company said, adding that creators would have the chance to appeal these decisions.
Facebook, meanwhile, says it’s working with its partners to send its content moderators home and to ensure that they’re paid. The company is also exploring remote content review for some of its moderators on a temporary basis.
“We don’t expect this to impact people using our platform in any noticeable way,” said the company in a statement on Monday. “That said, there may be some limitations to this approach and we may see some longer response times and make more mistakes as a result.”
The move toward AI moderators isn’t a surprise. For years, tech companies have pushed automated tools as a way to supplement their efforts to fight the offensive and dangerous content that can fester on their platforms. Although AI can help content moderation move faster, the technology can also struggle to understand the social context for posts or videos and, as a result, make inaccurate judgments about their meaning.
In fact, research has shown that algorithms built to detect racist content can themselves be biased against black people, and the technology has been widely criticized as vulnerable to discriminatory decision-making.
Normally, the shortcomings of AI have led us to rely on human moderators who can better understand nuance. Human content reviewers, however, are by no means a perfect solution either, especially since they can be required to work long hours analyzing incredibly traumatic, violent, and offensive words and imagery. Their working conditions have recently come under scrutiny.
But in the age of the coronavirus pandemic, having reviewers working side by side in an office could not only be dangerous for them but could also risk further spreading the virus to the general public. Keep in mind that these companies may be hesitant to let content reviewers work from home, since those reviewers have access to lots of private user information, not to mention highly sensitive content.
Amid the novel coronavirus pandemic, content review is just another way we’re turning to AI for help. As people stay indoors and look to move their in-person interactions online, we’re bound to get a rare look at how well this technology fares when it’s given more control over what we see on the world’s most popular social platforms. Without the influence of human reviewers that we’ve come to expect, this could be a heyday for the robots.