Are content moderators fighting a losing battle in the war against online abuse?

By Pearse McGrath

Social media sites like Twitter and Facebook have come under fire in recent years for their content moderation – or lack thereof. The City spoke to Mark Walsh, a content moderator for Accenture in Dublin.

Image via Creative Commons license

With the sheer number of people on social media, some would think it is impossible to properly moderate all content. Walsh agrees: “There are something like 400 people doing the exact same job as me throughout Accenture in Dublin. I don’t think that even with a lot more staff we could ever be quick enough to screen all of the reported content in the UK and Ireland before more is reported. Things are constantly being reported in huge volumes, so Accenture will always be looking for more people to do this job.”

To the uninformed observer, it seems that stories about abuse on social media are becoming more and more common. Walsh says it is hard to know “whether abuse is getting better or worse. I’ve only been working with Accenture for just over three months, so that hasn’t been long enough to see considerable change in online abuse. There is certainly tons of it out there at the moment, though; some of the things I’ve seen personally, and things other people have told me they’ve seen in the job, are absolutely horrific.”

Walsh described the training he underwent before starting the job: “The training is in two stages. The first stage is three weeks long, where a team leader goes over almost all of the policies with us, and we take two tests at the end to see if we progress to the next stage. In the second stage we move on to actually deciding whether reported content stays up or needs to be taken down. It takes a minimum of two weeks but usually far longer for most people. You finish it when you score 81% or higher on your actioned jobs for two weeks in a row.”

Explaining how content comes to the attention of moderators, Walsh said: “We log onto something called SRT every morning, and this is where all of the reported content is displayed for us to action. It’s an endless cycle that we never finish, as there is always more content being reported than we can action. We simply log in and the reported content appears; we decide if it’s violating or not, we hit confirm, and the next piece of content appears instantly.”

The policies of social media giants like Twitter and Facebook remain unclear to many, and Walsh says: “There are policies that we use to action jobs, but a lot of it also comes down to judgement. The policies can be quite unclear at times, and different people will read certain parts of the policy completely differently. Oftentimes our answers will differ from those of the people who are correcting them.”

Walsh describes some of the frustrations he has faced: “Particular exceptions to the policies are very frustrating. For example, you can’t say that you hate or are disgusted by a group of people based on things like their ethnicity, race or gender. Something like ‘Women are disgusting’ would clearly violate this rule and be taken down. However, there are exceptions hidden throughout the policy that aren’t made clear. One of them is if it’s in the context of a breakup scenario. But a breakup scenario is never clearly defined, and my interpretation of one could differ significantly from someone else’s.”

Many onlookers have made suggestions for how to improve content moderation, and Walsh had some ideas of his own.

“Clear up blatant problems with their policies,” he says. “Far too much is left up to interpretation, and this causes drastic differences in results. Also, a particular annoyance of mine is Facebook’s policies surrounding animal cruelty. Virtually no animal cruelty gets taken down – it’s absolutely disgraceful.”

With social media continuing to grow in popularity, many will look towards the future of content moderation with little hope – and it is hard to see any significant improvement being made without drastic changes.
