
Facebook Moderation Firm Calls it Quits

Posted: 2019-11-02 03:33pm
by SolarpunkFan
Futurism wrote:
Moderating content for Facebook is traumatic. That’s not an opinion — it’s a fact.

Thousands of people spend their work days deciding whether posts violate Facebook’s content policies. And a growing number have spoken to the media about the terrible toll of seeing countless images and videos depicting violence, sex abuse, child pornography, and torture. In March 2018, one moderator in Tampa, Florida, actually died right at his desk.

That man, Keith Utley, was employed by a firm called Cognizant, which reportedly signed a two-year, $200 million contract with Facebook to keep the platform free of objectionable content — and, in a huge blow to Facebook’s moderation strategy, it just announced it’ll cut ties with the social media company when that contract runs out.

“We have determined that certain content work in our digital operations practice is not in line with our strategic vision for the company, and we intend to exit this work over time,” Cognizant told BBC News. “This work is largely focused on determining whether certain content violates client standards — and can involve objectionable materials.”

“In the meantime, we will honor our existing obligations to the small number of clients affected and will transition, over time, as those commitments begin to wind down,” the firm later added. “In some cases, that may happen over 2020, but some contracts may take longer.”

BBC News wrote that the decision will lead to the loss of an estimated 6,000 jobs and affect both the Tampa moderation site and one in Phoenix, Arizona.

“We respect Cognizant’s decision to exit some of its content review services for social media platforms,” Facebook’s Arun Chandra told BBC News. “Their content reviewers have been invaluable in keeping our platforms safe — and we’ll work with our partners during this transition to ensure there’s no impact on our ability to review content and keep people safe.”

Cognizant wasn’t Facebook’s sole source of content moderators — the company has 20 review sites employing approximately 15,000 people across the globe. But even that army of moderators hasn’t been enough to prevent policy-violating content from slipping through the cracks.

Perhaps most notably, an Australian man used Facebook to livestream an assault on two mosques in New Zealand in March that led to the deaths of 51 people. Not only did the bloody video rack up thousands of views before Facebook’s moderators took it down, but the company struggled to remove copies of the footage from its platform in the aftermath of the slaughter.

If Facebook considers its platform “safe” now, it’s hard to imagine what it could look like if the social network doesn’t quickly replace the Cognizant employees currently comprising more than a third of its moderation work force.

Editor’s Note: This article was updated to correct the citizenship of the man who attacked the New Zealand mosques.

More issues regarding Farcebook. How did a day ending in "y" come so soon again? :roll:

Re: Facebook Moderation Firm Calls it Quits

Posted: 2019-11-02 03:40pm
by Ace Pace
You mean "more issues when running internet-scale communities". Google, Twitter, Reddit, and Facebook, along with the Chinese social media companies, all have the same issues.

When you connect more than 2 billion people, you get horrible people. When you have no firm definition of what counts as bad content, stuff slips through.

A humane, computer-based alternative does not exist right now.

So either we need to dump the assumption that people will be nice online and curate everything, killing every user-content-driven website on earth, or we need to figure out how we live with the social costs of this activity.

Re: Facebook Moderation Firm Calls it Quits

Posted: 2020-01-05 08:31am
by Xisiqomelir
re: paid moderator mental health and having fewer people die at their desks...

How about an auto-mosaic/blur filter for all of this unfiltered content? On the various UNIX/UNIX-like systems, some hideous, offensive raw user content like:

[image: example of raw, unfiltered user content]

would come in, and the content moderation pipeline could do something like

Code:

convert hideous_racist_guro_pornographic_rawimage.png -blur 0x8 sanitized_SFW_cleanimage.png
yielding this hopefully less mentally traumatic image for the mod drones:

[image: the same content after the blur pass]

Then, if it wasn't clear from the blur whether they should reject, they could be given a dial to progressively unblur the image in their workstation interfaces.
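
A minimal sketch of how that dial might be fed, assuming the same ImageMagick convert tool as above; the preview_blur_* file names and sigma values here are made up for illustration, not anything Cognizant actually runs. The idea is to pre-render a handful of blur levels so the interface can step from heavily obscured down to the original without re-processing:

Code:

SRC=hideous_racist_guro_pornographic_rawimage.png
# Pre-render progressively weaker blurs for the hypothetical "unblur dial".
for SIGMA in 16 12 8 4 2; do
    convert "$SRC" -blur "0x${SIGMA}" "preview_blur_${SIGMA}.png"
done
# Dial fully open shows the untouched original.
cp "$SRC" preview_blur_0.png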

Boom, drastic drop in staff depression/death rates, probably a jump in productivity since you can click through blurs without taking a 2 minute retching break, and some :wanker: PR you can drop about how your miserable corporate hive cares sooo much about worker mental health!

BRB pitching $500,000 tender to Cognizant!

Re: Facebook Moderation Firm Calls it Quits

Posted: 2020-01-05 12:55pm
by Starglider
That has been tried along with converting content to greyscale. It helps a little bit.
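
For reference, a greyscale pass of the kind described here could be layered onto the blur idea with the same ImageMagick convert tool from the earlier post; the file names below are placeholders, not anything the moderation firms are known to use:

Code:

# Hypothetical example: strip colour (and optionally blur) before a reviewer sees the frame.
convert raw_upload.png -colorspace Gray -blur 0x8 desaturated_preview.png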

Re: Facebook Moderation Firm Calls it Quits

Posted: 2020-01-06 02:56pm
by SolarpunkFan
Xisiqomelir wrote: 2020-01-05 08:31am
Snip

From what I gather, the big hurdle is that computer vision is still nowhere near the level of your average human. Progress has been made, but we're still in the "Kitty Hawk" era when what's needed is a Stratolaunch aircraft.

Re: Facebook Moderation Firm Calls it Quits

Posted: 2020-01-06 04:14pm
by LadyTevar
I was an AOL Chatroom mod (volunteer/unpaid) back in the day. It was not easy back then, keeping chatrooms clear of disruption, especially the busy ones. You were expected to Announce Yourself, Warn Them to Stop TWICE, and only THEN were you allowed to use the Block/Ban codes, which really didn't work well when said a-hole was typing offensive words as fast as they could (or copy/pasting them).

Now there's photos, there's gifs, there's memes, and they're shared, reposted, and commented on. I'm amazed the company is able to do as much filtering as it does.