For years, non-profits in Sri Lanka had warned that Facebook posts were playing a role in escalating ethnic tensions between Sinhalese Buddhists and the country’s Muslim minority, but the company ignored them. It took six days for Facebook to respond to the hate speech report. “Thanks for the feedback,” the company told the whistleblower, who posted the response to Twitter. The content, Facebook continued, “doesn’t go against one of our specific Community Standards.”
The post stayed online, part of a wave of calls for violence against Muslims that flooded the network last year. In late February 2018, a mob attacked a Muslim restaurant owner in Ampara, a small town in eastern Sri Lanka. He survived, but there were more riots in the mid-size city of Kandy the following week, resulting in two deaths before the government stepped in, taking measures that included ordering Facebook offline for three days.
The shutdown got the company’s attention. It appointed Jessica Leinwand, a lawyer who served in the Obama White House, to figure out what had gone wrong. Her conclusion: Facebook needed to rethink its permissive attitude toward misinformation. Before the riots in Sri Lanka, the company had tolerated fake news and misinformation as a matter of policy. “There are real concerns with a private company determining truth or falsity,” Leinwand says, summing up the thinking.
But as she began looking into what had happened in Sri Lanka, Leinwand realised the policy needed a caveat. Starting that summer, Facebook would remove certain posts in some high-risk countries, including Sri Lanka, but only if they were reported by local non-profits and would lead to “imminent violence.” When Facebook saw a similar string of sterilisation rumours in June, the new process seemed to work. That, says Leinwand, was “personally gratifying”— a sign that Facebook was capable of policing its platform.
But is it? It’s been almost exactly a year since news broke that Facebook had allowed the personal data of tens of millions of users to be shared with Cambridge Analytica, a consulting company affiliated with Donald Trump’s 2016 presidential campaign. Privacy breaches are hardly as serious as ethnic violence, but the ordeal did mark a palpable shift in public awareness about Facebook’s immense influence. Plus, it followed a familiar pattern: Facebook knew about the slip-up, ignored it for years, and when exposed, tried to downplay it with a handy phrase that chief executive officer Mark Zuckerberg repeated ad nauseam in his April congressional hearings: “We are taking a broader view of our responsibility.” He struck a similar note with a 3,000-word blog post in early March that promised the company would focus on private communications, attempting to solve Facebook’s trust problem while acknowledging that the company’s apps still contain “terrible things like child exploitation, terrorism and extortion.”
If Facebook wants to stop those things, it will have to get a better handle on its 2.7 billion users, whose content powers its wildly profitable advertising engine. The company’s business depends on sifting through that content and showing users posts they’re apt to like, which has often had the side effect of amplifying fake news and extremism.
Unfortunately, the reporting system the company describes, which relies on low-wage human moderators and software, remains slow and under-resourced. Facebook could afford to pay its moderators more money, or hire more of them, or impose much more stringent rules on what users can post — but any of those things would hurt the company’s profits and revenue. Instead, it has adopted a reactive posture, making rules only after problems have appeared. The rules are helping, but critics say Facebook needs to be much more proactive.
Today, Facebook is governed by a 27-page document called Community Standards. Posted publicly for the first time in 2018, the rules specify, for instance, that instructions for making explosives aren’t allowed unless they’re for scientific or educational purposes. Images of “visible anuses” and “fully nude closeups of buttocks,” likewise, are forbidden, unless they’re superimposed onto a public figure, in which case they’re permitted as commentary.
The standards can seem comically absurd in their specificity. But, Facebook executives say, they’re an earnest effort to systematically address the worst of the site in a way that’s scalable. This means rules that are general enough to apply anywhere in the world — and clear enough that a low-paid worker in one of Facebook’s content-scanning hubs in the Philippines, Ireland and elsewhere can decide within seconds what to do with a flagged post.
The working conditions for the 15,000 employees and contractors who do this for Facebook have attracted controversy. In February, The Verge reported that US moderators make only $28,800 a year while regularly being asked to view images and videos containing graphic violence, porn and hate speech.
Some suffer from post-traumatic stress disorder. Facebook responded that it’s conducting an audit of its contract-work providers and that it will keep in closer contact with them to uphold higher standards and pay a living wage.
Zuckerberg has said artificial intelligence algorithms, which the company already uses to identify nudity and terrorist content, will eventually handle most of this sorting. But at the moment, even the most sophisticated AI software struggles in categories in which context matters. “Hate speech is one of those areas,” says Monika Bickert, Facebook’s head of global policy management, in a June 2018 interview at company headquarters. “So are bullying and harassment.”
On the day of the interview, Bickert was managing Facebook’s response to the mass shooting the day before at the Capital Gazette in Annapolis. While the massacre was happening, Bickert instructed content reviewers to look out for posts praising the gunman and to block opportunists creating fake profiles in the names of the shooter or victims. Later her team took down the shooter’s profile and turned victims’ pages into what the company calls “memorialised accounts,” which are identical to regular Facebook pages but place the word “remembering” above the deceased person’s name.
<\/body>","next_sibling":[{"msid":68421700,"title":"Social media abuzz with calls for boycott of Chinese products","entity_type":"ARTICLE","link":"\/news\/social-media-abuzz-with-calls-for-boycott-of-chinese-products\/68421700","category_name":null,"category_name_seo":"telecomnews"}],"related_content":[{"msid":"68421267","title":"facebook ap","entity_type":"IMAGES","seopath":"small-biz\/startups\/features\/facebooks-crisis-management-algorithm-runs-on-outrage\/facebook-ap","category_name":"Facebook\u2019s crisis management algorithm runs on outrage","synopsis":"If Facebook wants to stop those things, it will have to get a better handle on its 2.7 billion users, whose content powers its wildly profitable advertising engine.","thumb":"https:\/\/etimg.etb2bimg.com\/thumb\/img-size-266737\/68421267.cms?width=150&height=112","link":"\/image\/small-biz\/startups\/features\/facebooks-crisis-management-algorithm-runs-on-outrage\/facebook-ap\/68421267"}],"msid":68422635,"entity_type":"ARTICLE","title":"Facebook\u2019s crisis management algorithm runs on outrage","synopsis":"One year after the Cambridge Analytica scandal, Mark Zuckerberg says the company really cares. Then why is there an endless cycle of fury and apology?","titleseo":"telecomnews\/facebooks-crisis-management-algorithm-runs-on-outrage","status":"ACTIVE","authors":[],"Alttitle":{"minfo":""},"artag":"Bloomberg","artdate":"2019-03-15 12:50:33","lastupd":"2019-03-15 12:50:33","breadcrumbTags":["facebook","mark zuckerberg","Whistleblower","hate speech","Trust law","Donald Trump","artificial intelligence","Internet"],"secinfo":{"seolocation":"telecomnews\/facebooks-crisis-management-algorithm-runs-on-outrage"}}" data-news_link="//www.iser-br.com/news/facebooks-crisis-management-algorithm-runs-on-outrage/68422635">