\"\"
<\/span><\/figcaption><\/figure> San Francisco: Last year, a Facebook<\/a> user in Sri Lanka posted an angry message to the social network. “Kill all the Muslim babies without sparing even an infant,” the person wrote in Sinhala, the languag e of the country’s Buddhist majority. “F---ing dogs!” The post went up early in 2018, in white text and on one of the playful pink and purple backgrounds that Facebook began offering in 2016 to encourage its users to share more with one another. The sentiment about killing Muslims got 30 likes before someone else found it troubling enough to click the “give feedback” button instead. The whistleblower<\/a> selected the option for “hate speech<\/a>”, one of nine possible categories for objectionable content on Facebook.

For years, non-profits in Sri Lanka have warned that Facebook posts are playing a role in escalating ethnic tensions between Sinhalese Buddhists and Tamil Muslims, but the company had ignored them. It took six days for Facebook to respond to the hate speech report. “Thanks for the feedback,” the company told the whistleblower, who posted the response to Twitter. The content, Facebook continued, “doesn’t go against one of our specific Community Standards.”

The post stayed online, part of a wave of calls for violence against Muslims that flooded the network last year. In late February 2018, a mob attacked a Muslim restaurant owner in Ampara, a small town in eastern Sri Lanka. He survived, but there were more riots in the mid-size city of Kandy the following week, resulting in two deaths before the government stepped in, taking measures that included ordering Facebook offline for three days.

The shutdown got the company’s attention. It appointed Jessica Leinwand, a lawyer who served in the Obama White House, to figure out what had gone wrong. Her conclusion: Facebook needed to rethink its permissive attitude toward misinformation. Before the riots in Sri Lanka, the company had tolerated fake news and misinformation as a matter of policy. “There are real concerns with a private company determining truth or falsity,” Leinwand says, summing up the thinking.

But as she began looking into what had happened in Sri Lanka, Leinwand realised the policy needed a caveat. Starting that summer, Facebook would remove certain posts in some high-risk countries, including Sri Lanka, but only if they were reported by local non-profits and would lead to “imminent violence.” When Facebook saw a similar string of sterilisation rumours in June, the new process seemed to work. That, says Leinwand, was “personally gratifying” — a sign that Facebook was capable of policing its platform.

But is it? It’s been almost exactly a year since news broke that Facebook had allowed the personal data of tens of millions of users to be shared with Cambridge Analytica, a consulting company affiliated with Donald Trump’s 2016 presidential campaign. Privacy breaches are hardly as serious as ethnic violence, but the ordeal did mark a palpable shift in public awareness about Facebook’s immense influence. Plus, it followed a familiar pattern: Facebook knew about the slip-up, ignored it for years, and when exposed, tried to downplay it with a handy phrase that chief executive officer Mark Zuckerberg repeated ad nauseam in his April congressional hearings: “We are taking a broader view of our responsibility.” He struck a similar note with a 3,000-word blog post in early March that promised the company would focus on private communications, attempting to solve Facebook’s trust problem while acknowledging that the company’s apps still contain “terrible things like child exploitation, terrorism and extortion.”

If Facebook wants to stop those things, it will have to get a better handle on its 2.7 billion users, whose content powers its wildly profitable advertising engine. The company’s business depends on sifting through that content and showing users posts they’re apt to like, which has often had the side effect of amplifying fake news and extremism.

Unfortunately, Facebook’s reporting system, which relies on low-wage human moderators and software, remains slow and under-resourced. Facebook could afford to pay its moderators more money, or hire more of them, or place much more stringent rules on what users can post — but any of those things would hurt the company’s profits and revenue. Instead, it’s adopted a reactive posture, attempting to make rules after problems have appeared. The rules are helping, but critics say Facebook needs to be much more proactive.

Today, Facebook is governed by a 27-page document called Community Standards. Posted publicly for the first time in 2018, the rules specify, for instance, that instructions for making explosives aren’t allowed unless they’re for scientific or educational purposes. Images of “visible anuses” and “fully nude closeups of buttocks,” likewise, are forbidden, unless they’re superimposed onto a public figure, in which case they’re permitted as commentary.

The standards can seem comically absurd in their specificity. But, Facebook executives say, they’re an earnest effort to systematically address the worst of the site in a way that’s scalable. This means rules that are general enough to apply anywhere in the world — and are clear enough that a low-paid worker in one of Facebook’s content-scanning hubs in the Philippines, Ireland, and elsewhere, can decide within seconds what to do with a flagged post.

The working conditions for the 15,000 employees and contractors who do this for Facebook have attracted controversy. In February, The Verge reported that US moderators make only $28,800 a year while being asked regularly to view images and videos that contain graphic violence, porn and hate speech.

Some suffer from post-traumatic stress disorder. Facebook responded that it’s conducting an audit of its contract-work providers and that it will keep in closer contact with them to uphold higher standards and pay a living wage.

Zuckerberg has said artificial intelligence algorithms, which the company already uses to identify nudity and terrorist content, will eventually handle most of this sorting. But at the moment, even the most sophisticated AI software struggles in categories in which context matters. “Hate speech is one of those areas,” says Monika Bickert, Facebook’s head of global policy management, in a June 2018 interview at company headquarters. “So are bullying and harassment.”

On the day of the interview, Bickert was managing Facebook’s response to the mass shooting the day before at the Capital Gazette in Annapolis. While the massacre was happening, Bickert instructed content reviewers to look out for posts praising the gunman and to block opportunists creating fake profiles in the names of the shooter or victims. Later her team took down the shooter’s profile and turned victims’ pages into what the company calls “memorialised accounts,” which are identical to regular Facebook pages but place the word “remembering” above the deceased person’s name.

<\/body>","next_sibling":[{"msid":68421700,"title":"Social media abuzz with calls for boycott of Chinese products","entity_type":"ARTICLE","link":"\/news\/social-media-abuzz-with-calls-for-boycott-of-chinese-products\/68421700","category_name":null,"category_name_seo":"telecomnews"}],"related_content":[{"msid":"68421267","title":"facebook ap","entity_type":"IMAGES","seopath":"small-biz\/startups\/features\/facebooks-crisis-management-algorithm-runs-on-outrage\/facebook-ap","category_name":"Facebook\u2019s crisis management algorithm runs on outrage","synopsis":"If Facebook wants to stop those things, it will have to get a better handle on its 2.7 billion users, whose content powers its wildly profitable advertising engine.","thumb":"https:\/\/etimg.etb2bimg.com\/thumb\/img-size-266737\/68421267.cms?width=150&height=112","link":"\/image\/small-biz\/startups\/features\/facebooks-crisis-management-algorithm-runs-on-outrage\/facebook-ap\/68421267"}],"msid":68422635,"entity_type":"ARTICLE","title":"Facebook\u2019s crisis management algorithm runs on outrage","synopsis":"One year after the Cambridge Analytica scandal, Mark Zuckerberg says the company really cares. Then why is there an endless cycle of fury and apology?","titleseo":"telecomnews\/facebooks-crisis-management-algorithm-runs-on-outrage","status":"ACTIVE","authors":[],"Alttitle":{"minfo":""},"artag":"Bloomberg","artdate":"2019-03-15 12:50:33","lastupd":"2019-03-15 12:50:33","breadcrumbTags":["facebook","mark zuckerberg","Whistleblower","hate speech","Trust law","Donald Trump","artificial intelligence","Internet"],"secinfo":{"seolocation":"telecomnews\/facebooks-crisis-management-algorithm-runs-on-outrage"}}" data-authors="[" "]" data-category-name="" data-category_id="" data-date="2019-03-15" data-index="article_1">

Facebook的危机管理算法运行在愤怒

—剑桥一年后丑闻,马克•扎克伯格说,公司真的在乎。那么为什么有无尽的愤怒和道歉吗?

  • 发布于2019年3月15日中午12点是
旧金山:去年,一脸谱网用户在斯里兰卡愤怒的消息发布到社交网络。“杀死所有穆斯林婴儿没有保留甚至一个婴儿,“人在僧伽罗语写道,中国佛教多数派的门外语e。“F - ing狗!”《华盛顿邮报》在2018年年初上升,在白色的文本和一个顽皮的粉红色和紫色背景,Facebook在2016年开始提供鼓励用户分享更多。杀害穆斯林的情绪得到了前30喜欢别人发现它令人不安的足够的点击“反馈”按钮。的告密者选择的选项”仇恨言论”的九个可能的类别反感内容在Facebook上。

广告
多年来,非营利组织在斯里兰卡警告称,Facebook帖子中扮演一个角色升级民族僧伽罗佛教徒和泰米尔穆斯林之间的紧张关系,但该公司忽略了它们。Facebook回应花了六天仇恨言论报告。“谢谢你的反馈,公司对告密者,谁发布到Twitter的响应。内容,Facebook继续说道,“不违背我们的一个特定的社区标准。”

呆在网上,部分的要求针对穆斯林的暴力浪潮淹没了去年网络。2018年2月下旬,一群暴徒袭击了穆斯林餐馆老板Ampara,斯里兰卡东部的一个小镇。他活了下来,但有更多的骚乱在中型城市康堤接下来的一个星期,导致两人死亡在政府介入之前,采取措施,包括要求Facebook离线三天。

关闭引起公司的注意。任命杰西卡雷旺德,一名律师曾在奥巴马白宫,找出了错误的。她的结论是:Facebook需要考虑其宽容的态度错误信息。在骚乱在斯里兰卡之前,公司已经容忍虚假新闻和错误信息的政策。乐动扑克“有真正的担忧与私人公司确定真实性或虚假性,“雷旺德说,总结思考。

但是当她开始调查发生了什么在斯里兰卡,雷旺德意识到政策需要一个警告。那个夏天开始,Facebook将消除某些帖子在一些高风险国家,包括斯里兰卡,但只有当他们被当地的非营利组织和报道将导致“迫在眉睫的暴力。“当Facebook看到类似的一系列6月冲销的传言,新流程似乎工作。雷旺德说,这是“个人可喜”——一个信号,表明Facebook警务平台的能力。

广告
但真的是这样吗?已经整整一年消息传出后,Facebook让数千万用户的个人数据共享与剑桥分析乐动扑克”,隶属于一家咨询公司唐纳德·特朗普2016年总统竞选。隐私泄露是不严重的种族暴力,但折磨并标志着一个明显的变化:在公众对Facebook的巨大影响力。另外,它遵循着一个熟悉的模式:Facebook知道疏忽,忽略了它多年来,暴露时,试图淡化它与一个方便的短语,首席执行官马克•扎克伯格在他4月国会听证会反反复复的重复:“我们正采取更广泛的观点我们的责任。”他达成了类似的注意一篇3000字的博文在3月初,承诺该公司将专注于私人通信,试图解决Facebook的信任问题在承认公司的应用程序仍然包含“可怕的事情像儿童剥削,恐怖主义和勒索。”

如果Facebook想要阻止这些事情,它必须有一个更好的把握其27亿用户,其内容的权力非常有利可图的广告引擎。公司的业务取决于内容筛选和显示用户帖子他们倾向于喜欢,经常在放大假新闻和极端主义的副作用。乐动扑克

不幸的是,他们描述的报告系统,依靠低工资人类版主和软件,依然缓慢,资源不足。Facebook可以支付其版主更多的钱,雇佣更多的人,或者更严格的地方规定了用户可以发布,但任何这些东西会伤害公司的利润和收入。相反,它是采用被动姿态,试图使规则后出现的问题。规则是帮助,但批评人士说,Facebook需要更加积极主动。

今天,Facebook是由27页的文档称为社区标准。首次公开发布的2018年,指定的规则,例如,指令使炸药不允许,除非他们对科学或教育的目的。“可见的图像菊花”和“完全裸体靠近的臀部,“同样的,是被禁止的,除非他们是叠加在一个公众人物,在这种情况下,他们允许评论。

标准似乎滑稽荒谬的特异性。但Facebook高管说,他们是一个认真努力,系统地解决最糟糕的网站在一个可伸缩的方式。这意味着规则一般足以应用在世界任何地方,足够清晰,Facebook的content-scanning枢纽之一的低薪工人在菲律宾,爱尔兰,和其他地方,在几秒内可以决定如何处理一个标记。

15000名员工和承包商的工作条件做这个Facebook吸引了争议。今年2月,即将报道,我们版主让每年只有28800美元而被要求定期查看图片和视频,包含图形暴力,色情和仇恨言论。

一些患有创伤后压力心理障碍症。Facebook回应说,它的开展审计合同工作提供者和它会与他们保持密切联系坚持更高的标准和支付生活工资。

扎克伯格曾说人工智能算法,该公司已经使用识别裸体和恐怖内容,最终将处理大多数的排序。但现在,即使是最复杂的人工智能软件斗争类别中上下文很重要。“仇恨言论是其中的一个地区,”莫尼卡说Bickert, Facebook的全球策略管理,在公司总部2018年6月的一次采访中。“所以欺凌和骚扰。”

面试当天,Bickert管理Facebook的应对枪击事件的前一天在安纳波利斯的首都公报》。虽然发生了大屠杀,Bickert指示内容审查员寻找文章赞扬枪手和阻止投机者创造假资料在射击游戏的名称或受害者。后来她的团队取下射击的概要,并将受害者的页面转化为该公司所称的“账户,纪念着”,这是相同的常规的Facebook页面但是地方上面的“记忆”这个词的死去的人的名字。

  • 发布于2019年3月15日中午12点是
是第一个发表评论。
现在评论

加入2 m +行业专业人士的社区

订阅我们的通讯最新见解与分析。乐动扑克

下载ETTelec乐动娱乐招聘om应用

  • 得到实时更新
  • 保存您最喜爱的文章
扫描下载应用程序
\"\"
<\/span><\/figcaption><\/figure> San Francisco: Last year, a Facebook<\/a> user in Sri Lanka posted an angry message to the social network. “Kill all the Muslim babies without sparing even an infant,” the person wrote in Sinhala, the languag e of the country’s Buddhist majority. “F---ing dogs!” The post went up early in 2018, in white text and on one of the playful pink and purple backgrounds that Facebook began offering in 2016 to encourage its users to share more with one another. The sentiment about killing Muslims got 30 likes before someone else found it troubling enough to click the “give feedback” button instead. The whistleblower<\/a> selected the option for “hate speech<\/a>”, one of nine possible categories for objectionable content on Facebook.

For years, non-profits in Sri Lanka have warned that Facebook posts are playing a role in escalating ethnic tensions between Sinhalese Buddhists and Tamil Muslims, but the company had ignored them. It took six days for Facebook to respond to the
hate speech<\/a> report. “Thanks for the feedback,” the company told the whistleblower, who posted the response to Twitter. The content, Facebook continued, “doesn’t go against one of our specific Community Standards.”

The post stayed online, part of a wave of calls for violence against Muslims that flooded the network last year. In late February 2018, a mob attacked a Muslim restaurant owner in Ampara, a small town in eastern Sri Lanka. He survived, but there were more riots in the mid-size city of Kandy the following week, resulting in two deaths before the government stepped in, taking measures that included ordering Facebook offline for three days.

The shutdown got the company’s attention. It appointed Jessica Leinwand, a lawyer who served in the Obama White House, to figure out what had gone wrong. Her conclusion: Facebook needed to rethink its permissive attitude toward misinformation. Before the riots in Sri Lanka, the company had tolerated fake news and misinformation as a matter of policy. “There are real concerns with a private company determining truth or falsity,” Leinwand says,summing up the thinking.

But as she began looking into what had happened in Sri Lanka, Leinwand realised the policy needed a caveat. Starting that summer, Facebook would remove certain posts in some high-risk countries, including Sri Lanka, but only if they were reported by local non-profits and would lead to “imminent violence.” When Facebook saw a similar string of sterilisation rumours in June, the new process seemed to work. That, says Leinwand, was “personally gratifying”— a sign that Facebook was capable of policing its platform.

But is it? It’s been almost exactly a year since news broke that Facebook had allowed the personal data of tens of millions of users to be shared with Cambridge Analytica, a consulting company affiliated with
Donald Trump<\/a>’s 2016 presidential campaign. Privacy breaches are hardly as serious as ethnic violence, but the ordeal did mark a palpable shift in public awareness about Facebook’s immense influence. Plus, it followed a familiar pattern: Facebook knew about the slip-up, ignored it for years, and when exposed, tried to downplay it with a handy phrase that chief executive officer Mark Zuckerberg<\/a> repeated ad nauseam in his April congressional hearings: “We are taking a broader view of our responsibility.” He struck a similar note with a 3,000-word blog post in early March that promised the company would focus on private communications, attempting to solve Facebook’s trust problem while acknowledging that the company’s apps still contain “terrible things like child exploitation, terrorism and extortion.”

If Facebook wants to stop those things, it will have to get a better handle on its 2.7 billion users, whose content powers its wildly profitable advertising engine. The company’s business depends on sifting through that content and showing users posts they’re apt to like, which has often had the side effect of amplifying fake news and extremism.

Unfortunately, the reporting system they described, which relies on low-wage human moderators and software, remains slow and under-resourced. Facebook could afford to pay its moderators more money, or hire more of them, or place much more stringent rules on what users can post — but any of those things would hurt the company’s profits and revenue. Instead, it’s adopted a reactive posture, attempting to make rules after problems have appeared. The rules are helping, but critics say Facebook needs to be much more proactive.

Today, Facebook is governed by a 27-page document called Community Standards. Posted publicly for the first time in 2018, the rules specify, for instance, that instructions for making explosives aren’t allowed unless they’re for scientific or educational purposes. Images of “visible anuses” and “fully nude closeups of buttocks,” likewise, are forbidden, unless they’re superimposed onto a public figure, in which case they’re permitted as commentary.

The standards can seem comically absurd in their specificity. But, Facebook executives say, they’re an earnest effort to systematically address the worst of the site in a way that’s scalable. This means rules that are general enough to apply anywhere in the world — and are clear enough that a low-paid worker in one of Facebook’s content-scanning hubs in the Philippines, Ireland, and elsewhere, can decide within seconds what to do with a flagged post.

The working conditions for the 15,000 employees and contractors who do this for Facebook have attracted controversy. In February, the Verge reported that US moderators make only $28,800 a year while being asked regularly to view images and videos that contain graphic violence, porn and hate speech.

Some suffer from post-traumatic stress disorder. Facebook responded that it’s conducting an audit of its contract-work providers and that it will keep in closer contact with them to uphold higher standards and pay a living wage.

Zuckerberg has said
artificial intelligence<\/a> algorithms, which the company already uses to identify nudity and terrorist content, will eventually handle most of this sorting. But at the moment, even the most sophisticated AI software struggles in categories in which context matters. “Hate speech is one of those areas,” says Monika Bickert, Facebook’s head of global policy management, in a June 2018 interview at company headquarters. “So are bullying and harassment.”

On the day of the interview, Bickert was managing Facebook’s response to the mass shooting the day before at the Capital Gazette in Annapolis. While the massacre was happening, Bickert instructed content reviewers to look out for posts praising the gunman and to block opportunists creating fake profiles in the names of the shooter or victims. Later her team took down the shooter’s profile and turned victims’ pages into what the company calls “memorialised accounts,” which are identical to regular Facebook pages but place the word “remembering” above the deceased person’s name.

<\/body>","next_sibling":[{"msid":68421700,"title":"Social media abuzz with calls for boycott of Chinese products","entity_type":"ARTICLE","link":"\/news\/social-media-abuzz-with-calls-for-boycott-of-chinese-products\/68421700","category_name":null,"category_name_seo":"telecomnews"}],"related_content":[{"msid":"68421267","title":"facebook ap","entity_type":"IMAGES","seopath":"small-biz\/startups\/features\/facebooks-crisis-management-algorithm-runs-on-outrage\/facebook-ap","category_name":"Facebook\u2019s crisis management algorithm runs on outrage","synopsis":"If Facebook wants to stop those things, it will have to get a better handle on its 2.7 billion users, whose content powers its wildly profitable advertising engine.","thumb":"https:\/\/etimg.etb2bimg.com\/thumb\/img-size-266737\/68421267.cms?width=150&height=112","link":"\/image\/small-biz\/startups\/features\/facebooks-crisis-management-algorithm-runs-on-outrage\/facebook-ap\/68421267"}],"msid":68422635,"entity_type":"ARTICLE","title":"Facebook\u2019s crisis management algorithm runs on outrage","synopsis":"One year after the Cambridge Analytica scandal, Mark Zuckerberg says the company really cares. Then why is there an endless cycle of fury and apology?","titleseo":"telecomnews\/facebooks-crisis-management-algorithm-runs-on-outrage","status":"ACTIVE","authors":[],"Alttitle":{"minfo":""},"artag":"Bloomberg","artdate":"2019-03-15 12:50:33","lastupd":"2019-03-15 12:50:33","breadcrumbTags":["facebook","mark zuckerberg","Whistleblower","hate speech","Trust law","Donald Trump","artificial intelligence","Internet"],"secinfo":{"seolocation":"telecomnews\/facebooks-crisis-management-algorithm-runs-on-outrage"}}" data-news_link="//www.iser-br.com/news/facebooks-crisis-management-algorithm-runs-on-outrage/68422635">