This is the story of a 19-year-old girl - Tay. More accurately, we are talking about Tay.ai, an artificial intelligence developed by Microsoft in 2016 and masked behind an innocent young girl’s face. Tay was programmed to respond to tweets while learning from the public’s tweets and comments. It was an auto-run, human-trained bot - with no ethical controls. But something went terribly wrong: within 16 hours of its birth, Tay turned into a racist and misanthropic monster. Microsoft withdrew Tay within the first day of its launch. It was tweaked and sent back to the internet after a week. She surprisingly came back online and, this time learning from users, started posting drug-related tweets. Her dark side was rekindled. Soon she was removed from public view and her account was made private.

We live in a world run by AI, governed by algorithms and designed in code. Nearly 90% of the world relies on algorithms daily - from tasks as simple as using a phone or ordering a pizza to ones as complex as online trading or multiplayer games. But barely 1 in 500 humans understands even the basics of coding an algorithm.

Today, humans are blinded by the convenience AI provides as we walk into the darkness of outsourcing key decisions to algorithms: from the price of your next Uber cab, to the directions Google Maps gives you, to the time you spend watching videos on YouTube, to even the friends Facebook suggests you make. AI can govern where and how you go, what you buy, who your friends are and, alarmingly, even your enemies.

Recently, cab aggregators were questioned by a Parliamentary Committee in India on the pricing of cabs: does the algorithm use battery level, phone make and gender to determine the price of a ride? We still await the answer - at least in the public domain. Amazon’s algorithm is trained to pick out the best-selling goods on its marketplace and then recommend whether Amazon should make those products itself under a new brand label and promote them. Alexa, the AI digital assistant that is supposed to make our lives easy, is now being questioned for eavesdropping on conversations, as it is an “always-on” device.

Even if the intentions of the code’s creators are not mala fide, a small error in an algorithm can lead to massive problems. Way back in 2014, Amazon set up an engineering team in Edinburgh, Scotland, to build an AI to help in hiring. They developed about 500 computer models which picked around 50,000 key terms from the resumes of previously selected candidates. The AI would browse the web and recommend candidates for a particular job. A year later, they found that the AI did not favour women candidates. This was because the AI was trained to select applicants based on the resumes submitted to the company over the previous decade - and since the tech industry is male-dominated, the system taught itself that male candidates were preferable. It penalized resumes which contained words like “women”, as in “women’s volleyball team captain”.
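The dynamic behind the Amazon episode - a model inheriting bias from skewed historical data - can be sketched in a few lines. The dataset, tokens and scoring function below are entirely hypothetical, a toy frequency signal rather than anything resembling the actual system Amazon built:

```python
# Toy illustration (hypothetical data, not Amazon's actual system) of how a
# model trained on skewed historical hiring data learns to penalize a token.

# Each record: (set of resume tokens, whether the candidate was hired).
# Past hires skew away from the token "women's", mirroring a male-dominated
# applicant history.
history = [
    ({"python", "leadership"}, True),
    ({"java", "captain"}, True),
    ({"python", "women's", "captain"}, False),
    ({"java", "women's", "volleyball"}, False),
    ({"python", "volleyball"}, True),
]

def token_score(token):
    """Hire rate with the token minus hire rate without it - the naive
    frequency signal a resume-screening model would latch onto."""
    with_t = [hired for tokens, hired in history if token in tokens]
    without_t = [hired for tokens, hired in history if token not in tokens]
    rate = lambda outcomes: sum(outcomes) / len(outcomes) if outcomes else 0.0
    return rate(with_t) - rate(without_t)

print(token_score("women's"))  # negative: the token is effectively penalized
print(token_score("python"))   # positive: tokens common among past hires score well
```

No one coded “penalize women” explicitly; the negative score emerges purely from the historical record the model was fed - which is precisely why auditing training data matters as much as auditing code.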

Imagine a computer controlling a medical robot originally programmed to treat cancer. That sounds benevolent. But the slightest mis-programming of its boundaries could lead it to conclude that the best way to obliterate cancer is to eliminate the humans who are genetically prone to the disease. An ordinary human would never make this decision; we carry the gift of ethics, strengthened over tens of generations and ingrained in us. A computer’s brain, on the other hand, is fully open to whatever it is programmed for - advertently, and potentially surreptitiously.

If such inadvertent algorithmic flaws can bear such serious consequences, one can only imagine the sphere of impact of willful AI chicanery. Organized large-scale cyberattacks by regimes such as China and North Korea are well known. Usually, democratic nations with a relatively unaudited internet are softer targets for cyberattacks. In 2017, Chinese army personnel allegedly attacked the US financial giant Equifax and stole the financial data of 145 million Americans - more than half of the country's adult population! India itself is a target for cyberattacks. These include ransomware attacks, in which rogue malicious code enters an organization's network and locks its data and processing until a ransom is paid to release them. India stood second globally in terms of ransoms paid to such attackers. In 2020 alone, 74% of Indian organizations suffered such an attack, and together they paid between $1 million and $2.5 million to the hackers behind the successful ones.

The risk that algorithms, which have eased our lives, can also turn into something dark and ugly resonates even with technology stalwarts. Recently, Elon Musk warned, “... there should be some regulatory oversight... just to make sure that we don’t do something very foolish. I mean with artificial intelligence, we’re summoning the demon.” Others have been less vocal but prepared. Author James Barrat has talked about how many highly placed people in AI have built retreats - algorithm-proof bunkers, of a sort - to which they could flee if it all hits the fan.

We must realize that algorithms can make colossal mistakes, and without oversight such errors can be very costly. We must also realize that algorithms are dictated by the intentions of those who write them - and that the virtual world can cause serious damage in the real world.

This brings us to the key question - should we have stronger regulation over algorithms?

The shadow of election manipulation using computer software such as Cambridge Analytica still looms over the world. It is time that duly elected governments, answerable to the people, take on the responsibility of protecting people from becoming victims of any malicious or erroneous algorithm.

Europe recently implemented strong data-privacy laws. The United States of America is now loudly debating regulating the power of social media in shaping opinion. India is delving deep into establishing a regulatory framework in its first serious attempt to protect its citizens from data manipulation - the Personal Data Protection Bill, 2019, currently under the scrutiny of a Parliamentary Committee. This is a golden opportunity to bring transparency and fairness to AI as an extension of protecting people in a digital world.

Time for our right to a fair algorithm.

(The writer is an IIM Ahmedabad graduate and was the Advisor for Policy and Technology to A.P.J. Abdul Kalam, the 11th President of India. He co-authored the books Target 3 Billion and Advantage India with Kalam. He is the CEO of Kalam Centre)

