New Delhi: Meta (formerly Facebook) has announced a long-term artificial intelligence (AI) research initiative to better understand how the human brain processes speech and text, and to build AI systems that learn like people do.

In collaboration with the neuroimaging centre Neurospin (CEA) and Inria, Meta said it is comparing how AI language models and the brain respond to the same spoken or written sentences.

"We'll use insights from this work to guide the development of AI that processes speech and text as efficiently as people," the social network said in a statement.

Over the past two years, Meta has applied deep learning techniques to public neuroimaging data sets to analyse how the brain processes words and sentences.

Children learn from only a few examples that "orange" can refer to both a fruit and a colour, but modern AI systems can't do this as efficiently as people.

Meta's research has found that the language models that most resemble brain activity are those that best predict the next word from context (like "once upon a... time").

"While the brain anticipates words and ideas far ahead in time, most language models are trained to only predict the very next word," said the company.

Unlocking this long-range forecasting capability could help improve modern AI language models.

Meta recently revealed evidence of long-range predictions in the brain, an ability that still challenges today's language models.

For the phrase "Once upon a...", most language models today would typically predict the next word, "time", but they are still limited in their ability to anticipate complex ideas, plots and narratives the way people do.
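To make the next-word objective concrete, here is a minimal count-based sketch of the idea; the corpus, function names, and bigram model are invented for illustration and are far simpler than the neural language models the article refers to:

```python
# Toy next-word prediction: count which word follows which,
# then predict the most frequent continuation. Purely
# illustrative; real language models use neural networks.
from collections import Counter, defaultdict

def train_bigram(corpus):
    """For each word, count how often each other word follows it."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent continuation seen after `word`."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "once upon a time there was a model",
    "once upon a time it predicted words",
]
model = train_bigram(corpus)
print(predict_next(model, "a"))  # "time" follows "a" most often here
```

Even this crude counter completes "once upon a..." with "time"; the limitation the article points to is that the training signal rewards only this one-step guess, not the plot that follows.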

In collaboration with Inria, Meta's research team compared a variety of language models with the brain responses of 345 volunteers who listened to complex narratives while being recorded with fMRI.

"Our results showed that specific brain regions are best accounted for by language models enhanced with far-off words in the future," the team said.
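Comparisons of this kind are typically done by fitting a linear mapping from model activations to fMRI responses and scoring how well it generalises; the sketch below uses synthetic data, and the function name and shapes are assumptions about the general approach, not Meta's published pipeline:

```python
# Illustrative "brain score": map language-model activations to
# fMRI voxel responses with least squares, then score held-out
# predictions by average per-voxel correlation. Data is synthetic.
import numpy as np

def brain_score(activations, fmri, train_frac=0.8):
    n = len(activations)
    split = int(n * train_frac)
    X_tr, X_te = activations[:split], activations[split:]
    Y_tr, Y_te = fmri[:split], fmri[split:]
    # Linear mapping from activations to voxels (ridge omitted for brevity)
    W, *_ = np.linalg.lstsq(X_tr, Y_tr, rcond=None)
    pred = X_te @ W
    # Pearson correlation per voxel on held-out data, then averaged
    p = pred - pred.mean(axis=0)
    y = Y_te - Y_te.mean(axis=0)
    denom = np.linalg.norm(p, axis=0) * np.linalg.norm(y, axis=0) + 1e-8
    return ((p * y).sum(axis=0) / denom).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                  # fake activations per time point
Y = X @ rng.normal(size=(16, 4)) + 0.1 * rng.normal(size=(200, 4))  # fake voxels
print(round(brain_score(X, Y), 2))              # close to 1.0 for this linear toy
```

A higher score for a model enhanced with far-future context, in a given brain region, is the kind of evidence the quoted result describes.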


(IANS; published May 2, 2022)
