  • Scientists Create Device to Turn Brain Signals into Speech


    05 May, 2019

    Scientists say they have created a new device that can turn brain signals into electronic speech.

    The invention could one day give people who have lost the ability to speak a better way of communicating than current methods.

    The device was developed by researchers from the University of California, San Francisco. Their results were recently published in a study in the journal Nature.

    Scientists created a "brain machine interface" that is implanted in the brain. The device was built to read and record brain signals that help control the muscles that produce speech. These include the lips, larynx, tongue and jaw.

    The brain machine interface, shown here, was developed by researchers at the University of California, San Francisco, to turn brain signals into electronic speech. (University of California San Francisco)

    The experiment involved a two-step process. First, the researchers used a "decoder" to turn electrical brain signals into representations of human vocal movements. A synthesizer then turned those representations into spoken sentences.
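
    To make the idea concrete, here is a minimal Python sketch of such a two-step pipeline. It shows only the data flow; the array sizes, the number of articulatory dimensions, and the simple linear maps are illustrative assumptions, not the models the UCSF team actually trained.

    import numpy as np

    # --- Step 1: the "decoder" --------------------------------------------
    # Maps recorded neural activity (features per electrode over time) to a
    # representation of vocal-tract movements (lips, jaw, tongue, larynx).
    # A plain linear map stands in here for the trained networks.
    def decode_articulation(neural_features: np.ndarray, w_dec: np.ndarray) -> np.ndarray:
        # neural_features: (time_steps, n_electrodes)
        # returns:         (time_steps, n_articulatory_dims)
        return neural_features @ w_dec

    # --- Step 2: the "synthesizer" ----------------------------------------
    # Maps the vocal-movement representation to acoustic features that can
    # be rendered as audible speech (for example, a spectrogram).
    def synthesize_acoustics(articulation: np.ndarray, w_syn: np.ndarray) -> np.ndarray:
        # articulation: (time_steps, n_articulatory_dims)
        # returns:      (time_steps, n_spectrogram_bins)
        return articulation @ w_syn

    # Toy end-to-end pass with made-up dimensions.
    rng = np.random.default_rng(0)
    neural = rng.standard_normal((200, 256))        # 200 time steps, 256 electrodes (assumed)
    w_dec = rng.standard_normal((256, 33)) * 0.01   # 33 articulatory dimensions (assumed)
    w_syn = rng.standard_normal((33, 80)) * 0.01    # 80 spectrogram bins (assumed)

    spectrogram = synthesize_acoustics(decode_articulation(neural, w_dec), w_syn)
    print(spectrogram.shape)  # (200, 80) -- acoustic features ready to be turned into sound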

    Other brain-computer interfaces already exist to help people who cannot speak on their own. Often these systems are trained to follow eye or facial movements of people who have learned to spell out their thoughts letter-by-letter.

    But researchers say this method can produce many errors and is very slow, permitting at most about 10 spoken words per minute. This compares to between 100 and 150 words per minute used in natural speech.

    Edward Chang is a professor of neurological surgery and a member of the university's Weill Institute for Neuroscience. He was a lead researcher on the project. In a statement, he said the new two-step method presents a "proof of principle" with great possibilities for "real-time communication" in the future.

    "For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual's brain activity," Chang said.

    The study involved five volunteer patients who were being treated for epilepsy. The individuals had the ability to speak and already had electrodes implanted in their brains.

    The volunteers were asked to read several hundred sentences aloud while the researchers recorded their brain activity.

    The researchers used audio recordings of the voice readings to reproduce the vocal muscle movements needed to produce human speech. This process permitted the scientists to create a realistic "virtual voice" for each individual, controlled by their brain activity.

    Future studies will test the technology on people who are unable to speak.

    Josh Chartier is a speech scientist and doctoral student at the University of California, San Francisco. He said the research team was "shocked" when it first heard the synthesized speech results.

    The study reports the spoken sentences were understandable to hundreds of human listeners asked to write out what they heard. The listeners were able to write out 43 percent of sentences with perfect accuracy.
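
    As a rough illustration, a sentence-level figure like this can be computed as the share of listener transcriptions that match the intended sentence word for word. The sketch below assumes simple lower-casing and whitespace normalization; the study's actual scoring rules may differ.

    def exact_match_rate(references: list[str], transcriptions: list[str]) -> float:
        """Fraction of sentences the listeners wrote out word-for-word correctly."""
        def normalize(text: str) -> str:
            # Ignore case and extra spaces; punctuation handling is an assumption here.
            return " ".join(text.lower().split())

        matches = sum(
            normalize(ref) == normalize(hyp)
            for ref, hyp in zip(references, transcriptions)
        )
        return matches / len(references)

    # Toy example: 3 of 7 sentences transcribed perfectly -> about 0.43
    refs = ["the ship was torn apart"] * 7
    hyps = ["the ship was torn apart"] * 3 + ["the sip was torn apart"] * 4
    print(round(exact_match_rate(refs, hyps), 2))  # 0.43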

    The researchers noted that - as is the case with natural speech - listeners had the highest success rate identifying shorter sentences. The team also reported more success synthesizing slower speech sounds like "sh," and less success with abrupt sounds like "b" or "p."

    Chartier admitted that much more research of the system will be needed to reach the goal of perfectly reproducing spoken language. But he added: "The levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what's currently available."

    I'm Bryan Lynn.

    Bryan Lynn wrote this story for VOA Learning English, based on reports from Reuters, Nature and online sources. Hai Do was the editor.

    We want to hear from you. Write to us in the Comments section.

    _________________________________________________________________

    Words in This Story

    interface n. connection between pieces of electronic equipment

    decoder n. device used to discover the meaning of a coded message

    synthesizer n. electronic machine that creates sounds and music

    virtual adj. something that can be done or seen using computers or the Internet instead of going to a place

    accuracy n. correctness
