Saturday, 29 August 2020

Neuralink demonstrates its next-generation brain-machine interface

During a conference streamed online from Neuralink’s headquarters in San Francisco, scientists at the Elon Musk-backed company gave a progress update. It came just over a year after Neuralink, founded in 2016 with the goal of creating brain-machine interfaces, first revealed its vision, software, and implantable hardware platform to the world. Little of what was discussed today was surprising or unanticipated, but the update provided assurance that the pandemic hasn’t prevented Neuralink from inching toward its goals.

Neuralink’s prototype can extract real-time information from many neurons at once, Musk reiterated during the stream. In a live demo, readings from a pig’s brain were shown onscreen: when the pig touched an object with its snout, spikes from neurons recorded by Neuralink’s device (which had been embedded in the pig’s brain two months prior) appeared in a visualization on a television monitor. That isn’t novel in and of itself (Kernel and Paradromics are among the many outfits developing under-skull brain-reading chips), but Neuralink is unusual in using flexible, cellophane-like conductive wires that a “sewing machine” surgical robot inserts into brain tissue. Musk says the device received a Breakthrough Device designation from the U.S. Food and Drug Administration (FDA) in July and that Neuralink is working with the agency on a future clinical trial for people with paraplegia.

Founding Neuralink members Tim Hanson and Philip Sabes of the University of California, San Francisco, along with University of California, Berkeley professor Michel Maharbiz, pioneered the technology, and the version demonstrated today is an improvement over what was shown last year. Musk calls it “V2,” and he’s confident it will someday take less than an hour, without general anesthesia, to embed in a human brain. He also says it will be easy to remove and will leave no lasting damage, should a patient wish to upgrade or discard Neuralink’s interface.

V2

Neuralink collaborated with Woke Studios, a creative design consultancy based in San Francisco, on the design of the sewing machine. Woke began working with Neuralink over a year ago on a behind-the-ear concept that Neuralink presented in 2019, and the two companies re-engaged shortly after for the surgical robot.

Woke head designer Afshin Mehin told VentureBeat via email that the machine is capable of seeing the entirety of the brain.

[Image: Neuralink surgical robot]

“The design process was a close collaboration between our design team at Woke Studios, the technologists at Neuralink, and prestigious surgical consultants who could advise on the procedure itself,” Mehin said. “Our role specifically was to take the existing technology that can perform the procedure, and hold that against the advice from our medical advisors as well as medical standards for this type of equipment, in order to create a non-intimidating robot that could perform the brain implantation.”

The machine consists of three parts. There’s a “head,” which houses automated surgical tools along with brain-scanning cameras and sensors, against which a patient’s skull is positioned. The machine first removes a portion of the skull, which is put back into place post-op. Then computer vision algorithms guide a needle carrying 5-micron-thick bundles of wires and insulation 6 millimeters into the brain while avoiding blood vessels. (Neuralink says the machine is technically capable of drilling to arbitrary depths.) The wires, which measure a small fraction of the diameter of a human hair (4 to 6 μm), link to a series of electrodes at different locations and depths. At maximum capacity, the machine can insert six threads containing a combined 192 electrodes per minute.
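
As a rough check on that insertion rate, the sketch below (a hypothetical back-of-the-envelope calculation, not Neuralink code) estimates how long a full array would take to place, assuming the stated peak rate of six threads and 192 electrodes per minute is sustained for the whole procedure.

    # Hypothetical estimate of thread-insertion time, assuming the peak rate
    # quoted above (six threads / 192 electrodes per minute) is sustained.

    THREADS_PER_MINUTE = 6
    ELECTRODES_PER_MINUTE = 192
    ELECTRODES_PER_THREAD = ELECTRODES_PER_MINUTE // THREADS_PER_MINUTE  # 32

    def insertion_minutes(total_electrodes: int) -> float:
        """Minutes needed to place total_electrodes at the peak insertion rate."""
        return total_electrodes / ELECTRODES_PER_MINUTE

    for electrodes in (1024, 3072):
        threads = electrodes // ELECTRODES_PER_THREAD
        print(f"{electrodes} electrodes ({threads} threads): "
              f"~{insertion_minutes(electrodes):.1f} minutes")
    # 1024 electrodes (32 threads): ~5.3 minutes
    # 3072 electrodes (96 threads): ~16.0 minutes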

[Image: Neuralink surgical robot]

A single-use bag attaches with magnets around the machine’s head to maintain sterility and allow for cleaning, and angled wings around the inner facade ensure a patient’s skull remains in place during insertion. The machine’s “body” attaches to a base, which provides weighted support for the entire structure and conceals the other technologies that enable the system to operate.

[Image: Neuralink surgical robot]

When asked whether the prototype would ever make its way into clinics or hospitals, Mehin danced around the question but noted that the design was intended for “broad-scale” use. “As engineers, we know what’s possible and how to communicate the design needs in an understandable way, and likewise, Neuralink’s team is able to send over highly complex schematics that we can run with,” he said. “We imagine this is a design that could live outside of a laboratory and into any number of clinical settings.”

The Link

As Neuralink detailed last year, its first in-brain interface designed for trials, the N1 (alternatively referred to as the “Link 0.9”), contains an ASIC, a thin film, and a hermetic substrate that can interface with up to 1,024 electrodes. As many as 10 N1/Link devices can be placed in a single brain hemisphere, ideally with at least four in the brain’s motor areas and one in a somatosensory area.
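
To see how those numbers compose, here is a small hypothetical tally of the placement described above, assuming 1,024 electrodes per Link; the placement labels are illustrative, not Neuralink’s.

    # Hypothetical tally of electrodes for a multi-Link placement,
    # assuming 1,024 electrodes per N1/Link device as described above.

    ELECTRODES_PER_LINK = 1024
    MAX_LINKS_PER_HEMISPHERE = 10

    # Illustrative placement: four motor-area Links plus one somatosensory Link.
    placement = {"motor": 4, "somatosensory": 1}

    links_used = sum(placement.values())
    assert links_used <= MAX_LINKS_PER_HEMISPHERE

    print(f"{links_used} Links -> {links_used * ELECTRODES_PER_LINK} electrodes "
          f"in one hemisphere")
    # 5 Links -> 5120 electrodes in one hemisphere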

Musk says the interface is dramatically simplified compared with the concept shown in 2019. It no longer has to sit behind the ear; it’s the size of a large coin (23 millimeters wide and 8 millimeters thick), and all the wiring necessary for the electrodes connects within a centimeter of the device itself.

During the pig demo, the implanted pig, named Gertrude, playfully nuzzled its handlers in a pen adjacent to pens holding two other pigs, one of which had previously had a chip installed and later removed. (The third pig served as a control; it had never received an implant.) Pigs have a dura membrane and skull structure similar to those of humans, Musk explained, and they can be trained to walk on treadmills and perform other activities useful in experiments. That made them ideal test subjects, and it’s why Neuralink chose pigs as the third animal, after mice and monkeys, to receive its implants.

[Image: Elon Musk holding a prototype neural chip. Credit: Neuralink]

The electrodes relay detected neural pulses to a processor that can read information from up to 1,536 channels, roughly 15 times as many as current systems implanted in humans. That meets the baseline for scientific research and medical applications, and it is potentially superior to Belgian rival Imec’s Neuropixels technology, which can gather data from thousands of separate brain cells at once. Musk says Neuralink’s commercial system could include as many as 3,072 electrodes per array, distributed across 96 threads.
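
To make the “roughly 15 times” figure concrete, the hypothetical comparison below assumes a baseline of about 100 channels, typical of the Utah-array systems implanted in humans today; the Neuralink figures come from the paragraph above.

    # Hypothetical channel-count comparison. The ~100-channel baseline is an
    # assumption (typical of Utah-array systems), not a Neuralink figure.

    BASELINE_CHANNELS = 100      # assumed baseline for implanted human systems
    N1_CHANNELS = 1536           # readable channels quoted above
    ARRAY_ELECTRODES = 3072      # possible commercial array
    ARRAY_THREADS = 96

    print(f"Improvement over baseline: ~{N1_CHANNELS / BASELINE_CHANNELS:.0f}x")
    print(f"Electrodes per thread: {ARRAY_ELECTRODES // ARRAY_THREADS}")
    # Improvement over baseline: ~15x
    # Electrodes per thread: 32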

The interface contains inertial measurement sensors, pressure and temperature sensors, and a battery that lasts “all day” and charges inductively, along with analog pixels that amplify and filter neural signals before they’re converted into digital bits. (Neuralink asserts the analog pixels are at least 5 times smaller than the known state of the art.) Each analog pixel captures a full neural signal at 20,000 samples per second with 10 bits of resolution, which works out to roughly 200 Kbps per channel, or on the order of 200 Mbps of neural data across the 1,024 channels recorded.
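
That bandwidth arithmetic is easy to verify; the sketch below reproduces it, assuming every channel runs continuously at the sample rate and bit depth quoted above.

    # Back-of-the-envelope neural data rate, assuming all channels sample
    # continuously at the figures quoted above.

    SAMPLES_PER_SECOND = 20_000
    BITS_PER_SAMPLE = 10
    CHANNELS = 1024

    per_channel_kbps = SAMPLES_PER_SECOND * BITS_PER_SAMPLE / 1_000
    total_mbps = per_channel_kbps * CHANNELS / 1_000

    print(f"Per channel: {per_channel_kbps:.0f} Kbps")          # 200 Kbps
    print(f"All {CHANNELS} channels: ~{total_mbps:.0f} Mbps")   # ~205 Mbps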

[Image: Neuralink’s N1/Link sensor, shown at Neuralink’s conference in 2019. Credit: Neuralink]

Once the signals are amplified, they’re digitized by on-chip analog-to-digital converters that directly characterize the shape of neural spikes. According to Neuralink, it takes the N1/Link only 900 nanoseconds to process incoming neural data.
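
Neuralink hasn’t published how the chip characterizes spike shapes, so the sketch below is only a generic illustration of the kind of processing involved: a threshold-crossing spike detector over a digitized trace, using a noise-scaled threshold that is a common convention in the literature rather than anything Neuralink has described.

    # Minimal sketch of spike detection on one digitized channel, assuming a
    # simple threshold-crossing rule. Illustrative only; Neuralink's on-chip
    # algorithm is not publicly documented.
    import numpy as np

    def detect_spikes(trace, k=4.5):
        """Return sample indices where the trace first crosses -k * estimated noise."""
        noise = np.median(np.abs(trace)) / 0.6745   # robust noise estimate
        below = trace < -k * noise                  # extracellular spikes are negative-going
        return np.flatnonzero(below & ~np.roll(below, 1))  # first sample of each crossing

    # Usage on a synthetic 1-second trace (20,000 samples) with three injected spikes.
    rng = np.random.default_rng(0)
    trace = rng.normal(0.0, 1.0, 20_000)
    trace[[5_000, 12_000, 18_000]] -= 12.0
    print(detect_spikes(trace))  # indices at (or near) 5000, 12000, 18000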

The N1/Link pairs wirelessly over Bluetooth, through the skin, with a smartphone up to 10 meters away. Neuralink claims the implants will eventually be configurable through an app and that patients might be able to control buttons and redirect outputs from the phone to a computer keyboard or mouse. In a prerecorded video played at today’s conference, the N1/Link was shown feeding signals to an algorithm that predicted the positions of all of a pig’s limbs with “high accuracy.”
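
Neuralink hasn’t said which model produced those limb-position predictions. A common baseline for this kind of decoding is a linear map from binned spike counts to limb coordinates, fit with ridge regression; the sketch below shows that approach on synthetic data, and every number in it (bin count, channel count, regularization) is an assumption for illustration.

    # Minimal sketch of a linear limb-position decoder: ridge regression from
    # binned spike counts to limb coordinates. Synthetic data; not Neuralink's
    # actual algorithm or dataset.
    import numpy as np

    rng = np.random.default_rng(1)
    n_bins, n_channels, n_coords = 2_000, 1_024, 8   # e.g. 4 limbs x (x, y), assumed

    # Synthetic training set: spike counts with a hidden linear relation to position.
    true_w = rng.normal(0.0, 0.05, (n_channels, n_coords))
    spike_counts = rng.poisson(2.0, (n_bins, n_channels)).astype(float)
    positions = spike_counts @ true_w + rng.normal(0.0, 0.1, (n_bins, n_coords))

    # Closed-form ridge regression: W = (X'X + lambda*I)^-1 X'Y
    lam = 1.0
    X, Y = spike_counts, positions
    W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

    r2 = 1.0 - np.sum((Y - X @ W) ** 2) / np.sum((Y - Y.mean(axis=0)) ** 2)
    print(f"In-sample R^2: {r2:.3f}")  # close to 1, since the synthetic data is linear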

One of Neuralink’s aspirational goals is to allow a person with tetraplegia to type at 40 words per minute. Eventually, Musk hopes Neuralink’s system will be used to create what he describes as a “digital super-intelligent [cognitive] layer” that enables humans to “merge” with artificially intelligent software. Millions of neurons could be influenced or written to with a single N1/Link sensor, he says.

Potential roadblocks

High-resolution brain-machine interfaces, or BMIs for short, are predictably complicated: they must be able to read neural activity and pick out which groups of neurons are performing which tasks. Implanted electrodes are well suited to this, but historically, hardware limitations have caused them to come into contact with more than one region of the brain or to produce interfering scar tissue.

That has changed with the advent of fine biocompatible electrodes, which limit scarring and can target cell clusters with precision (though questions around durability remain). What hasn’t changed is a lack of understanding about certain neural processes.

[Image: The N1/Link’s capabilities. Credit: Neuralink]

Neural activity is rarely confined to a single region, such as the prefrontal cortex or hippocampus; instead, it plays out across many areas at once, making it difficult to pin down. Then there’s the matter of translating neural electrical impulses into machine-readable information: researchers have yet to crack the brain’s encoding. Pulses from the visual center aren’t like those produced when formulating speech, and it is sometimes difficult to identify where signals originate.

It’ll also be incumbent on Neuralink to convince regulators to approve its device for clinical trials. Brain-computer interfaces are regulated as medical devices requiring approval from the FDA, and obtaining that approval can be time-consuming and costly.

Perhaps anticipating this, Neuralink has expressed interest in opening its own animal testing facility in San Francisco, and the company last month published a job listing for candidates with experience in phones and wearables. In 2019, Neuralink claimed it performed 19 surgeries on animals and successfully placed wires about 87% of the time.

The road ahead

None of these challenges has discouraged Neuralink, which has more than 90 employees and has raised $158 million in funding, including at least $100 million from Musk. They may, however, have been exacerbated by what STAT News described in a report as a “chaotic internal culture.” Responding to the story through a New York Post inquiry, a Neuralink spokesperson said many of STAT’s findings were “either partially or completely false.”

While Neuralink expects that inserting the electrodes will initially require drilling holes through the skull, it hopes to soon pierce the bone with a laser instead, leaving only a series of small holes. That work might lay the groundwork for research into alleviating conditions like Parkinson’s disease and epilepsy, and into helping physically disabled patients hear, speak, move, and see.

That’s less far-fetched than it might sound. Columbia University neuroscientists have successfully translated brain waves into recognizable speech. A team at the University of California, San Francisco built a virtual vocal tract capable of simulating human verbalization by tapping into the brain. In 2016, a brain implant allowed an amputee to use their thoughts to move the individual fingers of a prosthetic hand. And experimental interfaces have allowed monkeys to control wheelchairs and type at 12 words a minute using only their minds.

“I think at launch, the technology is probably going to be … quite expensive. But the price will very rapidly drop,” Musk said. “We want to get the price down to a few thousand dollars, something like that. It should be possible to get it similar to LASIK [eye surgery].”

