What does it mean to be a conscious creature? We’ll understand vastly more about the nature of consciousness as a result of this. And then ultimately, I think this helps mitigate the civilizational risk of artificial intelligence.
ELON MUSK: We already sort of have three layers of thinking. There's the limbic system, which is kind of your instincts; your cortical system, which is your higher-level planning and thinking; and then the tertiary layer, which is the computers and machines that you interact with, like your phone and all the applications you use. So people are actually already cyborgs. You can maybe get an intuitive sense of this from how much you miss your phone if you leave it behind. Leaving your phone behind is almost like missing-limb syndrome. Your phone is somewhat of an extension of yourself, as is your computer.
ELON MUSK: So you already have this digital tertiary layer, but the bandwidth between your cortex and your digital tertiary layer is limited by speech and by how fast you can move your fingers and how fast you can consume information visually. So I think it’s actually very important for us to address that input-output bandwidth constraint in order for the collective will of humanity to match the will of artificial intelligence. That’s my intuition, at least.
ELON MUSK: So let's see. What this presentation is mostly about is attracting smart humans to come and work at Neuralink.
DJ: Hey, everyone. My name is DJ. I'm co-founder and president of Neuralink. As Elon mentioned, we're standing in the middle of our robot space. We have the space set up for this event, but this is actually where some of our most advanced, next-generation surgical robots are built. So welcome to our space.
It's important to highlight that this technology is not being built in the dark. This is not a secret lab where we're not sharing any of the progress. In fact, we're sharing our progress very openly, as well as telling you exactly what we're going to be doing, and we're hoping to make that progress as diligently, safely, and carefully as possible.
Clinical Trials and Progress
So to start off: two years ago, when we did our previous fundraising round, we outlined this path and timeline to First Human. We currently have a clinical trial in the U.S. for a product that we call Telepathy, which allows users to control a phone or computer purely with their thoughts. You're going to see how we do this and the impact it has had.
Not only have we launched this clinical trial, but as of today we have not just one but seven participants. We also have approval to launch this trial in Canada, the UK, and the UAE.
Before we dive into what this technology is and what we built, I wanted to quickly share a video of when our first five participants met each other for the first time. So here you go.
First Participant Meeting
All right. We have everyone together. What’s up, guys? Thanks, everybody, for joining. Definitely want to introduce all of you.
NOLAN: Yeah. I’m Nolan, a.k.a. P1.
ALEX: My name’s Alex. I am the second participant in the Neuralink study.
BRAD: I am Brad Smith, a.k.a. the ALS Cyborg, P3.
MIKE: My name is Mike. P4.
UNIDENTIFIED SPEAKER: Crewman, what’s up? It is pretty sweet. That’s sweet, yeah.
Yeah, I have a little Arduino that takes input from my quad stick and converts it into a PPM signal to go to an RC truck. Cool, a little rock crawler. Well, with the BCI, I wrote code. I wanted to fly the plane with the quad stick. That's awesome.
The best thing I like about Neuralink is being able to continue to provide for my family and continue working.
I think my favorite thing is probably being able to turn on my TV. Yeah, like the first time in two and a half years I was able to do that, so that’s pretty sweet.
I like shooting zombies. That’s kind of nice. I’m excited to see what BCI’s got going on.
I have a question. What’s your shirt say? It says I do a thing called whatever I want.
Usage Statistics and Results
DJ: Now, one of the major figures of merit that we track is monthly hours of independent BCI use. Effectively, are they using the BCI? Not at the clinic, but at home. This is a plot of our first five participants and their usage per month over the course of the last year and a half. We're averaging around 50 hours a week of usage, and in some cases peak usage of more than 100 hours a week, which is pretty much every waking moment.
So I think it’s been incredible to see all of our participants demonstrating greater independence through their use of BCI. Not only that, we’ve also accelerated our implantation cadence as we’ve amassed evidence of both clinical safety as well as value to our participants. So to date, we have four spinal cord injury participants as well as three ALS participants with the last two surgeries happening within one week of each other. And we’re just beginning. This is just the tip of the iceberg.
Whole Brain Interface Vision
Our end goal is to really build a whole-brain interface. And what do we mean by whole-brain interface? We mean being able to listen to neurons everywhere, write information to neurons anywhere, and have fast wireless data transfer to enable a high-bandwidth connection from our biological brain to external machines, and to do all of this with fully automated surgery while enabling 24-hour use.
And towards that goal, we're really working on three major product types. Elon mentioned earlier that our goal is to build a generalized input-output platform and technology for the brain. For the output portion, which is extremely slow through our meat sticks, as Elon calls them, the meat hands holding these mics, we're starting out by helping people with movement disorders, who lost the mind-body connection through a spinal cord injury, ALS, or a stroke, regain some of their digital as well as physical independence through a product that we're building called Telepathy. This is our opportunity to build a high-channel read, or output, device.
On the input side, there's an opportunity for us to help people who have lost the ability to see regain that sight through a product we're calling Blindsight. This is our opportunity to build high-channel write capabilities.
And last but not least, we can help people suffering from debilitating neurological dysregulation, psychiatric conditions, or neuropathic pain by reaching any brain region with our electrodes, inserting them not just on the cortical surface but into the sulci as well as deeper parts of the brain, the so-called limbic system, to really enable better opportunities to regain some of that independence.
North Star Metrics and Technology Development
Our North Star metrics are, one, increasing the number of neurons that we can interface with, and two, expanding to many diverse areas, any part of the brain. That starts with microfabrication, or lithography, to increase the number of neurons we can see from a single channel, and with mixed-signal chip design to increase the physical channel count, allowing more information to flow from the brain to the outside world.
Everything we've built from day one of the company has always been read and write capable. With Telepathy, our first product, the focus has been on the read capability, or output. Next we want to hone our write capability and show that, by accessing deeper regions within the visual cortex, we can actually achieve functional vision.
Three-Year Product Evolution Roadmap
So now, just to step you through what the product evolution is going to look like over the next three years: today, we have 1,000 electrodes in the motor cortex, the small part of the brain you see in this animation called the hand-knob area, which allows participants to control computer cursors as well as gaming consoles.
Next quarter, we're planning to implant in the speech cortex to directly decode attempted words from brain signals into speech.
And in 2026, not only are we going to triple the number of electrodes from 1,000 to 3,000 for more capabilities, we're planning to have our first Blindsight participant to enable navigation.
In 2027, probably another triple, to 10,000 channels, and we'll also enable, for the first time, multiple implants, so not just one in the motor cortex, speech cortex, or visual cortex, but all of the above.
And finally, in 2028, our goal is to get to more than 25,000 channels per implant, have multiple of these, have the ability to access any part of the brain for psychiatric conditions, pain, and dysregulation, and also start to demonstrate what it would be like to actually integrate with AI.
So we're really excited to be able to do this, not just to address these debilitating neurological conditions, but to go beyond the limits of our biology. This vertical integration and the talent and team that we have at Neuralink have been, and will continue to be, the key recipe for the rapid progress we will be making. Just to recap real quick: Neuralink is implanted with a precision surgical robot, it's completely invisible once implanted, and one week later users are able to see their thoughts transform into actions. To share more about what that experience is like, I'd like to welcome Sehej to the stage.
What’s up guys? My name is Sehej. I’m from the Brain-Computer Interface team here at Neuralink and I’m going to be talking about two things today. The first thing is what exactly is the Neuralink device capable of doing right now? And the second one is how does that actually impact the day-to-day lives of our users?
Device Capabilities: Control by Thought
SEHEJ: Very simply put, what the Neuralink device does right now is allow you to control devices just by thinking. To put that a bit more concretely, I'm about to play a video of our first user. His name is Nolan, if you remember from DJ's section. Nolan is looking at a normal off-the-shelf MacBook Pro, and with his Neuralink device, as you're going to see, he's able to control the cursor simply with his mind: no eye tracking, no other sensors.
And what's special about this particular moment is that this is the first time someone is using a Neuralink device to fully control their cursor. This is not your ordinary brain-controlled cursor. This is actually record-breaking control, literally on day one, beating decades of brain-computer interface research. And I'm about to show you the clip.
On day one, Nolan breaking the BCI world record. Whoa! Whoa! Whoa! He's a new world record holder. Oh, shit. Oh, shit. I thought it was higher. I thought I would have to get to five or something. Oh my gosh, that's crazy. It's pretty cool.
Gaming Applications
SEHEJ: Yeah, another really fun thing you can do with the Neuralink device, outside of controlling a computer cursor, is plug it in through USB to a lot of different devices. Here we actually have Nolan playing Mario Kart. Now, what's special about this particular clip is that Nolan is not the only cyborg playing Mario Kart in it. We have a whole community of users, as mentioned earlier, and this is literally five of our first Neuralink users playing Mario Kart together over a call.
Now, yeah, Mario Kart is cool. You're using one joystick and clicking a couple of buttons to throw items. What would be even cooler is if you could control two joysticks at once, simultaneously, with your mind. What I'm about to show you is, I think, the first time someone has played a first-person shooter game with a brain-computer interface. This is Alex and RJ playing Call of Duty, controlling one joystick to move and the other joystick to point their gun and shoot. Here's Alex shooting another player.
Oh, dear God. Oh, that was a good shot. I don’t know how to do it. I want him to freaking shoot as well when I do that. RJ, Alex got you. I know, dude, shot me in the face.
Real-World Impact on Daily Life
SEHEJ: Now that we have a bit of a sense of what the BCI can do, a very important question to answer is how it impacts the day-to-day lives of the people who use it every day. So I'm about to show you a clip, going back to Nolan for a second. A couple of months ago, we simply asked him, randomly during a day, how he enjoys using the BCI. This is his candid reaction.
NOLAN: I work basically all day from when I wake up, I’m trying to wake up at like six or seven a.m. and I’ll do work until session. I’ll do session and then I’ll work until, you know, 11, 12 p.m. or 12 a.m. I’m doing like, I’m learning my languages. I’m learning my math. I’m like relearning all of my math. I am writing. I am doing a class that I signed up for. And I just, I wanted to point out that, like this is not something I would be able to do without the Neuralink.
SEHEJ: Next I want to talk a bit about Brad. You may already know him as the ALS cyborg. Brad has ALS, and what separates him from our other users is that he's non-verbal, so he can't speak. This is relevant because, at least before the Neuralink, he relied on an eye-gaze machine to communicate, and a lot of eye-gaze machines can't be used outdoors; you really need a dark room. So for the last six years, since Brad was diagnosed with ALS, he's been largely unable to leave his house. Now, with the Neuralink device, we're going to show you a clip of him with his kids at the park, shot by Ashley Vance and the team.
Okay, get ready. Yay! You have to speak with me.
BRAD: I am absolutely doing more with Neuralink than I was doing with eye gaze. I’ve been a Batman for a long time, but I go outside now. Going outside has been a huge blessing for me. And I can control the computer with telepathy.
Dad’s watching. Look, he’s watching on the camera. Can you move his arm?
Robotic Control and Future Applications
SEHEJ: The last user I want to talk about is Alex. You've seen some clips of him earlier. What's special about Alex to me is that he's a fellow left-handed guy who writes in cursive all the time. He mentioned that since his spinal cord injury, three or four years ago, he's been unable to draw or write, and he always brags about how good his handwriting was. So we actually got to put it to the test. We gave him a robotic arm, and I think this is the first time he tried using the robotic arm to write anything. This is a sped-up version of him writing at the convoy trial and drawing something.
$1,000, yeah.
SEHEJ: Now, controlling a robotic arm is cool, but this one has a clamp. What would be cooler is if you could decode the actual fingers, the actual wrist, all the muscles of the hand, in real time. Just in the past couple of weeks, we were able to do that with Alex, and you're about to see him and his uncle playing a game.
Rock, paper, scissors, shoot. Rock, paper, scissors, shoot. Rock, paper, scissors, shoot. Rock, paper, scissors, shoot. That was his thought. I mean, bullshit. He didn’t even say anything. You got more? Cool. Controlling, yeah, that’s pretty dope. I don’t know.
SEHEJ: Controlling a robotic hand on a screen is obviously not super helpful for most people. Fortunately, we have connections with Tesla, who have the Optimus hand, and we're actively working on giving Alex an Optimus hand so that he can actually control it in his real life. Here's a replay of the end of that video using Alex's neural signals on an Optimus hand. Sean, if you want to play that.
Future Vision: Full Body Control and Limb Replacement
ELON MUSK: Actually, let me add a few things to that. As we advance the Neuralink devices, you should be able to have full-body control of, and sensor feedback from, an Optimus robot. So you could basically inhabit an Optimus robot, not just the hand, the whole thing; you could mentally remote into an Optimus robot, which would be kind of cool. The future is going to be weird, but pretty cool.
And then another thing that can be done, for people who have, say, lost a limb, an arm or a leg, is that we think in the future we'll be able to attach an Optimus arm or leg. I remember that scene from Star Wars where Luke Skywalker gets his hand chopped off with a lightsaber and gets a kind of robot hand, and I think that's the kind of thing we'll be able to do in the future, working with Neuralink and Tesla. So it goes far beyond just operating a robot hand, to replacing limbs and having kind of a whole-body robot experience.
And then another thing that I think is very likely in the future is being able to bridge where neurons are damaged. You can take the signal from the brain and transmit it past where the neurons are damaged or severed to the rest of the body, so you could reanimate the body. If you have a Neuralink implant in the brain and another in the spinal cord, then you can bridge the signals, and you could walk again and have full-body functionality. Obviously that's what people would prefer; we realize that would be the preferred outcome. So even if you have a broken neck, we believe, and at this point I'd say I'm fairly confident, that at some point in the future we'll be able to restore full-body functionality. So thank you.
Brain-Computer Interface Applications
NIR: Hello everyone, my name is Nir, and I lead the BCI group. The videos that Sehej just showed you, I've probably watched them a thousand times, but I still get goosebumps every time I watch them. I think this is one of the cool perks of a job here at Neuralink: you might get goosebumps every week, or maybe every few days in a good week. And this is really fun.
As an engineer, it's really cool because you can build a new machine learning model or a new software feature and test it the same day with a participant and get feedback. And you already saw that with our first device, Telepathy, we can address the very diverse needs of our different users, from moving a cursor, to playing games, to moving a robotic arm with multiple fingers. We could not have done it without the Neuralink device.
The Neuralink device gives us something that no other device can: single-neuron recordings from thousands of channels simultaneously. The Telepathy product is basically recording neural activity from a small area in the motor cortex that is involved in the execution of hand and arm movements. But if we go only about two or three inches below, there's another brain area that's involved in the execution of speech. And with the same device, the same machine learning model architecture, the same software pipeline, and the same surgical robot, we can build a new application, and we can do it very quickly.
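To make the "same stack, new brain area" idea concrete, here is a minimal, purely illustrative sketch, not Neuralink's actual code; the class names, feature shapes, and the 40-word vocabulary are all assumptions. The spike-processing front end stays fixed while only the task-specific output head changes between a cursor decoder and a speech decoder.

```python
# Illustrative sketch only: a shared spike-feature front end feeding
# different task-specific readout heads. Names and shapes are assumed.
import numpy as np

class SpikeFeatureExtractor:
    """Smooths binned per-channel spike counts into firing-rate features."""
    def __init__(self, n_channels: int, bin_ms: int = 15):
        self.n_channels = n_channels
        self.bin_ms = bin_ms

    def __call__(self, spike_counts: np.ndarray) -> np.ndarray:
        # spike_counts: (n_bins, n_channels); simple boxcar smoothing per channel.
        kernel = np.ones(5) / 5.0
        return np.apply_along_axis(
            lambda col: np.convolve(col, kernel, mode="same"), 0, spike_counts)

class LinearHead:
    """Task-specific readout: cursor velocities or word logits."""
    def __init__(self, n_features: int, n_outputs: int):
        rng = np.random.default_rng(0)
        self.W = rng.normal(scale=0.01, size=(n_features, n_outputs))

    def __call__(self, features: np.ndarray) -> np.ndarray:
        return features @ self.W

# Same front end, different heads: motor cortex -> 2D cursor velocity,
# speech cortex -> logits over a hypothetical 40-word vocabulary.
extractor = SpikeFeatureExtractor(n_channels=1024)
cursor_head = LinearHead(n_features=1024, n_outputs=2)
speech_head = LinearHead(n_features=1024, n_outputs=40)

features = extractor(np.random.poisson(2.0, size=(200, 1024)).astype(float))
cursor_velocity = cursor_head(features)   # (200, 2)
word_logits = speech_head(features)       # (200, 40)
```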
Silent Communication Technology
It's really interesting: if we can decode someone's intention to speak, silent, non-vocal communication, we can use that to revolutionize the way we interact with computers, with technology, and with information. Instead of typing with your fingers, moving a mouse, or talking to your phone, you'll be able to interact with a computer at the speed of thought. It will make the interaction much faster and much more intuitive; computers will understand what you want to do. And we can also expand that to AI. We can build an interface with AI through which you'll be able to retrieve information and store your thoughts, anywhere, anytime, privately and silently.
Again, this is possible because we build a fundamental technology platform and we do everything in-house. We own the entire stack, from neurons to pixels on the user's computer. Now I'll pass it to Ruz to talk about the UI for BCI.
User Interface Design
RUZ: Thank you, Nir. Each spike that our implant detects goes on a fairly remarkable journey to ultimately form a pixel on a participant's display. And that experience starts, of course, with unboxing: the very first time a participant pairs with and meets their implant, this invisible part of their body, and sees their own spikes materialize across the display.
From there, they’ll go into body mapping and actually imagine moving their arm again and get a feel for what feels natural to them and what doesn’t. And they’ll take that into calibration, using one of those motions to actually move a cursor again, iteratively refining their control as they go throughout this process until, finally, they’re teleported back to their desktop and can experience the magic of neural control for the very first time.
And our control interfaces are where the OS integration that we do really shines, letting us adapt both control and feedback for every interaction. For familiar interactions like scrolling, we can surface an indicator over the scrollable parts of the display, add a touch of gravity to automatically pop a participant's cursor onto that indicator as they approach, show the actual velocities that we decode inside of it, and add a bit of momentum to those velocities to carry them forward as they glide across the page.
There are also unique interactions that we need to solve for in this space. For example, when a participant is watching a movie or just talking to somebody next to them, the brain is very active still, and that activity can actually induce motion in the cursor, distracting them from that moment. So, when a participant wants to just get their cursor out of the way, they can push it into the edge of the display to park it there. And, of course, we add gravity to sort of hold it still, but they can push it out with either just a firm push or, in this case, a gesture.
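As a rough sketch of the cursor physics described above, here is one way the gravity, momentum, and edge-parking behaviors could be layered on top of a decoded velocity. This is illustrative only, not Neuralink's UI code, and the gain, momentum, and threshold values are assumptions.

```python
# Illustrative cursor physics on top of a decoded velocity (all constants assumed).
import numpy as np

GRAVITY_GAIN = 0.15         # pull strength toward a nearby scroll indicator
MOMENTUM = 0.85             # fraction of the previous velocity carried forward
PARK_PUSH_THRESHOLD = 2.0   # firm push needed to park at, or leave, an edge
INDICATOR_RADIUS = 60       # pixels within which gravity engages

def update_cursor(pos, vel_prev, vel_decoded, indicator_pos, parked, screen):
    """One frame of cursor physics. pos and velocities are 2D numpy arrays."""
    if parked:
        # Held at the edge; only a firm outward push releases it.
        if np.linalg.norm(vel_decoded) > PARK_PUSH_THRESHOLD:
            parked = False
        return pos, np.zeros(2), parked

    # Momentum: blend the decoded velocity with the previous frame's velocity.
    vel = MOMENTUM * vel_prev + (1.0 - MOMENTUM) * vel_decoded

    # Gravity: nudge the cursor toward the scroll indicator when it is close.
    to_indicator = indicator_pos - pos
    if np.linalg.norm(to_indicator) < INDICATOR_RADIUS:
        vel += GRAVITY_GAIN * to_indicator

    pos = np.clip(pos + vel, [0.0, 0.0], screen)

    # Parking: a firm push into the display edge holds the cursor there.
    at_edge = (pos[0] in (0.0, screen[0])) or (pos[1] in (0.0, screen[1]))
    if at_edge and np.linalg.norm(vel_decoded) > PARK_PUSH_THRESHOLD:
        parked = True
    return pos, vel, parked
```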
And, of course, it goes without saying that all of these control interfaces are designed hand in hand with our participants, so a huge shout-out to both Nolan and Brad for helping us design these two. Those control interfaces, of course, extend to typing. We have a great software keyboard that does everything you'd expect it to: popping up when a participant clicks on a text field, giving them feedback about the click along the surface of the key, and supporting both dictation and swipe.
Machine Learning Engineering
HARRISON: Hi, everyone. I’m Harrison, an ML engineer here at Neuralink, and I must say, being an ML engineer at Neuralink is a bit like being a kid in a candy store. When you think of the inputs to most ML systems out there, you might think of pixels, of tokens, or of a user’s Netflix watch history. The input to our systems is a little different. It is pure, raw brainpower.
And when we think about the ML systems we can build here at Neuralink, really we’re limited by our imagination and our creativity. There’s no reason our ML systems can’t do anything that the human brain can do, such as controlling a phone, typing, or even gaming.
Right here to my left is actual footage of Alex, one of our participants, playing a first-person shooter against RJ, another one of our participants. Now, for those unfamiliar with first-person shooters, this is not a trivial feat. It requires two fully independent joysticks, or four continuous degrees of control, as well as multiple reliable buttons.
Now, contrary to popular belief, the Neuralink does not read people's minds. It's simply reading neuronal activations corresponding to motor intent. So one of the fun challenges of this project was figuring out which motions would be mapped to the joysticks. We started with the typical left thumb and right thumb, but quickly found that the dominant hand overshadowed the non-dominant hand. My personal favorite was having one of our participants imagine walking for the left joystick and aiming for the right joystick, so in-game they were simply doing naturalistic motions, like you might in virtual reality in Ready Player One, and that was really cool to watch. What we ended up with was the thumb for the left joystick and the wrist for the right joystick, and I challenge the audience to try to replicate their motions. I'm really in awe of them being able to pull this off.
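For a concrete picture of that mapping, here is a hypothetical sketch, not Neuralink's implementation, of how two decoded motor intents plus a discrete click intent could be routed onto a standard gamepad. The field names, gain, and trigger assignment are assumptions.

```python
# Hypothetical routing of decoded motor intents onto a virtual gamepad.
from dataclasses import dataclass

@dataclass
class DecodedIntent:
    thumb_xy: tuple[float, float]   # decoded 2D thumb-movement intent
    wrist_xy: tuple[float, float]   # decoded 2D wrist-movement intent
    click: bool                     # decoded discrete "press" intent

def to_gamepad(intent: DecodedIntent, gain: float = 1.0) -> dict:
    """Map decoded intents onto gamepad axes in [-1, 1] plus a trigger."""
    clamp = lambda v: max(-1.0, min(1.0, gain * v))
    return {
        "left_stick":  (clamp(intent.thumb_xy[0]), clamp(intent.thumb_xy[1])),   # move
        "right_stick": (clamp(intent.wrist_xy[0]), clamp(intent.wrist_xy[1])),   # aim
        "right_trigger": 1.0 if intent.click else 0.0,                           # fire
    }

state = to_gamepad(DecodedIntent(thumb_xy=(0.2, -0.9), wrist_xy=(0.7, 0.1), click=True))
```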
Calibration Progress
I want to talk a bit about the progress in our cursor calibration experience. To my left, you can see RJ completing his first-ever cursor calibration with a redesigned flow: from the open-loop stage, where we first gather information about his intent and how to map the neural activity, to the first time he controls the cursor, to the final product, where he has smooth and fluid control of his computer.
And most remarkably, this experience took only 15 minutes from start to finish: 15 minutes from no control to fluid computer use. Contrast that with a year and a half ago with P1, when it took multiple hours to get to the same level of control, with several engineers standing around a table pulling their hair out. This time, there was virtually no need for Neuralink engineers to even be at the session.
This was basically an out-of-the-box experience for our participants. And even more remarkably, we're continuing to smash day-one records, with RJ achieving 7 BPS on his very first day with a Neuralink.
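For context, BPS here is bits per second. In BCI cursor studies, grid-selection tasks are commonly scored with the achieved-bitrate formula of Nuyujukian et al.; assuming that convention applies here, which is my assumption rather than something stated in the talk, it is

$$ B = \frac{\log_2(N-1)\,\max(S_c - S_i,\,0)}{t}, $$

where $N$ is the number of selectable targets, $S_c$ and $S_i$ are the counts of correct and incorrect selections, and $t$ is the elapsed time in seconds. On a hypothetical 35-target grid, $\log_2(34) \approx 5.1$, so 7 BPS corresponds to roughly 1.4 net correct selections per second.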
Now, such an effective and efficient calibration process is only made possible by high-fidelity estimates of the user's intention, which serve as labels. To briefly illustrate just how challenging a problem that is, this is an animation of me trying to draw circles on my desktop with a mouse. The task was simple: draw uniform circles at a constant speed, repeatedly. And as you can see from the animation, I am horrible at it. Even though my intent was obvious and unambiguous, the execution was really poor; there is a ton of variation in both the speed and the shape itself.
To visualize this a little differently, each row here is one of those circles unwound in time with synchronized starts. And you can just see how much variation there is in the timing of each circle as well as what I’m doing at any given point in time.
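One common way the field tackles this labeling problem, offered here only as an illustration and not necessarily what Neuralink does, is intention estimation: assume that whatever the messy executed trajectory, the user intended to move straight toward the current target at a roughly constant speed, and use that as the regression label.

```python
# Sketch of intention-estimation labels (ReFIT-style re-aiming); illustrative only.
import numpy as np

def intention_labels(cursor_xy: np.ndarray, target_xy: np.ndarray,
                     speed: float = 1.0) -> np.ndarray:
    """cursor_xy, target_xy: (T, 2) positions over time.
    Returns (T, 2) intended-velocity labels with constant magnitude `speed`."""
    direction = target_xy - cursor_xy
    dist = np.linalg.norm(direction, axis=1, keepdims=True)
    # Point at the target at constant speed; zero velocity once on target.
    unit = np.where(dist > 1e-6, direction / np.maximum(dist, 1e-6), 0.0)
    return speed * unit
```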
Orthogonal to the labeling problem is neural non-stationarity, or the tendency of neural signals to drift over time. And I think that’s honestly a beautiful thing, right? If your neural signals didn’t drift, you couldn’t grow. When you wake up the next day, you’re not the same person you were the day before. You’ve learned, you’ve grown, you’ve changed. And so, too, must your neural data change.
This animation is a simple illustration of the representation learned by the decoder and how it drifts the further we get from the day it was trained. This is one of the key challenges we need to solve here at Neuralink to unlock a fluid, product-level experience for our users.
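As one simple illustration of how such drift can be handled, and to be clear this is a generic technique rather than Neuralink's stated approach, a decoder's inputs can be continuously re-standardized so that slow changes in baseline firing rates are absorbed before they reach the model.

```python
# Generic drift-compensation sketch: per-channel running standardization.
import numpy as np

class AdaptiveNormalizer:
    def __init__(self, n_channels: int, alpha: float = 0.001):
        self.mean = np.zeros(n_channels)
        self.var = np.ones(n_channels)
        self.alpha = alpha                      # adaptation rate (assumed value)

    def __call__(self, rates: np.ndarray) -> np.ndarray:
        """rates: (n_channels,) firing-rate features for the current time bin."""
        self.mean = (1 - self.alpha) * self.mean + self.alpha * rates
        self.var = (1 - self.alpha) * self.var + self.alpha * (rates - self.mean) ** 2
        return (rates - self.mean) / np.sqrt(self.var + 1e-6)

norm = AdaptiveNormalizer(n_channels=1024)
z = norm(np.random.poisson(3.0, size=1024).astype(float))   # standardized features
```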
Blindsight: Restoring Vision Through Brain Stimulation
Blindsight is our project to build a visual prosthesis to help the blind see again. A blind user would wear a pair of glasses with an embedded camera and receive an implant in their visual cortex. Scenes from the environment are recorded by the camera and processed into patterns of stimulation delivered to the brain, causing visual percepts and restoring functionality.
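A deliberately over-simplified sketch of that camera-to-stimulation idea is below; real image processing for a visual prosthesis is far more sophisticated, and the grid size and threshold here are assumptions. It just downsamples a camera frame to the number of available stimulation sites and marks the bright ones for stimulation.

```python
# Over-simplified camera-frame-to-stimulation-pattern sketch (illustrative only).
import numpy as np

def frame_to_stim_pattern(frame_gray: np.ndarray, n_rows: int, n_cols: int,
                          threshold: float = 0.5) -> np.ndarray:
    """frame_gray: (H, W) image with values in [0, 1].
    Returns an (n_rows, n_cols) boolean grid, one entry per stimulation site."""
    h, w = frame_gray.shape
    # Crop so the image divides evenly, then average-pool down to the site grid.
    cropped = frame_gray[:h // n_rows * n_rows, :w // n_cols * n_cols]
    pooled = cropped.reshape(n_rows, h // n_rows, n_cols, w // n_cols).mean(axis=(1, 3))
    return pooled > threshold

frame = np.random.rand(480, 640)                       # stand-in for a camera frame
pattern = frame_to_stim_pattern(frame, n_rows=16, n_cols=16)
```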
Now, Blindsight will be enabled by placing our implant into the visual cortex. This is a new brain area for us, and it brings new opportunities and challenges. The surface of the brain in the visual cortex represents just a few degrees of visual angle at the center of the visual field; larger fields of view are represented deep within the cortical folds of the calcarine fissure. Our threads are able to access these deeper structures, providing the possibility of restoring vision over a functionally useful visual field.
So the N1 implant has had experimental stimulation capabilities for quite some time, but our new S2 chip is designed from the ground up for stimulation. It provides over 1,600 channels of electrical stimulation, high dynamic range recording capabilities, and a wide range of micro-stimulation currents and voltages. We can achieve these capabilities because we are vertically integrated, and we designed this custom ASIC in-house.
Similarly, we design and fabricate our electrode threads in-house, and here you can see an electron micrograph of one of our standard threads, designed for recording. For Blindsight, our requirements are a little different, and our vertical integration allows us to rapidly iterate on the design and manufacturing of these threads for this new purpose. Here I'm using red arrows to highlight the electrode contacts, which are optimized for stimulation. As you can see, they're a little larger, which results in a lower electrical impedance for the safe and effective charge delivery that Blindsight requires.
Now, how can we calibrate our implant for Blindsight? Here's one way. We stimulate on the array, picking, say, three different channels. The user perceives something, say three spots of light, somewhere in their visual field, and points at them. We track their arm and eye movements and repeat this process for each of the channels on the array. And here's what a simulated example of Blindsight vision could look like after calibration.
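The bookkeeping behind that calibration might look something like the sketch below, where the data structure and field names are hypothetical: stimulate a channel, record where the participant points in the visual field, and accumulate a per-channel estimate of where its evoked phosphene appears.

```python
# Hypothetical calibration bookkeeping: channel -> perceived phosphene location.
from collections import defaultdict
import numpy as np

class PhospheneMap:
    def __init__(self):
        self._samples = defaultdict(list)   # channel id -> list of (azimuth, elevation)

    def add_trial(self, channel: int, azimuth_deg: float, elevation_deg: float):
        """Record where the participant pointed after stimulating `channel`."""
        self._samples[channel].append((azimuth_deg, elevation_deg))

    def location(self, channel: int) -> tuple[float, float]:
        """Average pointed location across repeated stimulations of a channel."""
        return tuple(np.mean(self._samples[channel], axis=0))

pmap = PhospheneMap()
pmap.add_trial(channel=12, azimuth_deg=-3.5, elevation_deg=1.2)
pmap.add_trial(channel=12, azimuth_deg=-3.1, elevation_deg=0.9)
center = pmap.location(12)   # approximately (-3.3, 1.05) degrees
```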
Advanced Medical Imaging and Surgical Planning
Now, I showed you how, for Blindsight, we need to insert threads deeper into the brain than we have previously, and doing this requires state-of-the-art medical imaging. So we worked with Siemens to get some of the best scanners on Earth. We built out our imaging core from scratch in the past year; actually, it was faster than that, about four months from dirt to done. Since bringing the scanners online, we've scanned over 50 internal participants, building a database of human structural and functional anatomy.
What can we do with the imaging information from these scanners? Medical imaging can be used for surgical placement: it lets us parcellate brain regions by their function, and we use our imaging capabilities to refine implant placement for Telepathy. It also gives us the ability to target new brain regions for future products, such as Blindsight or a speech prosthesis.
And we're working towards more capabilities, such as one-click automated planning of surgery, from functional images all the way to robot insertion targets. Here you can see a screen capture from our in-house tooling for end-to-end surgical planning. You can see a region of motor cortex known as the hand knob, and the thread trajectory plans that will be sent directly to the robot. This is a really incredible degree of automation that's only possible because we control the system from one end to the other.
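To give a feel for what an automated plan handed from imaging to the robot might contain, here is a purely hypothetical data-structure sketch: one entry per thread, expressed in a coordinate frame the robot understands. The field names and example values are invented for illustration and are not Neuralink's format.

```python
# Hypothetical surgical-plan data structures (field names and values invented).
from dataclasses import dataclass

@dataclass
class ThreadPlan:
    thread_id: int
    entry_xyz_mm: tuple[float, float, float]   # cortical-surface entry point
    direction_xyz: tuple[float, float, float]  # unit insertion vector
    depth_mm: float                            # insertion depth along that vector
    region: str                                # e.g. "hand_knob"

@dataclass
class SurgicalPlan:
    participant_id: str
    threads: list[ThreadPlan]

plan = SurgicalPlan(
    participant_id="example",
    threads=[ThreadPlan(0, (12.3, -45.6, 60.1), (0.0, 0.0, -1.0), 4.0, "hand_knob")],
)
```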
Next-Generation Surgical Robot
JOHN: My name is John, and I lead the robot mechanical team. This is our current R1 robot; it was used to implant the first seven participants. This robot works really well, but it has a few flaws, one of which is that the cycle time is rather slow: inserting each thread takes, in the best case, 17 seconds, and in many cases external disturbances force us to retry, re-grasping that thread and re-inserting it. To scale the number of neurons we access, through higher channel counts and increased numbers of threads, we need a much faster cycle time.
So let me introduce our next-generation robot, which is right here. By rethinking the way we hold the implant in front of the robot, holding it directly on the robot head, we're able to achieve an 11-times cycle-time improvement: each thread takes one and a half seconds. We also gain a lot of surgery workflow process improvements by deleting the separate operator station and implant stand.
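To put that cycle-time number in perspective, assuming, purely for scale, an implant with 64 threads (the thread count is my assumption, not a figure stated here):

$$ 64 \times 17\ \mathrm{s} \approx 18\ \text{minutes of insertion time, versus}\quad 64 \times 1.5\ \mathrm{s} = 96\ \mathrm{s} \approx 1.6\ \text{minutes,} $$

and the per-thread ratio $17 / 1.5 \approx 11.3$ matches the quoted 11-times improvement.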
Now, the outside of the robot looks pretty similar between the two, but it's what's inside that really counts. Each system has been redesigned from the ground up with a focus on reliability, manufacturability, and serviceability, and using a lot of our vertical integration techniques has given us much more control of the system end to end.
Now, that fast cycle time doesn't mean much if the robot isn't compatible with a significant portion of the human population. Prior to each surgery, we scan a participant's anatomy and ensure that they will be compatible with the robot and vice versa. Unfortunately, the robot isn't compatible with everyone, so we extended the reach of the needle in the next-generation robot, and now we're compatible with more than 99% of the human population. We've also increased the depth to which the needle can insert threads: we can now reach more than 50 millimeters from the surface of the brain, enabling access to new indications.
Manufacturing Innovation and Cost Reduction
We have to produce a ton of custom sterile components for each surgery; we actually supply more than 20 of these parts. Many of them are made with traditional CNC machining, which we do just on the other side of this wall, and some with custom-developed processes, like the femtosecond laser milling used to manufacture the tip of the needle. These processes take quite a bit of time, effort, and cost, so let's take a look at how we're going to reduce the cost and time for one of these components.
The current needle cartridge has a total cycle time of about 24 hours, and the machined components cost about $350. Final assembly is performed by a team of highly skilled technicians: they have to glue a 150-micron-diameter cannula onto a wire-EDM-machined stainless steel base plate, electro-polish a 40-micron wire into a sharp taper, thread that 40-micron wire into a 60-micron hole in the cannula, which is done manually, and then finally laser-weld all the components together.
The next-generation needle cartridge takes only 30 minutes of cycle time and $15 in components. We were able to delete the wire-EDM-machined base plate and the cannula gluing step by switching to an insert-molded component.
JULIAN: Hi, I’m Julian, I’m one of the leads on the implant team. So the way humans communicate today, if they want to output information, is by using their hands and their voice, as I’m doing right now. And if you want to receive information, you use your ears and your eyes, and of course that’s how you’re receiving this very talk. But we built this implant, and this implant is very special, because it is the first time that we’re able to add a completely new mode of data transfer into and out of the brain.
If you look at this device in a nutshell, it's really just sampling voltages in the brain and sending them over radio. But if you zoom out and look at the system end to end, what you actually see is that we're connecting your brain, a biological neural net, to a machine learning model, a silicon neural net, on the right-hand side. And I actually think this is really elegant, because the machine learning model on the right-hand side is in fact inspired by the neurons on the left-hand side. So in some sense, we're really extending the fundamental substrate of the brain, and for the first time, we're able to do this in a mass-market product. That's a very, very special piece of hardware.
Evolution from Prototype to Implantable Device
So these are some of the first implants that we ever built. The electrodes were made with our in-house lithography tools, and the custom ASICs were also designed in-house. This was really a platform for us to develop the technology that allows us to sense microvolt-level signals in the brain across thousands of channels simultaneously. We learned a lot from this, but as you'll notice in the right two images, there are USB-C connectors on these devices; these are not the most implantable implants.
This next set of images shows the wireless implants. There was a complete evolution we went through to add the battery, the antenna, and the radio, and to make it fully implantable. Once it's implanted, it's completely invisible. It's very compact, it's modular, and it's a general platform that you can use in many places in the brain. Going from that top row to the bottom row was very challenging. The implant you see on the bottom right is in fact the device working in seven participants today, augmenting their brains every day and restoring their autonomy.
But getting to that point involved a huge number of formidable engineering challenges. We first had to make a hermetic enclosure that passes a thousand separate conductors through the wall of the device. We had to figure out how to make charging seamless and work within very tight thermal constraints in a very, very small area. And we had to scale up our testing infrastructure so that we could support large-scale manufacturing of very safe devices and have confidence in our iteration cycle.
The Future: Manufacturing Scale and Channel Count
So what's next? We're going to be scaling our manufacturing so that we don't just produce a small number of implants per year, but thousands, and then eventually millions, of implants per year. We're also going to keep increasing channel count. More channels means more neurons sensed, which means more capabilities. In some sense, we often think about a Moore's Law of the neurons we're interfacing with. And in the same way that Moore's Law propelled many subsequent revolutions in computing, we think that sensing more and more neurons will completely redefine how we interact with computers and with reality at large.
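Using the roadmap numbers from earlier in the presentation, roughly 1,000 channels today growing to more than 25,000 channels per implant by 2028, the implied growth rate is

$$ \left(\frac{25{,}000}{1{,}000}\right)^{1/3} \approx 2.9\times \text{ per year}, \qquad \text{i.e. a doubling time of } \frac{\ln 2}{\ln 2.9} \approx 0.65\ \text{years}, $$

which is considerably faster than the classic two-year doubling of transistor-count Moore's Law.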
The Bandwidth Revolution
I want to leave you with one final thought. When I was a child, I used a 56-kilobit modem to access the internet. If you remember what it’s like, you would go to a website…
ELON MUSK: 56? You’re lucky. You’re a lucky bastard. Yeah, yeah. When I was a child, we had acoustic couplers.
JULIAN: Oh, yeah. Okay. Does it just beep at each other?
ELON MUSK: Yeah. The first modem was the acoustic coupler. Incredible device, honestly.
JULIAN: But then, I guess if you're my age, you started with the 56-kilobit modem. You would go to a website, there would be an image, and it would load slowly, pixel by pixel, on the screen. That's what it's like to be bandwidth-limited. Now imagine using the current internet with that same modem. It's inconceivable; it would be impossible. So what broadband internet did to the 56-kilobit modem is what this hardware is going to do for the brain. We are trying to drastically expand the amount of bandwidth that you have access to, to enable a much richer experience and superhuman capabilities.
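To put rough numbers on that analogy, with the 1 MB image size being my own illustrative assumption:

$$ \frac{1\ \mathrm{MB} \times 8\ \mathrm{bits/byte}}{56{,}000\ \mathrm{bit/s}} = \frac{8{,}000{,}000\ \mathrm{bits}}{56{,}000\ \mathrm{bit/s}} \approx 143\ \mathrm{s} \approx 2.4\ \text{minutes}, $$

versus well under a tenth of a second on a 100 Mbit/s broadband connection.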
Closing Remarks
So, just to close out and recap today: Neuralink is working reliably, has already changed the lives of seven participants, and is making a real impact. Our next milestone is to go to market and scale this technology to thousands of people, as well as to expand functionality beyond movement, to sophisticated robotic arm control, speech, giving sight back, and even getting to the speed of thought. I hope you got a good sample of our technology stack and the challenges we face. And I'd like to hand the mic over to Elon for any closing remarks.
ELON MUSK: Well, we're trying to give you a sense of the depth of talent at Neuralink. There are a lot of really smart people working on a lot of important problems. This is one of the most difficult things to actually succeed in creating, to have it work, work at scale, be reliable, and be available to millions of people at an affordable price. So it's a super hard problem, and we'd love to have you come join us and help solve it. Thank you.