Living Human Brain Cells Play DOOM on a CL1
By Cortical Labs
Summary
Topics Covered
- Pong Took 18 Months, Doom One Week
- Biological Neurons Master Doom
- Interface Solved, Learning Next
- Open Neurons Await Your Challenges
Full Transcript
This is the Cortical Labs CL1.
Inside this box, there are around 200,000 living human neurons on a microchip called a multi-electrode array.
My name is Dr. Alon Lurl, and today I'll be taking you through our journey at Cortical Labs to get living human brain cells to play Doom.
Since launching the CL1 last year, we've been working hard behind the scenes to make sure our Cortical Labs API is as friendly, effective, and enjoyable for
the user as possible.
Today, we're excited to show you how together with one of our collaborators, an independent researcher named Sha, we coded the first working version of Doom
using the Cortical Labs API and running on a CL1.
So we showed that biological neurons could play the game Pong. This was a massive milestone because it demonstrated adaptive, real-time, goal-directed learning. But it took us over 18 months with our original hardware and software to get this to work. And Pong was much simpler. There was a direct relationship. The ball went up, the paddle went up. It was a direct input-output relationship. Doom was much more complex.
Doom is chaos. It's 3D. It has enemies.
It needs to explore its environment, and it's hard. So we've built a neurocomputing system to allow anyone to program this game.
To bridge that gap, we needed to translate the digital world of Doom into the biological language of neurons, which is electricity.
We built the CL API so that any user can interact with cells living in the CL1 with simple Python commands. Using the
API, Sha managed to map the video feed from the game into patterns of electrical stimulation.
When a demon appears on the left of the screen, specific electrodes stimulate the sensory area of the neural culture on the left side. The neurons react to that stimulation. We then listen to their response, the spikes, and interpret that activity as motor commands. If the neurons fire in a specific pattern, the Doom guy shoots. If they fire in another pattern, he moves right, and so on. Once our API was ready, Sha implemented this method with very minor tweaking in less than a week using the Cortical Labs cloud platform.
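The encode/decode loop described above can be sketched in plain Python. Everything here is hypothetical: the electrode ranges, the `stimulate` callback, and the spike-count thresholds are illustrative stand-ins, not the actual Cortical Labs API, whose names and signatures may differ.

```python
# A minimal sketch of the stimulation/readout mapping described in the
# transcript. Electrode assignments and thresholds are invented for
# illustration; the real CL API likely exposes richer primitives.
LEFT_SENSORY = range(0, 8)    # electrodes over the left sensory region
RIGHT_SENSORY = range(8, 16)  # electrodes over the right sensory region
SHOOT_MOTOR = range(16, 24)   # electrodes read as the "shoot" pattern
RIGHT_MOTOR = range(24, 32)   # electrodes read as the "move right" pattern

def encode_frame(demon_x, screen_width, stimulate):
    """Map game state to stimulation: a demon on the left half of the
    screen drives the left sensory electrodes, and vice versa."""
    targets = LEFT_SENSORY if demon_x < screen_width / 2 else RIGHT_SENSORY
    for electrode in targets:
        stimulate(electrode)

def decode_spikes(spike_counts, threshold=5):
    """Interpret recent spike counts per electrode as motor commands."""
    commands = []
    if sum(spike_counts[e] for e in SHOOT_MOTOR) >= threshold:
        commands.append("shoot")
    if sum(spike_counts[e] for e in RIGHT_MOTOR) >= threshold:
        commands.append("move_right")
    return commands
```

In a real closed loop, `encode_frame` would run on every game frame and `decode_spikes` over a short trailing window of recorded activity.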
So, can the cells learn to play Doom?
Yes. They're receiving information. They're sending commands to move their character around. They're able to find enemies, shoot. Is it an esports champion? Absolutely not. Right now, the cells play a lot like a beginner who's never seen a computer. And in all fairness, they haven't. But they show evidence that they can seek out enemies, they can shoot, they can spin. And while they die a lot, they are learning. But
just like a human or an animal learning a task, the cells need feedback. They
need to know what the right actions are and what the wrong actions are and how they can be trained to improve. While
there's still a lot of work left to do on this, the exciting thing is we've solved the interface problem. We have a way to interact with these cells in real
time and train them and shape their behavior to do things even like Doom. So
the interface is solved. Next, we need better learning, better encoding, better rewards, better feedback, and we can do this with the CL1.
>> And explain what we're seeing here?
So this is the interface that basically shows all the vital stats about the CL1, such as the flow rate for the pump, and a little window showing the activity of the chip that is currently inside the CL1.
>> Can you show us the chip?
Yeah, I can show you the chip.
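The feedback step described earlier, telling the cells which actions were right and which were wrong, can be sketched as a closed training loop. The function names and the reward scheme below are hypothetical; in Cortical Labs' published Pong work, predictable stimulation served as a "reward" and unpredictable stimulation as a "punishment", and a similar scheme could plausibly drive Doom training:

```python
import random

def reward(stimulate, electrodes):
    """Hypothetical reward: a short burst of predictable stimulation."""
    for _ in range(10):
        for e in electrodes:
            stimulate(e, amplitude=1.0)

def punish(stimulate, electrodes):
    """Hypothetical punishment: unpredictable, random stimulation."""
    for _ in range(10):
        stimulate(random.choice(electrodes), amplitude=random.random())

def training_step(game, culture, feedback_electrodes):
    """One pass of the closed loop: sense -> act -> feedback.
    `game` and `culture` are illustrative stand-ins for the game state
    and the CL1 culture interface, not real API objects."""
    state = game.observe()        # e.g. demon position on screen
    culture.encode(state)         # stimulation into sensory areas
    action = culture.decode()     # spikes read out of motor areas
    hit = game.apply(action)      # True if the action damaged an enemy
    if hit:
        reward(culture.stimulate, feedback_electrodes)
    else:
        punish(culture.stimulate, feedback_electrodes)
```

The open question the transcript raises, better learning, encoding, and rewards, amounts to tuning exactly these pieces: what feedback signal to deliver, when, and to which electrodes.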
Now, it's time to figure out how do we further enhance the learning to allow these cells in the dish not just to play the game Doom, but to really begin to excel at it and then take on even more
complicated tasks. So from here, we're ready to challenge the community to take a look at this field, take this platform, and see what they can do with it. Work on the problem of different ways of encoding information through stimulation, how best to do that, and how to interpret the response of neurons to the information you've encoded.
>> The neurons' ability to play Doom demonstrates the flexibility of biocomputation.
We're inviting developers and researchers to see what they can build.
The API is open, the Cortical Cloud is open, and the neurons are ready. The only
question left is what will you teach them next?