
Podcast: Reading a Fish's Mind to Build Smarter Robots

In this episode, we cover how scientists mapped zebrafish brain activity during swimming and used that map to power a real robotic fish. The robot can hold its position in flowing water using vision alone, a result that revealed new neuron types and showed that biology can teach robots to move smarter.


02 Dec, 2025. 12 minute read



This podcast is sponsored by Mouser Electronics


Episode Notes

(2:20) – Roboticists reverse engineer zebrafish navigation

This episode was brought to you by Mouser, our favorite place to get electronics parts for any project, whether it be a hobby at home or a prototype for work. Click HERE to learn more about the evolution of soft robotics.

Become a founding reader of our newsletter: http://read.thenextbyte.com/


Transcript

What's going on, friends? Hope you guys are excited. Hope you have your swimsuit on, because we are getting wet this episode. This one is all about zebrafish and how we're actually learning from the brain function of zebrafish to make better robots. So, if you're excited, let's dive right in.

What's up friends, this is The Next Byte Podcast where one gentleman and one scholar explore the secret sauce behind cool tech and make it easy to understand.

Farbod: All right friends, we are talking about zebrafish today. But before we get started, we've got to talk about today's sponsor, and that's Mouser Electronics. Now, if you've been rocking with us for a minute, you know we love Mouser, and there's a reason for that. These guys are so well connected with academia, with industry, with the latest and greatest, that they know exactly what's coming down the pipeline, right? And they have such a great ability to predict what's next that they sometimes write articles and tell us what we should be on the lookout for. So, for example, today we're going to be talking about bio-inspired technology and soft robotics. Well, a while back, they wrote an article, which we're going to link in the show notes, about the evolution and the potential of soft robotics. They go all the way back to the 1950s in this article. They bring you up to speed on why this was a thing to begin with, how it's been evolving, what the advantages and disadvantages are, which industries are leveraging it, and where the ball is in terms of academia and the cutting edge of research being done. It's a really good primer if you want to dive deeper into this area. But I really enjoyed reading it because of the historical element, just kind of seeing what came before what we're going to talk about today. Daniel always likes to use the phrase, "If I have seen further, it is by standing on the shoulders of giants." So, it's really cool that Mouser shouts out some of these giants, and you get to see these little snippets of what they've been working on.

Daniel: And in a similar vein, if we have any success in talking about neural networks and soft robotics today, it's because we're standing on the shoulders of Mouser Electronics.

Farbod: Absolutely. And on that note, let's talk about some of the work being done today. So, we're going all the way to EPFL in Switzerland, but then we're also staying stateside, because it's a collaboration between EPFL and Duke University. And like I mentioned, we're talking about zebrafish. And you might be wondering, okay, why? Why zebrafish? It turns out it's because the larvae are transparent. And it's going to make sense in a second why I'm saying that. Essentially, in the field of neuroscience, the way we've been trying to understand neural circuits, which is how the firing of neurons maps together to make the complex system that is our brains, is by studying them in isolation. And that's great. It's provided us with a lot of incredible insight. But it also means we're mapping this activity away from the environment and away from the body it evolved in, right? Let's relate it back to human beings for a second. Imagine you study my brain without the context of my surroundings or my physical being and how that maps to everything else. You get a little bit of information, but you're probably missing like 90% of the context, which could be really important if, say, you wanted to mimic me somehow. So, they had this idea that if we could somehow capture that surrounding element, it would help us better understand how something like a zebrafish navigates water, and how we could potentially take that knowledge and put it into a robotic system for unmanned navigation. Now, earlier I said they focused on zebrafish because they're transparent, and here's where that comes into play. Because the larvae are transparent, we can study the little neurons firing off live. We don't need to cut them open; you can use something like calcium imaging to scan the neural activity of these fish live without cutting into them. So that's where it all kicked off.

Daniel: And that only is possible because they're transparent.

Farbod: Correct.

Daniel: Because they're see-through, they're easy to study; you can use special imaging. In my mind, I had it as a two-photon microscope. I don't know if that's the same as the calcium imaging or if we were both wrong, probably you're right and I'm wrong. But they use these special types of scopes where they can see neural activity firing inside the brains of these zebrafish because they're see-through. So, this allows them to study the full picture, the brain and the body and the environment all in one, as opposed to a controlled laboratory environment where you can only look at the neurons in a Petri dish or something like that. You can actually see the organism, study its brain, and see the stimulus that is causing certain changes in the brain, all at once. You look at the full system at the same time. And that's a unique way of studying, specifically of trying to understand, how neural networks work, how brains work. There's a lot of work being done to make computers a lot smarter, and they think a lot of the next series of breakthroughs are going to come from being bio-inspired. And if you're going to be bio-inspired, what better way to learn how brains work inside organisms like fish than to watch the full system at play, as opposed to just looking at one aspect of it and trying to infer how the rest of it works?

Farbod: That's true. And the way I sum it up for myself is: prior work was all about visualizing the stimuli and the neural responses to those stimuli, right? So, something comes into the eyes, the lenses pick it up, and what kind of neural activity do you see? What these folks want to capture is the stimuli coming in, the neuron activation happening, the body mechanics, and then how the fish maintains its position, its interactions with the environment, et cetera. Those are the layers they're trying to add, right? And what all of that funneled into was this idea of a simulated zebrafish. So, you have this neural activity; how do you take a quote-unquote perfect simulation of a zebrafish, based on the mapping of the neural architecture you've studied in situ, and create this pond-like environment, with fluid mechanics so the water is flowing, plus the mechanics of the zebrafish's body, to see how it actually behaves? Can you reverse engineer it? Does the mapping actually make sense? And the way they fine-tuned the system to make sure it maps to reality is that, in situ, they were able to monitor how a zebrafish moves as they changed the light reflections underneath it to make it think the water was flowing in different ways. They used that behavior in the simulation to get one-to-one parity. And once they did, this is where it got really interesting. They said, well, we wonder how much of the zebrafish's ability to do X, Y, and Z is because of this one specific aspect. For example, the way the light is coming in, versus the lens depth, versus its vision system as a whole. Something you can't do in a laboratory setting is modify the biology live, one piece at a time. One, that's inhumane; two, it's just incredibly difficult. But what you can do in a simulated environment is have full control over the configuration of the zebrafish and its surroundings to really hone in on what's going on.
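To make that "one-to-one parity" tuning step concrete, here's a minimal sketch of what fitting a simulated fish to recorded behavior can look like. The data points and the one-parameter model below are invented for illustration; the actual EPFL/Duke pipeline fits a far richer neural and fluid model.

```python
# Hypothetical sketch: tune a simulated fish's free parameter so its
# response to moving light patterns matches recorded larva behavior.
# All numbers are invented for illustration.
import numpy as np

pattern_speeds = np.array([0.01, 0.02, 0.04, 0.08])     # stimulus speeds (m/s)
observed_swim  = np.array([0.012, 0.021, 0.043, 0.079]) # recorded fish responses (m/s)

def simulated_swim(gain, speeds):
    return gain * speeds  # toy model: swim speed proportional to perceived flow

# Least-squares fit of the single free parameter for y = gain * x
gain = np.dot(pattern_speeds, observed_swim) / np.dot(pattern_speeds, pattern_speeds)
print(f"fitted gain: {gain:.3f}")
print("residuals:", observed_swim - simulated_swim(gain, pattern_speeds))
```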

Daniel: They've made a simulated brain.

Farbod: Yes.

Daniel: So, in addition to the simulated environment, they have a simulated brain, and they can pull levers in that brain: turn on this part of the brain, turn off that part, turn everything on, turn everything off, and see how the simulated model of the fish's body responds when different aspects of the brain are or aren't working, and also in response to different simulated stimuli. And that's interesting and awesome and cool. But they take it even one step further. They're like, now that we've made this simulated model where we look at the eyes, the brain, and the muscles of a fish swimming in flowing water with different light stimuli, let's also make a physical robot version to test it, which is really cool.
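For the technically inclined, here's a minimal sketch of that "pull levers in a simulated brain" idea: a toy closed loop where optic flow drives a swim command, and individual pieces of the circuit can be switched off before each trial. The module names, gains, and dynamics are all invented; the real simulator models recorded neural circuits and proper fluid mechanics.

```python
# Toy closed loop: visual stimulus -> circuit -> swim command -> fluid drift.
# Module names and dynamics are hypothetical, for illustration only.
FLOW, DT, STEPS = 0.05, 0.01, 2000   # current speed (m/s), timestep (s), steps

def drift(modules):
    """Total downstream drift after one trial with the given brain modules on."""
    position, thrust = 0.0, 0.0
    for _ in range(STEPS):
        velocity = thrust - FLOW           # crude stand-in for body + fluid mechanics
        position += velocity * DT
        if "vision" in modules:            # optic flow: perceived slip of the ground
            slip = -velocity
            gain = 5.0 if "stabilization" in modules else 0.5
            thrust += gain * slip * DT     # turn perceived slip into swimming effort
    return position

for config in [{"vision", "stabilization"}, {"vision"}, set()]:
    label = "+".join(sorted(config)) or "everything off"
    print(f"{label:22s} drift = {drift(config):+.3f} m")
```

With the full toy circuit on, the fish quickly cancels the current and barely drifts; switch modules off and the drift grows, which is the kind of comparison the "levers" make possible.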

Farbod: Agreed. And before we move further, one thing I did want to note: yes, they're able to pull all those levers, which is interesting, but the real depth of the sauce here comes from the fact that we have this neural mapping without fully understanding the intricacies and the dependencies the neurons have with each other. So, what the simulation allows the researchers to ask is: what if we break this part of the path? What happens next? And by being able to modify those things, it allows us to better understand that neural path to begin with. I'm trying not to hype it up too much or undersell it, I'm trying to find my happy medium here, but that's what's really exciting about this. And you're right. They did all of this and were like, okay, can we actually make something useful from this? Can we validate, in the real world, what we think we've recreated in a simulated environment? And what that amounted to was an 80-centimeter-long robot. It has the same two inputs for the visual stimuli, basically two cameras, and then it has a body that is supposed to mimic the muscles of the zebrafish. They put it in the water and said, let's see how it behaves with this neural mapping turned on versus off. I don't know if you want to dive into that; you were pretty excited about the robotic part.
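As a rough illustration of the "body that mimics the muscles" part: a common pattern in robotic fish, sketched hypothetically here rather than taken from the paper, is to turn a scalar swim command into an oscillating tail beat, modulating its amplitude and frequency.

```python
# Hypothetical sketch: map a scalar "swim harder" command to a tail beat.
# Parameters are illustrative, not from the actual robot.
import math

def tail_angle(t, command, base_freq=2.0, max_amp=0.4):
    """Tail deflection (radians) at time t for a swim command in [0, 1]."""
    amp = max_amp * command              # swim harder -> bigger tail beats
    freq = base_freq * (0.5 + command)   # and faster ones
    return amp * math.sin(2 * math.pi * freq * t)

for t in (0.0, 0.1, 0.2, 0.3):
    print(f"t={t:.1f}s  angle={tail_angle(t, command=0.8):+.3f} rad")
```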

Daniel: No, I think it's super interesting, specifically the aspect of showing that vision alone is enough to keep a fish steady in a flowing stream, first in simulated form and then in physical form. When I drew the parallel to building the physical robot, that's where it started to click for me. If you were to build a physical robot to control something like a robotic fish underwater, you might start by looking at all the different sensory systems that exist in a fish and asking, do I need to build all of these? Do I need to be able to tell my X, Y, Z location and my orientation underwater? Fish are actually really, really good at that. Do I need to be able to tell the pressure of the water? The temperature of the water? Do I need proprioception to know the shape my body's in? Do I need a sense of touch to feel when I bump into stuff? Fish have a lot of senses; they have a really good sense of smell, too. So, if you're building a robotic fish, you're asking, do I need to build all these sensory systems, or what's the minimum I need to prove I can do controllable motion underwater? Well, they proved in the simulated setting that all they needed was the vision inputs. So, when they built the physical robot, all they needed to build was a vision system that mimics what the brain got in the simulation. And they showed that in physical testing as well, granted with a prototype that's much larger than the actual fish in real life. But they showed that vision alone was enough to keep the fish steady and control it in a stream of water, which is a really interesting parallel, and something you could never prove with real fish. You can't turn off the rest of their sensory systems, leave just their eyeballs, and say, swim around.
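To make "vision alone" concrete: a station-keeping controller really only needs to estimate how the visual scene is slipping past the cameras. Below is a minimal, hypothetical sketch that recovers the image shift between two frames with phase correlation; the episode doesn't describe the robot's actual vision pipeline at this level, so treat this purely as illustration.

```python
# Hypothetical sketch: sense drift from vision alone by estimating how far
# the scene has shifted between two camera frames (phase correlation).
import numpy as np

def image_shift(prev, curr):
    """Estimate the global (dy, dx) pixel shift of curr relative to prev."""
    f = np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap indices past the midpoint into negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ground = rng.random((64, 64))               # textured riverbed seen by the camera
drifted = np.roll(ground, shift=3, axis=1)  # scene slid 3 px: the fish drifted
dy, dx = image_shift(ground, drifted)
print(f"perceived slip: dy={dy}, dx={dx}")  # -> dy=0, dx=3
thrust_correction = -0.1 * dx               # hypothetical gain: swim against the slip
print(f"thrust correction: {thrust_correction:+.2f}")
```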

Farbod: Exactly.

Daniel: It obviously helps us understand the brain and build a better brain model for fish. But I thought it was interesting when they said this also helps us build smarter robots without over-engineering them.

Farbod: And again, I'm going to go back to how incredible it is that you can toggle these things on and off in this simulation. Because, like you're saying, it could be that the way a fish does its path planning and stabilization is actually, say, 70% vision, 20% the feel of the water against its skin, and 10% smell, right? But this model allows you to ask: what if I toggle everything off except vision? Does it still operate well enough for my application? If so, now I know I can shed all those requirements I originally had in mind and focus on this one thing. So, incredible feat by this collaborative duo from EPFL and Duke. And in terms of what all this amounted to, we talked about the model they made. They put this robot into the water, and, we do our best to explain cool stuff via audio, but you've got to check this video out. It's in the article. With the model off, the robot just drifts. And it's not like the robot is off; the robot is on, trying to stay steady against the stream coming at it. But it simply cannot determine the ideal way to swim against it. Now, with the neural network turned on, it's pretty much steady the whole time, just like you would expect, just like the videos they shared of the zebrafish larvae. So, it's incredible to see that we've mimicked that to a great extent, and what that means for aquatic robotic systems.

Daniel: And we talk a lot on this podcast about what's interesting and applicable for the future, and about how exciting it is when researchers collaborate across disciplines for inspiration. This is one awesome application of cross-disciplinary collaboration, right? You've got scientists studying zebrafish and how their brains react to moving sights, and then you've got teams building a simulation and a robot based off of those studies and findings. But one of the things that's awesome is they made these tools free and available to other scientists as well. So hopefully there's an opportunity for someone to take a look at this work and be like, wow, that's awesome, and then build on top of it. Back to what we were talking about: standing on the shoulders of giants. That's how a lot of scientific progress is made. So, I hope someone is listening to this and…

Farbod: Maybe one of our listeners is going to be the one that takes it one more step.

Daniel: Watches the video and checks it out and be like, holy cow, I can do this with that.

Farbod: So, to wrap it all up in a nice little bundle: you have the typical way of studying neural activity, which is just based on the stimuli coming in and the neurons being fired off, and it's usually done in a lab setting without the rest of the context, the environment this neural architecture developed in, or the mechanics of the body. These folks from Duke and EPFL came in, working specifically with zebrafish because they're transparent and we can see those neuron activations, and said, what if we studied everything else? They got a good enough mapping by simulating water movements with changing light patterns underneath the zebrafish. Then they created this simulated environment, called SIMZ fish, where they did a one-to-one match between a simulated fish and a real fish to fine-tune parameters, making sure their simulated fish actually behaves like a real one. Then they started turning these levers up and down: what happens if the vision system is compromised? What happens if X or Y or Z? Can it still keep its position against an oncoming current? Then they took all that knowledge and insight and made an actual robot to test in the water. And just as the simulation had shown them, they had accomplished full bio-inspiration, biomimicry, because their robot with this new neural architecture learned from the zebrafish larvae worked better than the robot without it. That's the TLDR.

Daniel: Fire.

Farbod: Boom, boom, boom. Anything else?

Daniel: No, I think that's all.

Farbod: Cool. That's the pod.

Daniel: Peace.


As always, you can find these and other interesting & impactful engineering articles on Wevolver.com.

To learn more about this show, please visit our shows page. By following the page, you will get automatic updates by email when a new show is published. Be sure to give us a follow and a review on Apple Podcasts, Spotify, or your favorite podcast platform!

--

The Next Byte: We're two engineers on a mission to simplify complex science & technology, making it easy to understand. In each episode of our show, we dive into world-changing tech (such as AI, robotics, 3D printing, IoT, & much more), all while keeping it entertaining & engaging along the way.
