
Podcast: Reverse Engineering The Brain With AI


In this episode, we talk all about connectomics - the study of mapping the neural connections in animal brains - and how researchers at MIT have started leveraging AI to break through the field's primary bottleneck: brain image acquisition.



This podcast is sponsored by Mouser Electronics


EPISODE NOTES

(2:46) - Using AI To Optimize For Rapid Neural Imaging

This episode was brought to you by Mouser, our favorite place to get electronics parts for any project, whether it be a hobby at home or a prototype for work. Click HERE to learn more about how AI is already helping doctors better detect diseases like cancer!


Transcript

Hey folks, do you ever think, how does my brain actually work? Like, you know, really, how does it work? Well, turns out you're not alone. There's an entire field of study around this called connectomics. Now, not only do they wanna know how it works, they wanna know how they can prevent diseases from affecting it negatively. And researchers from MIT have kinda started to crack the code. So, if that's got you excited, then buckle up and let's get into it.

I'm Daniel, and I'm Farbod. And this is the NextByte Podcast. Every week, we explore interesting and impactful tech and engineering content from Wevolver.com and deliver it to you in bite sized episodes that are easy to understand, regardless of your background. 

Farbod: Alright folks, as you heard, today we're gonna be talking about microscopes. But before we get into today's article, let's talk about today's sponsor, Mouser Electronics. So, you guys know we love Mouser. The reason we love Mouser is that they're one of the world's biggest electronics suppliers. That's already cool in itself. You can get project parts, components for the work that you're doing at work, all that good stuff. But they also have connections with industry partners and academia, and what that means is that they have some pretty great insights about what's going on in the world. They have resources on AI, additive manufacturing, and the one that's gonna fit in here today is about how healthcare and AI are coming together. There's this technical resource that we're gonna link in our show notes that I think is worth checking out. It's a great primer for what we're gonna talk about today: it's about how machine learning can help cure - not cure necessarily, but better diagnose - cancer, to work towards the cure. Human beings, we've been fighting cancer for quite some time, and as time has gone on, we've gotten quite good at it. When we do our diagnosis, human beings are obviously involved. They're looking at the images. They're running tests looking for biomarkers. But obviously, there's an x percent of human error involved here, right? Now, as technology has gotten better over the past couple years, we've been incorporating machine learning into this analysis. It's been helping us reduce the amount of error present in the biological marker testing. And in addition to that, it's allowing us to do imaging much more accurately. So, things that a normal human being might miss in an MRI scan, we're now finding in these scans because of machine learning. I think it's interesting because it's so relevant to what we're gonna be talking about today, which is very imaging focused. But it's also kind of planting that seed of what healthcare is gonna look like over the next 10 years or so, if this is what we've been able to achieve so far. With that said.

Daniel: Let's talk about another fusion, right? Yeah. Of AI technology with medical technology, and learning about connectomics - I think that's the word.

Farbod: Connectomics. And did you know what it was before this?

Daniel: No idea.

Farbod: Okay. Connectomics, the field of study to map the animal brain. And I guess it makes sense, because we've been hearing about how important it is for us to recreate the human brain. I just didn't know that there was an entire field of study around it. And the drive behind doing this is that if we understand how an animal brain, a human brain, works - imagine us doing first-principles thinking on any topic. If we understand the first principles of how the brain works, we better understand diseases and how to cure them. And so on and so forth.

Daniel: And that's something that was interesting to me. I've definitely heard about it in the context you're describing: let's understand how the brain works so we can try and replicate that with artificial intelligence, with neural networks, et cetera. But I think it's completely different here - they're actually studying the physics of brain connections. It's a physical study of the brain's complex network to understand how cognition works, but also, like you're saying, to understand the core fundamental principles of what brain disorders are. How does something like Alzheimer's affect the brain? How does it affect the physics of the connections and cognition inside the brain at the sub-molecular level - like, talking at the electron level?

Farbod: Literally, imagine they're trying to map it as a circuit, like they're looking at synapses, right? If they can understand all of these connections from different regions of the brain, then they can hopefully come up with cures for all the ailments that we're seeing that are so connected to our brains.

Daniel: But trying to understand the brain at any level is obviously very challenging, because there are millions and millions and millions of synapse connections inside every brain. The brain is such a complex network. That's why we haven't yet been able to truly understand, at the physics level, the core root of why Alzheimer's happens and how we can treat it. That's why there are treatments that help, but there hasn't yet been a cure. So, this team from MIT takes a look at what connectomics looks like today, which involves a lot of humans using scanning electron microscopes to study samples of the brain and how the brain's connections work, then simulate that, take notes on it, and understand, again, the basic physics of how the brain's connections work. This is still something that's vastly understudied, and it needs a lot more human resources and a lot more human time for us to gain expertise in how the brain works.

Farbod: And I'm going to take a step back real quick. As we talked about with the Mouser technical resource, we've been able to expedite a lot of our processes with the help of computers, right? So, the question here is: why haven't we been able to apply the same thing to this situation? Kind of a fun fact - we kind of have. With these scanning electron microscopes, they're getting images, right? And computers have been able to help us post-process this stuff, but the process of acquisition has still been manual. Like, you have a researcher that is looking at every one of these connections, trying to get the right image, moving from one point of a region to another, and that's what's taking up a lot of time.

Daniel: And if you were to try and brute force this - say, let's just take as many photos as we can of the brain with this electron microscope, upload them all to a computer, and then let a machine learning algorithm help us process them - there are so, so many millions of connections inside the brain that it's truly nearly impossible. The complexity of all these images, and the total data volume required to collect and process them, far exceeds what we're capable of today. So basically, the bottleneck in neuroscience research is that we need humans: even if we're incorporating machine learning in the image post-processing, we've still needed humans to help us understand what images need to be taken in the first place.

Farbod: There you go.

Daniel: For us to then send to a machine learning model later for post-processing.
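
To put rough numbers on why that brute-force approach breaks down, here's a quick back-of-envelope sketch in Python. All of the imaging parameters are illustrative assumptions (a 1 mm³ tissue block, roughly 4 nm in-plane pixels, 30 nm sections, one byte per voxel), not figures from the MIT team:

```python
# Back-of-envelope: raw data volume for brute-force electron microscopy
# of a small brain tissue block. All parameters are illustrative
# assumptions, not numbers from the MIT work.

tissue_volume_mm3 = 1.0             # a 1 mm^3 block of brain tissue
voxel_x_nm, voxel_y_nm = 4.0, 4.0   # assumed in-plane pixel size
section_nm = 30.0                   # assumed slice (section) thickness
bytes_per_voxel = 1                 # 8-bit grayscale

nm_per_mm = 1e6
voxels = (tissue_volume_mm3 * nm_per_mm**3) / (voxel_x_nm * voxel_y_nm * section_nm)
petabytes = voxels * bytes_per_voxel / 1e15

print(f"~{voxels:.2e} voxels -> ~{petabytes:.1f} PB of raw images")
# ~2.08e+15 voxels -> ~2.1 PB, for a single cubic millimeter
```

Even under these rough assumptions, a single cubic millimeter of tissue lands in the petabyte range - roughly in line with what published EM connectomics datasets report - and a whole brain is many, many times larger than that.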

Farbod: You need to have that context of what regions to look at and why they're important in relation to the other image that you just took, right? That's the big idea here. And I feel like we teased it enough. What did the team at MIT do?

Daniel: Well, the secret sauce they've developed - I think they call it SmartEM, Smart Electron Microscope. I give it probably a 6.7 out of 10 on our naming rating.

Farbod: Standard stuff, you know? Yeah. Could be better, could be worse. 6.7 is pretty solid.

Daniel: Yeah. SmartEM integrates machine learning with electron microscopy. So basically, the machine learning helps control the microscope and helps it understand and focus on the important parts of the image: which parts are worth taking a photo of and studying further, which parts are routine or not interesting, and which parts we can basically discard. This machine learning model runs on a GPU embedded in the microscope's support computer, and it guides the microscope similar to the way the human eye works: focus on the complex areas longer, and for the really simple areas that are well understood, don't spend a lot of time, don't spend a lot of resources on those spots.

Farbod: Yeah, so what made this really click for me - and I love examples, I love analogies - is that one of the researchers was like, when you look at a human face, in the first couple instants, as your mind is processing what a face looks like, you're concentrating on specific focal points, right? Like the eyes, the nose, the mouth, things like that, and then filling in the rest of the image. In kind of a similar fashion, when you're reading a book, as your eye is scanning the page, you're not focusing on all the empty space; you're concentrating on the words, because that's what has value there. They said, in the same way, what this machine learning algorithm is doing is that as it's scanning everything with the microscope, it's finding regions of interest and then properly assessing whether or not to linger longer, scan better, or try to go deeper than what it's currently looking at. And it has that level of contextual thinking of, like, this region is important, this one is not, get this data, stitch it together. And that's what's helping them expedite this acquisition process.

Daniel: Well, I love that example, right? This method basically mirrors how the human eye focuses on the important parts of a visual scene. I wasn't sure about this, so I just looked it up to make sure I was correct, but I think your focus - think of it as a cone protruding from your eye - is only about one sixtieth of your eye's total field of view. So, your eye is actually only looking at one sixtieth or less of the total visual image at any given time. The rest is peripheral image where your brain either fills in the blanks or completely discards that information. So, imagine you're trying to analyze a scene around you. You can only focus on one sixtieth of the image at a time, at best. Your brain does quick processing to understand: which are the complex parts of the image that help me focus and understand what's going on? Which parts can I scan over quickly and almost discard, because I don't need that information to understand the scene? This SmartEM algorithm that this team from MIT has developed has allowed a microscope trying to understand things at the electron microscope level, at the very, very small molecular level…

Farbod: At the nanometer scale, yeah.

Daniel: Trying to understand what's going on inside a brain network. They've taught an AI to help focus the microscope in a very similar manner. So, it only focuses on the really complex parts, the parts that are worth studying, while the parts that are routine, known, and simple don't get as much time and effort dedicated to them. The resources are focused on the parts of the image that need to be studied. Very similar, like you're saying: if you're trying to recognize someone, your eyes focus very quickly on specific features of the face to identify who it is, as opposed to trying to understand the entire scene of what's going on.

Farbod: Yeah, and I wanna emphasize again: once it finds regions of interest, it slows down the scanning so that it can get the best possible image before moving on. So, not only are you expediting this, but you're not losing quality in your analysis either. It really does feel like you have a researcher's mind deciding what to focus on, with the speed of computing applied as a layer on this acquisition process.
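
If you want to picture what that "linger on the interesting parts" logic might look like, here's a minimal sketch. The tiling scheme, the variance-based complexity() stand-in, the threshold, and the dwell times are all hypothetical simplifications; the real SmartEM system runs a trained neural network on the GPU in the microscope's support computer:

```python
import numpy as np

def complexity(tile: np.ndarray) -> float:
    """Stand-in for a learned 'is this region interesting?' score.

    Local image variance is a cheap proxy here; the real system
    would run a trained neural network instead.
    """
    return float(tile.std())

def plan_scan(fast_pass: np.ndarray, tile: int = 64,
              threshold: float = 20.0,
              skim_dwell_us: float = 0.1, linger_dwell_us: float = 1.0):
    """Assign each tile a dwell time: linger on complex regions,
    skim the simple ones. Returns a list of ((y, x), dwell) pairs."""
    plan = []
    h, w = fast_pass.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = fast_pass[y:y + tile, x:x + tile]
            dwell = linger_dwell_us if complexity(patch) > threshold else skim_dwell_us
            plan.append(((y, x), dwell))
    return plan

# Toy first pass: a mostly flat image with one "busy" patch that
# stands in for a region dense with synapses.
rng = np.random.default_rng(0)
fast_pass = np.full((256, 256), 128.0, dtype=np.float32)
fast_pass[64:192, 64:192] += rng.normal(0.0, 40.0, (128, 128))

plan = plan_scan(fast_pass)
linger = sum(1 for _, d in plan if d == 1.0)
print(f"{linger}/{len(plan)} tiles get the slow, high-quality rescan")
# 4/16 tiles get the slow, high-quality rescan
```

The design point is the same one described above: a cheap, fast first pass decides where the slow, expensive, high-quality scanning budget actually goes.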

Daniel: Well, and I want to place a caveat on that, because I think it's even better, even faster, than a human researcher doing it. This is kind of taking us out of the secret sauce and onto the "so what," right? The impact here. This team from MIT says they can dramatically reduce the time and cost for this detailed brain imaging. As an example, they said: if you were trying to map 100,000 neurons - which to me actually doesn't sound like that much, knowing that the brain has billions of them - using current methods, with a standard research team and a standard electron microscope, it would take over a decade to achieve that task. They say if you take four of these SmartEM microscopes they developed, you can achieve this mapping of 100,000 neurons in under three months, as opposed to taking a decade. That's speeding things up by a factor of about 40, which is incredible. Honestly, I was expecting it to be something more like 400, but I guess I've been desensitized to how impressive these machine learning developments are. And consider the cost: this team of four microscopes, they said, would come out to be just under $4 million. That $4 million investment seems much more manageable than funding an entire research team for an entire decade - and then you don't get the answer for 10 years. Think of the opportunity cost of what that research team could have been focusing on over that 10-year period, versus putting something in an automated system that runs for three months, the answer gets spit out, and your research team can focus on additional developments in the meantime. It seems like it really will accelerate the pace of advancement in neuroscience and open up possibilities for understanding the brain's architecture and the physical basis of cognitive function, so we can start to develop cures for these cognitive diseases a lot faster.
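
That 40x figure is easy to sanity-check from the numbers quoted, treating "over a decade" as 10 years:

```python
# Sanity-check the quoted speedup, using only figures from the episode.
neurons = 100_000
conventional_months = 10 * 12   # "over a decade" ~= 10 years
smartem_months = 3              # four SmartEM microscopes, per the team

print(f"~{conventional_months / smartem_months:.0f}x faster")   # ~40x
print(f"{neurons / conventional_months:,.0f} vs "
      f"{neurons / smartem_months:,.0f} neurons/month")
# 833 vs 33,333 neurons/month
```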

Farbod: I mean, consider the raw outcome of being able to understand the brain's mapping, right? It gives us the foundational knowledge to then use as a springboard for medical development, improving computing, things like that. If that's gonna take 10 years, that means everything else that's gonna come from it is gonna be delayed by at least 10 years. If you can do it in three months, then imagine what you can accomplish in the nine years and nine months that you've now saved, right? There's nothing more valuable than time, in terms of the resources available to us. $4 million is like pocket change in comparison here. Of course, it's still a lot of money, but I'm actually going to counter what you were saying. I mean, it's impressive - 40x is a lot - but you and I were talking before we started shooting that we were hoping for something higher. My stance is that if they've achieved 40x now, with their first go at this AI algorithm assisting the acquisition portion of the process, then this can just get better as time goes on, right? Just think about how much data analysis has gotten better over the years, how exponentially we're improving it. Now imagine that same curve applied to the acquisition process here. So, I think it's just the beginning. I think these are the baby steps that we're taking to get to being able to do a three-month process in three days.

Daniel: No, I agree. And it's definitely a huge, meaningful improvement - we're used to scientific research saying we made things 10% faster, not 40 times faster. So, we definitely have to appreciate that. And also, like I mentioned, the opportunity cost of having an AI system be able to do this pretty much autonomously, versus having a whole team of humans very intensively involved in the entire study - that's a huge plus. In context, though: there are 86 billion neurons in the brain. Even at this pace of development, if you were to try to analyze all the neurons in the brain, it would still take 24,000 years for this new AI-equipped setup to take one brain and understand and map all the neurons inside it. So, if you're a neuroscience major, know that you're not becoming obsolete as a result of this development. There's still a lot of input needed as to where we should focus, right? Which 100,000 neurons should we study over the next three months? Because there are another 86 billion of them in the brain just lying there. How do we know which ones are the focus area? And obviously this will lead to further developments to help us hone in on which are the important ones to study and where we should focus our efforts. Like you said, I don't think this is the end of neuroscience. I honestly think this is just the beginning of us starting to understand the core physics of what happens in the brain.

Farbod: Yeah, and it's another collaborative tool. We hear so many sensationalized headlines about how AI is going to end the world, how it's going to kill all jobs. We've been covering a lot of topics on this podcast over the past three years, and a lot of this technology seems to be more complementary to the industries and roles it's getting adopted into. So, kind of good news, in my opinion. It's going to help neuroscientists do what they do even better. And I feel like it's worth doing a little recap.

Daniel: Yeah, well, before we wrap this up, I just want to mention one theme this reminds me of. I mean, it's super sensationalized, right? Elon Musk is always in the headlines. I had the opportunity of working at Tesla, one of Elon's companies, for a little while. And one of the things that I appreciated, despite all his personality and his antics, et cetera, is that he mentioned: if we're going to make any meaningful advancement in the physical realm - say we're building cars and we want to make cars a step change better than cars of the past - we have to understand the basic physics of the materials involved in building that car. So, I had the opportunity to be on the materials engineering team, and it was awesome to see us pioneering new materials and then using those materials one week later in building a car that was going to be sold to millions and millions of people around the world. That was really, really exciting. But what it required first was a fundamental understanding of the molecular physics of those materials, to understand how we can improve them and how they can be used in the future. I think this team from MIT is just now opening up that same level of understanding. We've had mechanics of materials for decades and decades; we're just now starting to have a chance of understanding the core physics of the neural connections inside the brain. I can't wait to see all the step changes and advancements that shake out of this. To me, this is very similar to our species starting to understand material science for the first time. Now we're just starting to be able to understand the core physics of what's going on in the brain. I think it's really exciting, and we'll see tons and tons of advancement come out of it, like a huge tidal wave in the future - maybe started by this team at MIT. They're starting the snowball at the top of the hill, and it'll roll and get faster and faster and faster.

Farbod: I totally agree with you. Like I said, I feel like it's the foundational knowledge for all the other side efforts that are gonna require this. Like if it's medicine, if it's computing, whatever.

Daniel: Well, and one other interesting thing they said is that they think they can apply this to other realms that need an electron microscope as well - so, not just studying the brain. They think they can retrain this algorithm to help do other things in clinical pathology, or in the study of other complex biological systems. So maybe this helps us unlock a fundamental understanding of the physics of medicine as a whole, or biology as a whole, which would definitely be really interesting. Although I think the brain is probably the most interesting of any of those systems to study.

Farbod: I would agree with you, yeah. All right, so to do a quick recap.

Daniel: Yes, sir.

Farbod: Connectomics is the study of mapping animal brains. And the reason it's been such a hot topic is because if we can reverse engineer the brain, then we can utilize the same structure for computing. But more importantly, if we understand how it works, then we can cure the diseases of the brain even better. Now, our scientists have been working on this for quite some time, but unfortunately, even though we've gotten better at analyzing images of the brain, the part of acquiring those images is still the bottleneck. Now, these MIT researchers have come in to kind of relieve us of that. They're using AI to do the imaging portion of this process by understanding the context of which regions of the brain are most important to linger on and which parts it can kind of discard. Now, to make that more relatable: you as a human being, when you're looking at someone's face, your eyes aren't seeing the whole face. They're focusing on focal points like the nose, the eyes, and the mouth, and filling in everything else. Same way when you're reading a book: when you look at a page, you're focusing on the words, not the blank space. This AI is doing the same, and in doing so, it's taking an imaging process that would take 10 years and reducing it to only about three months.

Daniel: Nailed it.

Farbod: I try, I try, what can I say? Now, before we wrap up the episode, we had a very sweet, kind, awesome shout out from a fan. Right?

Daniel: Bruno Marquier. Yeah. I wanna shout him out, cause I've got the post open to read it. We saw this post this morning, honestly, right before we started recording.

Farbod: What a great start to the day. It just gave us both a great smile.

Daniel: I was bleary-eyed, had just woken up, when I got this notification. I took a screenshot of the message and sent it to Farbod as I'm sitting on the toilet, just trying to wake up in the morning. I'm like, Bruno, you made my day. So, I asked Bruno if it's okay if we share his post on the podcast, and he said yes, so I want to share it. But first, before we do that: Bruno Marquier lives in Grenoble, France. That's meaningful for us, because the goal of this podcast has been to reach people around the world. We're both based in the Washington DC area in the United States, so it's incredible that we were able to make a friend in France - and a friend like Bruno, who has 20 years of experience and is a leading AI architect. It's incredible to have a conversation about AI today and then talk about an awesome, glowing review from a friend like Bruno, who's leading the field in AI architecture. He said he loves the format. He said the podcast is a keeper - a vulgarization of scientific progress across various application fields. We had to look up what vulgarization means, which tells you how smart Bruno is. Apparently, it's taking very complex technical topics and boiling them down into a format that everyone can understand.

Farbod: And that's the thesis of this entire podcast. Yeah.

Daniel: So, we're taking that word, vulgarization, and tucking it in our back pocket for later. But thank you, Bruno, for saying that besides making him learn something new each time, the hosts are excellent, the conversation format is brilliant, and in each episode, they demonstrate the power of repetition, how reframing helps assimilate content, how to structure thoughts, and the effectiveness of storytelling. Bruno, those are all the goals we set out to hit every single week. He even gave a link to a couple of his favorite episodes. We appreciate that, Bruno. And one piece of feedback he gave: he was hoping that in the future we could bring people on and interview them, as opposed to just discussing their research papers.

Farbod: We got some good news.

Daniel: Yeah, we've got some great news, Bruno. I can't attribute all of the credit to you, but our next episode coming out will be an interview episode, with a leading CEO and researcher in the field where robotics meets AI meets manufacturing. So, we're really, really excited to share that.

Farbod: Perfect timing, Bruno.

Daniel: Perfect timing, Bruno.

Farbod: Truly couldn't have asked for better timing. And we say it all the time, but it's feedback like this, it's engagement like this from fans. It's making friends like this. That makes this podcast so worthwhile for us. It started, all of this started over three years ago because we couldn't find a podcast like this. We were hoping to connect with other like-minded individuals that enjoyed STEM and tech and science just like we do. So yeah, I don't know. It's just super rewarding to get messages like this. Thank you again.

Daniel: Yeah, thank you, Bruno. And I think that's where we can wrap up our episode.

Farbod: Yeah. Folks, thank you for listening and as always, we'll catch you in the next one.

Daniel: Peace.


As always, you can find these and other interesting & impactful engineering articles on Wevolver.com.

To learn more about this show, please visit our shows page. By following the page, you will get automatic updates by email when a new show is published. Be sure to give us a follow and review on Apple podcasts, Spotify, and most of your favorite podcast platforms!

--

The Next Byte: We're two engineers on a mission to simplify complex science & technology, making it easy to understand. In each episode of our show, we dive into world-changing tech (such as AI, robotics, 3D printing, IoT, & much more), all while keeping it entertaining & engaging along the way.


The Next Byte Newsletter

Fuel your tech-savvy curiosity with “byte” sized digests of tech breakthroughs.