podcast

Podcast: Can We Trust Self-Driving Cars?

In this episode, we discuss a joint effort between the Laboratory for Information and Decisions Systems and the Institute for Data, Systems, and Society at MIT to tackle the trust issue with autonomous vehicles.

author avatar

09 Aug, 2023. 17 min read

In this episode, we discuss a joint effort between the Laboratory for Information and Decisions Systems and the Institute for Data, Systems, and Society at MIT to tackle the trust issue with autonomous vehicles. This team has proposed a human in the loop solution as an intermediate step to full self-driving vehicles that can allow manufacturers to further develop the technology without compromising safety that they’ve proven via mathematical modeling.


EPISODE NOTES

(0:50) - Exploring new methods for increasing safety and reliability of autonomous vehicles


Transcript

Hey folks, today we're talking about self-driving, and in this world, we hear a lot of noise about autopilot, full self-driving, driver assistance, but today we're asking the tough question is, can we trust self-driving cars with our lives? Spoiler alert, the answer right now is no, but we will tell you what MIT is doing to help bring autonomous vehicles to life anyway and do it really safely, so let's drive right into it.

I'm Daniel, and I'm Farbod. And this is the NextByte Podcast. Every week, we explore interesting and impactful tech and engineering content from Wevolver.com and deliver it to you in bite sized episodes that are easy to understand, regardless of your background. 

Daniel: What's up folks, today we're talking all about autonomous or self-driving vehicles, and how they find certain tasks challenging, and what this team from MIT is doing to solve those problems. I think at a high level, everyone that lives in a society where you've got cars on highways is starting to become aware of this trend of the increasing level of autonomy, that the cars that are on the market are coming out with these features, right? So, we've got Tesla self-driving, autopilot, we've got a bunch of other similar competitors from other OEMs, I think Tesla's is probably the most notable, but I think across the board, there's a level of autonomy that is not only expected from certain car makers, if you're going to go buy a brand-new car, in places like the EU, it's actually regulated that they're required to have certain safety features that are considered autonomous or assisted driving, so it's really, really interesting. We’re at this juncture where technology is starting to intrude on something that we've had for a couple decades now, which is humans driving on the road, the big question is, can we trust self-driving cars with our lives? Are they ready yet? Or should there always be a human behind the wheel? And this team from MIT is trying to toe the line between these two different schools of thought to make sure that we bring safe self-driving to reality.

Farbod: Now, if we can be a little bit more transparent, right? It feels like every time this debate comes up, you have the two extremes, which is one, the technologists who are like, this is amazing. It's going to make our lives so much better. Autonomous vehicles will get into less accidents than people. How can our regulators be so dumb? You know, putting it nicely, that's how it's going. And then you have the regulators are like, no, these things are killing people. We need a billion miles on the road until we're even like slightly comfortable with doing this. They're just being super cautious. And I wouldn't say like we're at a standstill, but it feels like at least on the regulatory front, it doesn't look like wide scale adaptation of autonomous vehicles is coming up anytime soon. Right? Like that's the general feel. And because both of these two parties have very strong opinions, and there doesn't seem to be any middle ground. But what we're seeing from these folks that we're talking about today is a proposal for some sort of middle ground that, I don't know, I feel like it has a lot of logic in it. It's rational, right?

Daniel: Yeah. Well, and obviously, I think we can say that as in many cases where we've got loud, outspoken people at either end of an ideological spectrum, there's probably a lot of people in the middle who are feeling alienated by a lot of that cross talk, you know, people, people firing angry comments at each other on Facebook. And, you know, 80-90% of the people are in the middle saying, you know what, like, maybe the technology is ready in some ways, and maybe it's not ready in others. And one of the people that I think of is actually, I'm going to give a shout out to Missy Cummings. She's a professor at George Mason University, both of our alma mater. She doesn't really get dropped. I think she used to be like the head of the NTHSA or something like, like a big wig in the, in the realm of transportation and regulation. It's specifically focused on autonomous vehicles. She does a great job of kind of going down the middle and saying, yes, there is data showing that in some ways, self-driving is safer than human drivers. There's also other data showing that in other ways, self-driving is more dangerous than human drivers. So, we have to take a really nuanced approach at how we roll this technology out into the world to make sure that we're prioritizing safety above all other things. And I think that's, that's probably the right approach is prioritizing safety. If you have to pick something you don't want to compromise on safety. I appreciate that this team from MIT has taken a very similar approach. So, what they're doing is they're prioritizing safety. They're prioritizing reliability in these tricky situations where autonomous vehicles are known to be faulty or they're known to have some challenges. One of those namely being merging onto highways. That's something that autonomous vehicles don't do a great job of. Cathy Wu and this team from MIT are designing a hybrid system that relies on autonomous vehicles, self-driving cars to do the things that they do best when they're the most safe, like cruising up and down the highway, and then relying on human supervisors that oversee numerous different autonomous vehicles. These human supervisors assist remotely with the more complex tasks like highway merging.

Farbod: Yeah. And I think that's really important to highlight that in this paper, they're like, hey there are things that autonomous vehicles are great at. Then there's like, let's say these like 5 to 10% of scenarios where you're trying to merge onto like an oncoming lane to two-way highway or whatever, not highway, two-way road. And you have instances where you're going on the ramp and are coming off a ramp. And the human touch there is actually far more superior than having an autonomous system try to take over. So, their whole proposal here has been what if we can have a human-in-the-loop system for those scenarios. And Dan, you and I, I feel like we're kind of close to this because of the experience we had on campus with Starship Robotics, these cute little marshmallow-looking robots that would deliver food or Starbucks to you. And just like these autonomous vehicles, they did a great job at traversing our campus, except when it came to crosswalks. And in those instances, you would have a human being get in the driver's seat and navigate the robot through that little tough area.

Daniel: I want to be specific here because I don't want people to misconstrue what you're saying. They're not physically getting in the driver's seat, right? We've got a team of humans remotely assisting these robots when they're in a tricky situation. So, if the robot got stuck or if it's at a crosswalk and it doesn't know like if it's safe to cross, you would have a team of humans sitting remotely somewhere at a computer, I'm guessing with like a screen of all these robots they're helping oversee. And when something needs help, they're able to patch into it and give it remote control and get it out of the sticky situation. And then it goes back to autonomous operation. I think it's very, very similar to what this team from MIT is suggesting, having this hybrid system with a team of humans that can remotely monitor dozens of autonomous vehicles and then jump in and assist during specific situations when human oversight is safer than autonomous operation of the vehicle. Then it returns back to self-driving mode and the car is able to drive itself and the humans aren't required to be driving. So, it's not like you've got, you know, you're sitting in a car and it's supposed to be driving you or it feels like it's driving you, but you've actually got, you know, someone in a call center halfway across the world remote driving you in a simulator the whole time. That's not the case. So, you're relying on the technology in the car to do the bulk of the driving. It's these fringe scenarios where self-driving is known to not be reliable. And in this case, there's probably more scenarios today than there will be at some point 10-15 years in the future when we're able to get better at self-driving vehicle development. But at this point, there are some spots where you might have to have a human jump in. And, you know, instead of having humans waiting to jump in on every single car, this team from MIT says, I think they could get to it. How many was it? 47 people? 47 cars managed by one person.

Farbod: By one person. Correct. Yep. And like you're saying, basically what they've come up with is a system that says, imagine your algorithm, the algorithm driving your car is like your chauffeur. And now they're just tag teaming that responsibility with the human touch in the cases where you need that extra help. Now, you know, off the bat, when I'm hearing this, the first thing that comes to my mind is how many people do you need to do that? Right? Like when I'm thinking about, you know, peak hours, just like for energy consumption, you know, we have peak hours because people are home, they're turning on their A.C., yada, yada, yada. You're going to have the same problem like during rush hour where people are coming home and everybody's on the road and everyone's trying to do something and they're frustrated and yada, yada, yada. So how do you like come up with the right number of people needed to adequately meet this demand that's not going to be so easy to like determine using like a back of the napkin, quick math equation. And they came up with two theorems. The first one is an entire model that's supposed to tell them just that it's supposed to model out demand and how many supervisors they need for every single autonomous vehicle on the road. And the second theorem assists the first theorem by saying, hey, what if we actually lived in a society where autonomous vehicles could talk to each other? And that's, by the way, one of the goals, one of the added benefits of autonomous vehicles, of smart cars. What if they could talk to each other and in doing so made those difficult scenarios like getting on a ramp or off a ramp or merging or whatever, a lot easier because they're nicer to each other than human beings who just want to go home.

Daniel: Well, and think about it, right? Humans also tend to make a lot of mistakes in similar scenarios, right? Merging, switching lanes, et cetera. If you've got autonomous vehicles that are able communicate with one another, an autonomous vehicle on the highway, for instance, could slow down to make space to allow another merging AV to get in safely. That's something that, like you said, humans are very self-interested. They don't always do this. And out of their own self-interest, they end up causing traffic jams because of the mistakes that humans make when they're trying to protect their own self-interest.

Farbod: Absolutely.

Daniel: In addition, we're also, I don't think we're in as, we're not in as high of a degree of control of the vehicle, humans are. I don't think that they are as in high of a degree of control over the vehicle as the autonomous driving systems eventually will be at some point down the road. So, there are also traffic maneuvers that it will be safe for autonomous vehicles to execute that I don't think humans could execute. Just purely from a vehicle dynamics and vehicle control and how fast the human response time is and humans’ tendency to over-correct when there's someone that's super close to them, they slam on the brakes as opposed to letting up, letting off the gas slightly. I think that in the future, we'll see autonomous vehicles merging with each other with much less following distance between cars than we need to have with humans because humans are also flawed in their vehicle operations.

Farbod: Absolutely. I mean, have you ever seen anyone use a zipper lane the right way? No, but that's the promise of an autonomous vehicle future, it'll finally work. To the designer that came up with it, it's happening. But anyways, the second theorem is supposed to tell us what the influence of cooperative autonomous vehicles is on the number of supervisors needed. And what they came up with is that their models show that if 30% of these AVs are cooperative cars, it would lead to that number that you suggested earlier, which is one supervisor per 47 AVs on the road. And it would give us, based on the model, it would give us a 99.999. Yes, that's correct.

Daniel: Six nines total, four after the decimal point.

Farbod: Wait, six nines or five nines. I don't know.

Daniel: Lots of nines.

Farbod: Enough nines for it to be very certain in terms of human safety.

Daniel: We consider this to be an acceptable scenario, having 99.99, maybe 9% of merging cases covered by one human supervisor for every 47 autonomous vehicles. So, it's not one driver per car. It's one driver for every 47 vehicles can cover 99.999999% of these merging cases safely. That's a big deal, I think. When you're talking about the potential demand, the potential labor required to have 30% of these cars on the road managed by a remote human supervisor, the fact we're able to get an almost 50 to one ratio, 50 cars on the road per every one human supervisor, and I imagine that number, actually that ratio gets bigger and bigger. So, one human is able to manage even more autonomous vehicles as the saturation of collaborative, connected cars, let's say, on the highway that are communicating with each other and making maneuvers that make it safe for one another to make their way through the world. I imagine that number only gets higher. Maybe at some point when we've got something like 50 or 60% of cars on the road or these collaborative AVs, maybe someone can manage 100 vehicles or 150 vehicles as opposed to being around one to every 50. But I still think this one to 50 ratio seems really impressive to me at first blush.

Farbod: No, I'm with you. And, it goes back to the problem we mentioned earlier. You have these two very passionate camps, one who wants the technology to be adopted by everybody and out there in the world, and that's the only way it's going to grow. You have the other camp that just wants absolute safety and they're super cautious, and this is showing us through this model that you can have both. You can have very high certainty to the level that our regulatory bodies deem acceptable and still have this new technology out in the world without it being super perfect yet. We know we've refined this process to be good-ish, and we're not very close to it being as perfect as a human driver yet. So, until we get there, this system could be a good compromise. Now, what happens, by the way, if you remove this 30% of cooperative cars? Well, the certainty of those merging cases, which is the main thing they focused on, drops to 99%, which is unacceptable in terms of the standards that they have to meet for this to be like an actual deemable solution, deemed safe by the regulatory bodies.

Daniel: Well, I mean, if you think about it, how many times there are autonomous vehicles merging on the highway, or there could be when we get tons of them on the road. If they aren't able to collaborate with each other, they're saying, one out of every 100 of those cases, there could be an unsafe incident. That's not something that we’d look forward to. My guess here is there is the reliability of humans executing the same task is way higher than 99%. I don't think that there's one crash for every 100 merges on the highway. Otherwise, traffic would be even worse than it already is.

Farbod: I was going to say, if you're like a heavy commuter, you're definitely hitting that 100 mark, like at least once a month, like depending on what your route is. So that's not super safe. And I wouldn't want to get in a car that has that level of safety associated with it. But this number is also interesting, the one in 47, you know, there's roughly 290 million cars in America right now. And if we were to if we would say like tomorrow, all of those cars are going to be AVs, and 30% of them are going to be so advanced that there will be cooperative cars, you're looking at like a little over 6 million people to have to do the supervision for this. And you know, for some context, I think there's like one and a half million people employed by Walmart right now. And that's the biggest employer in the United States. So, this solution cannot scale to the point where it becomes permanent. But it makes sense if you think about where we are right now, which I think 2 million of the cars on the road are EVs. Typically, EVs are the ones that have those, the higher tech, the self-driving and you know, you talked about Tesla earlier. So, if those were to have, let's say, all of them abided by the one in one supervisor per 47 AVs and 30% of them could communicate with each other, then you're looking at I think a little over 4,000 people to monitor them. And that that seems feasible.

Daniel: Sounds reasonable.

Farbod: Yeah. And it makes sense that, you know, from the let's say 2 million-mark until the 10 million-mark, we get to release this algorithm for self-driving, get even more data on it than we could by just having these like Waymo cars that stay in this isolated random town, they keep driving around, needing to hit, I don't know, the 100 million-mark, we get to slowly release it into the public with all these, you know, I think of it like training wheels when you're riding a bicycle for the first time. And in doing so we get to train these algorithms on how to be better in those edge cases. I think one of the, we've done an episode in the past, I can't put my finger on which one it was. But we were talking about how I think University of Michigan is trying to simulate all these like bad scenarios over and over again using VR. And that's how they want to get their algorithm to be good at these cases. Well, what if you could just do it in real life, and you get to see how this, like person in the middle is actually mitigating the situation and adapt as time goes on. Like that sounds like an awesome solution to me.

Daniel: Yeah, I'm with you in it. I'm going to liken it something you said earlier, where it feels like there's like people in opposite camps, they're fighting with each other. But the actual ball, let's say, like, let's say that the actual reality of autonomous vehicle regulation and adoption is like a snowball sitting on a hill. It's reached this little dip where the snowball started rolling down the hill, and then it got stuck in the dip, and it's not going to roll the rest of the way down. I feel like this team from MIT could give the snowball just the push that it needs, right, to get this snowball back rolling down the hill again. It immediately addresses some of the safety and reliability concerns associated with self-driving cars. That will help speed up the adoption of autonomous vehicles that will help us collect more data. That data will help us create more rigorous algorithms that then permanently address the safety and reliability issues and get to a place where we've got lots of cars in the road, lots of them are speaking to each other and collaborating with one another. And also, their algorithms are much better than they used to be. And because of all these factors, we've got a situation where we don't need one autonomous operator for every 47 vehicles. We get to a point where one autonomous, one remote operator can operate a thousand autonomous vehicles. And then we eventually get to this point where I liked your analogy with training wheels, right? We can raise the training wheels slowly, slowly, slowly until we get to the point where we say, hey, it's safe. Go, kid, go back on your own. You don't need training wheels anymore. I feel like that's where we're getting to with autonomous vehicles at some point in the future. This team from MIT being the ones that designed the training wheels that help us get there.

Farbod: Yeah, no, I'm with you, man. With that said, do you want to do a quick recap of everything we talked about? Because there's a lot of content here and we kind of went on some tangents.

Daniel: Yeah, I'll jump into it. So, my question here, can we trust self-driving cars with our lives? I think there's one school of thought that says there should always be a human behind the wheel. There's another school of thought that says like, to hell with it. I'm super excited. Let my car drive me around. Well, this team from MIT is really smart. They're nuanced. They're trying to balance these two schools of thought to bring safe self-driving cars to reality. We mentioned it's just like a kid learning how to ride the bike with the help of a parent. And then eventually getting into training wheels, self-driving cars could use some help with the tricky tasks like merging onto highways and exiting from highways, et cetera. Self-driving car can do simple stuff alone but they have really challenging time doing the tough stuff. And that's where we have safety issues with them right now. So, this team from MIT suggests having human supervisors that oversee numerous different remote autonomous vehicles for the hard tasks. And they also suggest letting the cars talk to each other to lessen the need for human help in the future. This team from MIT, their calculations show that if one in three cars can talk to one another and human supervisors can oversee multiple cars, we can achieve 99.999% reliability during merging with only one person overseeing every 50 cars. So that's a great ratio there of humans to robots. An added benefit there is that traffic can move 33% faster this way. That's a win for everyone who's listening to this right now from a rush hour traffic jam. Overall, this technology isn't in use net, but it is a promising idea from MIT to help make self-driving cars safer, much more reliable and speed up the adoption of this new technology.

Farbod: Absolutely killed it. All the main points right there.

Daniel: Thanks, my dude.

Farbod: I'm pretty stoked about this, man. I feel like, you know, we say this about certain topics, but I feel like this is definitely one of those episodes that we're going to come back to and then give an update on. It's so relevant. Like it's, this is one of the hot topics in the automotive industry. We're excited about it. It could make our lives better. The comment you made about if you're stuck in traffic and listening to this, it's going to resonate with you. It's resonating with me already, and I'm not even in traffic right now. So, I don't know. This felt good. This was, this was one of the fire episodes. I think one of the most fire that we've done recently.

Daniel: I agree, dude. Good spot to wrap it up.

Farbod: Yep. Let's do it. Folks in Azerbaijan, I don't know how you keep doing it, but you keep doing it. We're still trending, baby. Thank you so much and glad that you're rocking with us. I hope we can keep the love going. Please message us. We'd love to hear from you guys what you're thinking. Please, if you haven't done so already, go on Apple podcasts, Spotify, give us a five-star rating. If you don't think we deserve a five-star rating, message us, tweet at us, TikTok at us. I don't know. Smoke signal. We'd love to hear it.

Daniel: We would love to do whatever we can to earn that rating from you so that you go and tell your friends, this is the best podcast. And by the way, I helped them think of this awesome idea that they implemented. That could be you. So, reach out to us. Like, like for both sides, we're pretty much everywhere and we're waiting for it. So let us know what you think.

Farbod: Absolutely. And as always, everyone, thank you so much for listening and we'll catch you in the next one.

Daniel: Peace.

-------

That's all for today The NextByte Podcast is produced by Wevolver, and to learn more about the topics with discussed today visit Wevolver.com.

If you enjoyed this episode, please review and subscribe, via Apple podcasts Spotify or one of your favorite platforms. I'm Farbod and I'm Daniel. Thank you for listening and we'll see you in the next episode.


As always, you can find these and other interesting & impactful engineering articles on Wevolver.com.

To learn more about this show, please visit our shows page. By following the page, you will get automatic updates by email when a new show is published. Be sure to give us a follow and review on Apple podcasts, Spotify, and most of your favorite podcast platforms!

--

The Next Byte: We're two engineers on a mission to simplify complex science & technology, making it easy to understand. In each episode of our show, we dive into world-changing tech (such as AI, robotics, 3D printing, IoT, & much more), all while keeping it entertaining & engaging along the way.

article-newsletter-subscribe-image

The Next Byte Newsletter

Fuel your tech-savvy curiosity with “byte” sized digests of tech breakthroughs.