Developing an Autonomous Racing Car: Interview with Roborace's Chief Engineer

Coming from a traditional motorsports background, Alan Cocks has led Roborace’s engineering team to overcome the challenge of building the world’s fastest autonomous racing vehicles. Wevolver CEO Bram Geenen asked him all about standing at the crossroads between AI and very fast cars.


20 Mar, 2020

Image: Roborace and Daniel Simon


This interview was held on November 19th, 2019, as part of Wevolver's research for the 2020 Autonomous Vehicle Technology Report. In the report Roborace is featured as a case study of cutting-edge autonomous vehicle technology.

About Roborace 

Roborace is a future media and entertainment company showcasing the very latest technology in autonomous racing vehicles. By creating safe yet extreme environments in which to push the technology to its limits, Roborace is advancing these new technologies at a much faster rate than is possible on public roads or purely in simulation.

Roborace was announced at the end of 2015 and the Robocar was launched in February 2016. Since then they have performed autonomous demonstrations in cities around the world, including New York, Montreal, Buenos Aires, Paris, Marrakech, Berlin and Rome, using their autonomous racecar, the Robocar, which currently holds the Guinness World Record for the fastest autonomous vehicle. Next to the Robocar, they’ve developed a second vehicle platform, the DevBot 2.0.

The hardware is managed centrally and is the same for each team, meaning that the only differentiator is the AI driver software the teams develop for the competition. DevBot 2.0 is all-electric and runs on the Nvidia DRIVE platform when in autonomous mode. It can be driven by a human or an AI driver, so that users can explore the relationship between man and machine for assisted and autonomous technologies.

The DevBot is used in the Season Alpha programme, their debut competition, in which multiple teams are pitted against one another in head-to-head races. It launched with three teams in 2019 and will evolve into Season Beta in 2020, exploring new formats and adding two more teams.

About Alan Cocks

Until the beginning of 2020, Alan was the Chief Engineer of Roborace. In that role he was responsible for the operational engineering of all of the Roborace programmes, leading a team of engineers to build, prepare and run autonomous race cars at a variety of events, tests and challenges. We asked him how he ended up in the position, how his role changed coming from a Formula 1 background, and about the different technology that comes into play when building an autonomous racing vehicle.

 


DevBots during Season Alpha. Image: Roborace

The Interview

Bram: You’ve worked in racing teams before. Old school racing teams, with people in cars. Could you tell me about that background and how you ended up becoming chief engineer at Roborace?

Alan: I’ve always been interested in cars, right from being a baby. I ended up in motorsports by accident. My friend was working in the road car industry and I was lucky to get an industrial placement at Honda Formula 1 when I was at university. I sort of fell into motorsports from there. I liked it and I’ve been able to pursue a career off the back of that.

My background for most of my career has been in single seater racing in Formula 3, Formula 1 and Formula E. I worked up through Formula Junior and ended up in Formula 1 as a performance engineer. I then left Formula 1 and went to Formula E when it was still quite new. I’ve never been particularly interested in internal combustion engines, but electric motorsports and electric cars really excited me.

It kind of made sense to move towards Formula E. It feels like the future. I spent a few years in performance and race engineering in Formula E until an opportunity came up at Roborace. I knew a few of the guys there from past lives in and around Formula E, at the time when Roborace was doing a lot of demos alongside Formula E. It’s just a fantastic, interesting, unique project. It was too good an opportunity to pass up when you get to do something so different. It was the opportunity to go and be able to set world records. It’s hard to turn that kind of opportunity down.

Bram: How would you describe your current role at Roborace?

Alan: I’m the chief engineer. I’m responsible for all of the engineers within Roborace and all of the track operations from an engineering perspective. I manage everything from organizing which cars need to be available, to when to maintain the cars, to how to fit in a specific new sensor. On the autonomous side I’m responsible for planning test items and the settings that we’re going to use on the autonomous driver. Furthermore, I’m responsible for the traditional running of race cars on the mechanical side of things: what setup we’re going to run, how we build the cars, how we maintain them, and so on.

Bram: That is a lot.

Alan: It’s a fairly full on role. There’s quite a lot that it covers.

Bram: That sounds like it. How many cars do you actually have?

Alan: We have eight cars currently built. Only seven of those are running. That number consists of two running Robocars and a fleet of four DevBot 2.0s. We also have a test DevBot 2.0, which is specced slightly differently; we use it to develop and test anything new. We also have one of the original DevBots, which was the car without any bodywork. The original Robocar is no longer running; we use it as a show car – it’s currently in the Science Museum in London as the focus of their autonomous car exhibit.


|  | Robocar | DevBot 2.0 |
| --- | --- | --- |
| Perception sensors | LIDAR, ultrasonic sensors, front radar, cameras (5x), military-spec GPS (with antennas at both ends of the car for heading) | Same sensor suite as the Robocar |
| Battery type | Custom design | Custom design |
| Battery capacity | 52 kWh | 36 kWh |
| Peak voltage | 729 V | 725 V |
| Motor | 4x Integral Powertrain CRB, 135 kW each (one per wheel) | 2x Integral Powertrain CRB, 135 kW each |
| Total power | 540 kW | 270 kW |
| Top speed (achieved) | 300 kph | 217 kph* |
| Range | 15-20 mins** | 15 mins** |

*On track; note that no specific top speed runs have been attempted.
**At full racing performance, similar to a 1st-generation Formula E car.


Bram: That original DevBot might be one that is going to be very special and important 10 to 20 years from now. It started this entire adventure.

Alan: Yes, exactly. That was the idea of keeping one of them built. Just in case it is ever needed in a museum or something. It’s the first ever autonomous race car. It’s kind of where it all started. It might not be the most beautiful but it’s significant in other ways.

Bram: How different is your role now from what it was when you were still in Formula E or Formula 1?

Alan: The main difference, obviously, is not having to deal with race drivers. Most of my time in Formula E was spent preparing documentation to show to race drivers. I spent a lot of my time translating the engineering work into something we could give to the racing driver.

You can do as much engineering work as you want and find every last tenth of a second of performance, but if the racing driver doesn’t understand it, or act upon that information, then it’s a waste of time.

A lot of effort goes into communicating all those engineering findings to someone who isn’t fundamentally an engineer, and into trying to educate them in how to make the most out of their car. In an autonomous environment we obviously don’t have to educate the driver. Instead we directly input our engineering results into the software. It’s a simplified process in many ways because you’re taking out the step where the knowledge has to be translated. The software has been written by engineers, so the inputs it needs are similar to the results you get from your analysis. You don’t have to translate them into a more human-friendly, communicable way of thinking. You can leave it in engineering speak, if that makes sense.

Bram: Yes, it does. Maybe you can expand a little bit on that. You do your engineering. You get some of the results. How does that go into the algorithms or the code that the software team writes?

Alan: It depends entirely on the project and what we do. It’s probably worth explaining how our software works first. In Season Alpha we provide what we call the Roborace base layer. This is an entirely internal Autonomous Driving System (ADS). The software is designed to be a starting point, a basis for the various teams and projects to use and build upon.

We did a project with Volkswagen, who used the entire base layer but then added their own object detection and live path planning (object avoidance code). Similarly, the same base layer has been used by various Season Alpha university-based teams in different ways. Some of them modify the tracking algorithms, some of them modify the Light Detection and Ranging (LIDAR) algorithms. They basically use the same base code and add their own features to it. Every project with every team has effectively had different software. The details of what we put in depend entirely on who we are working with and what they have developed.

Bram: Do you provide an interface for them to interact with your base layer, like an Application Programming Interface (API), or do you provide raw code?

Alan: Both, actually. All of our code is open sourced and available to them, but we also provide an API just to simplify things.
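To make that division of labour concrete, below is a minimal sketch of how a team-supplied module might plug into a shared base layer of the kind Alan describes. The class and function names (`BaseLayer`, `register_object_detector`, `my_team_detector`) are illustrative assumptions, not Roborace's actual API.

```python
# Illustrative sketch only: the names below are assumptions, not Roborace's API.
# It shows the general pattern described in the interview: a shared base layer
# runs the driving stack, and a team plugs its own code into one stage
# (here, object detection).

from typing import Callable, List, Optional, Tuple

Detection = Tuple[float, float, float]  # (x, y, radius) of a detected obstacle, in metres


class BaseLayer:
    """Stand-in for a shared autonomous-driving base layer."""

    def __init__(self) -> None:
        self._detector: Optional[Callable[[bytes], List[Detection]]] = None

    def register_object_detector(self, detector: Callable[[bytes], List[Detection]]) -> None:
        # A team overrides just this stage; localisation, control, etc. stay shared.
        self._detector = detector

    def step(self, lidar_frame: bytes) -> List[Detection]:
        # In a real stack the detections would feed into path planning and control.
        return self._detector(lidar_frame) if self._detector else []


def my_team_detector(lidar_frame: bytes) -> List[Detection]:
    # A team's own perception code would go here.
    return [(12.0, -1.5, 0.4)]  # pretend we saw a cone 12 m ahead, slightly left


base = BaseLayer()
base.register_object_detector(my_team_detector)
print(base.step(b"raw-lidar-frame"))
```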

Bram: In terms of software and approach, is the fact that you have this base layer, on top of which you can add features, a significant difference from what a standard autonomous vehicle setup would be?

Alan: I think our base layer is probably quite unique. Not necessarily in terms of technology, but in terms of its feature sets. That’s because there isn’t anyone else aiming their autonomous software at driving on circuit. 

Alan: We have various sorts of functionality for learning the maximum settings that can be used, and for learning the performance boundaries live on the track. We also have features within our base layer to deal with overtaking, with track boundaries and with various racing scenarios, like yellow flags. It doesn’t need to know a lot about traffic sign recognition or speed limits, but more about the limits of car performance and the meaning of yellow and chequered flags. Obviously a normal self-driving car doesn’t need to know about overtaking on a race track. Our software is unique because of its use case rather than because it is more advanced.

Work on a prototype of the Robocar. Image: Roborace

Bram: Apart from the software, what components does a racing car need to have on top of a standard road car? I’m thinking of sensors, processors, or other controls that the DevBot has compared to a normal autonomous vehicle.

Alan: We actually have the same AI suite for both the processing and the sensors. The perception sensors are the same across both DevBot and Robocar. The rest of the car is actually fairly similar to a normal road car, both mechanically and electronically. It’s only that we have to add actuators to the steering and the brakes, because if it’s a human driving, you’re doing that yourself.

We do have two real-time computers. First, the Nvidia DRIVE PX2, which is a fairly common real-time computer for autonomous driving. A lot of companies use them for their autonomous vehicles and trials. One of the reasons that we use it is that it is almost an industry standard in the autonomous car industry.

Next to that we have something called Speedgoat. From a motorsport point of view it is used quite commonly on simulators where they need real-time processing. You wouldn’t normally use it live on a car though.

We have both of those. They each have slightly different qualities and areas they’re better at. We use them for different tasks but fundamentally they’re very similar. Next, we have a range of perception sensors, like a military-grade GPS. From the base station we know where the car is to within a couple of millimetres, less than a centimetre normally. If the accuracy is worse than two centimetres then we consider something to be wrong: we stop running and figure out what has happened. There are two huge antennas, one at each end of the car, so we can see exactly which way the car is facing as well.
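As a rough illustration of that accuracy check, here is a minimal sketch of a watchdog that flags a run when the reported positioning accuracy drifts beyond roughly two centimetres. The two-centimetre threshold comes from the interview; the data structure and names are assumptions.

```python
# Minimal sketch of the accuracy check described above. The ~2 cm threshold is
# from the interview; the field and function names are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class GnssFix:
    x_m: float               # position east of the base station, metres
    y_m: float               # position north of the base station, metres
    heading_deg: float       # derived from the antennas at either end of the car
    est_accuracy_m: float    # receiver's own accuracy estimate, metres


MAX_ACCURACY_M = 0.02  # about 2 cm: beyond this, assume something is wrong


def position_is_trustworthy(fix: GnssFix) -> bool:
    """Return False if the run should be stopped and investigated."""
    return fix.est_accuracy_m <= MAX_ACCURACY_M


fix = GnssFix(x_m=412.31, y_m=-87.04, heading_deg=172.5, est_accuracy_m=0.008)
print(position_is_trustworthy(fix))  # True: well inside the 2 cm limit
```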

We also have LIDARs all around the car so we can create an infrared image of where things are. A LIDAR sends out an infrared beam which is reflected back to its sensor, so it can position where objects are. From those positions we have a view of where any objects are and what needs to be avoided all around the car.
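For illustration, a toy version of that step might look like the following: each LIDAR return (a bearing and a measured range) is converted into an (x, y) position in the car's frame. Real multi-row 3D LIDARs add elevation angles and far more processing; this 2D sketch and its numbers are purely illustrative.

```python
# Toy 2D sketch of turning LIDAR returns into object positions around the car.
# Illustrative only: real 3D LIDAR processing involves elevation angles,
# filtering and clustering.

import math
from typing import List, Tuple


def returns_to_points(returns: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Convert (bearing_deg, range_m) returns into (x, y) points in the car frame."""
    points = []
    for bearing_deg, range_m in returns:
        theta = math.radians(bearing_deg)
        points.append((range_m * math.cos(theta), range_m * math.sin(theta)))
    return points


# Example: one return 15 m dead ahead, one 6 m to the right of the car.
print(returns_to_points([(0.0, 15.0), (-90.0, 6.0)]))
```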

Furthermore, we have five cameras linked to the AI system so we can do stereo vision. We could have just a single camera but it means we have quite a bit of redundancy. You don’t need that many cameras and the LIDARs and the GPS but it gives a particular team the ability to use any combination of the different technologies they like. We fitted everything into the car as an all-inclusive package for whatever project is going to be coming up.

Bram: So in terms of sensors or processors you don’t have anything that would be significantly different from a standard autonomous vehicle (AV), but due to the nature of the company you have a greater number of sensors than a normal AV would?

Alan: Exactly. Tesla is quite a good example. It has all of the hardware needed for autonomous driving. The cost of our autonomous driving system, however, is more than an entire Tesla. In a real commercial environment you obviously can’t fit that many sensors into the car, because it doesn’t make financial sense. However, if you look at companies like FiveAI, they would be using units very similar to some of the ones we’re using. The research-type cars will be quite similar to what we have. It’s just a different use case.

Components of the Robocar. Image: Roborace

Bram: I want to shift gears a little bit but stay focused on the components and the technology that you use. Which technological developments, like 5G arriving or solid-state LIDARs, have made the biggest impact on your ability to do what you do now?

Alan: The biggest in terms of the things we implemented is probably upgrading the LIDAR. For the original LIDARs that were fitted in the DevBots we had to align four separate LIDARs around the car. This required huge amounts of work to make sure everything was perfectly aligned. Every time the car went out, it only gave a 2D image. Now with the latest LIDARs, we already have a single one that is operating in 360 degrees and produces multiple rows of data, giving us a 3D image. It’s much less work. We can be far more productive in a day’s testing with the newer LIDARs than we could with the older technology. It just makes the LIDARs far more usable, whereas before we kind of avoided them because they were so much of a hassle. Now they’re easy to use and it just means that we can put the time into developing the actual autonomous side rather than fine-tuning the sensors. 

Next, the big thing that we haven’t actually implemented yet but are definitely looking into is 5G. We insist on having full, live data telemetry so we know exactly what the car is doing all of the time. For safety reasons we don’t have any humans in the car at high speeds. Apart from that we have a video stream, we have access to all of the data from the sensors on the car and we have the ability to adjust settings on the car while it is driving. This includes shifting the car if need be or changing anything else we don’t like the look of. 

At the moment, to do that, we basically have to create a 5 GHz network the whole way around the track. If we go somewhere like Silverstone, that requires several kilometres of fibre, numerous roadside units and numerous batteries. It’s a huge investment in time for every single event that we run.

“we have to basically create a 5 GHz network the whole way around the track”

Moving to 5G would allow us to basically run anywhere, by just using the signals that are in the air, assuming that we have a network available. Otherwise, there’d be various options where we just put up a single mast and cover the whole area that we want to run. There is a huge advantage there in reducing the time and the work it takes to go run these cars. It doesn’t directly affect the actual autonomous driving but much like the LIDARs it will just give us the ability to give teams more time to develop the software. The area that needs the most development is the software and testing various edge cases and functionalities. The more learning we can get, the more data we collect, the more we can fine-tune things. A lot of the hardware development is aimed at allowing this to be more efficient: with the testing, to get more running, to collect more data within that time.

Bram: Not many would have thought that you basically need to set up your own network around the track to really get your car running. I could definitely see how 5G would make your life a whole lot easier there. What other surprises did you run into when engineering the cars? What things were more difficult, easier or just different than what you expected when you set out?

Alan: That is a difficult one. When I joined Roborace I hadn’t particularly thought in detail how they went about doing what they were doing. It’s not that any of it was particularly a surprise because I didn’t have any preconceived ideas of what they were doing. The biggest thing I wasn’t expecting was indeed this need for a network.  When you think about it, it makes complete sense, but if you haven’t, it kind of catches you by surprise.

A lot of things have taken us by surprise in the company; it’s mainly a lot of small edge-case scenarios. We’ve had a few incidents where a team created some code and everything looked sensible. We thought about how to incorporate it and how it compared with how the car would run with a human driver. It all seemed to make sense. Then you test it with the ADS and something just doesn’t quite happen like you want, because a human driver would adjust in a certain way that the ADS won’t.


We’ve had examples of software working perfectly fine, having worked at previous racetracks. We then used the exact same philosophy and settings in a scenario as simple as a cool-down lap after a qualifying lap. We had adjusted the settings to push to the maximum limits, but before the car came into the garage we asked it to go a lot slower on the final lap. That is something that is common in normal motorsport. What we missed was a particular edge case in which the team decided to slow down the car quite a bit later than we expected. By switching to the new settings that late, the car worked out that it needed to have hit the brakes 20 metres in the past. Obviously it can’t time travel. With the new settings it hadn’t braked early enough, but it also couldn’t just brake fully. A human in that situation would remember that they could still brake at the maximum, as they had two corners earlier, and simply continue to do that.

If they had hit it 20 metres earlier or later it wouldn’t have been a problem. Because they hit exactly that window, it caused a problem that hadn’t been seen at several tracks before, simply because it had never hit that very specific edge case.
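A simplified sketch of the geometry behind that edge case: using the ordinary constant-deceleration braking-distance formula, if a lower target speed is requested too late, the computed braking point can land behind the car's current position, which is exactly the "brake 20 metres in the past" situation Alan describes. The numbers and names here are illustrative, not Roborace's actual settings.

```python
# Illustrative only: shows how a late switch to slower settings can put the
# required braking point behind the car. Numbers and names are made up.

def brake_point_m(corner_pos_m: float, v_now_mps: float,
                  v_target_mps: float, decel_mps2: float) -> float:
    """Lap position (m) where braking must begin to reach v_target by the corner,
    assuming constant deceleration."""
    braking_distance = (v_now_mps ** 2 - v_target_mps ** 2) / (2.0 * decel_mps2)
    return corner_pos_m - braking_distance


car_pos_m = 950.0      # where the car is when the new, slower settings arrive
corner_pos_m = 1000.0  # where the lower speed must already be reached

required = brake_point_m(corner_pos_m, v_now_mps=70.0, v_target_mps=30.0, decel_mps2=14.0)

if required < car_pos_m:
    # The plan demands braking in the past: the software has to handle this
    # gracefully rather than assume it can still meet the target.
    print(f"braking point {required:.1f} m lies {car_pos_m - required:.1f} m behind the car")
```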

It’s things like that, where you have these small oversights in which you forget that if something doesn’t go quite right, the human will react differently to the computer. 

We have to try and think of all of those and occasionally we miss one, but I think we have got quite good at most of them now. We got very good at planning things out and thinking about the details of what a particular change in settings will mean to the vast majority of those cases. It’s getting quite rare that we have those issues now but it still happens occasionally.

Bram: Do you have to train yourselves to think more like an AI? Like software would, instead of thinking in the human way, just to catch those glitches?

Alan: Yes, I guess. In many ways, the way the AI thinks isn’t that different to humans. It’s far more rigid though. We have to be far more detailed in our planning. 

Going back to my experience in more traditional motorsport: you give an instruction to a driver to slow down for a cool-down lap after the finish line. You don’t need any more detail than that. The computer needs to know: how much do I slow down? What is my new limit? When do I slow down, to the exact metre? You go from the very approximate description you would use in traditional motorsport to a very well-defined series of numbers to program into the ADS. Every small detail has to be specified in that extra depth.
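As a hedged illustration of that difference in precision, the human instruction "slow down for the cool-down lap" might have to become something like the following set of explicit parameters before an ADS can act on it. Every field name and value here is hypothetical, chosen only to show the level of detail involved.

```python
# Hypothetical example of what "slow down for a cool-down lap" has to become
# for an ADS. None of these names or values are Roborace's actual settings.

cool_down_lap_settings = {
    "trigger_point_m": 4312.0,   # lap distance (m) at which the new limits apply
    "target_speed_kph": 120.0,   # new speed ceiling for the lap
    "max_decel_mps2": 6.0,       # how hard the car may brake to reach it
    "max_lateral_g": 1.5,        # reduced cornering limit for the cool-down lap
    "power_limit_kw": 80.0,      # cap on motor power once triggered
}
```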

DevBot 2.0 on track. Image: Roborace

Bram: Makes sense. What do you think are the biggest misconceptions that engineers have about autonomous vehicle technologies and its current trajectory?

Alan: I had spoken to various colleagues before joining Roborace and they were saying that the vehicle dynamics problem is really interesting. In Formula 1 or Formula E we have written traction control systems, we have written ADS systems. We try and help the driver to maximize the car. 

But if you have all the sensors on the car and you don’t have to have the driver, you can do a much better job. In theory. 

You can take all of the information: all of the tire loads, the brake temperatures and the suspension loads. You feed all of that into a very accurate model so you know exactly how much grip the tire has. Then you know the exact amount of torque, lateral G, steering or braking that is needed to reach the exact limit without taking the tire past it.

And in theory you can do that independently for each wheel. That’s something a human can’t do. It’s an incredibly fascinating vehicle dynamics problem that shouldn’t be hugely new. They are calculations you would already use in simulations. It’s just unusual to apply all of them live on a race car.
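A much-simplified sketch of that idea: given an estimate of a tire's vertical load and friction, a friction-circle budget says how much longitudinal force, and therefore drive or brake torque, that wheel can still take once cornering has used its share, and the calculation can be run independently for each wheel. The model, names and numbers below are illustrative, not Roborace's.

```python
# Very simplified friction-circle sketch of the per-wheel torque idea.
# Illustrative only: the model, names and numbers are not Roborace's.

import math


def max_drive_torque_nm(vertical_load_n: float, mu: float,
                        lateral_force_n: float, wheel_radius_m: float) -> float:
    """Torque (Nm) the tire can still transmit longitudinally under the friction circle."""
    grip_total_n = mu * vertical_load_n                      # total force the tire can give
    grip_left_n = math.sqrt(max(grip_total_n ** 2 - lateral_force_n ** 2, 0.0))
    return grip_left_n * wheel_radius_m


# Example: a heavily loaded outside-front tire mid-corner, most grip already used laterally.
print(max_drive_torque_nm(vertical_load_n=4500.0, mu=1.6,
                          lateral_force_n=6500.0, wheel_radius_m=0.33))
```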

That is something that sounds really interesting, but when I got here, to Roborace, that isn’t where the majority of our work has gone. Eventually we will get to that stage, but there’s so much work involved in the localization and the perception layer.

Teaching the car to know where it is and where it needs to be going comes before maximizing performance. First things first, the car needs to know where it is on track. That is actually the much bigger, much harder problem to solve. Certainly within the motorsport community the misconception is about how much work goes into performance optimization, because most don’t realize how complex the localization issues are.

Bram: So you’re earlier on the curve than you would expect.

Alan: Yes, that is basically it.

Bram: There are some more basic problems to solve before you can get to optimization and perfection.

Alan: Yes, exactly. In the last few months we’ve gotten to the point where we actually have algorithms learning in the car, self-teaching what the performance limits of the car are. There’s still a lot of room for that to be optimized. The base fundamentals of the car, knowing where it is, its localization and its basic car controls, have actually taken far more of our work.


It’s exactly like you say: we’re not as far along the curve. It’s not even that we’re not as far along the curve as you imagine; it’s the common misconception of where that curve actually starts. Most people from a motorsport background think that the problems start two thirds of the way along the curve already. We’re only just getting to that point now. We spent the last few years working through all of the localization issues to get here. That is what most people in motorsport factor out, because it’s not something you would normally have to worry about in a race car with a driver.

Bram: Which companies, universities or maybe even individuals are the most impactful and far ahead in motorsport or in autonomous vehicles besides you folks? What are the people and organizations that you look up to basically?

Alan: We’re kind of insular and separate from the automotive world because of the way we’re doing things within motorsport. It’s so unique that there isn’t really any single company doing the same as us. There are obviously huge numbers of companies out there specializing in this or that, but in terms of the motorsport aspect of autonomous vehicles there really isn’t anyone to look up to in that respect.

We have worked alongside and collaborated on projects with Volkswagen and with various universities, such as the Technical University of Munich, the Technical University of Graz and the University of Pisa, and have helped them with lots of their research into autonomous driving. Those collaborations, certainly the ones with the universities, give them a better understanding of where to take their research. By putting theory into practice they’re getting a real-world example of what happens, which allows them to prioritise the way they develop rather than trying to set their research direction based on theory alone. It’s been quite interesting working with lots of universities on the theoretical side there. There are huge numbers of companies working on very similar things on the autonomous side.

In terms of motorsport there really isn’t that much out there. The closest is actually high-end traditional motorsport, in things like Le Mans Prototype 1 (LMP1), the top Le Mans category, in wet conditions. They have some fairly advanced software for various driver aids. Similarly, in Formula 1 there have been interesting driver aids that use fundamentals similar to what we use on the motorsport side of things. We are using bits of that, combined with bits of software like what is being developed by a Microsoft or Waymo or FiveAI. There are huge numbers of companies doing very similar things, but they’re all quite secretive about exactly what they’re doing on the road car side. We’re the halfway house between them and traditional motorsport.

Bram: Is there actually a new DevBot in the making, a version 3.0 or is the focus now on the 2.0 at the moment?

Alan: The focus in terms of DevBot is on the 2.0. That production has now come to an end; we have finished building those and are now ready to use them. The next cars that we will be rebuilding are the Robocars. There will be some planned developments to Robocar on the ease-of-use side and some on the driving side to help us run the car more efficiently, similar to what I was saying about the LIDAR range and how 5G would affect us. It’s about optimizing our testing time.

The vast majority of our development is software however. We change the software every week with updates.  

Two DevBot 2.0s on the Monteblanco racing circuit. Image: Roborace

We will also be building more vehicles in the future that allow us to visit very different environments and create more and more interesting projects going forward.

Bram: I’ll be looking forward to seeing those happen, Alan. I’m conscious of your time. I could go on for a lot longer because there’s so much there, but you've got work to do. Thank you so much.


This interview was held on November 19th, 2019, as part of Wevolver's research for the 2020 Autonomous Vehicle Technology Report. 
Because of the depth of Alan's story it was decided to publish the interview in its full length alongside the report. Complement this interview with a deep dive into the state of the art of self-driving technology by reading the full report here.


CEO and co-founder of Wevolver. Trained as an industrial designer. Previously founded a design studio that pioneered 3D printing large functional objects in the late 2000s. I also worked a lot with composite materials. Wevolver was a side-project that got positively out of hand...
