2020 Autonomous Vehicle Technology Report

The guide to understanding the state of the art in hardware & software for self-driving vehicles

Image: Benedict Redgrove

About the Contributors

Ali Nasseri Vancouver, Canada

  • Lab manager at the Programming Languages for Artificial Intelligence (PLAI) research group at the University of British Columbia.
  • Previously Chair of the Space Generation Advisory Council.
  • Cum Laude PhD. in Engineering Physics, Politecnico di Torino.

Adriaan Schiphorst Amsterdam, The Netherlands

  • Technology journalist.
  • MSc Advanced Matter & Energy Physics at the University of Amsterdam and the California Institute of Technology.
  • Previously editor at Amsterdam Science Journal.

Norman di Palo Rome, Italy

  • Robotics and Machine Learning researcher, conducting research on machine learning for computer vision and control at the Istituto Italiano di Tecnologia, Genova, Italy.
  • Cum Laude MSc. Engineering in Artificial Intelligence and Robotics, Sapienza Università di Roma, and graduate of the Pi School of Artificial Intelligence.

Jordan Sotudeh Los Angeles, USA

  • Senior Strategic Analyst at NASA Jet Propulsion Laboratory.
  • Master's in International Science and Technology Policy, Elliott School of International Affairs, Washington DC, USA.

Fazal Chaudry Headington, United Kingdom

  • Product Development Engineer.
  • Master of Science, Space Studies, International Space University, Illkirch-Graffenstaden, France.

Jeremy Horne San Felipe, Baja California, Mexico

  • President Emeritus of the American Association for the Advancement of Science, Southwest Division.
  • Science advisor and curriculum coordinator at the Inventors Assistance Center.
  • Ph.D. in philosophy from the University of Florida, USA.

Drue Freeman Cupertino, California, USA

  • CEO of the Association for Corporate Growth, Silicon Valley.
  • Former Sr. Vice President of Global Automotive Sales & Marketing for NXP Semiconductors.
  • Board Director at Sand Hill Angels. Advisory Board Member of automotive companies Savari and Ridar Systems, and Advisory Board Member of Silicon Catalyst, a semiconductor focused incubator.
  • Bachelor of Science in Electrical Engineering, San Diego State University, MBA from Pepperdine University, Los Angeles.

Mark A. Crawford Jr. Baoding City, China

  • Chief Engineer for Autonomous Driving Systems at Great Wall Motor Co.
  • PhD. Industrial and Systems Engineering - Global Executive Track, at Wayne State University.
  • Previously Technical Expert at Ford.

Akbar Ladak Bangalore, India

  • Founder, CEO, Kaaenaat, which develops autonomous robots for logistics, retail and security use cases, as well as Advanced Driver Assistance Systems (ADAS) for 2- & 4-wheeler vehicles for chaotic driving conditions in Asia & Africa.
  • Master in Electrical Engineering, Georgia Institute of Technology.

Shlomit Hacohen Tel Aviv, Israel

  • VP of Marketing at Arbe Robotics; developing ultra high-resolution 4D imaging radar technology.
  • MBA at Technion, the Israel Institute of Technology.

Zeljko Medenica Birmingham, Michigan, USA

  • Principal Engineer and Human Machine Interface (HMI) Team Lead at the US R&D Center of Changan, a major Chinese automobile manufacturer. Previously led research on novel and intuitive HMI for Advanced Driver Assistance Systems at Honda.
  • PhD. in Electrical and Computer Engineering from the University of New Hampshire.

Maxime Flament Brussels, Belgium

  • Chief Technology Officer, 5G Automotive Association (5GAA)
  • PhD. in Wireless Communication Systems, Chalmers University of Technology, Göteborg, Sweden.

Joakim Svennson Norrköping, Sweden

  • Senior ADAS Engineer, Function Owner Traffic Sign Recognition and Traffic Light Recognition at Veoneer.
  • MSc. Media Technology, Linköping University, Sweden.

William Morris Detroit, Michigan, USA

  • Automotive Engineer

Matthew Nancekievill Manchester, United Kingdom

  • Postdoctoral researcher submersible robotics, University of Manchester, UK.
  • PhD. Electrical and Electronics Engineering, University of Manchester, UK.
  • CEO Ice Nine Robotics.

Bureau Merkwaardig Amsterdam, The Netherlands

  • Award-winning designers Anouk de l’Ecluse and Daphne de Vries are a creative duo based in Amsterdam. They specialize in visualizing the core of an artistic problem. Bureau Merkwaardig initiates, develops and designs.

Sabina Begović Padua, Italy

  • Croatian-born Sabina is a visual and interaction designer. She obtained a Master's in Visual and Communication Design at Iuav, University of Venice, and a Master's in Art Education at the Academy of Applied Art, Rijeka, Croatia.

Benedict Redgrove London, United Kingdom

  • Benedict has a lifelong fascination with technology, engineering, innovation and industry, and is a dedicated proponent of modernism. This has intuitively led him to capture projects and objects at their most cutting edge. He has created an aesthetic of photography that is clean, pure and devoid of any miscellaneous information, winning him acclaim and numerous awards.
  • Redgrove has amassed a following and client base from some of the most advanced companies in the world. A career spent recording the pioneering technology of human endeavours has produced a photographic art form that gives viewers a window into often unseen worlds, such as Lockheed Martin Skunk Works, the UK MoD, the European Space Agency, British Aerospace and NASA. Whether capturing the U-2 reconnaissance pilots and stealth planes, the Navy Bomb Disposal Division or documenting NASA’s past, present and future, Benedict strives to capture the scope and scale of advancements and what they mean to us as human beings.
  • His many awards include the 2009 AOP Silver, DCMS Best of British Creatives, and the Creative Review Photography Annual 2003, 2008, and 2009.
  • At Wevolver we are great fans of Benedict’s work and how his pictures capture a spirit of innovation. We’re grateful he has enabled us to use his beautiful images of the Robocar to form the perfect backdrop for this report.


Introduction

Motorized transportation has changed the way we live. Autonomous vehicles are about to do so once more. This evolution of our transport, from horses and carriages to cars to driverless vehicles, has been driven by both technical innovation and socioeconomic factors. In this report we focus on the technological aspect.

Looking at the state of autonomous vehicles at the start of the 2020s we can see that impressive milestones have been achieved, such as companies like Waymo, Aptiv, and Yandex offering autonomous taxis in dedicated areas since mid-2018. At the same time, technology developers have run into unforeseen challenges.

“It’s been an enormously difficult, complicated slog, and it’s far more complicated and involved than we thought it would be, but it is a huge deal.”
Nathaniel Fairfield, distinguished software engineer and leader of the ‘behavior team’ at Waymo, December 2019 [1]

Some industry leaders and experts have scaled back their expectations, and others have spoken out against optimistic beliefs and predictions.[2,3] Gartner, a global research and advisory firm, weighs in by now placing ‘autonomous vehicles’ in the Trough of Disillusionment of their yearly Hype Cycle.[4]

The engineering community is less affected by media hype: over 22% of the engineers visiting the Wevolver platform do so to gain more knowledge of autonomous vehicle technology.[5] While topics like market size and startup valuations have been covered extensively by media around the globe, many engineers have told our team at Wevolver that they still lack the comprehensive knowledge needed to grasp the current technical possibilities.

Therefore, this report’s purpose is to enable you to be up to date and understand autonomous vehicles from a technical viewpoint. We have compiled and centralized the information you need to understand what technologies are needed to develop autonomous vehicles. We will elaborate on the engineering considerations that have been and will be made for the implementation of these technologies, and we’ll discuss the current state of the art in the industry. 

This report’s approach is to describe technologies at a high level, to offer the baseline knowledge you need, and to provide plentiful references to help you dive deeper whenever needed.

Most of the examples in the report will come from cars. However, individual personal transportation is not the only area in which Autonomous Vehicles (AVs) will be deployed and in which they will have a significant impact. Other areas include public transportation, delivery & cargo and specialty vehicles for farming and mining. All of these come with their own environment and specific usage requirements that are shaping AV technology. At the same time, all of the technologies described in this report form the ingredients for autonomy, and thus will be needed in various applications.

How this report came to be: a collaborative effort

Once the decision was made to create this report, we asked our community for writers with expertise in the field, and for other experts who could provide input. A team of writers and editors crafted a first draft, leveraging many external references. Then, in a second call-out to our community we found many engineers and leaders from both commercial and academic backgrounds willing to contribute significant amounts of their time and attention to providing extensive feedback and collaborating with us to shape the current report through many iterations. We owe much to their dedication, and through their input this report has been able to incorporate views from across the industry and 11 different countries.

Because this field continues to advance, we don’t consider our work done. We intend to update this report with new editions regularly as new knowledge becomes available and our understanding of the topic grows. You are invited to play an active role and contribute to this evolution, be it through brief feedback or by submitting significant new information and insights to our editorial team (info@wevolver.com). Your input is highly appreciated and invaluable in furthering knowledge on this topic.

This report would not have been possible without the sponsorship of Nexperia, a semiconductor company shipping over 90Bn components annually, the majority of which are within the automotive industry. Through their support, Nexperia shows a commitment to the sharing of objective knowledge to help technology developers innovate. This is the core of what we do at Wevolver.

The positive impact these technologies could have on individual lives, and on our society and planet as a whole, is an inspiring and worthwhile goal. At Wevolver we hope this report provides the information and inspiration you need to be a part of that evolution.

Bram Geenen
Editor in Chief,
CEO of Wevolver

Levels of Autonomy

When talking about autonomous vehicles, it is important to keep in mind that each vehicle can have a range of autonomous capabilities. To enable classification of autonomous vehicles, the Society of Automotive Engineers (SAE) International established its SAE J3016™ "Levels of Automated Driving" standard. Its levels range from 0 to 5, with a higher number designating an increase in autonomous capabilities.[6]

  • Level 0 (L0): No automation
  • Level 1 (L1): Advanced Driver Assistance Systems (ADAS) are introduced: features that either control steering or speed to support the driver. For example, adaptive cruise control that automatically accelerates and decelerates based on other vehicles on the road. 
  • Level 2 (L2): Now both steering and acceleration are simultaneously handled by the autonomous system. The human driver still monitors the environment and supervises the support functions. 
  • Level 3 (L3): Conditional automation: The system can drive without the need for a human to monitor and respond. However, the system might ask a human to intervene, so the driver must be able to take control at all times. 
  • Level 4 (L4): These systems have high automation and can fully drive themselves under certain conditions. The vehicle will not drive if not all conditions are met.
  • Level 5 (L5): Full automation: the vehicle can drive itself anywhere, at any time.

Levels of driving automation summary. Adapted from SAE by Wevolver.

The context and environment (including rules, culture, weather, etc.) in which an autonomous vehicle needs to operate greatly influences the level of autonomy that can be achieved. On a German Autobahn, the speed and accuracy of obstacle detection, and the subsequent decisions that need to be made to change the speed and direction of the vehicle, need to happen within a few milliseconds, while the same detection and decisions can be much slower for a vehicle that never leaves a corporate campus. In a similar manner, the models needed to drive in sunny Arizona are more predictable than those for New York City or Bangalore. That also means an automated driving system (ADS) capable of L3 automation in the usual circumstances of, say, Silicon Valley might need to fall back to L2 functionality if it were deployed on snowy roads or in a different country.

The capabilities of an autonomous vehicle determine its Operational Design Domain (ODD). The ODD defines the conditions under which a vehicle is designed to function and is expected to perform safely. The ODD includes (but isn’t limited to) environmental, geographical, and time-of-day restrictions, as well as traffic or roadway characteristics. For example, an autonomous freight truck might be designed to transport cargo from a seaport to a distribution center 30 km away, via a specific route, in daytime only. This vehicle’s ODD is limited to the prescribed route and time of day, and it should not operate outside of it.[7-9]

Level 5 ADS have the same mobility as a human driver: an unlimited ODD. Designing an autonomous vehicle able to adjust to all driving scenarios, in all road, weather, and traffic conditions, is the biggest technical challenge. Humans can perceive a large amount of sensory information and fuse this data to make decisions using both past experience and imagination, all within milliseconds. A fully autonomous system needs to match (and outperform) us in these capabilities. The question of how to assess the safety of such a system needs to be addressed by legislators. Companies have banded together, for example in the Automated Vehicle Safety Consortium, to jointly develop new frameworks for safety.[10]

Major automotive manufacturers, as well as new entrants like Google (Waymo), Uber, and many startups, are working on AVs. While design concepts differ, all these vehicles rely on a set of sensors to perceive the environment, advanced software to process the inputs and decide the vehicle’s path, and a set of actuators to act upon those decisions.[11] The next sections will review the technologies needed for these building blocks of autonomy.

“Autonomous vehicles are already here – they’re just not very evenly distributed”
William Gibson, Science fiction writer, April 2019 [12]


Sensing

Because an autonomous vehicle operates in an (at least partially) unknown and dynamic environment, it simultaneously needs to build a map of this environment and localize itself within the map. The input to perform this Simultaneous Localization and Mapping (SLAM) process needs to come from sensors and pre-existing maps created by AI systems and humans.

Example of the variety of static and moving objects that an autonomous vehicle needs to detect and distinguish from each other. Image: Wevolver, based on a photo by Dan Smedley.

Environmental mapping

In order to perceive a vehicle’s direct environment, object detection sensors are used. Here, we will make a distinction between two sets of sensors: passive and active. Passive sensors detect existing energy, like light or radiation, reflecting from objects in the environment, while active sensors send their own electromagnetic signal and sense its reflection. These sensors are already found in automotive products at Level 1 or 2, e.g. for lane keeping assistance. 

An example of typical sensors used to perceive the environment. Note that various vehicle manufacturers may use different combinations of sensors and might not use all of the displayed sensors. For example, multiple smaller LIDAR sensors are increasingly being used, and long range backward facing RADAR can be incorporated to cover situations like highway lane changing and merging. The placement of the sensors can vary as well. Image: Wevolver

Passive sensors

Due to the widespread use of object detection in digital images and videos, passive sensors based on camera technology were among the first sensors used on autonomous vehicles. Digital cameras rely on CCD (charge-coupled device) or CMOS (complementary metal-oxide semiconductor) image sensors, which convert the signal received in the 400-1100 nm wavelength range (the visible to near-infrared spectrum) into an electric signal.[13,14]

The surface of the sensor is broken down into pixels, each of which can sense the intensity of the signal received, based on the amount of charge accumulated at that location. By using multiple sensors that are sensitive to different wavelengths of light, color information can also be encoded in such a system.

While the principles of operation of CCD and CMOS sensors are similar, their actual implementations differ. CCD sensors transport charge to a specific corner of the chip for reading, while each pixel in a CMOS chip has its own transistor to read the interaction with light. The colocation of transistors with sensor elements in CMOS reduces light sensitivity, as the effective surface area of the sensor that interacts with the light is reduced.

This leads to higher noise susceptibility for CMOS sensors, such that CCD sensors can create higher quality images. Yet, CMOS sensors use up to 100 times less power than CCDs. Furthermore, they’re easier to fabricate using standard silicon production processes. Most current sensors used for autonomous vehicles are CMOS based and have a 1-2 megapixel resolution.[15]

While passive CMOS sensors are generally used in the visible light spectrum, the same CMOS technology can be used in thermal imaging cameras, which work in the infrared wavelengths of 780 nm to 1 mm. These are useful sensors for detecting hot bodies, such as pedestrians or animals, and for peak illumination situations such as the end of a tunnel, where a visual sensor would be blinded by the light intensity.[16]

The electromagnetic spectrum and its usage for perception sensors.[16]

In most cases, the passive sensor suite aboard the vehicle consists of more than one sensor pointing in the same direction. These stereo cameras can take 3D images of objects by overlaying the images from the different sensors. Stereoscopic images can then be used for range finding, which is important for autonomous vehicle applications.
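
To make the stereoscopic range-finding idea concrete, the sketch below applies the standard rectified-stereo relation, depth = focal length × baseline / disparity. The focal length, baseline, and disparity values are illustrative assumptions, not figures from any particular vehicle.

```python
# Minimal sketch: depth from stereo disparity for a rectified camera pair.
# depth = focal_length (px) * baseline (m) / disparity (px).
# The calibration values and the disparity below are illustrative assumptions.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 1000.0,
                         baseline_m: float = 0.3) -> float:
    """Estimated distance (m) to a point seen by both cameras."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a finite depth.")
    return focal_length_px * baseline_m / disparity_px

# A 15-pixel disparity with the assumed calibration corresponds to ~20 m.
print(depth_from_disparity(15.0))  # 20.0
```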

The main benefits of passive sensors are[17]:

  • High-resolution in pixels and color across the full width of its field of view.
  • Constant ‘frame-rate’ across the field of view.
  • Two cameras can generate a 3D stereoscopic view.
  • Lack of transmitting source reduces the likelihood of interference from another vehicle.
  • Low cost due to matured technology.
  • The images generated by these systems are easy for users to understand and interact with.

Indeed, Tesla cars mount an array of cameras all around the vehicle to gather visual field information, and London-based startup Wayve claims that its cars, which rely only on passive optical sensors, are safe enough for use in cities. The main shortcoming of passive sensors is their performance in low light or poor weather conditions; because they do not have their own transmission source, they cannot easily adapt to these conditions. These sensors also generate 0.5-3.5 Gbps of data,[18] which can be a lot for onboard processing or for communicating to the cloud, and is more than the amount of data generated by active sensors.

“Once you solve cameras for vision, autonomy is solved; if you don’t solve vision, it’s not solved … You can absolutely be superhuman with just cameras.”
 Elon Musk, 2017[19] 

“At the moment, lidar lacks the capabilities to exceed the capabilities of the latest technology in radar and cameras,”
Tetsuya Iijima, General Manager of Advanced Technology Development for Automated Driving, Nissan, May 2019[20]

“Let’s be candid, lidar is unaffordable in consumer vehicles, but if a lidar unit were available today that had good performance and was affordable, it would quietly show up in a Tesla car and this whole hubbub would go away.”
Bill Colleran, CEO, Lumotive, June 2019[21]


If a passive camera sensor suite is used on board an autonomous vehicle, it will likely need to see the entire surroundings of the car. This can be done by using a rotating camera that takes images at specific intervals, or by stitching the images of 4-6 cameras together in software. In addition, these sensors need a high dynamic range (the ability to image both highlights and dark shadows in a scene) of more than 100 dB,[22] giving them the ability to work in various light conditions and distinguish between various objects.

Dynamic range is measured in decibels (dB), a logarithmic way of describing a ratio. Humans have a dynamic range of about 200 dB. That means that in a single scene, the human eye can perceive tones that are about 1,000,000 times darker than the brightest ones. Cameras have a narrower dynamic range, though they are getting better.


Active Sensors

Active sensors have a signal transmission source and rely on the principle of time-of-flight (ToF) to sense the environment. ToF measures the travel time of a signal from its source to a target, by waiting for the reflection of the signal to return. 

The frequency of the signal used determines the energy used by the system, as well as its accuracy. Therefore, determining the correct wavelength plays a key role in choosing which system to use.

 

Time-of-flight principle illustrated. The distance can be calculated using the formula d = (v⋅t)/2, where d is the distance, v is the speed of the signal (the speed of sound for sound waves, and the speed of light for electromagnetic waves), and t is the time for the signal to reach the object and reflect back. This calculation method is the most common but has limitations, and more complex methods have been developed, for example using the phase shift in a returning wave. Image: Wevolver.
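
The formula in the caption above translates directly into code. The sketch below evaluates d = (v⋅t)/2 for an electromagnetic pulse and for an ultrasonic pulse; the echo times are made-up examples.

```python
# Minimal sketch of the time-of-flight relation d = (v * t) / 2 from the
# caption above. The echo times are made-up examples.

SPEED_OF_LIGHT = 299_792_458.0   # m/s, for RADAR and LIDAR pulses
SPEED_OF_SOUND = 343.0           # m/s in air at ~20 C, for ultrasonic sensors

def tof_distance(round_trip_time_s: float, signal_speed_m_s: float) -> float:
    """Distance to the reflecting object; the signal travels there and back."""
    return signal_speed_m_s * round_trip_time_s / 2.0

# A LIDAR echo returning after 1 microsecond corresponds to roughly 150 m.
print(tof_distance(1e-6, SPEED_OF_LIGHT))   # ~149.9 m
# An ultrasonic echo after 10 milliseconds corresponds to roughly 1.7 m.
print(tof_distance(10e-3, SPEED_OF_SOUND))  # ~1.72 m
```
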
Ultrasonic sensors (also referred to as SONAR: SOund NAvigation and Ranging) use ultrasound waves for ranging and are by far the oldest and lowest cost of these systems. As sound waves have the lowest frequency (longest wavelengths) among the sensors used, they are more easily disturbed. This means the sensor is easily affected by adverse environmental conditions like rain and dust. Interference created by other sound waves can affect sensor performance as well and needs to be mitigated by using multiple sensors and relying on additional sensor types. In addition, as sound waves lose energy with distance, this sensor is only effective over short distances, such as in parking assistance. More recent versions rely on higher frequencies to reduce the likelihood of interference.[24]

RADAR (RAdio Detection And Ranging) uses radio waves for ranging. Radio waves travel at the speed of light and have the lowest frequency (longest wavelength) of the electromagnetic spectrum. RADAR signals are reflected especially well by materials with considerable electrical conductivity, such as metallic objects. Interference from other radio waves can affect RADAR performance, while transmitted signals can easily bounce off curved surfaces, which can make the sensor blind to such objects. At the same time, using the bouncing properties of radio waves can enable a RADAR sensor to ‘see’ beyond objects in front of it. RADAR is less capable than LIDAR at determining the shape of detected objects.[25]

Overall, the main benefits of RADAR are its maturity, low cost, and resilience against low light and bad weather conditions. However, radar detects objects only at low spatial resolution and without much information about their spatial shape, so distinguishing between multiple objects or separating objects by direction of arrival can be hard. This has relegated radar to more of a supporting role in automotive sensor suites.[17]

Imaging radar is particularly interesting for autonomous cars. Unlike short range radar, which relies on 24 GHz radio waves, imaging radar uses higher energy 77-79 GHz waves. This allows the radar to scan a 100 degree field of view at up to 300 m distance. The technology eliminates former resolution limitations and generates a true 4D radar image of ultra-high resolution.[15,26,27]

“We need more time for the car to react, and we think imaging radar will be a key to that.”
 Chris Jacobs, Vice President of Autonomous Transportation and Automotive Safety, Analog Devices Inc, January 2019[26]

LIDAR (LIght Detection And Ranging) uses light in the form of a pulsed laser. LIDAR sensors send out 50,000-200,000 pulses per second to cover an area and compile the returning signals into a 3D point cloud. By comparing the differences between consecutive perceived point clouds, objects and their movement can be detected, such that a 3D map with a range of up to 250 m can be created.[28]


LiDAR provides a 3D point cloud of the environment. Image: Renishaw

There are multiple approaches to LIDAR technology:

Mechanical scanning LIDARS use rotating mirrors and/or mechanically rotate the laser. This setup provides a wide field of vision but is also relatively large and costly. This technology is the most mature.

Microelectromechanical mirror (MEMS) based LIDARS distribute the laser pulses via one or multiple tiny tilting mirrors, whose angle is controlled by the voltage applied to them. By substituting the mechanical scanning hardware with an electromechanical system, MEMS LIDARS can achieve accurate, power-efficient, and cost-efficient laser deflection.[29]

LIDAR Systems that do not use any mechanical parts are referred to as solid-state, and sometimes as ‘LIDAR-on-a-chip.’

Flash LIDARS are a type of solid-state LIDARS that diffuse their laser beam to illuminate an entire scene in one flash. The returning light is captured by a grid of tiny sensors. A major challenge of Flash LIDARS is accuracy.[30]

Phased-Array LIDARS are another solid-state technology that is undergoing development. Such systems feed their laser beam into a row of emitters that can change the speed and phase of the light that passes through.[31] The laser beam gets pointed by incrementally adjusting the signal’s phase from one emitter to the next. 

Metamaterials: A relatively new development is to direct the laser by shining it onto dynamically tunable metamaterials. Tiny components on these artificially structured metasurfaces can be dynamically tuned to slow down parts of the laser beam, which through interference results in a beam that’s pointing in a new direction. Lumotive, a startup funded by Bill Gates, claims its metamaterial-based LIDARS can scan 120 degrees horizontally and 25 degrees vertically.[32]

“Almost everything is in R&D, of which 95 percent is in the earlier stages of research, rather than actual development, the development stage is a huge undertaking — to actually move it towards real-world adoption and into true series production vehicles. Whoever is able to enable true autonomy in production vehicles first is going to be the game changer for the industry. But that hasn’t happened yet.”
 Austin Russell, founder and CEO of Luminar, June 2019[21]

Interference from a source with the same wavelength, or changes in the reflectivity of surfaces due to wetness, can affect the performance of LIDAR sensors. LIDAR performance can also be affected by external light, including light from other LIDARS.[33] While traditional LIDAR sensors use 900 nm wavelengths, new sensors are shifting to 1500 nm, enabling the vehicle to see objects 150-250 m away.[26,28]

LIDAR has the benefit of a relatively wide field of vision, with potentially full 360 degree 3D coverage (depending on the type of LIDAR chosen). Furthermore, it has a longer range, more accurate distance estimates compared to passive (optical) sensors, and a lower computing cost.[17] Its resolution, however, is poorer, and laser safety can put limits on the laser power used, which in turn can affect the capabilities of the sensor.

These sensors have traditionally been very expensive, with prices of tens of thousands of dollars for the iconic rooftop-mounted 360 degree units. However, prices are coming down: market leader Velodyne announced in January 2020 a compact LIDAR that should ship for $100, albeit offering a narrower field of vision (60° horizontal x 10° vertical) and shorter range (100 m).[34,35]


Various object detection and mapping sensors are used for various purposes, and have complementary capabilities and ranges. Image: Wevolver

Among the three main active, ToF-based systems, SONAR is mainly used as a sensor for very close proximity due to the lower range of ultrasound waves. RADAR cannot make out complex shapes, but it is able to see through adverse weather such as rain and fog. LIDAR can better sense an object’s shape, but has a shorter range and is more affected by ambient light and weather conditions. Usually two active sensor systems are used in conjunction, and if the aim is to rely on only one, LIDAR is often chosen. In addition, active sensors are often used in conjunction with passive sensors (cameras).

Choice of Sensors

While all the sensors presented have their own strengths and shortcomings, no single one offers a viable solution for all conditions on the road. A vehicle needs to be able to avoid close objects, while also sensing objects far away from it. It needs to be able to operate in different environmental and road conditions with challenging light and weather circumstances. This means that to reliably and safely operate an autonomous vehicle, a mixture of sensors is usually utilized.

The following technical factors affect the choice of sensors:

  • The scanning range, determining the amount of time you have to react to an object that is being sensed.
  • Resolution, determining how much detail the sensor can give you.
  • Field of view or the angular resolution, determining how many sensors you would need to cover the area you want to perceive.
  • Ability to distinguish between multiple static and moving objects in 3D, determining the number of objects you can track.
  • Refresh rate, determining how frequently the information from the sensor is updated.
  • General reliability and accuracy in different environmental conditions.
  • Cost, size and software compatibility.
  • Amount of data generated.

Comparison of various sensors used in autonomous vehicles[14,18,26,36-38]:

| Sensor     | Measurement distance (m) | Cost ($)     | Data rate (Mbps) |
|------------|--------------------------|--------------|------------------|
| Cameras    | 0-250                    | 4-200        | 500-3500         |
| Ultrasound | 0.02-10                  | 30-400       | < 0.01           |
| RADAR      | 0.2-300                  | 30-400       | 0.1-15           |
| LIDAR      | Up to 250                | 1,000-75,000 | 20-100           |

Note that these are typical ranges and more extreme values exist. For example, Arbe Robotics’ RADAR can generate 1 GBps, depending on requirements from OEMs. Also note that multiple low-cost sensors can be required to achieve performance comparable to high-end sensors.

Vehicle manufacturers use a mixture of optical and ToF sensors, with sensors strategically located to overcome the shortcomings of the specific technology. By looking at their setup we can see example combinations used for perception:

  • Tesla’s Model S uses a forward-mounted radar to sense the road, 3 forward-facing cameras to identify road signs, lanes and objects, and 12 ultrasonic sensors to detect nearby obstacles around the car.
  • Volvo-Uber uses a top-mounted 360 degree LIDAR to detect road objects, short and long range optical cameras to identify road signals, and radar to sense nearby obstacles.
  • Waymo uses a 360 degree LIDAR to detect road objects, 9 visual cameras to track the road and a radar for obstacle identification near the car.
  • Wayve uses a row of 2.3-megapixel RGB cameras with high-dynamic range, and satellite navigation to drive autonomously.[39]


Different Approaches by Tesla, Volvo-Uber, and Waymo:


Tesla Model S. Volvo-Uber XC90. Waymo Chrysler Pacifica. Companies take different approaches to the set of sensors used for autonomy, and where they are placed around the vehicle. Tesla’s sensors contain heating to counter frost and fog, Volvo’s cameras come equipped with a water-jet washing system for cleaning, and the cone that contains the cameras on Waymo’s Chrysler has water jets and wipers for cleaning. Volvo provides a base vehicle with pre-wiring and harnessing for Uber to directly plug in its own self-driving hardware, which includes the rig with LIDAR and cameras on top of the vehicle. Images: adapted from Tesla, Volvo, Waymo, by Wevolver.

Once the autonomous vehicle has scanned its environment, it can find its location on the road relative to other objects around it. This information is critical for lower-level path planning to avoid any collisions with objects in the vehicle’s immediate vicinity.


On top of that, in most cases the user communicates the place they would like to go to in terms of a geographical location, which translates to a latitude and longitude. Hence, in addition to knowing its relative position in the local environment, the vehicle needs to know its global position on Earth in order to be able to determine a path towards the user’s destination.

The default geolocalization method is satellite navigation, which provides a general reference frame for where the vehicle is located on the planet. Different Global Navigation Satellite Systems (GNSS) such as the American GPS, the Russian GLONASS, the European Galileo or the Chinese Beidou can provide positioning information with horizontal and vertical resolutions of a few meters. 

While GPS guarantees a global signal user range error (URE) of less than 7.8 m, its signal’s actual average range error has been less than 0.71 m. The real accuracy for a user, however, depends on local factors such as signal blockage, atmospheric conditions, and the quality of the receiver used.[46] Galileo, once fully operational, could deliver a URE of less than 1 m.[47] Higher accuracy can be achieved using multi-constellation receivers, which leverage signals from multiple GNSS systems. Furthermore, accuracy can be brought down to roughly 1 cm using additional technologies that augment the GNSS system.

To identify the position of the car, all satellite navigation systems rely on the time of flight of a signal between the receiver and a set of satellites. GNSS receivers triangulate their position using their calculated distance from at least four satellites.[48] By continuously sensing, the path of the vehicle is revealed. The heading of the vehicle can be determined using two GNSS antennas or dedicated onboard sensors such as a compass, or it can be calculated based on input from vision sensors.
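
As a rough illustration of this ranging principle, the sketch below estimates a receiver position from distances to satellites at known positions using a least-squares fit. It is a simplification: a real receiver also solves for its clock bias (one reason at least four satellites are needed) and handles many error sources, and all coordinates here are made-up values in an arbitrary local frame.

```python
# Simplified sketch of GNSS-style positioning: estimate a receiver position
# from ranges to satellites at known positions with a least-squares fit.
# Receiver clock bias, atmospheric delays, and noise are omitted, and all
# coordinates are made-up values in an arbitrary local frame.

import numpy as np
from scipy.optimize import least_squares

sat_positions = np.array([                 # assumed satellite positions (m)
    [ 15_000_000,  10_000_000, 20_000_000],
    [-12_000_000,  18_000_000, 17_000_000],
    [ 20_000_000,  -5_000_000, 16_000_000],
    [ -8_000_000, -14_000_000, 19_000_000],
], dtype=float)

true_receiver = np.array([1_000_000.0, 2_000_000.0, 3_000_000.0])
measured_ranges = np.linalg.norm(sat_positions - true_receiver, axis=1)

def residuals(position):
    # Difference between predicted and measured satellite distances.
    return np.linalg.norm(sat_positions - position, axis=1) - measured_ranges

estimate = least_squares(residuals, x0=np.zeros(3)).x
print(np.round(estimate))  # recovers approximately the true receiver position
```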

While accurate, GNSS systems are also affected by environmental factors such as cloud cover and signal reflection. In addition, signals can be blocked by man-made objects such as tunnels or large structures. In some countries or regions, the signal might also be too weak to accurately geolocate the vehicle.

To avoid geolocalization issues, an Inertial Measurement Unit (IMU) is integrated with the system.[50,51] By using gyroscopes and accelerometers, such a unit can extrapolate the data available to estimate the new location of the vehicle when GNSS data is unavailable. 

In the absence of additional signals or onboard sensors, dead-reckoning may be used, where the car's navigation system uses wheel circumference, speed, and steering direction data to calculate a position from occasionally received GPS data and the last known position.[52] In a smart city environment, additional navigational aid can be provided by transponders that provide a signal to the car; by measuring its distance from two or more signals the vehicle can find its location within the environment.
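
A minimal sketch of the dead-reckoning idea follows, assuming the vehicle provides speed and heading at a fixed rate; the sampling interval and values are illustrative, and real systems also account for wheel slip and sensor drift.

```python
# Minimal dead-reckoning sketch: starting from the last known fix, advance the
# position estimate using speed and heading samples until the next GPS update.
# The sampling interval and samples below are illustrative assumptions.

import math

def dead_reckon(last_fix, samples, dt=0.1):
    """last_fix: (x, y) in metres; samples: iterable of (speed m/s, heading rad)."""
    x, y = last_fix
    for speed, heading in samples:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y

# Driving ~14 m/s at a constant heading for 5 seconds moves the estimate ~70 m.
print(dead_reckon((0.0, 0.0), [(14.0, 0.0)] * 50))  # approximately (70.0, 0.0)
```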

Maps

Today, map services such as Google Maps are widely used for navigation. However, autonomous vehicles will likely need a new class of high definition (HD) maps that represent the world at up to two orders of magnitude more detail. With an accuracy of a decimeter or less, HD maps increase the spatial and contextual awareness of autonomous vehicles and provide a source of redundancy for their sensors. 

A 3D HD map covering an intersection. Image: Here

By triangulating the distance from known objects in an HD map, the precise localization of a vehicle can be determined. Another benefit is that the detailed information a high definition map contains can narrow down the information that a vehicle’s perception system needs to acquire, enabling the sensors and software to dedicate more effort to moving objects.[53]

HD maps can represent lanes, geometry, traffic signs, the road surface, and the location of objects like trees. The information in such a map is represented in layers, with generally at least one of the layers containing 3D geometric information of the world in high detail to enable precise calculations. 

Challenges lie in the large efforts to generate high definition maps and keep them up to date, as well as in the large amount of data storage and bandwidth it takes to store and transfer these maps.[54]

“If we want to have autonomous cars everywhere, we have to have digital maps everywhere,” Amnon Shashua, Chief Technology Officer at Mobileye, 2017[55]

Most in the industry consider HD maps to be a necessity for high levels of autonomy, at least for the near future, as they have to make up for the limited abilities of AI. However, some disagree or take a different approach.

According to Elon Musk, Tesla “briefly barked up the tree of high precision lane line [maps], but decided it wasn't a good idea.”[56] In 2015 Apple, for its part, patented an autonomous navigation system that lets a vehicle navigate without referring to external data sources. The system in the patent leverages AI capabilities and vehicle sensors instead.[57]

As another example, London-based startup Wayve uses only standard sat-nav and cameras. They aim to achieve full autonomy by using imitation learning algorithms to copy the behavior of expert human drivers, and subsequently using reinforcement learning to learn from each intervention by their human safety driver while training the model in autonomous mode.[58]

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) also took a ‘map-less’ approach and developed a system that uses LIDAR sensors for all aspects of navigation, only relying on GPS for a rough location estimate.[59-61]

“The need for dense 3-D maps limits the places where self-driving cars can operate.”
 Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), 2018


Thinking & Learning

Based on the raw data captured by the AV’s sensor suite and the pre-existing maps it has access to, the automated driving system needs to construct and update a map of its environment while keeping track of its location in it. Simultaneous localization and mapping (SLAM) algorithms let the vehicle achieve just that. Once its location on its map is known, the system can start planning which path to take to get from one point to another. 

SLAM and Sensor Fusion

SLAM is a complex process because a map is needed for localization and a good position estimate is needed for mapping. Though long considered a fundamental chicken-or-egg problem for robots to become autonomous, breakthrough research in the mid-1980s and 90s solved SLAM on a conceptual and theoretical level. Since then, a variety of SLAM approaches have been developed, the majority of which use probabilistic concepts.[62,63]

In order to perform SLAM more accurately, sensor fusion comes into play. Sensor fusion is the process of combining data from multiple sensors and databases to achieve improved information. It is a multi-level process that deals with the association, correlation, and combination of data, and it yields less expensive, higher quality, or more relevant information than a single data source could provide alone.[64]
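
As a toy illustration of probabilistic fusion, the sketch below combines two independent distance estimates of the same object, weighting each by the inverse of its variance; this is the core of a Kalman-filter measurement update. The sensor variances are assumptions for the example only.

```python
# Toy example of probabilistic sensor fusion: two independent estimates of the
# same distance (say RADAR and LIDAR) are combined, each weighted by the
# inverse of its variance. The sensor variances are illustrative assumptions.

def fuse(mean_a, var_a, mean_b, var_b):
    """Return the fused estimate and its (smaller) variance."""
    weight_a = var_b / (var_a + var_b)
    fused_mean = weight_a * mean_a + (1.0 - weight_a) * mean_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused_mean, fused_var

radar_estimate = (25.3, 1.0 ** 2)   # distance (m), variance (m^2)
lidar_estimate = (24.9, 0.2 ** 2)
print(fuse(*radar_estimate, *lidar_estimate))  # ~24.92 m, variance ~0.038
```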

The complex computation and decision making environment of an autonomous vehicle. Image: Wevolver

For all the processing and decision making required to go from sensor data to motion, two different AI approaches are generally used [66]:

  1. Sequentially, where the driving process is decomposed into components of a hierarchical pipeline. Each step (sensing, localization and mapping, path planning, motion control) is handled by a specific software element, with each component of the pipeline feeding data to the next one, or
  2. An End-to-End solution based on deep learning that takes care of all these functions.

Two main approaches to the AI architecture of autonomous vehicles: 1) sequential perception-planning-action pipelines, 2) an End-to-End system.

The question of which approach is best for AVs is an area of ongoing debate. The traditional, and most common, approach consists of decomposing the problem of autonomous driving into a number of sub-problems and solving each one sequentially with dedicated machine learning techniques from computer vision, sensor fusion, localization, control theory, and path planning.[67]

End-to-End (e2e) learning is attracting increasing interest as a potential solution to the challenges of building these complex AI systems. End-to-end learning applies iterative learning to a complex system as a whole, and has been popularized in the context of deep learning. An end-to-end approach attempts to create an autonomous driving system with a single, comprehensive software component that directly maps sensor inputs to driving actions. Thanks to breakthroughs in deep learning, the capabilities of e2e systems have increased to the point that they are now considered a viable option. These systems can be created with one or multiple types of machine learning methods, such as Convolutional Neural Networks or Reinforcement Learning, which we will elaborate on later in this report.[67,68]

First, we’ll review how the data from the sensors is processed to reach a decision regarding the robotic vehicle’s motion. Depending on the different sensors used onboard the vehicle, different software schemes can be used to extract useful information from the sensor signals.

There are several algorithms that can be used to identify objects in an image. The simplest approach is edge detection, where changes in the intensity of light or color across pixels are assessed.[69] One would expect pixels that belong to the same object to have similar light properties; hence, looking at changes in light intensity can help separate objects or detect where one object ends and the next begins. The problem with this approach is that in low light (say at night) the algorithm cannot perform well, since it relies on differences in light intensity. In addition, as this analysis has to be done on each frame and on many pixels, the computational cost is high.
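
A minimal sketch of this intensity-gradient idea on a toy image follows; production systems use more refined operators (smoothing plus Sobel-style kernels), and the threshold here is an arbitrary assumption.

```python
# Minimal sketch of intensity-gradient edge detection on a toy image. Pixels
# where brightness changes sharply relative to their neighbours are marked as
# edges; the 0.2 threshold is an arbitrary assumption.

import numpy as np

def edge_map(gray: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """gray: 2D array of intensities in [0, 1]. Returns a boolean edge mask."""
    dy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))   # vertical change
    dx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))   # horizontal change
    return np.sqrt(dx ** 2 + dy ** 2) > threshold

# A dark square on a bright background produces edges along the square's border.
image = np.ones((8, 8))
image[2:6, 2:6] = 0.0
print(edge_map(image).astype(int))
```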

LIDAR data can be used to compute the movement of the vehicle using the same principle. By comparing two point clouds taken at consecutive instants, some objects will have moved closer to or further from the sensor. A software technique called iterative closest point iteratively revises the transformation between the two point clouds, which makes it possible to calculate the translation and rotation the vehicle has undergone.
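
The sketch below shows the iterative-closest-point idea in 2D on synthetic data: match each point to its nearest neighbour in the previous scan, estimate the best rigid transform, apply it, and repeat. Real LIDAR odometry uses far more robust variants (k-d trees, outlier rejection, point-to-plane metrics); the scan data here is randomly generated.

```python
# 2D sketch of the iterative-closest-point idea on synthetic data: match each
# point to its nearest neighbour in the previous scan, estimate the best rigid
# transform (rotation R, translation t), apply it, and repeat.

import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation and translation mapping src onto dst (Kabsch)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:          # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    return r, dst.mean(0) - r @ src.mean(0)

def icp(prev_scan, new_scan, iterations=20):
    """Estimate the rigid motion that aligns new_scan onto prev_scan."""
    current = new_scan.copy()
    for _ in range(iterations):
        # Brute-force nearest neighbours; fine for a toy example.
        dists = np.linalg.norm(current[:, None, :] - prev_scan[None, :, :], axis=2)
        matches = prev_scan[dists.argmin(axis=1)]
        r, t = best_rigid_transform(current, matches)
        current = current @ r.T + t
    return best_rigid_transform(new_scan, current)   # total estimated motion

# Synthetic check: a scan shifted by (1.0, -0.5) between frames.
rng = np.random.default_rng(0)
scan = rng.uniform(-20, 20, size=(100, 2))
rotation, translation = icp(scan, scan + np.array([1.0, -0.5]))
print(np.round(translation, 2))   # expected to be close to [-1.0, 0.5]
```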

While useful, the preceding approaches consume much computing time, and cannot easily be scaled to the case of a self-driving vehicle operating in a continuously changing environment. That is where machine learning comes into play, relying on computer algorithms that have already learned to perform a task from existing data. 

Algorithms turn input from sensors into object classifications and a map of the environment. Image: Wayve

Machine Learning Methods

Different types of machine learning algorithms are currently being used for different applications in autonomous vehicles. In essence, machine learning maps a set of inputs to a set of outputs, based on a set of training data provided. Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN) and Deep Reinforcement Learning (DRL) are the most common deep learning methodologies applied to autonomous driving.[66] 

CNNs are mainly used to process images and spatial information to extract features of interest and identify objects in the environment. These neural networks are built around convolution layers: collections of filters that try to distinguish elements of an image or of the input data in order to label them. The output of the convolution layers is fed into an algorithm that combines them to predict the best description of the image. The final software component is usually called an object classifier, as it can categorize an object in the image, for example a street sign or another car.[69-71]
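
A minimal PyTorch sketch of this convolution-plus-classifier structure is shown below; the layer sizes, the 64x64 input, and the three example classes are illustrative assumptions rather than any production network.

```python
# Minimal PyTorch sketch of a convolutional feature extractor followed by an
# object-classifier head. Layer sizes, the 64x64 input, and the three example
# classes are illustrative assumptions, not any production network.

import torch
import torch.nn as nn

class TinyObjectClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):      # e.g. car, pedestrian, sign
        super().__init__()
        self.features = nn.Sequential(              # convolution layers extract features
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(             # object classifier head
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A batch of four 64x64 RGB crops -> one score per class for each crop.
scores = TinyObjectClassifier()(torch.randn(4, 3, 64, 64))
print(scores.shape)   # torch.Size([4, 3])
```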

RNNs are powerful tools when working with temporal information such as videos. In these networks the outputs from the previous steps are fed into the network as input, allowing information and knowledge to persist in the network and be contextualized.[72-74] 

DRL combines Deep Learning (DL) and Reinforcement Learning (RL). DRL methods let software-defined ‘agents’ learn the best possible actions to achieve their goals in a virtual environment using a reward function. These goal-oriented algorithms learn how to attain an objective, or how to maximize along a specific dimension, over many steps. While promising, a key challenge for DRL is the design of the correct reward function for driving a vehicle. Deep Reinforcement Learning is still considered to be at an early stage with regard to application in autonomous vehicles.[75,76]
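
To illustrate why reward design is hard, the sketch below scores one simulated time step of driving with a hand-written reward function; every term and weight is an assumption, and tuning such functions so that they do not encourage unwanted behavior is exactly the open challenge mentioned above.

```python
# Illustrative hand-written reward for one simulated time step of driving.
# Every term and weight here is an assumption.

def driving_reward(speed_m_s, speed_limit_m_s, lane_offset_m, collided, jerk_m_s3):
    if collided:
        return -100.0                                      # dominant crash penalty
    reward = 1.0                                           # small bonus for making progress
    reward -= 2.0 * max(0.0, speed_m_s - speed_limit_m_s)  # penalize speeding
    reward -= 0.5 * abs(lane_offset_m)                     # stay near the lane centre
    reward -= 0.1 * abs(jerk_m_s3)                         # discourage harsh control changes
    return reward

print(driving_reward(speed_m_s=13.0, speed_limit_m_s=14.0,
                     lane_offset_m=0.2, collided=False, jerk_m_s3=0.5))  # 0.85
```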

These methods don’t necessarily sit in isolation. For example, companies like Tesla rely on hybrid forms, which try to use multiple methods together to increase accuracy and reduce computational demands.[77,78] 

Training networks on several tasks at once is a common practice in deep learning, often called multi-task training or auxiliary task training. It helps avoid overfitting, a common issue with neural networks. When a machine learning algorithm is trained on a particular task, it can become so focused on imitating the data it is trained on that its output becomes unrealistic when an interpolation or extrapolation is attempted. By training the machine learning algorithm on multiple tasks, the core of the network specializes in finding general features that are useful for all purposes, instead of specializing in only one task. This can make the outputs more realistic and useful for applications.
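
A minimal PyTorch sketch of such a multi-task setup follows: a shared backbone feeds a main steering head and two auxiliary heads, so the shared features must stay general. The architecture, the choice of tasks, and the loss weights are illustrative assumptions.

```python
# Minimal PyTorch sketch of multi-task (auxiliary-task) training: one shared
# backbone feeds a steering head plus two auxiliary heads. The architecture,
# tasks, and loss weights are assumptions for illustration only.

import torch
import torch.nn as nn

class MultiTaskDrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(                 # shared feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
            nn.Flatten(), nn.Linear(16 * 8 * 8, 128), nn.ReLU(),
        )
        self.steering_head = nn.Linear(128, 1)         # main task: steering angle
        self.segmentation_head = nn.Linear(128, 10)    # auxiliary: coarse scene labels
        self.depth_head = nn.Linear(128, 1)            # auxiliary: mean scene depth

    def forward(self, image):
        z = self.backbone(image)
        return self.steering_head(z), self.segmentation_head(z), self.depth_head(z)

# Training would minimize a weighted sum of per-head losses, for example:
# loss = loss_steering + 0.3 * loss_segmentation + 0.3 * loss_depth
steer, seg, depth = MultiTaskDrivingNet()(torch.randn(2, 3, 64, 64))
print(steer.shape, seg.shape, depth.shape)   # [2, 1], [2, 10], [2, 1]
```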

Gathering Data

In order for these algorithms to be used, they need to be trained on data sets that represent realistic scenarios. As with any machine learning process, part of the data set is used for training and another part for validation and testing. A great amount of data is therefore annotated by autonomous vehicle companies to achieve this goal.[77] Many datasets, with semantic segmentation of street objects, sign classification, pedestrian detection, and depth prediction, have been made openly available by researchers and companies including Aptiv, Lyft, Waymo, and Baidu. This has significantly helped to push the capabilities of machine learning algorithms forward.[79-81]

One way to gather data is by using a prototype car driven by a human driver. The perception sensors onboard are used to gather information about the environment. At the same time, an onboard computer records the sensor readings coming from the pedals, the steering wheel, and all other information that can describe how the driver acts. Due to the large amount of data that needs to be gathered and labelled by humans, this is a costly process. According to Andrej Karpathy, Director of AI at Tesla, most of the effort in his group is dedicated to getting better and better data.[77]


Simulators are used to explore thousands of variable scenarios. Image: Autoware.AI

Alternatively, simulators may be used. "Current physical testing isn’t enough; therefore, virtual testing will be required," says Jamie Smith, Director of Global Automotive Strategy at National Instruments.[82] By building realistic simulators, software companies can create thousands of virtual scenarios. This brings the cost of data acquisition down but introduces the problem of realism: these virtual scenarios are defined by humans and are less random than what a real vehicle goes through. There is growing research in this area, called sim-to-real transfer, which studies methods to transfer the knowledge gathered in simulation to the real world.[83]

“We have quite a good simulation, too, but it just does not capture the long tail of weird things that happen in the real world.”
Elon Musk, April 2019[84]

“At Waymo, we’ve driven more than 10 million miles in the real world, and over 10 billion miles in simulation,” Waymo CTO Dmitri Dolgov, July 2019[85]

Using all the data from the sensors and these algorithms, an autonomous vehicle can detect objects surrounding it. Next, it needs to find a path to follow.

Path Planning

With the vehicle knowing the objects in its environment and its own location, the large-scale path of the vehicle can be determined using a Voronoi diagram (maximizing the distance between the vehicle and objects), an occupancy grid algorithm, or a driving corridors algorithm.[86] However, these traditional approaches are not enough for a vehicle that is interacting with other moving objects around it, and their output needs to be fine-tuned.
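
As a simple illustration of planning over an occupancy grid, the sketch below runs A* search over the free cells of a tiny hand-made grid. Real planners work in continuous space and respect vehicle kinematics; this only shows the underlying search idea.

```python
# Toy occupancy-grid planner: A* search over free cells of a small hand-made
# grid (0 = free, 1 = occupied).

import heapq

def astar(grid, start, goal):
    """Return a list of grid cells from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]
    came_from, cost = {start: None}, {start: 0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:                      # reconstruct the path
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost[current] + 1
                if nxt not in cost or new_cost < cost[nxt]:
                    cost[nxt] = new_cost
                    heuristic = abs(nr - goal[0]) + abs(nc - goal[1])
                    heapq.heappush(frontier, (new_cost + heuristic, nxt))
                    came_from[nxt] = current
    return None                                  # no collision-free path exists

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, start=(0, 0), goal=(2, 0)))    # path detours around the wall
```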

Some autonomous vehicles rely on machine learning algorithms to not only perceive their environment but also to act on that data to control the car. Path planning can be taught to a CNN through imitation learning, in which the CNN tries to imitate the behavior of a driver. In more advanced algorithms, DRL is used, where a reward is provided to the autonomous system for driving in an acceptable manner. Usually, these methods are hybridized with more classical methods of motion planning and trajectory optimization to make sure that the paths are robust. In addition, manufacturers can include additional objectives, such as reducing fuel use, for the model to take into account as it tries to identify optimal paths.[87]


Autonomous vehicles deploy algorithms to plan the vehicle’s own path, as well as estimate the path of other moving objects (in this case the system also estimates the path of the 2 red squares that represent bicyclists). Image: Waymo

Training neural networks and running inference during operation of the vehicle requires enormous computing power. Until recently, most machine learning tasks were executed on cloud-based infrastructure with abundant computing power and cooling. With autonomous vehicles, that is no longer possible, as the vehicle needs to be able to react to new data in real time. As such, part of the processing required to operate the vehicle needs to take place onboard, while model refinements can be done in the cloud.

Recent advances in machine learning are focusing on how the huge amount of data generated by the sensors onboard AVs can be efficiently processed to reduce the computational cost, using concepts such as attention [88] or core-sets.[89] In addition, advances in chip manufacturing and miniaturization are increasing the computing capacity that can be mounted on an autonomous vehicle. With advances in networking protocols, cars might be able to rely on low-latency network-based processing of data to aid them in their autonomous operation.

“In most cases, if you look at what went wrong during a disengagement [the moment when the AV needs human intervention - note by editor], the role of hardware failure is 0.0 percent. Most of the time, it’s a software failure, that is, software failing to predict what the vehicles are gonna be doing or what the pedestrians are gonna be doing.”
 Anthony Levandowski, autonomous vehicle technology pioneer, April 2019[90]

Acting

How does a vehicle act upon all this information? In current cars driven by humans, the vehicle’s actions, such as steering, braking, or signaling, are generally controlled by the driver. A mechanical signal from the driver is translated by an electronic control unit (ECU) into actuation commands that are executed by electric or hydraulic actuators on board the car. A small number of current vehicle models contain drive-by-wire systems, where mechanical systems like the steering column are replaced by an electronic system.

In a (semi-)autonomous car, such functionality is replaced by drive control software directly communicating to an ECU. This can provide opportunities to change the structure of the vehicle and to reduce the number of components, especially those added to specifically translate mechanical signals from the driver to electric signals for the ECUs. 

Today’s vehicles contain multiple ECUs, from around 15-20 in standard cars to around a hundred in high-end vehicles.[91] An ECU is a simple computing unit with its own microcontroller and memory, which it uses to process the input data it receives into output commands for the subsystem it controls, for example to shift an automatic gearbox.

Generally speaking, ECUs are responsible either for operations that control the vehicle, for safety features, or for running infotainment and interior applications.[92] Most ECUs support a single application, like electronic power steering, and locally run algorithms and process sensor data.[93]
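
As a rough illustration of what a single-function ECU does, the sketch below maps a sensed accelerator-pedal position to a torque request for the powertrain; the pedal map, limits, and numbers are arbitrary assumptions, not any manufacturer's calibration.

```python
# Rough illustration of a single-function ECU: map a sensed accelerator-pedal
# position to a torque request for the powertrain. The pedal map and limits
# are arbitrary assumptions, not any manufacturer's calibration.

def torque_request_nm(pedal_position, vehicle_speed_m_s,
                      max_torque_nm=250.0, speed_limiter_m_s=50.0):
    """pedal_position in [0, 1] -> requested drive torque in Nm."""
    pedal_position = min(max(pedal_position, 0.0), 1.0)    # clamp noisy input
    request = max_torque_nm * pedal_position ** 1.5        # progressive pedal map
    if vehicle_speed_m_s >= speed_limiter_m_s:             # simple speed limiter
        request = 0.0
    return request

print(torque_request_nm(0.5, 20.0))   # ~88 Nm with the assumed map
```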

Architectures: Distributed versus Centralized 

Increasing demands and complexity challenge engineers to design the right electronic architecture for a system that needs to perform sensor fusion and simultaneously distribute decisions, in a synchronized way, to the lower-level subsystems that act on the instructions.[94,95]

In theory, at one extreme of the possible setups one can choose a completely distributed architecture, where every sensing unit processes its raw data and communicates with the other nodes in the network. At the other end of the spectrum is a centralized architecture, where all Remote Control Units (RCUs) are directly connected to a central control point that collects all information and performs the sensor fusion process.[96,97]

In the middle of this spectrum are hybrid solutions that combine a central unit working at higher abstraction levels with domains that perform dedicated sensor processing and/or execute decision making algorithms. Such domains can be based on location within the vehicle, e.g. domains for the front and back of the car, on the type of function they control, or on the type of sensors they process (e.g. cameras).[93] 


CAD render of the wire harness of a Bentley Bentayga. This is a Level 1 automated vehicle with advanced driver assistance systems, including adaptive cruise control, automatic braking in cities, pedestrian detection, night vision (which recognizes people and animals), traffic sign recognition and a system that changes speed in line with local speed limits. Image: Bentley

In a centralized architecture the measurements from different sensors are independent quantities and are not affected by other nodes. The data is not modified or filtered at the edge nodes of the system, providing the maximum possible information for sensor fusion, and there is low latency. The challenge is that huge amounts of data need to be transported to the central unit and processed there. That not only requires a powerful central computer, but also a heavy wire harness with high bandwidth. Today’s vehicles contain over a kilometer of wires, weighing tens of kilos.[98]


A distributed architecture can be achieved with a lighter electrical system but is more complex. Although the demands on bandwidth and centralized processing are greatly reduced in such an architecture, it introduces latency between the sensing and actuation phases and makes validating the data more challenging.

Power, Heat, Weight, and Size challenges

Beyond increased system complexity, automation also poses challenges for the power consumption, thermal footprint, weight, and size of vehicle components.

Regardless of how distributed or centralized the architecture is, the power requirements of the autonomous system are significant. The prime driver for this is the computational load, which can easily be up to 100 times higher for fully autonomous vehicles than for the most advanced vehicles in production today.[99]

This power hunger increases the demands on the performance of the battery and on the capabilities of the semiconductor components in the system. For fully electric vehicles, the driving range is negatively impacted by this power demand. Therefore, some companies like Waymo and Ford have opted to focus on hybrid vehicles, while Uber uses a fleet of gasoline-powered SUVs. However, experts point to fully electric ultimately being the powertrain of choice because of the inefficiency of combustion engines in generating the electric power used for onboard computing.[98,100]

“To put such a system into a combustion-engined car doesn’t make any sense, because the fuel consumption will go up tremendously,”  Wilko Stark, Vice President of Strategy, Mercedes-Benz, 2018[101] 
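
To get a feel for the range impact on an electric vehicle, the back-of-the-envelope sketch below compares range with and without a continuous autonomy load. Every number in it (battery size, consumption, compute load, average speed) is an assumption chosen only for illustration.

```python
# Back-of-the-envelope sketch of how a compute-heavy autonomy stack eats into
# EV range. All numbers are assumptions chosen for illustration only.

battery_kwh = 60.0             # assumed usable battery capacity
consumption_wh_per_km = 160.0  # assumed driving consumption
autonomy_load_kw = 2.5         # assumed sensors + compute + cooling load
avg_speed_kmh = 40.0           # assumed average (urban) speed

range_km = battery_kwh * 1000 / consumption_wh_per_km
# Energy the autonomy stack draws per km at the assumed average speed:
autonomy_wh_per_km = autonomy_load_kw * 1000 / avg_speed_kmh
range_with_autonomy_km = battery_kwh * 1000 / (consumption_wh_per_km + autonomy_wh_per_km)

print(f"range without autonomy load: {range_km:.0f} km")
print(f"range with autonomy load:    {range_with_autonomy_km:.0f} km")
```

Under these assumptions the continuous load cuts roughly a quarter off the range, which is why the powertrain choice matters.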

The increased processing demand and higher power throughput heat up the system. To keep electronic components performing properly and reliably, they must be kept within certain temperature ranges, regardless of the vehicle’s external conditions. Cooling systems, especially liquid-based ones, further add to the weight and size of the vehicle.

Extra components, extra wiring, and thermal management systems put pressure on every part of the vehicle to shrink, lose weight, and improve its thermal performance. From large components like LIDARs down to tiny ones, like the semiconductor components that make up the electronic circuitry, there is a huge incentive for automotive suppliers to adapt their products accordingly.

Semiconductor companies are creating components with smaller footprints, improved thermal performance, and lower interference, all while actually increasing reliability. Beyond evolving the various silicon components such as MOSFETs, bipolar transistors, diodes, and integrated circuits, the industry is also looking at novel materials. Components based on Gallium Nitride (GaN) are seen as having a high impact on future electronics. GaN enables smaller devices for a given on-resistance and breakdown voltage compared to silicon because it conducts electrons much more effectively.[102-104]
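
One common way to quantify that advantage is Baliga’s figure of merit, which relates a material’s permittivity, electron mobility, and critical electric field to the achievable on-resistance at a given breakdown voltage. The sketch below compares silicon and GaN using approximate textbook values; the numbers are rough assumptions used only to show the order of magnitude, not data for any specific device.

```python
# Rough comparison of silicon and GaN using Baliga's figure of merit
# (relative permittivity * electron mobility * critical field^3).
# Material values are approximate textbook numbers, for illustration only.

materials = {
    #       rel. permittivity, electron mobility (cm^2/Vs), critical field (MV/cm)
    "Si":  {"eps_r": 11.7, "mu": 1350, "ec": 0.3},
    "GaN": {"eps_r": 9.0,  "mu": 1200, "ec": 3.3},
}

def baliga_fom(m: dict) -> float:
    return m["eps_r"] * m["mu"] * m["ec"] ** 3

ratio = baliga_fom(materials["GaN"]) / baliga_fom(materials["Si"])
print(f"GaN/Si figure-of-merit ratio: ~{ratio:.0f}x")  # on the order of 1000x
```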


A GPU-based hardware platform for autonomous driving. Image: Nvidia

Executing all the algorithms and processes for autonomy requires significant computing power and thus powerful processors. A fully autonomous vehicle will likely contain more lines of code than any software platform or operating system created so far. GPU-accelerated processing is currently the industry standard, with Nvidia being the market leader. However, companies are increasingly pursuing different solutions; much of Nvidia’s competition is focusing their chip designs on Tensor Processing Units (TPUs), which accelerate the tensor operations that are the core workload of deep learning algorithms. GPUs, on the other hand, were developed for graphics processing, which prevents deep learning algorithms from harnessing the full power of the chip.[105]
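
To see why TOPS-scale accelerators are needed, the sketch below counts the multiply-accumulate operations of a single, hypothetical convolutional layer across a multi-camera rig. The layer dimensions, frame rate, and camera count are assumptions for illustration only; a real perception network stacks dozens of such layers.

```python
# Quick arithmetic sketch of the deep-learning workload a perception stack
# implies, for a single hypothetical convolutional layer. All dimensions,
# the frame rate, and the camera count are illustrative assumptions.

h, w = 360, 640        # feature-map height and width
c_in, c_out = 32, 64   # input / output channels
k = 3                  # kernel size
fps = 30
cameras = 8

macs_per_frame = h * w * c_out * k * k * c_in         # multiply-accumulate ops
ops_per_second = macs_per_frame * 2 * fps * cameras   # 1 MAC ~= 2 ops

print(f"{macs_per_frame / 1e9:.1f} GMAC per frame for this single layer")
print(f"{ops_per_second / 1e12:.1f} TOPS across {cameras} cameras at {fps} fps")
```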


As we have seen, both the physical make-up and the software of vehicles will change significantly as their level of automation increases. In addition, greater autonomy will also change how you, as a user, interact with them.

User experience

On a daily basis we interact with vehicles in various roles: as a driver, as a fellow traffic participant (in a car, on a bicycle, etc.), as a pedestrian, or as a passenger. The shift towards autonomy must take into account the full spectrum of these subtle interactions. How do these interactions evolve, and what role do the new sensors and software play? How will they reshape interactions with the vehicle?

To start, we can look at two major players in the field, Waymo and Tesla, who have taken different approaches towards user experience.

Tesla is working on evolving what a car can do as a product. The car, by itself, does not change much. Yet when your car can autonomously drive you anywhere, park itself, and be summoned back, your experience changes dramatically. Suddenly, all non-autonomous cars seem ridiculously outdated. Tesla’s strategy at the moment is to build a feature that will radically differentiate it in the market.

Waymo, on the other hand, is trying to answer a completely different question: Do you really need to own a car? If your city has a pervasive service of autonomous taxis that can drive people around, why even bother owning your own? 

Hence, Waymo is trying to build an infrastructure of cars as a service. The user experience of their autonomous vehicles is completely different: you summon a car like you would summon an Uber. The only difference: no one is at the wheel. The car will safely drop you wherever you need to go, and you do not need to worry about anything after you reach your destination. 

Due to the novelty of the technology, trust-building measures are highly important during the initial years of autonomous driving. People have trouble trusting machines and are quick to lose confidence in them. In a 2016 study, people were found to forgive a human advisor for a single mistake, but to stop trusting a computer advisor after the very same error.[106]

“Trust building is the major problem at the moment”
Zeljko Medenica, Principal Engineer and Human Machine Interface (HMI) Team Lead at the US R&D Center of Changan, January 2020.

Whether the car is the product or part of a service, to make autonomous vehicles work, users must feel good inside and outside of them. Setting the right expectations, building trust with the user, and communicating clearly with them as needed are the cornerstones of the design process.[107] We’ll review what the experience of riding in a self-driving car looks like now, at the beginning of the decade.

Inside the vehicle

While driving a Tesla with Autopilot, an L2 autonomous vehicle, the user must be behind the wheel and needs to be aware of what’s happening. The car’s display shows the vehicles and other objects it sees on the road, allowing the user to assess the car’s ability to perceive its environment correctly. On a highway, the experience is smooth and reportedly nine times safer than a human driver.[108]

Inside a Tesla with its Autopilot feature.

Waymo has achieved Level 4, meaning the vehicle can come to a safe stop without a human driver taking over, although generally a safety driver is still involved.[109] Inside a Waymo, you feel like you are riding in a taxi. The main design problem here, as stated by Ryan Powell, UX Designer at Waymo, is reproducing the vast array of nonverbal communication that happens between the driver and the passenger.[110]

Even from the backseat, watching the behavior of the driver can tell you a lot about what is going to happen. The driver’s gaze shows directly what is drawing their attention, and the passenger can feel safe by seeing that the driver has noticed the woman crossing the road or the oncoming car at the intersection. This sensation is lost without the driver, and the Waymo passenger is left to passively interact with the vehicle through a screen in the backseat.

While the vehicle has a 360-degree, multi-layered view of its surroundings obtained with an array of various sensors, on the screen the user only sees a very minimal depiction of the surrounding cars, buildings, roads, and pedestrians; just enough to understand that the car is acknowledging their presence and planning accordingly.[111]

Inside a Waymo self-driving taxi. Image: Waymo

At an intersection, a driver usually looks to the left and right to see if there are oncoming cars. To show the user that the autonomous driving system takes this into account, the map rotates to the left and right at intersections, showing the user that the vehicle is paying attention to traffic. This is quite an interesting design artifact: the system always takes into account the whole range of data and does not need to “look right and left,” but the map movement emulates the behavior of a human driver, helping the passenger feel safe. The screen is hence both the “face” of the car and a minimal depiction of its environment, used, as described above, to replicate the nonverbal communication that would happen with a human driver.

Interface of the Waymo self-driving taxi at an intersection. Image: Waymo

Until humans are familiar with interacting with autonomous vehicles, the experience of riding one needs to emulate what we are used to: a driver paying attention to the surroundings. Increasing automation impacts both user experience and user acceptance: research indicates that when levels of automation increase beyond level 1 ADAS, users’ perceived control and experience of fun decrease, and users can feel a loss of competence in driving.[112,113]

As trust in the technology increases, the user interface will probably simplify, as individuals will no longer care to know every single step that the vehicle is planning to do. 

Preventing mode confusion is one element that contributes to growing trust in these systems. Mode confusion arises when a driver is unsure about the current state of the vehicle, for example whether autonomous driving is active or not. Addressing this issue becomes more important as the levels of autonomy increase (L2+). The simplest way to do this is to make sure that the user interface for the autonomous mode is significantly different from the one used in manual mode.

During the initial phases of autonomous driving implementation, autonomy will likely be restricted to predefined operational design domains. During a domain change, drivers may need to engage and control the vehicle. This transfer of control is another aspect that the user interface needs to facilitate. Bringing a driver back into the loop can be challenging, especially if the driver was disengaged for a long period of time. The transfer of control can be even more complex if the situation on the road requires the driver to take over immediately (as is the case with SAE L3 autonomous vehicles).

A question that immediately arises is what the vehicle should do if the driver does not take over control when requested. One approach that most automobile manufacturers currently take is gradually stopping the vehicle in its lane. However, in some situations, such as on a busy highway, a bridge, or in a tunnel, this kind of behavior may not be appropriate. A different approach would be to keep some of the automation active in order to keep the driver safe until they take over, or until the vehicle finds a more convenient place to pull over.
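
A minimal sketch of this takeover logic is shown below. The states, timing budget, and fallback choice are assumed for illustration and do not reflect any specific manufacturer’s implementation.

```python
# Minimal sketch of mode / takeover logic: the vehicle requests a takeover,
# waits for the driver, and otherwise falls back to a minimal-risk manoeuvre.
# States, timings, and the fallback choice are illustrative assumptions.

from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    TAKEOVER_REQUESTED = auto()
    MANUAL = auto()
    MINIMAL_RISK_STOP = auto()   # e.g. pull over, or stop in lane

TAKEOVER_TIMEOUT_S = 10.0        # assumed time budget for the driver

def step(mode: Mode, leaving_odd: bool, driver_has_hands_on: bool,
         seconds_since_request: float) -> Mode:
    if mode is Mode.AUTONOMOUS and leaving_odd:
        return Mode.TAKEOVER_REQUESTED          # alert driver via a distinct UI
    if mode is Mode.TAKEOVER_REQUESTED:
        if driver_has_hands_on:
            return Mode.MANUAL                  # control handed back
        if seconds_since_request > TAKEOVER_TIMEOUT_S:
            return Mode.MINIMAL_RISK_STOP       # driver unresponsive
    return mode

print(step(Mode.AUTONOMOUS, leaving_odd=True, driver_has_hands_on=False,
           seconds_since_request=0.0))
```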

In the automated modes of driving, it is important that the logic of the system matches the way the user interprets how the system works, or in the case where it doesn’t match expectations, that the logic is communicated to the user.

When technology evolves to the levels of autonomy that do not require any human driving capabilities, the user experience will undergo the most dramatic change.

Naturally, at levels 4 and 5 of automation, steering wheels, pedals, and gear controls can be removed, shifting towards a system where the vehicle is controlled with a map interface on a screen, as is done in other robotic systems. Furthermore, the consoles designed for current cars aim to reduce distracted driving, a requirement that no longer holds at high and full autonomy.[114]

Removing the steering wheel and pedals and changing the role of the console leaves two functions for the interface of a fully autonomous vehicle[115]:

  • Clear and adequate communication with the passengers
  • Providing some form of manual control

On top of that, there are four challenges to address when designing interfaces for autonomous vehicles[116]:

  1. Assuring safety
  2. Transforming vehicles into places for productivity and play
  3. Taking advantage of new mobility options (with autonomous cars moving from something we own to a service)
  4. Preserving user privacy and data security

Finally, highly and fully automated vehicles could provide mobility to the elderly and to people with disabilities. The opportunity for these previously excluded users can only be seized when user experience design takes their roles and abilities into account.

The external experience

While customers want a great experience while riding an AV, we must not forget about all the other drivers, pedestrians and infrastructure that the vehicle interacts with. Driving is a collective dance, defined by rules but also shaped by nonverbal communication and intuition.

The way a vehicle moves suggests what it is about to do, and human drivers expect a car to behave based on their experiences with other drivers on the road. Hence, from the perspective of human-machine interaction, it is fundamental to shape the behavior of a self-driving vehicle such that its intentions are clear. The need for this is made poignant by the numerous cases of human drivers rear-ending autonomous vehicles because they behaved unexpectedly.[117]

In general, we have previously relied on simple signals (like turn indicators) and human-to-human interaction. Some of these ‘human’ habits, such as signaling, carry over to autonomous cars, but others, such as human-to-human interaction, need to be emulated by other means. In general, it is easier for robots to interact with other robots than with humans, and the reverse holds as well.

Drive.ai’s vehicles featured displays to communicate with other road participants. The orange and blue color scheme of the vehicle was designed to draw attention. Image: Drive.ai

For example, when pedestrians cross the road and see a car approaching from a distance, they will safely assume that it will brake, especially if they can see the driver looking directly at them. That eye contact tells them they have the driver’s attention. How can we emulate this in driverless cars?

Melissa Cefkin, a human-machine interaction researcher at Nissan, recently described how they are developing intent indicators on the outside of the car, like screens able to display words or symbols. These allow the vehicle to clearly and simply indicate what it is about to do: for example, it can communicate to a pedestrian that it has seen them and that they can cross the road safely.[110]

Ford, together with the Virginia Tech Transportation Institute, has experimented with external indicator lights to standardize signaling to other road participants. Ford placed an LED light bar on top of the windshield (where a pedestrian or bicyclist would look to make eye contact with a driver). If the vehicle was yielding, the lights slowly moved side to side; acceleration was communicated with rapid blinking; and when driving steadily, the light shone completely solid.[118]
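
A sketch of such an intent-to-light mapping is shown below. The pattern names, descriptions, and fallback rule are assumptions for illustration; Ford’s actual signaling specification is not public in this level of detail.

```python
# Illustrative mapping from vehicle intent to an external light pattern,
# in the spirit of the Ford / Virginia Tech light-bar experiment.
# Names and parameters are assumptions, not Ford's specification.

LIGHT_PATTERNS = {
    "yielding":     {"pattern": "sweep",       "description": "light moves slowly side to side"},
    "accelerating": {"pattern": "rapid_blink", "description": "fast blinking"},
    "cruising":     {"pattern": "solid",       "description": "steady solid light"},
}

def external_signal(intent: str) -> dict:
    # Fall back to the solid pattern if the intent has no dedicated signal.
    return LIGHT_PATTERNS.get(intent, LIGHT_PATTERNS["cruising"])

print(external_signal("yielding"))
```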

Drive.ai was another company that paid attention to teaching autonomous vehicles to communicate. Founded in 2015 by master’s and PhD students at the Stanford University Artificial Intelligence Lab, Drive.ai was acquired by Apple in the summer of 2019. Their vehicles featured LED displays around the vehicle that communicated its state and intentions with messages, icons, and animations. Initially their vehicles contained one large display on the roof, but the company learned that’s not where people look for cues. Other lessons learned from their work with user focus groups include the importance of phrasing a message correctly. For example, just “Waiting” didn’t communicate the vehicle’s state clearly enough and needed to be replaced with “Waiting for You.”[29,119,120]

“We want to be cognizant of the context in which you see the car, and be responsive to it,”
Bijit Halder, Product and Design lead, Drive.ai, 2018 [119]

When studying fleets of self-driving cars moving in the city, there is another area that must be analyzed: machine-machine interaction. It is crucial to understand if an AI trained to predict human behavior can also safely predict the intent of another AI. Enabling vehicles to connect and communicate can have a significant impact on their autonomous capabilities.

Communication & Connectivity

Enabling vehicles to share information with other road participants as well as with traffic infrastructure increases the amount and types of information available for autonomous vehicles to act upon. Vice versa, it can provide data for better traffic management. Connectivity also enables autonomous vehicles to interact with non-autonomous traffic and pedestrians to increase safety.[12,121-123]

Furthermore, AVs will need to connect to the cloud to update their software and maps, and to share information back to improve their manufacturer’s collectively used maps and software.

The digitalization of transport is expected to impact individual vehicles, public transport, traffic management, and emergency services alike. The communication needed can be grouped under the umbrella term Vehicle-to-Everything (V2X) communication.[124] This term encompasses a set of more specific communication structures, such as Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), Vehicle-to-Network (V2N), and Vehicle-to-Person (V2P).

The concept of Vehicle-to-Everything (V2X) communication covers the various types of entities that a connected vehicle communicates with. Image: Wevolver

 

One way for inter-vehicle coordination to impact the driving environment is through cooperative maneuvering. An application getting much attention is ‘platooning.’ When autonomous or semi-autonomous vehicles platoon, they move in a train-like manner, keeping only small gaps between vehicles, to reduce fuel consumption and achieve efficient transport. This is an especially heavily investigated area for freight trucks, where platooning could save up to 16% of fuel.[125]
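
Keeping those small gaps safely comes down to longitudinal control. The sketch below implements a simple constant time-gap spacing policy; the gains, time gap, and standstill distance are assumptions for illustration, not a production cooperative adaptive cruise control design.

```python
# Minimal sketch of a constant time-gap spacing policy, the kind of
# longitudinal control that keeps platooning vehicles close together.
# All gains and gaps are illustrative assumptions.

TIME_GAP_S = 0.6        # assumed target time gap between trucks
STANDSTILL_GAP_M = 5.0  # assumed minimum gap at zero speed
KP_GAP, KP_SPEED = 0.4, 0.8

def platoon_acceleration(gap_m: float, own_speed: float, lead_speed: float) -> float:
    """Return an acceleration command (m/s^2) for a following vehicle."""
    desired_gap = STANDSTILL_GAP_M + TIME_GAP_S * own_speed
    gap_error = gap_m - desired_gap          # positive: too far, close in
    speed_error = lead_speed - own_speed     # positive: leader pulling away
    return KP_GAP * gap_error + KP_SPEED * speed_error

# Following truck is 12 m behind a leader, both near 25 m/s (90 km/h):
print(platoon_acceleration(gap_m=12.0, own_speed=25.0, lead_speed=24.5))
```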

Another example application of V2X was recently demonstrated by Fiat Chrysler Automobiles, Continental, and Qualcomm: in the case of sudden braking, V2V-equipped cars broadcast a message to following vehicles to notify them promptly of the potentially dangerous situation.[126]

The network enabling these features must be highly reliable, efficient, and capable of sustaining the data traffic load. V2X communication is predominantly supported by two networking standards, each with significantly different design principles:[124,127]

  1. Dedicated short-range communication (DSRC), based on the IEEE 802.11p automobile specific WiFi standard. DSRC uses channels of 10 MHz bandwidth in the 5.9 GHz band (5.850–5.925 GHz),[128]
  2. Cellular V2X (C-V2X), standardized through 3GPP Release 15 (3GPP is a global cooperation of standards organizations that defines specifications for cellular technologies). The Cellular-V2X radio access technology can be split into the older LTE-based C-V2X and the newer 5G New Radio (5G-NR) based C-V2X, which is being standardized at the moment.[129]

DSRC and C-V2X both allow vehicles to communicate directly with other vehicles or devices, without network access, through an interface called PC5.[130] This interface is useful for basic safety services such as sudden-braking warnings, or for traffic data collection.[131] C-V2X also provides another communication interface, called Uu, which allows the vehicle to communicate directly with the cellular network, a feature that DSRC does not offer.
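
As a rough sketch of how an on-board stack might use the two interfaces, the snippet below routes message types either over PC5 (direct, low-latency) or over Uu (via the network). The message names and the routing rule are illustrative assumptions, not part of either standard.

```python
# Sketch of routing V2X messages over the direct PC5 interface versus the
# network-facing Uu interface. Message types and the rule are illustrative.

LOW_LATENCY_SAFETY = {"emergency_brake_warning", "intersection_collision_warning"}
NETWORK_SERVICES = {"hd_map_update", "traffic_information", "software_update"}

def choose_interface(message_type: str) -> str:
    if message_type in LOW_LATENCY_SAFETY:
        return "PC5"   # direct device-to-device, no network round trip
    if message_type in NETWORK_SERVICES:
        return "Uu"    # via the cellular network (C-V2X only)
    return "PC5"       # default to direct broadcast for locally relevant data

for msg in ("emergency_brake_warning", "hd_map_update"):
    print(msg, "->", choose_interface(msg))
```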

Both technologies are going through enhancements (802.11bd and 5G-NR V2X) to support the more advanced applications that require reliability, low latency, and high data throughput.[132]

Current fourth-generation (LTE/4G) mobile networks are fast enough for gaming or streaming HD content, but lack the speed and resilience required to sustain autonomous vehicle network operations.[133] 5G brings three main capabilities to the table: greater data rates (25-50% faster than 4G LTE), lower latency (25-40% lower than 4G LTE), and the ability to serve more devices.[134]

In the case of V2N over a cellular connection, using the Uu interface, the requirements of a 5G network are[135]:

• Real data rates of 1 to 10 Gbit/s.
• 1 ms end-to-end latency.
• The ability to support 1,000 times the bandwidth of today’s cell phones.
• The ability to support 10 to 100 times the number of devices.
• 99.999% perceived availability and 100% perceived coverage.
• Lower power consumption.

5G does not necessarily bring all of these at the same time, but it gives developers the ability to choose the performance needed for specific services. In addition, 5G could offer network slicing (creating multiple logical networks, each dedicated to a particular application, within the same hardware infrastructure) and cloud management techniques such as edge computing to manage data traffic and capacity on demand.[136]
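
To make the latency requirement tangible, the short calculation below shows how far a vehicle travels while a message is in flight at different end-to-end latencies. The speed and latency values are illustrative.

```python
# Worked example: distance a vehicle covers while a warning message is in
# flight, for a few illustrative end-to-end latencies.

speed_kmh = 120.0
speed_ms = speed_kmh / 3.6   # metres per second

for latency_ms in (100.0, 20.0, 1.0):   # rough typical 4G, good LTE, 5G target
    distance = speed_ms * latency_ms / 1000.0
    print(f"{latency_ms:>5.0f} ms latency -> {distance:.2f} m travelled at {speed_kmh:.0f} km/h")
```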

Applications supporting fully autonomous vehicles could generate huge amounts of data every second. This has led semiconductor manufacturers such as Qualcomm and Intel to develop new application-specific integrated circuits, which combine large 5G bandwidth with innovative digital radio and antenna architectures to turn the autonomous vehicle into a mobile data center.[137,138]

“In an autonomous car, we have to factor in cameras, radar, sonar, GPS and LIDAR – components as essential to this new way of driving as pistons, rings and engine blocks. Cameras will generate 20–60 MB/s, radar upwards of 10 kB/s, sonar 10–100 kB/s, GPS will run at 50 kB/s, and LIDAR will range between 10–70 MB/s. Run those numbers, and each autonomous vehicle will be generating approximately 4,000 GB – or 4 terabytes – of data a day.”

Brian Krzanich, CEO of Intel, 2016[139]
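
Running those numbers for an assumed sensor suite gives a feel for the figure. The per-sensor rates below come from the quote; the sensor counts and the choice of values within each range are assumptions.

```python
# "Run those numbers": a rough check of the per-day data volume quoted above.
# Per-sensor rates come from the quote; sensor counts are assumptions.

MB = 1e6  # bytes

sensor_rates = {           # (count, bytes per second per sensor)
    "camera": (8, 40 * MB),   # 20-60 MB/s each
    "radar":  (6, 10e3),      # ~10 kB/s each
    "sonar":  (12, 50e3),     # 10-100 kB/s each
    "gps":    (1, 50e3),
    "lidar":  (2, 40 * MB),   # 10-70 MB/s each
}

total_bytes_per_s = sum(count * rate for count, rate in sensor_rates.values())
hours_to_4tb = 4e12 / total_bytes_per_s / 3600

print(f"aggregate sensor stream: {total_bytes_per_s / MB:.0f} MB/s")
print(f"hours of driving to reach 4 TB: {hours_to_4tb:.1f} h")
```

Under these assumptions the fleet-quoted 4 TB corresponds to roughly a few hours of driving per day.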

At the same time, high data loads are not always needed. Choosing what the relevant, minimally required data is, and transferring it at the right time to the right receiver, enables many use cases to work with far less data.

A demo of a V2X system warning for vulnerable road users. Image: Audi

DSRC or C-V2X

The question of whether DSRC or C-V2X is the better choice, and which will prevail, is the subject of strong debate. Performance and capabilities, deployment costs, and technology readiness level are among the considerations in this discussion. Making the two technologies coexist in one geographic region would require overcoming spectrum management challenges and operational difficulties.[140]

DSRC is the older of the two technologies, and its current standard, 802.11p, was approved in 2009. In 1999, the U.S. government allocated a section of the 5.9 GHz band for automotive DSRC. During the Obama administration a rulemaking process was initiated to require DSRC in all cars sold from 2023 onwards, though this process stalled. In December 2019 the US Federal Communications Commission proposed splitting up the 5.9 GHz band that had been allocated to DSRC, and instead reserving large parts of it for commercial WiFi and C-V2X. According to the FCC, slow traction on DSRC prompted the changes.

The European Union had also been working towards enforcing DSRC as a standard, but recently most of its member states voted against DSRC and in favor of C-V2X.[141]

China is moving decisively in the direction of 5G, cellular-based V2X. The country plans to require C-V2X equipment in newly built cars from 2025 onwards. This stems from China’s existing, ambitious investment in 5G connectivity, which makes C-V2X a choice that fits well with existing infrastructure. In 2019, about 130,000 5G base stations were expected to become operational in China, with a projected 460 million 5G users by the end of 2025.[126]

Different automotive manufacturers are prioritizing different approaches to V2X. In 2017 Cadillac was one of the first companies to launch a production vehicle with V2X capabilities, and chose to incorporate DSRC. The new Golf model from Volkswagen will also be equipped with the WiFi-based technology. In contrast, BMW, Audi, PSA, and Ford are currently working on Cellular-V2X compatible equipment. In mid-2019 Toyota halted its earlier plans to install DSRC on U.S. vehicles by 2021, citing “a range of factors, including the need for greater automotive industry commitment as well as federal government support to preserve the 5.9 GHz spectrum band.”[141-143]

“We’ve been looking at DSRC for a number of years along with Toyota, GM and Honda, so this is not a step that we take lightly in the sense of dismissing DSRC. But we think this is the right step to make given where we see the technology headed.”
 Don Butler, Executive Director, Connected Vehicle Platform And Products, Ford, January 2019[144]

“Wi-Fi is the only safe and secure V2X technology that has been tested for more than 10 years and is ready for immediate volume rollout.”
Lars Reger, Chief Technology Officer at NXP Semiconductors, October 2019[145]

In terms of technical performance requirements for higher levels of autonomy, many experts voice that 5G-NR V2X is the technology of choice, and that neither DSRC nor LTE-V2X over PC5 will sufficiently support some key AV features.

The semiconductor manufacturer Qualcomm together with Ford compared the performance of C-V2X and DSRC in lab and field tests in Ann Arbor, Michigan and in San Diego, California. In a presentation to the 5G Automotive Association they concluded that C-V2X has a more extensive range and outperforms DSRC technology in robustness against interference, and in a number of scenarios, such as when a stationary vehicle obstructs V2V messages between two passing vehicles.[146]

Beyond the communication standard, the cloud network architecture is also a key component for autonomous vehicles. On that front, the infrastructure developed by companies such as Amazon AWS, Google Cloud, and Microsoft Azure for other applications is already mature enough to handle autonomous vehicle workloads.[147-149]

Use Case: Autonomous Racing

The Robocar. Image: Benedict Redgrove

 

A company called Roborace has been pushing the limits of the technology with its autonomous racing vehicles. Roborace was announced at the end of 2015, and Robocar, its autonomous race car, was launched in February 2016. The Robocar currently holds the Guinness World Record for the fastest autonomous vehicle, at a speed of 282.42 km/h (175.49 mph). Next to the Robocar, Roborace developed a second vehicle platform, the DevBot 2.0, which, unlike the former, also has space and controls for a human driver.

| | Robocar | DevBot 2.0 |
| --- | --- | --- |
| Perception sensors | LIDAR, ultrasonic sensors, front radar, cameras (5x), military-spec GPS (with antennas at both ends of the car for heading) | Same sensor suite |
| Battery type | Custom design, built by Rimac | Custom design, built by Rimac |
| Battery capacity | 52 kWh | 36 kWh |
| Peak voltage | 729 V | 725 V |
| Motor | 4x Integral Powertrain CRB, 135 kW each (one per wheel) | 2x Integral Powertrain CRB, 135 kW each |
| Total power | 540 kW | 270 kW |
| Top speed (achieved) | 300 km/h | 217 km/h* |
| Range | 15-20 mins** | 15 mins** |

*On track; note that no specific top-speed runs have been attempted.
**At full racing performance, similar to a 1st generation Formula E car.

The DevBot is used in Season Alpha, Roborace’s debut competition, in which multiple teams are pitted against one another in head-to-head races. The hardware of the vehicles is managed centrally and is the same for each team, meaning that the only differentiator is the AI driver software each team develops for the competition, for example improved live path planning or modified LIDAR algorithms.

Roborace provides the teams with a base software layer. This is an entirely internally developed Automated Driving System (ADS), designed to be a starting point for the various teams and projects to build on. The code is open source and available to all teams, and in addition Roborace provides an API to simplify software development.
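
To illustrate the division of labour between the base ADS and a team’s own racing logic, here is a purely hypothetical sketch: the function name, input fields, and thresholds are invented for illustration and are not Roborace’s actual API.

```python
# Hypothetical sketch of team-developed racing logic layered on top of a base
# Automated Driving System. Names, fields, and thresholds are invented.

def team_racing_policy(track_state: dict) -> dict:
    """Return high-level targets that the base driving stack would execute."""
    curvature = track_state["upcoming_curvature"]   # assumed field
    grip = track_state["estimated_grip"]            # assumed field
    # Example differentiator: push harder on straights, slow for corners.
    target_speed = 95.0 if curvature < 0.01 else 60.0 * grip
    return {"target_speed_mps": target_speed, "racing_line_offset_m": 0.0}

print(team_racing_policy({"upcoming_curvature": 0.005, "estimated_grip": 0.9}))
```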

Driving in the controlled environment of a racing circuit removes much of the unpredictability and variability that cars encounter in the real world. Therefore, Roborace is looking to augment the tracks with obstacles (both real and virtual) to simulate real-world environments. Furthermore, not needing to take care of passenger safety and user experience removes many other constraints: Roborace can focus on seeking the performance limits of its vehicles. That means their software is constantly probing the maximum settings the vehicles can handle, learning the edge of what is possible live on the track, in order to advance autonomous software at a faster rate.

“In an autonomous environment we don’t have to educate the driver. Instead we directly input those engineering results into our software.”
Alan Cocks, Chief Engineer, Roborace, November 2019 [150]

DevBot and Robocar host an Nvidia Drive PX 2 computer, which is fairly common for autonomous vehicles. It’s a liquid-cooled machine that sports 12 CPU cores and 8 teraflops of processing power, enabling it to achieve 24 trillion operations a second. On top of that, to adjust to racing conditions, Roborace has added a Speedgoat computer, common in motorsport, to allow real-time processing aimed at increasing performance.

Furthermore, Roborace cars differ from normal autonomous vehicles in the abundance of sensors that have been included in order to provide a base system for multiple development teams to work on. A single setup would not need that many cameras, LIDARs, and GPS units, but their availability allows the teams to choose which systems and configuration they want to use for the race.

Looking forward, the big thing that will impact Roborace is 5G. Roborace insists on having full, live data telemetry at all times, so they know exactly what the car is doing, and on top of that they run a constant video stream. This means they currently have to build a dedicated network around the entire racetrack: for each new race this requires several kilometers of fibre, numerous roadside units, and a lot of batteries.

Moving to 5G would allow the Roborace cars to run basically anywhere, assuming a network is available. Hugely reducing the time and work it takes to deploy the vehicles would let the team focus development on the cars’ software performance and on acquiring data. And that, according to Roborace, is exactly the area in which autonomous vehicles need the most development: their software, and testing it against various cases and situations.

Roborace is not only pioneering on a technical level. The company is also experimenting with future racing formats that combine the real and the virtual, and it is exploring how to bring this entertainment to a global fanbase. Their second season, Season Beta, will begin in 2020 with five competing teams.[150]

Summary

At the start of the 2020s, the state of autonomous vehicles is such that they have achieved the ability to drive without human supervision and interference, albeit under strictly defined conditions. This so-called level 4, or high automation, has been reached amid many unforeseen challenges for technology developers and scaled-back projections.

No technology is yet capable of Level 5, full automation, and some experts claim this level will never be achieved. The most automated personal vehicles on the market perform at level 2, where a human driver still needs to monitor and judge when to take over control, for example with Tesla’s Autopilot.  One major challenge towards full autonomy is that the environment (including rules, culture, weather, etc.) greatly influences the level of autonomy that vehicles can safely achieve, and performance in e.g. sunny California, USA, cannot easily be extrapolated to different parts of the world.

Beyond individual personal transportation, other areas in which autonomous vehicles will be deployed include public transportation, delivery & cargo, and specialty vehicles for farming and mining.  And while all applications come with their own specific requirements, the vehicles all need to sense their environment, process input and make decisions, and subsequently take action. 

Generally, a mixture of passive (cameras) and active (e.g. RADAR) sensors is used to sense the environment. Of all perception sensors, LIDAR is seen by most in the industry as a necessary element. Some are going against this conventional wisdom, including Tesla (relying on cameras, RADAR, and ultrasound), Nissan, and Wayve (relying on cameras only).

These sensors are all undergoing technological development to improve their performance and increase efficiency. LIDAR sees the most innovation, as it is moving away from the traditional, relatively bulky and costly mechanical scanning systems. Newer solutions include microelectromechanical mirror (MEMS) systems, and systems without any mechanical parts: solid-state LIDAR, sometimes dubbed ‘LIDAR-on-a-chip.’

For higher-level path planning (determining a route to reach a destination), different Global Navigation Satellite Systems beyond the American GPS have become available. By leveraging multiple satellite systems, augmentation techniques, and additional sensors that aid positioning, sub-centimeter positioning accuracy can be achieved.

Another essential source of information for many current autonomous vehicles is high-definition maps, which represent the world’s detailed features with an accuracy of a decimeter or less. In contrast, some companies, including Tesla and Apple, envision a map-less approach.

For the whole process of simultaneously mapping the environment while keeping track of location (SLAM), combining data from multiple sources (sensor fusion), path planning, and motion control, two different AI approaches are generally used:

  1. Sequential, where the problem is decomposed into a pipeline with specific software for each step. This is the traditional and most common approach.
  2. An End-to-End (e2e) solution based on deep learning. End-to-End learning is attracting increasing interest as a potential solution because of recent breakthroughs in the field of deep learning.

For either architectural approach, various types of machine learning algorithms are currently being used: Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Deep Reinforcement Learning (DRL) are the most common. These methods don’t necessarily exist in isolation, and some companies rely on hybrid forms to increase accuracy and reduce computational demands.

In terms of processors, most AV companies rely on GPU-accelerated processing. However, increasingly different solutions are becoming available, such as Tensor Processing Units (TPU) that are developed around the core workload of deep learning algorithms. More electronics, greater complexity, and increasing performance demands are met by semiconductor innovations that include smaller components and the use of novel materials like Gallium Nitride instead of silicon. Engineers also face questions about how much to distribute or centralize vehicles’ electrical architecture.  

To increase the available data for autonomous driving systems to act upon and increase safety, vehicles need to share information with other road participants, traffic infrastructure, and the cloud.

For this ‘Vehicle-to-Everything’ (V2X) communication, two major networking technologies can be chosen:

  1. Dedicated short-range communication (DSRC), based on a WiFi standard,
  2. Cellular V2X (C-V2X), which for AV applications needs to be based on 5G.

At the moment both DSRC and C-V2X are going through enhancements. The question whether DSRC or C-V2X is the best choice is a subject of debate. Due to its rapid progress and performance, the latter is increasingly preferred, and experts express that DSRC won’t sufficiently support some key AV features.

In parallel with technological development, user experience design is an important factor for autonomous vehicles. For lower level automated vehicles, where humans at times have to take control and drive, mode confusion can arise when the state of the vehicle is unclear, e.g. whether autonomous driving is active or not.

Other key challenges for user experience design are trust-building and communicating the intentions of self-driving vehicles. Internally, for the passengers, human driver behavior is often emulated on displays. For external communication, companies are researching displays with words or symbols to substitute for the human interaction that people heavily rely on when participating in traffic.

Wevolver’s community of engineers has expressed a growing interest in autonomous vehicle technology, and hundreds of companies, from startups to established industry leaders, are investing heavily in the required improvements. Despite a reckoning with overly optimistic expectations, continuous innovation is expected, and autonomous vehicles will remain an exciting field to follow and be involved in.

“The corner cases involving bad weather, poor infrastructure, and chaotic road conditions are proving to be tremendously challenging.  Significant improvements are still required in the efficacy and cost efficiency of the existing sensors. New sensors, like thermal, will be needed which have the ability to see at night and in inclement weather.  Similarly, AI computing must become more efficient as measured by meaningful operations (e.g., frames or inferences) per watt or per dollar.”

Drue Freeman, CEO of the Association for Corporate Growth, Silicon Valley, and former Sr. Vice President of Global Automotive Sales & Marketing for NXP Semiconductors, December 2019[151]  

About Nexperia

Nexperia is a global semiconductor manufacturer with over 11,000 employees, headquartered in Nijmegen, the Netherlands. Nexperia owns five factories: two wafer fabs, in Hamburg (Germany) and Manchester (UK), as well as assembly centers in China, Malaysia, and the Philippines. They produce over 90 billion units per year. According to Nexperia, virtually every electronic design in the world uses Nexperia parts. Their product range includes discretes, MOSFETs, and analog & logic ICs.


In 2017, Nexperia, formerly NXP’s Standard Products business unit, spun out of NXP to become an independent company. NXP itself was formerly Philips Semiconductors, effectively giving Nexperia over 60 years of experience.

According to the company, miniaturization, power efficiency, and protection & filtering are the three major engineering challenges Nexperia aims to support with its products. Its portfolio consists of over 15,000 different products, and more than 800 new ones are added each year. Recently Nexperia launched Gallium Nitride (GaN) based high-voltage power FETs as an alternative to traditional silicon-based high-voltage MOSFETs.

Two automotive semiconductor components produced by Nexperia.

The automotive sector is Nexperia's most important market, and the company supplies to many key players in the field of autonomous vehicles. Those include OEMs like Hyundai, pioneering AV technology developers like Aptiv, and tier 1 suppliers like Bosch, Continental, Denso and Valeo.

Nexperia products show up in many areas of contemporary vehicles: In the powertrain they are part of components like converters, inverters, engine control units, transmission, and batteries. In the interior they enable infotainment and comfort & control applications such as HVAC (heating ventilation and air conditioning) and power windows. Furthermore, Nexperia powers ADAS systems such as adaptive cruise control, and is expected to be a major supplier for the autonomous vehicle industry.

About Wevolver

Wevolver is a digital media platform & community dedicated to helping people develop better technology. At Wevolver we aim to empower people to create and innovate by providing access to engineering knowledge.

Therefore, we bring a global audience of engineers informative and inspiring content, such as articles, videos, podcasts, and reports, about state of the art technologies.

We believe that humans need innovation to survive and thrive. Developing relevant technologies and creating the best possible solutions require an understanding of the current cutting edge. There is no need to reinvent the wheel.

We aim to provide access to all knowledge about technologies that can help individuals and teams develop meaningful products. This information can come from many places and different kinds of organizations: We publish content from our own editorial staff, our partners like MIT, or contributors from our engineering community. Companies can sponsor content on the platform.

Our content reaches millions of engineers every month. For this work Wevolver has won the SXSW Innovation Award and the Accenture Innovation Award, and has been named one of the Most Innovative Web Platforms by Fast Company.

Wevolver is how today's engineers stay cutting edge.

Many thanks to: 

The people at Roborace, specifically Victoria Tomlinson and Alan Cocks.

Edwin van de Merbel, Dirk Wittdorf, Petra Beekmans - van Zijll and all the other people at Nexperia for their support.

Our team at Wevolver; including Sander Arts, Benjamin Carothers, Seth Nuzum, Isidro Garcia, Jay Mapalad, and Richard Hulskes. Many thanks for the proofreads and feedback.
The Wevolver community for their support, knowledge sharing, and for making us create this report.

Many others that can’t all be listed here have helped us in big or small ways. Thank you all.
Beyond the people mentioned here we owe greatly to the researchers, engineers, writers, and many others who share their knowledge online. Find their input in the references listed below.

Media Partners

SupplyFrame

Supplyframe is a network for electronics design and manufacturing. The company provides open access to the world’s largest collection of vertical search engines, supply chain tools, and online communities for engineering.
Their mission is to organize the world of engineering knowledge to help people build better hardware products, and at Wevolver we support that aspiration and greatly appreciate that Supplyframe contributes to the distribution of this report among their network.

EngineeringClicks

EngineeringClicks.com is the No. 1 mechanical design engineering portal with a strong social media following. Join them to be at the forefront of mechanical design engineering action. Connect with like-minded professionals, build your network, learn something new today! Visit their website and join one of the LinkedIn groups.

References

  1. Hawkins AJ.
    Waymo’s driverless car: ghost-riding in the backseat of a robot taxi.
    In: The Verge [Internet]. The Verge; 9 Dec 2019 [cited 27 Dec 2019].
    https://www.theverge.com/2019/12/9/21000085/waymo-fully-driverless-car-self-driving-ride-hail-service-phoenix-arizona
  2. Romm J.
    Top Toyota expert throws cold water on the driverless car hype.
    In: ThinkProgress [Internet]. 20 Sep 2018 [cited 8 Jan 2020].
    https://thinkprogress.org/top-toyota-expert-truly-driverless-cars-might-not-be-in-my-lifetime-0cca05ab19ff/
  3. Ramsey M.
    The 2019 Connected Vehicle and Smart Mobility HC. 
    In: Twitter [Internet]. 31 Jul 2019 [cited 9 Jan 2020].
    https://twitter.com/MRamsey92/status/1156626888368054273 
  4. Ramsey M.
    Hype Cycle for Connected Vehicles and Smart Mobility, 2019. Gartner; 2019 Jul. Report No.: G00369518. https://www.gartner.com/en/documents/3955767/hype-cycle-for-connected-vehicles-and-smart-mobility-2014
  5. Wevolver
    2019 Engineering State of Mind Report. 
    In: Wevolver [Internet]. 22 Dec 2019 [cited 8 Jan 2020].
    https://www.wevolver.com/article/2019.engineering.state.of.mind.report/ 
  6. Shuttleworth J.
    SAE Standards News: J3016 automated-driving graphic update. 
    In: SAE International [Internet]. 7 Jan 2019 [cited 26 Nov 2019].
    https://www.sae.org/news/2019/01/sae-updates-j3016-automated-driving-graphic 
  7. On-Road Automated Driving (ORAD) committee.
    Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE International; 2018 Jun. Report No.: J3016_201806.
    https://saemobilus.sae.org/viewhtml/J3016_201806/  
  8. [No title]. [cited 29 Jan 2020].
    https://users.ece.cmu.edu/~koopman/pubs/Koopman19_SAFE_AI_ODD_OEDR.pdf 
  9. Czarnecki K.
    Operational Design Domain for Automated Driving Systems - Taxonomy of Basic Terms. 2018 [cited 4 Feb 2020]. doi:10.13140/RG.2.2.18037.88803
  10. Sotudeh J.
    A Review Of Autonomous Vehicle Safety And Regulations. 
    In: Wevolver [Internet]. 31 Jan 2020 [cited 31 Jan 2020].
    https://www.wevolver.com/article/a.review.of.autonomous.vehicle.safety.and.regulations 
  11. Self-Driving Cars Explained. 
    In: Union of Concerned Scientists [Internet]. 26 Jan 2017 [cited 11 Dec 2019].
    https://www.ucsusa.org/resources/self-driving-cars-101 
  12. Beevor M.
    Driving autonomous vehicles forward with intelligent infrastructure. 
    In: Smart Cities World [Internet]. 11 Apr 2019 [cited 19 Dec 2019].
    https://www.smartcitiesworld.net/opinions/opinions/driving-autonomous-vehicles-forward-with-intelligent-infrastructure
  13. What is the difference between CCD and CMOS image sensors in a digital camera? 
    In: HowStuffWorks [Internet]. HowStuffWorks; 1 Apr 2000 [cited 8 Dec 2019].
    https://electronics.howstuffworks.com/cameras-photography/digital/question362.htm 
  14. Nijland W.
    Basics of Infrared Photography. In: Infrared Photography [Internet]. [cited 27 Dec 2019].
    https://www.ir-photo.net/ir_imaging.html 
  15. Rudolph G, Voelzke U.
    Three Sensor Types Drive Autonomous Vehicles. 
    In: FierceElectronics [Internet]. 10 Nov 2017 [cited 16 Dec 2019].
    https://www.fierceelectronics.com/components/three-sensor-types-drive-autonomous-vehicles 
  16. Rosique F, Navarro PJ, Fernández C, Padilla A.
    A Systematic Review of Perception System and Simulators for Autonomous Vehicles Research. 
    Sensors. 2019;19: 648.
    doi:10.3390/s19030648 
  17. Marshall B.
    Lidar, Radar & Digital Cameras: the Eyes of Autonomous Vehicles. 
    In: Design Spark [Internet]. 21 Feb 2018 [cited 19 Dec 2019].
    https://www.rs-online.com/designspark/lidar-radar-digital-cameras-the-eyes-of-autonomous-vehicles 
  18. Dmitriev S.
    Autonomous cars will generate more than 300 TB of data per year. 
    In: Tuxera [Internet]. 28 Nov 2017 [cited 12 Dec 2019].
    https://www.tuxera.com/blog/autonomous-cars-300-tb-of-data-per-year/ 
  19. Greene B.
    What will the future look like? Elon Musk speaks at TED2017. 
    In: TED Blog [Internet]. 28 Apr 2017 [cited 27 Dec 2019]. https://blog.ted.com/what-will-the-future-look-like-elon-musk-speaks-at-ted2017/
  20. Tajitsu N.
    On the radar: Nissan stays cool on lidar tech, siding with Tesla. 
    In: Reuters [Internet]. Reuters; 16 May 2019 [cited 29 Jan 2020].
    https://www.reuters.com/article/us-nissan-lidar-autonomous-idUSKCN1SM0W2
  21. Coldewey D.
    Startups at the speed of light: Lidar CEOs put their industry in perspective. 
    In: TechCrunch [Internet]. TechCrunch; 29 Jun 2019 [cited 29 Jan 2020].
    http://social.techcrunch.com/2019/06/29/lidar-startup-ceos/
  22. Ohta J.
    Smart CMOS Image Sensors and Applications.
    2017.
    doi:10.1201/9781420019155 
  23. Royo S, Ballesta-Garcia M.
    An Overview of Lidar Imaging Systems for Autonomous Vehicles.
    Applied Sciences. 2019. p. 4093.
    doi:10.3390/app9194093 
  24. Thompson J.
    Ultrasonic Sensors: More Than Parking | Level Five Supplies. 
    In: Level Five Supplies [Internet]. [cited 31 Jan 2020].
    https://levelfivesupplies.com/ultrasonic-sensors-more-than-just-parking/ 
  25. Tesla
    Upgrading Autopilot: Seeing the World in Radar. 
    In: Tesla Blog [Internet]. 11 Sep 2016 [cited 29 Jan 2020].
    https://www.tesla.com/blog/upgrading-autopilot-seeing-world-radar 
  26. Murray C.
    Autonomous Cars Look to Sensor Advancements in 2019.
    In: Design News [Internet]. 7 Jan 2019 [cited 16 Dec 2019].
    https://www.designnews.com/electronics-test/autonomous-cars-look-sensor-advancements-2019/95504860759958
  27. Marenko K.
    Why Hi-Resolution Radar is a Game Changer. 
    In: FierceElectronics [Internet]. 23 Aug 2018 [cited 16 Dec 2019].
    https://www.fierceelectronics.com/components/why-hi-resolution-radar-a-game-changer  
  28. Koon J.
    How Sensors Empower Autonomous Driving. 
    In: Engineering.com [Internet]. 15 Jan 2019 [cited 16 Dec 2019].
    https://www.engineering.com/IOT/ArticleID/18285/How-Sensors-Empower-Autonomous-Driving.aspx
  29. Yoo HW, Druml N, Brunner D, Schwarzl C, Thurner T, Hennecke M, et al.
    MEMS-based lidar for autonomous driving.
    e & i Elektrotechnik und Informationstechnik. 2018. pp. 408–415.
    doi:10.1007/s00502-018-0635-2 
  30. Lee TB.
    Why experts believe cheaper, better lidar is right around the corner. 
    In: Ars Technica [Internet]. 1 Jan 2018 [cited 29 Jan 2020].
    https://arstechnica.com/cars/2018/01/driving-around-without-a-driver-lidar-technology-explained/ 
  31. Christopher V. Poulton and Michael R. Watts.
    MIT and DARPA Pack Lidar Sensor Onto Single Chip. 
    In: IEEE Spectrum: Technology, Engineering, and Science News [Internet]. 4 Aug 2016 [cited 29 Jan 2020].
    https://spectrum.ieee.org/tech-talk/semiconductors/optoelectronics/mit-lidar-on-a-chip 
  32. Ross PE.
    Lumotive Says It’s Got a Solid-State Lidar That Really Works. 
    In: IEEE Spectrum: Technology, Engineering, and Science News [Internet]. 21 Mar 2019 [cited 29 Jan 2020].
    https://spectrum.ieee.org/cars-that-think/transportation/sensors/lumotive-says-its-got-a-solidstate-lidar-that-really-works
  33. Sun W, Hu Y, MacDonnell DG, Weimer C, Baize RR.
    Technique to separate lidar signal and sunlight.
    Opt Express, OE. 2016;24: 12949–12954.
    doi:10.1364/OE.24.012949 
  34. Velodyne Lidar Introduces VelabitTM. 
    In: Business Wire [Internet]. 7 Jan 2020 [cited 29 Jan 2020].
    https://www.businesswire.com/news/home/20200107005849/en/Velodyne-Lidar-Introduces-Velabit%E2%84%A2/
  35. Ross PE.
    Velodyne Will Sell a Lidar for $100. 
    In: IEEE Spectrum: Technology, Engineering, and Science News [Internet]. 20 Jan 2020 [cited 29 Jan 2020].
    https://spectrum.ieee.org/cars-that-think/sensors/automotive-sensors/velodyne-will-sell-a-lidar-for-100
  36. Tsyktor V.
    LIDAR vs Radar vs Sonar: Which Is Better for Self-Driving Cars? 
    In: CyberPulse [Internet]. 28 May 2018 [cited 9 Dec 2019].
    https://cyberpulse.info/lidar-vs-radar-vs-sonar/ 
  37. Tompkinson W, van Rees E.
    Blickfeld Cube Range has a range of up to 250 meters
    In: SPAR 3D [Internet]. 17 Oct 2019 [cited 12 Dec 2019].
    https://www.spar3d.com/news/lidar/blickfelds-latest-lidar-sensor-has-a-range-up-to-250-meters/ 
  38. Thusu R.
    The Growing World of the Image Sensors Market. 
    In: FierceElectronics [Internet]. 1 Feb 2012 [cited 27 Dec 2019].
    https://www.fierceelectronics.com/embedded/growing-world-image-sensors-market 
  39. Sawers P.
    Wayve raises $20 million to give autonomous cars better AI brains. 
    In: VentureBeat [Internet]. VentureBeat; 18 Nov 2019 [cited 27 Dec 2019].
    https://venturebeat.com/2019/11/17/wayve-raises-20-million-to-give-autonomous-cars-better-ai-brains/
  40. Lambert F.
    A look at Tesla’s new Autopilot hardware suite: 8 cameras, 1 radar, ultrasonics & new supercomputer 
    In: Electrek [Internet]. 20 Oct 2016 [cited 9 Jan 2020].
    https://electrek.co/2016/10/20/tesla-new-autopilot-hardware-suite-camera-nvidia-tesla-vision/ 
  41. Tesla.
    Autopilot. 
    In: Tesla [Internet]. [cited 9 Jan 2020].
    https://www.tesla.com/autopilot   
  42. Bashir E.
    Opinion Post: What failed in Uber’s Accident that resulted in the death of a Pedestrian. 
    In: Automotive Electronics [Internet]. 23 Mar 2018 [cited 9 Jan 2020].
    https://www.automotivelectronics.com/uber-driverless-car-accident-technology/ 
  43. IntelliSafe surround. 
    In: Volvo Cars [Internet]. [cited 9 Jan 2020].
    https://www.volvocars.com/intl/why-volvo/human-innovation/future-of-driving/safety/intellisafe-surround
  44. Volvo Cars and Uber present production vehicle ready for self-driving. 
    In: Volvo Cars Global Newsroom [Internet]. 12 Jun 2019 [cited 9 Jan 2020].
    https://www.media.volvocars.com/global/en-gb/media/pressreleases/254697/volvo-cars-and-uber-present-production-vehicle-ready-for-self-driving/
  45. Waymo.
    Waymo Safety Report 2018: On the Road to Fully Self-Driving.
    Waymo;
    https://storage.googleapis.com/sdc-prod/v1/safety-report/Safety%20Report%202018.pdf 
  46. GPS.gov: GPS Accuracy. 
    In: GPS.gov [Internet]. 5 Dec 2017 [cited 10 Dec 2019].
    https://www.gps.gov/systems/gps/performance/accuracy/ 
  47. GMV. Galileo General 
    Introduction. In: ESA Navipedia [Internet]. [cited 29 Jan 2020].
    https://gssc.esa.int/navipedia/index.php/Galileo_General_Introduction
  48. What is GNSS? 
    In: OxTS [Internet]. 14 Aug 2019 [cited 29 Jan 2020].
    https://www.oxts.com/what-is-gnss/
  49. Gade K.
    The Seven Ways to Find Heading.
    J Navig. 2016;69: 955–970.
    doi:10.1017/S0373463316000096 
  50. Noboru Noguchi, Mlchio Kise, John F. Reid and Qin Zhang.
    Autonomous Vehicle Based on GPS and Inertial Sensors.
    IFAC Proceedings Volumes. 2001;34: 105–110.
    doi:10.1016/S1474-6670(17)34115-0 
  51. Teschler L. 
    Inertial measurement units will keep self-driving cars on track. In: Microcontroller Tips [Internet]. 15 Aug 2018 [cited 29 Jan 2020]. https://www.microcontrollertips.com/inertial-measurement-units-will-keep-self-driving-cars-on-track-faq/
  52. Edelkamp S, Schrödl S.
    Heuristic Search: Theory and Applications. 
    Elsevier Inc.; 2012.
    https://doi.org/10.1016/C2009-0-16511-X 
  53. Waymo Team.
    Building maps for a self-driving car. 
    In: Medium [Internet]. Waymo; 13 Dec 2016 [cited 29 Jan 2020].
    https://medium.com/waymo/building-maps-for-a-self-driving-car-723b4d9cd3f4
  54. Lyft.
    Rethinking Maps for Self-Driving. 
    In: Medium [Internet]. Lyft Level 5; 15 Oct 2018 [cited 29 Jan 2020].
    https://medium.com/lyftlevel5/https-medium-com-lyftlevel5-rethinking-maps-for-self-driving-a147c24758d6
  55. Boudette NE.
    Building a Road Map for the Self-Driving Car. 
    In: New York Times [Internet]. 2 Mar 2017 [cited 29 Jan 2020].
    https://www.nytimes.com/2017/03/02/automobiles/wheels/self-driving-cars-gps-maps.html
  56. Templeton B.
    Elon Musk Declares Precision Maps A “Really Bad Idea” -- Here’s Why Others Disagree. 
    In: Forbes [Internet]. Forbes; 20 May 2019 [cited 29 Jan 2020].
    https://www.forbes.com/sites/bradtempleton/2019/05/20/elon-musk-declares-precision-maps-a-really-bad-idea-heres-why-others-disagree/
  57. Ahmad Al-Dahle and Matthew E. Last and Philip J. Sieh and Benjamin Lyon.
    Autonomous Navigation System.
    US Patent. 2017 /0363430 Al , 2017.
    https://pdfaiw.uspto.gov/.aiw?Docid=20170363430 
  58. Kendall A.
    Learning to Drive like a Human. 
    In: Wayve [Internet]. Wayve; 3 Apr 2019 [cited 29 Jan 2020].
    https://wayve.ai/blog/driving-like-human 
  59. Conner-Simons A, Gordon R.
    Self-driving cars for country roads. 
    In: MIT News [Internet]. 7 May 2018 [cited 29 Jan 2020].
    http://news.mit.edu/2018/self-driving-cars-for-country-roads-mit-csail-0507
  60. Teddy Ort and Liam Paull and Daniela Rus.
    Autonomous Vehicle Navigation in Rural Environments without Detailed Prior Maps.
    2018. https://toyota.csail.mit.edu/sites/default/files/documents/papers/ICRA2018_AutonomousVehicleNavigationRuralEnvironment.pdf
  61. Matheson R.
    Bringing human-like reasoning to driverless car navigation. 
    In: MIT News [Internet]. 22 May 2019 [cited 29 Jan 2020].
    http://news.mit.edu/2019/human-reasoning-ai-driverless-car-navigation-0523
  62. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz.
    SLAM: Simultaneous Localization and Mapping. Introduction to Mobile Robotics;
    http://ais.informatik.uni-freiburg.de/teaching/ss12/robotics/slides/12-slam.pdf ⇧
  63. Huang B, Zhao J, Liu J.
    A Survey of Simultaneous Localization and Mapping.
    2019. http://arxiv.org/abs/1909.05214 ⇧
  64. Castanedo F.
    A Review of Data Fusion Techniques.
    The Scientific World Journal. 2013;
    doi:10.1155/2013/704504 
  65. Holstein T, Dodig-Crnkovic G, Pelliccione P.
    Ethical and Social Aspects of Self-Driving Cars. 
    arXiv. 2018.
    https://arxiv.org/abs/1802.04103 
  66. Grigorescu S, Trasnea B, Cocias T, Macesanu G.
    A Survey of Deep Learning Techniques for Autonomous Driving. 
    Journal of Field Robotics. 2019; 1–25. doi:10.1002/rob.21918 
  67. Haavaldsen H, Aasboe M, Lindseth F.
    Autonomous Vehicle Control: End-to-end Learning in Simulated Urban Environments.
    2019. http://arxiv.org/abs/1905.06712 ⇧
  68. Zhang J.
    End-to-end Learning for Autonomous Driving. 
    New York University; 2019 May.
    https://cs.nyu.edu/media/publications/zhang_jiakai.pdf 
  69. Camera Based Image Processing. 
    In: Self Driving Cars [Internet]. 26 Sep 2017 [cited 11 Dec 2019].
    https://sites.tufts.edu/selfdrivingisaac/2017/09/26/camera-based-image-processing/ 
  70. Prabhu R.
    Understanding of Convolutional Neural Network (CNN) — Deep Learning. 
    In: Medium [Internet]. Medium; 4 Mar 2018 [cited 12 Dec 2019].
    https://medium.com/@RaghavPrabhu/understanding-of-convolutional-neural-network-cnn-deep-learning-99760835f148 ⇧
  71. Convolutional Neural Network Architecture: Forging Pathways to the Future. 
    In: MissingLink.ai [Internet]. [cited 12 Dec 2019].
    https://missinglink.ai/guides/convolutional-neural-networks/convolutional-neural-network-architecture-forging-pathways-future/ ⇧
  72. A Beginner’s Guide to LSTMs and Recurrent Neural Networks. 
    In: Pathmind [Internet]. [cited 12 Dec 2019].
    http://pathmind.com/wiki/lstm
  73. Banerjee S.
    An Introduction to Recurrent Neural Networks.
    In: Medium [Internet]. Explore Science & Artificial Intelligence; 23 May 2018 [cited 12 Dec 2019].
    https://medium.com/explore-artificial-intelligence/an-introduction-to-recurrent-neural-networks-72c97bf0912
  74. Introduction to Recurrent Neural Network.
    In: GeeksforGeeks [Internet]. 3 Oct 2018 [cited 12 Dec 2019].
    https://www.geeksforgeeks.org/introduction-to-recurrent-neural-network/ 
  75. Folkers A, Rick M.
    Controlling an Autonomous Vehicle with Deep Reinforcement Learning.
    arXiv. 2019.
    https://arxiv.org/abs/1909.12153 
  76. Talpaert V, Sobh I, Kiran BR, Mannion P, Yogamani S, El-Sallab A, et al.
    Exploring applications of deep reinforcement learning for real-world autonomous driving systems. 
    2019. http://arxiv.org/abs/1901.01536
  77. Karpathy A.
    Multi-Task Learning in the Wilderness.
    SlidesLive; 2019.
    https://slideslive.com/38917690 ⇧
  78. Figure Eight.
    TRAIN AI 2018 - Building the Software 2.0 Stack.
    2018.
    https://vimeo.com/272696002 
  79. Open Datasets - Scale.
    [cited 29 Jan 2020].
    https://scale.com/open-datasets 
  80. Dataset | Lyft Level 5. 
    In: Lyft Level 5 [Internet]. [cited 29 Jan 2020].
    https://level5.lyft.com/dataset/ 
  81. Open Dataset – Waymo. 
    In: Waymo [Internet]. [cited 29 Jan 2020].
    https://waymo.com/open/ 
  82. Smith J.
    Why Simulation is the Key to Building Safe Autonomous Vehicles. 
    In: Electronic Design [Internet]. 3 Oct 2019 [cited 29 Jan 2020].
    https://www.electronicdesign.com/markets/automotive/article/21808661/why-simulation-is-the-key-to-building-safe-autonomous-vehicles
  83. Pan X, You Y, Wang Z, Lu C.
    Virtual to Real Reinforcement Learning for Autonomous Driving.
    2017. http://arxiv.org/abs/1704.03952 ⇧
  84. Hawkins AJ.
    It’s Elon Musk vs. everyone else in the race for fully driverless cars. 
    In: The Verge [Internet]. The Verge; 24 Apr 2019 [cited 27 Dec 2019].
    https://www.theverge.com/2019/4/24/18512580/elon-musk-tesla-driverless-cars-lidar-simulation-waymo ⇧
  85. Etherington D.
    Waymo has now driven 10 billion autonomous miles in simulation. 
    In: TechCrunch [Internet]. TechCrunch; 10 Jul 2019 [cited 29 Jan 2020].
    http://social.techcrunch.com/2019/07/10/waymo-has-now-driven-10-billion-autonomous-miles-in-simulation/ ⇧
  86. Shukla D.
    Design Considerations For Autonomous Vehicles. 
    In: Electronics For You [Internet]. 16 Aug 2019 [cited 17 Dec 2019].
    https://electronicsforu.com/market-verticals/automotive/design-considerations-autonomous-vehicles 
  87. Katrakazas C, Quddus M, Chen W-H, Deka L.
    Real-time motion planning methods for autonomous on-road driving: State-of-the-art and future research directions. 
    Transp Res Part C: Emerg Technol. 2015;60: 416–442.
    doi:10.1016/j.trc.2015.09.011 
  88. Venkatachalam M.
    Attention in Neural Networks. 
    In: Medium [Internet]. Towards Data Science; 7 Jul 2019 [cited 19 Dec 2019].
    https://towardsdatascience.com/attention-in-neural-networks-e66920838742 
  89. Baykal C, Liebenwein L, Gilitschenski I, Feldman D, Rus D.
    Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds. 
    arXiv. 2018. http://arxiv.org/abs/1804.05345 ⇧
  90. Anthony Levandowski on lessons learned at TC Sessions: Robotics+AI.
    [Youtube]. TechCrunch; 2019.
    https://www.youtube.com/watch?v=fNgEG5rCav4 
  91. Grand View Research (GVR).
    Automotive Electronic Control Unit Market Size, Share, & Trends Analysis Report By Application, By Propulsion Type, By Capacity, By Vehicle Type, By Region, And Segment Forecasts, 2019 - 2025.
    Grand View Research (GVR); 2019 Jul. Report No.: 978-1-68038-367-6.
    https://www.grandviewresearch.com/industry-analysis/automotive-ecu-market 
  92. Intellias.
    Everything You Wanted to Know About Types of Operating Systems in Autonomous Vehicles. 
    In: Intellias (Intelligent Software Engineering) [Internet]. 15 May 2019 [cited 27 Dec 2019].
    https://www.intellias.com/everything-you-wanted-to-know-about-types-of-operating-systems-in-autonomous-vehicles/ ⇧
  93. van Dijk L.
    Future Vehicle Networks and ECUs: Architecture and Technology considerations. 
    NXP Semiconductors; 2017.
    https://www.nxp.com/docs/en/white-paper/FVNECUA4WP.pdf 
  94. Scobie J, Stachew M.
    Electronic control system partitioning in the autonomous vehicle. 
    In: eeNews Automotive [Internet]. 29 Oct 2015 [cited 27 Dec 2019].
    https://www.eenewsautomotive.com/content/electronic-control-system-partitioning-autonomous-vehicle ⇧
  95. Estl H.
    Sensor fusion: A critical step on the road to autonomous vehicles. 
    In: eeNews Europe [Internet]. 11 Apr 2016 [cited 27 Dec 2019].
    https://www.eenewseurope.com/news/sensor-fusion-critical-step-road-autonomous-vehicles 
  96. Sheikh AF.
    How Advanced Driver-Assistance Systems (ADAS) Impact Automotive Semiconductors. 
    In: Wevolver [Internet]. 12 Nov 2019 [cited 27 Dec 2019].
    https://www.wevolver.com/article/how.advanced.driverassistance.systems.adas.impact.automotive.semiconductor  
  97. Murray C.
    What’s the Best Computing Architecture for the Autonomous Car? 
    In: Design News [Internet]. 17 Aug 2017 [cited 27 Dec 2019].
    https://www.designnews.com/automotive-0/what-s-best-computing-architecture-autonomous-car/87827789257286
  98. Complexity in basic cars: SEAT Ateca SUV has 2.2 km of wire, 100 sensors and control units. 
    In: Green Car Congress [Internet]. 24 Feb 2019 [cited 27 Dec 2019].
    https://www.greencarcongress.com/2019/02/20190224-ateca.html 
  99. Nvidia.
    Self-Driving Safety Report.
    Nvidia Corporation; 2018.
    https://www.nvidia.com/content/dam/en-zz/Solutions/self-driving-cars/safety-report/auto-print-safety-report-pdf-v16.5%20(1).pdf
  100. Gawron JH, Keoleian GA, De Kleine RD, Wallington TJ, Kim HC.
    Life Cycle Assessment of Connected and Automated Vehicles: Sensing and Computing Subsystem and Vehicle Level Effects.
    Environmental Science & Technology. 2018;52: 3249–3256.
    doi:10.1021/acs.est.7b04576 
  101. Stewart J.
    Self-Driving Cars Use Crazy Amounts of Power, and It’s Becoming a Problem. 
    In: Wired [Internet]. WIRED; 6 Feb 2018 [cited 27 Dec 2019].
    https://www.wired.com/story/self-driving-cars-power-consumption-nvidia-chip/ 
  102. Preibisch JB.
    Putting high-performance computing into cars: automotive discrete semiconductors for autonomous driving. 
    In: Wevolver [Internet]. 11 Dec 2019 [cited 27 Dec 2019].
    https://www.wevolver.com/article/putting.highperformance.computing.into.cars.automotive.discrete.semiconductors.for.autonomous.driving/
  103. Efficient Power Conversion Corporation. What is GaN?
    [cited 27 Dec 2019].
    https://epc-co.com/epc/GalliumNitride/WhatisGaN.aspx 
  104. Davis S.
    GaN Basics: FAQs. 
    In: Power Electronics [Internet]. 2 Oct 2013 [cited 27 Dec 2019].
    https://www.powerelectronics.com/technologies/gan-transistors/article/21863347/gan-basics-faqs 
  105. Wang J.
    Deep Learning Chips — Can NVIDIA Hold On To Its Lead? 
    In: ARK Investment Management [Internet]. 27 Sep 2017 [cited 27 Dec 2019].
    https://ark-invest.com/research/gpu-tpu-nvidia 
  106. International Communication Association.
    Humans less likely to return to an automated advisor once given bad advice.
    In: Phys.org [Internet]. 25 May 2016 [cited 8 Dec 2019].
    https://phys.org/news/2016-05-humans-automated-advisor-bad-advice.html 
  107. Punchcut.
    UX Design for Autonomous Vehicles. 
    In: Medium [Internet]. Medium; 7 Aug 2019 [cited 20 Dec 2019].
    https://medium.com/punchcut/ux-design-for-autonomous-vehicles-9624c5a0a28f 
  108. Houser K.
    Tesla: Autopilot Is Nearly 9 Times Safer Than the Average Driver. 
    In: Futurism [Internet]. The Byte; 24 Oct 2019 [cited 10 Dec 2019].
    https://futurism.com/the-byte/tesla-autopilot-safer-average-driver 
  109. Ohnsman A.
    Waymo Says More Of Its Self-Driving Cars Operating “Rider Only” With No One At Wheel.
    In: Forbes [Internet]. Forbes; 28 Oct 2019 [cited 29 Jan 2020].
    https://www.forbes.com/sites/alanohnsman/2019/10/28/waymos-autonomous-car-definition-if-you-need-a-drivers-license-its-not-self-driving/
  110. Design Is [Autonomous] – In Conversation with Ryan Powell, Melissa Cefkin, and Wendy Ju.
    [Youtube]. Google Design; 2018.
    https://www.youtube.com/watch?v=5hLEiBGPrNI 
  111. Niedermeyer E.
    Hailing a driverless ride in a Waymo. 
    In: TechCrunch [Internet]. TechCrunch; 1 Nov 2019 [cited 8 Dec 2019].
    http://social.techcrunch.com/2019/11/01/hailing-a-driverless-ride-in-a-waymo/
  112. Rödel C, Stadler S, Meschtscherjakov A, Tscheligi M.
    Towards Autonomous Cars: The Effect of Autonomy Levels on Acceptance and User Experience. 
    Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM; 2014. pp. 1–8.
    doi:10.1145/2667317.2667330 
  113. Eckoldt K, Knobel M, Hassenzahl M, Schumann J.
    An Experiential Perspective on Advanced Driver Assistance Systems. 
    Information Technology. 2012;54: 165–171.
    doi:10.1524/itit.2012.0678 
  114. Els P.
    Braking and steering systems to control a new generation of autonomous vehicle. 
    In: Automotive IQ [Internet]. Automotive IQ; 8 May 2019 [cited 12 Dec 2019].
    https://www.automotive-iq.com/chassis-systems/columns/braking-and-steering-systems-to-control-a-new-generation-of-autonomous-vehicle ⇧
  115. Moyers S.
    Current UX Design Challenges for Driverless Cars. 
    In: Digital Agency Network [Internet]. 5 Dec 2017 [cited 19 Dec 2019].
     https://digitalagencynetwork.com/current-ux-design-challenges-for-driverless-cars/ 
  116. Kun AL, Boll S, Schmidt A.
    Shifting Gears: User Interfaces in the Age of Autonomous Driving.
    IEEE Pervasive Computing. 2016;15: 32–38.
    doi:10.1109/MPRV.2016.14 
  117. Wang S, Li Z.
    Exploring the mechanism of crashes with automated vehicles using statistical modeling approaches.
    PLoS One. 2019;14.
    doi:10.1371/journal.pone.0214550 
  118. Shutko J.
    How Self-Driving Cars Could Communicate with You in the Future. 
    In: Ford Social [Internet]. 13 Sep 2017 [cited 9 Jan 2020].
    https://social.ford.com/en_US/story/ford-community/move-freely/how-self-driving-cars-could-communicate-with-you-in-the-future.html ⇧
  119. Davies A.
    The Self-Driving Startup Teaching Cars to Talk. 
    In: Wired [Internet]. WIRED; 20 Aug 2018 [cited 29 Jan 2020].
     https://www.wired.com/story/driveai-self-driving-design-frisco-texas/ 
  120. Wikipedia contributors.
    Drive.ai.
    In: Wikipedia [Internet]. 27 Jul 2018 [cited 31 Jan 2020].
    https://en.wikipedia.org/wiki/Drive.ai 
  121. Zoria S.
    Smart Cities: A New Look at the Autonomous-Vehicle Infrastructure. 
    In: Medium [Internet]. 19 Nov 2019 [cited 19 Dec 2019].
    https://medium.com/swlh/smart-cities-a-new-look-at-the-autonomous-vehicle-infrastructure-3e00cf3e93b2
  122. Litman T.
    Autonomous Vehicle Implementation Predictions: Implications for Transport Planning. 
    Victoria Transport Policy Institute; 2019 Oct.
    https://www.vtpi.org/avip.pdf 
  123. Macleod A.
    Autonomous driving, smart cities and the new mobility future.
    Siemens; 2018.
    https://www.techbriefs.com/autonomous-driving-smart-cities-and-the-new-mobility-future/file 
  124. Hoeben R.
    V2X is Here to Stay—Now Let’s Use It for Autonomous Cars. 
    In: Electronic Design [Internet]. 22 Aug 2018 [cited 19 Dec 2019].
    https://www.electronicdesign.com/markets/automotive/article/21806892/v2x-is-here-to-staynow-lets-use-it-for-autonomous-cars
  125. Vinel A.
    5G V2X – Communication for Platooning - Högskolan i Halmstad. 
    In: Halmstad University [Internet]. 16 Dec 2019 [cited 29 Jan 2020].
    https://www.hh.se/english/research/research-environments/embedded-and-intelligent-systems-eis/research-projects-within-eis/5g-v2x---communication-for-platooning.html ⇧
  126. Wittdorf D.
    Towards 5G Mobility: The role of efficient discrete semiconductors.
    [cited 31 Jan 2020].
    https://www.wevolver.com/article/towards.5g.mobility.the.role.of.efficient.discrete.semiconductors/ ⇧
  127. Krasniqi X, Hajrizi E.
    Use of IoT Technology to Drive the Automotive Industry from Connected to Full Autonomous Vehicles.
    IFAC-PapersOnLine. 2016;49: 269–274.
    doi:10.1016/j.ifacol.2016.11.078 
  128. IEEE Standards Association.
    IEEE 802.11p-2010 - IEEE Standard for Information technology-- Local and metropolitan area networks-- Specific requirements-- Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications Amendment 6: Wireless Access in Vehicular Environments. 
    2010 Jul. Report No.: 802.11p-2010.
    https://standards.ieee.org/standard/802_11p-2010.html 
  129. 3GPP.
    Release 15. [cited 12 Jan 2020].
    https://www.3gpp.org/release-15 
  130. Flynn K. 
    Initial Cellular V2X standard completed. [cited 12 Jan 2020].
    https://www.3gpp.org/news-events/3gpp-news/1798-v2x_r14 
  131. Dedicated Short Range Communications. 
    In: Clemson Vehicular Electronics Laboratory [Internet]. [cited 12 Jan 2020].
    https://cecas.clemson.edu/cvel/auto/systems/DSRC.html 
  132. Naik G, Choudhury B.
    IEEE 802.11bd & 5G NR V2X: Evolution of Radio Access Technologies for V2X Communications.
    arXiv. 2019. https://arxiv.org/abs/1903.08391 
  133. Llanasas R.
    5G’s Important Role in Autonomous Car Technology. 
    In: Machine Design [Internet]. 11 Mar 2019 [cited 8 Dec 2019].
    https://www.machinedesign.com/mechanical-motion-systems/article/21837614/5gs-important-role-in-autonomous-car-technology ⇧
  134. Segan S.
    What Is 5G? 
    In: PC Magazine [Internet]. 31 Oct 2019 [cited 10 Dec 2019].
    https://www.pcmag.com/article/345387/what-is-5g 
  135. 5G Implementation Guidelines.
    In: Future Networks [Internet]. 28 Mar 2019 [cited 29 Jan 2020].
    https://www.gsma.com/futurenetworks/wiki/5g-implementation-guidelines/ 
  136. Chaudry F.
    Towards A System Of Systems: Networking And Communication Between Vehicles. 
    In: Wevolver [Internet]. 31 Jan 2020 [cited 31 Jan 2020].
    https://www.wevolver.com/article/towards.a.system.of.systems.networking.and.communication.between.vehicles ⇧
  137. Leswing K.
    Qualcomm announces chips for self-driving cars that could be in cars by 2023. 
    In: CNBC [Internet]. CNBC; 6 Jan 2020 [cited 12 Jan 2020].
    https://www.cnbc.com/2020/01/06/qualcomm-snapdragon-ride-system-announced-for-self-driving-cars.html ⇧
  138. Autonomous Driving at Intel. 
    In: Intel Newsroom [Internet]. 8 Jan 2020 [cited 12 Jan 2020].
    https://newsroom.intel.com/press-kits/autonomous-driving-intel/ 
  139. Cottrill CD.
    Data and digital systems for UK transport: change and its implications.
    UK government’s Foresight Future of Mobility project; 2018 Dec.
    https://aura.abdn.ac.uk/bitstream/handle/2164/12742/Dataanddigital.pdf;jsessionid=1AF1CB7BEE7C498F8EA713CC3D7C1255?sequence=1 ⇧
  140. Naik G, Choudhury B, Park J-M.
    IEEE 802.11bd & 5G NR V2X: Evolution of Radio Access Technologies for V2X Communications.
    IEEE Access. 2019;7: 70169–70184.
    doi:10.1109/access.2019.2919489 
  141. Yoshida J.
    The DSRC vs 5G Debate Continues. 
    In: EET Asia [Internet]. 29 Oct 2019 [cited 29 Jan 2020].
    https://www.eetasia.com/news/article/The-DSRC-vs-5G-Debate-Continues 
  142. Shepardson D.
    Toyota abandons plan to install U.S connected vehicle tech by 2021. 
    In: U.S. [Internet]. Reuters; 26 Apr 2019 [cited 31 Jan 2020].
    https://www.reuters.com/article/us-autos-toyota-communication-idUSKCN1S2252 
  143. Why C-V2X may yet become the global automotive connectivity standard.
    In: Futurum [Internet]. 14 Nov 2019 [cited 31 Jan 2020].
    https://futurumresearch.com/the-war-between-c-v2x-and-dsrc-looks-to-be-steering-itself-towards-c-v2x/
  144. Naughton K.
    Ford Breaks With GM, Toyota on Future of Talking-Car Technology. 
    In: Bloomberg [Internet]. 7 Jan 2019 [cited 29 Jan 2020].
    https://www.bloomberg.com/news/articles/2019-01-07/ford-breaks-with-gm-toyota-on-future-of-talking-car-technology ⇧
  145. Reger L.
    VW Golf Brings WiFi-Based Safe, Secure V2X to the Masses.
    In: NXP Blog [Internet]. 30 Oct 2019 [cited 29 Jan 2020].
     https://blog.nxp.com/automotive/vw-golf-brings-wifi-based-safe-secure-v2x-to-the-masses 
  146. V2X Technology Benchmark Testing. 
    2018 Sep.
    https://www.qualcomm.com/media/documents/files/5gaa-v2x-technology-benchmark-testing-dsrc-and-c-v2x.pdf
  147. Designing a Connected Vehicle Platform on Cloud IoT Core. 
    In: Google Cloud [Internet]. 10 Apr 2019 [cited 12 Jan 2020].
    https://cloud.google.com/solutions/designing-connected-vehicle-platform 
  148. ADAS and Autonomous Driving. 
    In: Amazon Web Services, Inc. [Internet]. [cited 12 Jan 2020].
    https://aws.amazon.com/automotive/autonomous-driving/ 
  149. Autonomous Vehicle Solutions. 
    In: Microsoft [Internet]. [cited 12 Jan 2020].
    https://www.microsoft.com/en-us/industry/automotive/autonomous-vehicle-deployment 
  150. Geenen B.
    Developing An Autonomous Racing Car: Interview With Roborace’s Chief Engineer.
    In: Wevolver [Internet]. 31 Jan 2020 [cited 31 Jan 2020].
    https://www.wevolver.com/article/developing.an.autonomous.racing.car.interview.with.roboraces.chief.engineer ⇧
  151. Freeman D.
    Following the Autonomous Vehicle Hype.
    In: Wevolver [Internet]. 31 Jan 2020 [cited 31 Jan 2020].
     https://www.wevolver.com/article/following.the.autonomous.vehicle.hype
