The most recent people targeted for replacement by robots? Car drivers—one of the most common occupations around the world. Automotive players face a self-driving-car disruption driven largely by the tech industry, and the associated buzz has many consumers expecting their next cars to be fully autonomous. But a close examination of the technologies required to achieve advanced levels of autonomous driving suggests a significantly longer timeline; fully autonomous vehicles are probably a decade or more away.
Mapping a technology revolution
The first attempts to create autonomous vehicles (AVs) concentrated on assisted-driving technologies (see sidebar, “What is an autonomous vehicle?,” for descriptions of SAE International’s levels of vehicle autonomy). These advanced driver-assistance systems (ADAS)—including emergency braking, backup cameras, adaptive cruise control, and self-parking systems—first appeared in luxury vehicles. Eventually, industry regulators began to mandate the inclusion of some of these features in every vehicle, accelerating their penetration into the mass market. By 2016, the proliferation of ADAS had generated a market worth roughly $15 billion.
Around the world, the number of ADAS installations (for instance, night-vision and blind-spot detection systems) rose from 90 million units in 2014 to about 140 million in 2016—an increase of more than 50 percent in just two years. Some ADAS features have greater uptake than others. The adoption rate of surround-view parking systems, for example, increased by more than 150 percent from 2014 to 2016, while the number of adaptive front-lighting systems rose by around 20 percent in the same time frame (Exhibit 1).
Both the customer’s willingness to pay and declining prices have contributed to the technology’s proliferation. A recent McKinsey survey finds that drivers, on average, would spend an extra $500 to $2,500 per vehicle for different ADAS features. Although at first they could be found only in luxury vehicles, many original-equipment manufacturers (OEMs) now offer them in cars in the $20,000 range. Many higher-end vehicles not only autonomously steer, accelerate, and brake in highway conditions but also act to avoid vehicle crashes and reduce the impact of imminent collisions. Some commercial passenger vehicles driving limited distances can even park themselves in extremely tight spots.
But while headway has been made, the industry hasn’t yet determined the optimum technology archetype for semiautonomous vehicles (for example, those at SAE level 3) and consequently remains in the test-and-refine mode. So far, three technology solutions have emerged:
- Camera over radar relies predominantly on camera systems, supplementing them with radar data.
- Radar over camera relies primarily on radar sensors, supplementing them with information from cameras.
- The hybrid approach combines light detection and ranging (lidar), radar, camera systems, and sensor-fusion algorithms to understand the environment at a more granular level.
The cost of these systems differs; the hybrid approach is the most expensive one. However, no clear winner is yet apparent. Each system has its advantages and disadvantages. The radar-over-camera approach, for example, can work well in highway settings, where the flow of traffic is relatively predictable and the granularity levels required to map the environment are less strict. The combined approach, on the other hand, works better in heavily populated urban areas, where accurate measurements and granularity can help vehicles navigate narrow streets and identify smaller objects of interest.
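To make the trade-off concrete, here is a minimal sketch, in Python, of how the three archetypes might weight sensor confidence when scoring a candidate detection. The weights, the `Detection` type, and the scores are illustrative assumptions, not values from any production system.

```python
# Minimal sketch: how the three archetypes might weight sensor inputs when
# scoring a candidate detection. Weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    camera_conf: float  # 0..1 confidence from the camera pipeline
    radar_conf: float   # 0..1 confidence from radar
    lidar_conf: float   # 0..1 confidence from lidar (0 if none fitted)

# Relative sensor weights per archetype (hypothetical values).
ARCHETYPES = {
    "camera_over_radar": {"camera": 0.7, "radar": 0.3, "lidar": 0.0},
    "radar_over_camera": {"camera": 0.3, "radar": 0.7, "lidar": 0.0},
    "hybrid":            {"camera": 0.3, "radar": 0.3, "lidar": 0.4},
}

def fused_confidence(d: Detection, archetype: str) -> float:
    """Weighted confidence that the detection is a real object."""
    w = ARCHETYPES[archetype]
    return (w["camera"] * d.camera_conf
            + w["radar"] * d.radar_conf
            + w["lidar"] * d.lidar_conf)

d = Detection(camera_conf=0.9, radar_conf=0.4, lidar_conf=0.8)
for name in ARCHETYPES:
    print(f"{name}: {fused_confidence(d, name):.2f}")
```

The same detection scores differently under each archetype, which is one reason highway-oriented systems and urban-oriented systems favor different sensor mixes.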
Addressing challenges in autonomous-vehicle technology
AVs will undoubtedly usher in a new era for transportation, but the industry still needs to overcome some challenges before autonomous driving can be practical. We have already seen ADAS solutions ease the burdens of driving and make it safer. Yet in some cases, the technology has also created problems. One issue: humans trust or rely on these new systems too much. This is not a new phenomenon. When airbags moved into the mainstream, in the 1990s, some drivers and passengers took this as a signal that they could stop wearing their seatbelts, which they thought were now redundant. That misplaced confidence resulted in additional injuries and deaths.
Similarly, ADAS makes it possible for drivers to rely on automation in situations beyond its capabilities. Adaptive cruise control, for example, works well when a car directly follows another car but often fails to detect stationary objects. Unfortunately, real-life situations, as well as controlled experiments, show that drivers who place too much trust in automation end up crashing into stationary vehicles or other objects. The current capabilities of ADAS are limited—something many early adopters fail to understand.
There remains something of a safety conundrum. In 2015, accidents involving distracted drivers in the United States killed nearly 3,500 people and injured 391,000 more, all in conventional cars whose drivers were actively controlling their vehicles. Unfortunately, experts expect that the number of vehicle crashes will not, at first, decline dramatically after the introduction of AVs that offer significant levels of autonomous control but nonetheless require drivers to remain fully engaged in a backup, fail-safe role.
Safety experts worry that drivers in semiautonomous vehicles could pursue activities such as reading or texting and thus lack the required situational awareness when asked to take control. As drivers reengage, they must immediately evaluate their surroundings, determine the vehicle’s place in them, analyze the danger, and decide on a safe course of action. At 65 miles an hour, cars take less than four seconds to travel the length of a football field, and the longer a driver remains disengaged from driving, the longer the reengagement process could take. Automotive companies must develop a better human–machine interface to ensure that the new technologies save lives rather than contributing to more accidents.
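A quick back-of-the-envelope check of that figure, assuming a 300-foot playing field (end zones excluded):

```python
# How long does a car at 65 mph take to cover a football field?
FEET_PER_MILE = 5280
speed_fps = 65 * FEET_PER_MILE / 3600   # 65 mph in feet per second (~95.3)
field_length_ft = 300                   # playing field, end zones excluded

print(f"Speed: {speed_fps:.1f} ft/s")
print(f"Time to cover {field_length_ft} ft: {field_length_ft / speed_fps:.1f} s")
# -> roughly 3.1 seconds, consistent with "less than four seconds"
```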
We’ve seen similar problems in other contexts: in 2009, a commercial airliner overshot its destination airport by 150 miles because the pilots weren’t engaged while their plane was flying on autopilot. For semiautonomous cars, the “airspace” (the ground) is much more congested, and the “pilots” (the drivers) are far less well trained, so it is even more dangerous for preoccupied drivers to operate on autopilot for extended periods.
Evolving toward full autonomy
In the next five years, vehicles that adhere to SAE's high-automation level-4 designation will probably appear. Their automated-driving systems will perform all aspects of the dynamic driving task within specific operating modes, even if human drivers don't respond appropriately to requests to intervene. While the technology is ready for testing at a working level in limited situations, validating it might take years because the systems must be exposed to a significant number of uncommon situations. Engineers also need to achieve and guarantee reliability and safety targets. Initially, companies will design these systems to operate in specific use cases and specific geographies, an approach called geofencing. Another prerequisite is tuning the systems to operate successfully in given situations and conducting additional tuning as the geofenced region expands to encompass broader use cases and geographies.
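As an illustration of the geofencing idea, here is a minimal sketch of an operating-area check using a standard ray-casting point-in-polygon test; the service-area coordinates are invented for the example.

```python
# Minimal geofence check: is the vehicle inside the polygon bounding its
# approved operating area? Standard ray-casting point-in-polygon test.
from typing import List, Tuple

def inside_geofence(point: Tuple[float, float],
                    fence: List[Tuple[float, float]]) -> bool:
    """Ray-casting test (lon/lat treated as planar for short distances)."""
    x, y = point
    inside = False
    j = len(fence) - 1
    for i in range(len(fence)):
        xi, yi = fence[i]
        xj, yj = fence[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Hypothetical service area (a simple rectangle of lon/lat corners).
service_area = [(-122.45, 37.70), (-122.35, 37.70),
                (-122.35, 37.80), (-122.45, 37.80)]
print(inside_geofence((-122.40, 37.75), service_area))  # True: may operate
print(inside_geofence((-122.30, 37.75), service_area))  # False: outside area
```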
The challenge at SAE's levels 4 and 5 centers on operating vehicles without restrictions in any environment—for instance, in unmapped areas or places without lane markings or distinctive infrastructure and environmental features. Building a system that can operate in (mostly) unrestricted environments will therefore require dramatically more effort, given the exponentially larger number of use cases that engineers must cover and test. In the absence of lane markings or on unpaved roads, for example, the system must infer which areas are appropriate for moving vehicles. This can be a difficult vision problem, especially if the road surface isn't significantly different from its surroundings (for example, when roads are covered with snow).
Fully self-driving cars could be more than a decade away
Given current development trends, fully autonomous vehicles won’t be available in the next ten years. The main stumbling block is the development of the required software. While hardware innovations will deliver the required computational power, and prices (especially for sensors) appear likely to go on falling, software will remain a critical bottleneck (infographic).
In fact, hardware capabilities are already approaching the levels needed for well-optimized AV software to run smoothly. Current technology should achieve the required levels of computational power—both for graphics processing units (GPUs) and central processing units (CPUs)—very soon.
Camera sensors have the required range, resolution, and field of vision but face significant limitations in bad weather conditions. Radar is technologically ready and represents the best option for detection in rough weather and road conditions. Lidar systems, offering the best field of vision, can cover 360 degrees with high levels of granularity. Although these devices are currently expensive and bulky, a number of commercially viable, small, and inexpensive ones should hit the market in the next year or two. Several high-tech players claim to have reduced the cost of lidar to under $500, and another company has debuted a system that's potentially capable of enabling full autonomy (with roughly a dozen sensors) for approximately $10,000. From a commercialization perspective, companies need to understand the optimal number of sensors required for a level-5 (fully autonomous) vehicle.
Daunting software issues remain
The software needed to complement autonomous-vehicle hardware and exploit its full potential still has a way to go. Development timelines have slipped, given the complexity and research-oriented nature of the problems.
One issue: AVs must learn how to negotiate driving patterns involving both human drivers and other AVs. Localizing vehicles with a very high degree of accuracy using error-prone GPS sensors is another complexity that needs to be addressed. Solving these challenges requires not only significant upfront R&D but also long test and validation periods.
Three types of issues illustrate the software problem more specifically. First, object analysis, which detects objects and understands what they represent, is critical for autonomous vehicles. The system, for example, should treat a stationary motorcycle and a bicyclist riding on the side of the street in different ways and must therefore capture the critical differences during the object-analysis phase.
The initial challenge in object analysis is detection, which can be difficult, depending on the time of day, the background, and any possible movement. Also, the sensor fusion required to validate the existence and type of an object is technically challenging to achieve given the differences among the types of data such systems must compare—the point cloud (from lidar), the object list (from radar), and images (from cameras).
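A toy sketch of the association step hints at why this fusion is hard even in simplified form: here, a radar object list is matched against lidar cluster centroids by simple distance gating. The data types and the two-meter gate are assumptions; real systems fuse full point clouds, tracks, and images.

```python
# Sketch of cross-sensor association: greedy nearest-neighbor matching of a
# radar object list against lidar cluster centroids within a distance gate.
from dataclasses import dataclass
import math

@dataclass
class RadarObject:
    x: float           # meters, vehicle frame
    y: float
    range_rate: float  # m/s, radial velocity

@dataclass
class LidarCluster:
    x: float
    y: float

def associate(radar, lidar, gate_m: float = 2.0):
    """Pair each radar object with the nearest unmatched lidar cluster."""
    pairs = []
    unmatched = list(lidar)
    for r in radar:
        best, best_d = None, gate_m
        for c in unmatched:
            d = math.hypot(r.x - c.x, r.y - c.y)
            if d < best_d:
                best, best_d = c, d
        if best is not None:
            pairs.append((r, best))
            unmatched.remove(best)
    return pairs

radar = [RadarObject(20.0, 1.0, -3.0), RadarObject(45.0, -2.0, 0.0)]
lidar = [LidarCluster(20.5, 1.2), LidarCluster(80.0, 0.0)]
print(associate(radar, lidar))  # only the first radar object finds a partner
```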
Decision-making systems are the second issue. To mimic human decision making, they must negotiate a plethora of scenarios and undergo intensive, comprehensive “training.” Understanding and labeling the different scenarios and images collected is a nontrivial problem for an autonomous system, and creating comprehensive “if-then” rules covering all possible scenarios of door-to-door autonomous driving generally isn’t feasible. However, developers can build a database of if-then rules and supplement it with an artificial-intelligence (AI) engine that makes smart inferences and takes action in scenarios not covered by if-then rules. Creating such an engine is an extremely difficult task that will require significant development, testing, and validation.
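The rules-plus-inference pattern might look like the following sketch, in which an if-then rule base is consulted first and a learned model handles anything the rules don't cover; the rules, scenario fields, and stand-in model are all hypothetical.

```python
# Sketch: consult an if-then rule base first; fall back to a learned model
# for scenarios the rules don't cover. All rules and fields are hypothetical.
def rule_base(scenario: dict):
    """Explicit if-then rules for well-understood scenarios."""
    if scenario.get("traffic_light") == "red":
        return "stop"
    if scenario.get("pedestrian_in_path"):
        return "emergency_brake"
    if scenario.get("lead_vehicle_gap_s", 99) < 1.5:
        return "increase_gap"
    return None  # no rule fired

def learned_policy(scenario: dict) -> str:
    """Stand-in for a trained model that infers an action for novel cases."""
    # A real system would run a neural network here.
    return "proceed_with_caution"

def decide(scenario: dict) -> str:
    action = rule_base(scenario)
    return action if action is not None else learned_policy(scenario)

print(decide({"traffic_light": "red"}))   # stop (a rule fired)
print(decide({"debris_on_road": True}))   # falls back to the model
```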
The system also needs a fail-safe mechanism that allows a car to fail without putting its passengers and the people around it in danger. There is no way to check every possible software state and outcome. It would be daunting even to build safeguards to ensure against the worst outcomes and control vehicles so they can stop safely. Redundancies and long test times will be required.
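One common building block for such a mechanism is a watchdog that triggers a minimal-risk maneuver when a critical subsystem goes silent. The sketch below assumes heartbeat monitoring with an invented 200-millisecond deadline; real fail-safe designs involve far more redundancy than this.

```python
# Sketch of a fail-safe supervisor: if any critical subsystem misses its
# heartbeat deadline, degrade to a minimal-risk maneuver (controlled stop).
import time

HEARTBEAT_TIMEOUT_S = 0.2  # illustrative deadline for each subsystem

class FailSafeSupervisor:
    def __init__(self, subsystems):
        now = time.monotonic()
        self.last_beat = {name: now for name in subsystems}

    def heartbeat(self, name: str) -> None:
        self.last_beat[name] = time.monotonic()

    def check(self) -> str:
        now = time.monotonic()
        for name, t in self.last_beat.items():
            if now - t > HEARTBEAT_TIMEOUT_S:
                # Degrade safely rather than continuing blind.
                return f"minimal_risk_maneuver: {name} heartbeat lost"
        return "nominal"

sup = FailSafeSupervisor(["perception", "planning", "actuation"])
sup.heartbeat("perception")
sup.heartbeat("planning")
time.sleep(0.25)    # no heartbeats arrive, so every subsystem misses its deadline
print(sup.check())  # -> minimal_risk_maneuver: perception heartbeat lost
```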
Blazing a trail to fully autonomous driving
As companies push the software envelope in their attempts to create the first fully autonomous vehicle, they need to resolve the issues surrounding several sets of factors (Exhibit 2).
Perception, mapping, and localization
To perfect self-driving cars, companies in the AV space are now working on different approaches, focused on perception, mapping, and localization.
Perception. The goal—to achieve reliable levels of perception with the smallest number of test and validation miles needed. Two approaches are vying to win this race.
- Radar, sonar, and cameras. To perceive vehicles and other objects in the environment, AVs use radar, sonar, and camera systems. This approach doesn't assess the environment at a deeply granular level but requires less processing power.
- Lidar augmentation. The second approach uses lidar, in addition to the traditional sensor suite of radar and camera systems. It requires more data-processing and computational power but is more robust in various environments—especially tight, traffic-heavy ones.
Experts believe lidar augmentation will ultimately become the approach favored by many future AV players. The importance of lidar augmentation can be observed today by looking at the test vehicles of many OEMs, tier-1 suppliers, and tech players now developing AVs.
Mapping. AV developers are pursuing two mapping options.
- Granular, high-definition maps. To construct high-definition (HD) maps, companies often use vehicles equipped with lidar and cameras. These travel along the targeted roads and create 3-D HD maps with 360-degree information (including depth information) about the surroundings.
- Feature mapping. This approach, which doesn’t necessarily need lidar, can use cameras (often in combination with radar) to map only certain road features, which enable navigation. The map, for example, captures lane markings, road and traffic signs, bridges, and other objects relatively close to roads. While this approach provides lower levels of granularity, processing and updating are easier.
Captured data is analyzed, often manually, to generate semantic data, such as speed-limit signs that apply only at certain times. Mapmakers can enhance both approaches by using a fleet of vehicles, either manned or autonomous, equipped with the sensor packages required to collect and update maps continuously.
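A feature map can be thought of as a sparse store of semantically labeled objects rather than a dense point cloud. The following sketch shows one possible (assumed) schema and a naive radius query; production maps use tiled formats and spatial indexes.

```python
# Sketch of a feature map: sparse, semantically labeled road features instead
# of a dense 3-D point cloud. Schema and entries are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class MapFeature:
    kind: str                 # e.g. "lane_marking", "speed_sign", "bridge"
    position: tuple           # (lat, lon) of the feature
    attributes: dict = field(default_factory=dict)

feature_map = [
    MapFeature("lane_marking", (48.1371, 11.5754), {"style": "dashed"}),
    # Semantic data, e.g. a speed sign valid only at certain hours:
    MapFeature("speed_sign", (48.1375, 11.5761),
               {"limit_kph": 30, "valid": "07:00-18:00"}),
]

def features_near(fmap, lat, lon, radius_deg=0.001):
    """Naive radius query; real maps use spatial indexes (tiles, R-trees)."""
    return [f for f in fmap
            if abs(f.position[0] - lat) < radius_deg
            and abs(f.position[1] - lon) < radius_deg]

print(features_near(feature_map, 48.1372, 11.5755))  # both features are nearby
```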
Localization. By identifying a vehicle’s exact position in its environment, localization is a critical prerequisite for effective decisions about where and how to navigate. A couple of approaches are common.
- HD mapping. This approach uses onboard sensors (including GPS) to compare an AV's perceived environment with corresponding HD maps. It provides a reference point the vehicle can use to identify, on a very precise level, exactly where it is located (including lane information) and which direction it's heading.
- GPS localization without HD maps. Another approach relies on GPS for approximate localization and then uses an AV’s sensors to monitor the changes in its environment and thus refine the positioning information. Such a system, for example, uses GPS location data in conjunction with images captured by onboard cameras. Frame-by-frame comparative analysis reduces the error range of the GPS signal. The 95 percent confidence interval for horizontal geolocation of the GPS is around eight meters, which can be the difference between driving in the right lane or in the wrong (opposite) direction.
Both approaches also rely heavily on inertial navigation systems and odometry data. Experience shows that the first approach is generally much more robust and enables more accurate localization, while the second is easier to implement, since HD maps are not required. Given the differences in accuracy between the two, designers can use the second approach in areas (for example, rural and less populated roads) where precise information on the location of vehicles isn’t critical for navigation.
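The second approach is essentially a filtering problem. The sketch below shows a one-dimensional Kalman-style blend of odometry with noisy GPS fixes; the noise figures are illustrative, and real systems filter in three dimensions with IMU data.

```python
# Sketch of GPS-plus-odometry localization: a 1-D Kalman filter tracking
# position along the road. Noise variances are illustrative assumptions.
gps_var = 16.0   # GPS position variance, m^2 (several-meter error)
odo_var = 0.04   # per-step odometry noise variance, m^2

x, p = 0.0, gps_var  # state estimate and its variance
# Each step: (odometry delta in m, GPS residual = measurement minus prediction)
steps = [(1.0, 2.3), (1.0, -0.8), (1.0, 1.4)]

for delta, gps_residual in steps:
    # Predict: advance by odometry; uncertainty grows.
    x += delta
    p += odo_var
    # Update: blend in the GPS measurement via the Kalman gain.
    k = p / (p + gps_var)
    x += k * gps_residual
    p *= (1 - k)
    print(f"estimate={x:.2f} m, std={p ** 0.5:.2f} m")
# The position uncertainty falls from ~4 m (raw GPS) toward ~2 m here.
```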
Decision making
Fully autonomous cars can make thousands of decisions for every mile traveled. They need to do so correctly and consistently. Currently, AV designers use a few primary methods to keep their cars on the right path.
- Neural networks. To identify specific scenarios and make suitable decisions, today’s decision-making systems mainly employ neural networks. The complex nature of these networks can, however, make it difficult to understand the root causes or logic of certain decisions.
- Rule-based decision making. In rule-based approaches, engineers enumerate all possible combinations of if-then rules and then program vehicles accordingly. The significant time and effort required, as well as the probable inability to cover every potential case, make this approach infeasible on its own.
- Hybrid approach. Many experts view a hybrid approach that employs both neural networks and rule-based programming as the best solution. Developers can resolve the inherent complexity of neural networks by introducing redundancy—specific neural networks for individual processes connected by a centralized neural network. If-then rules then supplement this approach.
The hybrid approach, especially combined with statistical-inference models, is the most popular one today.
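In one plausible arrangement of that hybrid, sketched below, a learned planner proposes a maneuver and a deterministic rule layer can veto or override it; the proposal function and thresholds are hypothetical stand-ins.

```python
# Sketch of hybrid arbitration: a learned planner proposes a maneuver, and
# hard-coded safety rules can veto or override it. Values are hypothetical.
def network_proposal(state: dict) -> str:
    """Stand-in for a neural planner's suggested maneuver."""
    return state.get("suggested", "keep_lane")

SAFE_GAP_S = 2.0  # illustrative minimum time gap for a lane change

def safety_rules(state: dict, proposal: str) -> str:
    """Deterministic if-then layer that constrains the network's output."""
    if proposal == "change_lane_left" and state.get("left_gap_s", 0) < SAFE_GAP_S:
        return "keep_lane"        # veto: gap too small
    if state.get("obstacle_ahead_m", 999) < 10:
        return "emergency_brake"  # override regardless of the proposal
    return proposal

state = {"suggested": "change_lane_left", "left_gap_s": 1.2}
print(safety_rules(state, network_proposal(state)))  # -> keep_lane
```

Note that this is the mirror image of the rules-first pattern sketched earlier: here the network leads and the rules constrain, which keeps the deterministic layer auditable even when the network's logic is opaque.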
Test and validation
The automotive industry has significant experience with test-and-validation techniques. Here are some of the typical approaches used to develop AVs.
- Brute force. Engineers expose vehicles to millions of driving miles to determine statistically that systems are safe and operate as expected. The challenge is the number of miles required, which can take a significant amount of time to accumulate. Research indicates that about 275 million miles would be required for AVs to demonstrate, with 95 percent confidence, that their failure rate was at most 1.09 fatalities per 100 million miles—the equivalent of the 2013 US human-fatality rate. To demonstrate better-than-human performance, the number of miles required can quickly reach the billions.
If 100 autonomous vehicles drove 24 hours a day, 365 days a year, at an average speed of 25 miles an hour, it would take more than ten years to accumulate 275 million miles.
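The arithmetic behind that callout, as a quick check:

```python
# How long would a 100-car fleet need to log 275 million test miles?
fleet_size = 100
avg_speed_mph = 25
miles_per_year = fleet_size * avg_speed_mph * 24 * 365   # ~21.9 million
years_needed = 275_000_000 / miles_per_year
print(f"{miles_per_year / 1e6:.1f} million miles per year")
print(f"{years_needed:.1f} years to reach 275 million miles")  # ~12.6 years
```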
- Software-in-the-loop or model-in-the-loop simulations. A more feasible approach combines real-world tests with simulations, which can greatly reduce the number of testing miles required and is already familiar in the automotive industry. Simulations run vehicles through algorithms for various situations to demonstrate that a system can make the right decisions in a variety of circumstances.
- Hardware-in-the-loop (HIL) simulations. HIL simulations validate the operation of the actual hardware by feeding prerecorded sensor data into it. This approach lowers the cost of testing and validation and increases confidence in the results (see the replay sketch after this list).
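The replay idea behind these simulation approaches can be sketched in a few lines: stream prerecorded, time-aligned sensor frames through the stack under test and score its outputs against expected results. The frame format, labels, and toy stack below are assumptions for illustration.

```python
# Sketch of replay-based testing: feed recorded sensor frames through the
# system under test and compare its outputs with expected results.
from typing import Callable, Iterable

def replay(frames: Iterable[dict],
           stack: Callable[[dict], str],
           expected: list) -> float:
    """Run the stack on each recorded frame; return the agreement rate."""
    results = [stack(f) for f in frames]
    matches = sum(r == e for r, e in zip(results, expected))
    return matches / len(expected)

# Prerecorded log: each frame bundles time-aligned sensor readings.
log = [
    {"radar_range_m": 80.0, "camera_object": None},
    {"radar_range_m": 35.0, "camera_object": "vehicle"},
    {"radar_range_m": 8.0,  "camera_object": "vehicle"},
]
labels = ["cruise", "follow", "brake"]

def toy_stack(frame: dict) -> str:
    """Trivial stand-in for the perception-plus-planning stack under test."""
    if frame["radar_range_m"] < 10:
        return "brake"
    return "follow" if frame["camera_object"] else "cruise"

print(f"agreement: {replay(log, toy_stack, labels):.0%}")  # 100%
```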
Ultimately, companies will probably implement a hybrid approach that involves all of these methods to achieve the required confidence levels in the least amount of time.
Speeding up the process
While current assessments indicate that the introduction of fully autonomous vehicles is probably over a decade away, the industry could compress that time frame in several ways.
First, AV players should recognize that it will be extremely challenging for a single company, on its own, to develop the entire software and hardware stack required for autonomous vehicles. They need to become more adept at collaborating and forming industry partnerships. Specifically, they could link up with nontraditional industry participants, such as technology start-ups and OEMs. At a granular level, this means collaborating with companies (such as lidar and mapping suppliers) from strategically important segments.
Next, proprietary solutions may be prohibitively expensive to develop and validate, since each AV player would have to shoulder the full responsibility and risk alone. An open mind-set and agreed-upon standards will not only accelerate the timeline but also make the systems being developed more robust: interoperable components encourage a modular, plug-and-play system-development framework.
Another way to speed up the process would be to shift to integrated system development. Instead of the current overwhelming focus on components with specific uses, the industry needs to pay more attention to developing actual systems (and systems of systems), especially given the huge safety issues surrounding AVs. In fact, reaching, across a vehicle's entire life cycle, the levels of reliability and durability now seen in aircraft will in all likelihood become the industry's new mandate, and an emphasis on system development is probably the best way to achieve that goal.
The arrival of fully autonomous cars might be some years in the future, but companies are already making huge bets on what the ultimate AV archetype will look like. How will autonomous cars make decisions, sense their surroundings, and safeguard the people they transport? Incumbents looking to shape—and perhaps control—strategic elements of this industry face a legion of resourceful, highly competitive players with the wherewithal to give even the best-positioned insider a run for its money. Given the frenetic pace of the AV industry, companies seeking a piece of this pie need to position themselves strategically to capture it now, and regulators need to play catch-up to ensure the safety of the public without hampering the race for innovation.