Cars, trucks, buses and heavy machinery are on the path to full autonomy – meaning fully capable of self-driving in all conditions and terrain. Fully autonomous driving, however, is not yet available. And whether the path to full autonomy takes five years or 20, highly-automated vehicles (HAVs), which range anywhere from cars with collision-mitigation braking to fully autonomous (a.k.a. “self-driving”) vehicles, will need a robust, precise and data-rich remote sensing technology to enable them to “see” their entire surroundings.
No HAV could possibly hope to navigate any real-world setting without a detailed and continuously updated 3D map or “point cloud” in every direction (360º horizontally), with a broad vertical field-of-view, high precision and a range extending out 100 meters or more. Today, there is only one technology that can reliably deliver on these stringent requirements: Light Detection and Ranging, or LiDAR. It is laser-precise, increasingly affordable and arguably represents the future of autonomous or semi-autonomous driving.
For Safety’s Sake
Seat belts, anti-lock brakes, airbags, child car seats: All these now-standard, government-mandated automotive safety features were once innovations that some consumers criticized or even shrugged off. And yet over the past half-century car safety technologies have saved more than 600,000 lives in the U.S. One recent study by the National Highway Traffic Safety Administration found that seat belts alone save some 15,000 lives each year – a quiet improvement in public health and mortality rates comparable to completely curing bladder or kidney cancer.
Today, the automobile is undergoing a radical transformation unlike anything since the dawn of the internal combustion engine. HAVs are poised to yield hundreds of billions of dollars in estimated economic benefits to the U.S. economy alone. And early indications suggest HAVs could drastically reduce the 94 percent of all car, truck and bus crashes that are due to human error.
A key technology that will enable nearly all variations of HAVs is LiDAR, a rapid and eye-safe laser 3D sensing technique that’s similar to radar but uses infrared light instead of radio waves. Just as a bat uses echolocation to “see” its surroundings, LiDAR uses infrared lasers. LiDAR maps out a 3D space pixel-by-pixel, measuring the distance to each pixel by timing how long brief infrared laser pulses take to reach an object, reflect off it and return to the sensor. The LiDAR unit then calculates the distance to the object using the formula distance equals rate times time – in this case, rate is the speed of light and time is the measured round-trip time-of-flight of the laser pulse, so the one-way distance is half the product.
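The time-of-flight calculation described above can be sketched in a few lines (a minimal illustration; the function name and sample timing are hypothetical):

```python
# Illustrative sketch of the time-of-flight distance calculation:
# distance = rate * time, halved because the pulse makes a round trip.

SPEED_OF_LIGHT_M_PER_S = 299_792_458  # "rate" is the speed of light

def tof_distance_m(time_of_flight_s: float) -> float:
    """One-way distance to a reflecting object from a round-trip pulse time."""
    return SPEED_OF_LIGHT_M_PER_S * time_of_flight_s / 2

# A pulse returning after roughly 667 nanoseconds corresponds to an
# object about 100 meters away.
print(round(tof_distance_m(667e-9)))  # → 100
```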
Velodyne LiDAR goes two further steps by combining multiple laser/detector pairs (up to 64) and rotating the entire unit 360º up to 20 times per second. The rapid laser pulses, fast rotation speeds and multiple lasers enable Velodyne LiDAR to achieve some of the highest resolution and most reliable data in the industry today – 360º 3D maps consisting of up to 1.3 million data points (pixels) per second, representing objects up to 200 meters away at a resolution of just two centimeters. In addition, such data-rich performance is possible while driving at highway speeds.
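The figures quoted above imply a per-laser firing rate and an angular spacing between measurements that can be back-calculated (the arithmetic below is illustrative only, not an official specification):

```python
# Back-of-envelope arithmetic from the quoted figures: 64 lasers,
# 1.3 million points per second, up to 20 rotations per second.
# (Illustrative only; exact firing rates vary by sensor model.)

num_lasers = 64
points_per_second = 1_300_000
rotations_per_second = 20

pulses_per_laser_per_s = points_per_second / num_lasers
points_per_rotation = points_per_second / rotations_per_second
azimuth_step_deg = 360 / (pulses_per_laser_per_s / rotations_per_second)

print(f"{pulses_per_laser_per_s:.0f} pulses per laser per second")
print(f"{points_per_rotation:.0f} points per full 360º sweep")
print(f"~{azimuth_step_deg:.2f}º of rotation between measurements")
```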
Chief LiDAR Technologies
In all, the chief LiDAR technologies in the marketplace are:
- Flash LiDAR: Illuminates a narrow field of view with a flash of infrared light, requiring six or more sensors to achieve a full 360º view. Although often lower priced than mechanically rotating units, Flash LiDAR generally suffers from lower functionality.
- Solid State LiDAR: Solid state LiDAR (with no mechanical components) remains a relatively new field for LiDAR technology. Velodyne has developed its own solid-state LiDAR sensors, using a monolithic gallium nitride (GaN) integrated circuit. The design results in significant advances in sensor miniaturization, reliability and cost reduction. Each integrated circuit is less than 4mm square.
- Rotating, multi-laser (or multi-channel) LiDAR: Velodyne’s patented 16, 32 and 64-laser beam sensors remain the gold standard of LiDAR accuracy and reliability. Today, Velodyne’s multi-channel LiDAR sensors are used by virtually every car manufacturer and tier-1 supplier in the auto industry, as well as by many manufacturers of heavy equipment, trucks and marine navigation.
Due to its laser precision and the high-speed processing engines that typically power it, LiDAR is unrivaled in both the quality and pixel density of the millisecond-by-millisecond 3D maps it generates. As a complement to other remote sensing technologies – onboard cameras, sonar, radar – LiDAR is, as Consumer Reports recently noted, “the most… important technological piece of the self-driving puzzle.”
But the LiDAR marketplace is also increasingly cluttered with competing companies, claims, technologies and standards. What are the benefits and downsides to the technology? And what difference can it make in increasing automotive safety and reducing the fatalities, injuries and crashes on the roadways of the 2020s and beyond?
Handling the Eight Hazards of Automated Driving
LiDAR greatly improves safety of automated, semi-automated or ADAS-enabled vehicles encountering any of the following eight hazards of automated driving:
- Nighttime driving
Statistically, driving at night entails a three-times greater risk of fatal crashes per mile of travel than driving during the daytime. Night road conditions involving low-luminance, low-contrast objects (vehicles, trees, animals, pedestrians, etc.), some of which are themselves traveling at high speeds, fundamentally strain the limits of human vision. HAVs will increase nighttime seeing capabilities and reduce the hazards of operating a vehicle after dark.
Yet not all HAV remote sensing technologies are well suited for nighttime use. For example, short-range or long-range radar (at 24 GHz and 77 GHz, respectively) can resolve objects equally well in daylight and darkness. But short-range radar cannot resolve its far field, nor long-range radar its near field, at useful resolutions. So a workable solution using both radar formats would have to incorporate additional, complicating sensor-fusion software to stitch together signals from both sensors, wasting precious milliseconds in assembling a raw 3D point cloud to send to the HAV’s computers.
Ultrasonic/sonar and infrared (a.k.a. night vision) cameras offer slightly better prospects for low-light, low-contrast seeing conditions. But at any range other than 0 to 2 meters, ultrasonic also offers poor resolution. And at nominal to long-range distances, infrared cameras (presuming at least two placed on the car for binocular depth perception) offer little in the way of 3D information.
By contrast, a single LiDAR unit resolves near-field (0-2 m), nominal (2-30 m) and long-range (30-100 m) objects in full 3D. At both narrow and wide vertical fields-of-view, LiDAR (especially multi-laser LiDAR) also offers ideal or near-ideal seeing compared to every other sensor technology available today.
When Ford recently tested a self-driving Ford Fusion Hybrid autonomous vehicle in complete darkness without headlights, it used a suite of Velodyne HDL-32E LiDAR units as its essential nighttime seeing sensor. “The main purpose,” says Jim McBride, Ford senior technical leader for autonomous vehicles, “was, in a visceral way, to educate people that we don’t need lights to drive in the dark.”
- Long-range seeing
Human vision, for all its versatility and adaptability, can still be pushed to its limits in challenging driving situations. At long ranges, for instance, eyeballing a moving object’s speed in low-light is notoriously unreliable and potentially puts the driver at greater risk of an accident. Also, drivers viewing long-range objects suffer increasingly unreliable 3D depth perception as available light decreases.
Ultrasonic, visible light and infrared cameras all fall short in delivering reliable and sufficient information about the 3D environment surrounding the car at distances of 30 meters or more. And while some long-range radar units can deliver satisfactory results (albeit at lower resolutions than LiDAR), radar also suffers from background noise problems and reflectivity discrepancies with metallic versus non-metallic objects.
So LiDAR is simply an essential set of long-range eyes to have onboard a vehicle, either as a component of an advanced driver assistance system (ADAS) or as part of the detector suite for a self-driving vehicle.
- Blind spots and wide-angle hazards
Blind spots in any vehicle can be deadly. Whether they’re from the windshield pillar, the side-view mirror, the interior rear-view mirror or any other visual obstructions inside the cabin, blind spots can represent a dangerous breakdown in field-of-view continuity. According to NHTSA, there are 840,000 blind-spot accidents in the U.S. every year, causing 300 deaths. And yet, for all the manifest dangers blind spots pose to human drivers, blind spots in remote sensing technologies (cameras, radar, sonar, ultrasonic, LiDAR) can be just as substantial.
As noted above, the wrong combination of remote sensing technologies could yield some dangerous levels of vehicular myopia. By contrast, a smart combination of modalities could greatly reduce and mitigate the problems of human (and machine) blind spots.
Depending on sensor placement on the vehicle and the line-of-sight blockage caused by other parts of the car, a rotating, multi-laser LiDAR unit represents a crucial component of a vehicle’s wide-angle sensor suite. No other single technology can yield an immediate 360º sweep of its environment, updated 20 times per second.
- Difficult, real-world object detection and recognition
To be broadly adopted in the consumer marketplace, HAVs are today being safety-engineered out to “six 9s,” meaning they must be at least 99.9999% reliable. This entails being able to handle not just run-of-the-mill, everyday driving scenarios but also the “perfect storm” ones too: a summer sunset producing a brief burst of blinding road glare combined with an oncoming car swerving into the lane at exactly the wrong moment, for example.
A human driver might not do well with such an unfortunate set of circumstances, but LiDAR, as a key component of a vehicle’s remote sensing suite, could assist and potentially mitigate or eliminate some of these kinds of problems.
Of course, it’s not surprising that carmakers consistently want more and better data about the vehicle’s surroundings from its remote sensors – radar, sonar, infrared, cameras and LiDAR. And time is always of the essence. A fraction of a second can be the difference between completely avoiding a collision and failing to avoid it.
So in the scenario above, where road glare is a key contributing factor, a sensor unaffected by the vagaries of ambient lighting such as LiDAR would play a crucial role. And the broadness of the sensor’s field-of-view (360º horizontal, 30º to 40º vertical, for Velodyne LiDAR) is equally significant.
Just as with a consumer camera, the higher the pixel count generated with every 360º sweep, the richer the 3D map (a.k.a. “point cloud”), which allows for the highest level of object recognition and tracking capabilities in real-time.
Moreover, having a 360º field of view allows distinguishing objects not only directly in front, but also to the sides and rear. LiDAR data can discern a person standing on the curb from a streetlight, as well as identify a fast-closing object approaching from the sides or rear. Rotating multi-laser LiDAR’s surround-view capability is a major advantage for object recognition and safety compared to mostly directional, limited field-of-view, limited laser-count sensors.
- Trajectory forecasting and tracking
Highly-autonomous vehicles must not only have a second-by-second, data-rich 3D map of their surroundings, they also need accurate trajectories for all the vehicles and other nearby objects in motion to forecast where they’ll be in the seconds and milliseconds ahead.
Unlike cameras that can’t access laser-precise, time-of-flight distance information, LiDAR is inherently 3D and thus an inherently trajectory-yielding technology.
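As a toy illustration of why direct 3D position measurements make trajectories easy to derive, consider a constant-velocity forecast from an object’s position in two successive sweeps (names and numbers below are hypothetical; production trackers typically use filters such as Kalman filters):

```python
# A minimal sketch of trajectory forecasting from LiDAR positions:
# given an object's 3D position in two successive sweeps, a
# constant-velocity model extrapolates where it will be shortly after.
# (Hypothetical names and numbers, for illustration only.)

def forecast_position(p_prev, p_curr, dt_between, dt_ahead):
    """Linear extrapolation from two 3D position measurements (meters)."""
    velocity = tuple((c - p) / dt_between for p, c in zip(p_prev, p_curr))
    return tuple(c + v * dt_ahead for c, v in zip(p_curr, velocity))

# Object seen at (30, 0, 1) m, then at (29, 0.5, 1) m one 50 ms sweep
# later: it is closing at 20 m/s and drifting sideways at 10 m/s.
# Forecast its position half a second ahead:
print(forecast_position((30.0, 0.0, 1.0), (29.0, 0.5, 1.0), 0.05, 0.5))
```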
Super Bowl 50 in 2016 proved an unexpected showcase of LiDAR’s trajectory-mapping talents. The MVP Interactive Quarterback Challenge enabled participants to stand on a patch of Astroturf and try their hand at a simulated pro football play. The goal was to take the snap and throw real footballs as quickly and accurately as possible toward the life-sized video screen depicting each play. A Velodyne VLP-16 LiDAR read off each tossed football’s speed and direction to predict where the ball would land.
“The same company that provides next-generation LiDAR sensors for Ford’s self-driving cars now offers technology that’s fast and accurate enough to capture a football’s trajectory in fractions of a second, in real-time 3D, for MVP’s Quarterback Challenge,” says Velodyne’s former Director of Technical Solutions David Oroshnik. “Beyond this football-themed solution, the simulator bodes well for an entire range of applications that can benefit from real-time multi-channel LiDAR technology.”
In fact, since it helped five DARPA Urban Challenge teams complete the legendary self-driving car competition in 2007, Velodyne LiDAR has probably mapped and forecast more vehicle trajectories than any other LiDAR technology in the world. (And when China held its own self-driving car competition in 2015, the Intelligent Vehicle Future Challenge, Velodyne LiDAR powered 17 of the 20 competitors, including all top 5 finishers.)
“In this sensor competition, the clear winner was Velodyne, a Morgan Hill company run by brothers David and Bruce Hall, who developed a non-visual detection system that they sold as a steering input to five of the six teams finishing the race,” said The San Francisco Chronicle of DARPA’s Urban Challenge competition in 2007. “The system, conceived by engineering genius David Hall, uses a spinning ball that shoots out 64 lasers up to 300 feet and computes how long it takes the light to bounce back.”
- Inclement weather
According to the U.S. Department of Transportation, inclement weather accounts for 22 percent of all vehicle crashes, 19 percent of all crash injuries and 16 percent of all crash fatalities. Falling snow, sleet or rain, of course, represents a challenge for any remote sensing technology. A camera (using ambient light from the vehicle’s headlights) is going to be no better off than a human driver trying to peer through the sheets of precipitation. Radar may at first glance seem the ideal technology, because radio waves can peer through at least light levels of precipitation.
But LiDAR is an important sensing modality not to be sidelined by bad weather either. Multi-laser LiDAR provides the greatest assist in rain or snow: a beam completely obstructed by a large snowflake, hailstone or raindrop is just one of 16, 32 or 64 beams firing at any given moment. Moreover, Velodyne uses lenses on its LiDAR sensors that spread the individual beams into effectively millions of infrared beams at different angles, enabling them to find “holes” between snowflakes and continue toward the solid objects they are actually intended to illuminate.
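The redundancy argument can be made concrete with a little probability: if each beam were independently blocked with some probability, the chance that an object escapes every channel shrinks geometrically with the channel count (a hypothetical, deliberately simplified model):

```python
# Simplified, hypothetical model of multi-channel redundancy in
# precipitation: each beam is independently blocked with probability p,
# so the object is seen unless all channels are blocked.

def p_object_seen(p_blocked: float, num_channels: int) -> float:
    """Probability that at least one of num_channels beams gets through."""
    return 1 - p_blocked ** num_channels

# Even with a (pessimistic) 50% chance of any single beam being blocked,
# 16 or more channels make a total miss vanishingly unlikely.
for channels in (1, 16, 32, 64):
    print(channels, p_object_seen(0.5, channels))
```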
In fact, Ford and the University of Michigan have developed an algorithm for its Velodyne HDL-32E LiDAR sensors to enable better seeing in rain, snow or sleet. Ford’s “Snowtonomy” program effectively subtracts off the spurious reflections from snow and rain. It does this by piecing together echoes of the LiDAR pulse that were diverted by an intervening raindrop and then reflected back by the ground. Solid objects won’t produce echoes in this way, but small, transient things like raindrops will – and those echoes can be detected and ultimately subtracted by the computer processing the LiDAR signal.
Ford’s McBride said, “If you record not just the first thing your laser hits, but subsequent things, including the last thing, you can reconstruct a whole ground plane behind what you’re seeing, and you can infer that a snowflake is a snowflake.”
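The multi-echo idea McBride describes can be sketched as follows (a simplified illustration with hypothetical data structures: each pulse records a list of echoes, and keeping the last return per pulse recovers the surface behind transient precipitation):

```python
# Simplified sketch of multi-echo precipitation filtering: a snowflake
# or raindrop produces an early echo, while the solid surface behind it
# produces the last one. Keeping only the last return per pulse
# recovers the ground plane. (Hypothetical data layout.)

def filter_precipitation(returns_per_pulse):
    """For each pulse's list of (range_m, intensity) echoes, keep the last."""
    return [echoes[-1] for echoes in returns_per_pulse if echoes]

pulses = [
    [(1.2, 0.1), (42.7, 0.8)],   # snowflake at 1.2 m, ground at 42.7 m
    [(43.1, 0.9)],               # clear pulse: ground only
    [(0.8, 0.05), (41.9, 0.7)],  # raindrop, then ground
]
print(filter_precipitation(pulses))
# → [(42.7, 0.8), (43.1, 0.9), (41.9, 0.7)]
```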
- Potholes, road construction, unmapped hazards
Unmapped hazards and road complications like construction sites and potholes are always a threat to any driver, whether human or an autonomous system. Humans have the advantage of intuition and “street sense” that can enable split-second judgment calls. But judgment calls can only be as good as the sensory (or sensor) information behind them. And because of the rich and reliable detail of the 3D maps it makes (2-centimeter resolution out to 100-meter distances), LiDAR will be a crucial component of HAV sensor suites seeking situation awareness of even unmapped terrain or obstacles.
Because of LiDAR’s established track record in the emerging self-driving car industry, HAV engineers and developers have also built up an industry’s worth of software and hardware LiDAR interfaces and processing code.
“We have long been the industry’s reference point in LiDAR, delivering innovative, versatile products based on proven technology to some 500 customers thus far,” says Velodyne president Mike Jellen. “Within the last eight years, vehicles equipped with Velodyne LiDAR sensors are estimated to have carried in excess of 3,000 drivers and traveled more than 50 million incident-free miles. That’s the track record of an undisputed market leader.”
- Unpredictable actions by human drivers, animals, pedestrians, bicyclists, etc.
Finally, there’s always the X factor presented by human drivers, pedestrians, bicyclists, animals and other inherently unpredictable actors in a scene. Here, LiDAR has also proved itself through road testing in a city known for its notoriously unpredictable driving conditions.
“Fully autonomous driving under mixed road conditions is universally challenging, with complexity further heightened by Beijing’s road conditions and unpredictable driver behavior,” says Wang Jing, SVP of Baidu and General Manager of Baidu’s newly established Autonomous Driving Business Unit.
“Velodyne LiDAR has become the de facto standard for autonomous vehicles and we’re honored to have assisted Baidu in its rigorous inaugural test,” said Wei Weng, Velodyne Asia Sales Director. “Baidu’s autonomous car performed superbly, completing nuanced maneuvers that included recognizing road lanes and the distance between – and speed – of other vehicles. These are genuine breakthroughs for the company.”
In today’s world, cars, trucks, buses and heavy machinery are all increasingly being automated. Ultimately, within a five- to 20-year timeframe, depending on the forecast one consults, those semi-automated vehicles will become fully automated. But both during and after that transition, the remote sensing technologies that highly automated vehicles use will matter greatly. Whether it’s nighttime driving, long-range or wide-angle seeing, bad weather or real-world “perfect storms,” the HAV will only be as effective and safe as the “eyes” it possesses.
LiDAR provides a laser-precise 3D point cloud or map of a vehicle’s surrounding environment. In this case, the solutions Velodyne LiDAR produces represent the gold standard of sensor accuracy and reliability. No wonder nearly all the major automakers as well as Google/Waymo, Baidu and numerous other research, industry, robotics and heavy machinery companies rely on Velodyne as their preferred LiDAR provider.
The remaining hurdle to widespread LiDAR adoption in the HAVs of the future is price. As Consumer Reports also noted, “Lidar systems are accurate, but cost as much as $7,500 per car.” However, with the opening of its own Megafactory and development of its own next-generation, low-cost and solid-state LiDAR sensor, Velodyne is now paving the way toward a price point in the hundreds, not thousands of dollars. Many industry observers now suspect if LiDAR makers can clear the affordability hurdle, laser-precise 3D LiDAR seeing could become as standard an amenity for cars of the future as an engine and four wheels.
To learn more, visit Velodyne LiDAR’s website.
About Velodyne LiDAR
Founded in 1983 by David S. Hall, Velodyne Acoustics Inc. first disrupted the premium audio market through Hall’s patented invention of virtually distortionless, servo-driven subwoofers. Hall subsequently leveraged his knowledge of robotics and 3D visualization systems to invent groundbreaking sensor technology for self-driving cars and 3D mapping, introducing the HDL-64 Solid-State Hybrid LiDAR sensor in 2005. Since then, Velodyne LiDAR has emerged as the leading supplier of solid-state hybrid LiDAR sensor technology used in a variety of commercial applications including advanced automotive safety systems, autonomous driving, 3D mobile mapping, 3D aerial mapping and security. The compact, lightweight HDL-32E sensor is available for applications including UAVs, while the VLP-16 LiDAR Puck is a 16-channel LiDAR sensor that is both substantially smaller and dramatically less expensive than previous-generation sensors.