Sense and sensitivity in self-driving cars


India’s ecosystem for self-driving technology is booming, with innovations at every level, including sensor technology.

The Consumer Electronics Show (CES) is an influential tech event held every January. At this year’s edition (CES 2022), General Motors announced plans to introduce an “autonomous personal vehicle” by 2025. Mobileye, a leader in autonomous driving, presented its technology roadmap, emphasizing robustness and safety. These companies are not alone. Waymo has been operating driverless taxis in Phoenix since 2019. Apple reportedly plans to build a self-driving car in the next few years. At the heart of this technology are three sensors: camera, radar and LIDAR (Light Detection and Ranging), all of which help the vehicle accurately perceive its surroundings. Surprisingly, much of this sensor technology is already in cars on the roads today. Cameras and radar sensors routinely provide “driver assistance” functionality such as ensuring cars stay within lane markings, warning of approaching vehicles when changing lanes, and maintaining a safe distance from the vehicle in front.


  • Self-driving technology is advancing rapidly, with several big players investing heavily. The technology is fundamentally based on three sensors: cameras, radar and LIDAR.
  • A camera can discern colors and shapes and recognize traffic signs. However, it transmits no detection signal of its own and depends on ambient light reflected from objects. A radar transmits its own signals, but it cannot discern color or recognize traffic signs and also has poor “spatial resolution”. A LIDAR scans the environment with a laser beam; in many ways it combines the best features of radar and camera, but it cannot penetrate fog or discern colors.
  • Considering the market potential, many efforts have been made to reduce the costs and fill the performance gaps of these sensors.

The three sensors

A camera system works much like the human eye: it can discern colors and shapes, and recognize road signs, lane markings and so on. Most cars are equipped with stereo cameras, that is, two cameras separated by a short distance, which allows the system to perceive depth (as human eyes do). However, a camera has its limits. It transmits no detection signal and relies on ambient light reflected from objects. Thus, the lack of adequate ambient light (at night) limits its ability, as do other environmental conditions such as fog and blinding sunlight.
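The depth perception from a stereo camera pair can be sketched with the standard pinhole-camera relation: depth is focal length times baseline divided by the pixel “disparity” (how far the same object shifts between the two images). The numbers below are illustrative, not from the article.

```python
# Depth from stereo disparity: a minimal sketch (illustrative values).
# For two identical, horizontally aligned cameras:
#   depth Z = f * B / d
# where f is the focal length in pixels, B the baseline (camera
# separation), and d the disparity (pixel shift between the two images).

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return depth in meters for a given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for an object at finite distance")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 30 cm baseline, 15 px disparity.
print(depth_from_disparity(1000, 0.30, 15))  # 20.0 (meters)
```

Note the inverse relationship: nearby objects produce large disparities, while distant objects produce small ones, which is why stereo depth estimates become less precise far from the car.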

A radar sensor transmits its own signals, which bounce off targets and return to the radar. Thus, unlike a camera, a radar does not depend on ambient light. In addition, a radar transmits radio waves, which can pass through fog. The radar measures the time between signal transmission and the arrival of the reflected signal to estimate the distance to the target. A moving target induces a frequency shift in the signal (the “Doppler shift”), which allows the radar to instantly and accurately measure the target’s speed. Thus, a radar can accurately measure the range and speed of targets largely independent of environmental conditions such as fog, rain and sunlight. However, unlike a camera, a radar cannot discern color or recognize traffic signs. A radar also has poor “spatial resolution”: an approaching car appears as a single blob, and individual features (such as the wheels or the body outline) are not discernible as they would be in a camera image. Thus, the capabilities of a camera and a radar sensor complement each other, which is why many cars are equipped with both cameras and radars.
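The two radar measurements described above reduce to simple physics: range follows from the round-trip travel time at the speed of light, and radial speed follows from the Doppler shift relation f_d = 2·v·f_c / c. A minimal sketch, with illustrative numbers (the 77 GHz carrier is a common automotive radar band, but the specific values are assumptions for the example):

```python
# Radar range and speed from time-of-flight and Doppler shift: a sketch.

C = 3.0e8  # speed of light, m/s

def radar_range(round_trip_s: float) -> float:
    """Distance to target; the signal travels out AND back, hence the /2."""
    return C * round_trip_s / 2

def radar_speed(doppler_hz: float, carrier_hz: float) -> float:
    """Radial speed from the Doppler relation f_d = 2 * v * f_c / c."""
    return doppler_hz * C / (2 * carrier_hz)

# A reflection arriving 1 microsecond after transmission is 150 m away.
print(radar_range(1e-6))  # 150.0 (meters)

# A 77 GHz radar observing a ~5.13 kHz Doppler shift sees a target
# closing at roughly 10 m/s (36 km/h).
print(radar_speed(5133.3, 77e9))
```

The microsecond timescales involved are why radar hardware, rather than software, does this timing: light covers the distance to a car ahead and back in well under a millionth of a second.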

LIDAR is another sensor used in autonomous vehicles. A LIDAR scans the environment with a laser beam. In many ways, LIDAR combines the best features of radar and camera. Like a radar, it generates its own transmit signal (so it does not depend on daylight) and can accurately determine distances by measuring the time difference between the transmitted and reflected signals. The narrow laser beam used for detection gives it spatial resolution similar to that of a camera. However, LIDAR has its drawbacks: its signals cannot penetrate fog, and it cannot discern colors or read traffic signs. The technology is also much more expensive than radar or camera.
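Because the LIDAR sweeps its narrow beam across the scene, each laser return is naturally reported as a range at a known pointing direction. Converting such a return into Cartesian coordinates yields the “point cloud” the vehicle reasons about. A minimal sketch, assuming one common automotive axis convention (x forward, y left, z up; the convention itself is an assumption, not from the article):

```python
# Turning one LIDAR return (range, azimuth, elevation) into a 3-D point.
import math

def spherical_to_xyz(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert a single laser return to (x, y, z): x forward, y left, z up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# A return from 20 m straight ahead at sensor height:
print(spherical_to_xyz(20.0, 0.0, 0.0))  # (20.0, 0.0, 0.0)
```

A real sensor produces hundreds of thousands of such points per second; applying this conversion to every return is what builds the dense 3-D outline of cars, curbs and pedestrians that gives LIDAR its camera-like spatial resolution.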

Given the market’s potential, many efforts are underway both to reduce costs and to fill the performance gaps of each of these sensors. Radar companies are developing imaging radars that greatly improve radar’s spatial resolution, while new technologies are being explored to lower the cost of LIDAR. At the same time, camera-based perception continues to improve through the application of deep learning. However, each sensor has limitations rooted in physics and technology. Although only a camera can recognize traffic signs, it cannot match the performance of a radar in adverse weather. Similarly, a radar cannot match the spatial resolution of a camera or LIDAR. Experts agree that driverless vehicle technology cannot rely on just one type of sensor. There is, however, debate over an optimal sensor suite that is both safe and cost-effective. Some researchers believe that cameras and radar, with a good deep-learning back-end, can eliminate the need for LIDAR.

Sensor technology in India

Most of Texas Instruments’ automotive radar R&D takes place at its development center in India. Velodyne, a pioneer in LIDAR technology, recently opened a development center in Bangalore. Steradian Semiconductors, a start-up based in India, has developed an imaging radar solution. Many of the major semiconductor companies (NXP, TI, Qualcomm) are developing, at their R&D centers in India, hardware and software for the perception algorithms that feed on these sensors.

Sandeep Rao is with Texas Instruments.

