Choosing the Eyes of Self-Driving Cars: A Battle of Sensors, Strategies, and Tradeoffs

By 2030, the self-driving car market is expected to exceed $2.2 trillion, with millions of cars navigating by AI and advanced sensor systems. Yet amid this rapid growth, a fundamental debate remains unresolved: which sensors are best for autonomous driving: lidar, cameras, radar, or something entirely new?
This question is far from academic. The choice of sensors affects everything from safety and performance to cost and energy efficiency. Some companies, such as Waymo, bet on redundancy and diversity, equipping vehicles with a suite of lidars, cameras, and radars. Others, such as Tesla, rely heavily on cameras and software innovation, a more minimalist and cost-effective approach.
Let’s explore these different strategies, the technical paradoxes they face, and the business logic that drives their decisions.
Why smart machines require smarter energy solutions
This is indeed an important question. When I launched a drone-related startup in 2013, I faced a similar dilemma. We were trying to create drones that could track human movement. At the time the idea seemed futuristic, but an obvious technological paradox soon emerged.
For a drone to track an object, it must analyze sensor data, and that requires computing power: an onboard computer. But the more powerful the computer, the more energy it consumes, so a larger battery is needed. A larger battery adds weight, and a heavier drone needs more energy to fly. A vicious cycle emerges: increased computing demand leads to higher energy consumption, more weight, and ultimately higher cost.
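To make that feedback loop concrete, here is a minimal Python sketch with invented numbers; the hover power, battery energy density, and endurance figures are assumptions for illustration, not real drone specs. It iterates battery mass to a fixed point, since a bigger battery itself demands more hover power:

```python
# Toy model of the compute-energy-weight spiral. All constants are
# illustrative assumptions, not real drone specifications.

FRAME_MASS_KG = 1.0          # airframe + motors + sensors
HOVER_POWER_W_PER_KG = 150   # power to keep 1 kg airborne (assumed)
BATTERY_WH_PER_KG = 200      # battery energy density (assumed)
FLIGHT_TIME_H = 0.5          # target endurance

def battery_mass(compute_power_w: float) -> float:
    """Fixed-point iteration: a bigger battery is heavier, and the
    extra weight itself demands more hover power."""
    mass = 0.0
    for _ in range(100):
        total_mass = FRAME_MASS_KG + mass
        power_w = compute_power_w + HOVER_POWER_W_PER_KG * total_mass
        mass = power_w * FLIGHT_TIME_H / BATTERY_WH_PER_KG
    return mass

# Going from 25 W to 50 W of onboard compute adds ~0.1 kg of battery,
# about 1.6x what the extra compute alone would need, because the
# added battery weight must itself be kept airborne.
print(round(battery_mass(25.0), 2))  # ~0.7 kg
print(round(battery_mass(50.0), 2))  # ~0.8 kg
```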
The same problem applies to self-driving cars. On the one hand, you want to equip the vehicle with every possible sensor to collect as much data as possible, synchronize it, and make the most accurate decisions. On the other hand, this greatly increases the system’s cost and energy consumption. And it is important to consider not only the cost of the sensors themselves but also the energy required to process their data.
The amount of data keeps growing, and the computing load with it. Over time, of course, computing systems become more compact and energy-efficient, and software becomes better optimized. In the 1980s, processing a 10×10-pixel image could take hours; today a system can analyze 4K video in real time and run other computations on the device without excessive power draw. Still, the performance dilemma persists, and AV companies can improve not only their sensors but also their computing hardware and algorithms.
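A back-of-the-envelope comparison shows just how far apart those two workloads are (assuming 4K at 30 frames per second):

```python
# Rough arithmetic: a 1980s-era 10x10 image versus one second of 4K video.
pixels_1980s = 10 * 10                    # 100 pixels per image
pixels_4k_per_sec = 3840 * 2160 * 30      # ~249 million pixels per second

print(pixels_4k_per_sec // pixels_1980s)  # ~2.5 million times more pixels
# A modern AV runs several such cameras plus lidar and radar streams,
# so the onboard computer's budget is stretched even further.
```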
Processing or perceiving?
When a system is forced to drop data, the performance issue is usually rooted in computational limits rather than in the lidar, camera, or radar sensors themselves. These sensors act as the eyes and ears of the vehicle, constantly capturing large amounts of environmental data. But if the onboard computing “brain” lacks the processing power to handle all this information in real time, it becomes overwhelmed. As a result, the system must prioritize certain data streams over others, ignoring some objects or scenarios in a given situation to focus on higher-priority tasks.
This computing bottleneck means that even if the sensors run perfectly, and even with redundancy built in for reliability, the vehicle may still struggle to process all the data efficiently. In such cases it is wrong to blame the sensors, because the problem lies in data-processing capacity. Stronger computing hardware and better-optimized algorithms are the key steps toward easing these challenges. By increasing the system’s ability to process large amounts of data, autonomous vehicles reduce the likelihood of missing critical information, resulting in safer and more reliable operation.
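As a toy illustration of that prioritization, here is a minimal sketch of greedy load shedding under a fixed compute budget. The stream names, priorities, and per-frame costs are all invented, and real AV schedulers are far more sophisticated:

```python
# Minimal sketch of priority-based load shedding under a compute budget.
# Stream names, priorities, and costs are hypothetical, for illustration.
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    priority: int        # higher = more safety-critical
    cost_gflops: float   # estimated per-frame processing cost

def schedule(streams: list[Stream], budget_gflops: float) -> list[str]:
    """Greedily keep the highest-priority streams that fit the budget;
    everything else is dropped (or downsampled) for this frame."""
    kept, used = [], 0.0
    for s in sorted(streams, key=lambda s: s.priority, reverse=True):
        if used + s.cost_gflops <= budget_gflops:
            kept.append(s.name)
            used += s.cost_gflops
    return kept

streams = [
    Stream("front_camera", priority=5, cost_gflops=40),
    Stream("lidar", priority=4, cost_gflops=60),
    Stream("rear_camera", priority=2, cost_gflops=40),
    Stream("radar", priority=3, cost_gflops=10),
]
print(schedule(streams, budget_gflops=120))
# ['front_camera', 'lidar', 'radar'] -- the rear camera is shed first
```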
Lidar, Camera, and Radar Systems: Pros and Cons
It is impossible to say that one sensor is simply better than another; each has its own purpose. The problem is solved by selecting the appropriate sensor for each specific task.
While it provides accurate 3D mapping, lidar is expensive and struggles in adverse weather such as rain and fog, which can scatter its laser signal. It also requires substantial computing resources to process its dense data.
While cost-effective, cameras are highly dependent on lighting conditions and perform poorly in low light, glare, or rapid lighting changes. They also lack inherent depth perception and struggle when the lens is obscured by dirt, rain, or snow.
Radar reliably detects objects in all kinds of weather, but its low resolution makes it difficult to distinguish small or closely spaced objects. It often produces false positives, detecting irrelevant items that may trigger unnecessary responses. Furthermore, unlike cameras, radar cannot help visually classify or identify objects.
By leveraging sensor fusion, combining data from lidar, radar, and cameras, these systems gain a more comprehensive and accurate understanding of their environment, enhancing safety and real-time decision-making. Keymakr’s collaboration with leading ADAS developers demonstrates that this approach is critical to system reliability. We have been building diverse, high-quality datasets to support model training and improvement.
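One classic fusion idea can be shown in a few lines: inverse-variance weighting of independent range estimates for the same object. The noise figures below are invented for illustration, and production stacks typically rely on Kalman-style filters and learned fusion models rather than anything this simple:

```python
# Minimal sketch of inverse-variance fusion of range estimates.
# Variances are invented for illustration.

def fuse(estimates: list[tuple[float, float]]) -> tuple[float, float]:
    """Each estimate is (range_m, variance). Noisier sensors get less weight."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * r for w, (r, _) in zip(weights, estimates)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Hypothetical readings of the same pedestrian's range:
readings = [
    (21.2, 0.01),  # lidar: precise
    (20.0, 1.00),  # radar: coarse but weather-robust
    (22.5, 4.00),  # camera depth estimate: least certain
]
print(fuse(readings))  # ~(21.19, 0.0099), dominated by the lidar
```

Note that the fused variance comes out lower than even the best single sensor’s, which is exactly why pairing a precise lidar with a weather-robust radar and a camera pays off.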
Waymo vs Tesla: A tale of two visions of autonomy
Few debates in the AV world run as hot as Tesla versus Waymo. Both are building a future of mobility, but with completely different philosophies. So why do Waymo cars look like sensor-packed spacecraft, while Teslas carry few visible external sensors?
Take a look at a Waymo vehicle. It is a stock Jaguar adapted for autonomous driving, with dozens of sensors on the roof: laser sensors, cameras, rotating laser systems (the so-called “rotators”), and radar. And that is not all: cameras in the mirrors, sensors on the front and rear bumpers, and a long-range vision system, all synchronized with one another.
If such a vehicle is involved in an accident, the engineering team adds new sensors to capture the information that was missed. Their approach is to use the maximum amount of available technology.
So why doesn’t Tesla follow the same path? One major reason is that Tesla has not yet brought its robotaxis to market. Beyond that, its approach focuses on cost minimization and innovation. Tesla considers lidar impractical because of its high cost: an RGB camera costs about $3, while a lidar unit can cost $400 or more, so a full set of eight cameras comes to under $25, versus $400-plus for a single lidar. Lidar units also contain mechanical parts, rotating mirrors and motors, which makes them more prone to failure and replacement.
Cameras, by contrast, are static. They have no moving parts, are more reliable, and can run for decades until the housing degrades or the lens clouds over. Cameras are also easier to integrate into a car’s design: they can be hidden within the body, almost invisible.
There are also big differences in production methods. Waymo takes an existing platform (a production Jaguar) and mounts its sensors onto it; it has no other choice. Tesla, on the other hand, builds its vehicles from scratch and can integrate the sensors into the body from the start, masking them. Formally they are listed in the specifications, but visually they barely attract attention.
Currently, Tesla uses eight cameras around the car: at the front, at the rear, in the side mirrors, and on the doors. Will it ever use other sensors? I believe it will.
Based on my experience as a Tesla driver who has also ridden in Waymo vehicles, I believe that integrating lidar would improve Tesla’s Full Self-Driving system. In my opinion, Tesla’s FSD currently lacks precision when driving. Adding lidar could enhance its ability to handle challenging conditions such as harsh sunlight, airborne dust, or fog. That improvement could make the system safer and more reliable than relying on cameras alone.
But from a business perspective, when a company develops its own technology, its goal is competitive, technological advantage. If it can create a more efficient and cheaper solution, it opens the door to market dominance.
Tesla follows this logic. Musk does not want to take the same path as companies like Volkswagen or Baidu, which have also made great progress. Even systems like Mobileye and Isight, installed in older cars, have demonstrated decent autonomy.
But Tesla’s goal is to be unique; that is its business logic. If you don’t offer something better, the market won’t choose you.