Always-On Sensor Fusion: Why bitsensing combines Radar and Camera at all times

Autonomous vehicles generally follow one of two approaches to perception.
Some systems switch sensors depending on conditions. Others rely on multiple sensors simultaneously, at all times.
At bitsensing, we are clear about our position: safety is not achieved by choosing which sensor to trust. It is achieved by seeing together, always.
True reliability in autonomy does not come from physically combining sensors. It comes from aligning visual and physical data in real time through an always-on fusion architecture.
The Risk of On-Demand Sensing: When Perception Breaks
Many perception stacks rely primarily on cameras under normal conditions, activating radar only when visibility degrades due to rain or fog. While appealing in theory, this “on-demand” model introduces a critical weakness: discontinuity in perception.
Switching sensors requires data alignment and synchronization. Even a short delay can create a perception gap in high-speed driving.
According to data from the U.S. National Highway Traffic Safety Administration (NHTSA) and SAE studies, at 100 km/h (27.7 m/s), a system delay of just 0.1 seconds means a vehicle travels approximately 2.77 meters without control intervention. In collision scenarios, that distance can be decisive.
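The arithmetic behind that figure is simple: the uncovered distance is speed multiplied by delay. A minimal sketch, with illustrative speeds and delays:

```python
def perception_gap_m(speed_kmh: float, delay_s: float) -> float:
    """Distance in meters traveled while perception is unavailable."""
    return (speed_kmh / 3.6) * delay_s  # km/h -> m/s, then d = v * t

# 100 km/h is roughly 27.8 m/s, so a 0.1 s handover costs ~2.8 m.
for speed_kmh in (50, 100, 130):
    for delay_s in (0.05, 0.1, 0.2):
        gap = perception_gap_m(speed_kmh, delay_s)
        print(f"{speed_kmh} km/h, {delay_s:.2f} s delay -> {gap:.2f} m blind")
```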
Even in clear weather, cameras struggle with accurate distance and velocity estimation. Research from the University of Michigan Transportation Research Institute (UMTRI) shows that monocular camera distance errors increase exponentially beyond 50 meters. Under adverse weather conditions, error rates can rise to 20–30%.
Without radar continuously operating alongside vision, there is no real-time cross-validation mechanism to correct these visual uncertainties.
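To make that concrete, here is a minimal sketch of what such a cross-validation check could look like: a monocular range estimate gated against radar's measured range. The class names and the 15% tolerance are illustrative assumptions, not bitsensing's implementation.

```python
from dataclasses import dataclass

@dataclass
class CameraObject:
    distance_m: float   # monocular distance estimate
    label: str          # semantic class, e.g. "car"

@dataclass
class RadarObject:
    range_m: float       # directly measured range
    velocity_ms: float   # directly measured radial velocity

def is_consistent(cam: CameraObject, radar: RadarObject,
                  rel_tol: float = 0.15) -> bool:
    """Flag camera range estimates that drift from the radar measurement.

    rel_tol is an illustrative 15% tolerance; a deployed system would
    scale it with range, since monocular error grows with distance.
    """
    return abs(cam.distance_m - radar.range_m) <= rel_tol * radar.range_m

# Example: the camera overestimates a car at 65 m that radar ranges at 54 m.
cam = CameraObject(distance_m=65.0, label="car")
rad = RadarObject(range_m=54.0, velocity_ms=-3.2)
print(is_consistent(cam, rad))  # False -> fall back to the radar range
```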
Cameras are also vulnerable to backlighting, lighting transitions at tunnel entrances and exits, and low-light night environments. When radar engagement is delayed, maintaining a high Automotive Safety Integrity Level (ASIL) becomes significantly more challenging.
bitsensing’s Approach: Always-On Sensor Fusion
bitsensing integrates radar and camera sensing simultaneously, forming a unified perception system rather than a primary–backup structure.
Radar delivers precise distance measurements and velocity readings accurate to within ±0.1 m/s, while cameras provide semantic and structural object information. These data streams continuously validate each other in real time.
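One classical way to realize this continuous mutual validation is variance-weighted fusion, in which each sensor's estimate counts in proportion to its confidence. A minimal one-dimensional sketch follows; the noise figures are assumptions chosen for illustration, not bitsensing specifications.

```python
def fuse_ranges(z_radar: float, var_radar: float,
                z_camera: float, var_camera: float) -> tuple[float, float]:
    """Variance-weighted fusion of two independent range estimates.
    The lower-variance sensor dominates the fused result."""
    w_radar = var_camera / (var_radar + var_camera)
    fused = w_radar * z_radar + (1.0 - w_radar) * z_camera
    fused_var = (var_radar * var_camera) / (var_radar + var_camera)
    return fused, fused_var

# Example: radar range noise ~0.2 m; monocular noise ~4 m at this distance.
dist, var = fuse_ranges(z_radar=62.1, var_radar=0.2**2,
                        z_camera=65.0, var_camera=4.0**2)
print(f"fused range: {dist:.2f} m (variance {var:.4f})")  # ~62.11 m
```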
All fusion and decision-making processes occur at the edge, near the sensor level.
Edge computing reduces transmission latency by up to 80% compared to centralized or cloud-based architectures, enabling faster response to unexpected events. Industry reports, including technical studies from Waymo and Aptiv, indicate that radar–camera fusion systems can improve mean Average Precision (mAP) in object detection by approximately 15–25% compared to camera-only systems.

Beyond Late Fusion: Moving Toward Early Fusion
bitsensing’s sensor fusion architecture goes beyond simply merging recognition outputs at the final stage.
Instead of combining results after each sensor independently interprets its data (late fusion), bitsensing leverages raw data characteristics at the earliest possible stage — integrating physical and visual information before independent interpretation introduces uncertainty.
Distance, velocity, and motion vectors from radar are considered simultaneously with shape and semantic classification from vision within a unified frame.
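Schematically, one common realization of this idea is to project raw radar returns into the camera frame and attach range and velocity as extra image channels, so a single detector reasons over physical and visual evidence together. The sketch below assumes a simple pinhole camera model with illustrative intrinsics; it shows one way to express early fusion, not bitsensing's production pipeline.

```python
import numpy as np

def project_to_pixels(points_cam: np.ndarray, fx: float, fy: float,
                      cx: float, cy: float) -> np.ndarray:
    """Pinhole projection of 3-D points already expressed in the camera
    frame (z forward, x right, y down) to pixel coordinates."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    return np.stack([fx * x / z + cx, fy * y / z + cy], axis=1)

def early_fuse(image: np.ndarray, radar_cam_xyzv: np.ndarray,
               fx: float = 1000.0, fy: float = 1000.0,
               cx: float = 640.0, cy: float = 360.0) -> np.ndarray:
    """Attach radar range and radial-velocity channels to an RGB frame so
    one downstream detector sees physical and visual evidence jointly.

    radar_cam_xyzv: (N, 4) rows of (x, y, z, radial_velocity) with the
    points already transformed into the camera frame.
    """
    h, w, _ = image.shape
    extra = np.zeros((h, w, 2), dtype=np.float32)   # [range, velocity]
    uv = project_to_pixels(radar_cam_xyzv[:, :3], fx, fy, cx, cy)
    rng = np.linalg.norm(radar_cam_xyzv[:, :3], axis=1)
    for (u, v), r, vel in zip(uv.astype(int), rng, radar_cam_xyzv[:, 3]):
        if 0 <= u < w and 0 <= v < h:
            extra[v, u, 0] = r      # measured range at this pixel
            extra[v, u, 1] = vel    # measured radial velocity
    return np.concatenate([image.astype(np.float32), extra], axis=2)

# Example: a 720p frame and two radar returns in camera coordinates.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
returns = np.array([[1.5, 0.2, 30.0, -4.2],    # car ahead, closing
                    [-2.0, 0.1, 55.0, 0.1]])   # near-static object
print(early_fuse(frame, returns).shape)        # (720, 1280, 5)
```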
IEEE research indicates that early fusion approaches can reduce object detection failure rates in adverse weather conditions by more than 30% compared to late fusion systems.
Data-level fusion is particularly resilient in noisy urban environments, extracting meaningful signals while filtering environmental interference. This is essential for maintaining reliability in complex city driving.
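One concrete example of that filtering: radar's Doppler channel can separate genuinely moving objects from static clutter before any object hypothesis is formed. A minimal sketch, assuming radar points in a vehicle frame with x pointing along the direction of travel and an illustrative velocity threshold:

```python
import numpy as np

def filter_static_clutter(radar_xyzv: np.ndarray,
                          ego_speed_ms: float,
                          min_residual_ms: float = 0.5) -> np.ndarray:
    """Keep only radar returns inconsistent with the static world.

    radar_xyzv: (N, 4) rows of (x, y, z, measured_radial_velocity) in a
    vehicle frame with x pointing along the direction of travel.
    """
    xyz = radar_xyzv[:, :3]
    rng = np.maximum(np.linalg.norm(xyz, axis=1), 1e-6)
    # A static target observed from a moving vehicle shows a radial
    # velocity of -ego_speed * cos(bearing), where cos(bearing) = x / range.
    expected_static = -ego_speed_ms * xyz[:, 0] / rng
    residual = radar_xyzv[:, 3] - expected_static
    return radar_xyzv[np.abs(residual) > min_residual_ms]

# Example at 15 m/s: a parked car dead ahead reads ~-15 m/s and is
# dropped as clutter; an oncoming vehicle's return survives the filter.
points = np.array([[40.0, 0.0, 0.0, -15.0],   # static object ahead
                   [60.0, 2.0, 0.0, -27.0]])  # oncoming mover
print(filter_static_clutter(points, ego_speed_ms=15.0))
```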
Safety Is Not a Mode — It Is the Default
As long as radar remains a “backup” or secondary layer, it is difficult to meet the extreme safety requirements demanded by higher levels of autonomy.
For bitsensing, radar and camera form a single, cohesive perception system. Always-on fusion reduces uncertainty quantitatively and strengthens trust in autonomous systems.
Autonomy is not defined by how well a vehicle performs in ideal conditions.
It is defined by how consistently it perceives reality when conditions change.
Always-on sensing is not an enhancement.
It is the foundation.
bitsensing | Radar Reimagined