Centralized vs. Decentralized Sensor Fusion Architectures
The architecture chosen for sensor fusion — whether data flows to a single processing node or is distributed across multiple agents — determines latency tolerance, fault resilience, bandwidth requirements, and computational overhead across the entire system. This page characterizes both fusion architectures, their operating mechanisms, the deployment conditions where each is appropriate, and the technical boundaries that separate one from the other. The distinctions are fundamental to any engineering decision in sensor fusion architecture and apply across domains from autonomous vehicles to aerospace.
Definition and Scope
Centralized sensor fusion routes raw or minimally preprocessed data from all sensors to a single processing unit, where the full fusion computation — state estimation, data association, and output generation — is performed. Every sensor in the network feeds one node; that node holds the complete world model.
Decentralized sensor fusion distributes processing across multiple nodes, each of which may perform local estimation using its own sensor subset. Nodes then share either local estimates or processed tracks with peer nodes or a coordinator, rather than transmitting raw sensor streams to a central location.
A third variant — hierarchical (federated) fusion — uses a two-tier structure in which local nodes perform preprocessing and the results are aggregated at a master filter. The federated Kalman filter, introduced by Neal Carlson around 1990 and developed under U.S. Air Force navigation research programs, is the canonical reference implementation for this intermediate class.
IEEE Standard 1451, maintained by the IEEE Standards Association (IEEE 1451), provides a framework for smart transducer interfaces that underpins interoperability requirements across sensor nodes regardless of which fusion architecture is selected. The standard's network-layer definitions directly affect how raw data or local estimates travel between sensor nodes and fusion processors.
This page covers architectures used in real-time state estimation systems. Static batch-processing frameworks used in geospatial surveying or laboratory signal processing follow different design constraints and are outside this comparison.
How It Works
Centralized Architecture
- Data ingestion — Each sensor transmits raw measurements (voltage, distance, angular rate, pixel arrays) over a dedicated channel to the central fusion processor.
- Preprocessing — The central node applies calibration corrections, timestamp alignment, and coordinate frame transformations. See sensor calibration for fusion and sensor fusion data synchronization for the preprocessing requirements feeding this stage.
- State estimation — A single filter — typically a Kalman filter variant, an unscented Kalman filter, or a particle filter — processes all sensor inputs simultaneously against a unified state vector. This is covered in detail in Kalman filter sensor fusion and particle filter sensor fusion.
- Output generation — The fused state estimate (position, velocity, orientation, or higher-level object classification) is published to downstream consumers.
Because all measurements are processed together, the centralized approach is mathematically optimal under linear-Gaussian assumptions — it minimizes mean-square error across the full sensor suite.
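The centralized update can be sketched as a single Kalman step over a stacked measurement vector. The function below is a minimal illustration, assuming linear observation models; the `(z, H, R)` tuples and all names are hypothetical, not drawn from any particular production system.

```python
import numpy as np
from scipy.linalg import block_diag

def centralized_update(x, P, measurements):
    """One centralized fusion cycle: stack every sensor's measurement
    into a single Kalman update against the shared state vector.

    measurements: list of (z, H, R) per sensor, where z is the
    measurement vector, H the observation matrix, and R the
    measurement noise covariance (illustrative names).
    """
    # Stack measurement vectors and observation models across sensors;
    # noise covariances go on the block diagonal (sensors independent).
    z = np.concatenate([z_i for z_i, _, _ in measurements])
    H = np.vstack([H_i for _, H_i, _ in measurements])
    R = block_diag(*[R_i for _, _, R_i in measurements])

    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # gain computed over the full suite
    x_new = x + K @ (z - H @ x)       # fused state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

Because the gain is computed jointly over all sensors, this single step realizes the minimum-mean-square-error property described above under linear-Gaussian assumptions.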
Decentralized Architecture
- Local sensing — Each node collects measurements from its co-located sensor subset.
- Local estimation — The node applies an independent filter, producing a local state estimate with an associated covariance matrix.
- Track-to-track fusion — Local estimates are transmitted to peer nodes or a coordinator. The receiving node applies a track fusion algorithm — often a covariance intersection method — to combine estimates without requiring knowledge of cross-correlations between nodes.
- Consistency maintenance — Nodes propagate their estimates between communication epochs using their own motion models, maintaining a local world model even during communication gaps.
Decentralized fusion inherently sacrifices some statistical optimality relative to centralized processing because track-to-track fusion discards measurement-level information. Covariance intersection, a conservative fusion method, is designed to produce consistent estimates even when cross-correlations are unknown, at the cost of wider confidence bounds (Julier and Uhlmann, 1997).
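The covariance intersection step can be sketched in a few lines. This is an illustrative implementation with a simple trace-based weight; the function name and the heuristic for `omega` are assumptions, and practical systems typically choose `omega` by minimizing the trace or determinant of the fused covariance.

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, omega=None):
    """Fuse two track estimates whose cross-correlation is unknown,
    in the style of Julier and Uhlmann's covariance intersection.
    omega in [0, 1] weights the two information matrices."""
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    if omega is None:
        # Heuristic: weight toward the more confident (smaller-trace)
        # estimate; real implementations optimize omega instead.
        omega = np.trace(Pb) / (np.trace(Pa) + np.trace(Pb))
    P = np.linalg.inv(omega * Pa_inv + (1 - omega) * Pb_inv)
    x = P @ (omega * Pa_inv @ xa + (1 - omega) * Pb_inv @ xb)
    return x, P
```

Note the conservatism: fusing two estimates with equal covariance yields a fused covariance no smaller than either input, which is exactly the "wider confidence bounds" trade mentioned above.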
Common Scenarios
Where Centralized Fusion Dominates
- Autonomous vehicle perception stacks — A single high-compute platform fuses LiDAR-camera fusion data, radar sensor fusion, and GNSS sensor fusion in real time. Latency budgets for AV perception loops are typically under 100 milliseconds, making centralized processing on an onboard compute unit the standard architecture for production deployments. Waymo's publicly described perception architecture presupposes a unified perception module, and the SAE International J3016 taxonomy defines the automation levels such systems target.
- Aerospace inertial navigation — Military and commercial aircraft consolidate IMU sensor fusion with GPS and air data into a single avionics unit. DO-178C (RTCA) governs software qualification for airborne systems and requires deterministic behavior compatible with centralized processing pipelines. Sensor fusion in aerospace covers the specific compliance requirements.
- Industrial robotics — Single-arm manipulation systems with a small fixed sensor suite naturally centralize fusion; latency is predictable and bandwidth is not the binding constraint.
Where Decentralized Fusion Dominates
- Multi-robot systems — In robotics sensor fusion applications involving swarms or multi-agent coordination, transmitting raw sensor streams from 10 or more robots to a single node is bandwidth-prohibitive. Each robot maintains its local map and shares compressed track data.
- IoT sensor networks — Large-scale IoT sensor fusion deployments in smart infrastructure often span hundreds of nodes over wide geographic areas. Sensor fusion in smart infrastructure at the city level relies on decentralized estimation because transmitting raw data from 500+ environmental sensors to one node is not operationally viable.
- Healthcare wearables — Multi-sensor wearable devices perform local fusion on low-power microcontrollers before transmitting condensed state estimates, reducing power consumption and protecting raw biometric data. Sensor fusion in healthcare details the clinical and regulatory constraints that shape this architecture choice.
Decision Boundaries
The selection between centralized and decentralized architectures is not a binary preference — it is determined by a structured set of measurable system constraints.
| Criterion | Centralized Favored | Decentralized Favored |
|---|---|---|
| Sensor count | 2–8 sensors, tightly co-located | 9+ sensors, spatially distributed |
| Bandwidth | High-bandwidth backbone available | Constrained or intermittent links |
| Latency budget | Under 50 ms end-to-end | Tolerant of per-node processing delays |
| Fault tolerance requirement | Single system with redundant compute | Must survive node failures independently |
| Statistical optimality | Required by specification | Acceptable to sacrifice for consistency |
| Computational cost | Concentrated budget at one node | Distributed budget across nodes |
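The table's criteria can be tallied mechanically as a first-pass screen. The sketch below encodes each row as one vote; the thresholds mirror the table and the function signature is illustrative, not a normative decision procedure.

```python
def favored_architecture(sensor_count, high_bandwidth_backbone,
                         latency_budget_ms, must_survive_node_loss,
                         optimality_required):
    """Tally the decision-table rows: positive score favors
    centralized fusion, negative favors decentralized.
    Thresholds are taken from the table above and are indicative."""
    score = 0
    score += 1 if sensor_count <= 8 else -1          # sensor count row
    score += 1 if high_bandwidth_backbone else -1    # bandwidth row
    score += 1 if latency_budget_ms < 50 else -1     # latency row
    score += 1 if not must_survive_node_loss else -1 # fault tolerance row
    score += 1 if optimality_required else -1        # optimality row
    return "centralized" if score > 0 else "decentralized"
```

A tied or mixed score signals exactly the cases where the hierarchical (federated) variant described earlier deserves evaluation.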
Fault tolerance is the deciding constraint in safety-critical deployments. A centralized processor represents a single point of failure; its loss eliminates the entire fused output. Decentralized nodes degrade gracefully — the loss of one node reduces coverage without total system failure. Sensor fusion security and reliability addresses how this distinction maps to functional safety standards, including ISO 26262 for automotive and IEC 61508 for industrial applications.
Latency and real-time performance form the second dominant boundary. The sensor fusion latency and real-time requirements for a flight control system differ categorically from those of a building occupancy detection grid. Where deterministic sub-50 ms response is required by contract or regulation, centralized architectures on dedicated hardware — including FPGA sensor fusion implementations — are the established solution.
Algorithm selection is constrained by architecture. Deep learning sensor fusion models that require access to raw sensor tensors from multiple modalities simultaneously are architecturally incompatible with fully decentralized systems; they presuppose centralized data aggregation. Complementary filter sensor fusion methods, by contrast, are lightweight enough to run on edge nodes in a decentralized configuration.
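As an example of the kind of method light enough for an edge node, a first-order complementary filter blends an integrated gyro rate (high-pass path) with an accelerometer-derived angle (low-pass path) in one line of arithmetic. The parameter names and the default `alpha` here are illustrative; real deployments tune `alpha` to the gyro drift and accelerometer noise of the hardware.

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step of a first-order complementary filter.

    angle_prev  -- previous fused angle estimate (rad)
    gyro_rate   -- angular rate from the gyro (rad/s), integrated over dt
    accel_angle -- angle inferred from the accelerometer (rad)
    alpha       -- blend weight: trust the gyro short-term,
                   the accelerometer long-term (illustrative default)
    """
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle
```

A few multiplies and adds per update is why this class of filter runs comfortably on the low-power microcontrollers found in decentralized edge nodes.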
For teams evaluating implementation paths, sensor fusion algorithms, sensor fusion software platforms, and ROS sensor fusion provide the next level of architectural detail. A general entry point into the broader field is available through sensor fusion fundamentals, and the full scope of the technology service landscape is indexed at sensorfusionauthority.com.
References
- IEEE Standard 1451 — Smart Transducer Interface — IEEE Standards Association
- Julier, S. J., & Uhlmann, J. K. (1997). A Non-divergent Estimation Algorithm in the Presence of Unknown Correlations — Proceedings of the American Control Conference
- DO-178C: Software Considerations in Airborne Systems and Equipment Certification — RTCA (Radio Technical Commission for Aeronautics)
- SAE International J3016: Taxonomy and Definitions for Terms Related to Driving Automation Systems — SAE International
- ISO 26262: Road Vehicles — Functional Safety — International Organization for Standardization