Centralized vs. Decentralized Sensor Fusion Architectures
The architecture chosen for combining sensor data — whether all raw measurements flow to a single processing node or are reduced locally before aggregation — fundamentally shapes the accuracy, latency, fault tolerance, and scalability of any sensor fusion system. This page maps the structural distinctions between centralized and decentralized fusion architectures, their mechanical differences, the engineering drivers that favor one over the other, and the persistent misconceptions that lead to suboptimal design choices. Coverage spans autonomous vehicles, aerospace, industrial IoT, and robotics deployments where these architectural decisions carry measurable operational consequences.
- Definition and Scope
- Core Mechanics or Structure
- Causal Relationships or Drivers
- Classification Boundaries
- Tradeoffs and Tensions
- Common Misconceptions
- Checklist or Steps
- Reference Table or Matrix
Definition and Scope
In sensor fusion, centralized architecture designates a topology in which raw or minimally pre-processed measurements from all sensor nodes are transmitted to a single fusion center, where state estimation and inference occur. Decentralized architecture distributes estimation across multiple nodes, each performing local fusion on its own subset of sensor data, with results subsequently communicated to peer nodes or a coordinator.
A third configuration — the federated architecture — operates as a hybrid: local Kalman or Bayesian filters run independently at each sensor node, and only their state estimates and associated covariance matrices are forwarded to a master filter for global combination. The federated Kalman filter, formalized in work by Neal Carlson and elaborated in NASA Technical Memorandum 89139 (1987), remains a canonical reference for distributed navigation systems in aerospace.
The scope of this architectural distinction applies wherever sensor fusion appears in engineered systems. The key dimensions and scopes of sensor fusion — from signal modality to deployment scale — all interact with the architectural choice. Architectures are not algorithm-specific: a centralized node may run a Kalman filter, a particle filter, or a deep neural network. The architecture describes the topology of data flow, not the estimation method.
Core Mechanics or Structure
Centralized fusion operates through three sequential stages:
- Raw data ingestion — all sensor streams (e.g., LiDAR point clouds, radar Doppler returns, IMU acceleration vectors) arrive at the fusion center over a shared communication bus or network.
- Joint state estimation — a single estimator, commonly an Extended Kalman Filter (EKF) or an Unscented Kalman Filter (UKF), processes the combined measurement vector. The full joint covariance matrix is maintained, preserving all cross-sensor correlations.
- Output dissemination — the unified state estimate is distributed to downstream consumers (controllers, planners, actuators).
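The centralized pipeline can be sketched as a scalar Kalman filter that ingests every raw measurement at one node. This is a minimal illustration with hypothetical numbers, not a production estimator; for a scalar state observed by independent sensors, processing the measurements sequentially is equivalent to one batch update on the stacked measurement vector.

```python
def centralized_update(x, P, measurements):
    """Fuse every raw measurement at a single node (scalar Kalman filter).

    x, P         -- prior state estimate and its variance
    measurements -- list of (z, R): measurement value and noise variance
    """
    for z, R in measurements:
        K = P / (P + R)          # Kalman gain
        x = x + K * (z - x)      # state update with innovation (z - x)
        P = (1 - K) * P          # posterior variance shrinks
    return x, P

# Two sensors observing the same scalar state (hypothetical values)
x, P = centralized_update(x=0.0, P=10.0, measurements=[(1.2, 0.5), (0.8, 0.5)])
```

Because the single node sees both raw measurements, the posterior variance reflects the full information of both sensors with no approximation.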
Decentralized fusion distributes stages 1 and 2. Each local node ingests its own sensor subset, maintains a local state estimate, and transmits a compressed representation — typically a state vector and covariance — to neighboring nodes or a global aggregator. The aggregation step combines local estimates using information-theoretic or covariance-intersection methods to avoid double-counting correlated data.
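The covariance-intersection step used at the aggregator can be sketched for the scalar case (hypothetical values; the matrix form replaces the divisions below with inverse-covariance products):

```python
def covariance_intersection(xa, Pa, xb, Pb, steps=100):
    """Fuse two estimates whose cross-correlation is unknown (scalar CI).

    Scans the weight w in (0, 1) and keeps the combination that minimizes
    the fused variance, the standard CI criterion in the scalar case.
    """
    best_x, best_P = None, float("inf")
    for i in range(1, steps):
        w = i / steps
        info = w / Pa + (1 - w) / Pb               # fused information
        P = 1.0 / info
        x = P * (w * xa / Pa + (1 - w) * xb / Pb)  # information-weighted mean
        if P < best_P:
            best_x, best_P = x, P
    return best_x, best_P

# Two local tracks of the same target (hypothetical values)
x, P = covariance_intersection(xa=1.0, Pa=0.4, xb=1.4, Pb=0.9)
```

In the scalar case CI effectively keeps the lower-variance input, which makes its conservatism visible: the fused variance never drops below the better local variance, the price of tolerating unknown correlation.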
The federated filter introduced an exact information-sharing protocol: the master filter's information is partitioned among local filters using a scalar coefficient β (beta factor), ensuring that information is neither lost nor duplicated. This protocol is documented in NASA TM-89139 and remains the basis for tightly coupled GPS/INS fusion in aviation.
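The β-partitioning idea can be illustrated in miniature: divide a common prior's information among local filters so that the master's recombination counts it exactly once. This sketch covers only the prior-sharing step, not the full federated protocol, and uses hypothetical scalar values:

```python
def federate_prior(x0, P0, betas):
    """Split a common prior's information among local filters.

    Each local filter i receives the fraction betas[i] of the prior
    information (variance inflated by 1/beta), so the shared prior is
    counted exactly once when the master re-sums local information.
    Requires sum(betas) == 1.
    """
    assert abs(sum(betas) - 1.0) < 1e-12
    return [(x0, P0 / b) for b in betas]

def master_combine(local_estimates):
    """Information-form recombination at the master filter."""
    info = sum(1.0 / P for _, P in local_estimates)
    x = sum(x / P for x, P in local_estimates) / info
    return x, 1.0 / info

partitions = federate_prior(x0=2.0, P0=1.0, betas=[0.5, 0.3, 0.2])
x, P = master_combine(partitions)   # recovers x = 2.0, P = 1.0 exactly
```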
Sensor fusion algorithms such as the Extended Kalman Filter and particle filter can be instantiated within either architectural topology, though computational constraints often govern which combination is practical.
Causal Relationships or Drivers
Four primary engineering pressures determine which architecture a system gravitates toward:
Bandwidth and communication cost. Centralized fusion requires raw data transmission. A single automotive-grade LiDAR sensor can generate between 10 and 100 Mbps of point cloud data. Aggregating 5 such sensors at a central processor requires a bus capable of sustaining up to 500 Mbps, before radar, camera, and IMU streams are added. Classic CAN bus, limited to 1 Mbps, is categorically insufficient for centralized raw fusion at this scale; automotive Ethernet (100BASE-T1 or 1000BASE-T1, standardized in IEEE 802.3bw and 802.3bp) emerged partly in response to this constraint.
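The bus arithmetic above can be checked directly, using the figures from this paragraph (classic CAN, not CAN FD; Ethernet rates per IEEE 802.3bw and 802.3bp):

```python
# Back-of-envelope bus sizing for centralized raw LiDAR fusion
lidar_mbps_max = 100                      # upper-end rate per LiDAR
n_lidar = 5
required_mbps = lidar_mbps_max * n_lidar  # 500 Mbps, LiDAR alone

can_classic_mbps = 1                      # classic CAN ceiling
eth_100base_t1_mbps = 100                 # IEEE 802.3bw
eth_1000base_t1_mbps = 1000               # IEEE 802.3bp

can_sufficient = required_mbps <= can_classic_mbps          # False
gigabit_sufficient = required_mbps <= eth_1000base_t1_mbps  # True
```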
Computational concentration risk. Centralizing estimation creates a single point of failure. Functional safety standard ISO 26262, which governs road vehicles, assigns Automotive Safety Integrity Levels (ASIL) A through D, with ASIL D demanding the highest fault tolerance. Centralized architectures achieving ASIL D require hardware redundancy at the fusion center itself, driving cost. Decentralized designs distribute fault exposure across nodes.
Latency requirements. Real-time sensor fusion applications in autonomous driving typically require end-to-end latency below 100 milliseconds for safety-critical decisions. Transmitting uncompressed data to a central node and awaiting a fused estimate adds network round-trip delays; edge-local fusion can reduce this by 30–60% in practice, depending on bus topology.
Accuracy and correlation preservation. Centralized systems, by retaining the full cross-covariance structure between all sensors, achieve theoretically optimal estimates. Decentralized systems must approximate or discard cross-sensor correlations; covariance intersection algorithms handle unknown correlations conservatively, at the cost of increased estimate uncertainty.
Classification Boundaries
Three architecture classes are distinguishable by the level at which data is combined:
| Class | Data Transmitted | Correlation Preservation | Primary Standard Reference |
|---|---|---|---|
| Centralized | Raw measurements | Full | N/A (topology-level, not standardized) |
| Decentralized (track-to-track) | State estimates + covariances | Partial/approximated | N/A (topology-level, not standardized) |
| Federated | Partitioned information states | Exact under protocol | NASA TM-89139 |
The boundary between decentralized and federated is frequently blurred in engineering literature. The distinguishing criterion is whether the information-sharing protocol guarantees no double-counting of correlated measurements. Federated filters use explicit β-partitioning to enforce this. Generic decentralized filters using covariance intersection are conservative rather than exact — a meaningful distinction in high-accuracy applications such as aerospace sensor fusion.
Data-level fusion, feature-level fusion, and decision-level fusion represent a separate classification axis (abstraction level of the fused quantity), which is orthogonal to the centralized/decentralized topology axis.
Tradeoffs and Tensions
Accuracy vs. scalability. The centralized architecture's theoretical optimality degrades in practice as the number of sensors grows: the joint covariance matrix requires O(n²) storage in the number of state dimensions, and each dense inversion or factorization costs roughly O(n³). A system fusing 12 heterogeneous sensors with 6-DOF states each operates on a 72×72 covariance matrix; real-time updates above 100 Hz are computationally demanding without specialized hardware.
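The scaling claim can be made concrete with the example dimensions from this paragraph (counting only the covariance storage and a single dense inversion per cycle; a full EKF update performs several such matrix operations):

```python
n_states = 12 * 6              # 12 sensors x 6-DOF each -> 72-dim joint state
cov_entries = n_states ** 2    # O(n^2) storage: 5,184 covariance entries
inv_flops = n_states ** 3      # dense inversion/factorization ~ O(n^3)
update_hz = 100
flops_per_second = inv_flops * update_hz  # inversion cost alone, per second
```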
Fault tolerance vs. estimate quality. Decentralized systems survive individual node failures gracefully; a failed local estimator removes only its contribution. But when cross-sensor correlations are ignored or approximated, the global estimate carries greater uncertainty. In autonomous vehicles sensor fusion, this manifests as degraded object-track confidence in scenarios where LiDAR and radar measurements of the same target are treated as independent.
Bandwidth vs. freshness. Transmitting raw data ensures the fusion center always operates on unprocessed measurements, but network congestion can cause measurement drops. Transmitting compressed state estimates reduces bandwidth by 1–2 orders of magnitude but introduces information loss from local linearization.
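The reduction from raw data to compressed state estimates can be illustrated with hypothetical payload sizes (the frame density, point encoding, and message layout below are assumptions for illustration, not measurements):

```python
# Hypothetical payloads: one raw LiDAR frame vs. one track message
points_per_frame = 100_000                 # assumed frame density
bytes_per_point = 16                       # x, y, z, intensity as float32
raw_bytes = points_per_frame * bytes_per_point   # 1.6 MB per raw frame

state_dim = 6                              # 6-DOF track state
state_bytes = state_dim * 8                # float64 state vector
cov_bytes = state_dim ** 2 * 8             # full 6x6 covariance block
track_bytes = state_bytes + cov_bytes      # 336 bytes per track

ratio = raw_bytes / track_bytes
# Realized savings depend on track count, message rate, and protocol
# overhead, which pull the per-link reduction back toward the 1-2
# orders of magnitude quoted above.
```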
Standardization tension. IEEE Std 1451 (Smart Transducer Interface) addresses sensor interoperability at the hardware interface level but does not specify fusion topology. NIST's work on cyber-physical systems (NIST SP 1500-201, Framework for Cyber-Physical Systems) discusses distributed sensing architectures without mandating topology choices. No single regulatory body governs sensor fusion architecture selection in commercial deployments.
Common Misconceptions
Misconception: Decentralized always means less accurate.
Correction: When cross-sensor correlations are genuinely absent (e.g., sensors placed far apart measuring independent phenomena), decentralized fusion achieves accuracy equivalent to centralized. The accuracy gap exists only when correlated measurements are incorrectly treated as independent.
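This correction can be demonstrated numerically: with genuinely independent sensors and a diffuse central prior, track-to-track fusion of the local estimates matches centralized processing of the raw measurements. A scalar sketch with hypothetical values:

```python
def kalman_sequential(x, P, measurements):
    """Centralized route: one scalar KF consumes every raw measurement."""
    for z, R in measurements:
        K = P / (P + R)
        x, P = x + K * (z - x), (1 - K) * P
    return x, P

def track_fusion(tracks):
    """Decentralized route: information-weighted track-to-track fusion."""
    info = sum(1.0 / P for _, P in tracks)
    return sum(x / P for x, P in tracks) / info, 1.0 / info

readings = [(1.2, 0.5), (0.8, 0.5)]              # independent sensors
central = kalman_sequential(0.0, 1e9, readings)  # near-diffuse central prior
decentral = track_fusion(readings)               # each local track = (z, R)
# Both routes converge on x ~ 1.0, P ~ 0.25: no accuracy gap
```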
Misconception: Federated and decentralized are synonyms.
Correction: Federated filters are a specific decentralized subtype with a mathematically defined information-sharing protocol. Generic decentralized filters lack this guarantee. The distinction matters in aviation and defense contexts where navigation accuracy specifications are contractually defined.
Misconception: Centralized fusion always requires more hardware.
Correction: Centralized fusion concentrates computation but may require less total silicon than a decentralized system deploying capable local processors at every node. The cost comparison depends on node count and the computational demands of local estimation.
Misconception: Edge computing and decentralized fusion are the same thing.
Correction: Edge computing sensor fusion refers to the physical placement of computation near sensors; it does not specify whether local nodes fuse data independently or forward raw data to a central server. A centralized fusion algorithm can run at the edge.
Checklist or Steps
The following phases describe the structural evaluation process applied when assessing architecture selection for a sensor fusion deployment. This is a reference sequence, not prescriptive advice.
Phase 1 — Sensor inventory
- Count sensor nodes and modalities
- Document raw data rates per sensor (bits per second)
- Identify sensor correlation structure (do sensors observe overlapping state variables?)
Phase 2 — Communication infrastructure audit
- Determine bus protocol and maximum sustained throughput
- Measure worst-case network latency between sensor nodes and candidate fusion points
- Assess whether IEEE 802.3bw (100BASE-T1) or equivalent is available
Phase 3 — Safety and fault tolerance requirements
- Identify applicable safety integrity standard (ISO 26262 for automotive, DO-178C/DO-254 for aviation, IEC 61508 for industrial)
- Determine required ASIL or SIL level
- Map single points of failure in candidate topologies
Phase 4 — Computational resource mapping
- Benchmark estimation algorithm (EKF, UKF, particle filter) at target update rate on candidate hardware
- Assess O(n²) covariance scaling for centralized option
- Identify available platforms from sensor fusion hardware platforms
Phase 5 — Architecture selection and validation
- Select topology class (centralized, federated, or decentralized)
- Define information-sharing protocol if federated
- Validate accuracy against sensor fusion accuracy metrics
- Document rationale against safety standard requirements
Reference Table or Matrix
| Attribute | Centralized | Federated | Decentralized |
|---|---|---|---|
| Estimation optimality | Optimal (full covariance) | Exact (with β-protocol) | Conservative (covariance intersection) |
| Communication load | High (raw data) | Medium (states + covariances) | Low to medium |
| Single point of failure | Yes (fusion center) | Partial (master filter) | No |
| Scalability (sensor count) | Limited by O(n²) covariance | Moderate | High |
| Latency profile | Higher (network round-trip) | Moderate | Lower (local estimation) |
| Standards alignment | ISO 26262 (redundancy req.) | NASA TM-89139 | IEC 61508 (distributed safety) |
| Typical deployment | Prototype AV platforms, aerospace labs | Aviation GPS/INS, defense navigation | Industrial IoT, large-scale robotics |
| Typical fusion level | Data-level | Feature-level | Feature- or decision-level |
The sensor fusion standards landscape in the US provides the regulatory and standards framework within which these architecture decisions are evaluated for compliance. For a broader orientation to how these architectural categories fit within the overall field, the sensor fusion authority index maps the full scope of covered domains.