Conditional independence is perhaps the most pervasive of all assumptions used in data fusion and estimation algorithms. If noise-corrupted observations are conditionally independent given the system state, Bayes' rule can be applied in an optimal, recursive manner. However, this assumption rarely holds true in practice. One cause of dependency is modelling error: any process or observation model is an approximation and incorrectly describes the true response of the system. A second cause is simplification. Even if the models are precisely known, the full Bayesian computations can be prohibitively expensive and some approximation must be used. The problem of unmodelled dependency is often addressed through judicious tuning. In some circumstances, however, this can lead to an unacceptable degradation of estimator performance. One class of problems where this arises is distributed data fusion (DDF), where unmodelled dependencies lead to double counting of information.
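To make the double-counting effect concrete, the following is a minimal scalar sketch (an illustrative setup, not an algorithm from this paper): two nodes hold identical Gaussian estimates derived from the same shared measurement, and a fusion rule that wrongly assumes independence reuses that shared information, halving the reported variance.

```python
# Hypothetical scalar illustration of double counting in DDF.
# All numbers and the fuse_independent helper are assumptions for
# this sketch, not part of the original text.

def fuse_independent(m1, v1, m2, v2):
    """Fuse two Gaussian estimates (mean, variance) by
    inverse-variance weighting -- optimal ONLY if they are
    conditionally independent given the state."""
    v = 1.0 / (1.0 / v1 + 1.0 / v2)
    m = v * (m1 / v1 + m2 / v2)
    return m, v

# Node A combines a prior (mean 0, variance 4) with an
# observation z = 1 of variance 1.
m_a, v_a = fuse_independent(0.0, 4.0, 1.0, 1.0)

# Node B happens to hold an identical estimate: it used the SAME
# observation, so the two estimates are fully correlated.
m_b, v_b = m_a, v_a

# Naive fusion treats the estimates as independent. The shared
# information is counted twice, and the reported variance is
# halved -- the fused estimate is overconfident.
m_f, v_f = fuse_independent(m_a, v_a, m_b, v_b)
print(v_a, v_f)  # fused variance equals v_a / 2
```

The mean is unchanged here because both inputs agree; the damage is entirely in the covariance, which understates the true uncertainty and can cause a recursive filter to discount future measurements.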