TY - JOUR
T1 - The Performance Limit for Distributed Bayesian Estimation with Identical One-Bit Quantizers
AU - Li, Xia
AU - Guo, Jun
AU - Chen, Hao
AU - Rogers, Uri
PY - 2015/1/1
Y1 - 2015/1/1
N2 - In this paper, a performance limit is derived for a distributed Bayesian parameter estimation problem in sensor networks where the prior probability density function of the parameter is known. The sensor observations are assumed conditionally independent and identically distributed given the parameter to be estimated, and the sensors employ independent and identical quantizers. The performance limit is established in terms of the best possible asymptotic performance that a distributed estimation scheme can achieve for all possible sensor observation models. This performance limit is obtained by deriving the optimal probabilistic quantizer under the ideal setting, in which the sensors observe the parameter directly without any noise or distortion. With a uniform prior, the derived Bayesian performance limit and the associated quantizer coincide with the previously developed performance limit and quantizers under the minimax framework, where the parameter is assumed to be fixed but unknown. The proposed performance limit under the distributed Bayesian setting is compared against a widely used performance bound based on full-precision sensor observations. This comparison shows that the performance limit derived in this paper is much tighter in most meaningful signal-to-noise ratio (SNR) regions. Moreover, unlike the unquantized-observation performance limit, which can never be achieved, the proposed limit can be achieved under certain observation noise models.
AB - In this paper, a performance limit is derived for a distributed Bayesian parameter estimation problem in sensor networks where the prior probability density function of the parameter is known. The sensor observations are assumed conditionally independent and identically distributed given the parameter to be estimated, and the sensors employ independent and identical quantizers. The performance limit is established in terms of the best possible asymptotic performance that a distributed estimation scheme can achieve for all possible sensor observation models. This performance limit is obtained by deriving the optimal probabilistic quantizer under the ideal setting, in which the sensors observe the parameter directly without any noise or distortion. With a uniform prior, the derived Bayesian performance limit and the associated quantizer coincide with the previously developed performance limit and quantizers under the minimax framework, where the parameter is assumed to be fixed but unknown. The proposed performance limit under the distributed Bayesian setting is compared against a widely used performance bound based on full-precision sensor observations. This comparison shows that the performance limit derived in this paper is much tighter in most meaningful signal-to-noise ratio (SNR) regions. Moreover, unlike the unquantized-observation performance limit, which can never be achieved, the proposed limit can be achieved under certain observation noise models.
KW - 1-bit quantization
KW - asymptotic performance limit
KW - Cramér-Rao lower bound
KW - distributed Bayesian estimation
UR - https://scholarworks.boisestate.edu/electrical_facpubs/305
UR - http://dx.doi.org/10.1109/DSP-SPE.2015.7369521
U2 - 10.1109/DSP-SPE.2015.7369521
DO - 10.1109/DSP-SPE.2015.7369521
M3 - Article
JO - Electrical and Computer Engineering Faculty Publications and Presentations
JF - Electrical and Computer Engineering Faculty Publications and Presentations
ER -