Research Papers

Distributed Gaussian Process Regression Under Localization Uncertainty

Author and Article Information
Sungjoon Choi

Department of Electrical and Computer Engineering,
ASRI, Seoul National University,
Seoul 151-744, Korea
e-mail: sungjoon.choi@cpslab.snu.ac.kr

Mahdi Jadaliha

Department of Mechanical Engineering,
Michigan State University,
East Lansing, MI 48824-1226
e-mail: jadaliha@egr.msu.edu

Jongeun Choi

Department of Mechanical Engineering,
Department of Electrical and Computer Engineering,
Michigan State University,
East Lansing, MI 48824-1226
e-mail: jchoi@egr.msu.edu

Songhwai Oh

Department of Electrical and Computer Engineering,
ASRI, Seoul National University,
Seoul 151-744, Korea
e-mail: songhwai.oh@cpslab.snu.ac.kr

1Corresponding author.

Contributed by the Dynamic Systems Division of ASME for publication in the JOURNAL OF DYNAMIC SYSTEMS, MEASUREMENT, AND CONTROL. Manuscript received December 19, 2013; final manuscript received July 16, 2014; published online October 21, 2014. Assoc. Editor: Dejan Milutinovic.

J. Dyn. Sys., Meas., Control 137(3), 031007 (Oct 21, 2014) (11 pages). Paper No. DS-13-1519; doi: 10.1115/1.4028148

In this paper, we propose distributed Gaussian process regression (GPR) for resource-constrained distributed sensor networks under localization uncertainty. The proposed distributed algorithm, which combines Jacobi over-relaxation (JOR) and discrete-time average consensus (DAC), can effectively handle localization uncertainty as well as the limited communication and computation capabilities of distributed sensor networks. We also extend the proposed method hierarchically using sparse GPR to improve its scalability. The performance of the proposed method is verified in numerical simulations against the centralized maximum a posteriori (MAP) solution and a quick-and-dirty solution. We show that the proposed method outperforms the quick-and-dirty solution and achieves an accuracy comparable to that of the centralized solution.
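The JOR step named in the abstract solves a linear system of the form (K + σ²I)α = y, which appears in GPR, by letting each agent update only its own component from neighbors' previous iterates. The following is a minimal illustrative sketch of that idea, not the authors' implementation; the sensor positions, kernel lengthscale, noise level, and relaxation factor h are assumed toy values.

```python
import numpy as np

def jor_solve(A, b, h=0.8, iters=200):
    """Jacobi over-relaxation (JOR) for A x = b.
    Agent i updates x[i] using only neighbors' previous iterates,
    so the iteration is naturally distributable."""
    x = np.zeros_like(b)
    d = np.diag(A)
    for _ in range(iters):
        # x_i <- (1 - h) x_i + (h / A_ii) * (b_i - sum_{j != i} A_ij x_j)
        x = (1 - h) * x + (h / d) * (b - A @ x + d * x)
    return x

# Toy GPR system: squared-exponential kernel at 5 hypothetical sensor positions
X = np.array([0.0, 0.3, 0.6, 0.9, 1.2])
y = np.sin(3.0 * X)                                   # toy field values
K = np.exp(-0.5 * (X[:, None] - X[None, :])**2 / 0.1**2)
A = K + 0.01 * np.eye(5)                              # (K + sigma_n^2 I)
alpha = jor_solve(A, y)
print(np.allclose(A @ alpha, y, atol=1e-8))           # True: iteration converged
```

For this well-separated, diagonally dominant system the iteration converges quickly; in general, convergence of JOR for symmetric positive-definite matrices depends on the choice of h.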

Copyright © 2015 by ASME




Fig. 1

Results of different spectral clustering methods. The color of each node indicates its cluster membership. (Left) A clustering result using a centralized method. (Middle) A clustering result using DOI for computing eigenvectors and centralized k-means. (Right) Fully distributed spectral clustering using DOI and primal-dual k-means.

Fig. 2

An overview of the HD-GPR algorithm. (a) A connected sensor network. (b) Groups of sensing agents formed using distributed spectral clustering. (c) Estimated agent positions using the distributed mode estimator. (d) The field estimated by HD-GPR incorporating both position and measurement noises.

Fig. 3

The average number of communications per agent required to perform Algorithm 1.

Fig. 4

Reconstruction errors of three algorithms (QDS, MAP–GPR, and D-GPR) for ten different scenarios. Error bars indicate one standard deviation from ten independent runs for each scenario. For each run, 20 agents are deployed in the field.

Fig. 5

An example of a reference field and the fields reconstructed by three algorithms (QDS, MAP–GPR, and D-GPR). The reference field is shown in the upper left corner; clockwise from the top, the remaining panels show the fields reconstructed using QDS, MAP–GPR, and D-GPR. The crosses in the reference field and in the QDS reconstruction represent true positions and noisy positions, respectively. In the fields estimated by MAP–GPR and D-GPR, gray crosses represent the MAP estimates of the sensor positions.

Fig. 6

Convergence of parameters using JOR. The upper and middle figures show Γ1 and [B](1) of agent 1, respectively. The bottom figure shows the norm of the gradient at x̂, the solution of the MAP estimator in Eq. (10).

Fig. 7

Convergence of the DAC method. As the number of iterations increases, the values θi from Algorithm 2 converge to a common value across all agents. Each agent's value is shown in a different color.
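The consensus behavior shown in Fig. 7 can be sketched with a minimal discrete-time average consensus iteration: each agent repeatedly averages with its neighbors, and all local values converge to the network-wide mean. The ring topology, the initial values, and the Metropolis-style weights below are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

# Discrete-time average consensus (DAC) on a ring of 6 agents.
n = 6
theta = np.array([2.0, -1.0, 4.0, 0.5, 3.0, -2.5])   # hypothetical local values
target = theta.mean()                                 # consensus value

# Ring topology: agent i communicates with i-1 and i+1 (mod n).
# Every node has degree 2, so the Metropolis weight per edge is
# 1 / (max(d_i, d_j) + 1) = 1/3, a common doubly stochastic choice.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    W[i, i] = 1.0 - 2.0 / 3.0

for _ in range(300):        # consensus iterations
    theta = W @ theta       # each agent mixes with its neighbors only

print(np.allclose(theta, target, atol=1e-6))   # True: all agents reach the mean
```

Because W is doubly stochastic, the average of theta is preserved at every step, so the common limit is exactly the initial mean.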

Fig. 8

Results of different GPR methods. From left to right: the reference field, the field predicted by centralized GPR (full-GPR) with exact locations, full-GPR with noisy locations, D-GPR with noisy locations, PITC with exact locations, PITC with noisy locations, and HD-GPR with noisy locations. Full-GPR and PITC denote the original GPR and the PITC approximation of GPR, respectively. Excluding the first column, the first row shows the predicted mean and the second row the predicted variance of each algorithm.

Fig. 9

(a) Average reconstruction error as a function of the number of groups. (b) Average reconstruction errors of ten scenarios.

Fig. 10

(a) Average reconstruction errors of ten scenarios. Sensory fields are fixed in each scenario. (b) Average computation time per agent.



