Research Papers

Adiabatic Markov Decision Process: Convergence of Value Iteration Algorithm

[+] Author and Article Information
Thai Duong

School of Electrical Engineering and
Computer Science,
Oregon State University,
Corvallis, OR 97331
e-mail: duong@eecs.oregonstate.edu

Duong Nguyen-Huu

School of Electrical Engineering and
Computer Science,
Oregon State University,
Corvallis, OR 97331
e-mail: nguyendu@eecs.oregonstate.edu

Thinh Nguyen

School of Electrical Engineering and
Computer Science,
Oregon State University,
Corvallis, OR 97331
e-mail: thinhq@eecs.oregonstate.edu

Contributed by the Dynamic Systems Division of ASME for publication in the JOURNAL OF DYNAMIC SYSTEMS, MEASUREMENT, AND CONTROL. Manuscript received November 7, 2014; final manuscript received February 22, 2016; published online April 6, 2016. Assoc. Editor: Srinivasa M. Salapaka.

J. Dyn. Sys., Meas., Control 138(6), 061009 (Apr 06, 2016) (12 pages) Paper No: DS-14-1460; doi: 10.1115/1.4032875 History: Received November 07, 2014; Revised February 22, 2016

The Markov decision process (MDP) is a well-known framework for devising optimal decision-making strategies under uncertainty. Typically, the decision maker assumes a stationary environment, characterized by a time-invariant transition probability matrix. In many real-world scenarios, however, this assumption does not hold, and the optimal strategy might not deliver the expected performance. In this paper, we study the performance of the classic value iteration algorithm for solving an MDP problem under nonstationary environments. Specifically, the nonstationary environment is modeled as a sequence of time-variant transition probability matrices governed by an adiabatic evolution inspired by quantum mechanics. We characterize the performance of the value iteration algorithm subject to the rate of change of the underlying environment. The performance is measured in terms of the convergence rate to the optimal average reward. We show two examples of queuing systems that make use of our analysis framework.
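To fix notation, the classic value iteration algorithm analyzed in the paper can be sketched as follows. This is a minimal illustrative example only: the two-state, two-action MDP, its rewards, transition probabilities, and the discount factor are invented here for concreteness and are not taken from the paper (which works with time-varying transition matrices and average reward).

```python
# Illustrative sketch: classic value iteration on a toy 2-state,
# 2-action MDP. All numbers below are assumptions for illustration.

GAMMA = 0.9  # discount factor (assumed)


def value_iteration(P, R, n_states, n_actions, tol=1e-8):
    """P[a][s][t]: probability of moving from state s to t under action a.
    R[a][s]: expected one-step reward for taking action a in state s.
    Repeatedly apply the Bellman optimality backup until the largest
    per-state change falls below tol; return the value function and a
    greedy policy."""
    V = [0.0] * n_states
    while True:
        V_new = [
            max(
                R[a][s] + GAMMA * sum(P[a][s][t] * V[t] for t in range(n_states))
                for a in range(n_actions)
            )
            for s in range(n_states)
        ]
        diff = max(abs(V_new[s] - V[s]) for s in range(n_states))
        V = V_new
        if diff < tol:
            break
    policy = [
        max(
            range(n_actions),
            key=lambda a: R[a][s]
            + GAMMA * sum(P[a][s][t] * V[t] for t in range(n_states)),
        )
        for s in range(n_states)
    ]
    return V, policy


# Toy queue-like model: state 0 = "short queue", state 1 = "long queue".
P = [
    [[0.8, 0.2], [0.6, 0.4]],  # action 0: serve aggressively
    [[0.3, 0.7], [0.1, 0.9]],  # action 1: serve lazily
]
R = [
    [1.0, 0.0],  # action 0: high reward only while the queue is short
    [0.5, 0.8],  # action 1: moderate reward in both states
]

V, policy = value_iteration(P, R, n_states=2, n_actions=2)
print(V, policy)
```

In the nonstationary (adiabatic) setting studied in the paper, the matrices `P[a]` would themselves drift slowly over time, and the analysis bounds how well the value-iteration iterates track the changing optimum as a function of that drift rate.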

Copyright © 2016 by ASME


Figures

Fig. 1: The classic value iteration
Fig. 2: The value iteration in an adiabatic setting
Fig. 4: An example of the estimated λ̂ᵢ and its bounds for actual λ = 40
Fig. 5: The Φ(·) function for simulation scenario 1
Fig. 6: The actual distance and its upper bound for λ̂ᵢ = (1 + aᵢ)λ from Theorem 2 (simulation scenario 1)
Fig. 7: The actual distance and its upper bound for λ̂ᵢ = (1 + aᵢ)λ from Theorem 3 (simulation scenario 1)
Fig. 8: The actual distance and its upper bound for λ̂ᵢ = (1 − bᵢ)λ from Theorem 2 (simulation scenario 1)
Fig. 9: The actual distance and its upper bound for λ̂ᵢ = (1 − bᵢ)λ from Theorem 3 (simulation scenario 1)
Fig. 10: The actual distance and its upper bound from Theorem 2 (simulation scenario 2)
Fig. 11: The actual distance and its upper bound from Theorem 3 (simulation scenario 2)

