Research Papers

Partial-State Stabilization and Optimal Feedback Control for Stochastic Dynamical Systems

Tanmay Rajpurohit

School of Aerospace Engineering,
Georgia Institute of Technology,
Atlanta, GA 30332-0150
e-mail: tanmay.rajpurohit@gatech.edu

Wassim M. Haddad

School of Aerospace Engineering,
Georgia Institute of Technology,
Atlanta, GA 30332-0150
e-mail: wm.haddad@aerospace.gatech.edu

Contributed by the Dynamic Systems Division of ASME for publication in the JOURNAL OF DYNAMIC SYSTEMS, MEASUREMENT, AND CONTROL. Manuscript received November 30, 2015; final manuscript received March 6, 2017; published online June 5, 2017. Assoc. Editor: Suman Chakravorty.

J. Dyn. Sys., Meas., Control 139(9), 091001 (Jun 05, 2017) (18 pages) Paper No: DS-15-1602; doi: 10.1115/1.4036033 History: Received November 30, 2015; Revised March 06, 2017

In this paper, we develop a unified framework to address the problem of optimal nonlinear analysis and feedback control for partial stability and partial-state stabilization of stochastic dynamical systems. Partial asymptotic stability in probability of the closed-loop nonlinear system is guaranteed by means of a Lyapunov function that is positive definite and decrescent with respect to part of the system state; this Lyapunov function is shown to be the solution to the steady-state form of the stochastic Hamilton–Jacobi–Bellman equation, thus guaranteeing both partial stability in probability and optimality. The overall framework provides the foundation for extending optimal linear-quadratic stochastic controller synthesis to nonlinear-nonquadratic optimal partial-state stochastic stabilization. Connections to optimal linear and nonlinear regulation for linear and nonlinear time-varying stochastic systems with quadratic and nonlinear-nonquadratic cost functionals are also provided. Finally, we also develop optimal feedback controllers for affine stochastic nonlinear systems using an inverse optimality framework tailored to the partial-state stochastic stabilization problem and use this result to address polynomial and multilinear forms in the performance criterion.


In Ref. [1], we extended the framework developed in Refs. [2,3] to address the problem of optimal partial-state stabilization, wherein stabilization with respect to a subset of the system state variables is desired. Partial-state stabilization arises in many engineering applications [4,5]. Specifically, in spacecraft stabilization via gimballed gyroscopes, asymptotic stability of an equilibrium position of the spacecraft is sought while requiring Lyapunov stability of the axis of the gyroscope relative to the spacecraft [5]. Alternatively, in the control of rotating machinery with mass imbalance, spin stabilization about a nonprincipal axis of inertia requires motion stabilization with respect to a subspace instead of the origin [4]. The most common application where partial stabilization is necessary is adaptive control, wherein asymptotic stability of the closed-loop plant states is guaranteed without necessarily achieving parameter error convergence.

In this paper, we extend the framework developed in Ref. [1] to address the problem of optimal partial-state stochastic stabilization. Specifically, we consider a notion of optimality that is directly related to a given Lyapunov function that is positive definite and decrescent with respect to part of the system state. In particular, an optimal partial-state stochastic stabilization control problem is stated, and sufficient Hamilton–Jacobi–Bellman conditions are used to characterize an optimal feedback controller. Another important application of partial stability and partial stabilization theory is the unification it provides between time-invariant stability theory and stability theory for time-varying systems [3,6]. We exploit this unification and specialize our results to address optimal linear and nonlinear regulation for linear and nonlinear time-varying stochastic systems with quadratic and nonlinear-nonquadratic cost functionals.

Our approach focuses on the role of the Lyapunov function guaranteeing stochastic stability of the closed-loop system and its connection to the steady-state solution of the stochastic Hamilton–Jacobi–Bellman equation characterizing the optimal nonlinear feedback controller. In order to avoid the complexity in solving the steady-state stochastic Hamilton–Jacobi–Bellman equation, we do not attempt to minimize a given cost functional, but rather, we parameterize a family of stochastically stabilizing controllers that minimize a derived cost functional, which provides flexibility in specifying the control law. This corresponds to addressing an inverse optimal stochastic control problem [7–13].

The inverse optimal control design approach provides a framework for constructing the Lyapunov function for the closed-loop system that serves as an optimal value function and, as shown in Refs. [11,12], achieves desired stability margins. Specifically, nonlinear inverse optimal controllers that minimize a meaningful (in the terminology of Refs. [11,12]) nonlinear-nonquadratic performance criterion involving a nonlinear-nonquadratic, non-negative-definite function of the state and a quadratic positive-definite function of the feedback control are shown to possess sector margin guarantees to component-decoupled input nonlinearities in the conic sector $(1/2, \infty)$.

The paper is organized as follows. In Sec. 2, we establish notation, definitions, and present some key results on partial stability of nonlinear stochastic dynamical systems. In Sec. 3, we consider a stochastic nonlinear system with a performance functional evaluated over the infinite horizon. The performance functional is then evaluated in terms of a Lyapunov function that guarantees partial asymptotic stability in probability. We then state a stochastic optimal control problem and provide sufficient conditions for characterizing an optimal nonlinear feedback controller guaranteeing partial asymptotic stability in probability of the closed-loop system. These results are then used to address a stochastic optimal control problem for uniform asymptotic stabilization in probability of nonlinear time-varying stochastic dynamical systems.

In Sec. 4, we develop optimal feedback controllers for affine stochastic nonlinear systems using an inverse optimality framework tailored to the partial-state stochastic stabilization problem. This result is then used to derive time-varying extensions of the results in Refs. [14,15] involving nonlinear feedback controllers minimizing polynomial and multilinear performance criteria. In Sec. 5, we provide two illustrative numerical examples that highlight the optimal partial-state stochastic stabilization framework. In Sec. 6, we present conclusions and highlight some future research directions. Finally, we note that a preliminary version of this paper appeared in Ref. [16]. The present paper considerably expands on Ref. [16] by providing detailed proofs of all the results along with examples and additional motivation.

In this section, we establish notation, definitions, and review some basic results on partial stability of nonlinear stochastic dynamical systems [17–22]. Specifically, $\mathbb{R}$ denotes the set of real numbers, $\mathbb{R}_+$ denotes the set of positive real numbers, $\overline{\mathbb{R}}_+$ denotes the set of non-negative real numbers, $\mathbb{Z}_+$ denotes the set of positive integers, $\mathbb{R}^n$ denotes the set of $n \times 1$ real column vectors, $\mathbb{R}^{n \times m}$ denotes the set of $n \times m$ real matrices, $\mathbb{N}^n$ denotes the set of $n \times n$ non-negative-definite matrices, and $\mathbb{P}^n$ denotes the set of $n \times n$ positive-definite matrices. We write $\mathcal{B}_\varepsilon(x)$ for the open ball centered at $x$ with radius $\varepsilon$, $\|\cdot\|$ for the Euclidean vector norm or an induced matrix norm (depending on context), $\|\cdot\|_{\mathrm{F}}$ for the Frobenius matrix norm, $A^{\mathrm{T}}$ for the transpose of the matrix $A$, $\otimes$ for the Kronecker product, $\oplus$ for the Kronecker sum, and $I_n$ or $I$ for the $n \times n$ identity matrix. Furthermore, $\mathfrak{B}^n$ denotes the $\sigma$-algebra of Borel sets in $\mathcal{D} \subseteq \mathbb{R}^n$, and $\mathcal{S}$ denotes a $\sigma$-algebra generated on a set $\mathcal{S} \subseteq \mathbb{R}^n$.

We define a complete probability space as $(\Omega, \mathcal{F}, \mathbb{P})$, where $\Omega$ denotes the sample space, $\mathcal{F}$ denotes a $\sigma$-algebra, and $\mathbb{P}$ defines a probability measure on the $\sigma$-algebra $\mathcal{F}$; that is, $\mathbb{P}$ is a non-negative countably additive set function on $\mathcal{F}$ such that $\mathbb{P}(\Omega) = 1$ [20]. Furthermore, we assume that $w(\cdot)$ is a standard $d$-dimensional Wiener process defined by $(w(\cdot), \Omega, \mathcal{F}, \mathbb{P}^{w_0})$, where $\mathbb{P}^{w_0}$ is the classical Wiener measure [22, p. 10], with a continuous-time filtration $\{\mathcal{F}_t\}_{t \geq 0}$ generated by the Wiener process $w(t)$ up to time $t$. We denote a stochastic dynamical system by $\mathcal{G}$ generating a filtration $\{\mathcal{F}_t\}_{t \geq 0}$ adapted stochastic process $x : \overline{\mathbb{R}}_+ \times \Omega \to \mathcal{D}$ on $(\Omega, \mathcal{F}, \mathbb{P}^{x_0})$ satisfying $\mathcal{F}_\tau \subset \mathcal{F}_t$, $0 \leq \tau < t$, such that $\{\omega \in \Omega : x(t, \omega) \in \mathcal{B}\} \in \mathcal{F}_t$, $t \geq 0$, for all Borel sets $\mathcal{B} \subset \mathbb{R}^n$ contained in the Borel $\sigma$-algebra $\mathfrak{B}^n$. Here, we use the notation $x(t)$ to represent the stochastic process $x(t, \omega)$, omitting its dependence on $\omega$.

We denote the set of equivalence classes of measurable, integrable, and square-integrable $\mathbb{R}^n$ or $\mathbb{R}^{n \times m}$ (depending on context) valued random processes on $(\Omega, \mathcal{F}, \mathbb{P})$ over the semi-infinite parameter space $[0, \infty)$ by $\mathcal{L}^0(\Omega, \mathcal{F}, \mathbb{P})$, $\mathcal{L}^1(\Omega, \mathcal{F}, \mathbb{P})$, and $\mathcal{L}^2(\Omega, \mathcal{F}, \mathbb{P})$, respectively, where the equivalence relation is the one induced by $\mathbb{P}$-almost-sure equality. In particular, elements of $\mathcal{L}^0(\Omega, \mathcal{F}, \mathbb{P})$ take finite values $\mathbb{P}$-almost surely (a.s.). Hence, depending on the context, $\mathbb{R}^n$ will denote either the set of $n \times 1$ real variables or the subspace of $\mathcal{L}^0(\Omega, \mathcal{F}, \mathbb{P})$ comprising $\mathbb{R}^n$-valued random processes that are constant almost surely. All inequalities and equalities involving random processes on $(\Omega, \mathcal{F}, \mathbb{P})$ are to be understood to hold $\mathbb{P}$-almost surely. Furthermore, $\mathbb{E}[\cdot]$ and $\mathbb{E}^{x_0}[\cdot]$ denote, respectively, the expectation with respect to the probability measure $\mathbb{P}$ and with respect to the classical Wiener measure $\mathbb{P}^{x_0}$.

Finally, we write $\mathrm{tr}(\cdot)$ for the trace operator, $(\cdot)^{-1}$ for the inverse operator, $V'(x) \triangleq \partial V(x)/\partial x$ for the Fréchet derivative of $V$ at $x$, $V''(x) \triangleq \partial^2 V(x)/\partial x^2$ for the Hessian of $V$ at $x$, and $\mathcal{H}_n$ for the Hilbert space of random vectors $x \in \mathbb{R}^n$ with finite average power, that is, $\mathcal{H}_n \triangleq \{x : \Omega \to \mathbb{R}^n : \mathbb{E}[x^{\mathrm{T}} x] < \infty\}$. For an open set $\mathcal{D} \subseteq \mathbb{R}^n$, $\mathcal{H}_n^{\mathcal{D}} \triangleq \{x \in \mathcal{H}_n : x : \Omega \to \mathcal{D}\}$ denotes the set of all random vectors in $\mathcal{H}_n$ induced by $\mathcal{D}$. Similarly, for every $x_0 \in \mathbb{R}^n$, $\mathcal{H}_n^{x_0} \triangleq \{x \in \mathcal{H}_n : x = x_0 \ \text{a.s.}\}$. Furthermore, $\mathrm{C}^2$ denotes the space of real-valued functions $V : \mathcal{D} \to \mathbb{R}$ that are two-times continuously differentiable with respect to $x \in \mathcal{D} \subseteq \mathbb{R}^n$.

In this paper, we consider nonlinear stochastic autonomous dynamical systems $\mathcal{G}$ of the form

$$dx_1(t) = f_1(x_1(t), x_2(t))\,dt + D_1(x_1(t), x_2(t))\,dw(t), \quad x_1(t_0) = x_{10} \ \text{a.s.}, \quad t \geq t_0 \tag{1}$$

$$dx_2(t) = f_2(x_1(t), x_2(t))\,dt + D_2(x_1(t), x_2(t))\,dw(t), \quad x_2(t_0) = x_{20} \ \text{a.s.} \tag{2}$$

where, for every $t \geq t_0$, $x_1(t) \in \mathcal{H}_{n_1}^{\mathcal{D}}$ and $x_2(t) \in \mathcal{H}_{n_2}$ are such that $x(t) \triangleq [x_1^{\mathrm{T}}(t), x_2^{\mathrm{T}}(t)]^{\mathrm{T}}$ is an $\mathcal{F}_t$-measurable random state vector, $x(t_0) \in \mathcal{H}_{n_1}^{\mathcal{D}} \times \mathcal{H}_{n_2}$, $\mathcal{D} \subseteq \mathbb{R}^{n_1}$ is an open set with $0 \in \mathcal{D}$, $w(t)$ is a $d$-dimensional independent standard Wiener process (i.e., Brownian motion) defined on a complete filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \geq t_0}, \mathbb{P})$, $x(t_0)$ is independent of $(w(t) - w(t_0))$, $t \geq t_0$, and $f_1 : \mathcal{D} \times \mathbb{R}^{n_2} \to \mathbb{R}^{n_1}$ is such that, for every $x_2 \in \mathbb{R}^{n_2}$, $f_1(0, x_2) = 0$ and $f_1(\cdot, x_2)$ is locally Lipschitz continuous in $x_1$, and $f_2 : \mathcal{D} \times \mathbb{R}^{n_2} \to \mathbb{R}^{n_2}$ is such that, for every $x_1 \in \mathcal{D}$, $f_2(x_1, \cdot)$ is locally Lipschitz continuous in $x_2$. In addition, the function $D_1 : \mathcal{D} \times \mathbb{R}^{n_2} \to \mathbb{R}^{n_1 \times d}$ is continuous such that, for every $x_2 \in \mathbb{R}^{n_2}$, $D_1(0, x_2) = 0$, and $D_2 : \mathcal{D} \times \mathbb{R}^{n_2} \to \mathbb{R}^{n_2 \times d}$ is continuous.
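Although the development below is analytical, sample paths of Eqs. (1) and (2) are straightforward to approximate numerically, which is useful for building intuition about partial-state behavior. The following is a minimal Euler–Maruyama sketch; the drift and diffusion functions in the usage example are hypothetical placeholders chosen only to satisfy $f_1(0, x_2) = 0$ and $D_1(0, x_2) = 0$, and are not a system studied in this paper.

```python
import numpy as np

def euler_maruyama(f1, f2, D1, D2, x10, x20, t0, tf, dt, d, rng):
    """Approximate one sample path of the partitioned SDE (1)-(2)
    with the Euler-Maruyama scheme (strong order 1/2)."""
    n_steps = int(round((tf - t0) / dt))
    x1 = np.asarray(x10, dtype=float)
    x2 = np.asarray(x20, dtype=float)
    path = [np.concatenate([x1, x2])]
    for _ in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(d)   # Wiener increment
        # Both updates use the state at the start of the step
        x1_new = x1 + f1(x1, x2) * dt + D1(x1, x2) @ dw
        x2_new = x2 + f2(x1, x2) * dt + D2(x1, x2) @ dw
        x1, x2 = x1_new, x2_new
        path.append(np.concatenate([x1, x2]))
    return np.array(path)

# Hypothetical scalar example: x1 is driven toward zero, x2 wanders
rng = np.random.default_rng(0)
path = euler_maruyama(
    f1=lambda x1, x2: -x1,                      # f1(0, x2) = 0
    f2=lambda x1, x2: np.zeros_like(x2),
    D1=lambda x1, x2: 0.1 * np.diag(x1),        # D1(0, x2) = 0
    D2=lambda x1, x2: 0.1 * np.ones((1, 1)),
    x10=[1.0], x20=[0.5], t0=0.0, tf=10.0, dt=1e-3, d=1, rng=rng)
```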

An $\mathbb{R}^{n_1 + n_2}$-valued stochastic process $x : [t_0, \tau] \times \Omega \to \mathcal{D} \times \mathbb{R}^{n_2}$ is said to be a solution of Eqs. (1) and (2) on the interval $[t_0, \tau]$ with initial condition $x(t_0) = x_0$ a.s., if $x(\cdot)$ is progressively measurable (i.e., $x(\cdot)$ is nonanticipating and measurable in $t$ and $\omega$) with respect to $\{\mathcal{F}_t\}_{t \geq t_0}$, $f(x_1, x_2) \triangleq [f_1^{\mathrm{T}}(x_1, x_2), f_2^{\mathrm{T}}(x_1, x_2)]^{\mathrm{T}} \in \mathcal{L}^1(\Omega, \mathcal{F}, \mathbb{P})$, $D(x_1, x_2) \triangleq [D_1^{\mathrm{T}}(x_1, x_2), D_2^{\mathrm{T}}(x_1, x_2)]^{\mathrm{T}} \in \mathcal{L}^2(\Omega, \mathcal{F}, \mathbb{P})$, and

$$x(t) = x_0 + \int_{t_0}^{t} f(x(s))\,ds + \int_{t_0}^{t} D(x(s))\,dw(s) \ \text{a.s.}, \quad t \in [t_0, \tau] \tag{3}$$

where the integrals in Eq. (3) are Itô integrals. Note that for each fixed $t \geq t_0$, the random variable $\omega \mapsto x(t, \omega)$ assigns a vector $x(t, \omega)$ to every outcome $\omega \in \Omega$ of an experiment, and for each fixed $\omega \in \Omega$, the mapping $t \mapsto x(t, \omega)$ is the sample path of the stochastic process $x(t)$, $t \geq t_0$. A pathwise solution $t \mapsto x(t)$ of Eqs. (1) and (2) in $(\Omega, \{\mathcal{F}_t\}_{t \geq t_0}, \mathbb{P}^{x_0})$ is said to be right maximally defined if $x$ cannot be extended (either uniquely or nonuniquely) forward in time. We assume that all right maximal pathwise solutions to Eqs. (1) and (2) in $(\Omega, \{\mathcal{F}_t\}_{t \geq t_0}, \mathbb{P}^{x_0})$ exist on $[t_0, \infty)$, and hence, we assume that Eqs. (1) and (2) are forward complete. Sufficient conditions for forward completeness or global solutions to Eqs. (1) and (2) are given by Corollary 6.3.5 of Ref. [20].

Furthermore, we assume that $f : \mathcal{D} \times \mathbb{R}^{n_2} \to \mathbb{R}^{n_1 + n_2}$ and $D : \mathcal{D} \times \mathbb{R}^{n_2} \to \mathbb{R}^{(n_1 + n_2) \times d}$ satisfy the uniform Lipschitz continuity condition

$$\|f(x) - f(y)\| + \|D(x) - D(y)\|_{\mathrm{F}} \leq L \|x - y\|, \quad x, y \in \mathcal{D} \times \mathbb{R}^{n_2} \tag{4}$$

and the growth restriction condition

$$\|f(x)\|^2 + \|D(x)\|_{\mathrm{F}}^2 \leq L^2 (1 + \|x\|^2), \quad x \in \mathcal{D} \times \mathbb{R}^{n_2} \tag{5}$$

for some Lipschitz constant $L > 0$, and hence, since $x(t_0) \in \mathcal{H}_{n_1}^{\mathcal{D}} \times \mathcal{H}_{n_2}$ and $x(t_0)$ is independent of $(w(t) - w(t_0))$, $t \geq t_0$, it follows that there exists a unique solution $x \in \mathcal{L}^2(\Omega, \mathcal{F}, \mathbb{P})$ of Eqs. (1) and (2) in the following sense. For every $x \in \mathcal{H}_{n_1}^{\mathcal{D}} \times \mathcal{H}_{n_2}$, there exists $\tau_x > 0$ such that, if $x_{\mathrm{I}} : [t_0, \tau_1] \times \Omega \to \mathcal{D} \times \mathbb{R}^{n_2}$ and $x_{\mathrm{II}} : [t_0, \tau_2] \times \Omega \to \mathcal{D} \times \mathbb{R}^{n_2}$ are two solutions of Eqs. (1) and (2), that is, if $x_{\mathrm{I}}, x_{\mathrm{II}} \in \mathcal{L}^2(\Omega, \mathcal{F}, \mathbb{P})$ with continuous sample paths almost surely solve Eqs. (1) and (2), then $\tau_x \leq \min\{\tau_1, \tau_2\}$ and $\mathbb{P}(x_{\mathrm{I}}(t) = x_{\mathrm{II}}(t),\ t_0 \leq t \leq \tau_x) = 1$. Sufficient conditions for forward existence and uniqueness in the absence of the uniform Lipschitz continuity condition and growth restriction condition can be found in Refs. [23,24].
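As a practical aside, condition (4) can be spot-checked numerically for a candidate model before invoking the existence and uniqueness result. The sketch below (a hypothetical helper, not part of the paper's development) computes a crude empirical lower bound on $L$ from sampled state pairs; it cannot certify the bound, only falsify a proposed constant.

```python
import numpy as np

def empirical_lipschitz_bound(f, D, points):
    """Lower-bound the constant L of Eq. (4) over sampled states.
    `f(x)` returns a vector; `D(x)` returns a matrix."""
    pts = [np.asarray(p, dtype=float) for p in points]
    L_est = 0.0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            x, y = pts[i], pts[j]
            gap = np.linalg.norm(x - y)
            if gap > 1e-12:
                num = (np.linalg.norm(f(x) - f(y))
                       + np.linalg.norm(D(x) - D(y), ord='fro'))
                L_est = max(L_est, num / gap)
    return L_est  # any valid L in Eq. (4) must satisfy L >= L_est
```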

A solution $t \mapsto [x_1^{\mathrm{T}}(t), x_2^{\mathrm{T}}(t)]^{\mathrm{T}}$ is said to be regular if and only if $\mathbb{P}^{x_0}(\tau_e = \infty) = 1$ for all $x(0) \in \mathcal{H}_{n_1}^{\mathcal{D}} \times \mathcal{H}_{n_2}$, where $\tau_e$ is the first stopping time of the solution to Eqs. (1) and (2) from every bounded domain in $\mathcal{D} \times \mathbb{R}^{n_2}$. Recall that regularity of solutions implies that solutions exist for $t \geq t_0$ almost surely. Here, we assume regularity of solutions to Eqs. (1) and (2), and hence, $\tau_x = \infty$ [18, p. 75]. Moreover, the unique solution determines an $\mathbb{R}^{n_1 + n_2}$-valued, time-homogeneous Feller continuous Markov process $x(\cdot)$, and hence, its stationary Feller transition probability function is given by (Refs. [18, Theorem 3.4] and [20, Theorem 9.2.8]) $\mathbb{P}(x(t) \in \mathcal{B} \,|\, x(t_0) \overset{\text{a.s.}}{=} x_0) = \mathbb{P}(t - t_0, x_0, 0, \mathcal{B})$ for all $x_0 \in \mathcal{D} \times \mathbb{R}^{n_2}$ and $t \geq t_0$, and all Borel subsets $\mathcal{B}$ of $\mathcal{D} \times \mathbb{R}^{n_2}$, where $\mathbb{P}(s, x, t, \mathcal{B})$, $t \geq s$, denotes the probability of transition of the point $x \in \mathcal{D} \times \mathbb{R}^{n_2}$ at time instant $s$ into the set $\mathcal{B} \subset \mathcal{D} \times \mathbb{R}^{n_2}$ at time instant $t$. Finally, recall that every continuous process with Feller transition probability function is also a strong Markov process [18, p. 101].

Definition 2.1 [22, Definition 7.7]. Let $x(\cdot)$ be a time-homogeneous Markov process in $\mathcal{H}_{n_1}^{\mathcal{D}} \times \mathcal{H}_{n_2}$ and let $V : \mathcal{D} \times \mathbb{R}^{n_2} \to \mathbb{R}$. Then, the infinitesimal generator $\mathcal{L}$ of $x(t)$, $t \geq 0$, with $x(0) = x_0$ a.s., is defined by

$$\mathcal{L}V(x_0) \triangleq \lim_{t \to 0^+} \frac{\mathbb{E}^{x_0}[V(x(t))] - V(x_0)}{t}, \quad x_0 \in \mathcal{D} \times \mathbb{R}^{n_2} \tag{6}$$

If $V \in \mathrm{C}^2$ and has compact support, and $x(t)$, $t \geq t_0$, satisfies Eqs. (1) and (2), then the limit in Eq. (6) exists for all $x \in \mathcal{D} \times \mathbb{R}^{n_2}$ and the infinitesimal generator $\mathcal{L}$ of $x(t)$, $t \geq t_0$, can be characterized by the system drift and diffusion functions $f(x)$ and $D(x)$ defining the stochastic dynamical system (1) and (2) with system state $x(t)$, $t \geq t_0$, and is given by [22, Theorem 7.9]

$$\mathcal{L}V(x) = \frac{\partial V(x)}{\partial x} f(x) + \frac{1}{2} \mathrm{tr}\left[ D^{\mathrm{T}}(x) \frac{\partial^2 V(x)}{\partial x^2} D(x) \right], \quad x \in \mathcal{D} \times \mathbb{R}^{n_2} \tag{7}$$
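Equation (7) is mechanical to evaluate for concrete drift, diffusion, and Lyapunov candidates. As a sanity check, the following sketch computes $\mathcal{L}V$ symbolically for a hypothetical scalar system $dx = -x\,dt + \sigma x\,dw$ with $V(x) = x^2$; this example is our choice for illustration and is not one of the paper's examples.

```python
import sympy as sp

x, sigma = sp.symbols('x sigma', real=True)
f = -x             # drift
D = sigma * x      # diffusion, vanishing at the origin
V = x**2           # Lyapunov candidate

# Eq. (7): LV(x) = V'(x) f(x) + (1/2) tr[D^T(x) V''(x) D(x)]
LV = sp.diff(V, x) * f + sp.Rational(1, 2) * D * sp.diff(V, x, 2) * D
print(sp.simplify(LV))   # -> x**2*(sigma**2 - 2)
```

Here $\mathcal{L}V(x) = (\sigma^2 - 2)x^2$, so this candidate certifies stochastic stability of the origin precisely when the noise intensity satisfies $\sigma^2 < 2$.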

In the following definition, we introduce the notion of stochastic partial stability.

Definition 2.2. (i) The nonlinear stochastic dynamical system $\mathcal{G}$ given by Eqs. (1) and (2) is Lyapunov stable in probability with respect to $x_1$ uniformly in $x_{20}$ if, for every $\varepsilon > 0$ and $\rho > 0$, there exists $\delta = \delta(\rho, \varepsilon) > 0$ such that, for all $x_{10} \in \mathcal{B}_\delta(0)$,

$$\mathbb{P}^{x_0}\left( \sup_{t \geq t_0} \|x_1(t)\| > \varepsilon \right) \leq \rho \tag{8}$$

for all $x_{20} \in \mathbb{R}^{n_2}$.

(ii) $\mathcal{G}$ is asymptotically stable in probability with respect to $x_1$ uniformly in $x_{20}$ if $\mathcal{G}$ is Lyapunov stable in probability with respect to $x_1$ uniformly in $x_{20}$ and

$$\lim_{x_{10} \to 0} \mathbb{P}^{x_0}\left( \lim_{t \to \infty} x_1(t) = 0 \right) = 1 \tag{9}$$

uniformly in $x_{20}$ for all $x_{20} \in \mathbb{R}^{n_2}$.

(iii) $\mathcal{G}$ is globally asymptotically stable in probability with respect to $x_1$ uniformly in $x_{20}$ if $\mathcal{G}$ is Lyapunov stable in probability with respect to $x_1$ uniformly in $x_{20}$ and $\mathbb{P}^{x_0}(\lim_{t \to \infty} x_1(t) = 0) = 1$ holds uniformly in $x_{20}$ for all $(x_{10}, x_{20}) \in \mathbb{R}^{n_1} \times \mathbb{R}^{n_2}$.
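The probability in Eq. (8) can only be estimated empirically over a finite horizon and finitely many sample paths, but such an estimate is a useful diagnostic when testing a candidate system or controller. The sketch below reuses the hypothetical `euler_maruyama` helper defined earlier; it approximates the exceedance probability $\mathbb{P}^{x_0}(\sup_{t} \|x_1(t)\| > \varepsilon)$ by the fraction of simulated paths whose $x_1$ component leaves the $\varepsilon$-ball.

```python
import numpy as np

def exceedance_probability(eps, n1, n_paths, sim_kwargs):
    """Monte Carlo estimate of the probability in Eq. (8) over a
    finite horizon (an approximation of the sup over all t >= t0)."""
    count = 0
    for seed in range(n_paths):
        kwargs = dict(sim_kwargs, rng=np.random.default_rng(seed))
        path = euler_maruyama(**kwargs)
        x1_norms = np.linalg.norm(path[:, :n1], axis=1)
        if np.max(x1_norms) > eps:
            count += 1
    return count / n_paths
```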

Remark 2.1. It is important to note that there is a key difference between the stochastic partial stability definitions given in Definition 2.2 and the definitions of stochastic partial stability given in Ref. [21]. In particular, the definitions given in Ref. [21] require that both the initial conditions $x_{10}$ and $x_{20}$ lie in a neighborhood of the origin, whereas in Definition 2.2, $x_{20}$ can be arbitrary. As will be seen below, this difference allows us to unify autonomous stochastic partial stability theory with time-varying stochastic stability theory. An additional difference between our formulation of the stochastic partial stability problem and the one considered in Ref. [21] is in the treatment of the equilibrium of Eqs. (1) and (2). Specifically, in our formulation, we require the weaker partial equilibrium condition $f_1(0, x_2) = 0$ and $D_1(0, x_2) = 0$ for every $x_2 \in \mathbb{R}^{n_2}$, whereas Ref. [21] requires the stronger equilibrium condition $f_1(0, 0) = 0$, $f_2(0, 0) = 0$, $D_1(0, 0) = 0$, and $D_2(0, 0) = 0$.

Remark 2.2. A more general stochastic stability notion can also be introduced here involving stochastic stability and convergence to an invariant (stationary) distribution. In this case, state convergence is not to an equilibrium point but rather to a stationary distribution. This framework can relax the vanishing perturbation assumption $D_1(0, x_2) = 0$, $x_2 \in \mathbb{R}^{n_2}$, but requires a more involved analysis and synthesis framework showing stability of the underlying Markov semigroup [25].

As shown in Refs. [3] and [6], an important application of deterministic partial stability theory is the unification it provides between time-invariant stability theory and stability theory for time-varying systems. A similar unification can be provided for stochastic dynamical systems. Specifically, consider the nonlinear time-varying stochastic dynamical system given by

$$dx(t) = f(t, x(t))\,dt + D(t, x(t))\,dw(t), \quad x(t_0) = x_0 \ \text{a.s.}, \quad t \geq t_0 \tag{10}$$
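To see the mechanism behind this unification (sketched here along the lines of the deterministic construction in Refs. [3,6]; the relabeling into $x_1$ and $x_2$ below is ours for illustration), the time-varying system (10) can be recast as an autonomous system of the form of Eqs. (1) and (2) by appending time as an additional state:

$$
\begin{aligned}
dx_1(t) &= f(x_2(t), x_1(t))\,dt + D(x_2(t), x_1(t))\,dw(t), \qquad x_1(t_0) = x_0 \ \text{a.s.}, \quad t \geq t_0,\\
dx_2(t) &= dt, \qquad x_2(t_0) = t_0,
\end{aligned}
$$

where $x_1$ plays the role of the original state and $x_2$ the role of time. Since $x_{20} = t_0$ is arbitrary in Definition 2.2, partial stability with respect to $x_1$ uniformly in $x_{20}$ of the augmented system captures the uniform (in $t_0$) stability notions for Eq. (10).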