Research Papers

Trading Safety Versus Performance: Rapid Deployment of Robotic Swarms With Robust Performance Constraints

Author and Article Information
Yin-Lam Chow

Institute for Computational and
Mathematical Engineering,
Stanford University,
Stanford, CA 94305
e-mail: ychow@stanford.edu

Marco Pavone

Department of Aeronautics and Astronautics,
Stanford University,
Stanford, CA 94305
e-mail: pavone@stanford.edu

Brian M. Sadler

Army Research Laboratory,
Adelphi, MD 20783
e-mail: brian.m.sadler6.civ@mail.mil

Stefano Carpin

School of Engineering,
University of California,
Merced, CA 95343
e-mail: scarpin@ucmerced.edu

Such mass distribution not only exists, but can be explicitly computed.

Pr_{x₀}^π[X_t = x] is the probability that X_t = x given the initial state x₀ ∈ X′ and the policy π.
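This probability can be computed by propagating the initial distribution through the transition matrix induced by the policy. The sketch below assumes a hypothetical 3-state chain; the transition matrix is illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical Markov chain induced by a fixed policy pi:
# P[i, j] = probability of moving from state i to state j under pi.
P = np.array([
    [0.5, 0.4, 0.1],
    [0.0, 0.6, 0.4],
    [0.0, 0.0, 1.0],   # state 2 is absorbing
])

def state_distribution(P, x0, t):
    """Return the distribution of X_t given X_0 = x0 under the policy."""
    dist = np.zeros(P.shape[0])
    dist[x0] = 1.0
    for _ in range(t):
        dist = dist @ P   # one step of the chain
    return dist

dist = state_distribution(P, x0=0, t=2)
# dist[x] equals Pr_{x0}^pi[X_t = x]
```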

More generally, more than one linear programming formulation can be used, and methods based on Lagrange multipliers have been introduced as well. However, these alternatives will not be considered in this paper.
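To illustrate the linear programming approach to CMDPs in general, the sketch below solves a generic discounted CMDP over occupation measures: maximize expected reward subject to flow-conservation constraints and a bound on expected discounted cost. All numerical data (transitions, rewards, costs, budget) are hypothetical and not the paper's deployment model.

```python
import numpy as np
from scipy.optimize import linprog

nS, nA, gamma = 2, 2, 0.9
# Hypothetical transition kernel P[a, s, s'], reward r[s, a], cost d[s, a]
P = np.array([[[0.9, 0.1], [0.1, 0.9]],    # action 0: slow but safe
              [[0.2, 0.8], [0.8, 0.2]]])   # action 1: fast but costly
r = np.array([[0.0, 1.0], [1.0, 0.0]])
d = np.array([[0.0, 1.0], [0.0, 1.0]])     # only the fast action incurs cost
mu0 = np.array([1.0, 0.0])                 # initial state distribution
D = 3.0                                    # bound on expected discounted cost

# Decision variables: occupation measure rho(s, a), flattened as s*nA + a.
c = -r.flatten()                           # linprog minimizes, so negate reward
A_eq = np.zeros((nS, nS * nA))
for sp in range(nS):                       # flow conservation per state sp
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = (sp == s) - gamma * P[a, s, sp]
b_eq = mu0
A_ub = d.flatten()[None, :]                # expected discounted cost <= D
b_ub = [D]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
rho = res.x.reshape(nS, nA)
# An optimal (possibly randomized) policy takes pi(a|s) proportional to rho(s, a).
```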

The number of vertices indeed depends on the value of Γ. In the extreme case where Γ = 0 the uncertainty set U has only one vertex.
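For a budgeted interval uncertainty set, the extreme points can be enumerated directly; with an integer budget Γ, each extreme point pushes exactly Γ coordinates to their bounds, so Γ = 0 leaves only the nominal point. The set below is an illustrative budgeted-interval construction, not necessarily the exact set U used in the paper.

```python
from itertools import combinations, product
import numpy as np

def budget_vertices(p_bar, delta, Gamma):
    """Extreme points of the budgeted interval set
    U = { p : |p_i - p_bar_i| <= delta_i, at most Gamma coordinates deviate }.
    For integer Gamma, each extreme point sets exactly Gamma coordinates
    to a bound; Gamma = 0 leaves the single nominal point p_bar."""
    n = len(p_bar)
    verts = []
    for idx in combinations(range(n), Gamma):
        for signs in product([-1.0, 1.0], repeat=Gamma):
            p = np.array(p_bar, dtype=float)
            for i, s in zip(idx, signs):
                p[i] += s * delta[i]
            verts.append(p)
    return verts

nominal_only = budget_vertices([0.5, 0.3, 0.2], [0.05, 0.05, 0.05], Gamma=0)
# With Gamma = 0 the set has a single vertex: the nominal vector itself.
```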

Contributed by the Dynamic Systems Division of ASME for publication in the JOURNAL OF DYNAMIC SYSTEMS, MEASUREMENT, AND CONTROL. Manuscript received February 1, 2014; final manuscript received June 7, 2014; published online October 21, 2014. Assoc. Editor: Dejan Milutinovic.

J. Dyn. Sys., Meas., Control 137(3), 031005 (Oct 21, 2014) (11 pages) Paper No: DS-14-1057; doi: 10.1115/1.4028117

In this paper, we consider a stochastic deployment problem, where a robotic swarm is tasked with the objective of positioning at least one robot at each of a set of pre-assigned targets while meeting a temporal deadline. Travel times and failure rates are stochastic but related, inasmuch as failure rates increase with speed. To maximize chances of success while meeting the deadline, a control strategy must therefore balance safety and performance. Our approach is to cast the problem within the theory of constrained Markov decision processes (CMDPs), whereby we seek to compute policies that maximize the probability of successful deployment while ensuring that the expected duration of the task is bounded by a given deadline. To account for uncertainties in the problem parameters, we consider a robust formulation and propose efficient solution algorithms, which are of independent interest. Numerical experiments confirming our theoretical results are presented and discussed.

Copyright © 2015 by ASME


Figures

Fig. 1

A sigmoidal shape for the safety function Se associated with the edges in the graph
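The paper's exact parametrization of the safety function is not reproduced here; as an illustration, a sigmoidal safety function can be modeled with a logistic curve in the traversal speed, with hypothetical shape parameters v0 (inflection speed) and k (steepness).

```python
import numpy as np

def safety(v, v0=1.0, k=4.0):
    """Illustrative sigmoidal safety function for an edge: the probability
    that traversal at speed v succeeds. Safety decays smoothly from ~1 at
    low speed to ~0 at high speed; v0 and k are hypothetical parameters."""
    return 1.0 / (1.0 + np.exp(k * (v - v0)))
```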

Fig. 2

Given a graph G = (X, E) and a policy π, multiple stochastic paths from the deployment vertex v0 to the target vertex set T exist. Whenever a failure occurs, the state enters S (dashed arrows). States outside the box labeled M are in X.

Fig. 3

The map used to experimentally evaluate the deployment policies is the same as the one used in Ref. [10]. The deployment vertex is marked with a triangle, whereas goal vertices are indicated by crosses. Edges between vertices indicate that a path exists.

Fig. 4

Success rate as a function of the number of robots for different temporal deadlines using random uniform assignment

Fig. 5

Success rate as a function of the number of robots for different temporal deadlines using optimal target assignment
