Research Papers

Position-Based Visual Servoing of a Micro-Aerial Vehicle Operating Indoor

Author and Article Information
Hanoch Efraim

Department of Mechanical Engineering,
Ben Gurion University of the Negev,
Beer Sheva 8410501, Israel
e-mail: hanoche@post.bgu.ac.il;
Department of Electrical
and Electronics Engineering,
Shamoon College of Engineering,
Beer Sheva 8434231, Israel

Amir Shapiro

Department of Mechanical Engineering,
Ben Gurion University of the Negev,
Beer Sheva 8410501, Israel
e-mail: ashapiro@bgu.ac.il

Moshe Zohar

Department of Electrical
and Electronics Engineering,
Shamoon College of Engineering,
Beer Sheva 8434231, Israel
e-mail: moshezo@ac.sce.ac.il

Gera Weiss

Department of Computer Science,
Ben Gurion University of the Negev,
Beer Sheva 8410501, Israel
e-mail: geraw@cs.bgu.ac.il

Contributed by the Dynamic Systems Division of ASME for publication in the JOURNAL OF DYNAMIC SYSTEMS, MEASUREMENT, AND CONTROL. Manuscript received July 24, 2017; final manuscript received July 7, 2018; published online September 7, 2018. Assoc. Editor: Evangelos Papadopoulos.

J. Dyn. Sys., Meas., Control 141(1), 011003 (Sep 07, 2018) (12 pages). Paper No. DS-17-1382; doi: 10.1115/1.4040920. History: Received July 24, 2017; Revised July 7, 2018.

In this work, we suggest a novel solution to a very specific problem: calculating the pose (position and attitude) of a micro-aerial vehicle (MAV) operating inside corridors and in front of windows. The proposed method uses a single image, captured by a front-facing camera, of specific features whose three-dimensional (3D) model is partially known. No prior knowledge of the size of the corridor or the window is needed, nor of the ratio between their width and height. The position is calculated up to an unknown scale using a gain-scheduled iterative algorithm. To compensate for the unknown scale, an adaptive controller that ensures consistent closed-loop behavior is suggested. The attitude calculation can be used as is, or the results can be fused with angular velocity sensors to achieve better estimation. In this paper, the algorithm is presented and the approach is demonstrated with simulations and experiments.

Copyright © 2019 by ASME


Figures

Fig. 1

Micro-aerial vehicle cascaded control architecture

Fig. 2

The highlighted lines are the visual measurements used by the suggested controller: Lcl denotes the line at the ceiling left, Lcr the line at the ceiling right, Lfl the line at the floor left, and Lfr the line at the floor right. The only additional measurements required are angular velocities, which can be obtained with a microelectromechanical system (MEMS) gyroscope and used by a low-level angular velocity controller.

Fig. 3

Inertial and body frames of reference. As is commonly used in aviation, the X̂B axis is directed "forward," the ŶB axis is directed to the right, and the ẐB axis is directed downward. On the left, the inertial frame A is defined such that X̂A is the "forward" direction of the corridor. The line [t, 0, 0]^T with t ∈ ℝ lies within the floor plane in the middle of the corridor. On the right, the inertial frame A is defined such that its origin (0, 0, 0) is located at the center of the window. ŶA and ẐA are within the window plane, ẐA is directed down, and ŶA is directed to the right (when viewing from the side of the MAV at the beginning of the maneuver).
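For readers implementing the frame conventions above, the following is a minimal sketch of the standard aerospace yaw-pitch-roll (ZYX) rotation from the body frame (x forward, y right, z down) to the inertial frame. This is the common convention consistent with the caption; the paper's own rotation definitions and notation may differ.

```python
import numpy as np

def body_to_inertial(roll, pitch, yaw):
    """Standard aerospace ZYX rotation: R = Rz(yaw) @ Ry(pitch) @ Rx(roll).

    Maps coordinates expressed in the body frame (x forward, y right,
    z down, as in Fig. 3) into the inertial frame. Angles are in radians.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```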

Fig. 4

Algorithm structure. The iterative algorithm is repeated for each image taken. In each iteration, the image is compared with a virtual image constructed from the current pose and corridor estimation. The differences between these images are used to generate a correction term that is used to update the pose and corridor proportion estimate for the next iteration.
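The loop described in this caption can be summarized, in schematic form, as a fixed-gain iterative correction. The sketch below is illustrative and is not the paper's implementation: predict_lines stands in for the projection of the estimated corridor onto the image plane, and gain for the (possibly gain-scheduled) correction gains.

```python
import numpy as np

def iterate_pose_estimate(measured_lines, predict_lines, state0, gain, n_iter=100):
    """Schematic form of the loop in Fig. 4 (illustrative only).

    state0 bundles the pose estimate and the corridor-proportion estimate;
    predict_lines(state) constructs the "virtual image" of the corridor
    lines for the current estimate; gain maps the image-plane differences
    to a correction of the state for the next iteration.
    """
    state = np.asarray(state0, dtype=float)
    for _ in range(n_iter):
        predicted = predict_lines(state)        # virtual image from the estimate
        error = measured_lines - predicted      # difference between the two images
        state = state + gain @ error            # correction term updates the estimate
    return state
```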

Fig. 5

Corridor lines detected in the image (Lfl, Lfr, Lcl, Lcr) and the corresponding corridor lines calculated at each iteration

Fig. 6

Simulation of the algorithm using low gains. The purpose of this figure is to help visualize the concept behind the algorithm. The unchanging lines in each row depict the corridor lines as viewed from the front-facing camera. The evolving lines are the lines calculated according to the pose estimate at each iteration; n is the iteration number. The initial estimate is always θ̂ = 0, ϕ̂ = 0, ψ̂ = 0, P̂z = az/2, and âyz = 1. In this simulation, the pose of the camera is identical to the initial estimate apart from a single parameter in each of the first three rows: in the top row, the lateral position is Py = −0.8; in the second row, the roll angle is ϕ = 0.25 rad; in the third row, the corridor width-to-height ratio ayz differs from the initial value of the corresponding state variable âyz. The fourth row shows the simultaneous convergence of all of the pose parameters, that is, when the MAV is far from being centered in the corridor.

Fig. 7

Convergence of the algorithm to the pose of the MAV and the proportions of the corridor when all of the MAV pose parameters are simultaneously far from the initial values of the algorithm state variables. In this simulation, the pose of the camera is θ = 0.26 rad (15 deg), ϕ = 0.26 rad, ψ = 0.26 rad, Py = 0.8ay, Pz = −0.2az, and ayz = 1. The initial estimate is ϕ̂ = 0, ψ̂ = 0, P̂y = 0, P̂z = 0, and âyz = 0.6.

Fig. 8

Wide corridor initial acquisition simulations. The plots show the average absolute value of the pose and corridor proportions as a function of the number of iterations from a total of 1,386,315 simulations.

Fig. 9

The gain-scheduled gains Gy and Gz. The dots represent the values chosen for specific values of ayz, and the blue line represents the polynomial curve calculated using the MATLAB polynomial fitting algorithm. The polynomial is used to calculate the correction gains according to the corridor proportions.
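As a rough illustration of the fitting step mentioned in this caption, a polynomial can be fitted to hand-tuned gains sampled at several corridor proportions and then evaluated online. The sample values below are placeholders, not the gains from the paper, and NumPy's polyfit is used here in place of the MATLAB routine.

```python
import numpy as np

# Placeholder data: hand-chosen correction gains G_y at a few corridor
# width-to-height ratios a_yz (illustrative values, not taken from the paper).
a_yz_samples = np.array([0.4, 0.7, 1.0, 1.5, 2.0, 3.0])
g_y_samples = np.array([0.90, 0.70, 0.55, 0.40, 0.30, 0.20])

# Fit a low-order polynomial to the sampled gains, analogous to the
# MATLAB polynomial fit mentioned in the caption.
coeffs = np.polyfit(a_yz_samples, g_y_samples, deg=3)

def scheduled_gain(a_yz_estimate):
    """Evaluate the fitted gain polynomial at the current proportion estimate."""
    return np.polyval(coeffs, a_yz_estimate)

print(scheduled_gain(1.2))  # correction gain used when a_yz is estimated near 1.2
```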

Fig. 10

Parameter convergence during gain scheduled continuous acquisition simulations. The plots show the average absolute value of the pose and corridor proportions as a function of the number of iterations from a total of 2,405,916 simulations.

Fig. 11

Simulation results depicting algorithm convergence in the image plane. The fixed quadrilateral represents the measured window image, and the evolving quadrilateral represents the window image predicted from the pose estimate at each iteration.

Fig. 12

The MAV initially takes off in an open-loop mode to estimate the thrust required to compensate for its weight, and at t = 2 s the system enters the closed-loop mode with a sinusoidal reference signal. The MAV altitude converges to the ideal response generated by the model reference system.

Fig. 13

The evolution of the adaptation parameter during the experiment with a sinusoidal reference signal (see Fig. 12 for the altitude and reference signal in this experiment).
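To give a feel for how such an adaptation parameter can evolve, the following is a minimal, textbook MIT-rule example of adapting a feedforward gain when the plant gain (playing the role of the unknown visual scale) is not known in advance. It is a generic sketch, not the adaptive controller used in the paper.

```python
import numpy as np

# Minimal, textbook MIT-rule gain adaptation (generic sketch, not the paper's
# controller): the plant gain k_true is unknown, and the adaptation parameter
# theta is adjusted so that the closed loop follows a reference model.
dt, T = 0.01, 20.0
t = np.arange(0.0, T, dt)
r = np.sin(0.5 * t)              # sinusoidal reference, as in Fig. 12

k_true, k_model = 2.0, 1.0       # unknown plant gain vs. reference-model gain
gamma = 0.5                      # adaptation rate
y, y_m, theta = 0.0, 0.0, 0.0    # plant output, model output, adaptation parameter
theta_hist = np.zeros_like(t)

for i, rc in enumerate(r):
    u = theta * rc                        # adjustable feedforward gain
    y += dt * (-y + k_true * u)           # first-order plant (Euler integration)
    y_m += dt * (-y_m + k_model * rc)     # first-order reference model
    e = y - y_m                           # model-following error
    theta += dt * (-gamma * y_m * e)      # MIT-rule update; theta -> k_model / k_true
    theta_hist[i] = theta
```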

Fig. 14

In this figure, each square represents a pixel in the image. The highlighted pixel in the middle is the candidate pixel, and the highlighted circle of pixels represents the pixels that are examined to determine whether the candidate pixel is a line-edge pixel. If the pixels on the circle can be divided into 12 consecutive pixels whose values are below a certain threshold, followed by 12 consecutive pixels whose values are above another threshold, the candidate pixel is detected as a line edge.
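A minimal sketch of this test is shown below, assuming a discrete ring of 24 pixels around the candidate (the exact ring radius and thresholds used in the paper may differ): the ring values must split, at some rotation, into 12 consecutive dark pixels followed by 12 consecutive bright ones.

```python
import numpy as np

# Offsets of a discrete ring around the candidate pixel (an illustrative
# radius-4 ring with 24 samples; the ring used in the paper may differ).
CIRCLE = [(0, 4), (1, 4), (2, 3), (3, 3), (4, 2), (4, 1),
          (4, 0), (4, -1), (4, -2), (3, -3), (2, -3), (1, -4),
          (0, -4), (-1, -4), (-2, -3), (-3, -3), (-4, -2), (-4, -1),
          (-4, 0), (-4, 1), (-4, 2), (-3, 3), (-2, 3), (-1, 4)]

def is_line_edge(img, row, col, t_low, t_high, arc=12):
    """Return True if (row, col) is a line-edge pixel in the sense of Fig. 14:
    the ring around it splits into `arc` consecutive pixels below t_low
    followed by `arc` consecutive pixels above t_high, for some rotation."""
    vals = np.array([img[row + dr, col + dc] for dr, dc in CIRCLE])
    n = len(vals)
    for start in range(n):
        idx = (np.arange(2 * arc) + start) % n      # wrap around the ring
        window = vals[idx]
        if np.all(window[:arc] < t_low) and np.all(window[arc:] > t_high):
            return True
    return False
```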

Fig. 15

Images of the corridor taken during flight. Left image: the four pairs of white points along the corridor lines mark the beginning and end of each detected line. Right image: the image is blurred due to a combination of MAV motion, low lighting, and relatively long exposure, and the corridor lines are not detected accurately.
