Automated aerial suspended cargo delivery through reinforcement learning
Authors:
Abstract
Cargo-bearing unmanned aerial vehicles (UAVs) have tremendous potential to assist humans by delivering food, medicine, and other supplies. For time-critical cargo delivery tasks, UAVs need to be able to quickly navigate their environments and deliver suspended payloads with bounded load displacement. This is challenging because it is a constraint-balancing task over the joint dynamics of the UAV and its suspended load. This article presents a reinforcement learning approach for aerial cargo delivery tasks in environments with static obstacles. We first learn a minimal residual oscillations task policy in obstacle-free environments using a specifically designed feature vector for value function approximation that allows generalization beyond the training domain. The method works in continuous state and discrete action spaces. Since planning for aerial cargo requires a very large action space (over 10^6 actions) that is impractical for learning, we define formal conditions for a class of robotics problems where learning can occur in a simplified problem space and successfully transfer to a broader problem space. Exploiting these guarantees and relying on the discrete action space, we learn the swing-free policy in a subspace several orders of magnitude smaller, and later develop a method for swing-free trajectory planning along a path. As an extension to tasks in environments with static obstacles where the load displacement needs to be bounded throughout the trajectory, sampling-based motion planning generates collision-free paths. Next, a reinforcement learning agent transforms these paths into trajectories that maintain the bound on the load displacement while following the collision-free path in a timely manner. We verify the approach both in simulation and in experiments on a quadrotor with a suspended load, and demonstrate the method's safety and feasibility by having a quadrotor deliver an open container of liquid to a human subject.
The contributions of this work are two-fold. First, this article presents a solution to a challenging and vital problem: planning a constraint-balancing task for an inherently unstable non-linear system in the presence of obstacles. Second, AI and robotics researchers can both benefit from the provided theoretical guarantees of system stability on a class of constraint-balancing tasks that occur in very large action spaces.
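The core mechanism described in the abstract, a linear value function over a designed feature vector combined with greedy selection over a discrete action set, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the feature choices, weights, and the toy one-dimensional quadrotor-load dynamics below are all assumptions made for the example.

```python
import numpy as np

def features(state):
    """Hypothetical feature vector over the joint UAV-suspended-load state:
    squared position error, velocity, load angle, and load angular rate."""
    pos_err, vel, angle, angle_rate = state
    return np.array([pos_err**2, vel**2, angle**2, angle_rate**2])

def value(weights, state):
    """Linear value approximation V(s) = w . f(s)."""
    return weights @ features(state)

def greedy_action(weights, state, actions, step):
    """Pick the discrete action whose simulated successor state has the
    highest approximated value (the policy is greedy in V)."""
    return max(actions, key=lambda a: value(weights, step(state, a)))

def step(state, accel, dt=0.05):
    """Toy 1-D stand-in for the quadrotor-load simulator: the vehicle
    acceleration excites a linearized pendulum (the suspended load)."""
    pos_err, vel, angle, angle_rate = state
    vel += accel * dt
    pos_err += vel * dt
    angle_rate -= (9.81 * angle + accel) * dt  # load reacts to acceleration
    angle += angle_rate * dt
    return (pos_err, vel, angle, angle_rate)

# Negative weights: higher value means closer to the goal with less swing.
w = np.array([-1.0, -0.1, -10.0, -1.0])
actions = np.linspace(-2.0, 2.0, 21)   # coarse discrete acceleration set

# From 1 m position error at rest, the greedy policy accelerates toward
# the goal, trading progress against induced load swing.
a = greedy_action(w, (1.0, 0.0, 0.0, 0.0), actions, step)
```

The discrete action set is what makes the exhaustive `max` over actions feasible; the paper's point about transfer is that the weights `w` learned in a small action subspace can be reused when the action set is refined to its full size.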
Keywords: Reinforcement learning, UAVs, Aerial cargo delivery, Probabilistic roadmaps, Motion planning, Trajectory planning, Robotics, Rotorcraft
Article history: Received 7 September 2013, Revised 21 September 2014, Accepted 23 November 2014, Available online 19 December 2014, Version of Record 25 April 2017.
DOI: https://doi.org/10.1016/j.artint.2014.11.009