Learning to Fly… Forever

Dynamic soaring is a biologically inspired flight strategy that could extend the operational time and range of small unmanned aerial vehicles. Developing an autonomous flight controller that can exploit the energy available in the wind has proven difficult. Recent advances in artificial intelligence and deep reinforcement learning have shown promising results on continuous control problems with long time horizons.

The purpose of this effort is to show that deep reinforcement learning algorithms are capable of tuning control policies for dynamic soaring in a variety of conditions, including real-time management of power systems. This tuning process can be initialized from a basis policy with randomly generated parameters, meaning no expert data, reference trajectories, or flight vehicle properties are built into the controller. This “model-free” approach (from the perspective of the controller) necessitates a virtual model of the environment, as it is not practical to train flight vehicle maneuvering tasks on physical vehicles at scale.