Machine Learning for Intelligent Control: Application of Reinforcement Learning Techniques to the Development of Flight Control Systems for Miniature UAV Rotorcraft

Type of content
Theses / Dissertations
Thesis discipline
Mechanical Engineering
Degree name
Master of Engineering
Publisher
University of Canterbury. Department of Mechanical Engineering
Date
2013
Authors
Hayes, Edwin Laurie
Abstract

This thesis investigates the possibility of using reinforcement learning (RL) techniques to create a flight controller for a quadrotor Micro Aerial Vehicle (MAV).

A capable flight control system is a core requirement of any unmanned aerial vehicle. The challenging and diverse applications in which MAVs are destined to be used mean that considerable time and effort must be put into designing and commissioning suitable flight controllers. It is proposed that reinforcement learning, a subset of machine learning, could be used to address some of these practical difficulties.

While much research has explored RL in unmanned aerial vehicle applications, this work has tended to ignore low-level motion control, or has been concerned only with off-line learning regimes. This thesis addresses an area in which accessible information is scarce: the performance of RL when used for on-line motion control.

Trying out a candidate algorithm on a real MAV is a simple but expensive proposition. In place of such an approach, this research details the development of a suitable simulator environment in which a prototype controller might be evaluated. The inquiry then proposes a possible RL-based control system, utilising the Q-learning algorithm with an adaptive RBF network providing function approximation.
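To give a rough sense of the shape of such a controller, the following Python sketch combines Q-learning updates with a linear Gaussian RBF approximator. It uses a fixed grid of RBF centres for brevity, where the thesis describes an adaptive network, and the state definition, action set, toy plant model, and all parameter values are invented for the example rather than taken from the thesis.

```python
import numpy as np

# Sketch only: Q-learning with Gaussian RBF function approximation on a
# toy 1-D tracking task. All states, actions, and gains are placeholders.

rng = np.random.default_rng(0)

# Fixed grid of RBF centres over a normalised 2-D state (error, error rate).
centres = np.array([[x, v] for x in np.linspace(-1, 1, 7)
                           for v in np.linspace(-1, 1, 7)])
sigma = 0.3                                  # RBF width
actions = np.array([-1.0, 0.0, 1.0])         # discrete control efforts
w = np.zeros((len(actions), len(centres)))   # one weight vector per action

def features(s):
    """Gaussian RBF activations for state s."""
    d2 = np.sum((centres - s) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

def q_values(s):
    """Q(s, a) for all actions, linear in the RBF features."""
    return w @ features(s)

alpha, gamma, epsilon, dt = 0.1, 0.95, 0.1, 0.05

s = np.array([1.0, 0.0])                     # initial error and error rate
for step in range(5000):
    phi = features(s)
    # Epsilon-greedy action selection.
    if rng.random() < epsilon:
        a = int(rng.integers(len(actions)))
    else:
        a = int(np.argmax(w @ phi))

    # Toy double-integrator plant: drive the tracking error to zero.
    err, err_dot = s
    err_dot = err_dot + actions[a] * dt
    err = err + err_dot * dt
    s_next = np.clip(np.array([err, err_dot]), -1, 1)
    r = -(err ** 2)                          # quadratic penalty on error

    # Q-learning temporal-difference update on the linear weights.
    td_target = r + gamma * np.max(q_values(s_next))
    td_error = td_target - w[a] @ phi
    w[a] += alpha * td_error * phi
    s = s_next
```

Because the approximator is linear in its weights, the TD update above reduces to a simple gradient step scaled by the feature activations, which is part of what makes RBF networks attractive for on-line learning on limited hardware.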

The operation of this prototype control system is then tested in detail, to determine both the absolute level of performance that can be expected and the effect that tuning critical parameters of the algorithm has on the functioning of the controller. Performance is compared against a conventional PID controller so that the results are useful to a wide audience. Testing considers behaviour in the presence of disturbances and under run-time changes in plant dynamics.
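For reference, the conventional baseline is of the familiar form u = Kp*e + Ki*∫e dt + Kd*de/dt. A minimal discrete-time version is sketched below; the gains, time step, and class interface are illustrative assumptions, not the thesis's actual benchmark implementation.

```python
class PID:
    """Minimal discrete PID controller, sketching the conventional
    baseline. Gains and time step here are placeholder values."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """Return the control effort for one sample period."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Example: pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.05)
#          effort = pid.update(setpoint=1.0, measurement=0.8)
```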

Results show that, given sufficient learning opportunity, an RL-based control system performs as well as a simple PID controller. However, unstable behaviour during learning remains an issue for future analysis.

Additionally, preliminary testing is performed to evaluate the feasibility of implementing RL algorithms in an embedded computing environment, as a general requirement for a MAV flight controller. Whilst the algorithm runs successfully in an embedded context, observations show that further development would be necessary to reduce computation time to a level at which the controller could update quickly enough for a real-time motion control application.

In summary, the study provides a critical assessment of the feasibility of using RL algorithms for motion control tasks, such as MAV flight control. Advantages which merit interest are identified, though practical considerations suggest that, at this stage, such a control system is not a realistic proposition. Avenues which may surmount these challenges are discussed. This investigation will prove useful to engineers interested in the opportunities which reinforcement learning techniques represent.

Keywords
MAV, Micro Aerial Vehicle, UAV, Unmanned Aerial Vehicle, UAS, Unmanned Aerial System, Flight Control System, Machine Learning, Reinforcement Learning
Rights
Copyright Edwin Laurie Hayes