Global optimization: theory versus practice

Author:
Date: 1997
Permanent Link: http://hdl.handle.net/10092/8430
Thesis Discipline: Mathematics
Degree Grantor: University of Canterbury
Degree Level: Doctoral
Degree Name: Doctor of Philosophy

This thesis looks at some theoretical and practical aspects of global optimization; as we shall see, they do not always coincide.

Chapter 1 defines the global optimization problem, discusses applications (concluding there are fewer than often claimed) and presents a survey of algorithms. A simple deterministic analogue of PAS, a theoretical stochastic algorithm known to have good convergence properties, is presented. The often-claimed minimax optimality of the Piyavskii-Shubert algorithm is shown not to hold for the first few iterations. The counter-example given also applies to Mladineo's algorithm.

Chapter 2 concentrates on theoretical results for global optimization algorithms. The results show that, for both deterministic and stochastic algorithms, global information about the function is necessary to solve the global optimization problem.

Chapter 3 introduces interval arithmetic as a tool for global optimization. A simpler and slightly more general proof of the convergence of the natural inclusion function than appears in the literature is provided. Interval arithmetic is generalised to apply to new classes of subdomains and to take account of the structure of the function's expression. Examples show that generalised interval arithmetic can lead to dramatic improvements in inclusions and in global optimization algorithms.

Chapter 4 defines interval and bounding Hessians. The main result provides an optimal method of obtaining optimal (in two different senses) bounding Hessians from interval Hessians. Examples demonstrate the usefulness of bounding Hessians to global optimization.

Chapter 5 brings together the theoretical results of the previous chapters in a new adaptive second derivative branch and bound algorithm. First, it presents a summary of the branch and bound framework and discusses the algorithms of Baritompa and Cutler.
A counter-example shows that one of Baritompa and Cutler's algorithms is not valid in general, and restricted sufficient conditions under which it is valid are given. The new algorithm is based (somewhat loosely, in its final form) on Baritompa and Cutler's algorithms in a branch and bound framework. It achieves, for the first time, a cubic order of convergence in the bounding rule of a global optimization algorithm. Theoretical implications of a cubic order of convergence are also presented.

Chapter 6 presents the results of testing an implementation of the new algorithm and its variations. Four different bounding rules, three using adaptive second derivatives, are compared on 29 test functions. Conclusions are presented in the final chapter.
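To make the general setting concrete, the kind of interval-arithmetic bounding that interval branch and bound methods rest on can be sketched as follows. This is a minimal illustration, not the thesis's algorithm: the `Interval` class, the test function `f`, and `branch_and_bound` are hypothetical names introduced here, the bounding rule is the plain natural inclusion rather than an adaptive second derivative rule, and exact floating-point arithmetic is assumed (a rigorous implementation would use outward rounding).

```python
# Sketch only: interval arithmetic and a naive interval branch and bound,
# assuming exact arithmetic (no outward rounding) on a one-dimensional box.

class Interval:
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (lo if hi is None else hi)

    def __add__(self, o):
        o = o if isinstance(o, Interval) else Interval(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        o = o if isinstance(o, Interval) else Interval(o)
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        o = o if isinstance(o, Interval) else Interval(o)
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

    __radd__ = __add__
    __rmul__ = __mul__

def f(x):
    # Works on floats and on Intervals: the interval version is the natural
    # inclusion function, obtained by replacing each real operation with its
    # interval counterpart.  Global minimum is 1, attained at x = +/- sqrt(2).
    return x * x * x * x - 4 * (x * x) + 5

def branch_and_bound(lo, hi, tol=1e-6):
    # Bounding rule: f(Interval(a, b)).lo underestimates min f on [a, b], so
    # a box whose lower bound exceeds the best value found so far is pruned.
    boxes = [(lo, hi)]
    best = f((lo + hi) / 2)           # upper bound from a sample point
    while boxes:
        a, b = boxes.pop()
        if f(Interval(a, b)).lo > best:
            continue                  # box cannot contain the global minimum
        m = (a + b) / 2
        best = min(best, f(m))        # improve the upper bound
        if b - a > tol:
            boxes += [(a, m), (m, b)]  # branch: bisect the box
    return best

print(branch_and_bound(-3.0, 3.0))    # close to 1.0, the global minimum
```

The looser the inclusion, the slower the pruning; replacing this first-order bound with tighter second derivative information is precisely the kind of improvement to the bounding rule that raises the order of convergence.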