Three studies in numerical methods for statistical approximations. (2003)
Author: Ware, Robert Stuart
The explosive advance of numerical analysis techniques in recent years has paralleled the rapid growth in the power and accessibility of computing. This is not a coincidence. Many techniques that had previously been purely theoretical can now be applied. The best known of these is probably the Metropolis-Hastings algorithm which, although conceived in 1953, has only recently become practically applicable. Numerical methods are required because it is not always possible to derive explicit probabilistic models and analytically compute their associated estimators. Two major classes of numerical problems arise in statistical inference: optimisation problems and integration problems. Optimisation problems involve determining the 'best' solutions to mathematically defined problems. Integration problems involve obtaining a numerical approximation of an integral when the integral cannot be found explicitly. Optimisation is generally associated with the likelihood approach and integration with the Bayesian approach, although these are not strict classifications. Bootstrap methods, for example, are concerned with the integration of marginal distributions, but are not Bayesian methods. The statistical techniques with which we shall primarily be concerned are Bayesian methods and the inferences that can be drawn from their use. The approaches we shall focus on are customarily associated with integration problems. In two of the three parts of this Thesis we shall focus on the fact that continuous statistical models are only ever approximations to measurement processes that are necessarily discrete. Numerical integration procedures provide almost unlimited scope for realistic statistical modelling. Until recently, acknowledging the full complexity and structure of many statistical problems was difficult, and often resulted in the development of specific methodology and purpose-built software. 
The alternative was to formulate the problem in the often over-simple framework of an available method. Modern integration techniques provide a unifying framework within which many complex problems can be analysed using standard computer programs. Recent numerical developments have unified the work of researchers across all branches of applied statistics. Because traditional methods of analysis are not readily adaptable to all settings, researchers in individual disciplines have often developed original approaches to model fitting that are customised for their own problems. For example, the Metropolis-Hastings algorithm originated in the field of statistical mechanics. This Thesis is split into three main Chapters, each concerned with some branch of numerical approximation. Chapter 2 considers how to calculate the probability that the sum of products of variables assessed with a Normal distribution is negative. The analysis was motivated by a specific problem in electrical engineering. To resolve the problem, two distinct steps are required. First, we consider ways in which we can assess the distribution for the product of two Normally distributed variables. Three different methods are compared: a numerical approximation, which involves implementing a numerical integration procedure in MATLAB; a Monte Carlo construction; and an approximation to the analytic result using the Normal distribution. The second step considers how to assess the distribution for the sum of products of two Normally distributed variables by applying the Convolution Formula. To conclude Chapter 2 the two steps are combined to compute the distribution for the sum of products of Normally distributed variables, and thus to calculate the probability that this sum of products is negative. The problem is also approached directly, using a Monte Carlo approximation. 
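The direct Monte Carlo approach to the Chapter 2 problem can be sketched in a few lines. The sketch below is an illustration in Python rather than MATLAB, and the means, standard deviations and number of terms are hypothetical placeholders, not the values from the engineering application; the estimate is simply the proportion of simulated sums of products that fall below zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for illustration only; the Thesis uses
# values from a specific electrical-engineering problem.
mu_a, sigma_a = 1.0, 0.5
mu_b, sigma_b = 0.8, 0.4
n_terms = 5          # number of products in the sum
n_sims = 100_000     # Monte Carlo replications

# Simulate all A_j and B_j, form the products, and sum each row.
a = rng.normal(mu_a, sigma_a, size=(n_sims, n_terms))
b = rng.normal(mu_b, sigma_b, size=(n_sims, n_terms))
s = (a * b).sum(axis=1)          # one sum of products per replication

# Monte Carlo estimate of P(sum of products < 0).
p_negative = (s < 0).mean()
print(f"Estimated P(sum of products < 0) = {p_negative:.4f}")
```

With these placeholder parameters the sum has a comfortably positive mean, so the estimated probability is small; the same code applies unchanged to any choice of means and standard deviations.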
Chapter 3 investigates how well continuous conjugate theory can approximate real discrete mass functions in various measurement settings. All statistical measurements that represent the values of useful unknown quantities have a realm that is both finite and discrete. Thus our uncertainties about any measurement can be represented by discrete probability mass functions. Nonetheless, common statistical practice treats probability distributions as representable by continuous densities or mixture densities. Many statistical problems involve the analysis of sequences of observations that the researcher regards exchangeably. Often we wish to find a joint probability mass function over X_1, X_2, ..., X_n, with interim interest in the sequence of updated probability mass functions f(x_{i+1} | X_i = x_i) for i = 1, 2, ..., n - 1. We examine how well digital Normal mass functions and digital parametric mixtures are approximated by continuous Mixture Normal and Normal-Gamma Mixture Normal distributions for such quantities as E(X_{i+1} | X_i = x_i) and V(X_{i+1} | X_i = x_i). Digital mass functions are generated by specifying a finite realm of measurements for a quantity of interest, finding the density value of some specified function at each point of the realm, and then normalising these densities over the realm to generate mass values. Both a digitised prior mixing mass function and a digitised information transfer function are generated and used, via Bayes' Theorem, to compute posterior mass functions. Approximating posterior densities computed using continuous conjugate theory are also evaluated, and the two sets of results compared. The main achievement of this Chapter is to formalise a computing strategy that can be applied to many functional forms. An example is provided in the next Chapter. In Chapter 4 different approaches to flood frequency analysis are considered, with particular emphasis on estimating extreme hydrological events for a site, or group of sites. 
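The digitisation procedure described above can be sketched as follows. The realm, prior parameters and observation below are hypothetical choices for illustration; the two grids stand in for the digitised prior and the digitised information transfer function, combined point by point via Bayes' Theorem. In this conjugate Normal-Normal case the discrete posterior moments can be checked against the exact continuous results.

```python
import numpy as np

# Hypothetical finite, discrete realm of measurements.
realm = np.arange(-10.0, 10.01, 0.1)

def normal_pdf(x, mu, sigma):
    """Normal density, evaluated pointwise."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def digitise(density_vals):
    """Normalise density values over the realm into mass values."""
    return density_vals / density_vals.sum()

# Digital Normal prior mass function for an unknown mean (prior N(0, 2^2)).
prior = digitise(normal_pdf(realm, 0.0, 2.0))

# Digitised information transfer function: likelihood of an observation
# x = 1.3 at each candidate mean, with observation sd assumed to be 1.
x_obs = 1.3
likelihood = normal_pdf(x_obs, realm, 1.0)

# Bayes' Theorem on the grid: posterior mass proportional to prior x likelihood.
posterior = digitise(prior * likelihood)

post_mean = (realm * posterior).sum()
post_var = ((realm - post_mean) ** 2 * posterior).sum()
print(post_mean, post_var)
```

Continuous conjugate theory gives an exact posterior mean of 1.04 and variance of 0.8 here; with a grid spacing of 0.1 the digital moments agree closely, which is the kind of comparison Chapter 3 carries out for richer mixture forms.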
Flood risk has been the topic of a considerable number of publications over the last twenty years, yet there is still no consensus on how best to proceed. The problem is complicated by the need to estimate flood risk for return periods that exceed the length of the observed record. Consequently, much research has focused on methods emphasising data pooling. Chapter 4 begins with an examination of different frequentist approaches to flood estimation. We study at-site and regional estimates, and compare their accuracy and precision. Next, we assess flood exceedance quantiles using updated mixture mass functions as sequential forecasting distributions. These sequential forecasts are scored using three different scoring rules for distributions: the quadratic, the logarithmic and the spherical. The digital updating procedure is based on the work developed in Chapter 3. Both the frequentist methods and the digital forecasting procedures are applied to data collected from the Waimakariri River in Canterbury, New Zealand. We complete the Chapter by comparing the appropriateness of the frequentist and digital methods. It is found that the mixture distributions computed via the discrete digital method provide much more uniform forecasts across an array of proposed distribution families than do the frequentist forecasting methods. Before proceeding to the main body of work, we shall briefly introduce three different categories of numerical integration algorithms: non-Monte Carlo methods, non-iterative Monte Carlo methods and iterative Monte Carlo methods. Finally, the application of the different methods of numerical approximation to the work contained in this Thesis will be discussed. Numerical integration algorithms approximate a posterior distribution, or the generation of random variables from it, when this distribution cannot be computed directly. Non-Monte Carlo methods of numerical integration consist of algorithms based on Simpson's method. 
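The three scoring rules named above have standard forms for a discrete forecast mass function f evaluated at the outcome x that occurred: quadratic Q = 2 f(x) - sum_j f(j)^2, logarithmic L = log f(x), and spherical S = f(x) / (sum_j f(j)^2)^(1/2). A minimal sketch, with a hypothetical five-outcome forecast rather than any data from the Thesis:

```python
import numpy as np

def quadratic_score(mass, idx):
    # Q(f, x) = 2 f(x) - sum_j f(j)^2
    return 2 * mass[idx] - np.sum(mass ** 2)

def logarithmic_score(mass, idx):
    # L(f, x) = log f(x)
    return np.log(mass[idx])

def spherical_score(mass, idx):
    # S(f, x) = f(x) / sqrt(sum_j f(j)^2)
    return mass[idx] / np.sqrt(np.sum(mass ** 2))

# Hypothetical forecast mass function over 5 outcomes; outcome 2 occurred.
f = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
observed = 2
print(quadratic_score(f, observed))    # 2(0.4) - 0.26 = 0.54
print(logarithmic_score(f, observed))  # log 0.4, about -0.916
print(spherical_score(f, observed))    # 0.4 / sqrt(0.26), about 0.784
```

All three rules are proper, so a forecaster maximises the expected score by announcing the mass function that genuinely represents their uncertainty; sequential forecasts are then compared by accumulating scores over the record.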
They do not require the input of a stream of (pseudo-)random numbers. Whereas algorithms based on Simpson's method evaluate a function at a sequence of equally spaced points, Monte Carlo methods perform numerical integration by repeated random simulation. Non-iterative Monte Carlo methods, also known as traditional Monte Carlo methods, are algorithms that require a stream of (pseudo-)random numbers as input and produce a sample from the posterior density as output. Examples include importance sampling and accept-reject methods. Iterative Monte Carlo methods, or Markov chain Monte Carlo (MCMC) methods, are algorithms that require a random input stream and also require iteration to realise a sample from the posterior distribution of interest. Examples include the Gibbs sampling algorithm and the Metropolis-Hastings (M-H) algorithm. Before we consider the differences between these three categories, we shall briefly introduce the Bayesian paradigm, illustrating the vital role of integration.
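The three categories can be contrasted on a toy problem. The sketch below is a hypothetical illustration, not an example from the Thesis: it estimates the mean of a Beta(3, 4) posterior known only up to a normalising constant, using Simpson's rule (non-Monte Carlo), importance sampling with a Uniform proposal (non-iterative Monte Carlo) and a random-walk Metropolis-Hastings sampler (iterative Monte Carlo).

```python
import numpy as np

rng = np.random.default_rng(1)

# Unnormalised posterior: a Beta(3, 4) kernel, chosen for illustration.
def g(theta):
    return np.where((theta > 0) & (theta < 1), theta**2 * (1 - theta)**3, 0.0)

# 1. Non-Monte Carlo: composite Simpson's rule on equally spaced points.
grid = np.linspace(0.0, 1.0, 201)
h = grid[1] - grid[0]
def simpson(vals):
    return h / 3 * (vals[0] + vals[-1] + 4 * vals[1:-1:2].sum() + 2 * vals[2:-1:2].sum())
mean_simpson = simpson(grid * g(grid)) / simpson(g(grid))

# 2. Non-iterative Monte Carlo: self-normalised importance sampling,
#    Uniform(0, 1) proposal (proposal density is 1, so weights are g itself).
draws = rng.uniform(0, 1, 50_000)
w = g(draws)
mean_is = (w * draws).sum() / w.sum()

# 3. Iterative Monte Carlo: random-walk Metropolis-Hastings.
chain = np.empty(50_000)
theta = 0.5
for i in range(chain.size):
    prop = theta + rng.normal(0, 0.2)
    if rng.uniform() < g(prop) / g(theta):  # accept with prob min(1, ratio)
        theta = prop
    chain[i] = theta
mean_mh = chain[5_000:].mean()              # discard burn-in

print(mean_simpson, mean_is, mean_mh)       # all near the exact mean 3/7
```

The deterministic Simpson estimate is essentially exact here, while the two Monte Carlo estimates carry simulation noise; the M-H chain additionally needs a burn-in period and mixes via dependent draws, which is the practical price of its much wider applicability in high dimensions.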