Essays on meta-regression analysis: two empirical studies on tax and economic growth and a simulation.
Degree Grantor: University of Canterbury
Degree Name: Doctor of Philosophy
This thesis consists of three essays linked by the methodology of meta-regression analysis. The first two essays address a long-standing question of interest to both economists and policymakers: whether taxes exert an important influence on economic growth and, if they do, how large this effect might be. To answer this question, I study two different settings: the first involves OECD countries, and the second U.S. states. The last essay studies the performance of the FAT-PET-PEESE (FPP) procedure, a commonly employed approach for addressing publication selection bias in meta-regression analyses in economics and business.
In my first meta-regression analysis, I combine results from 42 studies containing 713 comparable estimates, all of which endeavour to estimate the effect of taxes on economic growth in OECD countries (Chapter 2). I then switch from an institutionally and culturally diverse setting to one with a common set of institutional features: U.S. states. I integrate 966 estimates derived from 29 studies investigating the effect of taxes on economic growth in the U.S. states (Chapter 3). The objective of these two studies is to answer the following questions: (Q1) What is the overall, mean effect of taxes on economic growth? (Q2) Are some taxes (e.g., personal income taxes) more distortionary than others (e.g., value added taxes)? (Q3) Is there any empirical evidence to support the conventional wisdom that “distortionary taxes” used to fund “unproductive expenditures” are especially harmful for economic growth? (Q4) What factors cause researchers to reach different or even contradictory results?

My results for OECD countries suggest that there is publication bias towards negative estimates. Controlling for publication bias, I find that the overall effect of taxes on economic growth is statistically insignificant and negligibly small. An increase in unproductive expenditure funded by distortionary taxes has a significant negative effect on growth. I find weak evidence to support the idea that some taxes are more distortionary than others. Lastly, several factors help explain discrepancies among the reported estimates, such as the estimation method, the type of standard errors, whether the original study was published in a peer-reviewed journal, and the publication date.

The study of taxes in U.S. states yields the following results: estimates in the literature are characterized by statistically significant negative publication bias. Once I control for publication bias, the overall effect is not particularly meaningful, since it lumps together different kinds of tax policies. With respect to particular types of taxes, I could not find enough evidence to support the claim that taxes on labour are more growth-retarding than other types of taxes; evidence regarding other types of taxes is mixed. Finally, as with the results for OECD countries, several factors appear to explain discrepancies among the reported estimates for U.S. states.
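The moderator analysis described above amounts to a precision-weighted meta-regression of reported estimates on study characteristics. The following is a minimal illustrative sketch, not the thesis's actual code or data: the moderator (a peer-review dummy), the true-effect values, and all numerical settings are hypothetical, and the estimates are synthetic.

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares: solve (X'WX) b = X'W y."""
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

rng = np.random.default_rng(0)
n = 500
se = rng.uniform(0.02, 0.2, n)         # reported standard errors (hypothetical)
peer = rng.integers(0, 2, n)           # moderator: peer-reviewed dummy (hypothetical)
true_effect = -0.10 + 0.05 * peer      # assumed moderator shift, for illustration only
est = true_effect + rng.normal(0.0, se)

# Precision-weighted meta-regression: est_i = b0 + b1*peer_i + e_i
X = np.column_stack([np.ones(n), peer])
b = wls(X, est, 1.0 / se**2)           # b[1] measures the moderator's contribution
```

In practice the design matrix would include many such moderators at once (estimation method, type of standard errors, publication date, and so on), with the coefficients indicating how much each study characteristic shifts the reported estimates.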
In Chapter 4, I conduct a Monte Carlo analysis to evaluate the performance of the FPP procedure in detecting and correcting for publication bias. The three main objectives of applying the FPP procedure are: (i) Funnel Asymmetry Testing (FAT), to test whether the sample of estimates is affected by publication selection bias; (ii) Precision Effect Testing (PET), to test whether there is a genuine, non-zero true effect once publication bias is accommodated and corrected; and (iii) the Precision Effect Estimate with Standard Error (PEESE), to obtain an improved estimate of the overall mean effect. I simulate two common types of publication bias: bias against insignificant results and bias against wrong-signed results (according to the associated theory). I run these simulations in three different environments: Fixed Effects, Random Effects, and Panel Random Effects. My findings indicate that the FPP procedure performs well in the basic but unrealistic “Fixed Effects” environment, where there is one true effect and sampling error is the only reason studies produce different estimates. However, once I study its performance in more realistic data environments, where the population effects are heterogeneous both between and within studies, the FPP procedure becomes unreliable for the first two objectives and is less efficient than other estimators when estimating overall mean effects. Further, hypothesis tests about the overall, mean effect cannot be trusted. These results call into question the efficacy of using the FPP procedure to test and correct for publication selection bias in meta-regression analysis studies.
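To make the mechanics concrete: the FAT-PET step regresses reported estimates on their standard errors (weighted by inverse variance), while PEESE replaces the standard error with its square. Below is a minimal, self-contained sketch of one simulation draw in the “Fixed Effects” spirit, not the thesis's actual Monte Carlo design: the true effect, the selection rule, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect = -0.05                     # assumed single "fixed effect" (hypothetical)

# Draw study estimates; keep insignificant ones with only 10% probability,
# a stylized form of selection against statistically insignificant results.
est, se = [], []
while len(est) < 300:
    s = rng.uniform(0.01, 0.15)
    b = true_effect + rng.normal(0.0, s)
    if abs(b / s) > 1.96 or rng.random() < 0.1:
        est.append(b)
        se.append(s)
est, se = np.array(est), np.array(se)

def wls(X, y, w):
    """Weighted least squares: solve (X'WX) b = X'W y."""
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

w = 1.0 / se**2
ones = np.ones_like(se)
b_fatpet = wls(np.column_stack([ones, se]), est, w)     # est_i = b0 + b1*SE_i
b_peese  = wls(np.column_stack([ones, se**2]), est, w)  # est_i = b0 + b1*SE_i^2
```

In the first fit, a slope significantly different from zero signals funnel asymmetry (FAT), and a non-zero intercept signals a genuine effect beyond selection (PET); the PEESE intercept then serves as the bias-corrected estimate of the overall mean effect, which under selection should sit closer to the true effect than the naive average of the published estimates.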