Engineering: Theses and Dissertations

Recent Submissions

Now showing 1 - 20 of 3029
  • Item (Open Access)
    Computer simulation and controllability studies of multi-module ultrafiltration plants
    (1996) Winchester, James
    The operation and control of whey protein concentrating ultrafiltration (UF) plants can, at times, be difficult. These difficulties can make plants less efficient through losses in product quality and throughput. With the general growth of the dairy industry and the consequent increase in whey protein concentrate production, it is important to solve these operation and control problems. This thesis presents work investigating them. The work can be divided into three areas: first, the development of suitable models for describing UF whey concentrating plants; second, the use of these models to study the operation and control of UF plants in order to solve the problems mentioned above; and third, the study of the UF plant process from a state-space perspective to determine the underlying reasons for the results found. The model developed for the simulation of UF plants was found to be qualitatively correct and applicable to the modelling of these plants. It showed that the control of UF plants is very closely linked with their operation, the main effect being the diafiltration ratios used in operating the plant. The results found for the operation and control of UF plants were confirmed and explained using state-space analysis. The conclusions for the operation and control of UF plants are presented as an itemised list in section 5.5 of chapter 5.
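    As a minimal illustration of the kind of plant modelling described here, the sketch below simulates solute wash-out during constant-volume diafiltration, the operating step whose ratios the thesis identifies as the main effect. It is a toy mass balance under assumed rejection coefficients, not the thesis's model; the rejection values, flux, volume, and concentrations are illustrative.

```python
def diafiltration_step(c_protein, c_lactose, V, J_v, R_p=1.0, R_l=0.0, dt=1.0):
    """One explicit Euler step of a constant-volume diafiltration mass balance.

    Diafiltration water is added at the permeate rate J_v (L/min), so the
    retentate volume V (L) stays constant; each solute leaves through the
    membrane according to its sieving coefficient (1 - R)."""
    dc_p = -(J_v / V) * (1.0 - R_p) * c_protein  # protein fully retained when R_p = 1
    dc_l = -(J_v / V) * (1.0 - R_l) * c_lactose  # lactose freely washed out when R_l = 0
    return c_protein + dt * dc_p, c_lactose + dt * dc_l

# Wash-out over 60 minutes at a fixed flux (all values illustrative)
c_p, c_l = 3.0, 4.5  # g/L
for _ in range(60):
    c_p, c_l = diafiltration_step(c_p, c_l, V=100.0, J_v=2.0)
print(f"protein {c_p:.2f} g/L, lactose {c_l:.2f} g/L")
```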
  • Item (Open Access)
    Mathematical aspects of phylogenetic diversity measures
    (2024) Manson, Kerry
    Phylogenetic diversity (PD) is a popular measure of biodiversity, with particular applications to conservation management. It brings a focus to the evolutionary relationships between species that is missing in simpler approaches, such as the use of species richness (SR). PD attains this focus by considering species in terms of their positions on a phylogenetic tree. The mathematical properties of PD, and a suite of methods derived from it, have been studied since its introduction in the early 1990s. In this thesis, we explore these properties further, covering three main aspects of PD-related studies. The first strand of the thesis covers the study and comparison of the PD values of sets of a fixed size. We use combinatorial and algorithmic approaches to understand those sets of species that obtain the extreme PD scores for sets of their size. A combinatorial characterisation of maximum PD sets is provided. This leads to a polynomial-time algorithm for calculating the number of maximum PD sets of each size by applying a generating function. We then use this characterisation to maximise a linear function on the leaves of a phylogenetic tree, subject to the solution being a maximum PD set. Additionally, dynamic programming is used to find solutions to the dual problem, determining minimum PD sets of each size. The second strand involves phylogenetic diversity indices, a type of function that partitions the PD of a set of species among its constituent members. We give a formal definition of this class of function, and investigate the properties of functions in this class. This process is aided by a description of diversity indices as points within a convex space, whose dimension and extremal points we describe. In particular, we show that rankings derived from these measures are susceptible to being disrupted by the extinction of some of the species being measured. We introduce a number of new measures that avoid this disruption to a greater extent than existing approaches. The third strand deals with the link between PD and feature diversity (FD), another means of measuring biodiversity. We provide models for the evolution of features on phylogenetic trees that account for loss of features, such as the loss of flight in some bird species. Doing so leads to results showing that PD is an imperfect proxy for FD unless feature loss is (unrealistically) ignored. We show how our new measure, EvoHeritage, spans a continuum that connects PD and SR at the extremes, based on the rate of assumed feature loss. The distinct parts of this thesis are linked by an aim to better understand what is meant by the concept of biodiversity and to investigate how that understanding is reflected in the way that we measure this idea. We provide a mathematical approach, complemented by a number of algorithms that enable these ideas to be put into practice.
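    For readers unfamiliar with the measure itself, the sketch below computes the PD score of a species subset under the rooted convention (the total length of all branches lying on a path from a member of the subset to the root). It illustrates only the definition, not the combinatorial characterisation or dynamic programming the thesis develops; the toy tree and branch lengths are invented.

```python
def rooted_pd(parent, length, subset):
    """Rooted phylogenetic diversity of `subset`: the summed length of every
    edge on a path from some member of the subset up to the root.

    parent: dict child -> parent (the root maps to None)
    length: dict child -> length of the edge directly above `child`"""
    edges = set()
    for leaf in subset:
        node = leaf
        while parent[node] is not None:
            if node in edges:
                break              # the rest of this path is already counted
            edges.add(node)
            node = parent[node]
    return sum(length[e] for e in edges)

# Toy tree: root -> A, root -> B; B -> x, B -> y
parent = {"A": "root", "B": "root", "x": "B", "y": "B", "root": None}
length = {"A": 3.0, "B": 1.0, "x": 2.0, "y": 2.0}
print(rooted_pd(parent, length, {"A", "x"}))  # 3.0 + 1.0 + 2.0 = 6.0
```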
  • Item (Open Access)
    Enhancing approaches to glycaemic modelling and parameter identification.
    (2023) McHugh, Alexander Declan
    Diabetes mellitus is a metabolic disease involving degradation of the body's endogenous mechanisms for regulating the concentration of glucose in blood plasma. In a healthy body, the pancreas produces insulin, a hormone which facilitates uptake of glucose into cells for use as energy. Where the pancreas's insulin-producing cells are damaged (type 1 diabetes), or the body becomes severely resistant to insulin (type 2 diabetes), glucose plasma concentrations can reach harmful levels, a state labelled hyperglycaemia. Complications arising from diabetes and hyperglycaemia can cause organ failure, neuropathy, and, in severe cases, death. Diabetes treatment presents a significant health, social, and economic cost. In 2021, its total cost to New Zealand was NZ$2.1 billion, equivalent to 0.67% of gross domestic product for that year, a figure which is forecast to grow. Causes of diabetes include a variety of genetic and lifestyle factors, but its increasing prevalence is primarily due to increasingly energy-rich diets combined with more sedentary lifestyles in recent decades. Many treatment approaches and diagnostic tests have been developed to assess and manage diabetes and its common co-morbidities. Treatment of fully progressed diabetes often involves insulin therapy, where analogues of human insulin are externally administered to replace or supplement poor endogenous action. Primarily, diagnostic tests aim to measure some metabolic aspect of an individual's glycaemic system and relate this to the development of diabetes or its precursors. Specific diagnostic metrics of interest include insulin sensitivity, how much plasma glucose levels change per unit of insulin, and endogenous secretion, the rate at which the pancreas produces insulin. Developments in computation over the last 50 years have enabled the creation of computer-simulated numerical models to represent human physiology. Glycaemic models such as the Intensive Control Insulin-Nutrition-Glucose (ICING) model use a compartment model to numerically simulate concentrations of glucose, insulin, and other quantities in various components of the overall glycaemic system. This model is well validated in many experimental, clinical, and in-silico analyses, where it has been used for glycaemic control, assessments of insulin sensitivity and kinetics, and other metabolic treatments. However, it still presents various opportunities for improvement, especially regarding simplified modelling assumptions and identifiability of its parameters. This thesis explores various modelling and identifiability aspects of these models, and potential applications to novel phenomena in glycaemic control and diabetes care. First, the existing ICING model is applied to two glucose tolerance test (GTT) trials, one with a small 1 U insulin modification and one with no insulin modification. Primarily, the trials were compared for the practical identifiability of insulin kinetic parameters representing hepatic clearances from plasma. Where a modelling approach is highly practically identifiable, clinical data provides unique, physiologically valid parameter values which produce computational simulations accurate to measurements. Where practical identifiability is poor, many combinations of parameter values, including non-physiological values, can provide an equivalently optimal fit to data, limiting the relevance and conclusions of the modelling approach.
Identifiability analysis shows the trial with insulin modification yielded a domain of parameter values providing equivalently optimal fits which was 4.7 times smaller than the domain generated by the non-modified trial. This outcome shows insulin modification improves accurate assessment of subject-specific insulin kinetics and simulation, and suggests modification should be included in metabolic tests where accurate assessment of kinetics is a priority. Mathematical analysis of model equations, combined with practical numerical analysis, suggests parameter identifiability is improved where the time profiles of parameter coefficients are distinct from each other and from other model equation terms. To this end, potential modelling benefits are posed by the expansion of previously constant parameters into more complex profiles which may more completely represent their physiological action. A parameter representing hepatic clearance rate was re-structured as a time-varying sum of mathematical basis splines to explore this approach, where parameter identification techniques identify optimal weightings of individual splines in the structure. To maintain physiological validity, this identification was constrained based on literature analysis of changes in hepatic clearance rate over time, and of the correlation between increasing hepatic clearance rate and decreasing glucose concentration. This more complex model achieves better insulin fits at the expense of greater computation and constraint requirements. Improvements to existing parameter structure and identification approaches enable novel model expansions. Specifically, a potential loss dynamic reported for subcutaneous jet-injection insulin delivery can be modelled and identified by an expanded model structure, where this loss is not directly measurable nor quantified in a clinical setting. Thus, a new parameter is added to the existing model to represent the proportion of nominal insulin delivery lost at the injection site. Where this dynamic possesses similar action to existing identified parameters, the enhanced time-varying structure allows robust per-trial identification of this novel parameter while maintaining or improving insulin simulation accuracy. This approach identified loss values of up to approximately 20% of a nominal 2 U dose in some patients. This identified loss proportion is consistent over a range of parameter identification protocols and values, demonstrating the robustness of this identification. Additionally, identified insulin sensitivity is shown not to vary significantly with identification of this loss factor, further validating that factor's inclusion in the model and identification approach. Where modelling considered this potential loss and identified a loss of 5% or above, insulin fit accuracy is improved compared to the original case. Overall, this loss factor is shown to be present and quantifiable, and the model is shown to be expandable to novel loss dynamics without compromising fit accuracy or parameter identifiability. The time-varying basis spline approach to modelling hepatic clearance is further justified and strengthened with an appropriate sensitivity analysis. While the previous analysis considers the general improvement gained from time-varying parameter structures, an optimal configuration of spline quantity, placement, and polynomial order is desired.
For the analysed data, 20 equidistant splines of 2nd or 3rd order yielded equally optimal results and reduced the RMS error of insulin simulations by up to 29% relative to a constant-value parameter model. Importantly, these modelling improvements required no change to measurement or trial protocol and impacted identified insulin sensitivity by less than 0.05% at the optimum configuration. These results further validate the analysis of subcutaneous jet-injection loss, finding a greater loss was identified with a more accurate time-varying clearance model, and that this loss was invariant to changes in configuration. These outcomes, while related to a single dataset and modelling case, demonstrate a methodology to justify and analyse model modifications. Overall, these analyses present various areas of improvement and development for glycaemic modelling. More accurate model predictions and parameter identification yield more precise assessment of diagnostic parameters and individual diabetic pathogenesis. Furthermore, the model's expandability and its robustness in parameter identifiability allow novel treatments to be explored, analysed, and justified using numerical modelling techniques. While error, variability, and parameter trade-off will always pose challenges to modelling, the analyses and methodologies in this thesis ensure relevant and accurate modelling outcomes.
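    The time-varying parameter structure reported above can be sketched concretely: below, a hepatic-clearance-style profile is assembled as a weighted sum of clamped B-spline basis functions on equidistant knots, in the spirit of the 20-spline configuration the abstract describes. This is a generic construction with scipy, not the thesis's identification code; the weights, trial duration, and spline degree are placeholders.

```python
import numpy as np
from scipy.interpolate import BSpline

def time_varying_parameter(weights, t_start, t_end, degree=3):
    """Build a parameter profile p(t) as a weighted sum of B-spline basis
    functions with equidistant, clamped knots over [t_start, t_end]."""
    k, n = degree, len(weights)
    inner = np.linspace(t_start, t_end, n - k + 1)
    knots = np.concatenate([[t_start] * k, inner, [t_end] * k])  # n + k + 1 knots
    return BSpline(knots, np.asarray(weights, float), k)

# 20 equidistant cubic splines over a 480-minute trial, illustrative weights
w = np.linspace(0.1, 0.3, 20)
n_L = time_varying_parameter(w, 0.0, 480.0)
print(n_L(0.0), n_L(240.0), n_L(480.0))
```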
  • Item (Open Access)
    Utilizing augmented reality for attention guidance in outdoor cultural heritage storytelling.
    (2024) Ren, Ren
    Cultural heritage institutions have a tradition of storytelling. Storytelling is often conducted through audio guides that require visitors to pay attention to details that are essential for understanding the cultural and historical significance of a heritage site. However, guiding attention to these details using audio alone can be challenging. While various modalities, such as physical, auditory, tactile and social, have been explored, they each present limitations, particularly in the complex environment of outdoor cultural heritage sites. This study explores an alternative method, specifically Augmented Reality (AR), to address these limitations. Through a user study conducted at the Kate Sheppard House involving 30 participants, and employing a within-subjects design, we compared two AR guidance techniques (Virtual Arrow and Green Laser) against a traditional audio-only condition in terms of response time, knowledge retention and user preference. The study results indicate that, although there was no statistical difference in the objective data, trends suggest a potential advantage of AR in enhancing visitors' attention. By integrating subjective feedback with observations, we offer implications for designers of outdoor cultural heritage sites and suggest directions for future research.
  • Item (Open Access)
    Creep ratchetting of Centralloy® G 4852 Micro-R reformer tube alloy.
    (2024) Caughey, Mackenzie
    Creep ratchetting is the accelerated damage experienced by a component under high-temperature (>0.4 Tm) creep conditions and fluctuating stresses. In the methanol production process, thick-walled (~15 mm) steam-methane reformers consist of hundreds of vertical tubes operating at temperatures up to 950 °C and internal pressures of 2000-3500 kPa. Operating in creep conditions, these tubes are often subjected to fluctuating stresses in the form of plant shutdowns, local variations in temperature, and through-wall temperature gradients. As such, premature failure of reformer tubes has been experienced and is of particular interest to Methanex Ltd. NZ. In the current work, a brief analysis of the as-cast and aged microstructures of Centralloy® G 4852 Micro-R reformer tube alloy was completed. Scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS) techniques revealed as-cast microstructural phases consistent with those found in micro-alloyed reformer tube materials in industry and the literature. Analysis of a macro-etched tube cross-section revealed a structure composed entirely of columnar grains, indicating that the equiaxed region had been removed during manufacturing or machining. Analysis of samples aged at 1050 °C for 1000 and 5000 hours revealed coarsening of the primary carbide network. Extensive precipitation and coarsening of smaller cube- or needle-shaped secondary carbides within the austenite matrix was observed. No clear difference in the structure, density, or distribution of precipitates was observed between the 1000-hour and 5000-hour aged samples. As-cast specimens of Centralloy® G 4852 Micro-R were subjected to creep and creep ratchetting (C-CR) testing at 975 °C, stresses of 30, 36, and 42 MPa, and a ratchetting ratio (dwell to hold) of 1:6. At the 30 and 36 MPa stress levels, creep ratchetting tests were completed at various dwell times (20, 30, 40 minutes), with a secondary stress of 6 MPa. The 42 MPa creep ratchetting tests were completed with a dwell time of 30 minutes and varying secondary stresses (2, 4, 6 MPa). These two testing regimes were used to identify the relative effect of elevated stresses in a dwell period on the performance of Centralloy® G 4852 Micro-R. Post-processing and interpretation of this data has been completed. Further analysis of the collected data was completed using Robinson's rule, Larson-Miller data, strain rate equations, the Omega method, the Gurson-Tvergaard-Needleman (GTN) model, and a proposed non-linear algebraic (NLA) equation. Finally, an assessment of the effectiveness of each technique for modelling creep ratchetting behaviour is provided.
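    Of the techniques listed, the Larson-Miller parameter is compact enough to show directly. The sketch below uses the conventional form LMP = T(C + log10 t_r), with temperature in kelvin, rupture life in hours, and the common constant C = 20, to trade temperature against rupture life at fixed stress; the constant and the example values are generic assumptions, not numbers fitted in the thesis.

```python
import math

def larson_miller(T_celsius, t_rupture_h, C=20.0):
    """Larson-Miller parameter LMP = T * (C + log10(t_r)), T in kelvin."""
    return (T_celsius + 273.15) * (C + math.log10(t_rupture_h))

# Equivalent rupture life at a lower temperature for the same LMP (same stress)
lmp = larson_miller(975.0, 1000.0)          # reference condition
t2 = 10 ** (lmp / (950.0 + 273.15) - 20.0)  # hours at 950 degC
print(f"LMP = {lmp:.0f}; equivalent life at 950 degC ~ {t2:.0f} h")
```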
  • Item (Open Access)
    Energy transition of dairy agriculture : scenario analysis and system concept engineering - with case study in Canterbury, New Zealand.
    (2024) Murphy, Samuel James
    Agriculture accounts for 49.2% of gross greenhouse gas (GHG) emissions in New Zealand (NZ). Methane from enteric fermentation makes up 37% of New Zealand's gross emissions, with dairy cattle contributing significantly at 22.7% [1]. In the last three decades, there have been significant increases in both dairy cow numbers and synthetic nitrogen fertiliser use in the Canterbury region. In 1992 there were 50,000 dairy cows in Canterbury and 3.2 kt of nitrogen fertiliser was used on dairy farms; in 2019 there were 1.2 million dairy cows (including milking cows and dry cows) and 57 kt of nitrogen fertiliser was used. The Canterbury dairy agriculture system must urgently reduce its GHG emissions in line with the 1.5 °C limit for global warming while staying within environmental limits and continuing to provide nutrition for a growing global population. This thesis explores opportunities to decarbonise dairy agriculture in Canterbury. It couples transition engineering methods with farm system modelling to evaluate the performance of theoretical farm systems with a variety of GHG mitigation strategies in place. Defining the essential activity of the Canterbury agriculture sector as providing nutrition, rather than necessarily producing milk solids, allows us to explore scenarios where dairy production is downshifted and nutrients are provided by alternative food crops. The thesis begins with a review of literature related to greenhouse gas mitigation strategies in global agriculture and New Zealand dairy agriculture, farm system modelling, regulation of NZ agriculture, and environmental harms associated with dairy farming. Chapter 3 describes how transition engineering methods were used to aid problem definition and the development of research objectives. Chapter 4 characterises the present-day Canterbury dairy agriculture system in terms of production, financial performance and environmental performance. Chapter 5 presents a brief history of dairy agriculture in Canterbury. Chapter 6 describes scenario selection and farm system modelling results. Chapter 7 explores opportunities to transition to a low-emissions Canterbury agriculture sector. Finally, Chapter 8 closes with conclusions and recommendations for further research. The results suggest that there is no technology solution that achieves the deep emissions reduction required without decreasing dairy production. There will need to be a significant reduction of the dairy herd in Canterbury, coupled with increased production of plant-based protein sources. Synthetic nitrogen fertiliser use must also be significantly reduced, along with an increase in organic farming and other practices that improve soil health, biodiversity and ecological outcomes. Increasing production of plant-based crops significantly increases food production per hectare and significantly decreases GHG emissions, both per hectare of farm area and per unit of food production, but with a significant decrease in profitability. The most profitable wheat scenario was organic wheat with electrified farm vehicles and transport. With a carbon price of $165.40, the Organic and Electric Wheat scenario becomes as profitable as the business-as-usual (BAU) scenario with a high proportion of palm kernel extract purchased as supplementary feed. An organic mixed farm system where cow numbers are reduced by 75% decreases emissions by 74%, with only a 25% reduction in profitability compared with the BAU scenario.
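    The carbon-price comparison above amounts to a break-even calculation: the price at which the value of avoided emissions closes the profitability gap with BAU. The sketch below shows that arithmetic on openly hypothetical per-hectare figures; the thesis's actual profits and emissions are not reproduced here.

```python
def breakeven_carbon_price(profit_bau, profit_alt, emissions_bau, emissions_alt):
    """Carbon price p at which the alternative matches BAU profit:
    profit_alt + p * (emissions_bau - emissions_alt) = profit_bau."""
    saved = emissions_bau - emissions_alt
    if saved <= 0:
        raise ValueError("the alternative scenario must save emissions")
    return (profit_bau - profit_alt) / saved

# Hypothetical figures: $/ha profit and t CO2-e/ha emissions
print(f"break-even: ${breakeven_carbon_price(5000.0, 3500.0, 12.0, 3.0):.2f}/t CO2-e")
```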
  • Item (Open Access)
    A study on audience experience of diegetic portals as a scene transition device in cinematic virtual reality.
    (2024) Adams, Eleanor
    This thesis explores the use of portals as a scene transition method in cinematic virtual reality (CVR), and outlines some key characteristics of a portal in this context. A 15-minute CVR prototype experience featuring portal transitions, triggered through interactions with a portkey object, was developed in the Unity game engine. A user study was conducted with this prototype with the aim of qualitatively uncovering dominant subjective perspectives on the portal transition in the CVR context. Two dominant social perspectives, termed Voyagers and Observers, emerged from this sample. The two perspectives are distinct collective viewpoints, and each participant was sorted into one of these groups depending on the answers they gave in the user study. Opinions differed between the two collective groups on some points and coincided on others. The interpreted results provide a flexible best-practice guideline, with four design principles for portal transitions in CVR narratives. This may be utilised by future directors and content producers of CVR experiences, as well as providing insight for researchers in the fields of interactive digital narratives and cinematic virtual reality.
  • Item (Open Access)
    Comparison of macroscopic and microscopic modelling for evaluating bus service reliability
    (2023) Khorasani, Gholamreza
    Public transportation plays a crucial role in people's lives. High-quality public transportation is essential for individual well-being and promotes economic growth and productivity. Importantly, a reliable and efficient bus service can encourage people to opt for public transport over private vehicles, aiding transport authorities in their efforts to reduce road congestion. Bus service reliability is a pivotal factor in determining service quality for passengers. Given bus services' inherent complexity and stochastic behaviour, understanding the factors contributing to service unreliability is of utmost importance for transport authorities. This study adopts a before-and-after approach, focusing on a specific corridor in Christchurch known as Riccarton Road. It reviews the literature on several types of bus performance indicators and employs the coefficient of variation (CoV) to assess the level of improvement in bus service components along this corridor before and after the implementation of bus priority changes. Additionally, the study investigates the presence of correlation between dwell times at successive bus stops, between link times at successive links, and between dwell times and link times. The significance of modelling for transport authorities in this context is invaluable. Accurate models can immensely benefit transport authorities in the planning and monitoring of bus services, allowing them to identify bottlenecks, assess the impact of various interventions such as bus priority measures, and allocate resources more efficiently. By employing both macro-simulation and micro-simulation models, this study aims to equip transport authorities with the tools necessary for maintaining and improving service reliability. Various sources of variation have been identified as contributory factors in bus service performance. Historically, macro-simulation models in existing studies have often ignored link time and dwell time distributions. Furthermore, no study has considered the correlation between link times and dwell times when assessing bus service reliability. This study leverages data from before and after the implementation of bus priority measures to investigate these overlooked aspects. It explores the correlation between various components of bus travel time and how accounting for these correlations can enhance the accuracy of macro-simulation models. The micro-simulation model developed in this study goes further by considering the interaction between buses and other vehicles at bus stops. This helps in comparing the reliability of bus services with and without bus priority measures at stops. The study concludes by offering a comprehensive comparison between micro-simulation and macro-simulation models. This comparison covers various aspects, from the accuracy of the models to the level of effort required to develop them, the skills needed, the level of detail each model offers, and the amount of data each requires. It also delves into their respective applications in the planning and monitoring of bus services, providing a holistic view for transport authorities to make informed decisions.
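    The headline indicator here, the coefficient of variation, is simply the standard deviation scaled by the mean. The sketch below computes it for link times before and after a bus priority change, plus the dwell-link correlation the study investigates; all numbers are invented for illustration.

```python
import numpy as np

def cov(x):
    """Coefficient of variation: dispersion relative to the mean."""
    x = np.asarray(x, float)
    return x.std(ddof=1) / x.mean()

# Illustrative link times (s) on one segment, before vs after bus priority
before = np.array([182.0, 240.0, 205.0, 310.0, 198.0, 265.0, 221.0])
after = np.array([175.0, 190.0, 183.0, 210.0, 178.0, 195.0, 186.0])
print(f"CoV before: {cov(before):.3f}, after: {cov(after):.3f}")

# Correlation between dwell time at a stop and the following link time
dwell = np.array([12.0, 35.0, 20.0, 48.0, 15.0, 30.0, 25.0])
print(f"dwell-link correlation: {np.corrcoef(dwell, before)[0, 1]:.2f}")
```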
  • Item (Open Access)
    Application of an approach to Terrain Stability Assessment in the context of New Zealand Forestry Operations.
    (2024) Smith, Simon
    Terrain Stability Assessment (TSA) as it relates to forest harvesting operations is a process wherein an upcoming harvest area is evaluated for its hazards pertaining to landsliding and mass movement. While the process for TSA has undergone multiple changes with advances in knowledge and technology over time, it is necessary to continue to evaluate such processes with regard to forestry operations to allow for optimal management decisions. This is relevant to current New Zealand conditions, where the increasing frequency of weather events such as Cyclone Gabrielle continues to put pressure on the forest industry to improve its practice in order to maintain its social licence to operate. The Pacific Northwest (PNW) region (British Columbia, Washington, Oregon, and California) shares several similarities with New Zealand forestry in terms of landscape, harvesting methods, and growing crop. As such, they share many of the same issues, one being landform instability. Several of these PNW jurisdictions have developed and tested TSA processes for new harvest plans. Within the scope of this project, these processes were reviewed. They generally involve input from a third party such as a geologist or geotechnical engineer, with prescriptions that include ‘leave areas’ of standing timber, buffer zones around waterways, and alternative methods of road construction (such as minimising side-cast). New technology and methods are also affecting the way this region conducts TSAs. For example, Oregon now utilises modelling derived from LiDAR and ground information to prescribe its ‘leave areas’. Although the region has slightly different environmental management goals and requirements to New Zealand, this review shows that its processes can be learned from. To understand contemporary practice, interviews with nine different management companies in New Zealand highlighted an experience-driven approach to the identification and management of unstable terrain. While there may not exist a definitive ‘written approach’ to assessing terrain stability in New Zealand, many processes were similar across companies in terms of resources used and steps taken. Generally, this involves the utilisation of available LiDAR data (or contour data) to generate maps in GIS showing slope and hillshade, as well as aerial imagery. Sites are then visited and ‘walked’, which involves looking for ‘problem’ features which may have been identified by previous mapping. Ambiguity remains once unstable areas are identified, with respondents making the case for site-specific evaluation and recommendations over blanket, nationwide prescriptions. Various assessment techniques were considered, with a goal of assessing the efficacy of implementing them under NZ steep-slope plantation forestry conditions. Relevant techniques pertaining to the landslide issues faced in NZ forestry were compiled into one TSA process and are demonstrated via six case studies across New Zealand. This method involves three stages: data collection and validation (LiDAR, geological information); modelling (susceptibility via empirical regression, the ‘Spiekermann Model’; morphometric ratios, the ‘Melton Ratio’); and field assessment (structured similarly to an approach from the British Columbia Ministry of Forests and Environment). The process illustrates an approach that a forest management company within New Zealand would be able to replicate to help inform operational decision making.
Overall, the TSA methodology demonstrated in the case studies did a fair job of capturing the landslide hazards specific to each site. Combining initial modelling and a subsequent field visit allowed for the identification and recording of shallow slip and debris flow behaviour – however, some sites presented different erosion considerations. Finally, these limitations are discussed as well as any assumptions made throughout the process, with areas of future study identified.
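    One of the morphometric screens named in the process above, the Melton ratio, is a one-line computation: watershed relief divided by the square root of watershed area. A minimal sketch follows; the example catchment and the debris-flow reading of higher values are generic, not taken from the case studies.

```python
import math

def melton_ratio(relief_m, area_m2):
    """Melton ratio R = watershed relief / sqrt(watershed area); higher values
    are commonly read as more debris-flow prone."""
    return relief_m / math.sqrt(area_m2)

# Illustrative catchment: 450 m of relief over 0.8 km^2
print(f"R = {melton_ratio(450.0, 0.8e6):.2f}")
```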
  • Item (Open Access)
    3D printed monolith adsorption as an alternative to expanded bed adsorption for protein purification.
    (2023) Pei, Yuanjun
    The fluid flow and chromatographic performance of various cellulose 3D-Printed Monolith Adsorption (PMA) columns, designed from a triply periodic minimal surface geometry (the Schoen Gyroid or the Schwarz Diamond 2), were compared in this thesis. The structures examined had designed hydraulic diameters between 203 and 458 μm and voidages of 40-60%, and were functionalized with a quaternary amine anion exchange ligand. The tests covered four aspects: column efficiency, porosity, static binding capacity, and dynamic binding capacity for various load volumes and flow rates. The results show that all Gyroid structures allowed efficient passage of yeast cells (> 97%) over a wide range of interstitial velocities (191 to 1911 cm/h) while maintaining a low pressure drop (< 0.1 MPa). The structure with a voidage of 40% and a hydraulic diameter of 203 μm showed the best performance in all aspects evaluated. Bovine serum albumin (BSA) recoveries for all Gyroid structures (27-91% when the loaded volume was 180 mL) were significantly affected by hydraulic diameter, mean channel wall thickness, velocity and voidage. Two anion exchange PMA columns of different structures and lengths were directly compared with commercial Expanded Bed Adsorption (EBA) columns (Cytiva Streamline Q XL). Compared with a 60% PMA column and the EBA column at 4 and 6 mL/min, whose stationary phase volumes were the same, the 40% PMA column had the highest BSA binding capacity per mL of column volume (CV) after loading a 4 L sample, regardless of the addition of yeast cells (above 85 mg per mL CV with yeast cells), with Height Equivalent to a Theoretical Plate (HETP) values of 0.90-1.13 cm. The PMA columns did not suffer the bed instability issue encountered in EBA and shortened overall operating time compared to that of EBA because of a wider range of possible flow rates. Protein A could be immobilized on PMA columns made of agarose hydrogels via three different methods, and all of these protein A immobilized columns could effectively bind human immunoglobulin G (IgG). When the ligand density was 2-4 mg per mL of gel, a high IgG static binding capacity of over 20 mg per mL of gel could be achieved by using site-specific immobilization with spacers. PMA thus potentially provides an appealing alternative to EBA, retaining the latter's advantages while eliminating fluidisation issues and minimising both processing time and buffer consumption.
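    Column efficiency of the kind quoted above is conventionally estimated from a tracer pulse with the half-height formula N = 5.54 (t_R / w_half)^2 and HETP = L / N. The sketch below applies that formula to invented pulse data; it illustrates the metric, not the thesis's measurements.

```python
def hetp(column_length_cm, t_retention, peak_width_half_height):
    """Height Equivalent to a Theoretical Plate from a tracer pulse:
    N = 5.54 * (t_R / w_half)^2 plates, HETP = L / N."""
    N = 5.54 * (t_retention / peak_width_half_height) ** 2
    return column_length_cm / N

# Invented pulse on a 10 cm column (times in seconds)
print(f"HETP = {hetp(10.0, t_retention=120.0, peak_width_half_height=90.0):.2f} cm")
```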
  • Item (Open Access)
    Heat transfer analysis in pulsed pressure mass transport.
    (2024) Mohamed, Salma Hamdy Radwan
    Pulsed-pressure metal-organic chemical vapour deposition (PPMOCVD) is a technique for depositing thin-film coatings on a heated substrate. It involves controlled pulses of precursor solution injected through an ultrasonic nozzle into an evacuated low-pressure reactor. Heat and mass transfer are crucial in PPMOCVD and significantly influence deposition. Temperature profiles govern heat transfer, impacting chemical reactions and material deposition. Mass transfer is driven by solvent injection, which determines the quality and composition of the deposited material. The primary aim of this research is to extensively explore various aspects of the PPMOCVD process, aiming to enhance understanding and optimize system performance. Critical focus areas include adapting and validating a resistively heated wire technique for use as the heated substrate, analyzing the heat transfer implications of different solvent injections, and assessing the influence of introducing a physical barrier ("shield") on the heat transfer process. The research began by adapting halogen light bulb technology and filaments to develop a precise temperature measurement method. This method was employed to assess wire temperature uniformity under resistive heating using a comprehensive COMSOL model. Additionally, experiments in heated liquid baths facilitated reliable temperature measurements and the determination of the temperature coefficient (α) for halogen light bulb filaments. Further investigation delved into heat transfer dynamics within the PPMOCVD system. Initial measurements were taken to determine the wire temperature range where radiation appeared as the dominant heat transfer mode, a crucial factor in determining wire temperature before pulsing and ensuring measurement consistency. The pulsed heat transfer coefficient (h_p) was the focal point, with detailed calculations and analysis conducted in response to alterations in PPMOCVD parameters, including solvent volume, pulsing duration, and resistively heated wire temperature. Another experiment was conducted to assess the changes in the heat transfer coefficient when varying the injected solvent, focusing on toluene, butanol, and isopropanol due to their favorable vaporization properties. The experimental findings provided valuable insights into various aspects of the PPMOCVD process. Firstly, the investigation into wire temperature ranges elucidated the dominance of heat conduction as the primary mode of heat transfer below 250 °C, shifting to radiation as the dominant mode beyond this threshold. This understanding is crucial for optimizing heat transfer efficiency within the system. Additionally, the experiments assessing changes in the pulsed heat transfer coefficient unveiled significant impacts of altering PPMOCVD parameters. Longer pulsing times and increased solvent volume were associated with reduced heat transfer coefficients, highlighting the importance of parameter optimization for efficient heat transport. Statistical analyses confirmed the significance of pulsing time, injected volume, and mean reactor pressure on the pulsed heat transfer coefficient, providing a comprehensive understanding of the factors influencing heat transfer efficiency in the PPMOCVD system. Furthermore, the investigation into different solvents revealed isopropanol as the most effective in enhancing heat transfer efficiency, emphasizing the role of solvent choice in optimizing system performance.
These findings collectively contribute to a deeper understanding of heat transfer dynamics in PPMOCVD and provide practical insights for enhancing deposition processes in materials science and advanced electronics manufacturing. Overall, this research contributes to a deeper understanding of heat and mass transfer in the PPMOCVD system and provides practical insights for optimizing this crucial technology in materials science, vacuum, and advanced electronics manufacturing.
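    The resistive-thermometry step described above rests on the linear resistance-temperature relation R(T) = R0 * (1 + alpha * (T - T0)). The sketch below inverts it to estimate wire temperature from measured resistance; the alpha used is a generic tungsten-like value for illustration, whereas the thesis determines alpha experimentally for its filaments.

```python
def wire_temperature(R, R0, T0=20.0, alpha=4.5e-3):
    """Invert R(T) = R0 * (1 + alpha * (T - T0)) to estimate wire temperature.
    alpha (1/K) is an illustrative, tungsten-like temperature coefficient."""
    return T0 + (R / R0 - 1.0) / alpha

# A filament reading 1.2 ohm cold (20 degC) and 3.0 ohm when heated
print(f"T ~ {wire_temperature(3.0, 1.2):.0f} degC")
```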
  • Item (Open Access)
    Materials with character : designing with biocomposites.
    (2023) Thundathil, Manu
    The material research world today is fervently pursuing the development of sustainable materials that could substitute for non-renewable and fossil-based materials. Consumers, manufacturers and regulatory agencies favour the adoption of sustainable materials which reduce adverse environmental impacts and provide a sustainable alternative to fast-exhausting natural resources. Biobased composites provide an excellent material substitute, incorporating biobased ingredients and offering material characteristics well suited to consumer products. While general awareness regarding material sustainability has increased, this is not reflected in raw material supply chains or consumer products. Biobased plastics (and biocomposites by extension) still form only a tiny sliver of overall polymer production at present, and most attempts at achieving material sustainability have been primarily limited to incorporating biobased fillers into traditional polymers. The overarching aim of this thesis is to identify pathways to develop high-value sustainable materials like biocomposites, which can address this gap in the market. Hence, the thesis begins with a detailed review of the present status of biocomposites as a material family, and of their advantages and disadvantages. This literature review revealed that while biobased composites provide a compelling material option, they are limited by technical and perceptual handicaps. Most of the research in this field has been aimed at addressing the technical drawbacks and preparing materials that could imitate the technical characteristics of conventional materials. However, perceptual issues such as lack of identity and poor perception of value and aesthetics, which are critical to the success of consumer products, have yet to be explored. Thus, this research attempts to understand the framework of material perception so that the critical material parameters influencing perception are identified; defining these variables and their relationships could help material designers create biobased composites that consumers perceive favourably. The Semantic Differential method was used to understand how people perceive various biobased materials, followed by a ranked-order comparison. This research comprised four consumer perception studies, with the early studies being exploratory. Each study contributed to the subsequent ones by narrowing the survey scope to the key attributes. The first study concentrated only on the role of visual (digital) stimuli in forming perception, and revealed the influence of visual characteristics like fibreness, visual order, contrast, and perceived roughness on material perception. This study also found no effect of age, gender or polymer type on material perception. A set of attribute-attribute relationships which could contribute to a desirable (beautiful, valuable, strong) and distinguishable (natural) perception was also determined. This was followed by a second, visual-tactile (physical) perception study, which was instrumental in revealing the significance of tactility in forming perception. Since most human interaction with objects is in visual-tactile mode, this helped define and identify the critical material properties and their relationships with emotional attributes. Some of these emotional attributes were also found to correlate with others, and the combination of material and attribute correlations led to the creation of a biocomposite perception framework.
This framework portrayed the relationships between various key attributes (beauty, value and naturality) and the material characteristics (visual and tactile) that influenced the potency of these attributes. These results, in contrast with the digital study, were also helpful in understanding the limitations of perceiving objects digitally, especially in an e-commerce context. However, the analysis of these early studies was primarily quantitative, and to examine the robustness of the attribute selection and to probe for additional influences, subsequent studies included qualitative modules. One key finding from the qualitative data was the influence of past material experiences in forming material perception. These qualitative studies clarified the aptness of the elements in the proposed framework, and a final study was conducted to test the robustness of this framework. This validation study included consumers and designers as two distinct groups of respondents. A prediction of biocomposite performance (against the key attributes) based on the proposed framework was also made, and this ranking prediction was compared with the actual perceptual rankings from the consumers. The results showed statistically significant correlations between predicted ranks and actual ranks by consumers, indicating a reliable perception framework. This framework was converted into a prediction model with modified weights for material characteristics, and this produced even stronger correlations, pointing to a dependable model for predicting material perception. Interestingly, there was a perceptual dissonance between consumers and designers, revealing that designers might not accurately assess how consumers perceive materials. This dissonance may be attributed to the designers' familiarity with a wide range of materials and the biases due to their professional practice. Hence, the perception framework can also serve as a tool for assisting designers in material identification and selection. This thesis offers insights into how people perceive biocomposites, as well as into the relationships between various material characteristics and perceptual attributes. This perception framework can be integrated with the conventional product design process as a perception-based material design process. It can help product designers and material scientists collaborate to create biocomposites tailor-made for specific consumer segments or product categories. This approach would lead to perceptually attuned products which offer better consumer experiences, leading to extended product ownership and reduced obsolescence.
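    The validation step described above, comparing framework-predicted rankings with consumers' actual rankings, is the kind of comparison typically run as a rank correlation. The sketch below applies Spearman's rho to hypothetical rank vectors; the data are invented and the choice of statistic is an assumption consistent with, not quoted from, the abstract.

```python
from scipy.stats import spearmanr

# Hypothetical ranks of 8 biocomposite samples: framework prediction vs
# mean consumer ranking (invented numbers)
predicted = [1, 2, 3, 4, 5, 6, 7, 8]
consumer = [2, 1, 3, 5, 4, 6, 8, 7]

rho, p = spearmanr(predicted, consumer)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```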
  • Item (Open Access)
    The sense of copresence in a job interview environment supported by an augmented reality device.
    (2024) Figueroa, Felipe
    Within an organization, personnel recruitment is a critical activity in the human resources strategy. Job interviews are considered one of the most widely used recruitment methods; however, the evolution that this type of method has undergone over time, especially with the use of new technology, has drawn the attention of researchers in the field of human interface technology. One of the disadvantages of e-recruitment, as the new technological tools of the recruitment process are known, is the negative reaction that the use of technological tools can generate in interviewees. Augmented reality (AR) has stood out for its use in the industrial and health fields thanks to its display of virtual information in the real world. If the use of this tool is to spread to other work contexts, such as personnel selection, then, given the adverse responses the technology could provoke in interviewees, the study of these responses becomes relevant. To characterise these responses, exploratory and experimental research was conducted, grouping the participants into two contexts: one under conditions of interaction with AR technology and the other without such support. The results of this research show the ability of participants to perceptually isolate a possibly novel stimulus (the AR headset) and focus their attention on the interview questions. They also evidence the importance of the quality of the rapport sustained between interviewer and interviewee as an element that removes the theoretical barrier that the use of an augmented reality device could pose.
  • Item (Open Access)
    Numerically investigating the conformal field equations near spatial infinity.
    (2024) Markwell, Oliver
    This thesis presents work towards a global numerical evolution of the linearised conformal field equations near spatial infinity. We describe the mathematical framework for the initial-boundary value problem formulation of the conformal field equations. Numerical methods are developed to avoid the singular behaviour of the equations at the critical sets at the top and bottom of the cylinder at spatial infinity. Implicit numerical methods, along with adaptive step sizes, are successfully applied to evolve initial data from past null infinity onto the cylinder. Then, a combination of spatial and temporal evolution equations is used to evolve around the future critical set and extract the data on future null infinity. We also briefly consider the application of these techniques to a fully compactified setting.
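    To make "implicit methods with adaptive step sizes" concrete, the sketch below applies backward Euler with step-doubling error control to a stiff scalar test equation. It is a generic illustration of the technique, not the thesis's scheme for the conformal field equations near the critical sets.

```python
def be_step(y, lam, h):
    """One backward-Euler step for y' = lam * y; for this linear test
    equation the implicit solve reduces to y_new = y / (1 - lam * h)."""
    return y / (1.0 - lam * h)

def adaptive_be(y0, lam, t_end, h=0.1, tol=1e-5):
    """Backward Euler with step-doubling control: one step of size h is
    compared with two steps of size h/2, then h is grown or shrunk."""
    t, y = 0.0, y0
    while t < t_end - 1e-12:
        h = min(h, t_end - t)
        coarse = be_step(y, lam, h)
        fine = be_step(be_step(y, lam, h / 2), lam, h / 2)
        if abs(coarse - fine) <= tol:
            t, y = t + h, fine   # accept the more accurate value
            h *= 1.5             # grow the step cautiously
        else:
            h *= 0.5             # reject and retry with a smaller step
    return y

print(adaptive_be(1.0, lam=-50.0, t_end=1.0))  # exact value: exp(-50)
```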
  • Item (Open Access)
    The effect of thermomechanical processing parameters on the texture of Ti-6Al-4V forgings as a precursor to coarse grain growth.
    (2023) Wiley, Richard
    This thesis explores the relationship between forging parameters (total strain, strain rate, and forging temperature) and the emergence of abnormal grains during subsequent heat treatment of Ti-6Al-4V. Abnormal grain growth can compromise material properties, making its understanding crucial for optimizing manufacturing processes. Through systematic experiments varying the forging temperature between 875 and 975 °C and the strain rate between 0.1 s⁻¹ and 10 s⁻¹, the study reveals how different forging conditions influence the as-forged microstructure through the alteration of the kinetics of flow softening, and how this affects abnormal grain formation. Microstructural analysis through optical and scanning electron microscopy demonstrates correlations between these forging parameters and the occurrence of non-uniform grain distributions. The insights gained offer strategies to mitigate abnormal grain growth during heat treatment, advancing materials science and manufacturing practices. The results show that forging temperature and strain rate impact prior beta grain size differently. Higher forging temperatures at 975 °C lead to larger grain sizes due to increased boundary mobility favouring grain growth. Lower forging temperatures at 875 °C produce more consistent grain sizes due to uniform boundary energies resulting from a weaker texture. For samples forged at 925 °C, varying strain rates cause significant differences in grain size due to the shift from dynamic recovery to dynamic recrystallisation as the primary strain relief mechanism. This shift in mechanisms is also reflected in stress-strain responses, with dynamically recrystallised samples showing higher peak stress and more transience. The dynamic recrystallisation process leads to the nucleation of strain-free grains with distinct orientations, contributing to a weaker-textured material. The study validates that abnormal grain growth is influenced by strain rate and forging temperature, with increased strain strengthening the precursor texture. Lower forging temperatures lead to weaker texture and more normal grain growth. From the findings, it is recommended to forge above critical rates for recrystallisation at the required forging temperature. The research suggests forging above 5 s⁻¹ up to 925 °C or 0.95 Tβ, although this is adjustable to 0.1 s⁻¹ at 875 °C / 0.9 Tβ. Billet size indirectly affects abnormal grain occurrence by affecting the strain rate, with smaller billets displaying uniform grain size after annealing. Thus, changes in the thickness of Ti-6Al-4V parts could cause abnormal grain growth in thicker sections during annealing and should be considered prior to forging.
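    The forging temperatures above are quoted as fractions of the beta transus (Tβ), which works out when the ratio is taken in kelvin. The sketch below reproduces that conversion assuming a typical literature beta transus of roughly 995 °C for Ti-6Al-4V, approximately recovering the 0.9 and 0.95 fractions quoted; the exact Tβ is an assumption, not a value stated in the abstract.

```python
# Express forging temperatures as a fraction of the beta transus, in kelvin.
# T_beta ~ 995 degC is a typical literature value for Ti-6Al-4V (assumed).
T_BETA_K = 995.0 + 273.15

for T_c in (875.0, 925.0, 975.0):
    frac = (T_c + 273.15) / T_BETA_K
    print(f"{T_c:.0f} degC -> {frac:.2f} T_beta")  # about 0.91, 0.94, 0.98
```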
  • Item (Open Access)
    The biofactory : implementing a life cycle sustainability assessment decision making tool for quantifying integral sustainability benefits of the wastewater circular economy in Chile.
    (2023) Furness, Madeline
    The “Biofactory” is a circular economy-based concept for wastewater treatment that improves water quality, promotes efficient use of materials and energy, recovers resources, generates stakeholder collaboration, and decreases both emissions and costs. The concept offers a solution for the global challenge of integrated water and sanitation management. Due to socio-economic bottlenecks, such as typically high costs and low public acceptance of novel resource recovery scenarios in wastewater treatment, realizing the Biofactory goals is a difficult task. Decision makers are currently unable to appreciate the environmental and social benefits of the Biofactory, as most decision-making tools focus mainly on technical and economic aspects. This research is the first to quantify the integral sustainability benefits of co-product recovery of treated effluent, biosolids, biogas and nutrients in two full-scale “Biofactory” wastewater circular economies (WW-CEs) in Chile. Life Cycle Sustainability Assessment (LCSA) was implemented, combining Life Cycle Assessment (LCA), Social Life Cycle Assessment (SLCA) and Life Cycle Costing (LCC) with a Multi-Criteria Decision Making (MCDM) model to quantify integral environmental, socio-cultural, and economic sustainability impacts of two plants, A and B. Three scenarios were considered for each plant (discharge of wastewater without treatment, conventional wastewater treatment with no resource recovery, and the Biofactory wastewater circular economy configuration) to determine whether each plant decreased impacts and which had better performance. LCA results showed Plant A decreased overall environmental impact by 37% compared to the baseline conventional scenario, while Plant B decreased it by 31%. SLCA results showed Plant A decreased social impacts by 56% and Plant B by 18%; Plant A therefore had better overall environmental and social performance. However, Plant B decreased economic impacts by 48%, compared to an increase of 20% in Plant A. When combining scores using the MCDM model, Plant A decreased total sustainability impacts by 30% and Plant B by 58%, so the resource recovery systems implemented in Plant B had better overall sustainability performance. These results were discussed across process contributions to environmental, social, and economic benefits. Model limitations were discussed, and recommendations were made for future applications of this research. The investigation demonstrated that the transition to WW-CEs improved integral sustainability according to the LCSA-MCDM model implemented in both plants. The urgent need to adopt sustainable decision-making models was highlighted and discussed, to not only improve sanitation coverage but also improve the sustainability performance of the sanitation industry across the globe.
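    The MCDM combination step described above can be sketched as a weighted aggregation of normalised criterion scores. The toy below uses a plain weighted sum over the per-pillar percentage changes quoted in the abstract, with illustrative equal weights; the thesis's actual weighting and normalisation scheme is not reproduced here.

```python
import numpy as np

def weighted_sum(scores, weights):
    """Weighted-sum MCDM aggregation: rows are alternatives, columns are
    criteria (more negative = larger impact reduction = better here)."""
    return np.asarray(scores, float) @ np.asarray(weights, float)

# % changes vs the conventional-treatment baseline, per the abstract:
# columns = environmental, social, economic
scores = [
    [-37.0, -56.0, 20.0],   # Plant A
    [-31.0, -18.0, -48.0],  # Plant B
]
print(weighted_sum(scores, [1 / 3, 1 / 3, 1 / 3]))  # equal weights (illustrative)
```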
  • Item (Open Access)
    Teaching dance with mixed reality mirrors : comparing virtual instructors to other forms of visual feedback.
    (2024) Treffer, Anna
    This research aimed to assess whether a virtual instructor and visual feedback combination displayed on a Mixed Reality (MR) mirror can be used to teach a beginner a simple dance routine, replacing traditional instructor-and-mirror methods. A prototype was developed using a camera and projector that displayed a digital mirror image of participants as they learned dances, with the system able to overlay computer graphics onto the image. The camera used to capture the image and motion of the participants was a Microsoft Azure Kinect. Three visual feedback types, developed with input from expert interviews and an online survey, were used as randomized conditions in the user study: Spheres, Rubber Bands, and Arrows. Three simple dance routines were developed, motion captured, and presented in random order in the user study. During the user study, participants learned the dances by following a virtual instructor in the MR mirror (present for each condition), with the MR mirror providing a different form of visual feedback for each dance. After practicing a dance three times with the feedback, participants then performed the dance in front of the MR mirror following the virtual instructor without any feedback, and the system measured the accuracy of their performance by comparing the amount of time that the user's joints, such as shoulders and elbows, were within desired bounds for each pose. Participants filled out an AttrakDiff questionnaire describing their experience with each form of feedback, and gave comparative opinions of the different forms of visual feedback in a final interview. The results showed that participants performed best with the Arrows variant, which provided directional feedback showing depth differences; however, they ranked this variant lowest by preference. The most preferred form of feedback was Spheres, the simplest variant, which provided no guidance toward the correct pose, yet participants performed poorest with it.
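    The accuracy measure described above, time with joints inside desired bounds, can be sketched as a per-frame test. The toy function below counts a frame as correct when every tracked joint is within a distance tolerance of the instructor's pose; the tolerance, array shapes, and data are assumptions for illustration, not the study's implementation.

```python
import numpy as np

def pose_accuracy(user_joints, target_joints, tolerance=0.15):
    """Fraction of frames in which every tracked joint lies within
    `tolerance` metres of the instructor's corresponding joint.

    Both arrays have shape (frames, joints, 3)."""
    dist = np.linalg.norm(user_joints - target_joints, axis=2)  # (frames, joints)
    return (dist <= tolerance).all(axis=1).mean()

# Toy data: two joints tracked over five frames
rng = np.random.default_rng(0)
target = rng.normal(size=(5, 2, 3))
user = target + rng.normal(scale=0.05, size=(5, 2, 3))
print(f"accuracy: {pose_accuracy(user, target):.2f}")
```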
  • Item (Open Access)
    Toward a numerical implementation of Cauchy-characteristic matching
    (2024) Zidich, Jack
    Our modern understanding of gravity was first introduced in the early 20th century by the pioneering work of Albert Einstein, who formulated the General Theory of Relativity. This began in 1905 with him reworking classical notions of space and time: from his two postulates that (1) the laws of electrodynamics and optics hold in all inertial reference frames, and (2) the speed of light in a vacuum is always the same, he showed that measures which were previously held to be absolute are actually relative to a particular observer or coordinate system [26]. While this original theory was only valid in the absence of gravity, and so would later be termed the Special Theory of Relativity, it was soon generalized in 1915, with the publication of the Einstein Field Equations (EFEs) [27] [29], to take this additional layer of complexity into account, and so to be capable in principle of explaining all the phenomena covered by Newtonian mechanics, in addition to providing new testable predictions, such as the perihelion precession of Mercury, on which it was found to be in greater agreement with observations than earlier theories were. In the General Theory, space-time is described by the 10 components of a symmetric 4-dimensional tensor, known as the metric, which describes how distances between points are determined. Gravity is then understood not as a true force, as it is in Newtonian gravity, but using the tools provided by differential geometry, in which gravity can instead be interpreted as a property of the geometry of space-time, which is curved by mass-energy, with particles merely following geodesic paths through this geometry. While the most straightforward gravitational phenomenon is the curving of space-time by the presence of a massive body, as early as 1916 [28] Einstein additionally proposed the existence of gravitational waves: the radiation of energy from a system due to the periodic change in space-time curvature caused by the motion of massive bodies relative to each other. In most cases, these gravitational waves are exceptionally small, and indeed entirely negligible; however, for some of the most extreme astrophysical events, such as the collision of black holes, this ceases to be true, with multiple solar masses worth of energy being radiated away. Interferometers have been built around the world to attempt to detect the gravitational waves emitted from such events, including LIGO in the United States of America [41], KAGRA in Japan [54], and VIRGO in Italy [57]. LIGO first observed a confirmed signal from a black hole-black hole merger in 2015 [42], and in the following years dozens more detections have been made, including of a neutron star-neutron star merger that was also observed in the electromagnetic spectrum by the Fermi Gamma-ray Burst Monitor [43]. Such gravitational wave astronomy provides a wide range of possibilities, both as a new way of testing the limits of General Relativity, and as a way of observing distant astrophysical events and objects, which previously could only be investigated via electromagnetic radiation.
While currently this is limited to certain classes of black hole and neutron star mergers, the next generation of gravitational wave detectors, such as the space-based interferometer LISA that is planned to be launched in the 2030s [44], will not only increase the rate at which mergers are detected by orders of magnitude, but will also be able to detect new gravitational wave sources, including extreme mass-ratio inspirals and the gravitational background that originated in the early universe. Yet, while this field shows a lot of promise, even just making predictions from Einstein's theory is exceptionally challenging. The first full analytical solution to the EFEs, the Schwarzschild solution describing the curvature around a single non-rotating and uncharged point mass, was published in 1916 [52], only a short time after Einstein's original proposal of General Relativity, yet it would take until 1963 for this to be extended to the case of a rotating mass [40]. To avoid these analytical complexities, a range of numerical approaches have been developed, yet these too are far from trivial: the biggest additional difficulty of numerical relativity, when compared to classical mechanics, is that time is not an absolute universal variable over which a simulation can straightforwardly be evolved, but is instead itself part of the variable space-time geometry. In this thesis I will focus on two of these numerical approaches in particular that are used in the context of binary black-hole simulations, as needed for interpreting interferometer data. The first of these is BSSN [5] [53], one example of a 3+1 approach in which the 4 dimensions of space-time are 'foliated' into 3-dimensional sheets between which it is possible to evolve (albeit with added complications, when compared with a classical simulation, as to how one moves between these sheets). BSSN has found substantial success in simulating black holes, being used, for example, for one of the first full simulations of a binary black-hole merger [20]. Yet BSSN comes with certain drawbacks, in that it is computationally expensive when simulating large regions of space-time, as is needed, for example, to investigate the transmission of a gravitational wave from a distant source to a detector on Earth. The second approach I will look at has the opposite problem. This is the characteristic formulation, which can easily be written in 'compactified' coordinates, which allow one to simulate across even an infinite distance in a finite computation time, but which is not capable of accurately evolving a simulation close to a source. To take advantage of the strengths of both of these approaches, while avoiding their weaknesses, Bishop proposed in the 1990s a mixed approach, combining the two in a single simulation. Two possible ways of accomplishing this, termed Cauchy-Characteristic Extraction (CCE) [12] and Cauchy-Characteristic Matching (CCM) [11], were suggested, of which CCE has seen successful implementation in determining waveforms from black-hole mergers [48]. CCM, however, has yet to see such success, with its viability even having been questioned recently [33], and it is this, along with the component BSSN and characteristic parts, which I aim to investigate in this thesis. In chapter 2, I will provide a literature review covering the key past developments regarding CCM. Then, in chapter 3, I will describe my own numerical implementation.
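To make the two approaches concrete: in a 3+1 scheme such as BSSN, the space-time metric is split into spatial slices using a lapse function and shift vector, while characteristic codes typically reach infinity through a compactified radial coordinate. Standard illustrative forms (generic textbook expressions, not taken from this thesis) are:

```latex
ds^2 = -\alpha^2\,dt^2 + \gamma_{ij}\,(dx^i + \beta^i dt)(dx^j + \beta^j dt),
\qquad
x = \frac{r}{r + R}, \quad r \in [0,\infty) \mapsto x \in [0,1),
```

where \alpha is the lapse, \beta^i the shift, \gamma_{ij} the spatial metric on each slice, and R a constant compactification scale, so that a finite grid in x covers an infinite range in r.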
Here I begin in section 3.1 by briefly discussing the numerical tools I used for my Python implementation, highlighting in particular my usage of the COFFEE package. This is followed in section 3.2 by an in-depth treatment of the Cauchy side of CCM, building up from ADM to Cartesian BSSN and then to BSSN in spherical polar coordinates, as well as a discussion of my choices of gauge and initial data for the BSSN system, and the SAT method that I implemented as an approach to providing boundary data. Then, in 3.3, I will move on to the characteristic formulation, building up from the Bondi-Sachs metric and providing an overview of the usage of spherical harmonics, before again looking at a possible choice of initial data. Finally, in 3.4, I will turn to CCM itself, describing one suggestion for a gauge choice as well as the requisite coordinate transformation for moving between the BSSN and characteristic systems. In chapter 4, I will give the results of my implementation, explaining my success at implementing both the BSSN and characteristic systems, highlighting in particular the new results derived from my use of the SAT method for the BSSN system, and discussing my ultimate failure to fully develop a working CCM implementation. I will then summarise what I have accomplished in chapter 5, and also discuss future work that could be done on CCM based on my results.
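The SAT (simultaneous approximation term) technique mentioned for section 3.2 imposes boundary data through a penalty term rather than by overwriting grid values. The sketch below shows the idea on a toy 1D advection equation, not on the BSSN system; the summation-by-parts operator, penalty strength, and boundary function are standard textbook choices, not the thesis's.

```python
import numpy as np

# Toy problem: u_t + u_x = 0 on [0, 1], inflow data g(t) at x = 0.
n, h = 101, 1.0 / 100
x = np.linspace(0.0, 1.0, n)

# Second-order SBP first-derivative operator D = H^{-1} Q:
# central differences inside, one-sided at the two boundary points.
H = h * np.eye(n)
H[0, 0] = H[-1, -1] = h / 2
Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = np.linalg.solve(H, Q)

g = lambda t: np.sin(8 * np.pi * t)   # assumed inflow boundary data
e0 = np.zeros(n); e0[0] = 1.0
tau = 1.0                              # penalty strength (>= 1/2 for stability here)

def rhs(t, u):
    # SAT: add a penalty proportional to the boundary mismatch u[0] - g(t),
    # weighted by H^{-1} e0, instead of injecting g(t) into the grid directly.
    return -D @ u - tau * np.linalg.solve(H, e0) * (u[0] - g(t))

# Classical RK4 time stepping.
u, t, dt = np.zeros(n), 0.0, 0.4 * h
for _ in range(int(1.0 / dt)):
    k1 = rhs(t, u)
    k2 = rhs(t + dt / 2, u + dt / 2 * k1)
    k3 = rhs(t + dt / 2, u + dt / 2 * k2)
    k4 = rhs(t + dt, u + dt * k3)
    u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
print("boundary mismatch at t=1:", abs(u[0] - g(t)))
```

The design point is that the penalty drives the boundary value toward the data while preserving the energy estimate that the SBP operator provides, which is what makes the approach attractive for supplying boundary data to an evolution system.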
  • ItemOpen Access
    Entering the realm of the wetlands: design and evaluation of engagement for a mobile augmented reality game
    (2024) Yin, Wenliang
    This thesis explores the potential of Augmented Reality (AR) in enhancing environmental education, with a focus on the conservation of New Zealand's wetlands. Through the development and evaluation of a mobile AR game titled "NZ Wetlands Invasion," this research investigates how different perspectives in the game affect player engagement and learning outcomes. Employing a mixed-methods approach, the study integrates quantitative data from pre-test and post-test quizzes with qualitative feedback from participants to assess the effectiveness of AR in fostering environmental awareness and knowledge. Findings from this study reveal that the first-person view (FPV) in the AR game significantly enhances player engagement by providing a more immersive experience compared to the bird's-eye view (BEV). However, contrary to initial hypotheses, there was no significant difference in learning outcomes between the FPV and BEV perspectives. This suggests that while FPV may offer a more engaging and immersive experience, both perspectives are equally effective in facilitating learning about wetland conservation. The results also highlight the influence of environmental factors and physical comfort on the AR learning experience, underscoring the need for careful consideration of the physical and environmental context in which AR games are deployed. Additionally, the study addresses the concept of response shift bias, illustrating the complexity of measuring learning outcomes. In conclusion, this thesis contributes valuable insights into the design and implementation of AR in environmental education, offering recommendations for future AR game development aimed at engaging and educating users about environmental conservation. The findings suggest broader applications of AR in enhancing learning experiences across various domains, encouraging the integration of immersive technologies in Game-Based Learning Environments (GBLEs) to foster a deeper understanding and appreciation of environmental issues.
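    A pre-test/post-test comparison of the kind described above can be illustrated as follows. The scores are invented stand-ins, and paired/independent t-tests are one common analysis choice, not necessarily the one used in the thesis.

```python
import numpy as np
from scipy import stats

# Hypothetical quiz scores (out of 10) for the two view conditions.
pre_fpv  = np.array([4, 5, 3, 6, 5, 4, 5, 6])
post_fpv = np.array([7, 8, 6, 8, 7, 6, 8, 9])
pre_bev  = np.array([5, 4, 4, 6, 5, 5, 4, 6])
post_bev = np.array([8, 7, 6, 8, 7, 8, 6, 9])

# Within-condition learning: paired t-test on pre vs post scores.
for name, pre, post in [("FPV", pre_fpv, post_fpv), ("BEV", pre_bev, post_bev)]:
    t, p = stats.ttest_rel(post, pre)
    print(f"{name}: mean gain {np.mean(post - pre):.2f}, t={t:.2f}, p={p:.3f}")

# Between-condition comparison: independent t-test on the gain scores.
t, p = stats.ttest_ind(post_fpv - pre_fpv, post_bev - pre_bev)
print(f"FPV vs BEV gains: t={t:.2f}, p={p:.3f}")
```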
  • ItemOpen Access
    Damping models for inelastic structures
    (1980) Chrisp, D. J. (David John)
    A comparative study of different damping models has been made in an effort to gauge the effect of the damping model on the inelastic response of multi-storey reinforced concrete frames. It will be shown that the higher-mode response in an inelastic analysis cannot be ignored as it can in an elastic analysis and, in fact, affects the response markedly. As a consequence, the amount of damping imposed on the higher modes becomes important. Using a dynamic analysis program to analyse six- and twelve-storey one-way frames, the amounts of damping in the various modes of vibration were varied while all other parameters were kept constant. Elastic and inelastic analyses were carried out to contrast their responses as the modal damping was varied. Through an analysis of the results, some recommendations are made as to the amounts of damping to be associated with the different modes of vibration to help obtain a realistic structural response.
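    To illustrate why the choice of damping model controls the higher modes, consider Rayleigh damping, one common model of the kind compared in such studies (the modal frequencies and target ratios below are invented). With C = a0*M + a1*K, each mode receives a damping ratio zeta_n = a0/(2*w_n) + a1*w_n/2, so fitting two modes implicitly fixes the damping in all the others.

```python
import numpy as np

# Rayleigh damping C = a0*M + a1*K gives modal damping
#   zeta(w) = a0 / (2*w) + a1 * w / 2.
# Fit a0, a1 so that two chosen modes get 5% damping (assumed targets).
w = 2 * np.pi * np.array([1.0, 3.1, 5.4, 7.9, 10.6])  # invented modal freqs (rad/s)
zeta_target = 0.05
i, j = 0, 1  # fit the first two modes

A = np.array([[1 / (2 * w[i]), w[i] / 2],
              [1 / (2 * w[j]), w[j] / 2]])
a0, a1 = np.linalg.solve(A, [zeta_target, zeta_target])

# Damping ratios implied for every mode: the stiffness-proportional term
# leaves the higher modes much more heavily damped than the fitted pair.
zeta = a0 / (2 * w) + a1 * w / 2
for k, z in enumerate(zeta, start=1):
    print(f"mode {k}: zeta = {z:.3f}")
```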